//
//  GodRays.cpp
//  rayMarching_2
//
//  Created by Pierre Tardif on 09/08/2019.
//

#include "GodRays.hpp"

void GodRays::setup(){
    initGui();
    positionLight = {0, 0};
}

void GodRays::initGui(){
    // Expose the god-ray parameters in the GUI group for this shader pass.
    shaderControl.setName(name);
    shaderControl.add(lightDirDOTviewDir.set("lightDirDOTviewDir", 0.7, 0, 2));
    shaderControl.add(lightMoveSpeed.set("lightMoveSpeed", {0.3, 0.5}, {0, 0}, {3, 3}));
    shaderControl.add(samples.set("samples", 32, 0, 50));
}

void GodRays::addUniforms(ofShader* shader, bool active){
    // Tell the shader whether this pass is enabled; when it is, update the
    // light position and pass the ray-marching parameters as uniforms.
    shader->setUniform1f(name + "Active", active ? 1 : 0);
    if(active){
        updatePositionLight();
        shader->setUniform1f("lightDirDOTviewDir", lightDirDOTviewDir);
        shader->setUniform2f("lightPositionOnScreen", positionLight);
        shader->setUniform1f("samplesRay", samples);
    }
}

void GodRays::updatePositionLight(){
    // Drift the light source around the screen with Perlin noise.
    positionLight.x = WIDTH * ofNoise(ofGetElapsedTimef() * lightMoveSpeed->x);
    positionLight.y = HEIGHT * ofNoise(ofGetElapsedTimef() * lightMoveSpeed->y);
}
Egyptian Sand Dance

Overview

Pre-built Rides
None.

Animals in the park at the start
6 Camels (all female), roaming freely.

Objectives

Apprentice
* Guests in park: 400 - can be achieved at any time.
* Animals of species: 8 (Camel) - can be achieved at any time.
* Minimum Coaster Excitement: 4 - one coaster

Entrepreneur
* Guests in park: 500 - can be achieved at any time.
* Animals of species: 12 (Camel) - can be achieved at any time.
* Repay Loan

Tycoon
* Guests in park: 600 - can be achieved at any time.
* Animals of species: 16 (Camel) - can be achieved at any time.
* Minimum Coaster Excitement: 4 - two coasters

Scenario Guide
(Strategy guide for beating the scenario goes here)

Available Scenery
* Trees
* Shrubs and Bushes
* Gardens
* Path Items
* Walls and Fences
install/bin/stdin-killer: macOS (Darwin) compatibility

The failing check is addressed by https://github.com/ceph/teuthology/pull/1894

@batrick hm, I do not have access to teuthology on readthedocs, do you mind adding me?

I'm not sure. You may be better off asking @neha-ojha

@batrick you're listed as an admin, but she's not: https://readthedocs.org/projects/teuthology/

Do you have a readthedocs username? Your emails didn't work in their lookup system. Mine for example is: https://readthedocs.org/profiles/batrick/ which I log in to via github auth.

Yep, same for me - I hadn't added my RH email to the profile, but I've done that just now.

I've invited you to both projects.

Thanks! Appreciate it.
Sun Ce
Dynasty Warriors | Warriors Orochi | Dynasty Tactics

Personality

Appearance

Voice Actors
* Michael Lindsay - Dynasty Warriors 4 (English)
* Yuri Lowenthal - Dynasty Warriors 5~6, Warriors Orochi series (English)
* Steve Blum - Dynasty Tactics 2 (English)
* Takahiro Kawachi - Dynasty Warriors and Warriors Orochi series (Japanese)
* Hideo Ishikawa - Dynasty Tactics 2 (Japanese)
* Kazuhiko Inoue - Romance of the Three Kingdoms drama CD series

Quotes
* See also: Sun Ce (Quotes)
* Sun Ce rallying to attack Lu Bu; Dynasty Warriors 6

Dynasty Warriors 4
* A series of swings, then a strong strike.

Dynasty Warriors 4
* Level 10 Weapon: Overlord
* Base Attack: 45
* Stage: Unification of Jiang Dong

Dynasty Warriors 4: Xtreme Legends
* Level 11 Weapon: Hierophant
* Base Attack: 48
* Attributes: Level 19 Dragon Amulet, Level 10 Jump Scroll, Level 15 Huang's Bow, Level 20 Elixir
* Stage: The Shadow of Sun Ce
* Restrictions: No bodyguards
* Requirements: Clear the stage in under 1:50 minutes after defeating the real Yu Ji's appearance.
* Strategy:
* 4) There's a 5-minute time limit.
* 5) Level 11 message.

Dynasty Warriors 5
* 4th Weapon: Overlord
* Base Attack: 36; Weight: Medium
* Stats: Charge +15, Attack +18, Life +15, Speed +18, Musou +15
* Stage: The Trials of Sun Ce (Sun Ce's forces)
* Location: In a small area to the north.
* Requirements: Defeat all of Yu Ji's phantoms, Sun Jian, and Da Qiao in less than 10 minutes.

Dynasty Warriors 6
Criminal Law Act 1977 The Criminal Law Act 1977 (c. 45) is an act of the Parliament of the United Kingdom. Most of it only applies to England and Wales. It creates the offence of conspiracy in English law. It also created offences concerned with criminal trespass in premises, made changes to sentencing, and created an offence of falsely reporting the existence of a bomb. Part II - Offences relating to entering and remaining on property This Part implemented recommendations contained in the Report on Conspiracy and Criminal Law Reform (Law Com 76) by the Law Commission. Section 6 - Violence for securing entry Section 6 creates an offence of using or threatening unauthorised violence for the purpose of securing entry into any premises, while there is known to be a person inside opposing entry. Violence is taken to include violence to property, as well as to people. This section has been widely used by squatters in England and Wales, as it makes it a crime in most circumstances for the landlord to force entry, as long as the squatters are physically present and express opposition to the landlord's entry. "Squatters rights" do not apply when the property appears to be occupied (e.g. there are signs of current use, furniture, etc.). Section 6 is referred to in printed legal warnings, which are commonly displayed near the entrances to squatted buildings. Squatters are not protected by the Protection from Eviction Act 1977, which makes it a crime to evict tenants without following the legal process. Reasonable force used by a bailiff executing a possession order would not be considered unauthorised violence, so landlords can still legally regain possession through the courts. Laws regarding squatting residential properties were revised in the Legal Aid, Sentencing and Punishment of Offenders Act 2012. * Section 7 - Adverse occupation of residential premises * Section 8 - Trespassing with a weapon of offence Section 9 - Trespassing on the premises of foreign missions etc The purpose of this offence is to fill the lacuna that might otherwise have been left in the law by the abolition of the common law offence of conspiracy to trespass by section 5(1) of the Act. * Section 10 - Obstruction of court officers executing process for possession against unauthorised occupiers * Section 11 - Power of entry for the purpose of Part II of the Act * Section 12 - Supplementary provisions Section 13 - Abolitions and repeals This section abolished existing offences and repealed earlier statutes that were superseded by Part II of the Act. Subsection (1) abolished the common law offence of forcible entry and any offence at common law of forcible detainer. Subsection (2) repealed: * The Forcible Entry Act 1381 * The Forcible Entry Act 1391 * The Forcible Entry Act 1429 * The Forcible Entry Act 1588 * The Forcible Entry Act 1623 Part III - Criminal procedure, penalties etc This Part implemented recommendations contained in the Report of the Interdepartmental Committee on the Distribution of Criminal Business between the Crown Court and Magistrates' Courts (Cmnd 6323) (1975). 
Section 14 - Preliminary This section provided that sections 15 to 24 had effect for the purpose of securing that, as regards mode of trial, there were only three classes of offence, namely offences triable only on indictment, offences triable only summarily and offences triable either way, for laying down a single procedure applicable to all cases where a person who had attained the age of seventeen appeared or was brought before a magistrates' court on an information charging him with an offence triable either way, and for related purposes. * Section 15 - Offences which are to become triable only summarily Section 16 - Offences which are to become triable either way Subsection (2) replaced section 19 of the Magistrates' Courts Act 1952. This section was replaced by section 17 of the Magistrates' Courts Act 1980. Section 17 - Offence which is to become triable only on indictment This section made the offence of criminal libel triable only on indictment. It did this by repealing section 5 of the Newspaper Libel and Registration Act 1881. It was repealed by the Statute Law (Repeals) Act 1993 because it was spent by virtue of section 15 of the Interpretation Act 1978. * Section 18 - Provisions as to time limits on summary proceedings for indictable offences * Section 19 - Initial procedure on information for offence triable either way * Section 20 - Court to begin by considering which mode of trial appears more suitable * Section 21 - Procedure where summary trial appears more suitable * Section 22 - Procedure where trial on indictment appears more suitable * Section 23 - Certain offences triable either way to be tried summarily if the value involved is small * Section 24 - Power of court, with consent of legally represented accused, to proceed in his absence * Section 25 - Power to change from summary trial to committal proceedings, and vice versa * Section 26 - Power to issue summons to accused in certain circumstances * Section 27 - General limit on power of magistrates' court to impose imprisonment * Section 28 - Penalties on summary conviction for offences triable either way * Section 29 - Maximum penalties on summary conviction in pursuance of section 23 * Section 30 - Penalties and mode of trial for offences made triable only summarily * Section 31 - Increase of fines for certain summary offences * Section 32 - Other provisions as to maximum fines * Section 33 - Penalty for offences under section 3 of the Explosive Substances Act 1883 * Section 34 - Power of magistrates' court to remit a person under 17 for trial to a juvenile court in certain circumstances * Section 35 - Power to commit a person under 17 for trial extended to related offences in certain cases * Section 36 - Enforcement of fines imposed on young offenders * Section 37 - Supervision orders * Section 38 - Execution throughout United Kingdom of warrants of arrest * Section 39 - Service of summonses and citation throughout the United Kingdom * Section 40 - Transfer of fine orders * Section 41 - Transfer of remand hearings * Section 42 - Remand of accused already in custody * Section 43 - Peremptory challenge of jurors * Section 44 - Appeals against conviction * Section 45 - Cases where magistrates' Court may remit offender to another such court for sentence * Section 46 - Committal for sentence of offences tried summarily * Section 47 - Prison sentence partly served and partly suspended * Section 48 - Power to make rules as to furnishing of information by prosecutor in criminal proceedings * Section 49 - Power to order 
the search of persons before the Crown Court Section 50 - Amendment of the Road Traffic Act 1972 This section abolished the offences of causing death by dangerous driving, dangerous driving and dangerous cycling (whilst re-enacting those parts of the same provisions that referred to reckless driving and cycling). Subsection (1) substituted sections 1 and 2 of the Road Traffic Act 1972. Subsection (2) substituted section 17 of that Act. Section 51 - Bomb hoaxes This section creates an offence of bomb hoaxes. Section 52 - Misuse of Drugs Act 1971: redefinition of cannabis This section substitutes the definition of cannabis in section 37(1) of the Misuse of Drugs Act 1971 so that it includes leaves and stalks of the plant other than mature stalk separated from the rest of the plant. It was enacted in response to the successful appeal in R v Goodchild [1977] 2 All ER 163, [1977] 1 WLR 473 for the possession of dried leaves and stalks of the plant containing cannabis resin because these could not be described as "flowering and fruiting tops" of the plant and therefore did not fall within the definition then provided. Section 53 - Amendments of the Obscene Publications Act 1959 with respect to cinematograph exhibitions This section amends the Obscene Publications Act 1959. Section 54 - Inciting girl under sixteen to have incestuous sexual intercourse See incitement. Section 55 - Amendment of the Rabies Act 1974 and the Diseases of Animals (Northern Ireland) Order 1975 This section amends the Rabies Act 1974 and the Diseases of Animals (Northern Ireland) Order 1975. Section 56 - Coroners inquests This section implemented recommendations contained in the Report of the Committee on Death Certificates and Coroners (Cmnd 4810) (1971). Subsection (3) substituted section 20 of the Coroners (Amendment) Act 1926. Subsection 4 repealed the City of London Fire Inquests Act 1888. Section 57 - Probation and conditional discharge: power to vary statutory minimum or maximum period This section amended the Powers of Criminal Courts Act 1973. Section 58 - Proceedings involving persons under 17: increase of certain pecuniary limits This section amended section 8(3) of the Criminal Justice Act 1961 and the Children and Young Persons Act 1969. Section 59 - Alteration of maximum periods in default of payments of fines etc This section substituted paragraph 1 of Schedule 3 to the Magistrates' Courts Act 1952. Section 60 - Increase in maximum amount of compensation which may be awarded by a magistrates' court This section amended section 35(5) of the Powers of the Criminal Courts Act 1973. Part V * Section 63 applies to Scotland Section 65 - Citation, etc. The following orders have been made under section 65(7): * The Criminal Law Act 1977 (Commencement No. 1) Order 1977 (S.I. 1977/1365 (C. 47)) * The Criminal Law Act 1977 (Commencement No. 2) Order 1977 (S.I. 1977/1426 (C. 51)) * The Criminal Law Act 1977 (Commencement No. 3) Order 1977 (S.I. 1977/1682 (C. 58)) * The Criminal Law Act 1977 (Commencement No. 5) Order 1978 (S.I. 1978/712 (C. 16)) * The Criminal Law Act 1977 (Commencement No. 7) Order 1980 (S.I. 1980/487 (C. 17)) * The Criminal Law Act 1977 (Commencement No. 9) Order 1980 (S.I. 1980/1632 (C. 69) * The Criminal Law Act 1977 (Commencement No. 11) Order 1982 (S.I. 1982/243 (C. 9)) * The Criminal Law Act 1977 (Commencement No. 12) Order 1985 (S.I. 1985/579 (C. 8))
Talk:Pandora moth Untitled Other resources not yet incorporated: http://www.forestpests.org/acrobat/pandora.pdf —Bunchofgrapes (talk) 21:22, 8 November 2006 (UTC) * Done; thanks for the tip! Eventualism at work. Djembayz (talk) 04:38, 27 November 2012 (UTC)
using NUnit.Framework;
using AutoPoco.Configuration;
using AutoPoco.Testing;

namespace AutoPoco.Tests.Unit.Configuration
{
    [TestFixture]
    public class DataSourceFactoryTests
    {
        [Test]
        public void Build_ReturnsNewFactory()
        {
            // Building from the factory should yield an instance of the configured data source type.
            DatasourceFactory factory = new DatasourceFactory(typeof(BlankDataSource));
            BlankDataSource source = factory.Build() as BlankDataSource;
            Assert.NotNull(source);
        }
    }
}
Osteochondral Fracture: Top Open Access Journals | OMICS International | Journal of Trauma and Treatment

OMICS International organises 3000+ Global Conference Series events every year across the USA, Europe and Asia with support from 1000 more scientific societies, and publishes 700+ open access journals with over 50000 eminent personalities and reputed scientists as editorial board members.

Osteochondral Fracture: Top Open Access Journals

An osteochondral fracture occurs when the articular cartilage at the end of a joint is torn. The knee and ankle joints are most often affected because they bear a great deal of strain and weight, which is why they are so vulnerable. The fracture can be identified in several ways, one of which is X-ray; other imaging technologies are also used to get a detailed view of the fractured joint.

The top open access journals are peer-reviewed scholarly journals of trauma and treatment. They are freely available on the public internet, allowing any end user to read, download, copy, distribute, print, search or link to the full texts of the articles. They provide high-quality, meticulously reviewed and rapid publication, catering to the pressing needs of the scientific community. These journals are indexed, with all their citations noted, in MEDLINE, PUBMED, SCOPUS, COPERNICUS, CAS, EBSCO and ISI.

Last date updated on July, 2014
""" Copyright (C) 2018 Ridgeback Network Defense, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import time from dwave.system.samplers import DWaveSampler from neal import SimulatedAnnealingSampler from dwave.system.composites import EmbeddingComposite useQpu = False # change this to use a live QPU trials = 5000 # How many trials in the coin flipping experiment """ fun-coin.py ----------- Tutorial for flipping a coin. This tutorial covers timing and probability distributions. To use a live QPU, set useQpu to True. Quantum computers are wonderful at generating random numbers. Let's flip some coins! """ print('') print('Coin Flipperama!') print('================') print(' ?????? ') print(' ?? ?? ') print(' ?? ?? ?? ') print(' ?? ?? ?? ') print(' ?? ?? ') print(' ?????? ') print('Flip a bunch of coins and show the distribution.') print('') # At the top of this file, set useQpu to True to use a live QPU. if (useQpu): sampler = DWaveSampler() # We need an embedding composite sampler because not all qubits are # working. A trivial embedding lets us avoid dead qubits. sampler = EmbeddingComposite(sampler) else: sampler = SimulatedAnnealingSampler() # Initialize a binary quadratic model. # It will use 2000 qubits. All biases are 0 and all couplings are 0. bqm = {} # binary quadratic model distrib = {} # distribution msg = 'How many coins do you want to flip at the same time?' try: coins = raw_input(msg) except: try: coins = input(msg) except: print('I give up! Why can\'t I ask you questions?') max_coins = 50 try: max_coins = int(coins) except: print('That is a weird number. I am going with 50.') max_coins = 50 if (max_coins > 2000): print('Too many coins! I am only flipping 2000 at a time.') max_coins = 2000 if (max_coins < 1): print('Too few coins! I am going to flip one coin at a time.') max_coins = 1 for i in range(0, max_coins): bqm[(i,i)] = 0 # indicate a qubit will be used distrib[i] = 0 # initialize the distribution to all 0 distrib[max_coins] = 0 # We need one extra slot for the distribution print('Okay, for each trial I am going to flip %d coins' % max_coins) print('and I will repeat this for %d trials.' % trials) print('Next, I will display a distribution of how many coins came up heads.') print('This is very exciting, don\'t you think?') print('') print('DON\'T BLINK!') print('') start = time.time() response = sampler.sample_qubo(bqm, num_reads=trials) end = time.time() total = (end - start) try: qpu_access_time = response.info['timing']['qpu_access_time'] except: qpu_access_time = 0 print('QPU access time is not available. This makes me sad.') print('Whew! That was really tough. It took me '+'{:10.4f}'.format(total)+' seconds to flip '+'{:d}'.format(trials * max_coins)+' coins.') print('Of all that time, the QPU was used for '+'{:10.8f}'.format(qpu_access_time/1000000)+' seconds.') print('') print('Give me a moment to sort out these results...') # This is a very slow, brute force nested loop. 
for datum in response.data(): # for each series of flips n = 0 for key in datum.sample: # count how many heads or tails if (datum.sample[key] == 1): n += 1 distrib[n] += 1 # Determine the maximum in our distribution array # so we can normalize the widths of the bars. max_count = 0 for i in range(0, len(distrib)): if (distrib[i] > max_count): max_count = distrib[i] print('Ah, here we go. Here is your distribution for') print('the total number of heads per trial:') print('---------------------------------------------') print('') # Print out the distribution! width = 72 # the maximum width of a bar for i in range(0, len(distrib)): print(i, 'x' * int( round( 60 * (distrib[i] / max_count) ) ) ) print('') print('Wasn\'t that fun? Have nice day! :-)') """ Here are some Sample timing metrics available from a sampler response. All times are in µs (1µs = 0.000001s). {'timing': { 'total_real_time': 827389, 'qpu_access_overhead_time': 2487, 'anneal_time_per_run': 20, 'post_processing_overhead_time': 360, 'qpu_sampling_time': 819800, 'readout_time_per_run': 123, 'qpu_delay_time_per_sample': 21, 'qpu_anneal_time_per_sample': 20, 'total_post_processing_time': 2185, 'qpu_programming_time': 7589, 'run_time_chip': 819800, 'qpu_access_time': 827389, 'qpu_readout_time_per_sample': 123 } } If you want to know how much QPU time your operation will take, total, calculate: num_reads * (readout_time_per_run + qpu_delay_time_per_sample + qpu_anneal_time_per_sample) In the above sample timing, the total QPU time was 827389µs for 5000 samples. That is 819800µs for the the samples, and 7589µs QPU overhead. That is 163.96µs per sample, or (123µs + 21µs + 20µs) per sample. So, the books balance. Here is sample output from flipping 10 coins at a time. QPU time is 0 because it was run with a simulated annealer. Coin Flipperama! ================ ?????? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?????? Flip a bunch of coins and show the distribution. How many coins do you want to flip at the same time? 10 Okay, for each trial I am going to flip 10 coins and I will repeat this for 5000 trials. Next, I will display a distribution of how many coins came up heads. This is very exciting, don't you think? DON'T BLINK! QPU access time is not available. This makes me sad. Whew! That was really tough. It took me 0.1940 seconds to flip 50000 coins. Of all that time, the QPU was used for 0.00000000 seconds. Give me a moment to sort out these results... Ah, here we go. Here is your distribution for the total number of heads per trial: --------------------------------------------- 0 1 xxx 2 xxxxxxxxxxx 3 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 4 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 5 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 6 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 7 xxxxxxxxxxxxxxxxxxxxxxxxxxxxx 8 xxxxxxxxxx 9 xxx 10 Wasn't that fun? Have nice day! :-) """
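# --- A quick back-of-the-envelope check of the timing arithmetic described in the
# --- docstring above, using the sample numbers quoted there (all values in µs).
# --- This is only an illustration of the formula, not part of the tutorial itself.
timing = {
    'qpu_programming_time': 7589,
    'readout_time_per_run': 123,
    'qpu_delay_time_per_sample': 21,
    'qpu_anneal_time_per_sample': 20,
    'qpu_access_time': 827389,
}
num_reads = 5000

per_sample = (timing['readout_time_per_run']
              + timing['qpu_delay_time_per_sample']
              + timing['qpu_anneal_time_per_sample'])        # 164 µs per sample
estimate = num_reads * per_sample + timing['qpu_programming_time']

print('estimated QPU time: %d µs, reported: %d µs' % (estimate, timing['qpu_access_time']))
# estimated QPU time: 827589 µs, reported: 827389 µs -- the books roughly balance.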
#! /usr/bin/env bash set -e PATCH_DIR=/var/vcap/jobs-src SENTINEL="${PATCH_DIR}/${0##*/}.sentinel" if [ -f "${SENTINEL}" ]; then exit 0 fi patch -d "$PATCH_DIR" --force -p1 <<'PATCH' From 56eb140a6751971695d8a8ebf18524fc8900d318 Mon Sep 17 00:00:00 2001 From: Mark Yen <[email protected]> Date: Wed, 30 Jan 2019 10:40:27 -0800 Subject: [PATCH] nfsv3driver: install: Don't rely on "GNU" substring in library path Non-Debian distributions (that haven't done the whole multiarch thing) will not have the multiarch tuple in the path. --- jobs/nfsv3driver/templates/install.erb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git jobs/nfsv3driver/templates/install.erb jobs/nfsv3driver/templates/install.erb index 9808062..6f46fb6 100755 --- jobs/nfsv3driver/templates/install.erb +++ jobs/nfsv3driver/templates/install.erb @@ -27,7 +27,7 @@ if [ "$codename" == "xenial" ]; then fi # Figure out where the libraries should be installed. -libdir="/usr/$(dirname "$(ldconfig -p | awk '/gnu\/libc.so/ { print $NF }')")" +libdir="/usr/$(dirname "$(ldconfig -p | perl -ne "m/\blibc.so.*$(uname -m | tr _ -)/ && print((split)[-1])")")" # make sure there arent any existing fuse-nfs mounts pkill fuse-nfs | true -- 2.16.4 PATCH touch "${SENTINEL}" exit 0
the left and right rotatory moieties have been separated. The former has exactly the same power of raising the blood-pressure as the natural adrenalin, the latter only r L to r V as much. Practically the same proportion holds when the power of the two isomers in producing glycosuria is compared. This constitutes important corroboration of the view already referred to (p. 522) that adrenalin glycosuria is caused by an action on the sympathetic system. For the effect on the blood-pressure is known to be thus produced (Cushny). The function of the cortex is unknown. It is stated that it contains cholin, a substance which lowers the blood-pressure, instead of raising it, as adrenalin does. It has been suggested that the adrenal glands have thus a double chemical grip upon the circulation, and can influence it in either direction, just as the bulb can influence it through its double nervous grip. But it is possible that the depressor substance of the cortex may be only a toxic body neutralized or destroyed in the glands. In any case the functional difference between cortex and medulla is easily understood when we reflect that the morphological history of the two tissues is quite different. The medulla is developed from cells which push their way into the gland from the rudiments of the sympathetic ganglia at that level, and is therefore of ectodermic origin. The cortex is derived from the same mesodermic structure which gives rise to the kidneys and genital organs. The existence of secretory fibres for the adrenal glands in the splanchnic nerves has been rendered probable by the experiments of Dreyer, who finds that the amount of active substance in the blood of the suprarenal vein, as tested by its physiological effect when injected into an animal, is increased by stimulation of those nerves.

Pituitary Body. In the pituitary body three parts may be distinguished: (1) the anterior lobe proper, or pars anterior, consisting of epithelial cells, many of which are filled with granules of the type seen in glandular epithelium, and abundantly provided with bloodvessels; (2) the pars intermedia, consisting of epithelial cells, less granular and less richly supplied with bloodvessels than those of the pars anterior; (3) the posterior lobe proper, or pars nervosa, consisting chiefly of neuroglia closely invested by epithelial cells of the pars intermedia, and invaded by the colloid secreted by these cells. These differences in the structure of the anterior and posterior lobes of the pituitary body correspond to a difference in their development, the anterior lobe, with the pars intermedia, being derived from an inpushing of the ectoderm of the buccal cavity, and the posterior lobe from an extension of the neural ectoderm, which grows backwards as the infundibular process till it meets and blends with that portion of the buccal invagination which gives rise to the pars
Indigenous Peoples of Illinois Tribes and Bands of Illinois * Chippewa * Chippewa Indians * Delaware * Fox * Illinois * Iowa * Iroquois * Kaskaskia * Kickapoo * Michigamea * Miami * Moingwena * Ottawa * Peoria * Piankashaw * Potawatomi * Sauk and Fox * Sulk * Shawnee * Winnebago * Winnebago Indians Wyandot Indians * Wyandot Reservations Agencies of the Bureau of Indian Affairs * Chicago Agency * Prairie du Chien Agency 1824-1842 Family History Library * FOX INDIANS * MIAMI INDIANS * SAUK INDIANS and under the subject: INDIANS OF NORTH AMERICA- ILLINOIS. Other sources can be found in the Family History Library Catalog by using a Place Search under: ILLINOIS- NATIVE RACES The following references may be helpful for those searching for American Indians in Illinois: * The Lyman Copeland Draper Collection which includes: * Chief Joseph Brant papers (Family History Library film<PHONE_NUMBER>44) * Tecumseh Papers, (Shawnee Chief) 1768-1823 (Family History Library film<PHONE_NUMBER>38) See Also: * Illinois-History for a calendar of events * Illinois-Military for a list of forts
/* * Copyright 2001-2003 Neil Rotstan Copyright (C) 2004 Derek James and Philip Tucker * * This file is part of JGAP. * * JGAP is free software; you can redistribute it and/or modify it under the terms of the GNU * Lesser Public License as published by the Free Software Foundation; either version 2.1 of the * License, or (at your option) any later version. * * JGAP is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without * even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser Public License for more details. * * You should have received a copy of the GNU Lesser Public License along with JGAP; if not, * write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA * 02111-1307 USA * * Modified on Feb 3, 2003 by Philip Tucker */ package org.jgapcustomised; import java.io.Serializable; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; import java.util.List; import org.jgapcustomised.event.GeneticEvent; import org.apache.log4j.Logger; import com.anji_ahni.integration.ActivatorTranscriber; import com.anji_ahni.integration.Transcriber; import com.anji_ahni.neat.AddConnectionMutationOperator; import com.anji_ahni.neat.Evolver; import com.anji_ahni.neat.NeatConfiguration; import com.anji_ahni.neat.SpeciationStrategyOriginal; import com.anji_ahni.util.Properties; import com.ojcoleman.ahni.util.ArrayUtil; /** * Genotypes are fixed-length populations of chromosomes. As an instance of a <code>Genotype</code> is evolved, all of * its <code>Chromosome</code> objects are also evolved. A <code>Genotype</code> may be constructed normally, whereby an * array of <code>Chromosome</code> objects must be provided, or the static <code>randomInitialGenotype()</code> method * can be used to generate a <code>Genotype</code> with a randomized <code>Chromosome</code> population. Changes made by * Tucker and James for <a href="http://anji.sourceforge.net/">ANJI </a>: * <ul> * <li>added species</li> * <li>modified order of operations in <code>evolve()</code></li> * <li>added <code>addChromosome*()</code> methods</li> * </ul> */ public class Genotype implements Serializable { private static Logger logger = Logger.getLogger(Genotype.class); public static final String SPECIATION_STRATEGY_CLASS_KEY = "speciation.class"; /** * The current active Configuration instance. */ protected Configuration m_activeConfiguration; protected Properties props; protected SpeciationParms m_specParms; protected SpeciationStrategy m_specStrategy; /** * Species that makeup this Genotype's population. */ protected List<Species> m_species = new ArrayList<Species>(); /** * Chromosomes that makeup thie Genotype's population. */ protected List<Chromosome> m_chromosomes = new ArrayList<Chromosome>(); protected int generation; protected int targetPerformanceType; protected Chromosome fittest = null; protected Chromosome bestPerforming = null; protected int zeroPerformanceCount = 0; protected int zeroFitnessCount = 0; Chromosome previousFittest = null; Chromosome previousBestPerforming = null; protected int maxSpeciesSize, minSpeciesSize; /** * This constructor is used for random initial Genotypes. Note that the Configuration object must be in a valid * state when this method is invoked, or a InvalidconfigurationException will be thrown. * * @param a_activeConfiguration The current active Configuration object. 
* @param a_initialChromosomes <code>List</code> contains Chromosome objects: The Chromosome population to be * managed by this Genotype instance. * @throws IllegalArgumentException if either the given Configuration object or the array of Chromosomes is null, or * if any of the Genes in the array of Chromosomes is null. * @throws InvalidConfigurationException if the given Configuration object is in an invalid state. */ public Genotype(Properties props, Configuration a_activeConfiguration, List<Chromosome> a_initialChromosomes) throws InvalidConfigurationException { // Sanity checks: Make sure neither the Configuration, the array // of Chromosomes, nor any of the Genes inside the array are null. // --------------------------------------------------------------- if (a_activeConfiguration == null) { throw new IllegalArgumentException("The Configuration instance may not be null."); } if (a_initialChromosomes == null) { throw new IllegalArgumentException("The array of Chromosomes may not be null."); } for (int i = 0; i < a_initialChromosomes.size(); i++) { if (a_initialChromosomes.get(i) == null) { throw new IllegalArgumentException("The Chromosome instance at index " + i + " of the array of " + "Chromosomes is null. No instance in this array may be null."); } } this.props = props; targetPerformanceType = props.getProperty(Evolver.PERFORMANCE_TARGET_TYPE_KEY, "higher").toLowerCase().trim().equals("higher") ? 1 : 0; // Lock the settings of the Configuration object so that the cannot // be altered. // ---------------------------------------------------------------- a_activeConfiguration.lockSettings(); m_activeConfiguration = a_activeConfiguration; m_specParms = m_activeConfiguration.getSpeciationParms(); m_specStrategy = (SpeciationStrategy) props.singletonObjectProperty(props.getClassProperty(SPECIATION_STRATEGY_CLASS_KEY, SpeciationStrategyOriginal.class)); adjustChromosomeList(a_initialChromosomes, a_activeConfiguration.getPopulationSize(), null); addChromosomes(a_initialChromosomes); generation = 0; } /** * adjust chromosome list to fit population size; first, clone population (starting at beginning of list) until we * reach or exceed pop. size or trim excess (from end of list) * * @param chroms <code>List</code> contains <code>Chromosome</code> objects * @param targetSize */ private void adjustChromosomeList(List<Chromosome> chroms, int targetSize, Chromosome popFittest) { List<Chromosome> originals = new ArrayList<Chromosome>(chroms); while (chroms.size() < targetSize) { int idx = chroms.size() % originals.size(); Chromosome orig = originals.get(idx); Chromosome clone = new Chromosome(orig.cloneMaterial(), m_activeConfiguration.nextChromosomeId(), orig.getObjectiveCount(), orig.getNoveltyObjectiveCount()); chroms.add(clone); if (orig.getSpecie() != null) { orig.getSpecie().add(clone); } } if (chroms.size() > targetSize) { // remove random chromosomes Collections.shuffle(m_chromosomes, m_activeConfiguration.getRandomGenerator()); Iterator<Chromosome> popIter = m_chromosomes.iterator(); while (chroms.size() > targetSize && popIter.hasNext()) { Chromosome c = popIter.next(); // don't randomly remove elites or population fittest (they're supposed to survive till next generation) if (!c.isElite && c != popFittest) { if (c.getSpecie() != null) c.getSpecie().remove(c); // remove from species popIter.remove(); } } } } /** * Add the specified chromosomes to this Genotype. * * @param chromosomes A collection of Chromosome objects. 
*/ protected void addChromosomes(Collection<Chromosome> chromosomes) { Iterator<Chromosome> iter = chromosomes.iterator(); while (iter.hasNext()) { Chromosome c = iter.next(); m_chromosomes.add(c); } } /** * Add Chromosomes to this Genotype described by the given ChromosomeMaterial objects. * * @param chromosomeMaterial A collection of ChromosomeMaterial objects. */ protected void addChromosomesFromMaterial(Collection<ChromosomeMaterial> chromosomeMaterial) { Iterator<ChromosomeMaterial> iter = chromosomeMaterial.iterator(); while (iter.hasNext()) { ChromosomeMaterial cMat = iter.next(); Chromosome chrom = new Chromosome(cMat, m_activeConfiguration.nextChromosomeId(), m_activeConfiguration.getObjectiveCount(), m_activeConfiguration.getNoveltyObjectiveCount()); m_chromosomes.add(chrom); } } /** * @param cMat chromosome material from which to construct new chromosome object * @see Genotype#addChromosome(Chromosome) */ /* * protected void addChromosomeFromMaterial(ChromosomeMaterial cMat) { Chromosome chrom = new Chromosome(cMat, * m_activeConfiguration.nextChromosomeId()); m_chromosomes.add(chrom); } */ /** * add chromosome to population and to appropriate specie * * @param chrom */ /* * protected void addChromosome(Chromosome chrom) { m_chromosomes.add(chrom); * * // specie collection boolean added = false; Species specie = null; Iterator<Species> iter = m_species.iterator(); * while (iter.hasNext() && !added) { specie = iter.next(); if (specie.match(chrom)) { specie.add(chrom); added = * true; } } if (!added) { specie = new Species(m_activeConfiguration.getSpeciationParms(), chrom); * m_species.add(specie); //System.out.println("adding species"); } } */ /** * @return List contains Chromosome objects, the population of Chromosomes. */ public synchronized List<Chromosome> getChromosomes() { return m_chromosomes; } /** * @return List contains Species objects */ public synchronized List<Species> getSpecies() { return m_species; } /** * Retrieves the Chromosome in the population with the highest fitness value. * * @return The Chromosome with the highest fitness value, or null if there are no chromosomes in this Genotype. */ public synchronized Chromosome getFittestChromosome() { if (getChromosomes().isEmpty()) { return null; } // Set the highest fitness value to that of the first chromosome. // Then loop over the rest of the chromosomes and see if any has // a higher fitness value. // -------------------------------------------------------------- Iterator<Chromosome> iter = getChromosomes().iterator(); Chromosome fittestChromosome = iter.next(); double fittestValue = fittestChromosome.getFitnessValue(); while (iter.hasNext()) { Chromosome chrom = iter.next(); if (chrom.getFitnessValue() > fittestValue) { fittestChromosome = chrom; fittestValue = fittestChromosome.getFitnessValue(); } } return fittestChromosome; } /** * Performs one generation cycle, evaluating fitness, selecting survivors, repopulting with offspring, and mutating * new population. This is a modified version of original JGAP method which changes order of operations and splits * <code>GeneticOperator</code> into <code>ReproductionOperator</code> and <code>MutationOperator</code>. 
New order * of operations (this is probably out of date now): * <ol> * <li>assign <b>fitness </b> to all members of population with <code>BulkFitnessFunction</code> or * <code>FitnessFunction</code></li> * <li><b>select </b> survivors and remove casualties from population</li> * <li>re-fill population with offspring via <b>reproduction </b> operators</li> * <li><b>mutate </b> offspring (note, survivors are passed on un-mutated)</li> * </ol> * Genetic event <code>GeneticEvent.GENOTYPE_EVALUATED_EVENT</code> is fired between steps 2 and 3. Genetic event * <code>GeneticEvent.GENOTYPE_EVOLVED_EVENT</code> is fired after step 4. */ public synchronized Chromosome evolve() { try { m_activeConfiguration.lockSettings(); BulkFitnessFunction bulkFunction = m_activeConfiguration.getBulkFitnessFunction(); Iterator<Chromosome> it; // Reset evaluation data for all members of the population. for (Chromosome c : m_chromosomes) { c.resetEvaluationData(); } // Fire an event to indicate we're now evaluating all chromosomes. // ------------------------------------------------------- m_activeConfiguration.getEventManager().fireGeneticEvent(new GeneticEvent(GeneticEvent.GENOTYPE_START_EVALUATION_EVENT, this)); // If a bulk fitness function has been provided, then convert the // working pool to an array and pass it to the bulk fitness // function so that it can evaluate and assign fitness values to // each of the Chromosomes. // -------------------------------------------------------------- if (bulkFunction != null) { bulkFunction.evaluate(m_chromosomes); } else { // Refactored such that Chromosome does not need a reference to Configuration. Left this // in for backward compatibility, but it makes more sense to use BulkFitnessFunction // now. FitnessFunction function = m_activeConfiguration.getFitnessFunction(); it = m_chromosomes.iterator(); while (it.hasNext()) { Chromosome c = it.next(); int fitness = function.getFitnessValue(c); c.setFitnessValue(fitness); } } // Fire an event to indicate we've evaluated all chromosomes. // ------------------------------------------------------- m_activeConfiguration.getEventManager().fireGeneticEvent(new GeneticEvent(GeneticEvent.GENOTYPE_EVALUATED_EVENT, this)); // Remove all chromosomes which have 0 fitness value(s), no point putting resources into speciating and // otherwise processing them when they're almost certainly going to be discarded when selection takes place. zeroFitnessCount = 0; Iterator<Chromosome> chromItr = m_chromosomes.iterator(); while (chromItr.hasNext()) { Chromosome c = chromItr.next(); double f = ArrayUtil.sum(c.getFitnessValues()); if (Double.isNaN(f) || f == 0) { zeroFitnessCount++; if (m_chromosomes.size() > 1){ //Modified: Will not allow removal of the last element chromItr.remove(); } } } if (m_chromosomes.isEmpty()) { logger.warn("Entire population received zero fitness value."); } // Speciate population. m_specStrategy.speciate(m_chromosomes, m_species, this); // Update originalSize for each species. for (Species species : m_species) { species.originalSize = species.size(); } // Remove clones from population and collect some stats. We do this after speciation. minSpeciesSize = Integer.MAX_VALUE; maxSpeciesSize = 0; for (Species s : m_species) { //List<Chromosome> removed = s.cullClones(); //m_chromosomes.removeAll(removed); if (s.size() > maxSpeciesSize) maxSpeciesSize = s.size(); if (s.size() < minSpeciesSize) minSpeciesSize = s.size(); } // Find best performing individual. 
Collections.sort(m_chromosomes, new ChromosomePerformanceComparator(targetPerformanceType == 0)); Chromosome topChrom = m_chromosomes.get(0); if (previousBestPerforming == null || !m_chromosomes.contains(previousBestPerforming) || topChrom.getPerformanceValue() > previousBestPerforming.getPerformanceValue() || ((int) (topChrom.getPerformanceValue() * 1000) == (int) (previousBestPerforming.getPerformanceValue() * 1000) && topChrom.getFitnessValue() > previousBestPerforming.getFitnessValue())) { bestPerforming = topChrom; } else { bestPerforming = previousBestPerforming; } previousBestPerforming = bestPerforming; // Set which species contains the best performing individual. for (Species s : m_species) { s.containsBestPerforming = false; } bestPerforming.getSpecie().containsBestPerforming = true; // Determine zero performance count. zeroPerformanceCount = 0; for (Chromosome c : m_chromosomes) { if (c.getPerformanceValue() == 0 || Double.isNaN(c.getPerformanceValue())) { zeroPerformanceCount++; } } // Select chromosomes to generate new population from, and determine elites that will survive unchanged to next generation. // Note that speciation must occur before selection to allow selecting correct proportion of parents and elites for each species. // ------------------------------------------------------------ NaturalSelector selector = m_activeConfiguration.getNaturalSelector(); selector.add(m_activeConfiguration, m_species, m_chromosomes, bestPerforming); m_chromosomes = selector.select(m_activeConfiguration); selector.empty(); assert m_species.contains(bestPerforming.getSpecie()) : "Species containing global bestPerforming removed from species list."; assert m_chromosomes.contains(bestPerforming) : "Global bestPerforming removed from population." + bestPerforming; // Find fittest individual (this has been moved from just below bulkFunction.evaluate(m_chromosomes) because now the // selector can change the overall fitness, which is what we're using here. if (previousFittest != null && m_chromosomes.contains(previousFittest)) { // Attempt to reuse previous fittest if available. fittest = previousFittest; } else { fittest = null; } for (Chromosome c : m_chromosomes) { if (fittest == null || fittest.getFitnessValue() < c.getFitnessValue() || (fittest.getFitnessValue() == c.getFitnessValue() && fittest.getPerformanceValue() < c.getPerformanceValue())) { fittest = c; } } previousFittest = fittest; // For each species calculate the average (shared) fitness value and then cull it down to contain only parent chromosomes. Iterator<Species> speciesIter = m_species.iterator(); while (speciesIter.hasNext()) { Species s = speciesIter.next(); // Set the average species fitness using its full complement of individuals from this generation. s.calculateAverageFitness(); // Remove any individuals not selected as parents from the species. s.cull(m_chromosomes); } if (m_species.isEmpty()) { logger.info("All species removed!"); } assert m_species.contains(bestPerforming.getSpecie()) : "Species containing global bestPerforming removed from species list."; assert m_chromosomes.contains(bestPerforming) : "Global bestPerforming removed from population."; // Repopulate the population of species and chromosomes with those selected // by the natural selector // ------------------------------------------------------- // Fire an event to indicate we're starting genetic operators. Among // other things this allows for RAM conservation. 
m_activeConfiguration.getEventManager().fireGeneticEvent(new GeneticEvent(GeneticEvent.GENOTYPE_START_GENETIC_OPERATORS_EVENT, this)); // Execute Reproduction Operators. // ------------------------------------- List<ChromosomeMaterial> offspring = new ArrayList<ChromosomeMaterial>(); for (ReproductionOperator operator : m_activeConfiguration.getReproductionOperators()) { operator.reproduce(m_activeConfiguration, m_species, offspring); } // Execute Mutation Operators. // ------------------------------------- for (MutationOperator operator : m_activeConfiguration.getMutationOperators()) { operator.mutate(m_activeConfiguration, offspring); } // Cull population down to just elites (only elites survive to next gen) m_chromosomes.clear(); speciesIter = m_species.iterator(); while (speciesIter.hasNext()) { Species s = speciesIter.next(); s.cullToElites(bestPerforming); if (!s.isEmpty()) { s.newGeneration(); // updates internal variables m_chromosomes.addAll(s.getChromosomes()); } } assert m_chromosomes.contains(bestPerforming) : "Global bestPerforming removed from population."; assert m_species.contains(bestPerforming.getSpecie()) : "Species containing global bestPerforming removed from species list."; // Add offspring // ------------------------------ addChromosomesFromMaterial(offspring); for (Species s : m_species) { List<Chromosome> removed = s.cullClones(); m_chromosomes.removeAll(removed); } // Do we really care if we're a little bit off the target population size? // In case we're off due to rounding errors //if (m_chromosomes.size() != m_activeConfiguration.getPopulationSize()) { // adjustChromosomeList(m_chromosomes, m_activeConfiguration.getPopulationSize(), bestPerforming); //} assert m_chromosomes.contains(bestPerforming) : "Global bestPerforming removed from population."; // Fire an event to indicate we've finished genetic operators. Among // other things this allows for RAM conservation. // ------------------------------------------------------- m_activeConfiguration.getEventManager().fireGeneticEvent(new GeneticEvent(GeneticEvent.GENOTYPE_FINISH_GENETIC_OPERATORS_EVENT, this)); // Fire an event to indicate we've performed an evolution. // ------------------------------------------------------- m_activeConfiguration.getEventManager().fireGeneticEvent(new GeneticEvent(GeneticEvent.GENOTYPE_EVOLVED_EVENT, this)); generation++; } catch (InvalidConfigurationException e) { throw new RuntimeException("bad config", e); } assert m_chromosomes.contains(bestPerforming) : "Global bestPerforming removed from population."; return fittest; } public Chromosome getFittest() { return fittest; } public Chromosome getBestPerforming() { return bestPerforming; } public int getNumberOfChromosomesWithZeroPerformanceFromLastGen() { return zeroPerformanceCount; } public int getNumberOfChromosomesWithZeroFitnessFromLastGen() { return zeroFitnessCount; } public SpeciationParms getParameters() { return m_specParms; } public int getMaxSpeciesSize() { return maxSpeciesSize; } public int getMinSpeciesSize() { return minSpeciesSize; } public Configuration getConfiguration() { return m_activeConfiguration; } public int getGeneration() { return generation; } /** * @return <code>String</code> representation of this <code>Genotype</code> instance. 
*/ public String toString() { StringBuffer buffer = new StringBuffer(); Iterator<Chromosome> iter = m_chromosomes.iterator(); while (iter.hasNext()) { Chromosome chrom = iter.next(); buffer.append(chrom.toString()); buffer.append(" ["); buffer.append(chrom.getFitnessValue()); buffer.append(']'); buffer.append('\n'); } return buffer.toString(); } /** * Convenience method that returns a newly constructed Genotype instance configured according to the given * Configuration instance. The population of Chromosomes will created according to the setup of the sample * Chromosome in the Configuration object, but the gene values (alleles) will be set to random legal values. * <p> * Note that the given Configuration instance must be in a valid state at the time this method is invoked, or an * InvalidConfigurationException will be thrown. * * @param a_activeConfiguration * @return A newly constructed Genotype instance. * @throws InvalidConfigurationException if the given Configuration instance not in a valid state. */ public static Genotype randomInitialGenotype(Properties props, Configuration a_activeConfiguration) throws InvalidConfigurationException { if (a_activeConfiguration == null) { throw new IllegalArgumentException("The Configuration instance may not be null."); } a_activeConfiguration.lockSettings(); // Create an array of chromosomes equal to the desired size in the // active Configuration and then populate that array with Chromosome // instances constructed according to the setup in the sample // Chromosome, but with random gene values (alleles). The Chromosome // class' randomInitialChromosome() method will take care of that for // us. // ------------------------------------------------------------------ int populationSize = a_activeConfiguration.getPopulationSize(); List<Chromosome> chroms = new ArrayList<Chromosome>(populationSize); for (int i = 0; i < populationSize; i++) { ChromosomeMaterial material = ChromosomeMaterial.randomInitialChromosomeMaterial(a_activeConfiguration); chroms.add(new Chromosome(material, a_activeConfiguration.nextChromosomeId(), a_activeConfiguration.getObjectiveCount(), a_activeConfiguration.getNoveltyObjectiveCount())); } return new Genotype(props, a_activeConfiguration, chroms); } public double getAveragePopulationFitness() { long fitness = 0; Iterator<Chromosome> iter = m_chromosomes.iterator(); while (iter.hasNext()) { Chromosome chrom = iter.next(); fitness += chrom.getFitnessValue(); } return fitness / m_chromosomes.size(); } /** * Compares this Genotype against the specified object. The result is true if the argument is an instance of the * Genotype class, has exactly the same number of chromosomes as the given Genotype, and, for each Chromosome in * this Genotype, there is an equal chromosome in the given Genotype. The chromosomes do not need to appear in the * same order within the populations. * * @param other The object to compare against. * @return true if the objects are the same, false otherwise. */ public boolean equals(Object other) { try { // First, if the other Genotype is null, then they're not equal. // ------------------------------------------------------------- if (other == null) { return false; } Genotype otherGenotype = (Genotype) other; // First, make sure the other Genotype has the same number of // chromosomes as this one. 
// ---------------------------------------------------------- if (m_chromosomes.size() != otherGenotype.m_chromosomes.size()) { return false; } // Next, prepare to compare the chromosomes of the other Genotype // against the chromosomes of this Genotype. To make this a lot // simpler, we first sort the chromosomes in both this Genotype // and the one we're comparing against. This won't affect the // genetic algorithm (it doesn't care about the order), but makes // it much easier to perform the comparison here. // -------------------------------------------------------------- Collections.sort(m_chromosomes); Collections.sort(otherGenotype.m_chromosomes); Iterator<Chromosome> iter = m_chromosomes.iterator(); Iterator<Chromosome> otherIter = otherGenotype.m_chromosomes.iterator(); while (iter.hasNext() && otherIter.hasNext()) { Chromosome chrom = iter.next(); Chromosome otherChrom = otherIter.next(); if (!(chrom.equals(otherChrom))) { return false; } } return true; } catch (ClassCastException e) { return false; } } }
Javascript or Jquery Encryption techniques

I would like to have public key encryption where I want some JavaScript function to encrypt some data. Are there any JavaScript encryption techniques with high security? Thanks

JavaScript AES encryption provides a good solution to this question.

I think that encrypting things with JavaScript can work just fine, if you have a good use case for it. The fact that the code is open shouldn't matter at all, because encryption algorithms are well known anyway. Where you are going to run into problems is the way in which the private key is supplied. DO NOT put the private key in your JavaScript code. The key should be provided by the user only. As long as you follow that rule, you should be good.

+1 for actually answering the question. There are some valid use cases for this (for example, an HTML5 offline app).

JavaScript code is in plain view online, so any encryption method implemented in JavaScript will be plainly visible. I don't think there's any realistic way to do this without using a server-side language (e.g. PHP).

If you put your web page(s) under HTTPS, then ALL your data will be encrypted, no need for additional algorithms/libraries/headaches.

I would generally suggest that there is little reason to encrypt anything in JavaScript. If you need to transport something over the wire, utilize a secure wire protocol instead. JavaScript has a number of deficiencies for this sort of thing, not the least of which is the fact that it sits in very accessible memory space when used in a browser context.

I hate when everyone fails to mention that SSL/TLS is not a secure protocol, but is in fact prone to attacks, especially from governments, which all serious security statements should ultimately consider. If you can't trust the NSA or the certificate provider, how can I ensure security between the user and my server? Furthermore, the user should also be protected against me, the server! I believe SSL/TLS + JavaScript encryption is the way to go, and the motivation is secure JavaScript delivery + isolated private and public key generation. The random key problem can be addressed by having the user move the mouse and press keys as a true random generator.
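As a rough illustration of the "key comes from the user, never from the code" rule in the accepted answer, here is a minimal sketch in Python using the cryptography package (the thread is about JavaScript, so this only demonstrates the principle; the function name and parameters are my own, not from the thread):

import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_with_passphrase(plaintext: bytes, passphrase: str):
    # Derive a symmetric key from the user's passphrase at run time;
    # only a random salt is kept alongside the ciphertext, never the key itself.
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    key = base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))
    return salt, Fernet(key).encrypt(plaintext)

salt, token = encrypt_with_passphrase(b"secret data", passphrase="correct horse battery staple")
print(len(salt), len(token))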
package jports;

import jports.reflection.AspectMember;

public class CopyMachineMember<T, D> {

    private final AspectMember<T> source;
    private final AspectMember<D> destination;

    public CopyMachineMember(AspectMember<T> source, AspectMember<D> destination) {
        this.source = source;
        this.destination = destination;
    }

    /**
     * Copies the value of the source member on the source entity onto the
     * destination member of the destination entity.
     */
    public void copy(T sourceEntity, D destinationEntity) {
        Object val = source.getValue(sourceEntity);
        destination.setValue(destinationEntity, val);
    }

    public String getSourceName() {
        return this.source.getName();
    }

    public String getDestinationName() {
        return this.destination.getName();
    }
}
Page:United States Statutes at Large Volume 44 Part 2.djvu/861 SIXTY-NINTH CONGRESS. Sess. I. Cue. 7j14, 745. 1926. 82]. promulgated by the Secmtaryyof the Interior with reference tothe manaigexnent and ctre of tho. park, or fora thefprotection of the pro§rty»thm.gein, ion the preservation ~from injurgunr spoliation of tim r, natural curiosities, oxvother objects wi ' ·8l»1d.·&&I‘k or for the protectionnsf the anima1s,.birds, and fish in saidpar, shall be dzomedzguilty ofia miedemeanoxnand shall: be subjected to a fine of lnottili more than $500 or imprisonment not exceeding six months or 0 1 V ¤ Sac. 6.-That ·all‘parts of township 17 south, ranges 31 and 32 G§§,‘g“§§§,g§°°‘°““ east, and township 18 south, ranged 31 east, Mount·D1.ab1o base and L¤¤<1¤ <iw¢¤m<i¤¤» meridian, which ` are north of the hydrographic divide faxing through Farewell Ga, and which are not added to an made part of the Sequoia Ifational Park by the provisions of; this Act, are herehy designated as the Sequoia National Game Refuge, and ,¤;’¤ggg**;g1*i{g·{),,{¤d·;¤_{; the huntmg, trapping, killin? orscapt ' of birds and game or wzxés¤i}m1§,¤¤nw:un. other wild animals upon the ands o the nited States awitbin the limits of the said area shall be unlawful, exce t under such regulations as may be prescribed from time to time gy the Secretary of Agricnltiirg and any persons violating such regulations or the *°‘”""*‘“‘°“°‘°‘· provi ions of this section shall `be deemed 'lty of a misdemeanor, and shall, upon conviction in any United bgtzltes court of competent jurisdiction, be fined in a not exceeding $1,000, or `by imniwsonmentcfoii a period not one year, or shall suffer bot line and imprisonment, inthe discretion of the oourt:eProm2ied, ,q¤·;m*•e_;, 0, mm That it is the of this section to protect from trespass the 1¤•d· Enblic lands o the United States and the game animals which may thereon, and not tc interafem with the o eration of the local §sme. laws as nfecting private or State·laud)¤: Provided jierthsr, Lmdmmn { hat thelands included in said game refuge shall oontinuetto {be sequin. Namuiuiuéig parts of the Sequoia National Forest-and nothingeontained in this °°‘·'°‘“"”"’°“’°'· section shall prevent the Secregzg of Agriculture from other uses of said lands under inieonformity with the laws an the rules and reiilations applicable thereto so far new may be consistent with e purposes, for which said game refuge is established. » A ` a Approved, July 3, 1926. ` Ju 3,1 . CHAP. 745.-An Act To provide for the leasing of public lands in Alaska for iH¥yR·$g§l fur farming, and for other purposes. ' [Pub N°·*°°· Be it enacted by the Senate and House of Representative: of the United States of America in Congress assembled, That the Secretary I I d I of the Interior in order to encourage and {remote developments of um mxgmgirizi. n' production of iurs in the Territoig of Alas in, is hereby authorized to lease to corporations onganize under the laws of the United States, or of any State or erritory thex·eof,»citizena cf the ’U¤ited St8tBS,~OI' associations of such citizens,] public lands of the United States in the Territory of Alaska suite le for fur farming, in areas urge. time ¤¤·1 not exceeding si; hundred and-iforty acres, and for periods not ` exceeding ten {years; upon such terms and conditions as heemay by eneral regula ions rescribe: Provided, That where leases are given {:;•;*g¤·•Lm umm gereunder for islands 01* lands within the same such lease may, in the uw. 
discretion of the Secretary of the Interior, be for an area not to exceed thirty square miles: Provided further, That nothing herein contained shall prevent the prospecting, locating, development, entering, leasing, or patenting of the mineral resources of any lands so leased under laws applicable thereto: And provided further, That this Act shall not be held nor construed to apply to the Pribilof Islands, declared a special reservation by the Act of Congress
Deep learning models have obtained state-of-the-art results for medical image analysis. However, when these models are tested on an unseen domain, there is a significant performance degradation. In this work, we present an unsupervised Cross-Modality Adversarial Domain Adaptation (C-MADA) framework for medical image segmentation. C-MADA implements an image- and feature-level adaptation method in a sequential manner. First, images from the source domain are translated to the target domain through an unpaired image-to-image adversarial translation with cycle-consistency loss. Then, a U-Net network is trained with the mapped source domain images and target domain images in an adversarial manner to learn domain-invariant feature representations. Furthermore, to improve the network's segmentation performance, information about the shape, texture, and contour of the predicted segmentation is included during the adversarial training. C-MADA is tested on the task of brain MRI segmentation, obtaining competitive results.
/*! * Copyright 2011-2023 Unlok * https://www.unlok.ca * * Credits & Thanks: * https://www.unlok.ca/credits-thanks/ * * Wayward is a copyrighted and licensed work. Modification and/or distribution of any source files is prohibited. If you wish to modify the game in any way, please refer to the modding guide: * https://github.com/WaywardGame/types/wiki */ import type { Reference } from "game/reference/IReferenceManager"; import type Translation from "language/Translation"; import type { Segment } from "language/segment/Segments"; import type { IInterpolationOptions, IStringSection } from "utilities/string/Interpolator"; export declare enum ListEnder { None = 0, And = 1, Or = 2 } export declare enum TextContext { None = 0, Lowercase = 1, Uppercase = 2, Title = 3, Sentence = 4 } export interface ISerializedTranslation { isSerializedTranslation: true; id: string; context?: TextContext; normalize?: true; args?: TranslationArg[]; failWith?: string | ISerializedTranslation | IStringSection[]; reformatters?: ISerializedTranslation[]; reference?: Reference; tooltip?: ISerializedTranslation | IStringSection[]; interpolator?: ISerializedInterpolator; } export interface ISerializedInterpolator { options?: IInterpolationOptions; segments?: Segment[]; } export type TranslationArg = string | number | boolean | Translation | ISerializedTranslation | IStringSection | TranslationArg[] | ITranslationArgRecord | (() => TranslationArg) | undefined | null; export interface ITranslationArgRecord { [key: string]: ITranslationArgRecord | TranslationArg; }
Dorothy Baker Dorothy Baker may refer to: * Dorothy Baker (madam) (1915–1973), American madam * Dorothy Baker (writer) (1907–1968), American novelist * Dorothy Beecher Baker (1898–1954), American teacher
TCMalloc - get size of allocation for a pointer

Using TCMalloc - given a heap-allocated object, is there any way to get the allocated size of the object (meaning only the size passed in the malloc call)? I'm asking for a "reliable" method (i.e., not going a word size back and assuming the allocation size is stored before the pointer).

Why do you need this? There is no such thing in the standard malloc either.

@Michael Walz - I don't think it is important why I need this. Also, TCMalloc is a much more comprehensive library than the standard malloc, and it also has a heap profiling tool, etc...

The "why" is often important (See this: http://xyproblem.info/)

@Michael Walz - allocation counting for requests, when each request exclusively owns a group of threads. Each request is inspected once in a while to see if it satisfies memory constraints. The idea is to proxy the existing malloc/free calls with minimal management of a counter per request. When malloc is called the size is trivially known, but not when free is called.

You could write your own mymalloc/myfree functions that call the original malloc/free (or TCMalloc's, or whatever) and store the allocated size before the actual data. But this adds a little overhead.

@Michael Walz I'm trying to avoid any overhead in management. Your idea is a well-known approach; however, it can cause serious fragmentation (and performance degradation) if malloc is called with a multiple of the page size.

Since version 1.6, TCMalloc includes: size_t tc_malloc_size(void*); which returns the usable size of the allocation starting at the argument. It is identical to the glibc malloc_usable_size (or BSD's malloc_size), and libtcmalloc includes aliases for both of those functions. However, it is not necessarily the originally requested size. It may be larger (and usually is). I don't believe that TCMalloc (or most other malloc implementations) retains that metadata, so there is (afaik) neither a reliable nor an unreliable mechanism to time travel back to the original malloc call and inspect the request size.

I tried it by calling malloc_usable_size, and it seems to be an alias for tc_malloc_size (I cannot call the tc_* API directly since the library is preloaded).
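As a minimal sketch of the accepted answer's point - assuming glibc's <malloc.h>, which declares size_t malloc_usable_size(void*), and that libtcmalloc is preloaded so the call resolves to tc_malloc_size - the following C++ snippet queries the granted size of a block. The per-request counter is only a hypothetical illustration of the proxying idea discussed above, not anything TCMalloc provides:

#include <cstdio>
#include <cstdlib>
#include <malloc.h>   // glibc: size_t malloc_usable_size(void*);

int main() {
    // Request 100 bytes; the allocator may round this up to a size class.
    void* p = std::malloc(100);
    if (p == nullptr) return 1;

    // Usable size of the block backing p. With libtcmalloc preloaded this
    // resolves to tc_malloc_size; it reports the granted size, which is
    // typically >= the 100 bytes originally requested.
    size_t granted = malloc_usable_size(p);
    std::printf("requested 100 bytes, usable size is %zu bytes\n", granted);

    // Hypothetical per-request accounting: charge the granted size at
    // allocation time and credit the same value back just before free,
    // so no per-pointer size needs to be stored anywhere.
    static size_t request_bytes = 0;        // in practice, one counter per request
    request_bytes += granted;
    request_bytes -= malloc_usable_size(p); // recoverable again at free time
    std::free(p);

    return 0;
}

Note that, as the answer stresses, this returns the granted (rounded-up) size rather than the value originally passed to malloc, so any counter built this way tracks usable bytes, not requested bytes.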
Invisible Items Never experienced this before. http://cloud-4.steamusercontent.com/ugc/707401792821395401/1F7945F69217E5DFFF777DB4B7974803E33ECCA4/ All items are invisible, no idea why. Nobody? Any plugins to take note of? Clean installation? Cannot reproduce. Please keep all issues only you are having to the forums (http://forums.cloudsixteen.com)
Corruption of homeostatic mechanisms in the guanylyl cyclase c signaling pathway underlying colorectal tumorigenesis Colon cancer, the second leading cause of cancer-related mortality worldwide, originates from the malignant transformation of intestinal epithelial cells. The intestinal epithelium undergoes a highly organized process of rapid regeneration along the crypt-villus axis, characterized by proliferation, migration, differentiation and apoptosis, whose coordination is essential to maintaining the mucosal barrier. Disruption of these homeostatic processes predisposes cells to mutations in tumor suppressors or oncogenes, whose dysfunction provides transformed cells an evolutionary growth advantage. While sequences of genetic mutations at different stages along the neoplastic continuum have been established, little is known of the events initiating tumorigenesis prior to adenomatous polyposis coli (APC) mutations. Here, we examine a role for the corruption of homeostasis induced by silencing novel tumor suppressors, including the intestine-specific transcription factor CDX2 and its gene target guanylyl cyclase C (GCC), as early events predisposing cells to mutations in APC and other sequential genes that initiate colorectal cancer. CDX2 and GCC maintain homeostatic regeneration in the intestine by restricting cell proliferation, promoting cell maturation and adhesion, regulating cell migration, and defending the intestinal barrier and genomic integrity. Elimination of CDX2 or GCC promotes intestinal tumor initiation and growth in aged mice, mice carrying APC mutations, or mice exposed to carcinogens. The roles of CDX2 and GCC in suppressing intestinal tumorigenesis, universal disruption in their signaling through silencing of hormones driving GCC, and the uniform over-expression of GCC by tumors underscore the potential value of oral replacement with GCC ligands as targeted prevention and therapy for colorectal cancer. Introduction In the oncogenomic model of colorectal cancer, epithelial cells progress through a series of morphological stages driven by underlying genetic mutations that result in the transformation into an invasive carcinoma. Damage to the genome exceeds the capacity for repair, producing irreversible mutational changes that result in the neoplastic phenotype. Genetic and epigenetic changes, reflecting corruption of DNA damage sensing and repair circuits, in conjunction with the uncoupling of apoptosis, lead to the gain-of-function of oncogenes and the concomitant loss-offunction of tumor suppressors. 1 In turn, these changes disrupt intestinal homeostasis producing hyperproliferation, amplification of genetic instability, altered migration, disrupted adhesion, resistance to apoptosis and failure to repair damaged cells. Moreover, in colorectal cancer, alterations driving transformation often occur in a specific sequence, suggesting that different genes, and the homeostatic processes they regulate, play essential roles at different stages along the transformation continuum. Genetic alterations important to the colorectal tumorigenesis sequence have been identified, including APC, β-catenin, Axin, MSH, K-RAS, SMAD and p53, among others. Adenomatous polyposis coli (APC) is mutated in more than 80% of sporadic colorectal tumors and germline mutations in APC underlie the inherited intestinal neoplastic syndrome Familial Adenomatous Polyposis. 
APC, as a "gatekeeper" for colon cancer, inhibits cell proliferation, 2 regulates cell migration 3 and maintains chromosomal stability 4,5 to defend intestinal homeostasis. 6 Mutations in APC or its downstream effector β-catenin, initiate the growth of small benign polyps. However, these mutations are not sufficient to support the progression of hyperplastic lesions to invasive carcinoma. Other signaling pathways, including transforming growth factor β (TGFβ) family members and TP53 (p53), are required the crypt-villus axis: absorptive enterocytes, goblet cells, enteroendocrine cells and Paneth cells. 24 Enterocytes, which constitute approximately 80% of epithelial cells, are polarized columnar cells mediating digestive functions, such as hydrolysis and absorption of nutrients, and secretion of fluid and electrolytes. 25 The main characteristics of mature enterocytes include welldeveloped microvillus brush border membranes containing key functional proteins mediating cognate digestive and absorptive functions. Goblet cells are mucin-secreting cells that protect the intestinal lumen and facilitate nutrient absorption by enterocytes. 26 Enteroendocrine cells, which constitute 1% of epithelial cells, produce autacoids, peptides and hormones and are part of the neuroendocrine system in the intestine, with paracrine and autocrine functions locally and endocrine functions supporting systemic activities, for example in the hypothalamus. 27 Finally, Paneth cells protect the mucosa by secreting antimicrobial peptides, digestive enzymes and growth factors into the lumen. 28 Paneth cells, absent in the colon, are one of the proposed mechanisms defending the small intestine against tumorigenesis by mediating innate immune responses to intestinal pathogens. 28 Enterocytes, goblet and enteroendocrine cells migrate toward the villus tip (small intestine) or surface of the crypt (colon) where they initiate a program of apoptosis or are exfoliated into the intestinal lumen by as yet unknown mechanisms. 29 In contrast, Paneth cells differentiate during a 5-8 day downward migration to the crypt base. 28 Turnover of enteroendocrine and Paneth cells is relatively slow compared to other cell types in intestine. 28 Rapid cell renewal and the transition from proliferation to differentiation require tight homeostatic control of subordinate cell physiological circuits, including proliferation, differentiation, migration and apoptosis in epithelial cells. Disruption of these circuits corrupts normal structure and function contributing to intestinal tumorigenesis. CDX2 and GCC Signaling Regulate Intestinal Homeostasis CDX2, a member of the homeodomain transcription factor CDX family (CDX1 and CDX2), is expressed in the intestinal epithelium during embryogenesis and in adults. 30 In contrast to CDX1, 31 which localizes to the progenitor cell compartment regulating cell proliferation that supports intestinal renewal, CDX2 is expressed throughout the crypt-villus axis and maintains intestinal homeostasis by regulating the transition of cells from proliferation to differentiation. 32 CDX2 regulates the expression of intestinal lineage genes in specific regions of the intestine, such as sucrase-isomaltase in the small intestine, carbonic anhydrase I in the colon 33 and GCC in the small intestine and colon. 16 CDX2 is also expressed in intestinal metaplasia of stomach and esophagus, promoting the transition to the intestinal epithelial cell phenotype, reflecting an anterior homeotic shift. 
34 CDX2 opposes intestinal tumorigenesis by maintaining homeostasis, which is required for intestinal epithelial renewal. Indeed, mice harboring mutations of CDX2 develop spontaneous colon polyps. 15,35 Moreover, CDX2 elimination potentiates tumor initiation and growth in the colon of Apc ∆716/+ mice through hyperproliferation, for tumor progression. The TGFβ family, a group of small polypeptide hormones, negatively controls colon cell growth through SMAD4, 7 and their silencing through mutations of canonical TGFβ receptors contributes to neoplastic progression. 8 p53, 9 a well-established genomic guardian, maintains genomic integrity by inhibiting cell growth through cell cycle arrest, 10 inducing apoptosis 11 and promoting DNA repair 12 in response to DNA damage. An emerging paradigm suggests that early events contributing to the initiation of the tumorigenic continuum that precede mutations in APC and its downstream effectors convey an evolutionary advantage to intestinal epithelial cells, which is essential to transformation. Mutations of CDX2, 13,14 a tissue-specific homeodomain transcription factor regulating the development of the intestine, 15 and silencing of guanylyl cyclase C (GCC) signaling, the intestinal receptor for the paracrine hormones guanylin and uroguanylin, and a target gene of CDX2, 16 characterize the earliest identifiable stages along the transformation continuum. 17,18 Mutations in CDX2 and dysregulation of GCC signaling, which reflect silencing of guanylin and uroguanylin expression, could contribute to the loss of genomic integrity and the development of mutations in APC and its downstream effectors, reflecting loss of normal proliferative and DNA quality control, and predisposing epithelial cells to tumor initiation. 19,20 The Intestinal Crypt-Villus Axis The intestinal mucosa is covered by a single layer of epithelial cells, which is organized in vertical anatomical units underlying specialized organ functions. In the small intestine, the major organ for nutrient absorption, villi projecting into the lumen and flasklike crypts embedded in the mesenchyme expand the secretory and absorptive surface area and provide the structure supporting digestive processes. 21 In contrast, the large intestine exhibits a comparatively smooth surface with tubular crypts embedded in the colonic mesenchyme. 22 Intestinal epithelial cells cover the lumenal surface of the crypt-villus axis and provide a physical barrier between systemic and mucosal compartments by sealing epithelial cells with tight junctions. A highly dynamic process of continuous epithelial cell proliferation, migration, terminal differentiation, apoptosis and shedding maintains the structural and functional integrity of the crypt-villus axis. Thus, tubular crypts embedded in the mesenchyme form the proliferating zone in the crypts of the small intestine and colon, and regenerative stem cells at the bottom of the crypts give rise to rapidly proliferating daughter cells. In contrast to crypts, villi are covered by permanently differentiated cells projecting into the lumen, supporting their digestive and absorptive functions. 23 Stem cells proliferate relatively slowly and their regeneration rate is not fast enough to support intestinal epithelium renewal. Thus, rapid transit cell proliferation will amplify the supply of cells to meet the demand of epithelium renewal. 
22 Transit cells, which initiate a program of terminal phenotypic maturation triggered by as yet undefined signals, lack the ability to proliferate indefinitely while they are migrating along the crypt-villus axis. 22 Transit cells give rise to four principal cell types characterizing a "crypt progenitor-like" phenotype. Moreover, beyond a greater number of proliferating cells in the crypt of APC-deficient mice, their spatial organization is altered and cells in S phase are distributed throughout the elongated crypts rather than restricted to the lower two thirds. Furthermore, altered proliferation is associated with accumulation of dephosphorylated β-catenin, which is resistant to degradation, in APC-deficient mice. 6 In turn, β-catenin activates Wnt-downstream target genes, including cyclinD1, and promotes intestinal cell growth. 2,38 Disruption of proliferative homeostasis, mutually reinforced by Wnt signaling and APC mutation, leads to overgrowth of un-differentiated cells contributing to intestinal tumorigenesis. Inactivation of APC is also recognized as a key early event in the development of human sporadic and inherited colorectal cancers. Patients with germline mutations of APC develop numerous colorectal polyps, 39 and targeted mutation of Apc in mice results in multiple intestinal tumors. 40 Interestingly, targeted silencing of CDX2 and GCC signaling promotes tumor initiation in the colon Apc Min/+ mice, 20,35 which suggests that mutations of these genes prior to APC create an evolutionary advantage in hyperproliferation for intestinal epithelial cell transformation. Indeed, CDX2, a key transcription factor mediating intestinal development, is frequently mutated in human colorectal cancer. 14 Similarly, expression of the endogenous hormones for GCC, guanylin and uroguanylin, is uniformly lost at the early stages in human and mouse intestinal tumorigenesis. 18,41 In that context, elimination of CDX2, GCC and guanylin increases the size of the proliferating crypt compartment, the number of proliferating cells in that compartment, and accelerates their cell cycle. 20,35,36,42 These effects are potentiated by genotoxic insults, revealed as hyperplasia of normal intestinal epithelium in Gucy2c -/mice carrying Apc mutations (Apc Min/+ ) or exposed to AOM. Moreover, corruption of the proliferative restriction and acceleration of the cell cycle by eliminating CDX2 or GCC signaling promote tumor initiation and growth in Apc Min/+ and AOM-treated mice, reflected by an increase in the number and size of adenoma, and the associated crypt hyperplasia in normal adjacent mucosa. 20 Patients with germline mutations of APC do not necessarily develop colorectal cancer, although they are at much higher risk for intestinal neoplasia than the general population. Additional genetic alterations are required for tumors to form in the colon. TGFβ and SMAD are members of a signaling pathway frequently mutated subsequent to APC in the colorectal carcinogenesis sequence. Targeted silencing of TGFβ receptor 43 and SMAD2 44 and SMAD4 7 promotes tumor progression and invasion in Apc Min/+ mice by accelerating cell proliferation. Interestingly, inactivation of TGFβ receptors increases both colonic tumor number and size while inactivation of Smad2 in Apc Min/+ mice does not change the total number of tumors, but increases mortality reflecting intestinal obstruction caused by large tumors. 
These observations suggest that TGFβ/SMAD signaling is implicated in colorectal tumor progression and invasion by predominantly restricting cell proliferation. 43,44 Taken together, disruption of intestinal proliferative homeostasis reflecting mutations in APC and alterations in genes with dysregulated G 1 /S transition and increased chromosomal instability. 35 GCC, a downstream transcriptional target of CDX2, maintains intestinal homeostasis by restricting proliferation and promoting differentiation. 19,36 Targeted elimination of GCC signaling in mice (Gucy2c -/-) increases crypt length along a decreasing rostral-caudal gradient by disrupting component homeostatic processes. 36 Crypt expansion reflects hyperplasia of the proliferating compartment, with an increase in rapidly cycling progenitor cells and reciprocal reduction in differentiated cells, including Paneth and goblet, but not enteroendocrine, cells. 36 Moreover, crypt hyperplasia in Gucy2c -/mice is associated with adaptive increases in cell migration and apoptosis. Further inactivation of GCC signaling promotes intestinal tumor initiation and growth in Apc Min/+ mice heterozygous for the Apc allele, and in mice exposed to the carcinogen, azoxymethane (AOM). In the context of uniform disruption of GCC signaling during human colorectal carcinogenesis, reflecting the silencing of guanylin and uroguanylin, the endogenous paracrine hormones for GCC, these studies suggest GCC signaling also suppresses intestinal tumorigenesis by coordinating homeostatic circuits required for intestinal epithelial renewal. 19,20 These previously under-appreciated roles of CDX2 and GCC signaling in maintaining intestinal homeostasis and the nearuniversal mutation of CDX2 and/or silencing of GCC signaling early along the transformation continuum suggest that dysregulation of these signaling pathways contributes to disruption of intestinal homeostasis, reflecting hyperproliferation and loss of genomic integrity, predisposing epithelial cells to intestinal tumor initiation. 19,20 Cell Proliferation and Intestinal Tumorigenesis Intestinal epithelial renewal requires the availability of a continuous supply of cells produced by proliferation. In crypts, cell proliferation is predominantly regulated by the Wingless signaling cascade, which provides a unique microenvironmental niche for maintaining and activating proliferating cell reservoirs. Upon Wingless/Wnt signal activation, β-catenin in the cytoplasm translocates to the nucleus and binds to Tcf transcription factors to generate a complex that activates downstream target genes. Abrogation of Wnt signaling by removal of Tcf4 or β-catenin or by overexpression of the Wnt inhibitor, Dickkopf 1 (Dkk-1), results in a complete loss of proliferation and death of the mouse five days after birth. 37 On the other hand, intestinal epithelial cell renewal is highly controlled and restricted by multiple antiproliferative mechanisms. Disruption of these circuits produces continuous cycling of DNA replication and cell division. In turn, these effects result in cell hyperplasia and accumulation of mutations that potentiate hyperproliferation, prevent terminal differentiation and prevent apoptosis, which ultimately establishes the invasive carcinoma phenotype. APC is a negative regulator of the Wingless signaling cascade. Intestine-specific inactivation of APC in mice disrupts Wnt signaling, producing nuclear accumulation of β-catenin and mortality five days after birth. 
APC-deficient cells in the intestine maintain adaptor, EB1, 52 which localizes at the midplane of the mitotic body and is required for proper spindle assembly in Drosophila 53 and positioning in yeast. 54 Moreover, APC mutations in vivo lead to cytokinetic failure, mitotic defects and tetraploidy in intestine, associated with disoriented spindles, misaligned chromosomes and tetraploid progenitor cells in the crypts. 4,50 These defects are observed in morphologically normal crypt cells in Apc Min/+ mice with normal levels of β-catenin expression and sub-cellular distribution. 4,50 Targeted inactivation of CDX2 or GCC increases chromosomal instability in normal intestinal epithelial cells prior to mutation of the second allele of APC in the colons of APC deficient mice. 19,20,35 Deletion of one allele of Cdx2 in mice results in spontaneous colon cancer 15 and dramatically (6 folds) potentiates tumor multiplicity in colons of APC mutant mice through mTOR-mediated chromosomal instability. 35 Indeed, increased tumor initiation and growth by Cdx2 deletion is associated with hyperproliferation quantified by Ki67 staining together with other proliferative markers and genetic instability quantified by anaphase bridge index in normal intestinal crypts. 35 These changes in the pre-transformed stage create a selective survival advantage for transformed cells, amplifying tumor initiation and growth reflected by a higher frequency of loss of heterozygosity (LOH) of Apc in tumors. 20,35 Interestingly, a deficiency of GCC in the intestine compromises genomic integrity as quantified by increased DNA oxidation and double-strand DNA breaks in crypt cells. 19,20 Indeed, GCC signaling reduces the production of DNA damage by modulating the reprogramming of metabolism from glycolysis to oxidative phosphorylation and characterizing the switch from proliferation to differentiation. 19 In turn, metabolic reprogramming suppresses ROS production in crypt cells 19,20 or promotes DNA damage repair, 19 defending genomic integrity. The combination of reduced DNA damage 19 and enhanced DNA damage repair contributes to maintenance of genomic integrity in GCC-expressing mice, although the precise contribution of GCC signaling to steady-state maintenance of the genome, including damage detection and assessment, mutational repair, and the associated coordination of replicative decision making, remains to be defined. These findings suggest that CDX2 mutations and silencing of GCC signaling may provide an environment in the earliest stages of neoplasia in which chromosomal instability together with hyperproliferation, as a self-reinforcing mechanism, lead to further genetic damage and tumor promotion and progression. Mutations in p53, a well-established guardian of genomic integrity, also are frequently observed in colorectal cancers, although at relatively late stages along the transformational continuum. 45 In this context, it is interesting to note that p53 deletion alone is not sufficient to initiate intestinal tumorigenesis. 55 Chromosomal integrity is maintained by p53 through regulation of gene transcription in response to DNA damage. 12 Upon genotoxic insult, p53 senses the DNA damage and activates the transcription of p21 to arrest the cell cycle 10 and prevent replication of damaged DNA. Also, p53 activates the transcription of the mediators of DNA damage repair 12 and apoptosis 11 to selectively remove damaged cells beyond repair. 
11 Constitutive 56 and occurring before and after APC mutations, corrupts the organization of the crypt-villus axis. Disruption of homeostatic integrity results in a niche susceptible to genotoxic insults and overgrowth of progenitor cells, producing neoplastic transformation and hyperproliferation required for intestinal tumor growth. Chromosomal Instability and Intestinal Tumorigenesis Colorectal cancer arises through a series of morphologic changes, corresponding to specific gene mutations at each stage, which provide a selective survival advantage continuously expanding the pool of transformed cells. In turn, sequential genotoxic insults to these hyperplastic cells induce multiple mutations in greater numbers of genes producing tumor progression, invasion and metastasis. In humans, FAP patients require about 20 years to lose their functional APC allele. 45 Similarly, in sporadic colorectal cancer, progression from adenoma to metastatic carcinoma requires 20-40 years. 45 In both cases, genetic instability plays a central role in initial transformation of progenitor cells and accelerating the rate of mutation in transformed cells mediating tumor progression. 46 Chromosomal instability and aneuploidy are classic characteristics of cancer 46 and predictive of poor prognosis. 46 Chromosomal instability reflects DNA damage that exceeds the capacity for DNA damage repair, associated with a failure to eliminate damaged cells. DNA damage in intestinal epithelial cells is caused by endogenous and exogenous genotoxic insults. The predominant endogenous insults are from reactive oxygen species (ROS) as side products of oxidative metabolism that supports rapid proliferation of crypt cells. 47,48 Exogenous damage reflects a variety of environmental mutagens, for example, alkylating agents. 47,48 DNA damage repair, including damage detection, assessment and mutational repair through recruitment of repair machinery, is facilitated by suspending cell cycle progression and promoting apoptosis through cell cycle checkpoint-dependent mechanisms. Moreover, chromosomal instability and proliferation are mutually-reinforcing. Beyond the expanded potential for linearly propagating somatic mutations in rapid proliferating cells, cells in S phase with unwound and accessible double-strand DNA are more susceptible to genotoxic insults. Furthermore accelerated progression through G 1 and premature entry into S is associated with amplification of genetic instability. 35,49 APC maintains chromosomal fidelity through mitotic checkpoint mechanisms, and this effect is independent of proliferative restriction reflecting antagonism of β-catenin. Further, chromosomal instability producing aneuploidy, revealed by mutations of APC, occurs at the earliest stages of colorectal cancer progression, preceding deregulation of β-catenin. 50 During mitosis, APC regulates spindle assembly, orientation 4 and chromosomal segregation in human colon cancer cells or embryonic stem cells from Apc Min/+ mice. 4 More specifically, APC localizes to the ends of microtubules embedded in kinetochores and forms a complex with the checkpoint proteins Bub1 and Bub3. 51 Interaction of APC with microtubules of the mitotic spindle is mediated by the compared to wild-type littermates. 3 Therefore, decreased apoptosis in Apc Min/+ mice might reflect reduced proliferation in intestine. Further, inducible inactivation of APC increases apoptosis, quantified by caspase 3 staining with a concurrent increase in cell proliferation. 
6 Interestingly, apoptotic cells in APC-deficient mice are enlarged, reflecting cell death at the G 2 /M checkpoint resulting from mitotic catastrophe, 4 consistent with the hypothesis that APC regulates chromosomal stability through mitotic checkpoint mechanisms. 6 Alterations in cell death that are proportional to changes in proliferation in the intestine suggest that apoptosis compensates for uncontrolled proliferation associated with chromosomal instability induced by APC mutations by removing excess or damaged cells. In that context, APC might not play a direct role in mediating cell death. Surprisingly, selective inactivation of apoptosis in intestinal epithelial cells does not induce spontaneous tumorigenesis, even though mice with targeted inactivation of BAX and BAK or p53, 57 exhibit compromised spontaneous or induced apoptotic responses in intestine. As mentioned above, cell death contributes to intestinal homeostasis by removing excess or damaged cells. Mice with compromised apoptosis exhibit colonic hyperplasia with altered differentiation, and are more susceptible to carcinogen-induced formation of aberrant crypt foci. 65 Overexpression of CDX2 in human colon cancer cells increases apoptotic sensitivity, suppressing tumor growth in mice. 66 Furthermore, elimination of Cdx2 in mice amplifies susceptibility to intestinal tumorigenesis following AOM challenge, which is associated with a 50% reduction of apoptosis in colonic cells. 67 In contrast, a role for GCC in regulating apoptosis in intestine remains undefined. As mentioned above, the impact of CDX2 and GCC signaling on apoptosis as a mechanism contributing to intestinal tumorigenesis is likely in the context of their effects on restricting proliferation and maintaining genetic integrity. 20,35 Similarly, other key gate keeper genes important for colon cancer progression, including Netrin, 68 the ligand for DCC; PUMA 69 and p21, 70 major downstream target products of p53; inhibit intestinal tumorigenesis exclusively by sensitizing apoptotic responses in the context of compromised proliferative controls. Taken together, apoptosis is one mechanism, beyond cell migration, that removes excess and damaged cells from the crypt-villus axis and, together with mechanisms controlling proliferation, maintains the homeostatic balance between proliferating and differentiated cell populations. The contribution of apoptosis to dysregulation of intestinal homeostasis in tumorigenesis, especially in cancer initiation, appears to reflect mechanisms controlling proliferation and genetic integrity, rather than a primary mechanism underlying transformation. In that respect, the intestinal epithelium is one of the most rapidly renewing tissues in adults, requiring a robust cell supply provided by proliferation to support regeneration 37 and corruption of that proliferative mechanism is central the initiation of intestinal tumorigenesis. Certainly, failure of programmed cell death to remove damaged cells beyond repair can contribute to the propagation of chromosomal instability and cell transformation in the context of rapid proliferation driving homeostatic regeneration. inducible 57 elimination of p53 in intestinal epithelial cells does not alter crypt-villus homeostasis under physiological conditions, although epithelial cells in p53-null mice fail to undergo apoptosis following exposure to radiation. 57 Moreover, loss of p53 does not amplify the morphological changes in the crypt-villus architecture following Apc loss. 
58 Eliminating p53 minimally impacts tumor multiplicity, but dramatically enhances the invasiveness of intestinal adenomas in Apc Min/+ mice. 55 These data suggest that p53, regulating chromosomal fidelity and apoptosis, but not proliferation, in normal intestinal epithelium, plays important roles in tumor progression and malignant transformation in the late stages of colorectal carcinogenesis rather than in early lesion development. In contrast to the roles of p53 in colon cancer pathophysiology, targeted p53 activation by a p53 modulator, CP-31398, dramatically reduces tumor number in Apc Min/+ mice in both prophylactic and therapeutic models by stabilizing p53, thereby suppressing proliferation while promoting apoptosis in the colon. 59 In summary, regardless of the precise sequence of genetic events contributing to colorectal carcinogenesis, genetic alterations contributing to hyperproliferation and loss of genomic quality control will lead to further damage through a self-reinforcing cycle. Chromosomal instability, as a hallmark of colon cancer can promote initiation and progression of colorectal carcinogenesis by increasing the rate of gene alterations, creating the niche required for tumor evolution. Cell Death (Apoptosis) and Intestinal Tumorigenesis Spontaneous cell death occurs predominantly in the lower pole of the crypts, where the progenitor cells are located, and at the villus tips, where cells are shed into the intestinal lumen. Cell death in the lower crypts exhibits the classical characteristics of apoptosis, including apoptotic bodies and condensed nuclei, which deletes excess or DNA-damaged cells. 60 On the other hand, mechanisms mediating cell death at villus tips remain undefined, because these cells rarely exhibit apoptotic morphology 61 and their nuclei are enlarged rather than condensed. Some studies support the apoptotic hypothesis, reflecting the identification of DNA fragmentation 62 and expression of BAX and cleaved caspase 3, 63 at villus tips. Regardless of the detailed mechanisms, cell death along the crypt to villus axis is an important event controlling and stabilizing the overall cell population by deleting excess cells and defending genetic integrity by removing DNA-damaged cells. In that context, disruption of mechanisms controlling cell death may contribute to tumorigenesis in intestine. The role of APC in apoptosis was revealed in human colon cancer cells by overexpression of APC. 64 Expression of APC in APC-inactive human colon cancer cells leads to diminution of cell growth by inducing cell death through apoptosis. However, the role of APC in apoptosis has been challenged in animal studies in both Apc Min/+ mice 3 and inducible APC knockout mice. 6 Apc Min/+ mice exhibit a decrease of apoptosis in enterocytes quantified by TUNEL, which appears to support previous observations. 64 However, this effect on apoptosis is associated with a concomitant 45% decrease in proliferation measured by PCNA staining barrier, producing local and systemic chromosomal instability predisposing to mutations of APC and other genes contributing to colorectal tumorigenesis. In turn, APC mutations amplify migratory inhibition in crypts and, together with hyperproliferation inherent in that niche, promote genomic instability leading to tumor initiation. 
Novel Strategies of Targeted Colon Cancer Prevention and Therapy There is an unmet clinical need to bridge the gap between existing screening paradigms and barriers promoting underutilization of primary chemoprevention strategies by the at-risk population including aged adults and patients carrying inherited germline mutations. 79 In that context, elucidating the detailed mechanisms underlying intestinal homeostasis and colorectal tumorigenesis identifies targets for chemoprevention. The novel roles of CDX2 and GCC in homeostatic mechanisms: restricting cell proliferation, maintaining genomic stability, regulating cell migration and defending intestinal integrity, and in suppressing tumor initiation and progression, especially in the context of APC mutations, underscore the utility of CDX2 and GCC as targets for colon cancer prevention. Ideal targets 80 include oncogenes, which exhibit overexpression or over-activation in tumors compared to normal tissues, or tumor suppressors, which are mutant or silenced in tumors, although reconstitution of altered tumor suppressor signaling is technically more challenging than suppressing over-active oncogenes. On the other hand, ideal chemopreventive agents exhibit activities at the target in a therapeutically achievable range, with limited or absent collateral activities in non-target tissues, for optimum safety. 80 GCC may fit the criteria of the ideal target for colorectal cancer prevention and represent a unique model for reconstitution of signaling underlying tumor suppression. GCC expression is primarily restricted to lumenal membranes of intestinal epithelial cells and robustly amplified in primary and metastatic colorectal tumors. Dysfunction of GCC signaling, reflecting the universal silencing of endogenous paracrine hormone expression early in colorectal tumorigenesis, can be simply restored by oral delivery of GCC ligands with activities compartmentalized to intestine and primary intestinal tumors. These considerations suggest that hormone replacement targeting GCC might be uniquely well qualified for chemoprevention of colorectal cancer in the absence of collateral tissue damage. Cell Migration, Adhesion and Intestinal Tumorigenesis Cell migration from the bottom of crypts to villus tips is tightly controlled to complete the regenerative process without compromising the integrity of the epithelial barrier, although the underlying mechanisms are not completely understood. Migration of epithelial cells is an active process, rather than passive movement in response to cell replacement from crypts by rapid proliferation. APC is a dominant regulator of enterocyte migration, and dysregulation of this process may contribute to intestinal tumorigenesis. 71 In mice, intestinal epithelial cells renew every 48-72 hours as quantified by tracking labeled progenitor cell migration with 3 H-thymidine 72 or BrdU. 36 Enterocyte migration along the cryptvillus axis is decreased by 25% in Apc Min/+ mice, 3 and those cells exhibit increased residence time 3 and reduced adhesion junction structural integrity 3 reflected by a decrease in membrane-bound E-cadherin and the dissociation of E-cadherin from β-catenin. 73 Moreover, enterocyte migration is completely abrogated in Apc -/mice and crypts are elongated and populated by rapidly proliferating, but stationary, epithelial cells. 6 Instead of moving toward villus tips, Apc -/cells accumulate in upper regions of crypts, forming abnormal crypt foci in the absence of β-catenin nuclear accumulation. 
71 Following this initial corruption of cell migration, hyperproliferating cells in abnormal crypts progress to micro-and macro-adenomas. 73 Migration, cell adhesion and proliferation exhibit reciprocal regulation that maintains epithelial homeostasis. Disruption of any individual component process breaches intestinal barriers, 74,75 including physical and molecular 76 barriers, which causes systemic genotoxicity promoting local and systemic tumorigenesis. 74,76 Both delay 6,73 and acceleration 36,76 of cell migration disrupts intestinal integrity to promote tumorigenesis. Conversely, CDX2 and GCC signaling defend intestinal barrier integrity by coordinating cell proliferation and migration, 20,35 and regulating cell adhesion. 77 Mutating CDX2 or eliminating GCC signaling early in transformation induces a tumorigenic microenvironment by disrupting the intestinal barrier, inducing inflammation 78 and APC dysfunction which, in turn, produce the cell accumulation that characterizes tumor initiation. 20,35 In summary, cell migration, another mechanism beyond cell death and shedding, removes cells from crypts and, together with mechanisms controlling proliferation, maintains the homeostatic balance between proliferating and differentiated cell populations. Further, cell migration, in coordination with adhesion and proliferation, maintains barrier integrity, defending against systemic exposure to genotoxic and pathogenic insults from the gut lumen. Indeed, disruption of pathways regulating cell migration and adhesion breaches the intestinal
#include <unordered_map>
#include <fstream>
#include <vector>
#include <string>
#include <cassert>
#include <cstdlib>
#include "vg.pb.h"
#include "stream.hpp"
#include "CommonUtils.h"
#include "fastqloader.h"

// Pairs up partial alignments of the same read: one alignment anchored at the read
// start and one anchored at the read end, kept only if both parts are at least
// minPartialLen long and the split point between them is within maxSplitDist.
std::vector<vg::Alignment> pickPairs(const std::vector<vg::Alignment>& alns, const std::unordered_map<std::string, size_t>& readLens, int maxSplitDist, int minPartialLen)
{
	std::unordered_map<std::string, std::vector<const vg::Alignment*>> startsPerRead;
	std::unordered_map<std::string, std::vector<const vg::Alignment*>> endsPerRead;
	for (auto& aln : alns)
	{
		assert(readLens.count(aln.name()) == 1);
		size_t alnlen = 0;
		for (int i = 0; i < aln.path().mapping_size(); i++)
		{
			alnlen += aln.path().mapping(i).edit(0).to_length();
		}
		if (alnlen < minPartialLen) continue;
		// Alignment covering the start of the read
		if (aln.query_position() == 0)
		{
			startsPerRead[aln.name()].push_back(&aln);
		}
		// Alignment covering the end of the read
		if (aln.query_position() + alnlen == readLens.at(aln.name()))
		{
			endsPerRead[aln.name()].push_back(&aln);
		}
	}
	std::vector<vg::Alignment> result;
	for (auto pair : startsPerRead)
	{
		size_t currentPairNum = 0;
		for (auto start : pair.second)
		{
			assert(start->query_position() == 0);
			int startEnd = 0;
			for (int i = 0; i < start->path().mapping_size(); i++)
			{
				startEnd += start->path().mapping(i).edit(0).to_length();
			}
			assert(startEnd >= minPartialLen);
			for (auto end : endsPerRead[pair.first])
			{
				int endStart = end->query_position();
				if (abs(startEnd-endStart) > maxSplitDist) continue;
				vg::Alignment left { *start };
				vg::Alignment right { *end };
				left.set_name(pair.first + "_pair" + std::to_string(currentPairNum) + "_1");
				right.set_name(pair.first + "_pair" + std::to_string(currentPairNum) + "_2");
				result.push_back(std::move(left));
				result.push_back(std::move(right));
				currentPairNum++;
			}
		}
	}
	return result;
}

// Maps each read name to its sequence length, as read from the fastq file.
std::unordered_map<std::string, size_t> getReadLens(std::string filename)
{
	auto reads = loadFastqFromFile(filename);
	std::unordered_map<std::string, size_t> result;
	for (auto read : reads)
	{
		result[read.seq_id] = read.sequence.size();
	}
	return result;
}

int main(int argc, char** argv)
{
	// Arguments: input alignments, max split distance, reads fastq, output alignments, min partial length
	std::string inputAlns { argv[1] };
	int maxSplitDist = std::stoi(argv[2]);
	std::string readFile { argv[3] };
	std::string outputAlns { argv[4] };
	int minPartialLen = std::stoi(argv[5]);

	auto readLens = getReadLens(readFile);
	auto alns = CommonUtils::LoadVGAlignments(inputAlns);
	auto pairs = pickPairs(alns, readLens, maxSplitDist, minPartialLen);
	std::ofstream alignmentOut { outputAlns, std::ios::out | std::ios::binary };
	stream::write_buffered(alignmentOut, pairs, 0);
}
Tart Abbey Tart Abbey, also Le Tart Abbey, was the first nunnery of the Cistercian movement. It was located in the present commune of Tart-l'Abbaye in Burgundy (Côte-d'Or), near Genlis, on the banks of the River Ouche and only a few miles away from Cîteaux Abbey, the Cistercian mother house. The community moved to Dijon in 1623, and the abbey buildings in Tart were destroyed by war shortly afterwards; only ruins remain. Foundation and first century The foundation charter of Tart Abbey is dated 1132, although the deed mentions three previous gifts from 1125. The founder was Arnoul Cornu, lord of Tart-le-Haut, and his wife Emeline, and their gift consisted of the land of Tart, the tithes of Rouvres and Tart-la-Ville and the grange of Marmot. It seems clear that the creation of this community was the result of a lengthy series of transactions, which may have begun in about 1120, involving not only Arnoul but the lord of Vergy (his overlord); Josserand de Brancion, Bishop of Langres; the family of Hugh II, Duke of Burgundy; the cathedral chapter of Langres; and Stephen Harding, abbot of the nearby Cîteaux Abbey. The first abbess was Elizabeth de Vergy, widow of Humbert de Mailly, lord of Faverney or Fauverney, daughter of Savary de Donzy, Count of Chalon-sur-Saône. She was previously a novice in a Benedictine nunnery, Jully Abbey or Priory, at Jully-les-Nonnains, from where the new foundation at Tart was settled. She remained its head for the next 40 years. Pope Eugene III put the abbey under Papal protection by a bull of 1147, confirmed by his successors. Thanks to its support from the upper echelons of society, if not to more popular appeal, the abbey received sufficient endowments to ensure its financial stability through the difficult times to come. Its lands included several vineyards, and the sale of wine was a significant element in the abbey's economy: five hectares of the Vignoble de Bourgogne, others located at Beaune, Chambolle-Musigny, Morey-Saint-Denis, Chézeaux and Vosne-Romanée. Physical labour in the fields and vineyards was regarded as too strenuous for female religious, and the work was undertaken by lay brothers from Cîteaux. These were often in short supply, and the nuns were obliged to hire day-labourers to make up the shortfall. The abbot of Cîteaux also oversaw the spiritual discipline of the nunnery and was responsible for the appointment of the abbess, who was not elected by the community, as was the practice elsewhere. Tart soon became the head of the female branch of the Cistercians, and was directly responsible for the foundation of many further nunneries in France and more in Spain. By the end of the 13th century, when the supply of gifts was drying up, the abbey had amassed sufficient wealth, mostly in the form of land, and gained sufficient ability to manage it, to secure their future through the hardships to come, of which there were many: the Hundred Years' War, the Grandes Compagnies and the Écorcheurs, and the epidemics and calamities that these brought with them, lasted more or less right up to the start of the French Wars of Religion. Decadence and reform For the first century of its existence, under the close supervision of the mother house at Cîteaux, Tart Abbey maintained very high standards of devotion and rigour, which assured its predominant position at the head of the women's houses of the Cistercian Order. 
After that, however, a decline began to set in, brought about partly by deteriorating external conditions - wars, famine, pestilence, economic crisis and so on - but also by the tendency, which affected most if not all medieval women's religious foundations, for wealthy and influential families to use them as secure accommodation for their unmarried and widowed female relatives. Such women were by no means always inclined to the religious life, and their presence in any numbers inevitably affected a community's spiritual practice and discipline for the worse. By the 16th century the abbey was in a state of advanced decadence and moral collapse, which neither bishops nor popes were able to remedy, and was notorious for its worldly life and sexual impropriety. In 1617, however, Jeanne-Françoise de Courcelles de Pourlan (b. 1591), who had been educated as a girl at Tart, returned as abbess, with a strong determination to bring about the required reform. Despite the great resistance of the rest of the community, she found a powerful ally in Sébastien Zamet, Bishop of Langres. Opposition to the reform, inside and outside the nunnery, was so great that there was an attempt on the bishop's life. Eventually they decided that reform was impossible as long as the community remained in the abbey at Tart, and that the only way to bring it about was to transfer the nunnery to Dijon, on the basis that in a town it was far easier to maintain seclusion and the discipline of the spiritual life. Accordingly, those of the community who were willing to accept the new and stricter life - five, plus two novices - moved to Dijon on 24 May 1623. Dijon The first few years in Dijon were not comfortable. There were long delays in preparing suitable premises, made longer by the severe reduction in the income of the community in Dijon that resulted when in 1636 the troops of Matthias Gallas sacked and burnt the abbey buildings at Tart in the course of the Thirty Years' War, except for an isolated chapel. After the election of an opponent of the reform, Pierre Nivelle, as abbot of Cîteaux, Jeanne de Pourlan (who had taken the religious name of Jeanne de Saint Joseph) put herself under the jurisdiction of the Bishop of Langres. At the same time she changed the previous system, whereby the abbot of Cîteaux had directly nominated the abbess, to a three-yearly election by the nuns. The community was dissolved during the French Revolution. After passing through a number of uses, the buildings are now a museum of Burgundian life, the Musée Perrin de Puycousin, and the former church is now the Dijon Museum of Sacred Art (Musée d'art sacré de Dijon).
[Congressional Record Volume 169, Number 51 (Tuesday, March 21, 2023)] [Senate] [Page S839] PLEDGE OF ALLEGIANCE The Presiding Officer led the Pledge of Allegiance, as follows: I pledge allegiance to the Flag of the United States of America, and to the Republic for which it stands, one nation under God, indivisible, with liberty and justice for all. ____________________
What do you think are the causes?

What do you think are the causes? What do you think the causes are? I've heard both are true but the former is more popular. Is this true? And if so, could you tell me the reason? What grammar should I read to learn more about this?

What do you think are the causes? What do you think the causes are? These two questions have the same fundamental content, because they derive from canonical declarative forms which have the same fundamental content: You think X are the causes. You think the causes are X. The particular kind of predication employed in the subordinate clause here is like a mathematical equation: are is equivalent to = and the subject and predicate complement are identical. X (subject) = the causes (predicate complement) has the same meaning as the causes (subject) = X (predicate complement). I prefer the second version, ...the causes are, because this places X, the matter you are asking about, in the predicate, which is the ordinary 'new information' position. Note that this will not be the case with verbs other than BE, or with BE when it predicates something other than an identity. ☑ How big do you think his head is? ... You think his head is big. but not ☒ How big do you think is his head? .... You think big is his head.
/* * Copyright 2013 Bazaarvoice, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ using FluentAssertions; using FluentAssertions.Json; using Newtonsoft.Json.Linq; using NUnit.Framework; using System; using System.Collections.Generic; using System.Linq; namespace Jolt.Net.Test { [Parallelizable(ParallelScope.All)] public class SimpleTraversalTest { private static IEnumerable<TestCaseData> CreateTestCases(string testName) { var tests = new object[][] { new object[] { "Simple Map Test", SimpleTraversal.NewTraversal( "a.b" ), JToken.Parse( "{ \"a\" : null }" ), JToken.Parse( "{ \"a\" : { \"b\" : \"tuna\" } }" ), new JValue("tuna") }, new object[] { "Simple explicit array test", SimpleTraversal.NewTraversal( "a.[1].b" ), JToken.Parse( "{ \"a\" : null }" ), JToken.Parse( "{ \"a\" : [ null, { \"b\" : \"tuna\" } ] }" ), new JValue("tuna") }, new object[] { "Leading Array test", SimpleTraversal.NewTraversal( "[0].a" ), JToken.Parse( "[ ]" ), JToken.Parse( "[ { \"a\" : \"b\" } ]" ), new JValue("b") }, new object[] { "Auto expand array test", SimpleTraversal.NewTraversal( "a.[].b" ), JToken.Parse( "{ \"a\" : null }" ), JToken.Parse( "{ \"a\" : [ { \"b\" : null } ] }" ), null } }; foreach (var test in tests) { yield return new TestCaseData(test.Skip(1).ToArray()) { TestName = $"{testName}({test[0]})" }; } } public static IEnumerable<TestCaseData> SetTestCases() => CreateTestCases("SetTests"); public static IEnumerable<TestCaseData> GetTestCases() => CreateTestCases("GetTests"); [TestCaseSource(nameof(GetTestCases))] public void GetTests(SimpleTraversal simpleTraversal, JToken ignoredForTest, JToken input, JToken expected) { var original = input.DeepClone(); var tree = input.DeepClone(); var actual = simpleTraversal.Get(tree); expected.Should().BeEquivalentTo(actual); original.Should().BeEquivalentTo(tree, "Get should not have modified the input"); } [TestCaseSource(nameof(SetTestCases))] public void SetTests(SimpleTraversal simpleTraversal, JToken start, JToken expected, JToken toSet) { var actual = start.DeepClone(); simpleTraversal.Set(actual, toSet).Should().BeEquivalentTo(toSet); // set should be successful actual.Should().BeEquivalentTo(actual); } [Test] public void TestAutoArray() { var traversal = SimpleTraversal.NewTraversal( "a.[].b" ); var expected = JToken.Parse( "{ \"a\" : [ { \"b\" : \"one\" }, { \"b\" : \"two\" } ] }" ); var actual = new JObject(); traversal.Get(actual).Should().BeNull(); actual.Count.Should().Be(0); // get didn't add anything // Add two things and validate the Auto Expand array traversal.Set(actual, "one").Should().BeEquivalentTo(JValue.CreateString("one")); traversal.Set(actual, "two").Should().BeEquivalentTo(JValue.CreateString("two")); actual.Should().BeEquivalentTo(expected); } [Test] public void TestOverwrite() { var traversal = SimpleTraversal.NewTraversal( "a.b" ); var actual = JToken.Parse( "{ \"a\" : { \"b\" : \"tuna\" } }" ); var expectedOne = JToken.Parse( "{ \"a\" : { \"b\" : \"one\" } }" ); var expectedTwo = JToken.Parse( "{ \"a\" : { \"b\" : \"two\" } }" ); 
traversal.Get(actual).Should().BeEquivalentTo(JValue.CreateString("tuna")); // Set twice and verify that the sets did in fact overwrite traversal.Set(actual, "one").Should().BeEquivalentTo(JValue.CreateString("one")); actual.Should().BeEquivalentTo(expectedOne); traversal.Set(actual, "two").Should().BeEquivalentTo(JValue.CreateString("two")); actual.Should().BeEquivalentTo(expectedTwo); } public static IEnumerable<TestCaseData> RemoveTestCases() { return new TestCaseData[] { new TestCaseData( SimpleTraversal.NewTraversal( "__queryContext" ), JObject.Parse("{ 'Id' : '1234', '__queryContext' : { 'catalogLin' : [ 'a', 'b' ] } }" ), JObject.Parse("{ 'Id' : '1234' }" ), JObject.Parse("{ 'catalogLin' : [ 'a', 'b' ] }" ) ) { TestName = "RemoveTests(Inception Map Test)" }, new TestCaseData( SimpleTraversal.NewTraversal( "a.list.[1]" ), JObject.Parse("{ 'a' : { 'list' : [ 'a', 'b', 'c' ] } }" ), JObject.Parse("{ 'a' : { 'list' : [ 'a', 'c' ] } }" ), new JValue("b") ) { TestName = "RemoveTests(List Test)" }, new TestCaseData( SimpleTraversal.NewTraversal( "a.list" ), JObject.Parse("{ 'a' : { 'list' : [ 'a', 'b', 'c' ] } }" ), JObject.Parse("{ 'a' : { } }" ), new JArray( "a","b","c" ) ) { TestName = "RemoveTests(Map leave empty Map)" }, new TestCaseData( SimpleTraversal.NewTraversal( "a.list.[0]" ), JObject.Parse("{ 'a' : { 'list' : [ 'a' ] } }" ), JObject.Parse("{ 'a' : { 'list' : [ ] } }" ), new JValue("a") ) { TestName = "RemoveTestsMap leave empty List)" } }; } [TestCaseSource(nameof(RemoveTestCases))] public void RemoveTests(SimpleTraversal simpleTraversal, JToken start, JToken expectedLeft, JToken expectedReturn) { var actualRemoveOpt = simpleTraversal.Remove(start); actualRemoveOpt.Should().BeEquivalentTo(expectedReturn); start.Should().BeEquivalentTo(expectedLeft); } [Test] public void ExceptionTestListIsMap() { var tree = JObject.Parse("{ 'Id' : '1234', '__queryContext' : { 'catalogLin' : [ 'a', 'b' ] } }" ); var trav = SimpleTraversal.NewTraversal( "__queryContext" ); // barfs here, needs the 'List list =' part to trigger it FluentActions .Invoking(() => (JArray)trav.Get(tree)) .Should().Throw<InvalidCastException>(); } [Test] public void ExceptionTestMapIsList() { var tree = JObject.Parse("{ 'Id' : '1234', '__queryContext' : { 'catalogLin' : [ 'a', 'b' ] } }" ); var trav = SimpleTraversal.NewTraversal( "__queryContext.catalogLin" ); // barfs here, needs the 'Map map =' part to trigger it FluentActions .Invoking(() => (JObject)trav.Get(tree)) .Should().Throw<InvalidCastException>(); } [Test] public void ExceptionTestListIsMapErasure() { var tree = JObject.Parse("{ 'Id' : '1234', '__queryContext' : { 'catalogLin' : [ 'a', 'b' ] } }" ); var trav = SimpleTraversal.NewTraversal( "__queryContext" ); // this works var queryContext = (JObject)trav.Get(tree); // this does not FluentActions .Invoking(() => (JObject)queryContext["catalogLin"]) .Should().Throw<InvalidCastException>(); } [Test] public void ExceptionTestLMapIsListErasure() { var tree = JObject.Parse("{ 'Id' : '1234', '__queryContext' : { 'catalogLin' : { 'a' : 'b' } } }" ); var trav = SimpleTraversal.NewTraversal( "__queryContext" ); // this works var queryContext = (JObject)trav.Get(tree); // this does not FluentActions .Invoking(() => (JArray)queryContext["catalogLin"]) .Should().Throw<InvalidCastException>(); } } }
--- layout: creature name: "Nupperibo" tags: [medium, fiend, cr1/2, mordenkainens-tome-of-foes] page_number: 168 cha: 1 (-4) wis: 8 (-1) int: 3 (-3) con: 13 (+1) dex: 11 (0) str: 16 (+3) size: Medium fiend (devil) alignment: lawful evil challenge: "1/2 (100 XP)" languages: "understands Infernal but can't speak" senses: "blindsight 10 ft. (blind beyond this radius), passive Perception 11" skills: "Perception +1" damage_immunities: "fire, poison" speed: "20 ft." hit_points: "11 (2d8 + 2)" armor_class: "13 (natural armor)" condition_immunities: "blinded, charmed, frightened, poisoned" damage_resistances: "acid, cold; bludgeoning, piercing, and slashing from nonmagical attacks that aren't silvered" --- ***Cloud of Vermin.*** Any creature, other than a devil, that starts its turn within 20 feet of the nupperibo must make a DC 11 Constitution saving throw. A creature within the areas of two or more nupperibos makes the saving throw with disadvantage. On a failure, the creature takes 2 (1d4) piercing damage. ***Hunger-Driven.*** In the Nine Hells, the nupperibos can flawlessly track any creature that has taken damage from any nupperibo's Cloud of Vermin within the previous 24 hours. ### Actions ***Bite*** Melee Weapon Attack: +5 to hit, reach 5 ft., one target. Hit: 6 (1d6 + 3) piercing damage.
client: warn when handshake fails due to BADIP

Other parts of the code already show a meaningful error message for this failure, but not at the spot where it actually happened for me.

Thanks, merged
Deploying to Heroku still doesn't work after fixing Regarding deploying issues, I have been through this same errors since almost 2 days and google numerous times but still no results, I pushed app to Heroku and add Procfile to use Puma server instead Webrick. And then I updated ruby version to 2.0.0 due to Heroku's requirements. I had successfully run first app with sqlite3 before but how come error at this time? I would appreciate to get your help. Please see below: Gemfile: source 'https://rubygems.org' ruby '2.0.0' gem 'rails', '4.2.4' gem 'sass-rails', '~> 5.0' gem 'uglifier', '>= 1.3.0' gem 'coffee-rails', '~> 4.1.0' gem 'jquery-rails' gem 'turbolinks' gem 'jbuilder', '~> 2.0' gem 'bootstrap-sass', '~> 3.3.5' gem 'font-awesome-sass', '~> 4.4.0' gem 'pry', '~> 0.10.1' gem 'puma' gem 'sdoc', '~> 0.4.0', group: :doc group :development, :test do gem 'sqlite3', '~> 1.3.10' gem 'byebug' end group :production do gem 'pg', '~> 0.18.3' gem 'rails_12factor', '~> 0.0.3' end group :development do gem 'web-console', '~> 2.0' gem 'spring' end database.yml: # SQLite version 3.x # gem install sqlite3 # # Ensure the SQLite 3 gem is defined in your Gemfile # gem 'sqlite3' # default: &default adapter: sqlite3 pool: 5 timeout: 5000 development: <<: *default database: db/development.sqlite3 # Warning: The database defined as "test" will be erased and # re-generated from your development database when you run "rake". # Do not set this db to the same as development or production. test: <<: *default database: db/test.sqlite3 production: <<: *default database: db/production.sqlite3 console: Salmans-iMac:simple_saas salmanRT15$ git push heroku master Counting objects: 3, done. Delta compression using up to 4 threads. Compressing objects: 100% (3/3), done. Writing objects: 100% (3/3), 318 bytes | 0 bytes/s, done. Total 3 (delta 2), reused 0 (delta 0) remote: Compressing source files... done. remote: Building source: remote: remote: -----> Ruby app detected remote: -----> Compiling Ruby/Rails remote: -----> Using Ruby version: ruby-2.0.0 remote: -----> Installing dependencies using bundler 1.9.7 remote: Running: bundle install --without development:test --path vendor/bundle --binstubs vendor/bundle/bin -j4 --deployment remote: Rubygems 2.0.14 is not threadsafe, so your gems must be installed one at a time. Upgrade to Rubygems 2.1.0 or higher to enable parallel gem installation. 
remote: Using rake 10.4.2 remote: Using i18n 0.7.0 remote: Using json 1.8.3 remote: Using minitest 5.8.0 remote: Using thread_safe 0.3.5 remote: Using tzinfo 1.2.2 remote: Using activesupport 4.2.4 remote: Using builder 3.2.2 remote: Using erubis 2.7.0 remote: Using mini_portile 0.6.2 remote: Using nokogiri <IP_ADDRESS> remote: Using rails-deprecated_sanitizer 1.0.3 remote: Using rails-dom-testing 1.0.7 remote: Using loofah 2.0.3 remote: Using rails-html-sanitizer 1.0.2 remote: Using actionview 4.2.4 remote: Using rack 1.6.4 remote: Using rack-test 0.6.3 remote: Using actionpack 4.2.4 remote: Using globalid 0.3.6 remote: Using activejob 4.2.4 remote: Using mime-types 2.6.1 remote: Using mail 2.6.3 remote: Using actionmailer 4.2.4 remote: Using activemodel 4.2.4 remote: Using arel 6.0.3 remote: Using activerecord 4.2.4 remote: Using execjs 2.6.0 remote: Using autoprefixer-rails 6.0.2 remote: Using sass 3.4.18 remote: Using bootstrap-sass <IP_ADDRESS> remote: Using coderay 1.1.0 remote: Using coffee-script-source <IP_ADDRESS> remote: Using coffee-script 2.4.1 remote: Using thor 0.19.1 remote: Using railties 4.2.4 remote: Using coffee-rails 4.1.0 remote: Using font-awesome-sass 4.4.0 remote: Using multi_json 1.11.2 remote: Using jbuilder 2.3.1 remote: Using jquery-rails 4.0.5 remote: Using method_source 0.8.2 remote: Using pg 0.18.3 remote: Using slop 3.6.0 remote: Using pry 0.10.1 remote: Using puma 2.13.4 remote: Using bundler 1.9.7 remote: Using sprockets 3.3.4 remote: Using sprockets-rails 2.3.3 remote: Using rails 4.2.4 remote: Using rails_serve_static_assets 0.0.4 remote: Using rails_stdout_logging 0.0.4 remote: Using rails_12factor 0.0.3 remote: Using rdoc 4.2.0 remote: Using tilt 2.0.1 remote: Using sass-rails 5.0.4 remote: Using sdoc 0.4.1 remote: Using turbolinks 2.5.3 remote: Using uglifier 2.7.2 remote: Bundle complete! 18 Gemfile dependencies, 59 gems now installed. remote: Gems in the groups development and test were not installed. remote: Bundled gems are installed into ./vendor/bundle. remote: Bundle completed (0.80s) remote: Cleaning up the bundler cache. remote: -----> Preparing app for Rails asset pipeline remote: Running: rake assets:precompile remote: Asset precompilation completed (3.10s) remote: Cleaning assets remote: Running: rake assets:clean remote: remote: -----> Discovering process types remote: Procfile declares types -> web remote: Default types for Ruby -> console, rake, worker remote: remote: -----> Compressing... done, 31.7MB remote: -----> Launching... done, v10 remote: https://afternoon-wildwood-4552.herokuapp.com/ deployed to Heroku remote: remote: Verifying deploy.... done. Migration : Salmans-iMac:simple_saas salmanRT15$ heroku run rake db:migrate Running `rake db:migrate` attached to terminal... 
up, run.9334 ActiveRecord::SchemaMigration Load (1.1ms) SELECT "schema_migrations".* FROM "schema_migrations" Errors: Salmans-iMac:simple_saas salmanRT15$ heroku logs --tail 2015-09-08T18:39:55.482641+00:00 app[web.1]: from /app/config.ru:in `new' 2015-09-08T18:39:55.482642+00:00 app[web.1]: from /app/config.ru:in `<main>' 2015-09-08T18:39:55.482647+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/builder.rb:49:in `new_from_string' 2015-09-08T18:39:55.482644+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/builder.rb:49:in `eval' 2015-09-08T18:39:55.482649+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/builder.rb:40:in `parse_file' 2015-09-08T18:39:55.482650+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/server.rb:299:in `build_app_and_options_from_config' 2015-09-08T18:39:55.482660+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/server.rb:208:in `app' 2015-09-08T18:39:55.482661+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/server.rb:61:in `app' 2015-09-08T18:39:55.482666+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/server.rb:272:in `start' 2015-09-08T18:39:55.482665+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/server.rb:336:in `wrapped_app' 2015-09-08T18:39:55.482668+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/server.rb:80:in `start' 2015-09-08T18:39:55.482669+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:80:in `block in server' 2015-09-08T18:39:55.482671+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:75:in `tap' 2015-09-08T18:39:55.482674+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:75:in `server' 2015-09-08T18:39:55.482676+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:39:in `run_command!' 
2015-09-08T18:39:55.482677+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands.rb:17:in `<top (required)>' 2015-09-08T18:39:55.482681+00:00 app[web.1]: from bin/rails:8:in `require' 2015-09-08T18:39:55.482682+00:00 app[web.1]: from bin/rails:8:in `<main>' 2015-09-08T18:39:56.256256+00:00 heroku[web.1]: Process exited with status 1 2015-09-08T18:39:56.265441+00:00 heroku[web.1]: State changed from starting to crashed 2015-09-08T18:51:48.266869+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=afternoon-wildwood-4552.herokuapp.com request_id=852f1588-388a-4c5d-80bc-8942b0b9fe23 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T18:51:48.572914+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=afternoon-wildwood-4552.herokuapp.com request_id=c91a2e08-1b02-4cb9-9c04-f0163c97d33f fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T18:51:53.413318+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/about" host=afternoon-wildwood-4552.herokuapp.com request_id=290f453d-d0bd-45fe-a03a-85c470e898c0 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T18:51:53.582682+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=afternoon-wildwood-4552.herokuapp.com request_id=cd60a7c9-65b9-481d-8074-dfbea8db4774 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T18:52:49.199959+00:00 heroku[api]: Starting process with command `bundle exec rake db` 2015-09-08T18:52:54.293561+00:00 heroku[run.1272]: Awaiting client 2015-09-08T18:52:54.332011+00:00 heroku[run.1272]: Starting process with command `bundle exec rake db` 2015-09-08T18:52:54.759690+00:00 heroku[run.1272]: State changed from starting to up 2015-09-08T18:52:59.020288+00:00 heroku[run.1272]: Process exited with status 0 2015-09-08T18:52:59.032621+00:00 heroku[run.1272]: State changed from up to complete 2015-09-08T18:53:14.892087+00:00 heroku[api]: Starting process with command `bundle exec rake db:migrate` 2015-09-08T18:53:18.765194+00:00 heroku[run.9334]: Awaiting client 2015-09-08T18:53:18.792442+00:00 heroku[run.9334]: Starting process with command `bundle exec rake db:migrate` 2015-09-08T18:53:19.009934+00:00 heroku[run.9334]: State changed from starting to up 2015-09-08T18:53:24.988697+00:00 heroku[run.9334]: State changed from up to complete 2015-09-08T18:53:25.029574+00:00 heroku[run.9334]: Process exited with status 0 2015-09-08T18:53:43.714497+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=afternoon-wildwood-4552.herokuapp.com request_id=17663d6d-f9c9-4d30-b06e-1d4a32f3c4b8 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T18:53:44.114785+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=afternoon-wildwood-4552.herokuapp.com request_id=137c9b11-1134-4828-b8d7-f6e8fe835c02 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T18:58:20.786629+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=afternoon-wildwood-4552.herokuapp.com request_id=216bad18-00e7-471f-9264-b210e3527a82 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T18:58:21.022480+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=afternoon-wildwood-4552.herokuapp.com 
request_id=c08e8936-08e8-4555-8698-7d0ed93d197f fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:16:39.726171+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=afternoon-wildwood-4552.herokuapp.com request_id=5f10e8c4-fc8f-41bb-89ef-6b882fd10177 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:16:39.490004+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=afternoon-wildwood-4552.herokuapp.com request_id=8e8c3ef6-c67a-4293-9b5f-bf45ee8d9786 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:22:39.381210+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=afternoon-wildwood-4552.herokuapp.com request_id=a1c8ad83-ac03-4e94-9db3-ff5070e0ea15 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:22:39.643243+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=afternoon-wildwood-4552.herokuapp.com request_id=a20aafc1-b844-4413-a0fa-2d2645534123 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:33:01.164800+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=afternoon-wildwood-4552.herokuapp.com request_id=81699222-7883-4467-9722-dcaeb015446a fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:33:01.410808+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=afternoon-wildwood-4552.herokuapp.com request_id=f9d4abe6-1d89-44c1-ad48-81f88ab0cd22 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:33:02.028240+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=afternoon-wildwood-4552.herokuapp.com request_id=c113fab8-e561-4e8a-b1b3-7b830269d3a0 fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:33:02.235582+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=afternoon-wildwood-4552.herokuapp.com request_id=c27540ae-6db4-4bf2-af69-3e0c49cfa1ac fwd="<IP_ADDRESS>" dyno= connect= service= status=503 bytes= 2015-09-08T19:35:38.651911+00:00 heroku[web.1]: State changed from crashed to starting 2015-09-08T19:35:41.839066+00:00 heroku[web.1]: Starting process with command `bundle exec rails server -p 12426` 2015-09-08T19:35:45.128856+00:00 app[web.1]: => Run `rails server -h` for more startup options 2015-09-08T19:35:45.128854+00:00 app[web.1]: => Rails 4.2.4 application starting in production on http://<IP_ADDRESS>:12426 2015-09-08T19:35:45.128817+00:00 app[web.1]: => Booting Puma 2015-09-08T19:35:45.128858+00:00 app[web.1]: => Ctrl-C to shutdown server 2015-09-08T19:35:45.812085+00:00 app[web.1]: Exiting 2015-09-08T19:35:45.812931+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/engine.rb:469:in `each' 2015-09-08T19:35:45.812922+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/engine.rb:472:in `block (2 levels) in eager_load!' 2015-09-08T19:35:45.812926+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/engine.rb:471:in `each' 2015-09-08T19:35:45.812935+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/engine.rb:469:in `eager_load!' 2015-09-08T19:35:45.812936+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/engine.rb:346:in `eager_load!' 
2015-09-08T19:35:45.812943+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/application/finisher.rb:56:in `block in <module:Finisher>' 2015-09-08T19:35:45.812940+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/application/finisher.rb:56:in `each' 2015-09-08T19:35:45.812929+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/engine.rb:471:in `block in eager_load!' 2015-09-08T19:35:45.812916+00:00 app[web.1]: /app/app/mailers/contact_mailer.rb:1:in `<top (required)>': uninitialized constant ActiveMailer (NameError) 2015-09-08T19:35:45.812948+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/initializable.rb:30:in `run' 2015-09-08T19:35:45.812950+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/initializable.rb:55:in `block in run_initializers' 2015-09-08T19:35:45.812953+00:00 app[web.1]: from /app/vendor/ruby-2.0.0/lib/ruby/2.0.0/tsort.rb:150:in `block in tsort_each' 2015-09-08T19:35:45.812959+00:00 app[web.1]: from /app/vendor/ruby-2.0.0/lib/ruby/2.0.0/tsort.rb:219:in `each_strongly_connected_component_from' 2015-09-08T19:35:45.812960+00:00 app[web.1]: from /app/vendor/ruby-2.0.0/lib/ruby/2.0.0/tsort.rb:182:in `block in each_strongly_connected_component' 2015-09-08T19:35:45.812964+00:00 app[web.1]: from /app/vendor/ruby-2.0.0/lib/ruby/2.0.0/tsort.rb:180:in `each' 2015-09-08T19:35:45.812955+00:00 app[web.1]: from /app/vendor/ruby-2.0.0/lib/ruby/2.0.0/tsort.rb:183:in `block (2 levels) in each_strongly_connected_component' 2015-09-08T19:35:45.812967+00:00 app[web.1]: from /app/vendor/ruby-2.0.0/lib/ruby/2.0.0/tsort.rb:180:in `each_strongly_connected_component' 2015-09-08T19:35:45.812945+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/initializable.rb:30:in `instance_exec' 2015-09-08T19:35:45.812968+00:00 app[web.1]: from /app/vendor/ruby-2.0.0/lib/ruby/2.0.0/tsort.rb:148:in `tsort_each' 2015-09-08T19:35:45.812972+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/initializable.rb:54:in `run_initializers' 2015-09-08T19:35:45.812981+00:00 app[web.1]: from /app/config.ru:3:in `require' 2015-09-08T19:35:45.812976+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/application.rb:352:in `initialize!' 
2015-09-08T19:35:45.812979+00:00 app[web.1]: from /app/config/environment.rb:5:in `<top (required)>' 2015-09-08T19:35:45.812985+00:00 app[web.1]: from /app/config.ru:3:in `block in <main>' 2015-09-08T19:35:45.812987+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/builder.rb:55:in `instance_eval' 2015-09-08T19:35:45.813023+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/builder.rb:55:in `initialize' 2015-09-08T19:35:45.813025+00:00 app[web.1]: from /app/config.ru:in `new' 2015-09-08T19:35:45.813029+00:00 app[web.1]: from /app/config.ru:in `<main>' 2015-09-08T19:35:45.813030+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/builder.rb:49:in `eval' 2015-09-08T19:35:45.813035+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/builder.rb:40:in `parse_file' 2015-09-08T19:35:45.813032+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/builder.rb:49:in `new_from_string' 2015-09-08T19:35:45.813037+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/server.rb:299:in `build_app_and_options_from_config' 2015-09-08T19:35:45.813044+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/server.rb:336:in `wrapped_app' 2015-09-08T19:35:45.813038+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/server.rb:208:in `app' 2015-09-08T19:35:45.813042+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/server.rb:61:in `app' 2015-09-08T19:35:45.813052+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:80:in `block in server' 2015-09-08T19:35:45.813047+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/server.rb:272:in `start' 2015-09-08T19:35:45.813049+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/server.rb:80:in `start' 2015-09-08T19:35:45.813054+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:75:in `tap' 2015-09-08T19:35:45.813057+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:75:in `server' 2015-09-08T19:35:45.813063+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands.rb:17:in `<top (required)>' 2015-09-08T19:35:45.813059+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:39:in `run_command!' 2015-09-08T19:35:45.813064+00:00 app[web.1]: from bin/rails:8:in `require' 2015-09-08T19:35:45.813067+00:00 app[web.1]: from bin/rails:8:in `<main>' 2015-09-08T19:35:46.599863+00:00 heroku[web.1]: State changed from starting to crashed 2015-09-08T19:35:46.591596+00:00 heroku[web.1]: Process exited with status 1 Looks like it deployed successfully. @Daiku, sorry I forgot. Now I added errors message. @infused, yes you are right I feel it should work You need to change class ContactMailer < ActiveMailer::Base to class ContactMailer < ActionMailer::Base in your /app/app/mailers/contact_mailer.rb and anywhere in your rails app. There isn't class ActiveMailer, just ActionMailer Oh got it. Now it is solved after testing. Thank you! I wonder, how did you figure out one mistake since huge errors messages via console? 
An experienced developer's eye can quickly spot lines like this one:

2015-09-08T19:35:45.812916+00:00 app[web.1]: /app/app/mailers/contact_mailer.rb:1:in '<top (required)>': uninitialized constant ActiveMailer (NameError)

;)

Yes, clearly that was something else, not a deployment issue but the ActiveMailer typo, lol. I will remember that next time.
II. CTENOGOBIUS ABEI Jordan and Snyder, new species.

Head 'd'^ in length; depth 5^^; depth of caudal peduncle )i in head; eye 31; snout 4; maxillary 2|; D. VI-9; A. 9; P. 10; scales in lateral series 30, in transverse series 13.

Body short, thick, cylindrical anteriorly; caudal peduncle compressed. Head large; snout bluntly rounded. Eyes of moderate size, directed laterally; interorbital space somewhat convex; distance between eyes equal to 1^ times their diameter. Mouth oblique; jaws equal; maxillary concealed, extending to a vertical through posterior part of pupil. Teeth in narrow bands on both jaws; the outer ones enlarged. Tongue concave anteriorly. Gill openings restricted to the sides; isthmus broad; its width contained about 3 times in head. No papillae on inner edge of shoulder girdle. Gill-rakers very short and blunt. Anterior nostril with a tube. No barbels on head.

Fig. 5. — Ctenogobius abei.

Occiput and upper part of opercles with scales, head otherwise naked; body covered everywhere with finely ctenoid scales, small anteriorly, growing gradually larger posteriorly. Dorsals separate; the spines with long, projecting filaments; when depressed they reach beyond insertion of soft dorsal; rays a little longer posteriorly; when depressed not reaching base of caudal. Anal inserted below base of second dorsal ray; when depressed, reaching as far posteriorly as does the dorsal. Caudal rounded. Pectorals pointed; the upper rays without free filaments. Ventrals free posteriorly from belly.

Color in spirits, light olive, mottled and banded with brownish black. Anterior half of body with 5 broad, vertical dark bands; posterior half with 2 longitudinal dark bands extending on base of caudal fin; the upper band connected with its fellow on the opposite side of body by indistinct dark bands which nearly coalesce into a dark mass of color. Head with dark reticulations. Spinous dorsal with a black
closed under disjoint union $\Rightarrow$ closed under union?

Closed under disjoint union $\Rightarrow$ closed under union? Seems easy, but I can't wrap my head around it. If the implication is not true, I would appreciate a counterexample. For completeness, here is the definition made in class: for $A_1, A_2\in\mathcal{A}$, members of a set system, we say that $\mathcal{A}$ is closed under disjoint union if, whenever $A_1\cap A_2=\emptyset$, the disjoint union $A_1\sqcup A_2$ is in $\mathcal{A}$; here $A_1\sqcup A_2=A_1\cup A_2$ when $A_1\cap A_2=\emptyset$.

Hint: What if $\mathcal{A}$ contains just the two subsets $\{1,2\}$ and $\{2,3\}$? A good general approach to this kind of question is to imagine really small (counter)examples before trying to wrestle with the formal arguments.

But $\{1,2\}$ and $\{2,3\}$ aren't disjoint, so ...

@Buochserhorn: Exactly -- so "closed under disjoint union" places no demands on this $\mathcal A$, and is trivially satisfied by it. But the $\mathcal A$ is not closed under arbitrary unions.

No, they are not. But $\mathcal{A}$ is closed under disjoint unions because whenever two elements are disjoint their union is in $\mathcal{A}$. It just so happens in this example that there are no disjoint pairs.
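To spell the hint out (this just restates what the comments above establish): take
$$\mathcal{A} = \bigl\{\{1,2\},\ \{2,3\}\bigr\}.$$
No two members of $\mathcal{A}$ are disjoint, so the condition "$A_1\cap A_2=\emptyset \Rightarrow A_1\sqcup A_2\in\mathcal{A}$" holds vacuously and $\mathcal{A}$ is closed under disjoint unions. It is not closed under unions, since
$$\{1,2\}\cup\{2,3\} = \{1,2,3\}\notin\mathcal{A}.$$
So closure under disjoint unions does not imply closure under unions.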
Argeria G. Roglan Nimbias Argeria was Hebrion Academy’s number one Single Ranker of the 170th batch. Dungeon Invasion He relinquished the command of his party and handed it over to Desir.
Apparatus for lifting stacks of bricks and the like Jan. 27, 1959 H. J. NEHEfi 2,871,052 APPARATUS FOR LIE TING STACKS OF BRICKS AND THE LIKE Filed Aug. 25, 1955 3'Sheets-Sheet 1 bl 1 i1 F557! i '52 55 I 2.2 0 IO 1 "Wm d5 1 I up i "4 3+ 5 32 EH. a -n 33 28 27 al f f \I I INVENTOR. I Her ber+ J.Neh er & w BY w a 4 W' 4-2 LL/ AHornegQ Jan. 27, 1959 H. J. NEHER APPARATUS FOR LIFTING smc xs OF BRICKS AND THE LIKE Filed Aug. 25, 1955 3 Sheets-Sheet 2 I :II 2% 21 lNVENTOR. Herber+ J. Nah er H. J. NEHER Jan. 27, 1959 APPARATUS FOR LIFTING .STACKS OF BRICKS AND THE LIKE Filed Aug. 25, 1955 3 Sheets-Sheet 3 s ein AH'ornegs United States Patent 'ice APPARATUS FOR LIFTING STA'CKS OF BRICKS AND THE LIKE Herbert J. Neher, Decatur, Ala. Application August 25, 1955, Serial N 0. 530,545 4 Claims. (Cl. 294-63) This invention relates to apparatus for lifting stacks of bricks and the like and is an improvement over the apparatus described and claimed in my Patent No. 2,668,731, issued Feb. 9, 1954, and entitled Apparatus For Lifting Stacks of Bricks and the Like. An object of my invention is to provide apparatus for lifting stacks of bricks and the like which shall include improved means associated with the gripping elements of the apparatus which cause the same to move evenly and concomitantly into clamping engagement with the stack. Another object of my invention is to provide apparatus of the character designated which shall include a vertically movable equalizing bar together with improved means holding the equalizing bar in lowered position relative to the stack while the apparatus is being lifted off the stack. 4 Further objects of my invention are to provide apparatus of the character designated which shall be simple of construction, economical of manufacture and shall be particularly adapted for use under the rugged conditions encountered around brick yards. Briefly my improved apparatus comprises a main frame adapted to be positioned over the stack to be lifted. Secured to one side of the main frame are a plurality of horizontal tubular members having vertical gripping mem-' bers extending downwardly along one side of the stack. Telescoping within the tubular members are elongated horizontal members having vertical gripping members secured to the outer ends thereof and projecting downwardly along the other side of the stack. Connected to each of the tubular members is one end of a cable which passes around a sheave mounted on the elongated member and then around a sheave mounted on the tubular member. Independently movable sheave blocks are connected to the other ends of the cables. Connected to the sheave blocks by means of a flexible member is a vertically movable equalizing bar which is adapted, upon upward movement, to clamp the stack and subsequent thereto lift the entire apparatus and the stack. Mounted on the equalizing bar is a hold-down hook member. which is biased to a position to engage a portion of'the main frame'whereby upward movement of the equalizing bar relative to the main frame is limited until the hook mem of Fig. 1; Fig. 3 is a plan view taken generally along the line IIIIII of. Fig. 1 and showing the stack of bricks in dotted lines; Fig. 4 is a side elevational view, parts being broken away and in section; Fig. 5 is a fragmental sectional-view taken generally along the line V-V of Fig. 4; and 2,871,052 Patented an. 27, Fig. 6 is a fragmental plan view showing the means for engaging the hook member and holding the same in lowered position. 
Referring now to the drawings for a better understanding of my invention, I show a main frame 10 having angle side members 11 and 12 at one side and angle side members 13 and 14 at the other side thereof- The'angle side members have inwardly extending horizontal legs, as shown in Fig. 1, with the vertical legs thereofwsecured to each other by any suitable means, such as by bolts 16. Connecting the side members at one end. of the frame 14 are transverse angles 17 and 18 and connecting the side members at the other end of frame 10 are transverse angles 19 and 21. The transverse angles are secured to the-angle side members by means of bolts 22 which pass through openings in the horizontal legs of the side frame members 11 and 13 and through elongated openings 23 provided in the transverse angle members, whereby the width of the main frame may be adjusted. Extending transversely beneath the main frame 10 and secured rigidly thereto by any suitable means,suc h as by welding is a plurality of tubular members 24. As shown in Figs. 1 and 3, the ends of the tubular members 24 project outwardly of one side of the main frame and have secured to the outer end thereof downwardly and inwardly extending gripping arms 26. As shown in Fig. 2, the tubular members 24 are rectangular in shape, as viewed in transverse cross section and are braced at their juncture with the gripping arms 26 by gusset plates 27. Telescoping within the tubular members 24 are elongated members 28 having downwardly and inwardly ex-' tween the gripping arms 26 and 29. tending gripping arms 29 secured to the outer ends thereof. Gusset plates 31 are provided at the juncture of the elongated members 28 with the girpping arms 29. The elongated members 28 are also rectangular, as viewed in transverse cross section whereby there is no rotation of the member 28 relative to the tubular member 24. Positioned within the tubular members 24 inwardly of the elongated members 28 are compression springs 32 which urge the tubular members 24 and the elongated members 28 away from each other. Mounted at the sides of two of the tubular members 24 adjacent the corners of the main frame 10- are vertically extending sleeve members 33 which are threaded internally for receiving threaded members 34. Mounted at the lower end of the threaded members 34 are disc like plate members 36 which are in position to engage the upper surface of the stack of bricks, indicated at S, when the apparatus is lowered onto the stack, as shown at Figs. 1 and 3. The disc members 36 are ac!- justed whereby the lower ends of the gripping arms 26 and 29 are spaced a slight distance from the supporting surface for the bricks, indicated at 37, when the disc members are in engagement with the upper surface of the stack S. The lower ends of the gripping arms 26 are bent inwardly as at 38 whereby only the lower edges thereof engage the lower course of bricks at the side of the stack. Suitable resilient gripping elements 39 are provided at the lower ends of the gripping arms 29 in position to engage the lower course of bricks, as shown in Fig. l. The resilient gripping elements 39 together with the compression springs 32 provide resilient gripping means for engaging the lower course of bricks at each side of the stack whereby the stack is gripped firmly be- Mounted at the sides of the tubular members 24 are sheave blocks 41 which carry sheaves 42. Mounted at the sides of the elongated members 28 in alignment with the sheave blocks 41 are sheave blocks 43 which ear ry' sheaves 44. 
Secured to each of the sheave blocks 41 cables 47 pass around the sheaves 44 mounted on the elongated members 28 and then around the sheaves 42 mounted on the tubular members 24, as best seen in Fig. 1. Connected to the other ends of the cables 47 are sheave blocks 48 which carry sheaves 49. Positioned above the main frame 10 and the sheaves 49 is an equalizing bar 51 which comprises parallel spaced apart bars 52 and 53 connected by a plurality of bolts 54. Mounted for rotation on the bolts 54 are sheaves 56. Each sheave 56 is so spaced longitudinally of the equalizing bar 51 as to lie approximately midway between a subjacent pair of the sheaves 49. Secured to a pin 57 passing between the bars 52 and 53 is one end of a flexible member 58, such as a cable or wire rope. As shown in Fig. 4, the flexible member 53 is threaded alternately under the sheaves 49 and over the sheaves 56 and finally is secured to a pin 59 at the opposite end of the equalizing bar 51. Connected to the ends of the equalizing bar 51 are cable members 61 and 62 which are connected to a suitable form of lifting apparatus, not shown, such as a lift truck or the like. As shown in Fig. 4, a pair of sheaves 56 are mounted directly above the transverse angles 1718 and 19-21 whereby the cable 58 travels in a horizontal direction over the transverse angles. Mounted between the transverse angles 17 and 18 by any suitable means, such as by welding, is a downwardly sloping trough-like member 63 having a bottom wall 64 and upwardly flaring side walls 66 and 67. The bottom wall 64 of the trough-like member is cut away as at 68 to provide a bottomless lower portion for the trough. In like manner, mounted between the transverse angles 19 and 21 is a downwardly sloping trough 69 which is identical in construction to the trough 63. Mounted adjacent the ends of the equalizing bar'51 between the pairs of sheaves 56 are downwardly projecting hook members 71 and 72. As shown in Fig. 5, the hooks 71 and 72 engage beneath the lower edge or cutaway portion of the bottom wall 64 thereby limiting upward movement of the equalizing bar relative to the main frame until the hook is disengaged. The trough-like members 63 and 69 being beneath the hook members 71 and 72 serve as guides for the hook members whereby they move into proper engagement with the cut-away portion of the bottom wall 64. That is, due to the fact that the hook members 71 and 72 are supported by flexible members and are adapted to swing relative to the sheave blocks 48, the trough-like members guide the hook members downwardly whereby they engage the cutaway portion 68 of the bottom wall, thus eliminating the necessity of providing manual means for this operation. To disengage the hook members 71 and 72 from the trough-like members 63 and 69, an outwardly projecting handle member 73 is mounted on the equalizing bar, as shown at Fig. 5. To prevent the equalizing bar 51 from falling off the forward end of the trough-like members 63 and 69, I provide outwardly and upwardly extending guide members 74 and 76 at the sides 67 of the trough-like members. As shown in Fig. 5, the guide members 74 and 76 form with the trough-like members 63 and 69 V-like supports for the equalizing bar. From the foregoing description, the operation of my improved apparatus will be readily understood. To lift a stack of bricks or the like, the apparatus is positioned over the stack S with the gripping arms 26 and 29 at opposite sides of the stack. 
With the apparatus thus positioned, the springs 32 hold the gripping arms in spaced relation to the sides of the stack and the disc-like members 36 engage the upper surface of the stack to hold the lower ends of the gripping arms 26 and 29 in the spaced-relation to the supporting surface 37. The handle member 73 is moved downwardly to disengage the hook members 71 and 72 from the trough-like members 63 and 69 and the equalizing bar 51 is lifted by applying. a lifting force to the cables 61 and 62. This lifts all of the sheave blocks 48 in unison through the medium of cable 58, thus pulling each of the cables 47 about the sheaves 42 and 44 and causing the elongated members 28 to move inwardly relative to the tubular members 24. As the elongated member 28 and tubular member 24 move inwardly toward each other, the gripping arms 26 and 29 engage the lower course of bricks at the sides of the stack S. Upon further upward movement of the equalizing bar 51, the stack of bricks is lifted off the supporting surface 37. To remove the apparatus from the stack of bricks after the same has been moved to the desired location, the apparatus is lowered onto the supporting surface 37. Upon further lowering the equalizing bar, it moves downward relative to the main frame 10 thereby providing slack in the cable 47. The compression spring 32 then moves the griping arms 26 and 29 away from each other and out of engagement with the sides of the stack S. As the equalizing bar 51 is lowered, the hook members 71 and 72 slide down the bottom 64 of the trough-like members 63 and 69 until they engage beneath the cutaway portion 68 thereof. The guide members 74 and 76, together with the trough-like members, provide means for retaining the. equalizing bar on the trough-like member when the same is moved to fully lowered position. Upon lifting the equalizing bar 51, the hook members 71 and 72 being in engagement with the lower edge of the bottom wall 64 limit upward movement of the equalizing bar relative to the main frame 10. With the equalizing bar thus locked in this position the cable 47 is not pulled about the sheaves 42 and 44 as the equalizing bar is raised Accordingly, the gripping arms 26 and 29 remain out of engagement with the stack S, thus permitting the entire apparatus to be raised off the supporting surface 37. The apparatus is then ready to be placed about another stack S. From the foregoing, it will be apparent that l have devised improved apparatus for handling stacks of bricks and similar objects which is simple of construction and is especially adapted for use. in transporting stacks of bricks from place to place in a yard or placing the same in a vehicle. By providing cable means associated with each set of telescoping members, equal pressure is applied by each of the gripping elements, thus effectively gripping the stack for lifting the same. Also, by providing hook members on the equalizing bar which engage the lower edge of the downwardly sloping bottom wall of the trough-like member, the equalizing bar is effectively locked in lowered position, whereby it cannot move relative to the main frame until it is manually released by the operating handle. Furthermore, by providing guide members which together with the trough-like members form V-shaped retaining means for the equalizing bar, there is no possibility of the equalizing bar falling out of operating position. 
While I have shown my invention in but one form, it will be obvious to those skilled in the art that it is not so limited, but is susceptible of various changes and modifications without departing. from the spirit thereof, and I desire, therefore, that only such limitations shall be placed thereupon as are specifically set forth in the appended. claims. What I claim is: l. In apparatus for lifting a stack of bricks and the like, a main frame adapted-to be positioned over the stack of bricks, parallel horizontal tubular members secured to ends of the slidable members opposite said first mentioned gripper arms and adapted to contact bricks on the opposite side of the lower course, a sheave mounted on a horizontal axis on each of said tubular members, a second sheave mounted on each of said slidable members outwardly of said tubular members with the second sheaves being equally spaced from the gripper arms on the slidable members, a cable connected at one end to each of said tubular members and passing around said second sheave and then around the first mentioned sheave, a sheave block positioned over each tubular member to which the other end of said cable is connected, an equalizing bar mounted for movement in a vertical plane above the sheave blocks, sheaves mounted on the equalizing bar, a second cable having its ends secured to the equalizing bar and passing alternately under the sheaves in the blocks and over the sheaves on the equalizing bar, means for lifting the equalizing bar thereby to move the sheave blocks upwardly and move the tubular members and the horizontally extending members inwardly of the stack toward each other thereby moving the gripping arms into contact with the sides of the lower course of bricks, releasable hold-down means carried by said equalizing bar adapted to engage the main frame and limit movement of the equalizing bar relative to the main frame when said equalizing bar is lifted, and means mounted on said equalizing bar to release said hold-down means. 2. Apparatus as defined in claim l in which the holddown means comprises a downwardly sloping trough secured to the main frame and a hook member mounted adjacent each end of the equalizing bar and biased to a position to engage a portion of said trough thereby limiting upward movement of the equalizing bar relative to the main frame until the hold-down means is released. 3. 
In stack lifting apparatus, a main frame adapted to be positioned over the stack, sets of inwardly movable gripping arms suspended from the main frame and adapted to be disposed on opposite sides of the stack to extend upwardly above the stack, means adapted to move said arms into contact with the stack comprising parallal horizontal tubular members secured to the main frame and to the gripping arms at one side of the stack, a horizontally slidable member telescoping within each of said tubular members and secured to a gripping arm at the other side of the stack, a sheave mounted on each of said tubular members, a second sheave mounted on each of said slidable members outwardly of said tubular members and spaced horizontally from the sheave on said tubular member with the second sheaves being equally spaced from the gripper arms on the slidable members, a cable secured at one end to each of said tubular members and passing around said second sheave and then around the first mentioned sheave, an equalizing bar disposed above the arms, sheaves mounted on the equalizing bar, a sheave block connected to the other end of each of said cables, sheaves mounted on the sheave blocks, another cable having its ends anchored adjacent the ends of said equalizing bar and passing alternately under the sheaves of the blocks and over the sheaves of theequalizing bar, means to lift the equalizing bar whereby upon upward movement of the blocks the tubular members and the-slidable members move inwardly of the stack toward each other and the gripping arms move into engagement with the sides of the stack, releasable hold-down means carried by said equalizing bar to engage the main frame and limit upward movement of the equalizing bar relative to the main frame when the equalizingbar is lifted, and means mounted on said equalizing bar to release said hold-down means. 4. Apparatus as defined in claim 3 in which the holddown means comprises hooks mounted on the equalizing bar, and downwardly sloping trough-like members mounted on the main frame beneath said hooks, the bottom Walls of said trough-like member being cut away adjacent the lower ends thereof whereby said hooks engage the lower edges of said bottom walls when the equalizing bar is lowered relative to the main frame, and means mounted on the equalizing bar for tilting said hooks whereby they disengage said lower edges. References Cited in the file of this patent UNITED STATES PATENTS 424,571 Pay Apr. 2, 1890 1,192,504 Crum July 25, 1916 2,076,204 Martin Apr. 6, 1937 2,668,731 Neher Feb. 9, 1954 2,744,780 Dixon May 8, 1956 FOREIGN PATENTS 14,254 Netherlands Feb. 15, 1926 26,416 Netherlands Apr. 15, 1932 656,586 France Jan. 2, 1929 915,691 France July 29, 1946
package main

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"

	"github.com/ivaylopivanov/chaincode-samples/multi-currency-wallet/transactions"
)

// history looks up and returns the stored transactions for the user named in args[0].
func history(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	if len(args) < 1 {
		return shim.Error("Not enough arguments")
	}

	user := args[0]

	// Read the serialized transaction history for this user from the ledger state.
	b, err := transactions.Get(stub, user)
	if err != nil {
		return shim.Error(fmt.Sprintf("History state error: %s", err))
	}

	return shim.Success(b)
}
Applying Gradle signing plugin, without requiring a GPG keyring

I am following the instructions at http://central.sonatype.org/pages/gradle.html to use Gradle to upload artifacts to the Maven Central Repository. The instructions work. Examples appear at https://github.com/plume-lib/options/blob/master/build.gradle and https://github.com/plume-lib/bcel-util/blob/master/build.gradle.

My problem is that it results in a buildfile that other developers cannot use. The Gradle signing plugin (https://docs.gradle.org/current/userguide/signing_plugin.html) requires a gradle.properties file with signing.keyId, signing.password, and signing.secretKeyRingFile, where the latter points to a valid GPG keyring. The plugin terminates the build with an error if the file doesn't exist or is not valid. But signing is only needed when uploading artifacts to Maven Central. I want any user to be able to run the gradle buildfile (except for actually uploading to Maven Central), even if they do not have a GPG keyring. How can I achieve this?

Here are some things I have tried:

1. Split the gradle file into parts. (This is what is shown in the linked examples.) This requires two changes: change the main buildfile into a build script (it can still be invoked from the command line), and create another buildfile, used only for signing, that uses apply from: to include the main one. One gross thing about this is that only a gradle buildfile can contain a plugins { ... } block, and referring to a plugin outside the plugins { ... } block is verbose and ugly, as at the bottom of (say) https://plugins.gradle.org/plugin/com.github.sherter.google-java-format. Question: Is there a way to do this without the ugly buildscript block?

2. Commit a dummy keyring to the repository, refer to it in the local gradle.properties, and let the user's ~/.gradle/gradle.properties override it for any user who wants to upload to Maven Central. A problem is that using a local pathname yields a gradle warning. Question: Is there a way to do this without a gradle warning?

3. Use conditional signing (https://docs.gradle.org/current/userguide/signing_plugin.html). In my experiments, this does not help. Even on a gradle execution where the artifacts are not signed, the signing plugin still requires the GPG keyring to exist. Question: Can conditional signing be used to avoid the need for a GPG keyring?

Question: Is there a better way to achieve my goals than the above possibilities?

The documentation has a section on "Conditional Signing" which is exactly what you need here. And you can even make that condition check that the required properties are indeed available to the build.

Point #3 in my question points out that conditional signing does not work, at least not as described in the manual.

That's really strange. I have authored the Ehcache 3 build and it is working as documented; there are no definitions of the required properties in the build nor in my local ~/.gradle/gradle.properties - see https://github.com/ehcache/ehcache3/blob/master/buildSrc/src/main/groovy/EhDeploy.groovy#L53
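For reference, here is roughly what the conditional-signing route from point 3 looks like when the condition also checks that the signing properties are present, written as a sketch in Gradle's Kotlin DSL rather than the Groovy used above. The "maven" publication name and the property check are illustrative assumptions, not the exact setup from the question or from the Ehcache build, and whether Gradle avoids touching the keyring entirely when signing is not required has varied between Gradle versions.

// build.gradle.kts -- sketch only; assumes the standard maven-publish and signing plugins.
plugins {
    `java-library`
    `maven-publish`
    signing
}

publishing {
    publications {
        // "maven" is a hypothetical publication name used for illustration.
        create<MavenPublication>("maven") {
            from(components["java"])
        }
    }
}

signing {
    // Only require signing when the signing properties exist and a publish task is
    // actually in the task graph. The lambda is evaluated lazily, when the Sign
    // tasks run, so the task graph is populated by then.
    setRequired({
        project.hasProperty("signing.keyId") &&
            gradle.taskGraph.allTasks.any { it.name.startsWith("publish") }
    })
    sign(publishing.publications["maven"])
}

With a setup like this, contributors who have no signing.* properties can still run build and test tasks; the keyring only matters on the machine that actually publishes, which is the behaviour the question is after (subject to the Gradle-version caveat above).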
package com.example.petitesannonces; import androidx.appcompat.app.AppCompatActivity; import androidx.recyclerview.widget.RecyclerView; import android.content.Intent; import android.net.Uri; import android.os.Bundle; import android.provider.ContactsContract; import android.view.View; import android.widget.AbsListView; import android.widget.Button; import android.widget.EditText; import android.widget.ImageButton; import android.widget.Toast; import java.util.ArrayList; import java.util.List; public class Chat extends AppCompatActivity { int id_user; int id_dest; RecyclerView recyclerView; List<MessageModel> lMessages; UserModel dest; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_chat); id_user = getIntent().getIntExtra("id_user",-1); id_dest = getIntent().getIntExtra("id_dest",-2); dest = Database.getInstance().getUser(id_dest); lMessages = Database.getInstance().obtenirMessage(id_user,id_dest); recyclerView = findViewById(R.id.recyclerViewChat); recyclerView.setAdapter(new ItemMessageAdapter(lMessages,dest.getName(),R.layout.item_mini_chat)); ///////////////// /// Ajout comportement des boutons //////////////// ((ImageButton)findViewById(R.id.imgBtn_actualiser)).setOnClickListener(new View.OnClickListener(){ @Override public void onClick(View v) { lMessages = Database.getInstance().obtenirMessage(id_user,id_dest); recyclerView.setAdapter(new ItemMessageAdapter(lMessages,dest.getName(),R.layout.item_mini_chat)); } }); ((ImageButton)findViewById(R.id.btn_envoi_message)).setOnClickListener(new View.OnClickListener(){ @Override public void onClick(View v) { String message = ((EditText)findViewById(R.id.et_envoi_message)).getText().toString().trim(); if(!message.equals("") ){ if(Database.getInstance().isConnected()){ if(Database.getInstance().envoyerMessage(id_user,id_dest,message)) { ((EditText) findViewById(R.id.et_envoi_message)).setText(""); }else{ Toast.makeText(getApplicationContext(),"Erreur d'envoi de message",Toast.LENGTH_SHORT).show(); } }else{ Toast.makeText(getApplicationContext(),"Erreur connexion à la BDD",Toast.LENGTH_SHORT).show(); } } } }); ((ImageButton)findViewById(R.id.imgBtn_appeler)).setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent callIntent = new Intent(Intent.ACTION_DIAL); callIntent.setData(Uri.parse("tel:"+dest.getPhone()));//change the number startActivity(callIntent); } }); } }
VBScript to send outlook email without asking user to log in I made VBScript to send Outlook email without prompt user to log in in his or her account. But when I tried to run the script, it prompts me to log in. I don't know is it possible to send it without asking, but any help is nice. Here's my code: 'Create an Outlook object Dim Outlook Set Outlook = CreateObject("Outlook.Application") 'Create e new message Dim Message Set Message = Outlook.CreateItem(olMailItem) With Message .Subject = "subject" .Body = "body" 'Set destination email address .Recipients.Add<EMAIL_ADDRESS> 'Set sender address If specified. 'Const olOriginator = 0 'If Len(aFrom) > 0 Then .Recipients.Add(aFrom).Type = olOriginator 'Send the message .Send End With I'm sorry, the code is so dirty. PS: I'm not good with English. Please understand if i wrote something wrong. Do you mean it prompts you to select an Outlook profile? Or to supply the credentials? It prompts to select profile and log into it. Use the Logon method only to log on to a specific profile when Outlook is not already running. This is because only one Outlook process can run at a time, and that Outlook process uses only one profile and supports only one MAPI session. When users start Outlook a second time, that instance of Outlook runs within the same Outlook process, does not create a new process, and uses the same profile. If Outlook is already running, using this method does not create a new Outlook session or change the current profile to a different one. You can turn profile prompt off in Control Panel | Mail | Show Profiles. How can i make it programmable? Set PickLogonProfile value to 0 in HKEY_CURRENT_USER\Software\Microsoft\Exchange\Client\Options But how can I do it with VBScript? Use WScript.Shell object and call RegWrite. Or use Shell.Application and use ShellExecute to run reg command But How?!?!?!?! I don't mean to be mean, but literally the first Google hit is https://www.robvanderwoude.com/vbstech_registry_wshshell.php
HOLLIDAY, executor, et al. v. POPE. No. 16548. April 14, 1949. Rehearing: denied May 12, 1949. Robert B. Blackburn, for plaintiff in error. Duckworth, Chief Justice. (After stating the foregoing facts.) While three separate judgments were rendered and excepted to, the only judgment which requires consideration here is that which overruled the demurrers to the petition as finally amended and to the last amendment. This is true for the reason that, when the original petition was amended after being demurred to, the questions raised by the first demurrer became moot, and the demurrer became extinct or nugatory, and when the petition as then amended was demurred to and again amended, the second' demurrer likewise became extinct or nugatory. Powell v. Cheshire, 70 Ga. 357, 360 (48 Am. R. 572); Livingston v. Barnett, 193 Ga. 640 (19 S. E. 2d, 385); Hughes v. Purcell, 198 Ga. 666 (1) (32 S. E. 2d, 392); Mooney v. Mooney, 200 Ga. 395 (2) (37 S. E. 2d, 195). The demurrer to the petition as finally amended, renewing all previous grounds of demurrer and adding other grounds of demurrer, is, therefore, the only one which requires consideration on the exceptions of the defendant, the grounds of demurrer having been generally referred to in the foregoing statement of facts. “Specific performance is not a remedy which either party can demand as a matter of absolute right, and will not in any given •case be granted unless strictly equitable and just. Mere inadequacy of price may justify a court in refusing to decree a specific performance of a contract of bargain and sale; so also may any •other fact showing the contract to be unfair, or unjust or against good conscience. And in order to authorize specific performance of a contract, its terms must be clear, distinct, and definite.” Shropshire v. Rainey, 150 Ga. 566 (2) (104 S. E. 414); Coleman v. Woodland Hills Co., 196 Ga. 626 (1) (27 S. E. 2d, 226). It is the general rule that the petition must allege the value of the services to be rendered and also the value of the property to be willed, so as to show that the contract sought to be enforced is not unfair or unjust or against good conscience. Johns v. Nix, 196 Ga. 417, 418 (2) (26 S. E. 2d, 526); Matthews v. Blanos, 201 Ga. 549, 562 (40 S. E. 2d, 715). Exceptions exist in cases where one goes into the home of a near relative agreeing to nurse and give the other personal, affectionate, and considerate attention such as could not be readily, procured elsewhere, and where the value of such services could not be readily computed in money. Potts v. Mathis, 149 Ga. 367, 370 (100 S. E. 110); Brogdon v. Hogan, 189 Ga. 244, 250 (5 S. E. 2d, 657); Bullard v. Bullard, 202 Ga. 769 (1) (44 S. E. 2d, 770). Tested by the above-stated principles of law, the petition is not subject to the general ground of demurrer that no cause of action was set forth against the defendant; but, on the contrary, it sets out clearly, distinctly, and definitely facts which would authorize specific performance of the alleged contract. It alleges that about 1930 the petitioner entered into a contract with his grandmother whereby it was agreed that, if he would make his home with her and render her such services and assistance from time to time as she required, she would will him described real estate. It alleges with particularity her requirements and his full compliance therewith and the failure of the grandmother to fulfill her promise. 
It alleges the value of the services rendered, and though it fails to set forth the value of the real estate to be devised, the near relationship of the contracting parties brings the case within the exception to the general rule. Under the allegations of the petition as amended as to the defendant not being under bond, his insolvency, and his liability for rents and profits from the property since February 23, 1945, and that unless a receiver be appointed to take charge of the property involved, the petitioner will suffer irreparable loss, a case is also made for the appointment of a receiver and an accounting in equity, since such full relief could not be afforded by the court of ordinary. McCord v. Walton, 192 Ga. 279 (14 S. E. 2d, 723). But it is urged in one ground of general demurrer that the petitioner, who is shown to have been a boy of about nine years of age at the time of entering into the alleged agreement, was incapable of contracting and, hence, the present action cannot be maintained. However, the Code, § 20-202, declares that “The exemption of the infant is a personal privilege. The party contracting with him may not plead it, unless he was ignorant of the fact at the time of the contract; nor may third persons avail themselves of it as a defense.” It could not reasonably be said that the grandmother was ignorant of the petitioner’s infancy at the time of the alleged contract, and it does not lie in the mouth of her executor, who stands in her shoes, to urge the disqualification to contract. Another ground of general demurrer is that the court of equity was without jurisdiction to set aside the will, which had been probated in common form in the court of ordinary. While recognizing the principle of law contended for, it is clear that no attempt is being made by allegations and-prayer to set aside the will, although it is alleged to be fraudulent in its failure to devise to the petitioner the property here sought to be acquired. A decree of specific, performance would not oust the defendant executor or interfere with his administration of the estate except in respect to the property decreed to belong to the petitioner. The general ground of demurrer, contending that the court of equity was without jurisdiction to interfere with the administration in the court of ordinary “for the purpose of attacking the validity of such will,” is likewise without merit, since no attempt is being made to attack the validity of the will itself, but the petitioner is merely asserting a right to specific performance so as to have vested in him title to property which was by the will devised to another. The ground of general demurrer that the petitioner cannot maintain the present action, since he has not renounced his legacy under the will, is not well taken. The Code, § 37-502, provides: “When a testator has attempted to give property not his own, and has given a benefit to a person to whom that property belongs, the devisee or legatee shall elect either to take under or against the will.” In construing that section this court, in Lamar v. McLaren, 107 Ga. 591, 604 (34 S. E. 116), said: “To .raise a case of election a person must be entitled to one of two- benefits, to each of which he has legal title, but to enforce both would be unconscientious and inequitable to others having claims upon the same property or fund. He must have legal title to both benefits and have the right to enforce either at his election. . . Now, if Henry J. Lamar Jr. 
were required to elect between his legacy under the will and his mere claim, to an interest in the business of Henry J. Lamar & Sons, that is between his legacy and a lawsuit, and he should elect to take his claim, or the court should force him so to elect, and he should, upon a subsequent trial for the enforcement of his claimed partnership interest, fail, for any reason, to establish the same, then there would be. no' one to compensate, as in such event there would be no defeated or disappointed legatees, but, on the contrary, the other legatees would get the very property he claimed.” See also First National Bank &c. Co. v. Roberts, 187 Ga. 472 (4) (1 S. E. 2d, 12). The facts in the present case place the petitioner in precisely the same legal position in which Henry J. Lamar Jr. stood in the case just cited. The petitioner now has a lawsuit, a mere claim, which might be defeated when brought to trial; and, in that event, if he had been required to elect as a condition precedent to instituting this action and as a consequence had renounced his legacy, all other legatees under the will would thereupon stand unharmed and the estate would hold the property which under the will now belongs to the petitioner. It is very clear that the meaning and intent of the Code section above cited, as construed by this court, is designed for the purpose of making sure that a legatee shall not be permitted to hold onto his own legacy under a will and at the same time deprive another legatee of property given him under the will. It is not intended and should not be' ■ construed to mean that a legatee placed in the situation of the present petitioner must speculate by electing to take what he believes to be a good claim to property, and at that period of pure chance relinquish his unquestioned title to the property under the will. On the other hand, the obvious intent of the Code section is to prevent such legatee from taking both the property going to him under the will, and the property which he claims.. There is an additional consideration which must not be overlooked in the present case. Specific performance not being an absolute right and being a remedy which equity will deny if to grant it would be against good conscience, specific performance' of the alleged contract in the present case would not be decreed if to do so would allow this petitioner to acquire both that property and the property given to him under the will. We think, however, that the condition contemplated by the Code, where an election is mandatory, will exist in the present case only when by a judgment of the court the petitioner acquires legal title to-the property which he seeks; and by the very act of praying for and obtaining such a decree of title the petitioner will have thereby made an election to renounce his legacy under the will, and the requirements of the Code section will thus be satisfied. In view of this construction, the petitioner is not required to make an election at this time, and the election to take the title by specific performance of the contract when that is an accomplished fact will deny to him any right or title in the property given him under the will. Accordingly, the petition here is not subject to the ground of demurrer asserting that the petition is fatally defective in that it does not recite an election to take the property sued for and a renunciation of the property given him under the •will. 
Voluminous special demurrers were urged by the defendant, but upon careful examination they have been found to be without merit and not of such importance as to be set forth in detail. Without unduly encumbering the record, their nature may be understood by the following references and rulings. No misjoinder of parties defendant is shown. Under the facts alleged it was proper to sue the defendant in his individual capacity and also as representative of the estate of the deceased. Goodroe v. C. L. C. Thomas Warehouse, 185 Ga. 399 (1) (195 S. E. 199); Walters v. Suarez, 188 Ga. 190 (3) (3 S. E. 2d, 575). No misjoinder of causes of action is shown. The petitioner is seeking specific performance of an oral contract, but is not, as contended by the demurrant, attempting to set aside a will which has been probated in common form in the court of ordinary, though it is alleged that its execution was a fraud upon the petitioner. An order of court granting specific performance with reference to the property here involved would not prevent the defendant, as executor, from performing his duties under the will with reference to any other property of the estate. While there is a prayer that the defendant be enjoined from attempting to probate the will in solemn form, it is an idle prayer and may be treated as surplusage, since it is not shown that any attempt will be made to probate the will in solemn form. Many special grounds complain that designated allegations of the petition are vague and indefinite, but are subject to the criticism that they fail to point out wherein the defect exists. Other grounds urge that specified allegations are merely conclusions of law or fact, but are without merit. Various special demurrers complain that certain allegations are irrelevant and immaterial, but fail to point out the specific infirmity. Other special grounds of demurrer complain that stated allegations of the amendment of April 25, 1947, were respectively not germane, not a part of a cause of action, were conclusions, and insufficient to constitute any part of a cause of action and to support a charge of insolvency against the defendant, all of which grounds have been found to be without merit. The same is true as to similar complaints made to allegations of the amendment of October 28, 1948. It follows from what has been said that no error is shown in the judgment of the court in overruling the demurrers. Judgment affirmed. All the Justices concur, except Wyatt, J., who dissents.
The submandibular glands are one of the three main salivary glands and are located in the posterior portion of the submandibular triangle \[[@REF1]\]. This triangle is bordered by the mandibular body (superior), the digastric muscle's anterior belly (medial), and the digastric muscle's posterior body (inferior and lateral) \[[@REF1]\]. Due to protection from the mandibular body, primarily penetrating trauma or lacerations to the floor of the mouth or trauma underneath the mandible can damage the submandibular gland \[[@REF2]\]. These cases are rarely seen and only mentioned in case reports \[[@REF2]\]. On the other hand, pediatric injuries from non-powder firearms have averaged 13,486 annually from 1990 to 2016, with ball bearing (BB) guns accounting for 80.8% of the injuries \[[@REF3]\]. The study also found that 87.1% of the patients were boys, and the most common injuries were eye injuries, with corneal abrasion as the most common diagnosis \[[@REF3]\]. This case presents a 16-year-old girl with a BB gunshot wound to the submandibular gland. A 16-year-old girl with no significant past medical history presented to the emergency department with a gunshot wound on the right side of her neck. The patient was shooting a BB gun at a wooden target with a metal base when she heard a metal "cling" sound and felt a pain in the right side of her neck. She stated that initially there was blood loss and applied pressure. When the bandage was removed in the emergency department, there was no active bleeding. Physical examination findings included vital signs within normal ranges and significant right jaw pain with a 2-3 mm circular wound to the right side of her neck with swelling and tenderness. She had no loss of consciousness, difficulty breathing, or difficulty swallowing. She was not in any respiratory distress, and had no stridor or wheezing, with normal effort and breath sounds. A soft tissue neck ultrasound revealed a hyperechoic area within the right superficial submandibular region that was 1.2 cm deep to the skin surface. The CT scan of the neck with contrast revealed a 7 mm radiopaque foreign body lodged within the right submandibular gland (Figure [1](#FIG1){ref-type="fig"}), There was also evidence of a wound track extending to the right mandible, continuing to the mid-submandibular region. Imaging also revealed extensive soft tissue swelling inferior and lateral to the right-side submandibular gland. The submandibular gland was observed to be swollen, causing mild effacement of the right side airway without extravasation. The wound tract extended from the right submandibular triangle to the right submandibular gland. The chest x-ray was normal. ::: {#FIG1 .fig} ###### CT scan images from the coronal plane (A), localizer radiograph (B), axial plane (C), and sagittal plane (D) of the BB located in the submandibular gland BB, ball bearing Because the patient was hemodynamically stable, she was admitted to the hospital overnight for further observation. She remained afebrile and stable throughout her hospital stay, with pain controlled with acetaminophen and morphine. Otolaryngology (ENT) was consulted and their recommendation was outpatient surgery for removal of the submandibular gland foreign body after two weeks to allow the soft tissue swelling to subside. The patient was discharged on oral Augmentin due to the BB remaining within the submandibular gland. The decision to delay the treatment was based on the impressive tissue edema induced by the trauma. 
Dissection would have been more difficult due to having to dissect through swollen inflamed tissue. The patient was scheduled for foreign body removal by an ENT specialist 26 days after discharge. She underwent an uncomplicated excision of the submandibular gland and foreign body (Figure [2](#FIG2){ref-type="fig"}). Excising the entire gland was necessary since the BB was embedded in the central portion of the gland. There were several attempts to palpate the BB both through the floor of the mouth in hopes that the ENT could extract it through the mouth. Palpation directly on the gland did not locate the BB. The ENT specialist feared if he tried to dissect within the gland without removing it, chronic sialadenitis could occur. ::: {#FIG2 .fig} ###### Extracted submandibular gland and BB gun pellet BB, ball bearing The right submandibular gland was found to have surrounding inflammation and was excised in a routine manner. The BB was located in the center superior portion of the gland. The postoperative course was uncomplicated and the patient was discharged the next day. Submandibular gland (SMG) injury is typically caused by trauma that fractures the body of the mandible and causes collateral injury posteriorly to the SMG \[[@REF2]\]. Due to protection from the mandible superiorly, submandibular gland injury is rarely described, and only occasionally with case studies \[[@REF2],[@REF4]\]. Injury can also be caused by penetrative trauma from the floor of the mouth or base of the chin \[[@REF2]\]. Motor vehicle accidents (MVA) are the most common cause of SMG injury, which usually occurs in conjunction with significant collateral facial trauma \[[@REF4]\]. However, any penetrating trauma from the floor of the mouth can cause this, as evident in the case of this patient, in which the pellet ricochet was likely the mechanism of injury. Because current literature details only a few cases of submandibular gland trauma, their etiologies are listed below in Table [1](#TAB1){ref-type="table"}. ::: {#TAB1 .table-wrap} ###### Submandibular gland trauma etiology Iwai et al. (2018) \[[@REF5]\]  Fish bone-induced trauma of the submandibular gland Harbinson and Page (2010) \[[@REF4]\] Motor vehicle accident with seatbelt compression causing submandibular trauma Boyd et al. (2002) \[[@REF6]\] Motor vehicle accident with neck hitting the airbag Tonerini et al. (2002) \[[@REF7]\] Motor vehicle accident Singh and Shaha (1995) \[[@REF8]\]  Traumatic submandibular salivary gland fistula Roebker et al. (1991) \[[@REF9]\] Motor vehicle accident causing a fractured submandibular gland de Geus and Maisels (1976) \[[@REF10]\] High-voltage electrical burn of the anterior and left side of patient neck damaging his submandibular gland and requiring removal As the above table presents, it is rare for a gunshot wound, specifically from a BB gun, to cause submandibular gland injury. Although they can be sold in stores specifically for kids, the U.S. Consumer Product Safety Commission reports about four deaths a year due to BB guns or pellet rifles \[[@REF11],[@REF12]\]. These BB guns can have muzzle velocities comparable to pistols (750-1450 ft/s) reaching a maximum of 1200 ft/s \[[@REF13]\]. A retrospective study utilizing the National Electronic Injury Surveillance System data from 1990 through 2016 found that an estimated 364,133 children were treated in emergency departments across the United States for non-powder firearm related injuries \[[@REF3]\]. 
Another study found that the rates of pediatric eye injury caused by non-powder firearms have increased by over 500% since 2010 \[[@REF14]\]. Christoffel and Christoffel concluded that careful supervision of children and adolescents playing with non-powder firearms as well as additional barriers to access to these firearms is imperative to prevent these injuries \[[@REF15]\]. Evaluating SMG injury is important because of the anatomy surrounding the gland \[[@REF1]\]. Isolated pathology to SMG is seen in cases of neoplasms, autoimmune/inflammatory, and sialadenolithiasis etiologies \[[@REF2]\]. In cases of trauma, careful evaluation of collateral damage is done through primary and secondary trauma evaluation \[[@REF2]\]. Because MVA is the typical preceding event with facial trauma, consideration of more life-threatening pathology like hematoma compressing trachea should be evaluated in a systematic approach \[[@REF6]\]. Regardless of the preceding event trauma evaluation is important in reducing morbidity and mortality from unassessed pathology that requires time-sensitive interventions \[[@REF6]\]. Evaluating SMG injury from a trauma inciting event requires a systematic approach \[[@REF16]\]. Record of history of injury is always paramount in the diagnosis of any pathology \[[@REF16]\]. Imaging should be selected to best suit the data needs of the physician \[[@REF2]\]. The primary imaging modality of SMG pathology is best suited when there can be cross-sectional imaging of the complex anatomy surrounding the gland \[[@REF17]\]. The most common type of imaging of SMG is a CT scan \[[@REF17]\]. Although MRI can provide much more detail, especially in vascular and neoplastic cases, CT imaging is a much more cost-efficient way to manage traumatic injury in a timely fashion \[[@REF17]\]. For this patient, in particular, the combination of the situation requiring an acute trauma work-up along with the metallic foreign body made MRI challenging. With imaging of the SMG injury comes the identification of anatomical variance \[[@REF17]\]. The submandibular gland sits in a triangle bordered by the mandible and anterior and posterior digastric muscles, which drain to Wharton's duct \[[@REF2]\]. Variance in anatomy must be carefully considered, as understanding patterns of anatomy and relation of lingual nerve, hypoglossal nerve, and submandibular ducts is the most important factor in reducing nerve damage during surgery \[[@REF18]\]. Diagnosis and management of SMG injury include surgical management \[[@REF19]\]. The most common surgical excision is via the lateral transcervical approach \[[@REF19]\]. The biggest disadvantage in the transcervical approach is an injury to local nerves (hypoglossal, facial, and lingual) and scar healing, but transoral may be used in a stable patient with palpable portions of the SMG for resection \[[@REF19]\]. The decision for transcervical and transoral should be made by the surgeon on a case-by-case basis. In this case, the operating surgeon decided to complete a transcervical approach because of non-tactile visualization during the oral examination. Imaging diagnostics depend on if there is associated airway impediment, facial fracture, or nerve palsy, as well as the mechanism of trauma \[[@REF16]\]. CT is preferred over MRI because of availability, cost, and time to results \[[@REF16]\]. 
Because the body of the mandible offers protection, injury to the submandibular gland will most likely cause other pathologies identifiable on CT imaging, such as airway obstruction, vascular compromise, and fractures \[[@REF20]\]. Submandibular gland injuries are very rare due to their location underneath the mandibular body offering protection. Thus, they are only described in case reports most commonly due to a motor vehicle accident. In this case, we presented a 16-year-old girl with a submandibular gland injury due to a BB gun accident. While BB guns have been culturally viewed as toys and not dangerous, it is important to provide education and supervision for children operating these guns as they can inflict serious injuries. The authors have declared that no competing interests exist. Consent was obtained by all participants in this study
UNITED STATES of America, Plaintiff-Appellee, v. Amos J. MOSS, Defendant-Appellant. No. 16-13476 Non-Argument Calendar United States Court of Appeals, Eleventh Circuit. (February 3, 2017) Robert Craig Juman, Wifredo A. Ferrer, Laura Thomas Rivero, Emily M. Smachet-ti, U.S. Attorney’s Office, MIAMI, FL, Courtney L. Coker, Russell R. Killinger, U.S. Attorney’s Office, Fort Pierce, FL, for Plaintiff-Appellee David F. Pleasanton, David F. Pleasan-ton, PA, West Palm Bch, FL, for Defendant-Appellant Before HULL, WILSON, and ROSENBAUM, Circuit Judges. PER CURIAM: Amos J. Moss appeals his 180-month sentence of imprisonment after pleading guilty to one count of possessing a firearm as a convicted felon, in violation of 18 U.S.C. §§ 922(g)(1) and 924(e)(1). His sentence exceeded the normal 10-year maximum sentence under the statute, see id. § 924(a)(2), because the district court imposed an enhancement under the Armed Career Criminal Act (“ACCA”), id. § 924(e)(1). In his sole challenge on appeal, Moss contends that the district court erred.by determining that his prior Florida conviction for domestic battery by strangulation, under Florida Statute § 784.041(2)(a), qualified as a predicate “violent felony” for purposes of the ACCA enhancement. He asserts that the Florida statute does not require the level of force needed to qualify as a violent felony. We disagree and therefore affirm. Under the ACCA, a defendant convicted of being a felon in possession of a firearm who has three or more prior convictions for a “serious drug offense” or “violent felony” faces a mandatory sentence of no less than fifteen years’ imprisonment. 18 U.S.C. § 924(e)(1). We review de novo whether a prior conviction is a “violent felony” within the meaning of the ACCA. United States v. Howard, 742 F.3d 1334, 1341 (11th Cir. 2014). The ACCA defines a “violent felony” as any crime punishable by a term of imprisonment exceeding one year that (i) has as an element the use, attempted use, or threatened use of physical force against the person of another; or (ii) is burglary, arson, or extortion, involves use of explosives, or otherwise involves conduct that presents a serious potential risk of physical injury to another. 18 U.S.C. § 924(e)(2)(B). The first prong of this definition is sometimes referred to as the “elements clause,” while the second prong contains the “enumerated-crimes clause” and what is commonly called the “residual clause.” United States v. Owens, 672 F.3d 966, 968 (11th Cir. 2012). The Supreme Court recently struck down the ACCA’s residual clause as unconstitutionally vague. Samuel Johnson v. United States, — U.S.-, 135 S.Ct. 2551, 2556, 192 L.Ed.2d 569 (2015). In holding that the residual clause is void, however, the Court clarified that it did not call into question the application of the elements and enumerated-crimes clauses of the ACCA’s definition of a violent felony. Id. at 2563. This case concerns the elements clause, which is unaffected by Samuel Johnson. To determine whether a prior conviction qualifies as a violent felony, we typically apply what has been termed the “categorical approach, looking at the fact of conviction and the statutory definition of the prior offense.” United States v. Hill, 799 F.3d 1318, 1322 (11th Cir. 2015) (internal quotation marks omitted). 
Because we examine what the state conviction necessarily involved, not the facts underlying the case, we must determine whether the least of the acts criminalized in the relevant statute requires “the use, attempted use, or threatened use of physical force against the person of another.” 18 U.S.C. § 924(e)(2)(B)(i); Moncrieffe v. Holder, — U.S. -, 133 S.Ct. 1678, 1684, 185 L.Ed.2d 727 (2013). The inquiry into the minimum conduct criminalized by the state statute must remain within the bounds of plausibility. Moncrieffe, 133 S.Ct. at 1684-85. That is, we roust ask whether the state statute “plausibly covers any non-violent conduct.” United States v. McGuire, 706 F.3d 1333, 1337 (11th Cir. 2013); see Gonzales v. Duenas-Alvarez, 549 U.S. 183, 193, 127 S.Ct. 815, 166 L.Ed.2d 683 (2007) (requiring “a realistic probability, not a theoretical possibility, that the State would apply its statute to conduct that falls outside” the standard). “Only if the plausible applications of the statute of conviction all require the use or-threatened use of force can [Moss] be held guilty of a [violent felony].” McGuire, 706 F.3d at 1337. The Supreme Court has held that the phrase “physical force,” as used in the violent felony definition, means “violent force—that is, force capable of causing physical pain or injury to another person.” Curtis Johnson v. United States, 559 U.S. 133, 140, 130 S.Ct. 1265, 176 L.Ed.2d 1 (2010). In Curtis Johnson, the Supreme Court held that a conviction under Florida’s battery statute, Fla. Stat. § 784.03, is not categorically a violent felony under the ACCA’s elements clause because the offense may be committed by “actually or intentionally touching]” another person. 559 U.S. at 138, 145, 130 S.Ct. 1265. Mere intentional touching, the Court explained, does not require violent force. Id. at 141—43, 130 S.Ct. 1265. While the meaning of “physical force” is a question of federal law, we are bound by a state supreme court’s interpretation of state law, including its determination of the elements of a state offense. Hill, 799 F.3d at 1322. If the state supreme court is silent on an issue of law, we follow the decisions of the state’s intermediate appellate courts, unless there is some persuasive indication that the state’s highest court would decide the issue differently. Id. In Florida, a person commits the offense of domestic battery by strangulation, a third-degree felony, if the person knowingly and intentionally, against the will of another, impedes the normal breathing or circulation of the blood of a family or household member or of a person with whom he or she is in a dating relationship, so as to create a risk of or cause great bodily harm by applying pressure on the throat or neck of the other person or by blocking the nose or mouth of the other person. Fla. Stat. § 784.041(2)(a). Phrased differently, § 784.041(2)(a) requires proof that the defendant knowingly and intentionally “impede[d] the normal breathing or [blood] circulation” of a qualifying victim either by (a) “applying pressure” on the victim’s throat or neck or (b) “blocking” the victim’s nose or mouth. See In re Std. Jury Instructions in Crim. Cases—Report No. 2008-05, 994 So.2d 1038, 1042 (Fla. 2008). In engaging in such conduct, the defendant must “create a risk of or cause great bodily harm.” Id. Moss argues that the level of force required for either “applying pressure” on the throat or neck or “blocking” the mouth or nose is not sufficiently violent so as to qualify under the ACCA. 
Either of these actions, according to Moss, must simply “impede the normal breathing or circulartion of the victim,” not completely stop the victim’s circulation or breathing. As a result, Moss asserts, the statute could be violated by a “fleeting touch or slight pressure” that momentarily slows the breathing or circulation of the victim. “Based upon a literal reading of the Statute,” Moss contends, “an individual could violate the Statute by plugging someone’s nose or pressing on someone’s neck, even if just for a brief moment.” Moss’s construction of the statute defies common sense and plausibility. While the terms “applying pressure” and “blocking” do not in and of themselves appear to require violent force, the statute requires that the defendant “knowingly and intentionally ... impede[ ] the normal breathing or circulation of the blood” through those actions. Fla. Stat. § 784.041(2)(a). We can think of no plausible scenario, and Moss offers none, in which a nonviolent touch to the victim’s neck or nose could cause a risk of great bodily harm by impeding the victim’s normal breathing or circulation. See McGuire, 706 F.3d at 1337. As the government persuasively argues, “placing ‘a hand over the mouth or nose areas using slight pressure’ might impede breathing momentarily, but it would not ‘create a risk of great bodily harm’ unless sufficient force was used to keep the hand over the victim’s nose or mouth sufficiently long enough to deprive the victim of needed oxygen.” Likewise, the brief application of slight pressure to the victim’s neck or throat might impede normal circulation momentarily, but it defies common sense and ordinary human experience to suggest that such a brief impediment to normal circulation could even leave a bruise, let alone create a risk of great bodily harm. In other words, there is no plausible application of the Florida domestic-battery-by-strangulation statute which covers mere touching. Rather, the force required to create a risk of great bodily harm in the ways contemplated by § 784.041(2)(a)— knowingly and intentionally impeding normal breathing or circulation by applying pressure to the victim’s throat or neck or blocking the victim’s. nose or mouth—is necessarily force “capable of causing physical pain or injury to another person.” See Curtis Johnson, 559 U.S. at 140, 130 S.Ct. 1265 (emphasis added). Because Florida’s domestic-battery-by-strangulation statute, Fla. Stat. § 784.041(2)(a), requires the knowing and intentional use of force capable of causing physical pain or injury to another, Moss’s prior conviction under the statute qualifies as a predicate violent. felony under the ACCA. Accordingly, we affirm his ACCA-enhanced sentence. AFFIRMED. . When a statute is "divisible”—meaning that it sets forth alternative elements of the same crime—we may apply what has been termed the "modified categorical approach,” which involves looking at a limited class of documents to determine under which alternative version of the statutory elements a defendant was convicted. See Descamps v. United States, — U.S. -, 133 S.Ct. 2276, 2283-85, 186 L.Ed.2d 438 (2013). . The statute exempts from its scope "any act of medical diagnosis, treatment, or prescription which is authorized under the laws of this state.” Fla. Stat. § 784.041(2)(a). . See Fla. Stat. § 784.041(2)(b) (defining the terms "family or household member’’ and "dating relationship”). . 
Whether mere intentional touching that nevertheless causes great bodily harm, as in Florida’s felony battery statute, see Fla. Stat. § 784.041(1), constitutes violent force under Curtis Johnson is still an open question in this Circuit and will be heard by this Court sitting en banc. See United States v. Vail-Bailon, 838 F.3d 1091 (11th Cir. 2016), reh’g en banc granted, opinion vacated (11th Cir. Nov. 21, 2016). Moss's case is unlikely to be affected by the resolution of Vail-Bailon because, as we have established, domestic battery by strangulation cannot be committed by mere touching.
Phryganophilus

Phryganophilus is a genus of beetles belonging to the family Melandryidae. The species of this genus are found in Europe and North America.

Species:
* Phryganophilus angustatus Pic, 1953
* Phryganophilus auritus Motschulsky, 1845
Leandro Pires

Leandro Garcia Azevedo Pires (born 16 June 1979) is a Portuguese football coach and a former player. He is an assistant coach with Santa Clara.

Career

Born in Caminha, Leandro started his youth career at his hometown club, AC Caminha, in 1994. A year later, he moved to rivals Âncora-Praia FC. He debuted as a professional in the fourth tier in 1997 and then passed through five other clubs before arriving at C.D. Aves in 2005, at age 26. On 21 August 2005, Leandro made his professional debut with Desportivo de Aves in a 2004–05 Liga de Honra match against Leixões. In the following ten seasons, Leandro made over 200 league appearances for Desportivo de Aves, ultimately being named team captain. He left Aves after ten seasons and subsequently joined Vianense, assuming the team's managerial role on 23 December 2015. On 21 October 2019 he was appointed caretaker manager of Primeira Liga club Aves following the dismissal of Augusto Inácio. In the summer of 2023, Pires was hired as an assistant coach by Santa Clara.
Git workflow that allows a new hire to work on a feature without polluting git history

We want a new hire to work on some features of our current project; however, since he is a new hire, we would like to do code review before he can commit into the master branch. Also, we don't want his changes which are rejected by our code review to get into the history. What would be the ideal git workflow for this? (I think this is quite common?)

Let him clone and submit pull requests. You can see his stuff before you accept his stuff.

I will just point out, these practices have merit for all members of your team, not just new hires.

This is what I do to ensure a clean Git history. There may be a faster way to achieve the same thing with fewer commands. In summary: Create a feature branch; Work on the branch; Review the branch; Squash the feature into a single commit (note that this is not a great plan unless the scope of the feature is small); Rebase the feature onto the main branch (to ensure a linear history); Perform a fast-forward merge. Different folks will argue that squashing commits is bad; this works for me. Do whatever works for you.

Start a new branch for the feature:

git checkout -b some_user/some_feature

Write some code. Add changes to the branch:

git add .
git commit -m "I did some stuff."

Write some code. Add changes to the branch:

git add .
git commit -m "I did some more stuff."

Review the changes. Update master:

git checkout master
git pull origin master

Switch back to the feature branch:

git checkout some_user/some_feature

Squash the feature's commits into a single commit:

git rebase -i HEAD~2

Rebase the commit onto master to ensure that the merge will be a fast-forward:

git rebase master

Deal with any merge conflicts. Merge the feature into master:

git checkout master
git merge some_user/some_feature

Push to the server:

git push origin master

Get rid of the feature branch:

git branch -d some_user/some_feature

It would be helpful if you explained the advantages and disadvantages of squashing commits, rather than just saying that people disagree.

We want a new hire to work on some features of our current project, however, since he is a new hire, we would like to do code review before he can commit into master branch.

Create a separate feature branch (or several, if this person will be working on multiple unrelated features) with the git branch command. Periodically merge from master into the feature branch(es) (that is, run git merge master while standing on the feature branch). Once you've reviewed the code, merge the feature branch(es) into master (run git merge feature while standing on master).

Some people will tell you to use rebases instead of merges in some or all cases. A rebase turns your history into a straight line, so that this:

A ----> B
 \
  \
   ---> C

...becomes this (C is rebased onto B, creating C'):

A ----> B ----> C'

...rather than this (C is merged with B, creating D):

A ----> B -----> D
 \             /
  \           /
   ---> C ---

Rebased history is easier to read and (marginally) easier to bisect, but may provide a less accurate picture of how your code was developed, which might make it harder to reason about what actually happened. Rebasing is also more complicated if the commit you are rebasing (C in our example) has already been pushed or pulled into another repository, while merging does not have this issue. Ultimately, you will need to decide for yourself which is the more appropriate course of action for your use case.
Also, we don't want his changes which is rejected by our code review and get into the history.

Delete the branch (with git branch -D feature) without merging it into master, and wait for Git to garbage collect the orphaned commits automatically. Note the changes are gone for good after that, so be very sure you no longer want to keep them before running this command. If you accidentally delete the wrong branch and notice the problem immediately (or soon after), you can recover it from the reflog.

Delete the branch without merging it into master, and wait for Git to garbage collect the orphaned commits automatically.

How? Can you explain more?

@Yoga: Added sample command.

I'm downvoting because your suggestions will guarantee that the commit history gets polluted with minor commits. The correct solution here is to rebase when pulling from master into feature, and to rebase/squash feature into a single commit prior to merging feature into master.

@RubberDuck: My answer already explains the advantages and drawbacks of that approach. I have no intention of altering it to pick a "right" side.

@Kevin I didn't DV because I disagree with you. I might, I might not. I downvoted because your answer runs in direct opposition to the goals OP is trying to achieve. It's not personal, so please don't take it personally.

@RubberDuck: I disagree. OP wants to drop history for rejected code, not all code.
Lzhan55/ase plugin2

ASE is implemented as an OPAE plugin.
Added an ASE config file to run ASE only.
Samples can be linked to libopae-c.so to run ASE. Previously it was linked to libopae-c-ase.so.

How does the plugin mgr decide whether to choose ASE or xfpga?

@michael-adler You can use opae.cfg to enable ASE if you only want ASE to run. When both xfpga and ASE are enabled in the opae.cfg file, both plugin libraries will be loaded. And the application can do fpgaEnumerate() with ASE's GUID to run ASE. Or, the application can run both xfpga and ASE concurrently.

What is your strategy for keeping existing scripts/Makefiles working when they specify -lopae-c-ase? Have you thought about what to do with the "with_ase" script? Perhaps it could do something special and force the plugin manager to pick ASE. I think we should avoid two things (perhaps you've done this already):

Don't force developers to edit opae.cfg every time they want to switch between FPGA and ASE.
Don't force developers to modify their software in order to use ASE.

Emphasis on the word "force". If someone wants to use fpgaEnumerate() to select between simulation and hardware using your new scheme, that's fine. Just don't make it a requirement.

Please do address Michael's comments before merging.

@michael-adler We can create a symbolic link of libopae-c-ase.so to libopae-c.so. And we can check whether LD_PRELOAD="libopae-c-ase.so" was set to let pluginmgr.c load the ASE plugin correspondingly. Let's set up a meeting to discuss its implementation details.
Application of Raman Spectroscopy for Dental Enamel Surface Characterization *Cecilia Carlota Barrera-Ortega, America Rosalba Vazquez Olmos, Roberto Isaac Sato Berrú and Pineda Dominguez Karla Itzel* ## **Abstract** Dental enamel is the most complex and highly mineralized human body tissue, containing more than 95% of carbonated hydroxyapatite and less than 1% of organic matter. Current diagnostic methods for enamel caries detection are unable to detect incipient caries lesions. Many papers determine the re-mineralizing effect using many fluorinated compounds and different demineralizing solutions to test physical characterizations such as microhardness, roughness, wettability, among others, but there is not much information about the use of Raman Spectroscopy. Raman Spectroscopy is an efficient technique of chemical characterization to identify functional groups (phosphate-hydroxyl groups) found in the hydroxyapatite formula, which helps identify the level of mineralization on dental enamel surface. Raman spectroscopy is applicable to any state of aggregation of the material, indicated for biological samples. Given the minimum bandwidth of a laser source, as with all spectroscopic techniques that use a laser source, a small sample is sufficient, which makes it an important technique in the analysis of reactive products with very low yield. Raman spectroscopy can be used to obtain the main functional groups in order to determine the remineralization of dental enamel; these results are highly valuable as they can help us make the best decisions on dental treatments. **Keywords:** dental enamel, remineralization, demineralization, Raman spectroscopy, hydroxyapatite ## **1. Introduction** In mineralized biological systems, it has been found in the literature that there are different ways to administer fluoride, like the application of varnishes, tablets, and different dental pastes with different concentrations of fluoride that participate in an important way in the mineralization mechanisms of the fundamental unit of enamel (hydroxyapatite prisms), modifying the chemical composition, and increasing the resistance to dissolution in an acidic environment. To inhibit the formation of demineralized lesions and the progression to carious lesions, fluorinated compounds are currently applied to the external surface of the enamel. However, the lack of information on the different vehicles or concentrations of fluorinated compounds, as well as the extent of the effect on enamel, leads to the use of these compounds being exaggerated and at times ineffective in preventive dentistry. Dental enamel is the outer covering of dental crowns, also known as adamantine tissue, and it is currently defined as a nanocomposite bioceramic of epithelial origin, which protects the tooth from chemical and physical aggressions, which has been considered the most mineralized and hard tissue of the organism because it is structurally constituted by millions of highly mineralized prisms that run through its entire thickness, from the amelo-dentin junction to the external or free surface in contact with the oral environment [1]. Its specific function is to form a resistant cover for the teeth that will make them suitable for chewing. In charge of covering and protecting the dentin-pulp complex from chemical and physical aggressions, it lacks vascularization and innervation, which prevents its own remodeling or repair [2]. 
Embryologically, it is widely known that dental enamel is of ectodermal origin, and the formation of this dental structure occurs by cellular events collectively called amelogenesis and biochemical events called biomineralization. The chemical composition of dental enamel is made up of 95% inorganic matter, 4% organic matter, and 1% water [3].

## **2. Raman and infrared spectroscopy**

The symmetric vibrational properties of molecules are used in systematic ways to interpret the infrared spectrum, as they can be used to predict the allowed vibrational transitions using essentially only the character table of the point group to which the molecule belongs. Raman spectral analysis is often compared with the well-known infrared absorption (IR) spectroscopy. While the two techniques are similar, they work in distinct ways and measure different things. The IR technique measures light absorption by specific molecules, while the Raman technique measures Raman emission from molecules under monochromatic laser irradiation. The difference between the incident light and the Raman emission corresponds to the vibrational frequencies of these molecules. The two techniques by themselves are great for obtaining important information from samples, but the two can be used in combination to measure vibrational bands unique to each technique; that is why the IR and Raman techniques are often regarded as complementary. As mentioned earlier, IR spectroscopy is an absorption technique and the measurements are determined by changes in vibrational frequency, whereas Raman spectroscopy uses a scattering method to obtain data from changes in the polarizability tensor. These differences affect both the method of obtaining data from samples and the parameters that are necessary for calibration curves [4–6]. Raman spectroscopy can be applied to any state of aggregation: solids, liquids, or gases. In liquid solutions, this technique presents advantages over infrared spectroscopy, as only wavelengths of the visible region of the spectrum are involved, so only glass cells and optics are required. Also, water produces very weak Raman signals, which do not clutter the spectrum. These advantages make Raman spectroscopy especially suitable for biological samples. Given the minimal width of the laser beam, a small quantity of the sample is sufficient, which makes this an important technique for analyzing reaction products with a low yield. Other advantages derive from the fact that, as visible radiation is employed, overheating of the samples is notably reduced. The most notable difficulty of Raman spectroscopy is the fluorescence that the enamel emits after the application of different fluoride components such as gels, varnishes, or toothpastes.

### **2.1 Other techniques used for characterizing dental enamel**

The mineral content of dental enamel confers on it mechanical, physical, chemical, and biological properties. As it is the most exposed tissue and therefore the most susceptible to demineralization by acidic agents, different techniques have been used over time to better characterize dental enamel. The first observations were made using scanning electron microscopy (SEM) and semiquantitative elemental percentage analysis, while atomic force microscopy and contact and, more recently, optical profilometers have been used to determine the roughness of dental enamel.
To establish the surface energy of the enamel, wettability is measured by calculating the contact angle, and this property gives us information on how hydrophobic or hydrophilic the dental enamel is. Other techniques used are the nano- and micro-hardness tests, which have been used as physical characterization to quantitatively assess demineralization or remineralization of the dental enamel, and finally, Raman spectroscopy is the gold standard to determine the presence or absence of the phosphate-hydroxyl functional groups in dental enamel.

### *2.1.1 Hardness*

Hardness is a mechanical property of any material, corresponding to how difficult it is to scratch or mark its surface with a penetrating point. Hardness is measured using a hardness tester to perform an indentation test. Depending on the type of point used and the range of loads applied, different scales exist, adequate for different ranges of hardness [7]. The Vickers hardness test (HVN [hardness Vickers number]) consists of marking the test material with a diamond indenter that has the form of a pyramid with a square base and an angle of 136° between opposing faces, under a load of 1 to 100 kgf. Microhardness tests are widely used and have an important application in dentistry. A microhardness test can evaluate the level of mineralization of a dental substrate. A specific force applied over a specific time and distance provides important data. It can assess the capacity of different treatments to remineralize the enamel and the dentine, as happens in unbalanced situations of demineralization and remineralization [8].

Roughness is a property of the surface of a material. Surface roughness is the set of irregularities of the real surface, conventionally defined on a section from which form deviations and waviness have been eliminated. The appearance of the surface of a piece depends primarily on the material of which the piece was made and its forming process. To measure this property today, a profilometer is used, an optical device that has the advantage of non-contact exploration, thereby avoiding deformation and possible damage to soft surfaces. Profilometers can also explore surfaces that are not accessible to mechanical devices, measuring through a transparent layer and measuring the roughness of the texture of a surface in contact with another [9, 10]. In dentistry, the average roughness (Ra) has been the most used parameter and is defined as the arithmetic mean of all profile roughness deviations from the center line.

### *2.1.2 Wettability*

Wettability is an important property with many upcoming applications in various fields. It indicates the ability of a liquid to wet the surface of a solid, suggesting hydrophobic or hydrophilic tendencies. In dentistry, these tendencies affect initial water absorption and the adhesion of oral bacteria to tooth surfaces. To determine wettability, it is necessary to measure the contact angle, which depends on the surface energy of the material and the surface tension of the liquid, and is formed between the surface of the material and the line tangent to the curved surface of the liquid [11]. If the contact angle formed is lower than 90°, the liquid partially wets the surface of the bare solid, meaning that the surface has good wettability properties and is therefore hydrophilic. If the contact angle formed is higher than 90°, the surface has poor wettability properties and is therefore hydrophobic. This is a simple method to gauge the wettability of a sample as well as its hydrophobic/hydrophilic tendencies [12]. The measurement of the contact angle to determine the wettability of a sample is carried out with a goniometer, an instrument that allows the angle to be read with precise angular positioning.
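As a minimal numerical sketch of the quantities described in this subsection — the Vickers hardness number, the average roughness Ra, and the contact-angle classification — the following Python fragment applies the standard relations HV = 1.8544·F/d² (F in kgf, d the mean indentation diagonal in mm) and Ra as the arithmetic mean of the absolute profile deviations from the center line. The numerical inputs are hypothetical illustration values, not measurements from this chapter.

```python
# Illustrative calculations for the Vickers hardness number (HV), the average
# roughness (Ra), and the contact-angle classification. All inputs are hypothetical.

def vickers_hardness(load_kgf: float, d1_mm: float, d2_mm: float) -> float:
    """HV = 1.8544 * F / d^2, with F in kgf and d the mean indentation diagonal in mm."""
    d_mean = (d1_mm + d2_mm) / 2.0
    return 1.8544 * load_kgf / d_mean ** 2

def average_roughness(profile_um: list[float]) -> float:
    """Ra: arithmetic mean of the absolute profile deviations from the center line."""
    center = sum(profile_um) / len(profile_um)
    return sum(abs(z - center) for z in profile_um) / len(profile_um)

def classify_wettability(contact_angle_deg: float) -> str:
    """Below 90 degrees the surface is wetted (hydrophilic); above, it is hydrophobic."""
    return "hydrophilic" if contact_angle_deg < 90.0 else "hydrophobic"

if __name__ == "__main__":
    # Hypothetical microhardness test: 0.3 kgf load, indentation diagonals of ~0.040 mm.
    print(f"HV = {vickers_hardness(0.3, 0.040, 0.042):.0f} kgf/mm^2")
    # Hypothetical profilometer trace (heights in micrometres).
    print(f"Ra = {average_roughness([0.12, 0.08, 0.15, 0.10, 0.09, 0.14]):.3f} um")
    # Hypothetical sessile-drop reading.
    print(f"Contact angle of 65 degrees -> {classify_wettability(65.0)}")
```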
### *2.1.3 Determination of the quantity of the element fluorine*

Of all chemical elements, fluorine is the most electronegative; therefore, it is never found on Earth in elemental form. Chemically combined in the form of fluorides, fluorine occupies the 17th place in order of frequency of appearance of the elements and represents around 0.06–0.09% of the Earth's crust [10]. Fluoride has scientifically proven efficacy in fighting and preventing dental caries, and is widely used in most parts of the world, through its addition to public water supplies, salt, gels, topical mouthwash solutions, fluoride varnishes, toothpastes, and restorative materials [13]. During the World Health Assembly in 2007, a resolution was approved stating that universal access to fluoride for caries prevention should be a basic human health right. There are three basic methods of administering fluoride for caries prevention:

1. Those based on communities (fluoridated water, salt, and milk)

Four mechanisms of action of fluoride have been mentioned. Fluoride remains the gold standard for stopping caries lesions, with multiple systematic reviews confirming the role of fluoride products in preventing dental caries. However, F<sup>−</sup> alone does not provide a complete solution in the remineralization process, as the formation of fluoride deposits depends on the available calcium and phosphate ions found in saliva. Therefore, to increase the potential for fluoride prevention, these necessary ions have been added to formulations to increase their retention in the oral environment [15].

## **3. Physical properties of tooth enamel**

### **3.1 Hardness**

The hardness of the adamantine tissue refers to the resistance of the dental enamel surface to wear, scratching, or deformation caused by the application of external pressure. It decreases from the free surface to the dentin-enamel junction, which is directly related to the degree of mineralization [1].

### **3.2 Elasticity**

The adamantine tissue has little elasticity due to the minimal percentage of water and organic matter in its composition. It is a fragile tissue with a risk of macro- and microfractures. The elasticity is greater in the area of the neck and sheath of the prisms [3].

### **3.3 Color and transparency**

The color of dental enamel depends directly on the underlying tissues, mainly dentin, presenting a yellowish color in areas where the thickness of the enamel is less and grayish white in areas where it is thicker. It is characterized by being translucent, which is proportional to the degree of enamel mineralization; the higher the mineralization, the higher the translucency [4].

### **3.4 Permeability**

Permeability in dental enamel is extremely low. Enamel can act as a semi-permeable membrane, allowing the diffusion of water and some ions present in the oral environment [3].

### **3.5 Radiopacity**

Tooth enamel is considered one of the most radiopaque elements of the human body due to its high mineralization. It appears white on dental X-rays [4].
## **4. Chemical composition**

#### **4.1 Inorganic matrix**

It is made up of calcium mineral salts, basically phosphate and carbonate, giving rise to a crystallization process that transforms the mineral mass into hydroxyapatite crystals. Mineral salt crystals are more voluminous in dental enamel, with a length of 100–1000 nm, a width of 30–70 nm, and a height of 10–40 nm [3]. They present a hexagonal morphology when they have been sectioned perpendicularly and a rectangular morphology when they are sectioned parallel to the longitudinal axis of the crystal [3]. Within the crystal are the basic units of ionic association of mineral salts called cells or unit cells, which, associated together to build up the crystal, have a defined chemical and crystallographic composition, with calcium ions at their vertices and hydroxyl (OH<sup>−</sup>) groups at the center [3].

#### **4.2 Organic matrix**

The main component of the organic matrix of the adamantine substance is of a protein nature, and these proteins are present in the different stages of formation and maturation of the adamantine substance. There are three main proteins in the developing enamel: amelogenin, ameloblastin, and enamelin, which have therefore been called enamel proteins. Amelogenin is by far the most abundant protein component in the developing enamel layer, contributing more than 90% of its total volume and progressively decreasing as enamel maturity increases; these so-called immature enamel proteins are located between the crystals of mineral salts [5]. The ameloblastins represent 5% of the organic component and are located in the most superficial layers of the enamel and the periphery of the crystals, while the enamelins represent 2–3%, located in the periphery of the crystals. Other proteins that play an important role are tuftelin, which is located in the dentin-enamel junction zone at the beginning of the enamel formation process and represents 1–2% of the organic matrix, and, finally, among the most representative proteins found, parvalbumin, whose function is associated with the transport of calcium from the intracellular to the extracellular medium [5]. Both the timing of the stage and the degree of protein removal affect the composition of the organic matrix. Therefore, protein-mineral interactions change during enamel formation and regulate the structural organization, phase, and mineral composition, as well as crystal growth [6].

#### **4.3 Water**

It is located on the surface of the adamantine crystal, constituting the so-called adsorbed water layer or hydration layer, whose percentage is small and progressively decreases with age [5]. The unique physicochemical properties of enamel are due to its high hydroxyapatite content, the parallel arrangement of individual elongated apatite crystals in enamel prisms, and the interwoven alignment of perpendicular prisms. Together, these characteristics result in a biomaterial of great hardness and physical resilience [6].

## **5. Influence of the environment on the structure of enamel**

Dental caries is an infectious disease that causes local destruction of the hard tissues of the tooth and is associated with diet, the accumulation of microorganisms, and the conditions of the saliva.
The development of a clinically visible caries lesion is a consequence of the interaction of various factors in the oral cavity and dental tissues [8]. Carbohydrate fermentation by dental plaque bacteria leads to the formation of various inorganic acids, causing a decrease in pH. When the pH of the oral cavity reaches a critical value of 5.5, an undersaturation of Ca<sup>2+</sup> and PO<sub>4</sub><sup>3−</sup> ions occurs. The tendency is, therefore, the loss of ions from the teeth to the environment, which is called demineralization. This can lead to carious lesions. When the pH becomes higher than 5.5 through the buffering action of saliva, a supersaturation of Ca<sup>2+</sup> and PO<sub>4</sub><sup>3−</sup> occurs in the medium. In this situation, the tendency is to incorporate the ions into the tooth, and this phenomenon is known as remineralization [9]. There is a constant ion exchange between dental tissues and the environment, always seeking balance. Studies have shown that the use of fluorides causes a decrease in caries. A series of investigations have shown the importance of fluorides in demineralization and remineralization, and in controlling the appearance of caries, when fluoride is constantly present in the oral environment [10]. It is worth mentioning that saliva and its mucous components keep teeth moist and coated to help preserve them in the presence of calcium and phosphorus ions, thus protecting enamel from dissolution by acids. Saliva has organic and inorganic constituents. One liter of human saliva consists of 994 g of water, 1 g of suspended solids, and 5 g of dissolved substances, of which 2 g are organic matter and 3 g are inorganic matter. Sodium and potassium ions are the most abundant inorganic constituents in saliva. Sodium-ion and chloride-ion concentrations increase with salivary flow rate. Among the inorganic constituents of saliva are the sodium, potassium, and chloride ions already mentioned, among others [11]. Among the organic components of saliva are glucose (200 mg/L), cholesterol (80 mg/L), creatine (10 mg/L), urea (200 mg/L), uric acid (15 mg/L), and other components of the parotid gland [11]. Remineralization is the natural process of carious lesion repair. Remineralization has been known about for many years. However, it is only recently that the importance of the remineralization process has been accepted as a valid therapeutic option for caries treatment. Topically administering fluoride in various forms and vehicles has been shown to markedly reduce the prevalence and incidence of dental caries. The remineralization process can be hindered if the bacterial load is high or if the salivary components are low, which is why there is a need to improve this process and apply this knowledge in a clinical setting [12].

## **6. Demineralization and remineralization**

Demineralization and remineralization are processes of dental caries that are often described as a single physicochemical event. Although this allows for an easier understanding and description of the mechanism of this disease, dental caries is much more complex. Dental caries is a multifactorial disease that involves factors such as pathogenic bacteria, salivary proteins, enzymes, ions, and the fermentation of food sources (carbohydrates). This leads to the formation of biofilm, which can compromise enamel integrity, and the caries process occurs along the interface between the dental biofilm and the enamel surface [13].
## **7. μ-Raman**

μ-Raman spectroscopy is a technique used to identify and characterize, in a non-destructive and non-contact way, the chemical composition of organic and inorganic compounds by identifying functional groups, without destroying the samples and without any special preparation. This technique can be used in any state of aggregation of matter [14]. With the help of the optical microscope associated with the equipment, it is possible to identify isolated particles on the order of a micron. There are reports of the application of this technique to identify the crystallinity of human dental enamel [15]. The spectrum is obtained using a laser that generates the beam of light incident on the sample, which is focused by a conventional optical objective onto the area of interest. The same objective collects the Rayleigh (elastically) and Raman (inelastically) scattered photons. The radiation is dispersed by a monochromator, analyzed using a photomultiplier, and recorded with a recording module. When a polyatomic structure is illuminated by a laser beam (monochromatic radiation of the visible spectrum), several phenomena are observed: reflection of light, absorption, transmission, and scattering of photons. In the case of inelastic scattering, the scattered photons exchange energy with the quantized energy levels of the polyatomic structure. The mechanism of this phenomenon is as follows: under the action of the incident photons, whose energy is higher than that of the vibrational states of the polyatomic structure, the irradiated material temporarily reaches an unstable level and then returns to one of the allowed states, emitting a photon whose energy differs from that of the incident photons (lower for Stokes and higher for anti-Stokes scattering) [14]; the scattered wavelength is the same as the incident wavelength in Rayleigh (elastic) scattering, but not in Raman (inelastic) scattering, as shown in **Figure 1a** and **b**.

#### **Figure 1.**

*(a) Representative diagram of the biphotonic effect on a molecule and the two resulting dispersions (elastic and inelastic), and (b) diagram of the energy levels. Direct source.*
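The energy exchange depicted in Figure 1 is conventionally reported as a Raman shift, that is, the difference between the wavenumber of the excitation line and that of the scattered light. The short Python sketch below applies the standard conversion Δν̃ (cm⁻¹) = (1/λ_exc − 1/λ_scat) × 10⁷, with the wavelengths in nanometres; the 785 nm excitation wavelength and the band position used in the example are hypothetical values chosen only to illustrate the arithmetic.

```python
# Conversion between the scattered wavelength and the Raman shift (Stokes side).
# Wavelengths are in nanometres, shifts in cm^-1. The example values are hypothetical.

def raman_shift_cm1(lambda_exc_nm: float, lambda_scat_nm: float) -> float:
    """Raman shift = (1/lambda_exc - 1/lambda_scat) * 1e7, in cm^-1."""
    return (1.0 / lambda_exc_nm - 1.0 / lambda_scat_nm) * 1.0e7

def scattered_wavelength_nm(lambda_exc_nm: float, shift_cm1: float) -> float:
    """Wavelength at which a Stokes band with the given shift is observed."""
    return 1.0 / (1.0 / lambda_exc_nm - shift_cm1 * 1.0e-7)

if __name__ == "__main__":
    exc = 785.0  # hypothetical excitation wavelength in nm
    # A band near 960 cm^-1 (the phosphate region discussed later in this chapter)
    # would be collected at roughly this wavelength:
    lam = scattered_wavelength_nm(exc, 960.0)
    print(f"960 cm^-1 Stokes band with {exc:.0f} nm excitation -> {lam:.1f} nm")
    print(f"back-converted shift: {raman_shift_cm1(exc, lam):.1f} cm^-1")
```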
Applications are not limited to soft tissues: extensive research has also been carried out to characterize and diagnose hard-tissue pathologies in normal and diseased bone and tooth tissues.

## **8. Applications of Raman spectroscopy**

In the beginning, the major drawbacks of Raman spectroscopy were the fluorescence from biological tissues and the lack of sensitive instruments. Recent advances have made the technique popular due to its advantages over IR spectroscopy, such as minimal water interference and minimal sample preparation. IR spectroscopy requires the sectioning of samples to a specific size, which can lead to spectral changes in the chemical characterization; this is not necessary in Raman spectroscopy, where the samples can be tested without modifying their state. Modifications of Raman spectroscopic imaging have been made to characterize specific areas of interest in the medical and dental fields [3], all aimed at determining specific peaks in the spectrum, which are used as fingerprints to distinguish diseased tissue from healthy tissue.

Raman spectroscopy can also be applied to the study of milk composition as an easy, non-destructive, and fast method to determine the quality of the nutritional components found in dairy products, such as proteins, fats, fatty acids (FAs), and lactose, making it easy to provide consumers with reliable nutritional information [17].

Raman spectroscopy is a vibrational spectroscopy with a number of useful properties (nondestructive, non-contact, highly molecule-specific, and robust) that make it particularly suited for process analytical technology (PAT) applications in which molecular information (composition and variance) is required. There are important applications of Raman analysis throughout biopharmaceutical manufacturing, such as the characterization of raw materials, cell culture media preparation, real-time bioprocess monitoring, and the analysis of the macromolecular product, from raw materials and cell culture media through the bioprocess to the protein product and the formulated medicine [18].

Raman analysis has proven to be a useful tool for the examination of synthetic and biological materials and, in the medical field, for the study of hard tissue, DNA, and white blood cells. The technique has become widespread thanks to the development of lasers and CCD detectors, which make spectrum acquisition even faster. More recently, micro-Raman spectroscopy (MRS) has also been used *in vivo* to study pre-malignant lesions in the breast and atherosclerotic plaques [19, 20]. There are physical characterizations of dental enamel, such as microhardness, roughness, wettability, zeta potential, and compressive and flexural strength, which provide important data; Raman spectroscopy complements them by chemically characterizing the enamel through the identification of its functional groups.

## **9. Raman on dental enamel**

In 1971, the Raman technique began to be used to identify the functional groups in mineral compounds, and it was not until 1993 that it began to be used in dental research, with reports on the fluorescence problems of biological materials. Tsuda [6] mentions that CaF2 can be identified at a shift of 322 cm−1 in dental enamel that has suffered a loss of mineralization and to which different concentrations of fluorides are applied to reduce mineral loss; however, in that investigation the spectra were acquired over the interval 400–4000 cm−1, which was a limitation for the detection of CaF2.
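As a purely illustrative sketch of the "fingerprint" idea described above, the following Python snippet compares the intensity found in the phosphate band window (1100–900 cm−1) and in the hydroxyl band window (3510–3650 cm−1) of a measured spectrum. The file name, the two-column file format, and the crude baseline handling are assumptions made for this example; they do not reproduce the processing used in the cited studies.

```python
import numpy as np

# Band windows (cm^-1) taken from the ranges quoted in the text.
PHOSPHATE_WINDOW = (900.0, 1100.0)
HYDROXYL_WINDOW = (3510.0, 3650.0)

def band_intensity(shift_cm1, counts, window):
    """Peak height inside a band window, minus a crude local baseline."""
    lo, hi = window
    band = counts[(shift_cm1 >= lo) & (shift_cm1 <= hi)]
    return float(band.max() - band.min())

if __name__ == "__main__":
    # Assumed file format: two columns, Raman shift (cm^-1) and intensity.
    shift, counts = np.loadtxt("enamel_spectrum.txt", unpack=True)  # hypothetical file
    po4 = band_intensity(shift, counts, PHOSPHATE_WINDOW)
    oh = band_intensity(shift, counts, HYDROXYL_WINDOW)
    # Diminished band intensities relative to a sound-enamel reference would point
    # toward mineral loss (see Sections 9.1 and 9.2); the comparison against such a
    # reference spectrum is deliberately left out of this sketch.
    print(f"PO4 band intensity: {po4:.1f}")
    print(f"OH  band intensity: {oh:.1f}")
```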
Tsuda [2], in another article, mentions the use of pure hydroxyapatite crystals and dental enamel to determine the orientation of the prisms longitudinally and transversely [21].

In dentistry, natural apatites are studied: calcium phosphates which, by the addition of an OH− group, form hydroxyapatite (HA); of an F−, fluorapatite (FA); and of a Cl−, chlorapatite (CA). The first two are found in bone, dentin, and enamel, and thanks to this characterization technique we can observe each of these groups expressed in the bands [18, 19]. When discussing the functional groups of dental enamel, the hydroxyl group should be located first.

#### **9.1 Hydroxyl group on Raman spectroscopy**

As the concentration increases, the intensity of the band due to the hydroxyl stretching vibration decreases, and additional, broader bands appear at lower frequencies, 3580–3200 cm−1. The appearance of these bands is due to intermolecular hydrogen bonding, which also increases as the concentration rises. The position of the O-H band depends on the strength of the hydrogen bond. In some samples, intramolecular hydrogen bonding may occur, and the resulting hydroxyl-group band, which appears at 3590–3400 cm−1, is sharp and unaffected by concentration changes [22]. For solids, liquids, and concentrated solutions, a broad band is normally observed at about 3300 cm−1. Overtone bands of carbonyl stretching vibrations also occur in the region 3600–3200 cm−1 but are, of course, of weak intensity. Bands due to N-H stretching vibrations may also cause confusion; however, these bands are normally sharper than those due to intermolecularly hydrogen-bonded O-H groups.

In dental enamel in particular, the hydroxyl group appears in the Raman spectrum in the vibration range 3510–3650 cm−1 [23]. In a clinical context, when the band is observed in this range, the tooth in the oral cavity can be considered healthy; when the intensity of this band is diminished, we can infer that the enamel has suffered an acid attack that weakens it, which is clinically known as an incipient caries lesion, and if that enamel is not remineralized, dental caries can appear.

#### **9.2 Organic phosphate compounds on Raman spectroscopy**

#### *9.2.1 P-O-C vibrations*

For aliphatic compounds, the asymmetric stretching vibration of the P-O-C group gives a very strong, broad band, normally found in the region 1050–970 cm−1. In the case of pentavalent and trivalent methoxy compounds, this band is sharp and strong, occurring at 1090–1010 cm−1. In general, the band due to the asymmetric stretching vibration of the P-O-C group of pentavalent phosphates occurs at lower frequencies than that of the trivalent compounds.

#### *9.2.2 P=O vibrations*

The band due to the stretching vibration of the P=O group is strong and lies in the region 1350–1150 cm−1. Taking into account the size of the phosphorus atom, the frequency of the P=O stretching vibration is almost independent of the type of compound in which the group occurs and of the size of the substituents. However, it is governed by the number of electronegative substituents directly bonded to it, and it is also sensitive to association effects [23]. In dental enamel in particular, the Raman bands of the phosphate group appear in the vibration range 1100–900 cm−1.
Clinically, a high intensity of this band indicates good mineral saturation of the dental enamel. In contrast, a weak band, slightly outside this range, indicates a loss of mineralization of the dental enamel, known as an incipient caries lesion, or a loss of continuity of the enamel with cavitation as a consequence, that is, caries. **Figure 2** shows the characteristic bands of synthetic HA: in the vibration range 1100–900 cm−1 we can identify the PO4 3− group (955.9 cm−1) in its "strong" expression, and in the vibration range 3510–3650 cm−1 the OH− group (3567.1 cm−1). The family of phosphate-containing minerals is known as apatite; HA is the calcium (Ca) apatite in mineral form and is the phosphate that most closely resembles the mineral complex of bone. Different salts of calcium phosphate (CaP) are summarized in **Table 1** [24].

#### **Figure 2.**
*Spectra of synthetic hydroxyapatite (HA) with the characteristic bands of the PO4 3− group and the OH− group. Direct source.*

## **Table 1.**
*Different salts of calcium phosphate.*

## **10. Conclusions**

Currently, in the implementation of different biomaterials in tissue engineering, characterization with μ-Raman has been very useful, since it does not structurally modify the sample, whether of synthetic or biological origin, and the sample can be measured in any state of aggregation. In dentistry, it has become an excellent means of chemically characterizing enamel and dentin treated with the various agents used for mineral recovery, which is clinically known as remineralization.

Hydroxyapatite is a biological material of great interest within the apatite family, as it is a component of bones and teeth. Using μ-Raman, it is possible to identify the characteristic vibrational modes of the PO4 3− group and of the OH− group, since the structure of the sample is not modified. In studies where tooth enamel is demineralized or remineralized and characterized by μ-Raman spectroscopy, it can be concluded whether or not fluorinated compounds are useful.

## **Acknowledgements**

This work was supported by the DGAPA-PAPIIT IA-200421 project.

## **Conflict of interest**

The author declares no conflict of interest.

## **Author details**

Cecilia Carlota Barrera-Ortega1\*, America Rosalba Vazquez Olmos2, Roberto Isaac Sato Berrú2 and Pineda Dominguez Karla Itzel3

1 Faculty of Higher Studies Iztacala, Laboratory of Nano and Dental Biomaterials, Speciality in Pediatric Dentistry, UNAM, Mexico

2 Institute of Applied Sciences and Technology, UNAM, Mexico

3 General Directorate of Military Education, Military Graduate School of Health, Mexico

\*Address all correspondence to: <EMAIL_ADDRESS>

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Sa Y, Feng X, Lei C, Yu Y, Jiang T, Wang Y. Evaluation of the effectiveness of micro-Raman spectroscopy in monitoring the mineral contents change of human enamel in vitro. Lasers in Medical Science. 2017;**32**(5):985-991

[2] Tsuda H, Arends J. Orientational micro-Raman spectroscopy on hydroxyapatite single crystals and human enamel crystallites. Journal of Dental Research.
1994;**73**(11):1703-1710

[3] Barrera-Ortega CC, Vázquez-Olmos AR, Sato-Berrú RY, Araiza-Téllez MA. Study of demineralized dental enamel treated with different fluorinated compounds by Raman spectroscopy. Journal of Biomedical Physics and Engineering. 2020;**10**(5):635-644

[4] Shin WS, Li XF, Schwartz B, Wunder V, Baran GR. Determination of the degree of cure of dental resins using Raman and FT-Raman spectroscopy. Dental Materials. 1993;**9**:317-324

[5] Requena A, Zúniga J. Espectroscopia. Madrid: Pearson Educación; 2005

[6] Tsuda H, Arends J. Raman spectroscopy in dental research: A short review of recent studies. Advances in Dental Research. 1997;**11**(4):539-547

[7] Bona GA, Bidlack F. Tooth enamel and its dynamic protein matrix. International Journal of Molecular Sciences. 2020;**21**:12

[8] Gutiérrez M, Reyes J. Microhardness and chemical composition of human tooth. Materials Research. 2003;**6**:367-373

[9] Visscher M, Struik KG. Optical profilometry and its application to mechanically inaccessible surfaces part I: Principles of focus error detection. Precision Engineering. 1994;**3**:16

[10] Mullane O, Baez R, Jones S, Lennon M, Petersen P, Rugg A. Fluoride and oral health. Community Dental Health. 2016;**33**:69-99

[11] Tiznado-Orozco GE, Reyes-Gasga J, Elefterie F, Beyens C, Maschke U, Bres EF. Wettability modification of human tooth surface by water and UV and electron-beam radiation. Materials Science and Engineering: C. 2015;**57**:133-146

[12] Navarro M, Planell JA. Nanotechnology in regenerative medicine. In: Maazouz Y, Aparicio DYC, editors. Measuring Wettability of Biosurfaces at the Microsurfaces. s.l. ed. USA, New Jersey: Springer Science+Business Media; 2012. pp. 163-172

[13] Carvalho RB, Medeiros UV, dos Santos KT, Pacheco Filho AC. Influência de diferentes concentrações de flúor na água em indicadores epidemiológicos de saúde/doença bucal. Ciência & Saúde Coletiva. 2011;**16**:3509-3518

[14] Arifa MK, Ephraim R, Rajamani T. Recent advances in dental hard tissue remineralization: A review of literature. International Journal of Clinical Pediatric Dentistry. 2019;**12**:2

[15] Gonçalves FMC, Delbem ACB, Gomes LF, Emerenciano NG, Pessan JP, Romero GDA, et al. Effect of fluoride, casein phosphopeptide-amorphous calcium phosphate and sodium trimetaphosphate combination treatment on the remineralization of caries lesions. An in vitro study. Archives of Oral Biology. 2021;**122**:10

[16] Barrera-Ortega CC, Araiza-Tellez MA, García-Perez A. Assessment of enamel surface microhardness with different fluorinated compounds under pH cycling conditions: An in vitro study. Journal of Clinical and Diagnostic Research. 2019;**13**(8):ZC05-ZC10

[17] He H, Sun D-W, Pu H, Chen L, Lin L. Applications of Raman spectroscopic techniques for quality and safety evaluation of milk: A review of recent developments. Critical Reviews in Food Science and Nutrition. 2019;**59**:770-793

[18] Buckley K, Ryder A. Applications of Raman spectroscopy in biopharmaceutical manufacturing: A short review. Applied Spectroscopy. 2017;**71**(6):1085-1116

[19] Gilchrist F, Santini A, Harley K, Deery C. The use of micro-Raman spectroscopy to differentiate between sound and eroded primary enamel. Journal of Paediatric Dentistry. 2007;**17**:274-280

[20] Penel G, Leroy G, Sombret B, Huvenne J, Bres E.
Infrared and Raman microspectrometry study of fluor-, fluor-hydroxy- and hydroxy-apatite powders. Journal of Dental Materials. 1997;**8**:271-276

[21] Kirchner M, Edwards H. Ancient and modern specimens of human teeth: a Fourier transform Raman spectroscopic study. Journal of Raman Spectroscopy. 1997;**28**:171-178

[22] Chen J, Yu Z, Zhu P, Wang J, Gan Z, Wei J, et al. Effects of fluorine on the structure of fluorohydroxyapatite: A study by XRD, solid-state NMR and Raman spectroscopy. Journal of Materials Chemistry B. 2015;**3**(1):34-38

[23] Socrates G. Infrared and Raman Characteristic Group Frequencies: Tables and Charts. 3rd ed. London, United Kingdom: John Wiley & Sons, Ltd;

[24] Munir MU, Salman S, Ihsan A, Elsaman T. Synthesis, characterization, functionalization and bio-applications of hydroxyapatite nanomaterials: An overview. Journal of Nanomedicine. 2022;**17**:1903-1925

## *Edited by Marwa El-Azazy, Khalid Al-Saad and Ahmed S. El-Shafie*

*Infrared Spectroscopy - Perspectives and Applications* is a compendium of contributions from experts in the field of infrared (IR) spectroscopy. This assembly of investigations and reviews provides a comprehensive overview of the fundamentals as well as the groundbreaking applications in the field. Chapters discuss IR spectroscopy applications in the food and biomedicine sectors and for measuring transport through polymer membranes, characterizing lignocellulosic biomasses, detecting adulterants, and characterizing enamel surface advancements. This book is an invaluable resource and reference for students, researchers, and other interested readers.

Published in London, UK © 2023 IntechOpen
Motor vehicle wheel speed balancing method

ABSTRACT

A method balances wheel speeds for a motor vehicle having a wheel-slip control system. Quick coarse balancing with respect to a reference wheel and subsequent fine balancing in pairs of wheels, either on the same side or on the same axle, are provided in stages. Cornering is not detected from the left/right deviation of an axle, but from the temporal variation of the differentiated left/right deviation, to achieve very sensitive and reliable wheel balancing.

BACKGROUND AND SUMMARY OF THE INVENTION

The present invention relates to a method for balancing wheel speeds for a motor vehicle, particularly one having a wheel-slip control system. A need arises, particularly in the case of motor vehicles having wheel-slip control systems, for exact wheel speed balancing in order to be able to reliably inform the driver, via, for example, a driver information lamp, about the system state and driving condition. It is advantageous for such balancing also to be used in conjunction with built-in wheel-slip control systems such as anti-lock braking systems (ABS) and traction slip controls (TSC). The imminent blocking or spinning of a wheel is usually detected in this case by the fact that the temporal change in the measured speed of the relevant wheel is no longer situated within a prescribable standard range. That is, with threatened wheel blocking and threatened wheel spinning, the wheel acceleration is respectively situated above an adjustable threshold value. The term "acceleration" is understood here to cover both an actual, positive acceleration and a retardation or negative acceleration.

In order to be able to detect the deviation from the standard, desired wheel-slip behavior as early and reliably as possible, control systems operating at high accuracy must take account of the fact that in the case of slip-free, purely rolling straight-ahead driving the speeds of the vehicle wheels are already not equal, for example because of production tolerances during tire manufacture, differing degrees of wear of the tires, and the like. Thus, given a driving speed of approximately 100 km/h, typical wheel speed differences in the percentage range already yield a deviation in the overall wheel speeds of approximately 1 km/h, a value which must be taken into account in the case of modern all-wheel and wheel-slip control systems. This purpose is achieved by a wheel speed balancing method, by way of which measured wheel speeds are conditioned, taking into account the different rolling circumferences of the individual wheels, which are also subject to continuous temporal variation, before they are evaluated by a control system, for example an all-wheel drive and/or wheel-slip control system.

Offenlegungsschrift DE 41 30 370 A1 describes a single-stage wheel speed balancing method which is activated whenever overshooting of a minimum speed, a sufficient amount of straight-ahead driving and an at most gentle vehicle acceleration are detected. A two-stage wheel speed balancing method is described in Patent DE 40 19886 C1. In a first stage, a first wheel balancing takes place when a sufficiently gentle vehicle acceleration, a sufficiently low vehicle speed and sufficiently gentle cornering are detected. This first stage is followed in a second stage by renewed wheel balancing when sufficiently gentle vehicle acceleration and sufficiently gentle cornering continue to be detected together with a sufficiently high vehicle speed.
In this case, wheel balancing is performed in a first stage in pairs with wheels on the same side, and in the second stage with respect to a selected reference wheel. An object of the invention is to provide a wheel speed balancing method which is substantially more reliable and operates with high accuracy.

The foregoing object has been achieved in accordance with the present invention by providing a wheel speed balancing method in which speed scaling factors are determined for the wheels with the purpose of forming mutually matched corrected wheel speeds. The first step is a quick coarse balancing with respect to a reference wheel, for example in a non-recurring fashion after an engine start, as soon as there is a sufficient measure of straight-ahead driving above a specific minimum speed and, at the same time, a sufficiently low vehicle acceleration value (which is understood, as mentioned above, also to include a negative acceleration value, that is to say a retardation value). Thereupon, a second step in the form of a fine balancing of the vehicle wheels, which is regularly repeated during a drive of the vehicle, for example, is carried out, specifically in pairs between two wheels of one vehicle side or two wheels of one vehicle axle, depending on the driving condition detected. Fine balancing is carried out between wheels on the same axle if, as a consequence of a higher drive torque, balancing of the rear wheels in relation to the respective front wheels on the same side cannot be carried out with sufficient exactitude. The speed scaling factors thus determined permit the formation of corrected, mutually matched wheel speeds by multiplying the respectively measured speed by the associated scaling factor.

According to one embodiment of the present invention, a permanent stipulation of the speed scaling factor for the reference wheel advantageously prevents the correction factors from gradually drifting off in one direction up to a limiting value. Variable reference wheel selection advantageously ensures that balancing is respectively carried out with respect to an average speed. In contrast to rigid or fixed systems which balance with respect to a permanently prescribed wheel (for example, the non-driven left-hand or right-hand wheel), variable reference wheel selection is not attended by the risk that three relatively identically rotating, "good" wheels are balanced with respect to a "bad" wheel deviating strongly therefrom, such as, for example, an emergency wheel mounted at the reference wheel position.

According to yet another advantageous feature of the present invention, the quickly calibrating coarse balancing is, as the case may be, carried out recursively and terminated when the deviations of the instantaneously valid scaling factors all remain below a prescribable limiting value with respect to their temporally continuously varying desired values. This limiting value is, on the one hand, small enough already to obtain fairly exact wheel speed balancing, but, on the other hand, is selected sufficiently large to be able to terminate coarse balancing in a comparatively short time. Matching of the scaling factors must be performed continuously and is not permitted to overshoot maximum prescribable amounts of variation per prescribable time unit.
However, the corrected wheel speeds used in a wheel-slip control system must not be influenced by the new scaling factors, valid from one program cycle to the next, in such a way that the controlled variables determined from the corrected wheel speeds overshoot specific control thresholds. A presently preferred embodiment of the invention adds on, upon termination of quick coarse balancing, an offset amount for the rear wheel scaling factors, which takes account of a rear wheel drive slip possibly present during coarse balancing in the case of a relatively high drive torque. Fine balancing is likewise carried out recursively and continuously during driving if the corresponding driving conditions obtain.

In the case of the, preferably recursive, matching of the scaling factors to respectively newly measured wheel speeds, deviations of the newly calculated scaling factors from the previous ones are determined, and the previous scaling factors are brought toward the newly calculated values in steps as a function of these differences. This prevents short-term fluctuations in the scaling factors, which are, for example, caused by roadway influences. In this situation, the newly calculated scaling factor values are respectively yielded from the quotient of the measured speeds of the two wheels which are in the process of being balanced relative to one another.

According to yet a further aspect of the present invention, balancing of the rear wheels with respect to their front wheels on the same side is carried out for fine balancing in the normal case, in particular if no excessively large drive torque is active. In the case of an excessively large drive torque, the wheels of each axle are instead balanced relative to one another in pairs, since a larger drive torque would lead to defective wheel speed balancing between wheels on the same side.

For the purpose of detecting sufficiently gentle cornering, which can, in particular, be straight-ahead driving completely free from curves, use can be made not only of the left/right deviation itself, as is customary, but of the temporally differentiated left/right deviation obtained therefrom. This prevents faulty detection based on a stationary wheel circumference difference of the two wheels, for example on the basis of the mounting of wrong wheels, and, owing to the exclusion of this possible source of error, permits the selection of a comparatively small limiting value, that is, the detection of a large amount of straight-ahead driving.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other objects, features and advantages of the present invention will become more readily apparent from the following detailed description thereof when taken in conjunction with the accompanying sole FIGURE, which is a flow diagram of a wheel speed balancing method according to the present invention for a motor vehicle having a wheel-slip control system.

DETAILED DESCRIPTION OF THE DRAWING

The wheel speed balancing method of the present invention is used for a motor vehicle having a wheel-slip control system in which the need arises for sensitive wheel balancing for the purpose of actuating a driver information lamp which informs the driver of the current driving condition. The method begins after an engine start (step 1) with quick calibration, carried out non-recursively, for the purpose of coarsely balancing the wheel speeds.
Tire rolling radii which deviate strongly from the standard rolling radius, such as, for example, during the use of an emergency spare wheel or in the case of mounting a tire of the wrong size, are corrected thereby. By way of introduction, an interrogation is made as to whether the conditions expected for coarse balancing are fulfilled (step 2), with the following conditions being monitored: absence of a braking process, detected from monitoring the brake light contact; overshooting of a minimum speed of 45 km/h; presence of a sufficient amount of straight-ahead driving, detected from the differentiated left/right deviation signals of the measured wheel speeds remaining below limiting values over a sufficiently long period of time, for example 4.58; and undershooting of limiting values of the vehicle acceleration, for example below 0.5 m/s², detected by determining the average rear axle wheel acceleration. If one of these conditions is not fulfilled, a new interrogation is made. The abovementioned coarse balancing conditions ensure the detection of driving on μ-split roadways or of aquaplaning points, so that the quick calibration for coarse balancing is then halted.

Finally, if a driving condition fulfilling all the coarse balancing conditions is reached, the first step is to determine a reference wheel which is used for balancing purposes (step 3). For this purpose, the four wheel speeds are measured, and their arithmetic average is formed. The wheel whose speed has the smallest deviation from this average value is selected as the reference wheel. As mentioned above, this reference wheel selection avoids balancing with respect to an unfavorable wheel. The coarse balancing is also carried out in the case of an activated system intervention, for example a braking intervention in the case of TSC or GDB (regulated differential brake, in which the slipping wheel is braked down to vehicle speed using the wheel brake present instead of locking the differential), an engine torque intervention in the case of TSC, central locking activation in the case of all-wheel drive, and the like. The quick coarse balancing with respect to a selected reference wheel is logical, since the influence of the drive torque on the dynamic tire rolling circumference is smaller in the relevant range than the influence of tires for mixed and extreme conditions.

Thereafter, the next step is taken (step 4), in which the actual scaling factor determination is carried out within the framework of quick coarse balancing. The first step for this purpose is to prescribe initial values for the scaling factors. The reference wheel is set in this case to a constant, permanently prescribed scaling factor initial value, with the result that there is a constant orientation of all the wheel speeds to a fixed value in order, as mentioned above, to prevent a gradual drifting away of the correction factors. The effect of this is that repeated, unfavorable changing to and fro of unfavorable tires does not escalate the scaling factors as far as prescribed minimum or maximum values. The remaining three initial scaling factors are taken over as far as possible from a preceding wheel speed balancing, so that the fresh coarse balancing can be terminated as quickly as possible. For this purpose, the four instantaneous scaling factors are stored after an engine stop in each case. If previous values are not available, all the scaling factors can alternatively be set initially to the same initial value.
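The patent text itself contains no code; the following Python sketch is only an illustration of the reference-wheel selection rule and the seeding of the initial scaling factors described above (steps 3 and 4). The function and variable names are invented for the example, and the fixed value of 10,000 for the reference wheel follows the numerical example given later in the description.

```python
REFERENCE_SCALE = 10_000  # permanently prescribed scaling factor of the reference wheel

def select_reference_wheel(wheel_speeds):
    """Index of the wheel whose measured speed is closest to the arithmetic average."""
    average = sum(wheel_speeds) / len(wheel_speeds)
    return min(range(len(wheel_speeds)), key=lambda i: abs(wheel_speeds[i] - average))

def initial_scaling_factors(reference_index, stored_factors=None, n_wheels=4):
    """Seed the factors: reference wheel fixed, the others taken over from a
    previous balancing run if such stored values are available."""
    factors = list(stored_factors) if stored_factors else [REFERENCE_SCALE] * n_wheels
    factors[reference_index] = REFERENCE_SCALE
    return factors

# Example with speeds in km/h for front-left, front-right, rear-left, rear-right:
speeds = [99.2, 100.1, 100.4, 100.3]
ref = select_reference_wheel(speeds)    # wheel nearest the mean becomes the reference
factors = initial_scaling_factors(ref)  # [10000, 10000, 10000, 10000] without stored values
```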
After stipulation of the initial scaling factors, the wheel speeds are now determined. After suitable filtering, a determination is made of the scaling factors which belong to the measured wheel speeds and are referred to the reference wheel, and which are yielded from the quotient of the reference wheel speed and the speed of the wheel under consideration. After filtering of these determined scaling factors, the difference is formed for each wheel between the still valid scaling factor and the scaling factor just determined, and this difference is likewise subjected to filtering. Subsequently, the new scaling factor is formed by incrementally increasing or decreasing the still valid, previous scaling factor for each wheel, the direction of the stepwise change in value being yielded from the sign of the scaling factor difference determined. In this case, it is possible, for the purpose of increasing the rate of calibration, to select instead of an increase by 1 a higher increment which is set in terms of absolute value to a fraction of the difference determined, so that the rate of calibration rises with a higher instantaneous deviation and is reduced to the desired value with increasing approximation after a plurality of program cycles. In a typical application, in which the permanently prescribed scaling factor is set to the value of 10,000, the higher increment is selected, for example, as the next integer above one twenty-fifth of the difference determined.

The possibly recursive behavior is generated by a subsequent interrogation (step 5) in which it is established for each wheel whether the difference determined between the scaling factor determined from the wheel speed and the scaling factor previously present does not overshoot, in terms of absolute value, a prescribed maximum value which, for example, amounts to a 0.1% deviation with respect to the permanently prescribed initial wheel speed. If overshooting occurs for at least one wheel, a return is made to before step 4, after which renewed wheel speed measurement and, subsequently, a renewed incremental change in the scaling factors are undertaken. If all the differences determined are below the prescribed value, quick coarse balancing is terminated. In order not to "calibrate away" any existing drive slip of the rear wheels at the end of quick coarse balancing, an offset value dependent on engine torque is subsequently added to the two rear wheel scaling factors; for example, the scaling factors are raised by 0.4% if the drive torque amounts to +1,000 Nm and are reduced by 0.2% if the drive torque amounts to -500 Nm (step 6).

After this non-recurring measure of coarse balancing after an engine start, an interrogation is subsequently made (step 7) as to whether the conditions are present for fine balancing of the rear wheels with respect to the front wheels on the same side. Assumed for this purpose are: driving which is virtually free from drive torque at a speed of more than 45 km/h (so that, when cornering, a front axle/rear axle Ackermann correction is no longer required); cornering which is not excessive, for example a steering angle of less than 50°; no brake actuation, detected via the braking light contact; and no excessive vehicle acceleration or non-stationary cornering of the vehicle. If it is detected that all the above conditions are observed in this interrogation step, the actual fine balancing determination of the scaling factors (step 8) is begun.
For this purpose, the rear wheel speeds are firstly measured again, the values obtained are filtered, and scaling factors for the rear wheels are determined therefrom, which scaling factors are yielded from the quotient of the corrected speed of the front wheel on the same side and the measured speed of the rear wheel. After filtering of these determined rear wheel scaling factors, the difference between the previously present rear wheel scaling factors and those freshly determined is formed in turn and subjected to filtering. Thereafter, a fine incremental increase in the previous, still valid respective rear wheel scaling factor takes place in a direction prescribed by the sign of the difference determined. The fineness of this balancing by comparison with the coarse balancing described above can be seen from an example in which, in the case of coarse calibration, there is a step increase of at least one unit per ten program cycles of 15 ms, while in the case of fine balancing matching is performed by one unit per 100 program cycles of 15 ms. In a typical example, it is possible in the case of this fine calibration, at a driving speed of 50 km/h, to correct the wheel to be calibrated in one minute by 0.2 km/h (i.e., by 0.4%/min). After the incrementation of the rear wheel scaling factors, which can, as already mentioned, be performed in the direction of larger or, as actual decrementation, smaller scaling factor values, these newly valid rear wheel scaling factors are used to form corrected rear wheel speeds, specifically respectively as the product of the measured rear wheel speed and the newly valid scaling factor of the relevant rear wheel. The corrected rear wheel speeds, finely balanced with respect to the front wheels on the same side, are present for the rear wheels after filtering of these values. Thereupon, a return is made to before the fine balancing interrogation step in order to initiate renewed fine balancing and in this way to have continuously balanced wheel speeds present. Alternatively, it is possible to repeat the fine balancing only at relatively long intervals.

If it is established in the interrogation step for fine balancing on the same side that at least one of the conditions is not met, an interrogation is made in the next step (step 9) as to whether mutual fine balancing of the wheels on the same axle, that is to say of the left-hand front relative to the right-hand front and of the left-hand rear relative to the right-hand rear wheels, is possible. In contrast to fine balancing on the same side, such fine balancing is also possible given the occurrence of a relatively large drive torque and thus rear axle crown wheel torque. The further conditions essentially correspond to those for fine balancing on the same side, although only a smaller amount of cornering is permitted. In this case, a steering angle of 20° is initially permitted, and it is reduced successively down to 3° after repeated overshooting of the scaling factors. If one of the interrogated conditions is not met, the method returns to the stage before the interrogation for fine balancing on the same side. If it is detected that the conditions are met, the actual fine balancing of the front left-hand wheel relative to the front right-hand one and, at the same time, of the rear left-hand wheel relative to the rear right-hand one is carried out (step 10).
For this purpose, a start is made by measuring the speeds of the two left-hand wheels; the values obtained are filtered and used to determine associated scaling factors for these wheels by way of the quotient of the corrected speed of the associated right-hand wheel and the measured speed of the left-hand wheel. After filtering of the new scaling factors for the left-hand wheels, the differences between the previously present scaling factors and those determined for the left-hand wheels are calculated in turn in the way described above, and these difference values are filtered. Subsequently, the scaling factors of all the wheels are incrementally increased or reduced in the direction prescribed in each case by the deviation differences determined. These new, now valid scaling factors are used to determine the corrected wheel speeds anew as the product of the previous wheel speeds and their new scaling factors. Thus, in contrast to fine balancing on the same side, the scaling factors of the two wheels to be balanced are moved incrementally towards one another in the case of fine balancing on the same axle. In conjunction with the same increment, this yields a higher calibration rate of, for example, 0.8%/min; that is, it is possible in the case of a driving speed of 100 km/h to correct a front or rear axle left/right deviation by 0.8 km/h. After termination of the fine balancing on the same axle, a return is made to the stage before the interrogation for fine balancing on the same side, as a result of which the program of the method is inherently closed.

It is not explicitly represented that, as already mentioned above, in the event of a later engine stop the wheel speed scaling factors present are stored, in order to serve after a later renewed engine start as initial values for coarse balancing. Moreover, suitable safety thresholds are integrated into the program of the method of the present invention, for example minimum and maximum values for the scaling factors, in order to intercept any errors in measurement and calculation.

The method of the present invention permits quick and precise wheel speed balancing. One program cycle, including detection of the measured values, filtering and calculation of the variables, lasts 15 ms or less. In the case of vehicles with automatic transmissions, the drive torque (i.e., the rear axle crown wheel torque) is determined via the turbine torque, for which purpose the engine torque, if present, is detected directly or the turbine torque is calculated from the throttle angle, engine speed and converter characteristics map. The steering wheel angle is determined from the left/right deviation of the front wheels and the vehicle reference speed, for which purpose the front wheel speeds are correspondingly conditioned. The algorithm described delivers a wheel speed balancing accuracy of, at most, 0.1% deviation between all four wheel speeds corrected by the speed scaling factors in conjunction with slip-free rolling. The wheel speed balancing method can be used with slight modifications in vehicles having different tire slip control systems, such as ABS, TSC, SMR (fast torque regulation on longitudinal dynamic behavior regulation systems) and GDB.

Although the invention has been described and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example, and is not to be taken by way of limitation. The spirit and scope of the present invention are to be limited only by the terms of the appended claims.
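To make the quotient-and-increment idea of the description more tangible, here is a hedged Python sketch of one coarse-balancing cycle toward the reference wheel and of the formation of corrected speeds. The filtering stages, the torque-dependent rear-wheel offset and the safety limits mentioned above are deliberately omitted, and all names and the loop structure are invented for the illustration; only the quotient rule, the deviation-dependent step size and the 0.1% termination criterion follow the numerical example in the text.

```python
REFERENCE_SCALE = 10_000  # permanently prescribed factor of the reference wheel

def coarse_balance_step(speeds, factors, ref, divisor=25):
    """One update cycle: step every non-reference factor toward the quotient-derived
    target and report the largest remaining deviation (in factor units)."""
    new_factors = list(factors)
    max_deviation = 0.0
    for i, speed in enumerate(speeds):
        if i == ref:
            continue  # the reference wheel keeps its fixed factor
        target = speeds[ref] / speed * REFERENCE_SCALE
        diff = target - factors[i]
        max_deviation = max(max_deviation, abs(diff))
        # "the next integer above one twenty-fifth of the difference determined",
        # but at least one unit per cycle
        step = max(1, int(abs(diff) // divisor) + 1)
        new_factors[i] += step if diff > 0 else -step
    return new_factors, max_deviation

def corrected_speeds(speeds, factors):
    """Corrected speed = measured speed x scaling factor (normalized back to km/h)."""
    return [s * f / REFERENCE_SCALE for s, f in zip(speeds, factors)]

# Example run until all deviations undershoot 0.1 % of the prescribed value (10 units):
speeds = [99.2, 100.1, 100.4, 100.3]   # km/h: front-left, front-right, rear-left, rear-right
factors = [REFERENCE_SCALE] * 4
ref = 1
while True:
    factors, deviation = coarse_balance_step(speeds, factors, ref)
    if deviation <= 10:
        break
print([round(v, 2) for v in corrected_speeds(speeds, factors)])
```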
What is claimed is:

1. A method for balancing speeds of wheels of a motor vehicle having a wheel-slip control system, comprising the steps of (a) making a coarse-step determination of scaling factors by carrying out quick coarse balancing with respect to a reference wheel where cornering below a preset cornering threshold value, overshooting of a vehicle speed above a preset minimum vehicle speed and vehicle acceleration below a preset threshold value have been detected, and (b) thereafter, making a fine-step determination of the scaling factors by carrying out one of fine balancing of each wheel of an axle with respect to the wheel on the same side of another axle when drive torque below a preset torque threshold value and overshooting of the vehicle speed above the preset minimum vehicle speed are detected, and of fine balancing each wheel on one side with respect to the opposite wheel on the same axle when drive torque above a preset threshold value, cornering above the preset cornering value and overshooting of the vehicle speed above the preset minimum vehicle speed are detected, such that mutually matched corrected wheel speeds are formed in the vehicle.

2. The method according to claim 1, wherein step (a) for making the coarse-step determination comprises (a') first selecting a wheel as a reference wheel and setting a speed scaling factor thereof to a permanently prescribed value, and thereafter (a") measuring the speeds of the wheels, determining new speed scaling factors from previous speed scaling factors as a function of the quotient of the measured speed of each wheel to that of the reference wheel, and determining coarsely corrected speeds for the wheels.

3. The method according to claim 2, wherein in step (a') the speeds of the wheels are measured non-recursively, and the wheel having the smallest speed deviation from an arithmetic average value of the measured wheel speeds is selected as the reference wheel.

4. The method according to claim 2, wherein step (a") is repeated until all deviations of the new speed scaling factors from the previous speed scaling factors undershoot a prescribed limiting value.

5. The method according to claim 4, wherein, in step (a'), the speeds of the wheels are measured non-recursively, and the wheel having the smallest speed deviation from an arithmetic average value of the measured wheel speeds is selected as the reference wheel.

6. The method according to claim 2, wherein, at the end of the quick coarse balancing, an offset amount dependent on drive torque is added to the new speed scaling factors for the rear wheels.

7. The method according to claim 6, wherein, in step (a'), the speeds of the wheels are measured non-recursively, and the wheel having the smallest speed deviation from an arithmetic average value of the measured wheel speeds is selected as the reference wheel.

8. The method according to claim 7, wherein step (a") is repeated until all deviations of the new speed scaling factors from the previous speed scaling factors undershoot a prescribed limiting value.

9. The method according to claim 1, wherein, in step (b), the speed of one wheel is measured and the new speed scaling factors are determined from the previous speed scaling factors as a function of the quotient of the speed of the one wheel, measured during fine balancing, to the speed of the other wheel, determined during coarse balancing, and finely corrected speeds are determined therefrom for the two wheels.
10. The method according to claim 9, wherein step (a) for making the coarse-step determination comprises (a') first selecting a wheel as a reference wheel and setting a speed scaling factor thereof to a permanently prescribed value, and thereafter (a") measuring the speeds of the wheels, determining new speed scaling factors from previous speed scaling factors as a function of the quotient of the measured speed of each wheel to that of the reference wheel, and determining coarsely corrected speeds for the wheels.

11. The method according to claim 10, wherein, in step (a'), the speeds of the wheels are measured non-recursively, and the wheel having the smallest speed deviation from an arithmetic average value of the measured wheel speeds is selected as the reference wheel.

12. The method according to claim 11, wherein step (a") is repeated until all deviations of the new speed scaling factors from the previous speed scaling factors undershoot a prescribed limiting value.

13. The method according to claim 12, wherein, at the end of the quick coarse balancing, an offset amount dependent on drive torque is added to the new speed scaling factors for the rear wheels.

14. The method according to claim 9, wherein, in step (b), the rear wheels are finely balanced with respect to the front wheels on the same side of the vehicle, and the speed scaling factors of the front wheels, determined from coarse balancing, are kept constant whereas the speed scaling factors of the rear wheels are freshly determined.

15. The method according to claim 14, wherein, for determining the new speed scaling factors, the previous speed scaling factors are increased incrementally in a direction towards scaling factor values which are yielded from the quotients of the measured wheel speeds of the two wheels under consideration.

16. The method according to claim 9, wherein, in step (b), during fine balancing of the speeds of wheels on the same axle, new speed scaling factors are determined from the previous ones for both wheels by a respective incremental change of the two previous speed scaling factors towards one another.

17. The method according to claim 16, wherein, for determining the new speed scaling factors, the previous speed scaling factors are increased incrementally in a direction towards scaling factor values which are yielded from the quotients of the measured wheel speeds of the two wheels under consideration.

18. The method according to claim 17, wherein, in step (b), the rear wheels are finely balanced with respect to the front wheels on the same side of the vehicle, and the speed scaling factors of the front wheels, determined from coarse balancing, are kept constant whereas the speed scaling factors of the rear wheels are freshly determined.

19. The method according to claim 1, wherein, for determining the new speed scaling factors, the previous speed scaling factors are increased incrementally in a direction towards scaling factor values which are yielded from the quotients of the measured wheel speeds of the two wheels under consideration.

20. The method according to claim 1, wherein, for detecting an amount of cornering, the speeds of the left-hand wheel and the right-hand wheel of one axle are determined as a function of time, a respective left/right deviation is determined and its time dependence is differentiated, and at most gentle cornering is concluded when the differentiated left/right deviation undershoots a prescribable limiting value.
Thread:<IP_ADDRESS>/@comment-22439-20170117164959 Hi, I'm an admin for the community. Welcome and thank you for your edit to Yakumo Oomori! Please check our policies. These are guides for your contributions. Enjoy your time at !
namespace ExcelDocConverter { partial class Form2 { /// <summary> /// Required designer variable. /// </summary> private System.ComponentModel.IContainer components = null; /// <summary> /// Clean up any resources being used. /// </summary> /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param> protected override void Dispose(bool disposing) { if (disposing && (components != null)) { components.Dispose(); } base.Dispose(disposing); } #region Windows Form Designer generated code /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { System.Windows.Forms.DataGridViewCellStyle dataGridViewCellStyle1 = new System.Windows.Forms.DataGridViewCellStyle(); System.ComponentModel.ComponentResourceManager resources = new System.ComponentModel.ComponentResourceManager(typeof(Form2)); this.btSelectFiles = new System.Windows.Forms.Button(); this.label1 = new System.Windows.Forms.Label(); this.cbFileType = new System.Windows.Forms.ComboBox(); this.btOutputFolder = new System.Windows.Forms.Button(); this.txtOutput = new System.Windows.Forms.TextBox(); this.btStartConvert = new System.Windows.Forms.Button(); this.dataGridView1 = new System.Windows.Forms.DataGridView(); this.colnSelect = new System.Windows.Forms.DataGridViewLinkColumn(); this.colnFiles = new System.Windows.Forms.DataGridViewTextBoxColumn(); this.colnStatus = new System.Windows.Forms.DataGridViewTextBoxColumn(); this.btRemoveAll = new System.Windows.Forms.Button(); this.linkLabel1 = new System.Windows.Forms.LinkLabel(); this.lbError = new System.Windows.Forms.Label(); this.pictureBox1 = new System.Windows.Forms.PictureBox(); ((System.ComponentModel.ISupportInitialize)(this.dataGridView1)).BeginInit(); ((System.ComponentModel.ISupportInitialize)(this.pictureBox1)).BeginInit(); this.SuspendLayout(); // // btSelectFiles // this.btSelectFiles.Location = new System.Drawing.Point(12, 90); this.btSelectFiles.Name = "btSelectFiles"; this.btSelectFiles.Size = new System.Drawing.Size(124, 23); this.btSelectFiles.TabIndex = 0; this.btSelectFiles.Text = "Select Files"; this.btSelectFiles.UseVisualStyleBackColor = true; this.btSelectFiles.Click += new System.EventHandler(this.btSelectFiles_Click); // // label1 // this.label1.AutoSize = true; this.label1.Location = new System.Drawing.Point(12, 9); this.label1.Name = "label1"; this.label1.Size = new System.Drawing.Size(140, 13); this.label1.TabIndex = 1; this.label1.Text = "Convert Selected File(s) To"; // // cbFileType // this.cbFileType.DropDownStyle = System.Windows.Forms.ComboBoxStyle.DropDownList; this.cbFileType.FormattingEnabled = true; this.cbFileType.Items.AddRange(new object[] { "xls - Microsoft Excel 2003", "xlsx - Microsoft Excel 2007 & Above", "pdf - Portable Document Format", "xps - XML Paper Specification", "csv - Comma Separated Value"}); this.cbFileType.Location = new System.Drawing.Point(158, 6); this.cbFileType.Name = "cbFileType"; this.cbFileType.Size = new System.Drawing.Size(279, 21); this.cbFileType.TabIndex = 2; // // btOutputFolder // this.btOutputFolder.Location = new System.Drawing.Point(12, 33); this.btOutputFolder.Name = "btOutputFolder"; this.btOutputFolder.Size = new System.Drawing.Size(262, 23); this.btOutputFolder.TabIndex = 3; this.btOutputFolder.Text = "Select a folder to save the converted files"; this.btOutputFolder.UseVisualStyleBackColor = true; this.btOutputFolder.Click += new 
System.EventHandler(this.btOutputFolder_Click); // // txtOutput // this.txtOutput.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right))); this.txtOutput.Location = new System.Drawing.Point(12, 62); this.txtOutput.Name = "txtOutput"; this.txtOutput.Size = new System.Drawing.Size(710, 22); this.txtOutput.TabIndex = 4; // // btStartConvert // this.btStartConvert.Location = new System.Drawing.Point(272, 90); this.btStartConvert.Name = "btStartConvert"; this.btStartConvert.Size = new System.Drawing.Size(124, 23); this.btStartConvert.TabIndex = 5; this.btStartConvert.Text = "Start Convert"; this.btStartConvert.UseVisualStyleBackColor = true; this.btStartConvert.Click += new System.EventHandler(this.btStartConvert_Click); // // dataGridView1 // this.dataGridView1.AllowUserToAddRows = false; this.dataGridView1.AllowUserToDeleteRows = false; this.dataGridView1.AllowUserToResizeRows = false; this.dataGridView1.Anchor = ((System.Windows.Forms.AnchorStyles)((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom) | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right))); this.dataGridView1.BackgroundColor = System.Drawing.Color.White; this.dataGridView1.ColumnHeadersHeightSizeMode = System.Windows.Forms.DataGridViewColumnHeadersHeightSizeMode.AutoSize; this.dataGridView1.Columns.AddRange(new System.Windows.Forms.DataGridViewColumn[] { this.colnSelect, this.colnFiles, this.colnStatus}); dataGridViewCellStyle1.Alignment = System.Windows.Forms.DataGridViewContentAlignment.MiddleLeft; dataGridViewCellStyle1.BackColor = System.Drawing.SystemColors.Window; dataGridViewCellStyle1.Font = new System.Drawing.Font("Segoe UI", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0))); dataGridViewCellStyle1.ForeColor = System.Drawing.SystemColors.ControlText; dataGridViewCellStyle1.SelectionBackColor = System.Drawing.SystemColors.Window; dataGridViewCellStyle1.SelectionForeColor = System.Drawing.SystemColors.ControlText; dataGridViewCellStyle1.WrapMode = System.Windows.Forms.DataGridViewTriState.False; this.dataGridView1.DefaultCellStyle = dataGridViewCellStyle1; this.dataGridView1.Location = new System.Drawing.Point(12, 119); this.dataGridView1.Name = "dataGridView1"; this.dataGridView1.ReadOnly = true; this.dataGridView1.RowHeadersVisible = false; this.dataGridView1.Size = new System.Drawing.Size(710, 368); this.dataGridView1.TabIndex = 6; // // colnSelect // this.colnSelect.HeaderText = ""; this.colnSelect.Name = "colnSelect"; this.colnSelect.ReadOnly = true; this.colnSelect.Width = 80; // // colnFiles // this.colnFiles.HeaderText = "File(s)"; this.colnFiles.Name = "colnFiles"; this.colnFiles.ReadOnly = true; this.colnFiles.Width = 500; // // colnStatus // this.colnStatus.HeaderText = "Status"; this.colnStatus.Name = "colnStatus"; this.colnStatus.ReadOnly = true; // // btRemoveAll // this.btRemoveAll.Location = new System.Drawing.Point(142, 90); this.btRemoveAll.Name = "btRemoveAll"; this.btRemoveAll.Size = new System.Drawing.Size(124, 23); this.btRemoveAll.TabIndex = 7; this.btRemoveAll.Text = "Remove All"; this.btRemoveAll.UseVisualStyleBackColor = true; this.btRemoveAll.Click += new System.EventHandler(this.btRemoveAll_Click); // // linkLabel1 // this.linkLabel1.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right))); 
this.linkLabel1.AutoSize = true; this.linkLabel1.Location = new System.Drawing.Point(664, 492); this.linkLabel1.Name = "linkLabel1"; this.linkLabel1.Size = new System.Drawing.Size(58, 13); this.linkLabel1.TabIndex = 9; this.linkLabel1.TabStop = true; this.linkLabel1.Text = "More Info"; this.linkLabel1.LinkClicked += new System.Windows.Forms.LinkLabelLinkClickedEventHandler(this.linkLabel1_LinkClicked); // // lbError // this.lbError.AutoSize = true; this.lbError.Location = new System.Drawing.Point(590, 89); this.lbError.Name = "lbError"; this.lbError.Size = new System.Drawing.Size(122, 26); this.lbError.TabIndex = 10; this.lbError.Text = "Double click \"Error\"\r\nto view error message."; this.lbError.Visible = false; // // pictureBox1 // this.pictureBox1.Image = global::ExcelDocConverter.Properties.Resources.p; this.pictureBox1.InitialImage = null; this.pictureBox1.Location = new System.Drawing.Point(401, 92); this.pictureBox1.Name = "pictureBox1"; this.pictureBox1.Size = new System.Drawing.Size(220, 20); this.pictureBox1.TabIndex = 11; this.pictureBox1.TabStop = false; this.pictureBox1.Visible = false; // // Form2 // this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F); this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font; this.ClientSize = new System.Drawing.Size(734, 512); this.Controls.Add(this.pictureBox1); this.Controls.Add(this.lbError); this.Controls.Add(this.linkLabel1); this.Controls.Add(this.btRemoveAll); this.Controls.Add(this.dataGridView1); this.Controls.Add(this.btStartConvert); this.Controls.Add(this.txtOutput); this.Controls.Add(this.btOutputFolder); this.Controls.Add(this.cbFileType); this.Controls.Add(this.label1); this.Controls.Add(this.btSelectFiles); this.Font = new System.Drawing.Font("Segoe UI", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0))); this.Icon = ((System.Drawing.Icon)(resources.GetObject("$this.Icon"))); this.MinimumSize = new System.Drawing.Size(650, 350); this.Name = "Form2"; this.StartPosition = System.Windows.Forms.FormStartPosition.CenterScreen; this.Text = "Simple MS Excel Document Converter 2.1"; this.Load += new System.EventHandler(this.Form2_Load); ((System.ComponentModel.ISupportInitialize)(this.dataGridView1)).EndInit(); ((System.ComponentModel.ISupportInitialize)(this.pictureBox1)).EndInit(); this.ResumeLayout(false); this.PerformLayout(); } #endregion private System.Windows.Forms.Button btSelectFiles; private System.Windows.Forms.Label label1; private System.Windows.Forms.ComboBox cbFileType; private System.Windows.Forms.Button btOutputFolder; private System.Windows.Forms.TextBox txtOutput; private System.Windows.Forms.Button btStartConvert; private System.Windows.Forms.DataGridView dataGridView1; private System.Windows.Forms.DataGridViewLinkColumn colnSelect; private System.Windows.Forms.DataGridViewTextBoxColumn colnFiles; private System.Windows.Forms.DataGridViewTextBoxColumn colnStatus; private System.Windows.Forms.Button btRemoveAll; private System.Windows.Forms.LinkLabel linkLabel1; private System.Windows.Forms.Label lbError; private System.Windows.Forms.PictureBox pictureBox1; } }
i rate vivian grey three RateBook
package keeper

import (
	"bytes"

	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"

	postsK "github.com/desmos-labs/desmos/x/posts/keeper"
	posts "github.com/desmos-labs/desmos/x/posts/types"

	"github.com/desmos-labs/desmos/x/reports/types"
	"github.com/desmos-labs/desmos/x/reports/types/models"
)

// Keeper maintains the link to data storage and exposes getter/setter methods for the various parts of the state machine
type Keeper struct {
	PostKeeper postsK.Keeper // Posts keeper used to perform checks on the postIDs
	StoreKey   sdk.StoreKey  // Unexposed key to access the store from sdk.Context
	Cdc        *codec.Codec  // The wire codec for binary encoding/decoding.
}

// NewKeeper creates a new instance of the reports Keeper
func NewKeeper(pk postsK.Keeper, cdc *codec.Codec, storeKey sdk.StoreKey) Keeper {
	return Keeper{
		PostKeeper: pk,
		StoreKey:   storeKey,
		Cdc:        cdc,
	}
}

// CheckPostExistence checks whether a post with the given postID is present inside
// the current context and returns a boolean indicating that.
func (k Keeper) CheckPostExistence(ctx sdk.Context, postID posts.PostID) bool {
	_, exist := k.PostKeeper.GetPost(ctx, postID)
	return exist
}

// SaveReport saves the given report inside the current context.
// It assumes that the report has already been validated.
// If the same report has already been inserted, nothing will be changed.
func (k Keeper) SaveReport(ctx sdk.Context, postID posts.PostID, report types.Report) {
	store := ctx.KVStore(k.StoreKey)
	key := models.ReportStoreKey(postID)

	// Get the list of reports related to the given postID
	var reports models.Reports
	k.Cdc.MustUnmarshalBinaryBare(store.Get(key), &reports)

	// Append the given report to the list
	reports = append(reports, report)
	store.Set(key, k.Cdc.MustMarshalBinaryBare(&reports))
}

// GetPostReports returns the list of reports associated with the given postID.
// If no report is associated with the given postID, the function returns an empty list.
func (k Keeper) GetPostReports(ctx sdk.Context, postID posts.PostID) (reports types.Reports) {
	store := ctx.KVStore(k.StoreKey)

	// Get the list of reports related to the given postID
	k.Cdc.MustUnmarshalBinaryBare(store.Get(models.ReportStoreKey(postID)), &reports)
	return reports
}

// GetReportsMap returns the map of all the reports that have been stored inside the given context
func (k Keeper) GetReportsMap(ctx sdk.Context) map[string]types.Reports {
	store := ctx.KVStore(k.StoreKey)
	iterator := sdk.KVStorePrefixIterator(store, types.ReportsStorePrefix)
	defer iterator.Close()

	reportsData := map[string]types.Reports{}
	for ; iterator.Valid(); iterator.Next() {
		var reports types.Reports
		k.Cdc.MustUnmarshalBinaryBare(iterator.Value(), &reports)
		idBytes := bytes.TrimPrefix(iterator.Key(), types.ReportsStorePrefix)
		reportsData[string(idBytes)] = reports
	}

	return reportsData
}
Is there a way to switch a texture between left and right pass for a stereoscopic rendering (3.1.2)?

I'm creating a stereoscopic 360; one of the "subjects" of the scene is a person filmed on a greenscreen (filmed in stereoscopic 360). I'm reprojecting the masked movie onto a geometry: this reprojection, which at the moment is the one for the left eye, should be switched to the source for the right eye (when Blender is rendering the right eye, of course).

Hi there, welcome! It's not clear to me what you're asking. Can you just mirror the image? Perhaps include some images and try and rephrase. Hope someone can help. :)

The answer to this likely depends on how you're loading the video or image sequence. Please try to provide details on the precise setup of the scene: 1. How is the stereoscopic video loaded? 2. How are you projecting it? 3. How are you rendering the scene with regard to Blender's stereoscopic render settings? Please also add screenshots of your shader setup and scene.

Hi all, it's better to avoid the complexity of my actual scene. Imagine you have a stereoscopic 360 movie (already split into two files, one for the left eye and one for the right eye). You are using the movie (the texture taken from the left eye, for the moment) as a texture (on an emissive shader). The purpose is to render this material in stereoscopy, using the "left eye" texture when Blender is rendering the left eye, and using the "right eye" texture when Blender is rendering the right eye. Is it clearer? Otherwise please ask, I will try to do my best to give you any information possible.

I have no idea how to do this in preview, but you can do it in the compositor, using Map UV and Switch View nodes:

I already knew about this method... and unfortunately it's not useful for my purpose. Thank you anyways!
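To illustrate the compositor suggestion above, here is a minimal, untested bpy sketch of wiring a Switch View node so that a per-eye source is picked automatically while each view is rendered. It assumes stereoscopy is enabled with the default "left"/"right" views, and the movie file names are placeholders:

import bpy

scene = bpy.context.scene
scene.use_nodes = True                    # work in the compositor node tree
tree = scene.node_tree

left_node = tree.nodes.new('CompositorNodeImage')
right_node = tree.nodes.new('CompositorNodeImage')
# Placeholder paths; for movie files you may also need to adjust the image
# source and frame settings on each node.
left_node.image = bpy.data.images.load('//left_eye.mp4')
right_node.image = bpy.data.images.load('//right_eye.mp4')

# Switch View exposes one input socket per scene view and forwards whichever
# socket matches the eye currently being rendered.
switch = tree.nodes.new('CompositorNodeSwitchView')
tree.links.new(left_node.outputs['Image'], switch.inputs[0])
tree.links.new(right_node.outputs['Image'], switch.inputs[1])

composite = tree.nodes.new('CompositorNodeComposite')
tree.links.new(switch.outputs['Image'], composite.inputs['Image'])

As the reply notes, this only helps at composite time (optionally combined with a Map UV node and a UV pass to re-wrap the footage onto the geometry); it does not switch the texture inside the shader during the render itself, which is why it did not fit the original use case.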
Board Thread:Wiki management/@comment-9944069-20151208014920/@comment-9944069-20151228014900 Pinkgirl234 wrote: Bearjedi wrote: Pinkgirl234 wrote: DatDramaPlant wrote:
const supertest = require('supertest'); const app = require('../api'); const Utils = require('../api/utils'); const TestUtils = require('./utils/test-utils'); const api = supertest(app); const ENDPOINT = '/api/me'; describe('Me', () => { describe(`GET ${ENDPOINT}`, () => { describe('when not logged in', () => { it('should return 401', (done) => { api.get(ENDPOINT) .expect(401, TestUtils.finishTest(done)); }); }); describe('when logged in', () => { it('should return 200 and correct data', (done) => { TestUtils.user.createAndLogIn(api, 'standard').then( ({ response, auth }) => { this.auth = auth; return api.get(ENDPOINT) .set('Cookie', TestUtils.responseCookies(response)) .set('Accept', 'application/json') .expect(200); } ).then( (successResponse) => { const me = successResponse.body.data.attributes; expect(me.role).toBe(this.auth.role); expect(me.local.email).toBe(this.auth.email); expect(me.local.firstName).toBe(this.auth.firstName); expect(me.local.lastName).toBe(this.auth.lastName); done(); } ); }); }); }); describe(`POST ${ENDPOINT}`, () => { describe('when not logged in', () => { it('should return 404', (done) => { api.post(ENDPOINT) .expect(404, TestUtils.finishTest(done)); }); }); describe('when logged in', () => { it('should return 404', (done) => { TestUtils.user.createAndLogIn(api, 'standard').then( ({ response, auth }) => { this.auth = auth; api.post(ENDPOINT) .set('Cookie', TestUtils.responseCookies(response)) .set('Accept', 'application/json') .expect(404, TestUtils.finishTest(done)); } ); }); }); }); describe(`PUT ${ENDPOINT}`, () => { describe('when not logged in', () => { it('should return 401', (done) => { api.put(ENDPOINT) .expect(401, TestUtils.finishTest(done)); }); }); describe('when logged in', () => { it('should return 400 when no data provided', (done) => { TestUtils.user.createAndLogIn(api, 'standard').then( ({ response }) => { api.put(ENDPOINT) .set('Cookie', TestUtils.responseCookies(response)) .set('Accept', 'application/json') .send() .expect(400, TestUtils.finishTest(done)); } ); }); it('should return 204 when only email provided', (done) => { TestUtils.user.createAndLogIn(api, 'standard').then( ({ response, auth }) => { const data = Utils.duplicateObject(auth); data.email += '.mod'; delete data.password; delete data.firstName; delete data.lastName; api.put(ENDPOINT) .set('Cookie', TestUtils.responseCookies(response)) .set('Accept', 'application/json') .send(data) .expect(204, TestUtils.finishTest(done)); } ); }); it('should return 204 when only firstName provided', (done) => { TestUtils.user.createAndLogIn(api, 'standard').then( ({ response, auth }) => { const data = Utils.duplicateObject(auth); data.firstName += '.mod'; delete data.password; delete data.email; delete data.lastName; api.put(ENDPOINT) .set('Cookie', TestUtils.responseCookies(response)) .set('Accept', 'application/json') .send(data) .expect(204, TestUtils.finishTest(done)); } ); }); it('should return 204 when only lastName provided', (done) => { TestUtils.user.createAndLogIn(api, 'standard').then( ({ response, auth }) => { const data = Utils.duplicateObject(auth); data.lastName += '.mod'; delete data.password; delete data.email; delete data.firstName; api.put(ENDPOINT) .set('Cookie', TestUtils.responseCookies(response)) .set('Accept', 'application/json') .send(data) .expect(204, TestUtils.finishTest(done)); } ); }); it('should return 204 when all data provided', (done) => { TestUtils.user.createAndLogIn(api, 'standard').then( ({ response, auth }) => { const data = Utils.duplicateObject(auth); 
data.email += '.mod'; data.password += '.mod'; data.firstName += '.mod'; data.lastName += '.mod'; api.put(ENDPOINT) .set('Cookie', TestUtils.responseCookies(response)) .set('Accept', 'application/json') .send(data) .expect(204, TestUtils.finishTest(done)); } ); }); }); }); describe(`DELETE ${ENDPOINT}`, () => { describe('when not logged in', () => { it('should return 404', (done) => { api.delete(ENDPOINT) .expect(404, TestUtils.finishTest(done)); }); }); describe('when logged in', () => { it('should return 404', (done) => { TestUtils.user.createAndLogIn(api, 'standard').then( ({ response }) => { api.delete(ENDPOINT) .set('Cookie', TestUtils.responseCookies(response)) .set('Accept', 'application/json') .expect(404, TestUtils.finishTest(done)); } ); }); }); }); });
bidirectional_rnn not taking sequence_length [batch_size]

Trying to do variable sequence lengths with a bidirectional RNN and I'm not able to use a placeholder of size batch_size as the sequence length; instead it wants size num_steps, even though the function description says [batch_size]. In the code below, setting up the graph gives a dimension error. (I am using the latest tensorflow version 0.10.0.)

import tensorflow as tf

flags = tf.flags
FLAGS = flags.FLAGS
flags.DEFINE_bool("use_fp16", False, "Train using 16-bit floats instead of 32bit floats")

def data_type():
    return tf.float16 if FLAGS.use_fp16 else tf.float32

batch_size = 20
num_steps = 40
hidden_size = 64
num_layers = 2
vocab_size = 1000000

embedding_input = tf.placeholder(tf.int32, [batch_size, num_steps])
sequence_length = tf.placeholder(tf.int32, [batch_size])

embeddings = tf.get_variable("embedding", [vocab_size, 300], dtype=data_type(), trainable=False)
embedding_input = tf.nn.embedding_lookup(embeddings, embedding_input)

initializer = tf.random_uniform_initializer(-1, 1)
lstm_fw_cell = tf.nn.rnn_cell.LSTMCell(num_units=hidden_size, initializer=initializer)
lstm_bw_cell = tf.nn.rnn_cell.LSTMCell(num_units=hidden_size, initializer=initializer)
lstm_fw_cell = tf.nn.rnn_cell.MultiRNNCell([lstm_fw_cell] * num_layers)
lstm_bw_cell = tf.nn.rnn_cell.MultiRNNCell([lstm_bw_cell] * num_layers)
lstm_fw_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_fw_cell, output_keep_prob=0.9)
lstm_bw_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_bw_cell, output_keep_prob=0.9)

inputs = tf.nn.dropout(embedding_input, 0.9, noise_shape=None, seed=None)
inputs = [tf.squeeze(x) for x in tf.split(0, batch_size, inputs)]

output, _, _ = tf.nn.bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, inputs, sequence_length=sequence_length, dtype=tf.float32)

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 566, in merge_with
    new_dims.append(dim.merge_with(other[i]))
  File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 133, in merge_with
    self.assert_is_compatible_with(other)
  File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 108, in assert_is_compatible_with
    % (self, other))
ValueError: Dimensions 20 and 40 are not compatible

Please post this on stackoverflow instead. Thanks.
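For anyone else hitting this: the error seems to come from how the inputs list is built rather than from sequence_length itself. tf.nn.bidirectional_rnn expects a length-num_steps list of [batch_size, input_size] tensors, but the snippet splits along axis 0 (the batch axis), producing batch_size tensors of shape [num_steps, 300]. A minimal, unverified sketch of the change against the 0.10-era API, replacing the tf.split line above:

# Split along the time axis instead of the batch axis, then drop the
# singleton dimension, giving num_steps tensors of shape [batch_size, 300].
inputs = [tf.squeeze(x, [1]) for x in tf.split(1, num_steps, inputs)]

output, _, _ = tf.nn.bidirectional_rnn(
    lstm_fw_cell, lstm_bw_cell, inputs,
    sequence_length=sequence_length,  # stays shape [batch_size], as documented
    dtype=tf.float32)

With the inputs shaped this way, a sequence_length placeholder of shape [batch_size] is accepted.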
# Geometric properties of Clausen’s Hypergeometric Function ${}_{3}F_{2}(a,b,c;d,e;z)$ Koneri Chandrasekran Department of Mathematics Jeppiaar SRR Engineering College, Affiliated to Anna University Chennai 603 103, India<EMAIL_ADDRESS>and Devasir John Prabhakaran Department of Mathematics MIT Campus, Anna University Chennai 600 044, India<EMAIL_ADDRESS> ###### Abstract. Clausen’s Hypergeometric Function is given by ${}_{3}F_{2}(a,b,c;d,e;z)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}(c)_{n}}{(d)_{n}(e)_{n}(1)_{n}}z^{n}\,;\,\,\,a,b,c,d,e\in{\mathbb{C}}$ provided $d,\,e\,\neq 0,-1,-2,\cdots$ in the unit disc ${\mathbb{D}}=\\{z\in{\mathbb{C}}\,:\,|z|<1\\}$. In this paper, an operator $\mathcal{I}_{a,b,c}(f)(z)$ involving Clausen’s Hypergeometric Function by means of the Hadamard Product is introduced. Geometric properties of $\mathcal{I}_{a,b,c}(f)(z)$ are obtained based on its Taylor coefficients. ###### Key words and phrases: Clausen’s Hypergeometric Function, Univalent Functions, Starlike Functions, Convex Functions and Close-to-Convex functions ###### 2000 Mathematics Subject Classification: 30C45 ## 1\. Introduction and preliminaries Let $\mathcal{A}$ be the class of functions (1.1) $\displaystyle f(z)=z+\sum_{n=2}^{\infty}\,a_{n}\,z^{n}$ analytic in the open unit disc ${\mathbb{D}}=\\{z\in{\mathbb{C}}\,:\,|z|<1\\}$ of the complex plane. Let ${\mathcal{S}}$, ${\mathcal{S}}^{*}$, $\mathcal{C}$ and $\mathcal{K}$ be the classes of univalent, starlike, convex and close-to-convex functions, respectively. We will be particularly focussing on the classes ${\mathcal{S}}^{*}_{\lambda},\,\lambda>0$, and $\mathcal{C}_{\lambda}$, which are defined by $\mathcal{S}^{*}_{\lambda}\,=\,\left\\{f\in\mathcal{A}\,|\,\left|\frac{zf^{\prime}(z)}{f(z)}-1\right|\,<\,\lambda,\,z\in{\mathbb{D}}\right\\}.$ and $\mathcal{C_{\lambda}}=\left\\{f\in\mathcal{A}\,|\,zf^{\prime}(z)\in\mathcal{S}^{*}_{\lambda}\right\\}.$ The following are sufficient conditions for the function $f$ to be in $\mathcal{S}^{*}_{\lambda}$ and $\mathcal{C}_{\lambda}$, respectively: (1.2) $\displaystyle\sum_{n=2}^{\infty}(n+\lambda-1)|a_{n}|\leq\lambda.$ and (1.3) $\displaystyle\sum_{n=2}^{\infty}\,n\,(n+\lambda-1)|a_{n}|\leq\lambda.$ For $\beta<1$, let ${\mathcal{R}}(\beta)=\\{f\in{\mathcal{A}}:\exists\ \eta\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\,|\,{\rm Re}\,\ [e^{i\eta}(f^{\prime}(z)-\beta)]>0,\quad z\in{\mathbb{D}}\\}.$ Note that when $\beta\geq 0$, we have ${\mathcal{R}}(\beta)\subset{\mathcal{S}}$, and for each $\beta<0,\ \ {\mathcal{R}}(\beta)$ also contains non-univalent functions. The concepts of uniformly convex and uniformly starlike functions were introduced by Goodman [4, 5] and are denoted by UCV and UST respectively. Subsequently Rønning [3] and Ma and Minda [6] independently gave the one-variable analytic characterization of the class UCV as follows: $f\in UCV$ if and only if $\displaystyle Re\left(1+\frac{zf^{{}^{\prime\prime}}(z)}{f^{\prime}(z)}\right)>\left|\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\right|$.
In [4], the condition for the function $f$ defined in (1.1) to belong to $UCV$ is that (1.4) $\displaystyle\sum_{n=2}^{\infty}\,n\,(n-1)|a_{n}|\leq\frac{1}{3}$ The subclass ${\mathcal{S}}_{p}$ of starlike functions was introduced by Rønning [3] in the following way (1.5) $\displaystyle{\mathcal{S}}_{p}=\\{F\in{\mathcal{S}}^{*}/F(z)=zf^{\prime}(z),\,f(z)\in UCV\\}.$ It is easily seen that $f(z)\in{\mathcal{S}}_{p}$ if and only if (1.6) $\displaystyle\left|\frac{zf^{\prime}(z)}{f(z)}-1\right|<Re\left(\frac{zf^{\prime}(z)}{f(z)}\right),\,z\in{\mathbb{D}}$ and has the sufficient condition (1.7) $\displaystyle\sum_{n=2}^{\infty}(2n-1)|a_{n}|\leq 1.$ We observe that ${\mathcal{S}}_{p}$ is the class of functions for which the domain of values of $\displaystyle zf^{\prime}(z)/f(z),$ $z\in{\mathbb{D}}$ is the region $\Omega$ defined by $\Omega=\\{w:Re(w)>|w-1|\\}$. Note that $\Omega$ is the interior of a parabola in the right half plane which is symmetric about the real axis and has vertex at $\left(\frac{1}{2},0\right)$. It is well-known that the function $\phi(z)=1+\frac{2}{\pi^{2}}\left(log\frac{1+\sqrt{z}}{1-\sqrt{z}}\right)^{2}$ maps the unit disc ${\mathbb{D}}$ onto the parabolic region $\Omega$ and hence is in ${\mathcal{S}}_{p}.$ Let $\displaystyle f(z)=z+\sum_{n=2}^{\infty}\,a_{n}\,z^{n}$ and $\displaystyle g(z)=z+\sum_{n=2}^{\infty}\,b_{n}\,z^{n}$ be analytic in ${\mathbb{D}}$. Then the Hadamard product or convolution of $f(z)$ and $g(z)$ is defined by $f(z)*g(z)=z+\sum_{n=2}^{\infty}a_{n}b_{n}z^{n}.$ For any complex variable $a$, define the ascending factorial notation $(a)_{n}=a(a+1)(a+2)\cdots(a+n-1)=a(a+1)_{n-1}$ for $n\geq 1$ and $(a)_{0}=1$ for $a\neq 0$. When $a$ is neither zero nor a negative integer, we have $(a)_{n}=\Gamma(n+a)/\Gamma(a).$ Clausen’s hypergeometric function ${}_{3}F_{2}(a,b,c;d,e;z)$ is defined by (1.8) ${}_{3}F_{2}(a,b,c;d,e;z)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}(c)_{n}}{(d)_{n}(e)_{n}(1)_{n}}z^{n};\,\,\,a,b,c,d,e\in{\mathbb{C}}$ provided $d,\,e\,\neq 0,-1,-2,-3\cdots$, which is an analytic function in the unit disc ${\mathbb{D}}$. Setting $n=1$ in Theorem 1 of Miller and Paris [2] yields the following formula (1.9) ${}_{3}F_{2}(a,b,c;b+1,c+1;1)$ $\displaystyle=$ $\displaystyle\frac{bc}{c-b}\Gamma(1-a)\left[\frac{\Gamma(b)}{\Gamma(1-a+b)}-\frac{\Gamma(c)}{\Gamma(1-a+c)}\right],$ provided that $Re(2-a)>0$, $b>a-1$ and $c>a-1$. Alternatively, we derive the same by putting $m=n=0$ in equation (3) of Shpot and Srivastava [1]. We use the formula (1.9) to prove our main results. For $f\in\mathcal{A}$, we define the operator $\mathcal{I}_{a,b,c}(f)(z)$ (1.10) $\displaystyle\mathcal{I}_{a,b,c}(f)(z)=z\,_{3}F_{2}(a,b,c;b+1,c+1;z)*f(z)=z+\sum_{n=2}^{\infty}A_{n}\,z^{n}$ with $A_{1}=1$ and for $n\geq 1,$ $\displaystyle A_{n}$ $\displaystyle=$ $\displaystyle\frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{(b+1)_{n-1}(c+1)_{n-1}(1)_{n-1}}\,a_{n}.$ The following lemma is useful in proving our main results. ###### Lemma 1.11. Let $a,b,c>0$. Then we have the following 1. (1) For $b,c>a-1$ $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\frac{bc\,\Gamma(1-a)}{c-b}\left[\frac{(1-b)\Gamma(b)}{\Gamma(1-a+b)}-\frac{(1-c)\Gamma(c)}{\Gamma(1-a+c)}\right].$ 2.
(2) For $b,c>a-1$ $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)^{2}(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\frac{bc\,\Gamma(1-a)}{c-b}\left[\frac{(1-b)^{2}\Gamma(b)}{\Gamma(1-a+b)}-\frac{(1-c)^{2}\Gamma(c)}{\Gamma(1-a+c)}\right].$ 3. (3) For $b,c>a-1$. Then $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)^{3}(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\frac{bc\,\Gamma(1-a)}{c-b}\left[\frac{[(1-b)^{3}-b^{2}]\Gamma(b)}{\Gamma(1-a+b)}-\frac{[(1-c)^{3}-c^{2}]\Gamma(c)}{\Gamma(1-a+c)}\right].$ 4. (4) For $a\neq 1,\,b\neq 1,\,$ and $c\neq 1$ with$\,b,c>Max\\{0,a-1\\}$ $\displaystyle\sum_{n=0}^{\infty}\frac{(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{(n+1)}}$ $\displaystyle=$ $\displaystyle\frac{bc}{(a-1)(b-1)(c-1)}$ $\displaystyle\times\left[\frac{\Gamma(2-a)}{c-b}\left(\frac{(c-1)\Gamma(b)}{\Gamma(1-a+b)}-\frac{(b-1)\Gamma(c)}{\Gamma(1-a+c)}\right)-1\right].$ Proof. (1) Using ascending factorial notation, we can write $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\sum_{n=0}^{\infty}\frac{(a)_{n+1}\,(b)_{n+1}\,(c)_{n+1}}{(b+1)_{n+1}\,(c+1)_{n+1}\,(1)_{n-1}}+\sum_{n=0}^{\infty}\frac{(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ Using the formula (1.9) and the fact that $\Gamma(1-a)=-a\Gamma(-a)$, the above reduces to $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\frac{bc}{c-b}\,\Gamma(1-a)\,\left[\frac{(1-b)\Gamma(b)}{\Gamma(1-a+b)}-\frac{(1-c)\Gamma(c)}{\Gamma(1-a+c)}\right]$ Hence, (1) is proved. (2) Using ascending factorial notation and $(n+1)^{2}=n(n-1)+3n+1$, we can write $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)^{2}(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}\frac{(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n-2}}+\sum_{n=1}^{\infty}\frac{3\,(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n-1}}+\sum_{n=0}^{\infty}\frac{(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ Using the formula (1.9) and $\Gamma(1-a)=-a\Gamma(-a)$, the above reduces to $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)^{2}(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\frac{bc\,\Gamma(1-a)}{c-b}\times\left[\frac{(1-b)^{2}\Gamma(b)}{\Gamma(1-a+b)}-\frac{(1-c)^{2}\Gamma(c)}{\Gamma(1-a+c)}\right]$ Which completes the proof of (2). (3) Using shifted factorial notation and by adjusting coefficients suitably, we can write $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)^{3}(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\sum_{n=0}^{\infty}\frac{(a)_{n+3}\,(b)_{n+3}\,(c)_{n+3}}{(b+1)_{n+3}\,(c+1)_{n+3}\,(1)_{n}}+5\sum_{n=0}^{\infty}\frac{(a)_{n+2}\,(b)_{n+2}\,(c)_{n+2}}{(b+1)_{n+2}\,(c+1)_{n+2}\,(1)_{n}}$ $\displaystyle+6\sum_{n=0}^{\infty}\frac{(a)_{n+1}\,(b)_{n+1}\,(c)_{n+1}}{(b+1)_{n+1}\,(c+1)_{n+1}\,(1)_{n}}+\sum_{n=0}^{\infty}\frac{(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ Using the formula (1.9), we have $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)^{3}(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n}}$ $\displaystyle=$ $\displaystyle\frac{bc\,\Gamma(1-a)}{c-b}\times\left[\frac{[(1-b)^{3}-b^{2}]\Gamma(b)}{\Gamma(1-a+b)}-\frac{[(1-c)^{3}-c^{2}]\Gamma(c)}{\Gamma(1-a+c)}\right]$ and the conclusion follows. 
(4) Let $a$ be a positive real number such that $a\neq 1$, $b\neq 1$ and $c\neq 1$ with $b,c>Max\\{0,a-1\\}$. We find that $\displaystyle\sum_{n=0}^{\infty}\frac{(a)_{n}\,(b)_{n}\,(c)_{n}}{(b+1)_{n}\,(c+1)_{n}\,(1)_{n+1}}$ $\displaystyle=$ $\displaystyle\frac{bc}{(a-1)(b-1)(c-1)}\left[\frac{\Gamma(2-a)}{c-b}\left(\frac{(c-1)\Gamma(b)}{\Gamma(1-a+b)}-\frac{(b-1)\Gamma(c)}{\Gamma(1-a+c)}\right)-1\right]$ and the result follows. ∎ ## 2\. Starlikeness of $z_{3}F_{2}(a,b,c;b+1,c+1;z)$ ###### Theorem 2.1. Let $a\in{\mathbb{C}}\backslash\\{0\\}$, $b>|a|-1$ and $c>|a|-1.$ The sufficient condition for the function $z_{3}F_{2}(a,b,c;b+1,c+1;z)$ belongs to the class ${\mathcal{S}}^{*}_{\lambda},\,0<\lambda\leq 1$ is that (2.2) $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left[\frac{\Gamma(b)\,(\lambda-b)}{\Gamma(1-|a|+b)}-\frac{\Gamma(c)\,(\lambda-c)}{\Gamma(1-|a|+c)}\right]$ $\displaystyle\leq$ $\displaystyle 2\lambda$ Proof. Let $f(z)=z_{3}F_{2}(a,b,c;b+1,c+1;z)$. Then by the equation (1.2), it is enough to show that $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}(n+\lambda-1)|a_{n}|\leq\lambda.$ Since $f\in{\mathcal{S}}$, we have $|a_{n}|\leq 1$, and using the fact that $|(a)_{n}|\leq(|a|)_{n}$, $\displaystyle T$ $\displaystyle\leq$ $\displaystyle\sum_{n=2}^{\infty}(n-1+\lambda)\left(\frac{(|a|)_{n-1}(b)_{n-1}(c)_{n-1}}{(b+1)_{n-1}(c+1)_{n-1}(1)_{n-1}}\right)$ Using the formula (1.9) and the result (1) of Lemma 1.11 in above equation, we get $\displaystyle T$ $\displaystyle\leq$ $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left[\frac{\Gamma(b)(\lambda-b)}{\Gamma(1-|a|+b)}-\frac{\Gamma(c)(\lambda-c)}{\Gamma(1-|a|+c)}\right]-\lambda$ Because of (2.2), the above expression is bounded by $\lambda$ and hence $\displaystyle T$ $\displaystyle\leq$ $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left[\frac{\Gamma(b)(\lambda-b)}{\Gamma(1-|a|+b)}-\frac{\Gamma(c)(\lambda-c)}{\Gamma(1-|a|+c)}\right]-\lambda\leq\lambda$ Hence $z_{3}F_{2}(a,b,c;b+1,c+1;z)$ belongs to the class ${\mathcal{S}}^{*}_{\lambda}.$ ∎ ###### Theorem 2.3. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,c>0,\,b>0,\,|a|\neq 1,\,b\neq 1,\,c\neq 1,\,b>|a|-1$ and $c>|a|-1.$ For $0<\lambda\leq 1$, assume that (2.4) $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left[\left(\frac{b-\lambda}{b-1}\right)\frac{\Gamma(b)}{\Gamma(1-|a|+b)}-\left(\frac{c-\lambda}{c-1}\right)\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\right]$ $\displaystyle\leq\lambda\left(1+\frac{1}{2(1-\beta)}\right)+\frac{(\lambda-1)bc}{(|a|-1)(b-1)(c-1)}$ Then the integral operator $\mathcal{I}_{a,b,c}(f)$ maps $\mathcal{R}(\beta)$ into ${\mathcal{S}}^{*}_{\lambda}$. Proof. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,c>0,\,b>0,\,|a|\neq 1,\,b\neq 1,\,c\neq 1,\,b>|a|-1$ and $c>|a|-1.$ Suppose that $\displaystyle f(z)$ is defined in (1.1) is in $\mathcal{R}(\beta)$. By MacGregor [7], We have (2.5) $\displaystyle|a_{n}|\leq\frac{2(1-\beta)}{n}.$ Consider the integral operator $\mathcal{I}_{a,b,c}(f)$ is defined by (1.10). 
According to the equation (1.2), we need to show that $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}(n+\lambda-1)|A_{n}|\leq\lambda.$ Then, we have $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}[(n-1)+\lambda]\left|\frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{(b+1)_{n-1}(c+1)_{n-1}(1)_{n-1}}\right||a_{n}|$ Using (2.5) in above, we have $\displaystyle T$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\left[\sum_{n=1}^{\infty}(n+1)\frac{(|a|)_{n}(b)_{n}(c)_{n}}{(b+1)_{n}(c+1)_{n}(1)_{n}(n+1)}\right.$ $\displaystyle\left.+(\lambda-1)\sum_{n=1}^{\infty}\frac{(|a|)_{n}(b)_{n}(c)_{n}}{(b+1)_{n}(c+1)_{n}(1)_{n}}\left(\frac{1}{n+1}\right)\right]:=T_{1}$ Using the formula given by (1.9) and the results (1) and (4) of Lemma 1.11, we find that $\displaystyle T_{1}$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\left[\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\right)-1\right.$ $\displaystyle\left.+(\lambda-1)\left(\frac{bc}{(a-1)(b-1)(c-1)}\left(\left(\frac{2-a}{c-b}\right)\left(\frac{(c-1)\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(b-1)\Gamma(c)}{\Gamma(1-|a|+c)}\right)\right.\right.\right.$ $\displaystyle\left.\left.\left.-1\right)-1\right)\right]$ Using the fact that $\Gamma(a+1)=a\Gamma(a)$, we have $\displaystyle T_{1}$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\left[\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(b)}{\Gamma(1-|a|+b)}\left(\frac{b-\lambda}{b-1}\right)-\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\left(\frac{c-\lambda}{c-1}\right)\right)\right.$ $\displaystyle\left.+\frac{(\lambda-1)\,bc}{(a-1)(b-1)(c-1)}-\lambda\right]$ Under the condition given by (2.4) $\displaystyle 2(1-\beta)\left[\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(b)}{\Gamma(1-|a|+b)}\left(\frac{b-\lambda}{b-1}\right)-\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\left(\frac{c-\lambda}{c-1}\right)\right)\right.$ $\displaystyle\left.+\frac{(\lambda-1)\,bc}{(a-1)(b-1)(c-1)}-\lambda\right]$ $\displaystyle\leq$ $\displaystyle\lambda$ Thus, the inequality $T\leq T_{1}\leq\lambda$ and (2.5) are holds. Therefore, we conclude that the operator $\mathcal{I}_{a,b,c}(f)$ maps $\mathcal{R}(\beta)$ into ${\mathcal{S}}^{*}_{\lambda}$. Which completes the proof of the theorem. ∎ When $\lambda=1$, we get the following results from Theorem 2.3. ###### Corollary 2.6. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>|a|-1$ and $c>|a|-1.$ Assume that $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left[\frac{\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\right]\leq 1+\frac{1}{2(1-\beta)}$ Then the integral operator $\mathcal{I}_{a,b,c}(f)$ maps $\mathcal{R}(\beta)$ into ${\mathcal{S}}^{*}_{1}$. ###### Theorem 2.7. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>|a|-1$ and $c>|a|-1$. Suppose that $a$ and $0<\lambda\leq 1$ satisfy the condition (2.8) $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left(\frac{(1-b)(\lambda-b)\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(1-c)(\lambda-c)\Gamma(c)}{\Gamma(1-|a|+c)}\right)$ $\displaystyle\leq$ $\displaystyle 2\lambda$ then the integral operator $\mathcal{I}_{a,b,c}(f)$ maps $\mathcal{S}$ to ${\mathcal{S}}^{*}_{\lambda}$. Proof. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>|a|-1$ and $c>|a|-1$. Suppose that $\displaystyle f(z)$ is defined by (1.1) is in ${\mathcal{S}}$, then we know that (2.9) $\displaystyle|a_{n}|\leq n.$ Suppose that the integral operator $\mathcal{I}_{a,b,c}(f)$ is defined by (1.10). 
In the view of the equation (1.2), it is enough to show that $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}(n+\lambda-1)|A_{n}|\leq\lambda.$ Using the fact that $|(a)_{n}|\leq(|a|)_{n}$ and the equation (2.9) in above, we have $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}(n-1+\lambda)\left|\frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{(b+1)_{n-1}(c+1)_{n-1}(1)_{n-1}}\right||a_{n}|$ Using (1) and (2) of Lemma 1.11, we find that $\displaystyle T$ $\displaystyle\leq$ $\displaystyle\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{(1-b)\Gamma(b)}{\Gamma(1-|a|+b)}[\lambda-b]-\frac{(1-c)\Gamma(c)}{\Gamma(1-|a|+c)}[\lambda-c]\right)-\lambda$ By the condition given by (2.8), the above expression is bounded by $\lambda$ and hence $\displaystyle\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{(1-b)(\lambda-b)\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(1-c)(\lambda-c)\Gamma(c)}{\Gamma(1-|a|+c)}\right)-\lambda$ $\displaystyle\leq$ $\displaystyle\lambda$ Under the stated condition, The integral operator $\mathcal{I}_{a,b,c}(f)$ maps ${\mathcal{S}}$ into ${\mathcal{S}}^{*}_{\lambda}$. Which gives the appropriate conclusion. ∎ ## 3\. Convexity of $z_{3}F_{2}(a,b,c;b+1,c+1;z)$ ###### Theorem 3.1. Let $a\in{\mathbb{C}}\backslash\\{0\\}$, $b>|a|-1$, $c>|a|-1$ and $0<\lambda\leq 1$. The sufficient condition for the function $z_{3}F_{2}(a,b,c;b+1,c+1;z)$ belongs to the class $\mathcal{C}_{\lambda}$ is that $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left[\frac{(1-b)\,(\lambda-b)\,\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(1-b)(\lambda-c)\Gamma(c)}{\Gamma(1-|a|+c)}\right]\leq 2\lambda.$ Proof. The proof is similar to Theorem 2.7. So we omit the details. ∎ ###### Theorem 3.2. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>0,\,c>0,\,|a|\neq 1,\,b\neq 1$, $c\neq 1,\,b>|a|-1$, $c>|a|-1$ and $0<\lambda\leq 1$. For $0\leq\beta<1$, assume that $\displaystyle\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{(\lambda-b)\,\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(\lambda-c)\,\Gamma(c)}{\Gamma(1-|a|+c)}\right)$ $\displaystyle\leq$ $\displaystyle\lambda\left(\frac{1}{2(1-\beta)}+1\right)$ then the operator $\mathcal{I}_{a,b,c}(f)$ maps $\mathcal{R}(\beta)$ into $\mathcal{C}_{\lambda}$. Proof. The proof is similar to Theorem 2.1. So we omit the details. ∎ ###### Theorem 3.3. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>|a|-1$ and $c>|a|-1$. Suppose that $a$ and $0<\lambda\leq 1$ satisfy the condition (3.4) $\displaystyle\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{[b^{2}-b-b^{3}+\lambda(1-b)^{2}]\Gamma(b)}{\Gamma(1-|a|+b)}\right.$ $\displaystyle\left.-\frac{[c^{2}-c-c^{3}+\lambda(1-c)^{2}]\Gamma(c)}{\Gamma(1-|a|+c)}\right)$ $\displaystyle\leq$ $\displaystyle 2\lambda$ then $\mathcal{I}_{a,b,c}(f)$ maps ${\mathcal{S}}$ into $\mathcal{C}_{\lambda}$. Proof. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>|a|-1$ and $c>|a|-1$. Suppose that $\displaystyle f(z)$ is defined by (1.1) is in ${\mathcal{S}}$, then (3.5) $\displaystyle|a_{n}|\leq n.$ Suppose the integral operator $\mathcal{I}_{a,b,c}(f)$ defined by (1.10). 
In view of the sufficient condition given by (1.3), it is enough to prove that $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}\,n\,(n+\lambda-1)\,|B_{n}|\leq\lambda.$ i.e., $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}n\,(n+\lambda-1)\,\left|\frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{(b+1)_{n-1}(c+1)_{n-1}(1)_{n-1}}\right|\,|a_{n}|\leq\lambda.$ Using the fact that $|(a)_{n}|\leq(|a|)_{n}$ and the equation (3.5) in the above, we have $\displaystyle T$ $\displaystyle\leq$ $\displaystyle\sum_{n=0}^{\infty}\frac{(n+1)^{3}\,(|a|)_{n}(b)_{n}(c)_{n}}{(b+1)_{n}(c+1)_{n}(1)_{n}}+(\lambda-1)\sum_{n=0}^{\infty}\frac{(n+1)^{2}\,(|a|)_{n}(b)_{n}(c)_{n}}{(b+1)_{n}(c+1)_{n}(1)_{n}}-\lambda$ Using (2) and (3) of Lemma 1.11, we find that $\displaystyle T$ $\displaystyle\leq$ $\displaystyle\frac{2\,bc}{c-b}\Gamma(1-|a|)\left(\frac{[(1-b)^{3}-b^{2}]\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{[(1-c)^{3}-c^{2}]\Gamma(c)}{\Gamma(1-|a|+c)}\right)$ $\displaystyle+\frac{bc(\lambda-1)}{c-b}\Gamma(1-|a|)\left(\frac{(1-b)^{2}\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(1-c)^{2}\Gamma(c)}{\Gamma(1-|a|+c)}\right)-\lambda$ By the equation (3.4), the above expression is bounded by $\lambda$ and hence $\displaystyle\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{[b^{2}-b-b^{3}+\lambda(1-b)^{2}]\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{[c^{2}-c-c^{3}+\lambda(1-c)^{2}]\Gamma(c)}{\Gamma(1-|a|+c)}\right)$ $\displaystyle\leq$ $\displaystyle 2\lambda$ Hence, the integral operator $\mathcal{I}_{a,b,c}(f)$ maps ${\mathcal{S}}$ into $\mathcal{C}_{\lambda}$ and the proof is complete. ∎ ## 4\. Admissibility condition of $z_{3}F_{2}(a,b,c;b+1,c+1;z)$ in Parabolic domain. ###### Theorem 4.1. Let $a\in{\mathbb{C}}\backslash\\{0\\}$, $b>|a|-1$ and $c>|a|-1.$ The sufficient condition for the function $z_{3}F_{2}(a,b,c;b+1,c+1;z)$ to belong to the class $UCV$ is that $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left[\frac{(1-c)\,\Gamma(c+1)\,}{\Gamma(1-|a|+c)}-\frac{(1-b)\,\Gamma(b+1)\,}{\Gamma(1-|a|+b)}\right]$ $\displaystyle\leq$ $\displaystyle\frac{1}{3}.$ Proof. The proof is similar to Theorem 2.7. So we omit the details. ∎ ###### Theorem 4.2. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>0,\,c>0,\,b>|a|-1$ and $c>|a|-1$. For $0\leq\beta<1$, assume that (4.3) $\displaystyle\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(c+1)}{\Gamma(1-|a|+c)}-\frac{\Gamma(b+1)}{\Gamma(1-|a|+b)}\right)$ $\displaystyle\leq$ $\displaystyle\frac{1}{6(1-\beta)}$ then $I_{a,b,c}(f)$ maps $\mathcal{R}(\beta)$ into $UCV$. Proof. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>0,\,c>0,\,b>|a|-1$ and $c>|a|-1$. Consider the integral operator $\mathcal{I}_{a,b,c}(f)$ given by (1.10).
According to the sufficient condition given by (1.4), it is enough to show that $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}n\,(n-1)\,|B_{n}|\leq\frac{1}{3}.$ Using the fact that $|(a)_{n}|\leq(|a|)_{n}$ and the equation (2.5) in the above, we have $\displaystyle T$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\sum_{n=2}^{\infty}n\,(n-1)\,\left|\frac{(|a|)_{n-1}(b)_{n-1}(c)_{n-1}}{(b+1)_{n-1}(c+1)_{n-1}(1)_{n-1}n}\right|$ Using the formula given by (1.9) and (1) of Lemma 1.11, we find that $\displaystyle T$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\left[\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{(1-b-1)\,\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(1-c-1)\,\Gamma(c)}{\Gamma(1-|a|+c)}\right)\right]$ Using the fact that $\Gamma(a+1)=a\Gamma(a)$, the above reduces to $\displaystyle T$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\left[\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(c+1)}{\Gamma(1-|a|+c)}-\frac{\Gamma(b+1)}{\Gamma(1-|a|+b)}\right)\right]$ By the formula given by (4.3), the above expression is bounded by $\frac{1}{3}$ and hence $\displaystyle 2(1-\beta)\left[\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(c+1)}{\Gamma(1-|a|+c)}-\frac{\Gamma(b+1)}{\Gamma(1-|a|+b)}\right)\right]$ $\displaystyle\leq$ $\displaystyle\frac{1}{3}$ By hypothesis, the operator $\mathcal{I}_{a,b,c}(f)$ maps $\mathcal{R}(\beta)$ into $UCV$ and the result follows. ∎ ###### Theorem 4.4. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>|a|-1$ and $c>|a|-1$. Assume that $\displaystyle\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{(b^{2}-b-b^{3})\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(c^{2}-c-c^{3})\Gamma(c)}{\Gamma(1-|a|+c)}\right)$ $\displaystyle\leq$ $\displaystyle\frac{1}{3}$ then the integral operator $\mathcal{I}_{a,b,c}(f)$ maps $\mathcal{S}$ to $UCV$. Proof. The proof is similar to Theorem 3.3. So we omit the details. ∎ ## 5\. Inclusion Properties of $z\,_{3}F_{2}(a,b,c;b+1,c+1;z)$ in ${\mathcal{S}}_{p}$-CLASS ###### Theorem 5.1. Let $a\in{\mathbb{C}}\backslash\\{0\\}$, $b>|a|-1$ and $c>|a|-1.$ The sufficient condition for the function $z\,_{3}F_{2}(a,b,c;b+1,c+1;z)$ to belong to the class $S_{p}$ is that $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left[\frac{(1-2b)\,\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(1-2c)\,\Gamma(c)}{\Gamma(1-|a|+c)}\right]$ $\displaystyle\leq$ $\displaystyle 2.$ Proof. The proof is similar to Theorem 4.2. So we omit the details. ∎ ###### Theorem 5.2. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>0,\,c>0,\,|a|\neq 1,\,b\neq 1$, $c\neq 1,\,b>|a|-1$ and $c>|a|-1$. Assume that (5.3) $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left(\left(\frac{2b-1}{b-1}\right)\frac{\Gamma(b)}{\Gamma(1-|a|+b)}-\left(\frac{2c-1}{c-1}\right)\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\right)$ $\displaystyle+\frac{bc}{(|a|-1)(b-1)(c-1)}\leq\frac{1}{2(1-\beta)}+1.$ then $I_{a,b,c}(f)$ maps $\mathcal{R}(\beta)$ into the $S_{p}$ class. Proof. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>0,\,c>0,\,|a|\neq 1,\,b\neq 1$, $c\neq 1,\,b>|a|-1$ and $c>|a|-1$. Consider the integral operator $\mathcal{I}_{a,b,c}(f)$ given by (1.10).
In view of the equation (1.7), it is enough to show that $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}(2n-1)|B_{n}|\leq 1.$ (or) $\displaystyle T$ $\displaystyle=$ $\displaystyle\sum_{n=2}^{\infty}(2n-1)\left|\frac{(a)_{n-1}(b)_{n-1}(c)_{n-1}}{(b+1)_{n-1}(c+1)_{n-1}(1)_{n-1}}\right|\,|a_{n}|\leq 1.$ Using the inequality $|(a)_{n}|\leq(|a|)_{n}$ and the equation (2.5) in the above, we have $\displaystyle T$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\left[2\sum_{n=0}^{\infty}\frac{(|a|)_{n}(b)_{n}(c)_{n}}{(b+1)_{n}(c+1)_{n}(1)_{n}}-\sum_{n=0}^{\infty}\frac{(|a|)_{n}(b)_{n}(c)_{n}}{(b+1)_{n}(c+1)_{n}(1)_{n+1}}-1\right]:=T_{1}$ Using the equation (1.9) and the result (3) of Lemma 1.11, we find that $\displaystyle T_{1}$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\left[2\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\right)\right.$ $\displaystyle\left.-\frac{bc}{(|a|-1)(b-1)(c-1)}\left(\left(\frac{\Gamma(2-a)}{c-b}\right)\left(\frac{(c-1)\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(b-1)\Gamma(c)}{\Gamma(1-|a|+c)}\right)-1\right)-1\right]$ Using the fact that $\Gamma(a+1)=a\Gamma(a)$, the above reduces to $\displaystyle T$ $\displaystyle\leq$ $\displaystyle 2(1-\beta)\left[\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(b)}{\Gamma(1-|a|+b)}\left(\frac{2b-1}{b-1}\right)\right.\right.$ $\displaystyle\left.\left.-\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\left(\frac{2c-1}{c-1}\right)\right)+\frac{bc}{(|a|-1)(b-1)(c-1)}-1\right]$ By the condition (5.3), the above expression is bounded by 1 and hence $\displaystyle 2(1-\beta)\left[\frac{bc}{c-b}\Gamma(1-|a|)\left(\frac{\Gamma(b)}{\Gamma(1-|a|+b)}\left(\frac{2b-1}{b-1}\right)-\frac{\Gamma(c)}{\Gamma(1-|a|+c)}\left(\frac{2c-1}{c-1}\right)\right)\right.$ $\displaystyle\left.+\frac{bc}{(|a|-1)(b-1)(c-1)}-1\right]$ $\displaystyle\leq$ $\displaystyle 1$ Under the stated condition, the operator $\mathcal{I}_{a,b,c}(f)$ maps $\mathcal{R}(\beta)$ into ${\mathcal{S}}_{p}$ and the proof is complete. ∎ ###### Theorem 5.4. Let $a\in{\mathbb{C}}\backslash\\{0\\},\,b>|a|-1$ and $c>|a|-1$. Suppose that $a,\,b$ and $c$ satisfy the condition $\displaystyle\frac{bc}{c-b}\,\Gamma(1-|a|)\left(\frac{(1-b)(1-2b)\Gamma(b)}{\Gamma(1-|a|+b)}-\frac{(1-c)(1-2c)\Gamma(c)}{\Gamma(1-|a|+c)}\right)$ $\displaystyle\leq$ $\displaystyle 2.$ then $I_{a,b,c}(f)$ maps ${\mathcal{S}}$ into the ${\mathcal{S}}_{p}$ class. Proof. The proof is similar to Theorem 4.1. So we omit the details. ∎ ## 6\. Conclusion and Future Scope We derived geometric properties for Clausen’s Hypergeometric Series ${}_{3}F_{2}(a,b,c;b+1,c+1;z)$ in which the numerator and denominator parameters differ by arbitrary negative integers. It would be of great interest to determine conditions on the parameters $a,b,c,\lambda$ and $\beta$ so that the integral operator defined by (1.10) is associated with some classes of univalent functions via Dixon’s summation formula; we state this as an open problem below. Problem: To determine the conditions on the parameters $a,\,b,\,c,$ such that the hypergeometric function ${}_{3}F_{2}(a,b,c;1+a-b,\,1+a-c)$ associated with Dixon’s summation formula, or its equivalent, belongs to classes such as $\mathcal{S}^{*}_{\lambda},\,\mathcal{C}_{\lambda}$, $\mathcal{UCV}$ and ${\mathcal{S}}_{p}$. ## References * [1] M. A. Shpot and H. M. Srivastava, The Clausenian hypergeometric function ${}_{3}F_{2}$ with unit argument and negative integral parameter differences, Appl. Math. Comput.
259 (2015), 819–827. * [2] A. R. Miller and R. B. Paris, Clausen’s series ${}_{3}F_{2}(1)$ with integral parameter differences and transformations of the hypergeometric function ${}_{2}F_{2}(x)$, Integral Transforms Spec. Funct. 23 (2012), no. 1, 21–33. * [3] F. Rønning, Uniformly convex functions and a corresponding class of starlike functions, Proc. Amer. Math. Soc. 118 (1993), no. 1, 189–196. * [4] A. W. Goodman, On uniformly convex functions, Ann. Polon. Math. 56 (1991), no. 1, 87–92. * [5] A. W. Goodman, On uniformly starlike functions, J. Math. Anal. Appl. 155 (1991), no. 2, 364–370. * [6] W. C. Ma and D. Minda, Uniformly convex functions, Ann. Polon. Math. 57 (1992), no. 2, 165–175. * [7] T. H. MacGregor, Functions whose derivative has a positive real part, Trans. Amer. Math. Soc. 104 (1962), 532–537.
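As a quick numerical sanity check of the summation formula (1.9), both sides can be compared for sample parameters satisfying $Re(2-a)>0$ and $b,c>a-1$; a minimal sketch using Python's mpmath, with arbitrarily chosen admissible values:

from mpmath import mp, hyp3f2, gamma

mp.dps = 30
a, b, c = mp.mpf('0.5'), mp.mpf('2'), mp.mpf('3')   # Re(2-a)>0 and b, c > a-1

lhs = hyp3f2(a, b, c, b + 1, c + 1, 1)
rhs = (b * c / (c - b)) * gamma(1 - a) * (gamma(b) / gamma(1 - a + b) - gamma(c) / gamma(1 - a + c))

print(lhs, rhs)   # the two values should agree to working precision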
#include "gtest/gtest.h" #include "meta/for_each.hpp" TEST(TupleForEach, ForEach) { int sum = 0; pmeta::tuple_for_each( std::make_tuple(42, 21), [&sum](auto i) { sum += i; } ); EXPECT_EQ(sum, 63); } TEST(TupleForEach, ForEachRef) { int i = 0; double d = 0; pmeta::tuple_for_each( std::make_tuple(std::ref(i), std::ref(d)), [&i, &d](auto &&v) { if constexpr (std::is_same<pmeta_typeof(v), int>::value) i = 42; else if constexpr (std::is_same<pmeta_typeof(v), double>::value) d = 84; else { i = -42; d = -42; } } ); EXPECT_EQ(i, 42); EXPECT_EQ(d, 84); } TEST(TupleForEach, ForEachType) { int i = 0; double d = 0; pmeta::tuple_for_each( std::tuple<pmeta::type<int>, pmeta::type<double>>(), [&i, &d](auto &&t) { if constexpr (std::is_same<pmeta_wrapped(t), int>::value) i = 42; else if constexpr (std::is_same<pmeta_wrapped(t), double>::value) d = 84; else { i = -42; d = -42; } } ); EXPECT_EQ(i, 42); EXPECT_EQ(d, 84); }
Dynamic selection of interworking functions in a communication system ABSTRACT The invention provides techniques for selecting, on a dynamic basis, an interworking function (IWF) that can modify a communication protocol to a particular format required by bridged terminal equipment in a communication system. The IWF can be selected to ensure compatibility between transmission bandwidth, coding and other format parameters of a call and the corresponding parameters of its destination terminal in the system. An IWF in accordance with the invention may be utilized to allow a user to bind to different terminals having different capabilities over the duration of a given call. An IWF in accordance with the invention may also be used to insert additional data, retrieved from a database of the switch, into a reverse portion of the call directed from the destination terminal to the source terminal. The invention can thus be used to ensure that the established bandwidth between the destination terminal and the source terminal is substantially bidirectionally symmetric. RELATED APPLICATIONS The present application is related to U.S. patent application Ser. No. 09/031,580 entitled “Dynamic Binding and Bridging in a Communication System,” and U.S. patent application Ser. No. 09/031,574 entitled “Proximity-Based Registration in a Communication System,” each filed concurrently herewith in the names of inventors Albert D. Baker, Vincent H. Choy, Venkatesh G. Iyengar, James C. Liu and Eileen P. Rose, and assigned to the assignee of the present application. FIELD OF THE INVENTION The invention relates generally to communication systems, and more particularly to business communication systems in which calls or other incoming communications are directed by a switch to desk sets, wireless mobile telephones, or other types of user terminals within the system. BACKGROUND OF THE INVENTION A typical business communication system includes an enterprise switch which directs calls from one or more incoming trunks to various user terminals. The user terminals may include, for example, wired desk sets, wireless desk sets, wireless mobile telephones and advanced terminals such as computers or video telephones. A shared communication facility within such a system is generally represented in both the switch and the corresponding terminals as a “Call Appearance” (CA). When a CA to a shared facility is presented on multiple user terminals, and multiple users are allowed to access this facility, the CA is known as a “bridged appearance.” In existing systems, such bridged appearances can generally only be defined at system administration time, for example, during an initial set-up and configuration of the system or during a subsequent system-level reconfiguration. As a result, conventional bridged appearances remain static until the system is re-administered. This conventional static architecture is generally considered best suited for wired terminals, where the operational expectation is that the user associated with a given terminal will be at his or her desk, and will be the primary or exclusive user of that terminal. However, in systems which support wireless terminals and other more advanced equipment, users will typically have more than one terminal available to them, and may also be allowed to use the advanced equipment on a demand basis. For example, a given set of users may each have a wired deskset, a simple mobile telephone, and access on a random demand basis to an advanced shared resource such as a video telephone. 
Unfortunately, the above-noted conventional static bridging techniques are unable to create a dynamic bridged appearance that exists on, for example, both the mobile telephone of a given user and an advanced shared resource which happens to be located in proximity to the mobile telephone at a particular time. The conventional techniques therefore generally do not provide the user with an option of answering an incoming call directed to the mobile telephone at a co-located advanced terminal, unless the advanced terminal has been bridged with the mobile telephone during system administration. As a result, the user will often be unable to access the more sophisticated features of a nearby advanced terminal for accepting calls directed to the mobile, or placing calls as a known originator. SUMMARY OF THE INVENTION This invention provides a system in which users can be associated with a system terminal on a demand basis by creating a bridged call appearance that exists, for example, on both a simple mobile telephone and a co-located complex terminal such as a video telephone. This invention thus allows the creation of bridged call appearances on a dynamic demand basis. In an illustrative embodiment, a temporary association is established between a mobile terminal and at least one other system terminal. While the mobile terminal is “registered” in this manner to the other terminal, the mobile user can request permission to utilize the functions of the other terminal in order to, for example, receive incoming calls or place outgoing calls. The temporary association may be established based on a determination of the proximity of the mobile terminal to the other terminal, such that the mobile registers to different complex system terminals as it moves between different cells of the system. The temporary relationship between the mobile and a given other terminal may therefore be terminated when the mobile is no longer in proximity to that terminal. Proximity-based registration in accordance with the invention may also be implemented in an embodiment in which the proximity of a given user to a system terminal is determined by detecting a signal transmitted by a beacon device carried by the user. The dynamic binding and bridging of the invention may be implemented using state-based processing. In an example of this type of implementation, the mobile at a given point in time may be in one of a number of states of operation, such as the following five states: (1) a null state in which there is no temporary association between the mobile and any other terminal of the system; (2) a registered state in which the temporary association is established, but the mobile user has not obtained permission to access the functions of the other terminal; (3) a bound active state in which the temporary association exists and the user is actively accessing the functions of the other terminal to conduct an on-going call; (4) a bound inactive state in which the temporary association exists and the user has obtained permission to access the functions of the other terminal, but is not currently accessing the functions; and (5) a bound alerting state in which the temporary association exists, the user has obtained permission to access the functions of the other terminal, and an in-coming call directed to the mobile generates an alerting indication on the other terminal. 
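Purely as an illustration of the five-state model enumerated above (this is a sketch, not code from the patent), the states and a few representative transitions might look like:

from enum import Enum, auto

class BindingState(Enum):
    NULL = auto()            # no temporary association with any other terminal
    REGISTERED = auto()      # association exists, no permission to use the terminal yet
    BOUND_INACTIVE = auto()  # permission granted, terminal not currently in use
    BOUND_ACTIVE = auto()    # user is conducting a call through the bound terminal
    BOUND_ALERTING = auto()  # incoming call to the mobile alerts on the bound terminal

# A few transitions implied by the description above (illustrative, not exhaustive).
ALLOWED = {
    (BindingState.NULL, BindingState.REGISTERED),                # mobile registers to a nearby terminal
    (BindingState.REGISTERED, BindingState.BOUND_INACTIVE),      # user obtains permission to bind
    (BindingState.BOUND_INACTIVE, BindingState.BOUND_ALERTING),  # incoming call generates an alert
    (BindingState.BOUND_ALERTING, BindingState.BOUND_ACTIVE),    # call answered on the bound terminal
    (BindingState.BOUND_INACTIVE, BindingState.REGISTERED),      # binding released, registration kept
}

def transition(current, target):
    if (current, target) not in ALLOWED:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target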
Another aspect of the invention provides techniques for selecting, on a dynamic basis, an interworking function (IWF) that can modify a communication protocol to the particular format required by the bridged terminal equipment. This allows a user to bind to different terminals having different capabilities over the duration of a given call. For example, if the source terminal of the incoming call is a wireless deskset using 32 kbps voice coding and the destination terminal utilizes a DS0 line at 64 kbps, the IWF may be an ADPCM-to-PCM transcoder. An IWF in accordance with the invention may also be used to insert additional data, retrieved from a database of the switch, into a reverse portion of the call directed from the destination terminal to the source terminal. For example, if the call is a video call, and the destination terminal is a terminal without video generating capability, the additional data may be video data retrieved from the database and inserted in a signal delivered from the destination terminal to the source terminal. This aspect of the invention can be used to ensure that the established bandwidth between the destination terminal and the source terminal is substantially bidirectionally symmetric. Another aspect of the invention relates to overlaying the characteristics of a particular system terminal onto another terminal to which that user is bound. For example, when a given user enters any of the bound states noted above, permission data previously stored for that user may be overlaid onto the bound terminal so that the user may place or receive all calls in accordance with his or her normal restrictions, using the bound terminal. In an illustrative embodiment, a given system user can have multiple stored terminal profiles, one for each type of system terminal that may be accessed by that user. When the user then becomes bound to a particular system terminal, the corresponding stored terminal profile of that user is overlaid onto the bound terminal. For example, if the bound terminal is of the same type as a terminal assigned to the user, the functional layout of the assigned terminal, including button assignments and soft-key label arrangements, may be overlaid on the bound terminal such that the bound terminal is configured to operate in a manner similar to the assigned terminal. These and other features and advantages of the present invention will become more apparent from the accompanying drawings and the following detailed description. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 shows a portion of an exemplary communication system configured in accordance with the invention. FIG. 2 is a state diagram illustrating the operation of dynamic binding and bridging functions in the system of FIG. 1. FIGS. 3 through 12 are flow diagrams illustrating in more detail the operation of the state transitions shown in the state diagram of FIG. 2. DETAILED DESCRIPTION OF THE INVENTION The invention will be illustrated below in conjunction with an exemplary wireless communication system. Although particularly well-suited for use in conjunction with a business telephone system, the invention is not limited to use with any particular type of system. The disclosed binding and bridging techniques may be used in any communication application in which it is desirable to provide users with improved access to additional system terminals in an efficient manner.
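To make the selection idea described above concrete, here is a small illustrative sketch (not the patent's implementation) of choosing an interworking function from the coding formats of the source and destination terminals, using the ADPCM/PCM example:

# Map (source coding, destination coding) pairs to a transcoding IWF; names are illustrative.
TRANSCODERS = {
    ("32K ADPCM", "64K PCM"): "adpcm_to_pcm",
    ("64K PCM", "32K ADPCM"): "pcm_to_adpcm",
}

def select_iwf(src_coding, dst_coding):
    """Return the IWF needed between the two terminals, or None if the formats already match."""
    if src_coding == dst_coding:
        return None
    try:
        return TRANSCODERS[(src_coding, dst_coding)]
    except KeyError:
        raise ValueError(f"no interworking function for {src_coding} -> {dst_coding}")

print(select_iwf("32K ADPCM", "64K PCM"))   # -> adpcm_to_pcm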
For example, the invention may be applied to handsets for use in cellular and personal communication services (PCS) systems, and to other types of communication devices. The term “mobile” as used herein should therefore be understood to include not only portable wireless handsets as in the illustrative embodiment but also other types of portable communication devices, including wireless personal computers. The term “line” as used herein is intended to include not only telephone lines but more generally any type of communication channel which supplies calls or other communications for processing at one or more user terminals. The term “system administration” or “system administration time” refers generally to a system reconfiguration which involves altering operating parameters for two or more system terminals, and is intended to include, for example, an initial set-up and configuration of the system or a subsequent system-level reconfiguration. The term “dynamic” as applied to establishment of an association between a first user terminal and at least one other terminal of the system refers generally to an association which is established at a time other than during system administration. A “temporary association” is intended to include any association which is established on a dynamic basis as opposed to an association established during system administration. FIG. 1 shows a portion of an exemplary communication system 100 in accordance with an illustrative embodiment of the invention. The system 100 includes an enterprise switch 110 which receives as an input a trunk 114. The trunk 114 supplies incoming calls to the switch 110 for processing. The switch 110 includes a central processing unit (CPU) 115, a memory 116, at least one interworking function (IWF) 117, and a system database 118. The CPU 115 may be a microprocessor, an application-specific integrated circuit (ASIC) or other type of digital data processor, as well as various portions or combinations of such elements. The memory 116 may be a random access memory (RAM), a read-only memory (ROM) or combinations of these and other types of electronic memory devices. The IWF 117 is used to provide dynamic binding and bridging features which will be described in greater detail below. The IWF 117 may in other embodiments be incorporated into other elements of switch 110, such as the CPU 115 and memory 116. The system database 118 is used to store bridging and other administrative information regarding the configuration of the system 100. The switch 110 in this example further includes four port cards 120A, 120B, 120C and 120D. Port card 120A is coupled to a wireless base station 121 which communicates with a simple wireless terminal (WT) 122 designated WT1 and a more complex wireless terminal 123 designated WT2. The terminal WT1 may be a simple mobile telephone, and the terminal WT2 may be a wireless deskset. Port card 120B is connected to a National Information Infrastructure (NII) wireless base station 124, which communicates with a wireless personal computer (WPC) 125. Port card 120C is connected to a wired deskset (DS) 126. Port card 120D is connected to an advanced terminal (AT) 127, which may be, for example, a video telephone operating in accordance with the H.320 standard. It should be noted that the switch 110 may include additional port cards, and may be connected to other types and arrangements of user terminals.
The switch 110 is also connected to an administrator terminal 128 which may be used to program the operation of the switch 110 during a system administration. FIG. 2 shows a state diagram illustrating dynamic binding and bridging functions which may be provided in the system 100 of FIG. 1 in accordance with the invention. In this embodiment, it will be assumed that the state diagram shows the possible states for a mobile terminal of the system 100, such as terminal WT1 or WPC. The mobile terminal will also be referred to simply as a “mobile” in the following description. It will be apparent to those skilled in the art that state diagrams similar to that of FIG. 2 can be generated for other types of terminals in the system. The state diagram operations may be implemented, for example, in the form of one or more system software programs stored in memory 116 of switch 110 and executed by CPU 115. Such software will be referred to herein as “system software” or “switch software.” The state diagram of FIG. 2 in this embodiment includes the following five states: Null (210), Registered (220), Bound Inactive (230), Bound Active (240) and Bound Alerting (250). A given mobile terminal begins in the Null state, and depending on user input and other system parameters and conditions, it may pass through one or more of the other states. The possible state transitions are shown as arrows connecting the states in the state diagram. FIGS. 3 through 12 below illustrate the state transitions in greater detail. Each of the state transitions is numbered in FIG. 2, and that number appears below in the heading of the description of the corresponding transition. Unless otherwise specified, the description of FIGS. 3 through 12 will be of an embodiment in which a mobile binds to a deskset. It should be understood, however, that the invention is not limited to such embodiments, but can instead be used to provide dynamic binding between a mobile and any other type of communication terminal or terminals. The state transitions of FIG. 2 make use of information in the following tables. These tables may be stored, for example, in system database 118 of switch 110. In the tables, all items marked with a “*” are status information elements that are filled in dynamically during binding and bridging operations. All other items are static and are filled in during system administration. TABLE 1 User Profile Table Directory Ter- Button Attributes User ID Number minal Assign mts Home Desk set (e.g., (UID) (DN) Type BID FID TID TID timer) epf (732)957- WT B1 CA 2 5 T1 = 120 1234 B2 CA adb (732)957- WT B1 CA 4 5 T1 = 120 5678 B2 CA desk (732)957- 7434 B1 LWC 5 NULL 9101 B2 CF epf NULL BT NULL 2 NULL The User Profile Table lists information characterizing the current system configuration for each of the system users. Each user is identified by a User Identifier (UID). A given UID is associated with a corresponding Directory Number (DN), a Terminal Type, Button Assignments, a Home Terminal Identifier (TID), a Desk set TID, and various Attributes. The DN represents the primary number which callers dial to be connected to the corresponding user. The Terminal Type specifies what type of terminal (e.g., mobile terminal (WT1), wireless deskset (WT2), wireless personal computer (WPC), wired deskset (such as a type 7434 wired deskset from Lucent Technologies Inc.), etc.) the user is equipped with.
The Button Assignments include a Button Identifier (BID) and corresponding Function Identifier (FID) for each of a number of programmable buttons on the user terminal. For example, BID B1 for UID “epf” in the User Profile Table is set to a call appearance (CA) function, and BID B2 for UID “desk” is set to provide a call forwarding (CF) function. The FID designation LWC corresponds to a leave-word calling function. The Home TID identifies the terminal which is considered the “home” terminal for the user. This may be, for example, the deskset in the user's office. The Desk set TID identifies a deskset terminal to which the user is bound, and therefore varies as the user binds to different system terminals. The Attributes may include, for example, a Timer T1 which is specified at system administration time and indicates how long the user may remain bound to a particular terminal without receiving or placing a call there.

TABLE 2 - Permission Table

  UID    COR   Password Authentication
  epf    2     epf password
  adb    2     adb password
  desk   4     NULL

The Permission Table stores information which permits the system to authenticate users trying to access system functions. The Class of Restriction (COR) equates to a definition of a user's authorization to place and receive calls. In the above example, passwords are stored in the Password Authentication field for each of the UIDs “epf” and “adb.” The UID “desk” does not require any user authentication, that is, any user is permitted to place a call or execute functions from that terminal. Its Password Authentication field is therefore NULL in Table 2. All of the information in the Permission Table is entered at system administration.

TABLE 3 - Terminal Profile Table

  TID   Terminal Type   Port ID*   Binding State*
  2     WT              0x1a       BOUND ACTIVE
  4     WT              0x1b       REGISTERED
  5     7434            0x2a       BOUND ACTIVE
  6     BT              0x2b       NULL

The Terminal Profile Table stores information regarding the Terminal Type, Port Identifier (Port ID) and Binding State for each of a number of terminals. The terminals are identified by TID. The Port ID identifies, for example, the port card and line over which the corresponding terminal communicates with the switch 110. The Binding State entry specifies whether the terminal is in the Bound Active, Bound Inactive, Bound Alerting, Registered or Null state. For example, the terminal with TID 2 in Table 3 is a wireless terminal which is currently communicating over Port 0x1a and is in the Bound Active state 240. Both the Port ID and the Binding State change dynamically as different users bind to the terminal, while the Terminal Type for the terminal is established at system administration.

TABLE 4 - Port Capability Table

  Port ID   Physical Location (slot/port)   Cell ID (proximity)
  0x1a      5                               12
  0x1b      6                               12
  0x2a      7                               12
  0x2b      8                               12

The Port Capability Table lists the Physical Location for each of the possible Port IDs in the system. The Physical Location may include, for example, slot and port identifiers for the corresponding Port IDs. A Cell Identifier (Cell ID) is also included for each of the Port IDs. The Cell ID specifies which cell of a radio subsystem of system 100 includes the terminal which is communicating over the specified Port ID. For non-mobile terminals, the Cell ID may be filled in at system administration time, if applicable. The radio subsystem is used to implement “proximity based” dynamic binding as will be described in conjunction with FIGS. 3 and 4 below.
The proximity based binding allows a user with a simple mobile terminal to bind with a more complex terminal which is located in the same proximity. The correspondence between Cell IDs/Port IDs and TIDs can change as, for example, mobile terminals move within the system.

TABLE 5 - Binding Table

  UID*   Visiting TID*   Timer On/Off*
  epf    5               on

The Binding Table specifies information regarding which users are bound to which terminals, as well as characteristics of the binding. In the example above, the user with UID “epf” is bound to the terminal with TID 5. The Timer for the binding, which as noted above may specify the amount of time the user can remain bound to the terminal but inactive, is turned on. The entries of this table vary dynamically as different users bind to different terminals in the system.

TABLE 6 - Binding Group Definition Table

  Desk set TID   Mobile UIDs   Registered UIDs*   Bound UID*
  5              epf, adb      epf, adb           epf

The Binding Group Definition Table specifies the users which are registered and/or bound to a particular terminal in the system. The Mobile UIDs column represents a pre-administered list of mobile terminal users that are allowed to bind to a given deskset TID. In the example shown in the table, the users with UIDs “epf” and “adb” are in the pre-administered list permitted to bind to the deskset terminal with TID 5. The group of users which are registered to bind to a given terminal at a particular point in time are referred to herein as the “binding group” for the given terminal. These users are listed in the Registered UIDs column for that terminal. In the example, the users with UIDs “epf” and “adb” are also registered to bind to the deskset terminal with TID 5. One of the users in the binding group may actually be bound to the terminal on an on-going call. This user is listed in the Bound UID column for that terminal. In general, only one user at a time is permitted to bind to a given terminal, but multiple users can register to bind to that terminal. In accordance with the invention, binding groups may be created at system administration time, or by user invocation of a designated Feature Access Code (FAC), or by a combination of both of these techniques. At system administration time, the administrator can assign known individuals to groups, and then relate the groups to either designated terminals or designated groups of terminals. This information may be stored in the system database 118 of switch 110 for use during normal system operation. Alternatively, certain users can be authorized to access the system database 118 during system operation and dynamically add or delete members to or from the Binding Group Definition Table in the system database. These authorized users could be identified at system administration time, or could be provided with an authorization code. Entry of such a code would allow the user to access the system database in order to enter definitions of new groups, update definitions of the existing groups, and establish or delete group relationships with the system terminals.

TABLE 7 - Terminal Capability Table

  Terminal Type   Signaling Protocol   Display Size   Feature Buttons   Transport Type   Coding Type
  7400            DCP                  2 × 16         12                DS0              64K PCM
  AT1             H.320                NULL           NULL              6 × DS0
  AT2             ATM                  NULL           NULL              CBR/AAL1
  WT              DECT                                                                   32K ADPCM
  BT              DECT                 NULL           NULL              NULL             NULL

The Terminal Capability Table includes information regarding the capabilities of the various terminals of the system.
This information includes, for example, the Signaling Protocol, Display Size, Feature Buttons, Transport Type and Coding Type for a given specified Terminal Type. For example, the table shows that Advanced Terminal Type 2 (AT2) uses an asynchronous transfer mode (ATM) Signaling Protocol and a constrained bit rate (CBR)/ATM Adaptation Layer 1 (AAL1) transport stream structure. All of this information may be entered at system administration.

TABLE 8 - Facility-Coding Type Table

  Facility ID   Call Type   Coding Type
  0001          Voice       PCM
  0001          Video       H.320
  0101          Voice       ADPCM

The Facility-Coding Type Table specifies Call Type and Coding Type of each of a number of communication facilities supported by the system 100. For example, the table indicates that the facility with Facility ID 0001 supports both PCM voice calls and H.320 video calls. As this information is typically static, it can be entered at system administration.

REGISTRATION (1) & (10)

FIG. 3 illustrates the following three different cases in which a given mobile can “register” with a deskset terminal, that is, move from the Null state 210 to the Registered state 220 of FIG. 2: (i) the user dials a Registration Feature Access Code (FAC) followed by a deskset Directory Number (DN) from the mobile; (ii) the user dials the Registration FAC followed by the mobile DN from the deskset; or (iii) the mobile fulfills a proximity based registration condition for the deskset. In any of these cases, the mobile registers to become part of the binding group of the deskset, such that it becomes eligible to subsequently “bind” to the deskset. The processing for case (i) begins in step 302 of FIG. 3 with the user entering the Registration FAC followed by the deskset DN at the mobile. In step 304 the system derives the UID of the mobile and the TID of the specified deskset. This involves a number of Lookup operations which are listed in 305. These and all other Lookup operations in this description are written in the form Lookup (n, x, y), where n specifies the table number of one of TABLES 1 through 8 above, x is a key into the specified table, and y identifies the information to be retrieved from the table. For example, the operation Lookup (3, Port ID, TID_(Mobile)) in 305 causes the system to perform a look-up in the Terminal Profile Table (Table 3) using the Port ID of the mobile as a key in order to obtain the TID of the mobile. Since the deskset DN is dialed from the mobile, the physical port that this number comes in on can be used to identify the Port ID on which the mobile communicates. The mobile TID is then used as a key into the User Profile Table (Table 1) in order to obtain the UID of the mobile. The DN of the deskset is used as a key into the User Profile Table to obtain the TID of the deskset. The processing for case (ii) begins with the user entering the Registration FAC followed by the mobile DN from the deskset in step 306. In step 308, the system derives the UID of the mobile and the TID of the deskset using the two Lookup operations specified in 309. The operation Lookup (3, Port ID, TID_(Desk)) causes the system to perform a look-up in the Terminal Profile Table (Table 3) using the Port ID of the deskset as a key in order to obtain the TID of the deskset. Since the mobile DN is dialed from the deskset, the physical port that this number comes in on can be used to identify the Port ID on which the deskset communicates. The DN of the mobile is then used as a key into the User Profile Table (Table 1) to obtain the UID of the mobile.
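Purely as an illustration, and not as part of the claimed embodiments, the table-driven Lookup operations just described might be modeled in switch software roughly as follows. All type, field and function names are hypothetical, and Tables 1 and 3 are assumed to be held as simple in-memory maps.

    package bindingsketch

    // SystemDB holds hypothetical in-memory projections of the User Profile
    // Table (Table 1) and the Terminal Profile Table (Table 3) described above.
    type SystemDB struct {
    	TIDByPortID map[string]int // Table 3: Port ID -> TID
    	UIDByTID    map[int]string // Table 1: terminal TID -> UID
    	TIDByDN     map[string]int // Table 1: DN -> TID of that user's terminal
    }

    // DeriveRegistrationIDs sketches step 304: the Registration FAC plus a
    // deskset DN arrive on the mobile's physical port, and the system derives
    // the mobile UID and the deskset TID through the listed Lookup operations.
    func (db *SystemDB) DeriveRegistrationIDs(mobilePortID, desksetDN string) (mobileUID string, desksetTID int, ok bool) {
    	mobileTID, ok := db.TIDByPortID[mobilePortID] // Lookup (3, Port ID, TID_(Mobile))
    	if !ok {
    		return "", 0, false
    	}
    	mobileUID, ok = db.UIDByTID[mobileTID] // Lookup (1, TID_(Mobile), UID)
    	if !ok {
    		return "", 0, false
    	}
    	desksetTID, ok = db.TIDByDN[desksetDN] // Lookup (1, DN_(Desk), TID_(Desk))
    	return mobileUID, desksetTID, ok
    }

The same pattern applies to case (ii), with the roles of the deskset port and the mobile DN exchanged.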
The processing for both cases (i) and (ii) continues in step 310 in which a determination is made as to whether the mobile can register to the deskset. In step 311, the Binding Group Definition Table (Table 6) is searched using the deskset TID as a key to attempt to locate the mobile UID in the set of mobile UID entries associated with the deskset TID. Step 312 checks whether the mobile UID is listed with the deskset TID in the Binding Group Definition Table and is therefore permitted to bind to that deskset. If the mobile UID is not permitted to bind to the deskset, the registration is deemed to have failed as indicated in step 314, and the mobile remains in the Null state. If the mobile is permitted to bind to the deskset, the registration is deemed to be completed, and the update operations in step 316 are performed. The update operations are written using the same format described above for the Lookup operations. For example, the operation Update (1, UID, TID_(Desk)) in step 316 specifies that the User Profile Table (Table 1) is updated to include the deskset TID for the mobile UID. In the other update operations of step 316, the Binding Group Definition Table (Table 6) is updated to indicate that the mobile UID is a registered UID for the deskset TID, and the Terminal Profile Table (Table 3) is updated to include a Binding State entry of REGISTERED for the mobile TID. The state of the mobile then goes to Registered, and the mobile is a member of the binding group for the deskset. Case (iii) above is referred to as “proximity based registration” and begins in step 320 with the Lookup operations listed in 321. When the mobile comes within the coverage of a particular cell of the system, the Port ID of the mobile is used as a key into the Terminal Profile Table (Table 3) to obtain the mobile TID. The mobile TID is used as a key into the User Profile Table (Table 1) to obtain the mobile UID. The mobile UID is used as a key into the Binding Group Definition Table (Table 6) to obtain a viable deskset TID. That deskset TID is used as a key into the Terminal Profile Table to obtain the associated Port ID of the deskset. The deskset Port ID is used as a key into the Port Capability Table (Table 4) to determine the Cell ID of the cell in closest proximity to the specified Port ID. The mobile Port ID is also used as a key into the Port Capability Table to find the Cell ID associated with the mobile Port ID. Step 322 determines if the deskset and mobile Cell IDs match. If the two Cell IDs match, a proximity based registration message is sent in step 324, the update operations of step 316 are performed, and the mobile enters the Registered state. If the two Cell IDs do not match, step 326 indicates that there will be no proximity based registration, and the mobile returns to the Null state. In the Registered state 220, the data from the User Profile Table and the Permission Table are available. Therefore, when a specific user transits the state machine of FIG. 2 to any of the Bound states, the permission data of that user may be overlaid onto the bound terminal so that the user may place or receive all calls in accordance with his or her normal restrictions, using the bound terminal. A given system user can have multiple stored terminal profiles, one for each type of system terminal that may be accessed by that user. When the user then becomes bound to particular system terminal, the corresponding stored terminal profile of that user is overlaid onto the bound terminal. 
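Returning for a moment to the registration processing of FIG. 3, the eligibility check and table updates of steps 310 through 316, together with the proximity test of step 322, might reduce to something like the following sketch. This is illustrative only; the names are hypothetical and the data model is deliberately simplified to a few maps.

    package bindingsketch

    // BindingGroup is a hypothetical projection of one row of the Binding Group
    // Definition Table (Table 6), keyed elsewhere by deskset TID.
    type BindingGroup struct {
    	MobileUIDs     []string // pre-administered list of permitted mobiles
    	RegisteredUIDs []string // filled in dynamically
    	BoundUID       string   // filled in dynamically
    }

    func contains(list []string, s string) bool {
    	for _, v := range list {
    		if v == s {
    			return true
    		}
    	}
    	return false
    }

    // Register sketches steps 310-316: if the mobile UID appears in the
    // deskset's pre-administered list, Tables 1, 6 and 3 are updated and the
    // mobile moves from Null to Registered; otherwise registration fails (314).
    func Register(
    	deskTIDByUID map[string]int,  // Table 1: UID -> Desk set TID
    	stateByTID map[int]string,    // Table 3: TID -> Binding State
    	groups map[int]*BindingGroup, // Table 6, keyed by deskset TID
    	mobileUID string, mobileTID, desksetTID int,
    ) bool {
    	g, ok := groups[desksetTID]
    	if !ok || !contains(g.MobileUIDs, mobileUID) {
    		return false // step 314: mobile remains in the Null state
    	}
    	deskTIDByUID[mobileUID] = desksetTID                   // Update (1, UID, TID_(Desk))
    	g.RegisteredUIDs = append(g.RegisteredUIDs, mobileUID) // Update (6, TID_(Desk), UID_(Reg))
    	stateByTID[mobileTID] = "REGISTERED"                   // Update (3, TID_(Mobile), REGISTERED)
    	return true
    }

    // SameCell sketches the proximity test of step 322: proximity based
    // registration proceeds only when the mobile's and the deskset's ports map
    // to the same Cell ID in the Port Capability Table (Table 4).
    func SameCell(cellByPortID map[string]int, mobilePort, deskPort string) bool {
    	return cellByPortID[mobilePort] == cellByPortID[deskPort]
    }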
If the bound terminal is of the same type as a terminal assigned to the user, the functional layout of the assigned terminal, including button assignments and soft-key label arrangements, may be overlaid on the bound terminal such that the bound terminal is configured to operate in a manner similar to the assigned terminal. For example, the layout of a given deskset assigned to the user may be overlaid onto another otherwise un-elated deskset of the same or a similar type to which the user becomes bound. DEREGISTRATION (2) FIG. 4 illustrates the following three different cases in which a given mobile can “deregister” or move from the Registered state 220 to the Null state 210 of FIG. 2: (i) the user dials a Deregistration FAC followed by a deskset DN from the mobile; (ii) the user dials the Deregistration FAC followed by the mobile DN from the deskset; or (iii) the mobile fulfills a proximity based deregistration condition for the deskset. In each these cases, the mobile is removed from the binding group of the deskset, and is therefore no longer eligible to bind with that deskset. The processing for case (i) begins in step 402 of FIG. 4 with the user entering the Deregistration FAC followed by the deskset DN at the mobile. In step 404 the system derives the UID of the mobile and the TID of the specified deskset, by performing the Lookup operations listed in 405. The system searches the Terminal Profile Table (Table 3) using the Port ID of the mobile as a key in order to obtain the TID of the mobile. The mobile TID is then used as a key into the User Profile Table (Table 1) in order to obtain the UID of the mobile. The DN of the deskset is used as a key into the User Profile Table to obtain the TID of the deskset. The processing for case (ii) begins with the user entering the Deregistration FAC followed by the mobile DN from the deskset in step 406. In step 408, the system derives the UID of the mobile and the TID of the deskset using the two Lookup operations specified in 409. The operation Lookup (3, Port ID, TID_(Desk)) causes the system to perform a look-up in the Terminal Profile Table (Table 3), using the Port ID of the deskset as a key, in order to obtain the TID of the deskset. The DN of the mobile is then used as a key into the User Profile Table (Table 1) to obtain the UID of the mobile. Case (iii) above is referred to as “proximity based deregistration” and begins in step 420 with the Lookup operations listed in 421. When the mobile goes outside the coverage of a particular cell of the system, the Port ID of the mobile is used as a key into the Terminal Profile Table (Table 3) to obtain the mobile TID. The mobile TID is used as a key into the User Profile Table (Table 1) to obtain the mobile UID. The mobile UID is used as a key into the Binding Group Definition Table (Table 6) to obtain the associated deskset TID. That deskset TID is used as a key into the Terminal Profile Table to obtain the associated Port ID of the deskset. The deskset Port ID is used as a key into the Port Capability Table (Table 4) to determine the Cell ID of the cell in closest proximity to the specified Port ID. The mobile Port ID is also used as a key into the Port Capability Table to find the Cell ID associated with the mobile Port ID. Step 422 determines if the deskset and mobile Cell IDs match. If the two Cell IDs do not match, a proximity based deregistration message is sent in step 424, and the process moves to step 410. 
If the two Cell IDs do match, step 426 indicates that there will be no proximity based deregistration, and the mobile returns to the Registered state. The processing for each of cases (i), (ii) and (iii) above continues in step 410 in which a determination is made as to whether the mobile is actually registered to the deskset. In step 411, the Binding Group Definition Table (Table 6) is searched using the deskset TID as a key to attempt to locate the mobile UID in the set of mobile UID entries registered to the deskset TID. The User Profile Table (Table 1) is also searched using the mobile UID to determine if it is registered to the deskset TID. Step 412 checks whether the mobile UID is listed with the deskset TID in the Binding Group Definition Table and whether the deskset TID is listed with the mobile UID in the User Profile Table. If either one of these conditions fails, the mobile UID is not permitted to deregister from the deskset, the deregistration is deemed to have failed as indicated in step 414, and the mobile remains in the Registered state. If both of the conditions in step 412 pass, the deregistration is deemed to be completed, and the update operations in step 416 arc preformed. The User Profile Table (Table 1) is updated to clear the deskset TID for the mobile UID, the Binding Group Definition Table (Table 6) is updated to indicate that the Mobile UID is no longer a registered UID for the deskset TID, and the Terminal Profile Table (Table 3) is updated to include a Binding State entry of NULL for the mobile TID. The state of the mobile then goes to Null. PROXIMITY DEACTIVATION (3 a, 3 b, 3 c) FIG. 5 illustrates the manner in which a mobile transitions from the Bound Inactive state 230, the Bound Active state 240 or the Bound Alerting state 250, to the Null state 210 of FIG. 2. These three transitions can arise as follows: Bound Inactive→Null (Transition 3 a). After completing a call while being bound to another terminal, and waiting to place or receive another call while remaining bound to that terminal, the mobile is taken out of proximity of the above-noted radio subsystem. Bound Active→Null (Transition 3 b). While active on a call and bound to another terminal, the mobile is taken out of proximity of the radio subsystem. Bound Alerting→Null (Transition 3 c). After an incoming call is established for the mobile and the deskset is alerting with a simulated bridged appearance, the mobile is taken out of proximity of the radio subsystem. For each of the transitions (3 a), (3 b) and (3 c) described above and shown in FIG. 2, the mobile receives an “out of proximity” indication from the radio subsystem as shown in step 500. This out of proximity indication may be generated in accordance with step 420 of FIG. 4. After the out of proximity indication is received, step 510 verifies if the mobile is bound. This involves performing the Lookup operations listed at 511. When the mobile receives the out of proximity indication, the corresponding message comes in on a physical Port ID on which the mobile communicates. The system uses this Port ID as a key into the Terminal Profile Table (Table 3) to obtain the TID of the mobile. The system then uses the mobile TID as a key into the Terminal Profile Table to obtain the Binding State of the mobile. If the mobile is bound, the Binding State will be one of the following states: Bound Active, Bound Inactive or Bound Alerting. Step 512 determines if the Binding State of the mobile is one of these three valid bound states. 
If the Binding State of the mobile is not a valid bound state, step 514 indicates that the out of proximity indication is ignored because a proximity based deregistration was issued for an unbound mobile. If the Binding State of the mobile is one of the three valid binding states, then the mobile is unbound and then de-registered, using the operations of step 516. The unbinding process of step 516 first determines the deskset to which the mobile is currently bound. The system uses the mobile TID as a key into the User Profile Table (Table 1) to obtain the TID of this deskset. The system does another look-up in the User Profile Table, using the mobile TID as the key, and obtains the UID of the mobile. To unbind the mobile, an update is done to the Binding Group Definition Table (Table 6). The system uses the deskset TID as a key into the Binding Group Definition Table and removes the mobile UID as the Bound UID. To de-register the mobile, another update is done to the Binding Group Definition Table. The system uses the deskset TID as a key into the Binding Group Definition Table and removes the mobile UID from the list of Registered UIDs. These two updates to the Binding Group Definition Table correspond to the operation Update (6, TID_(Desk), UID_(Bound)/UID_(Reg)=0) in step 516. Next, in order to disassociate the deskset from the mobile, an update is done to the User Profile Table (Table 1) to remove the deskset TID associated with the mobile UID. This update is performed by using the mobile UID as a key into the User Profile Table and setting the Desk set TID field entry to NULL. The Binding Table (Table 5), which keeps track of all current mobile users that are bound, is updated to remove the mobile user that went out of proximity. The mobile UID is used as a key into the Binding Table, and all elements associated with the mobile UID are removed. This involves setting the Timer, Visiting TID and UID fields to NULL. Finally, the mobile TID is used as a key into the Terminal Profile Table (Table 3), and the Binding State associated with the mobile TID is set to NULL. The mobile is thereby transitioned to the Null state 210. FAC UNBOUND AND TIMER EXPIRY (4) FIG. 6 illustrates the manner in which a mobile transitions from the Bound Inactive state 230 to the Registered state 220 of FIG. 2. This transition can occur in the following cases: (i) the user enters the Unbinding FAC followed by the deskset DN from the mobile, (ii) the user enters the Unbinding FAC followed by the mobile DN from the deskset; or (iii) the timer for the mobile UID to remain in a bound state expires. For all three of these cases, the mobile UID and the deskset TID are needed in order to make the necessary system updates to transition the mobile to the Registered state. The processing for case (i) begins when the user enters the Unbinding FAC from the mobile in step 600. The corresponding message comes in on a physical Port ID on which the mobile communicates. The system in one of the Lookup operations 611 of step 610 uses this Port ID as a key into the Terminal Profile Table (Table 3) and extracts the TID of the mobile. In order to determine the UID of the mobile, the system does a look-up in the User Profile Table (Table 1), using the mobile TID as the key, and extracts the mobile UID. In order to determine the deskset TID, another look-up is done in the User Profile Table using the mobile TID as the key and the deskset TID is extracted. 
The processing for case (ii) begins when the user enters the Unbinding FAC from the deskset in step 612. The corresponding message comes in on a physical Port ID on which the deskset communicates. The system in one of the Lookup operations 615 of step 614 uses this Port ID as a key into the Terminal Profile Table (Table 3) and extracts the TID of the deskset. In order to determine the UID of the mobile, the system performs a look-up into the Binding Group Definition Table (Table 6), using the deskset TID as the key, and extracts the mobile UID as the UID bound to the deskset. The processing for case (iii) begins when the Timer expires in step 620. The Timer function in step 622 then supplies the mobile UID. In order to determine the deskset TID, a look-up step 623 is performed in the Binding Group Definition Table (Table 6), using the mobile UID (i.e., the Bound UID) as the key, and the deskset TID is extracted. After the mobile UID and the deskset TID are obtained in cases (i), (ii) or (iii) in the manner described above, the processing continues with step 624. The mobile UID is first unbound from the deskset by making an update to the Binding Group Definition Table (Table 6). Using the deskset TID as the key into the Binding Group Definition Table, the associated Bound UID is set to NULL. Next, the Binding Table (Table 5), which keeps track of all current mobile users that are bound, is updated to remove the mobile user. The mobile UID is used as a key into the Binding Table, and all elements associated with the mobile UID are removed. This involves setting the Timer, Visiting TID and UID fields to NULL. Finally, the mobile TID is used as a key into the Terminal Profile Table (Table 3), and the Binding State associated with the mobile TID is set to REGISTERED. The mobile is thereby transitioned to the Registered state 220. OUTGOING CALL ESTABLISHMENT (5) FIG. 7 illustrates the manner in which a mobile transitions from the Bound Inactive state 230 to the Bound Active state 240 of FIG. 2. This transition is initiated in step 700 when a user places a call when its mobile is in the Bound Inactive state. It will be assumed for this example that the mobile is bound to a deskset from which the call is placed. This deskset will also be referred to as the originating terminal. The system detects the call placement, and in step 702 updates the Terminal Profile Table (Table 3) to reflect the fact that the deskset to which the mobile is bound is in the Bound state. The system in step 704 then executes a well-known facility selection routine and declares a specific network facility to be dedicated to the current call instance associated with the bound terminal. This generally involves selecting a facility which has a bandwidth equal to or greater than that required by the originating terminal. The system in step 706 uses the Facility ID of the selected facility as a key into the Facility-Coding Type Table (Table 8) to determine the Coding Type of the selected facility. The system then uses the TID of the originating terminal as a key into the Terminal Profile Table, and extracts the Tenn Type. The Term Type is used as a key into the Terminal Capability Table (Table 7) to determine the Coding Type and Transport Type requirements of the originating terminal. The system in step 708 executes an appropriate inter working function (IWF) for the transport stream in order to align the bandwidth, Coding Type and Transport Type of the originating terminal and the selected facility, if necessary. 
The IWF is “inserted” into the call path. For example, if the originating terminal is a wireless deskset using 32 kbps voice coding and the selected network facility is a DSO line at 64 kbps, the system in step 708 may insert an ADPCM-to-PCM transcoder for inter working the voice call. The system then initiates call establishment procedures in step 710. If the user is determined in step 712 to have aborted the call, the system returns the originating terminal to the Bound Inactive state, by updating the state of that terminal in the Terminal Profile Table as indicated in step 714. If the user has not aborted the call, the mobile completes the transition to the Bound Active state 240. INCOMING CALL ESTABLISHMENT (6) & (12) FIG. 8 illustrates the manner in which a mobile transitions from the Registered state 220 to the Bound Alerting state 250, which is transition (6) in FIG. 2, or from the Bound Inactive state 230 to the Bound Alerting state 250, which is transition (12). Transition (6) occurs in the event of an incoming call to a mobile which has been successfully registered to a binding group. Transition (12) occurs in the event of an incoming call to a mobile which has been successfully bound to a binding group. For both transitions, it is assumed that the mobile has not been active in a call. The incoming call to the mobile has been originated by the calling party which may be, for example, a voice-only telephone or an advanced terminal. The calling party initiates the incoming call by dialing the DN of the mobile. In the exemplary process shown in FIG. 8, the incoming call is directed to a registered mobile through the DN dialed by the calling party. The switch software implements the process steps of FIG. 8 in order to route the call. Step 800 checks if the mobile has been registered to the binding group. This involves the Lookup operations shown in 802. The dialed DN is first used as a key into the User Profile Table (Table 1) to determine the associated UID of the mobile. The mobile UID is then used as a key into the User Profile Table to determine the TID of the associated deskset. The deskset TID is used as a key into the Binding Group Definition Table (Table 6) to locate the registered UIDs for that deskset. Step 804 determines if any of the registered UIDs match the UID of the mobile. If there is no match, or if the registered UID entry for the deskset TID is NULL, the addressed mobile is not bound to any binding group, and the switch continues the normal call routing to the mobile, as shown in step 806. The mobile then returns to the Registered state. If there is a match between the registered UID for the deskset and the mobile UID, the system in step 808 checks if the deskset is busy with an on-going call. The deskset TID is used in Lookup operation 810 as a key into the Terminal Profile Table (Table 3) to extract the Binding State entry for the deskset. Step 812 checks whether the Binding State entry for the deskset is BOUND ACTIVE. If the Binding State entry is BOUND ACTIVE, the deskset is busy with another active call, so the incoming call is offered to the mobile only as shown in step 806, and the process returns to either the Bound Inactive or Registered state. If the Binding State entry for the deskset is not BOUND ACTIVE, the deskset is idle. The mobile then begins it transition to the Bound Alerting state with the update operations in step 814. 
The Bound UID of the deskset is updated in the Binding Group Definition Table (Table 6) to the mobile UID, the Binding State of the deskset is updated in the Terminal Profile Table (Table 3) to BOUND ACTIVE, and the Bound-Inactive Timer associated with the mobile UID and the deskset TID in the Binding Table (Table 5) is canceled. In step 816, an appropriate IWF is selected for the incoming call. The selection of an IWF makes use of information retrieved in the Lookup operations 817. The Terminal Type of the deskset is retrieved from the Terminal Profile Table (Table 3) using the deskset TID as a key. The Terminal Type is then used as a key into the Terminal Capability Table (Table 7) to retrieve the Signaling Protocol, Transport Type, Coding Type and Display Size for the deskset. The Lookup operations 817 may include another Lookup operation, not shown in FIG. 8, which uses the mobile UID and Terminal Type as keys into the User Profile Table (Table 1) in order to retrieve the Button Assignments information associated with the mobile UID. The incoming call is then offered at both the mobile and the deskset, as indicated in step 820. Appropriate “alerting” is therefore generated for both the deskset and the mobile. This completes the transition of the mobile to the Bound Alerting state.

CALL DIS-ESTABLISHMENT (7)

FIG. 9 shows the manner in which a mobile moves from the Bound Active state 240 to the Bound Inactive state 230 of FIG. 2 when the mobile is released from a call. The call dis-establishment procedure is initiated in step 900. As part of this procedure, the system releases the call from the mobile or deskset. The system in step 902 then determines the mobile UID. The Lookup operation 904 uses the Port ID of the terminal released from the call as a key into the Terminal Profile Table (Table 3) to retrieve the TID and the Terminal Type of that terminal. Step 906 then determines if the resulting Terminal Type is a deskset. If the resulting Terminal Type is not a deskset, then the call must have been released from the mobile. The system then in step 908 retrieves the mobile UID from the User Profile Table (Table 1) using the TID of the mobile as a key. If the Terminal Type is determined in step 906 to be a deskset, step 910 uses the TID of the deskset as a key into the Binding Group Definition Table (Table 6) to retrieve the Bound UID for the deskset. In either case, step 912 determines whether the Bound UID for the deskset is set to the mobile UID. If the Bound UID for the deskset is set to the mobile UID, the process moves to step 920. If the Bound UID for the deskset is not set, this is an error condition, and the Bound UID for the deskset is set to the mobile UID in step 914 before the process moves to step 920. In step 920, the Bound Inactive Timer is set in the Binding Table (Table 5) for the mobile UID entry. This completes the transition and the mobile moves into the Bound Inactive state.

FAC BOUND (8)

FIG. 10 illustrates the following two different cases in which a given mobile can move from the Registered state 220 to the Bound Inactive state 230 of FIG. 2: (i) the user dials a Binding FAC followed by a deskset DN from the mobile; or (ii) the user dials the Binding FAC followed by the mobile DN from the deskset to be bound. The processing for case (i) begins in step 1002 of FIG. 10 with the user entering the Binding FAC followed by the deskset DN at the mobile.
In step 1004 the system derives the UID of the mobile and the TID of the specified deskset, by performing the Lookup operations listed in 1005. The system searches the Terminal Profile Table (Table 3) using the Port ID of the mobile as a key in order to obtain the TID of the mobile. The mobile TID is then used as a key into the User Profile Table (Table 1) in order to obtain the UID of the mobile. The DN of the deskset is used as a key into the User Profile Table to obtain the TID of the deskset. The processing for case (ii) begins with the user entering the Binding FAC followed by the mobile DN from the deskset in step 1006. In step 1008, the system derives the UID of the mobile and the TID of the deskset using the two Lookup operations specified in 1009. The operation Lookup (3, Port ID, TID_(Desk)) causes the system to perform a look-up in the Terminal Profile Table (Table 3) using the Port ID of the deskset as a key to obtain the TID of the deskset. The DN of the mobile is then used as a key into the User Profile Table (Table 1) to obtain the UID of the mobile. The processing for cases (i) and (ii) continues in step 1010 in which a determination is made as to whether the mobile is actually registered to the deskset. In step 1011, the Binding Group Definition Table (Table 6) is searched using the deskset TID as a key to attempt to locate the mobile UID in the set of UID entries registered to the binding group of the deskset. Step 1012 checks whether the mobile UID is listed with the deskset TID in the Binding Group Definition Table. If the mobile UID does not match one of the registered UIDs for the deskset, the binding request is denied in step 1014. This indicates either that the mobile is not registered to the deskset DN supplied in step 1002, or the deskset has no registered mobile in the binding group that matches the mobile DN supplied in step 1006. As a result, the mobile remains in the Registered state. If it is determined in step 1012 that the mobile UID does match a registered UID for the deskset, the mobile is permitted to bind to that deskset, and the update operations in step 1016 are performed. The Binding Group Definition Table (Table 6) is updated to indicate that the mobile UID is a Bound UID for the deskset, the Terminal Profile Table (Table 3) is updated to include the Binding State entry of BOUND for the mobile TID, and the Binding Table (Table 5) is updated to set the Visiting TID to the deskset TID and to enable the Bound Inactive Timer. The state of the mobile then goes to Bound Inactive.

INCOMING CALL NOT ANSWERED (9)

FIG. 11 illustrates the manner in which a given mobile can move from the Bound Alerting state 250 to the Registered state 220 of FIG. 2. When the mobile is in the Bound Alerting state, both the bound deskset of the binding group and the mobile can answer the incoming call. In step 1102, an incoming call is not answered at either the bound deskset or the mobile. The call may therefore be rerouted by the switch or dropped by the called party. Information such as the TID of the deskset, the TID of the mobile, and the UID of the mobile are available in the switch while the incoming call is rerouted or disconnected. Step 1104 resets the Binding State of the mobile and the deskset, by updating the Binding State information in the Terminal Profile Table (Table 3) to NULL for the deskset TID and to REGISTERED for the mobile TID. Step 1106 clears the Bound UID information for the deskset TID in the Binding Group Definition Table (Table 6) to NULL.
Step 1108 deletes the entire binding record of the mobile UID from the Binding Table (Table 5) using the operation Update (5, UID_(Mobile), delete). The mobile then returns to the Registered state as shown. INCOMING CALL ANSWERED AT COMPLEX TERMINAL (11) FIG. 12 illustrates the manner in which a given mobile can move from the Bound Alerting state 250 to the Bound Active state 240 of FIG. 2. In step 1200, an incoming call arrives at the trunk 114 destined for a particular terminal of system 100. The call may not be end-to-end symmetric in that, for example, its bandwidth, coding type and/or service type may not be consistent throughout the network. During the transition from the Bound Alerting to the Bound Active state, the system therefore verifies the bandwidth and network coding type for the call, and checks the destination terminal profile for compatibility. This part of the process is implemented in steps 1202 and 1204 of FIG. 12. In step 1202, the Facility ID associated with the destination terminal is used as a key in the Facility-Coding Type Table (Table 8) to obtain the Coding Type for the terminal. It will be assumed for purposes of illustration that the destination terminal is a deskset. The DN from the incoming call is used as a key in the User Profile Table (Table 1) to obtain the Terminal Type and TID of the deskset. The deskset TID is then used as a key in the Terminal Capability Table (Table 7) to obtain the Coding Type for the deskset. A determination is then made in step 1204 as to whether the bandwidth, coding type and service type of the incoming call are “symmetric” with the corresponding parameters of the destination deskset. If the call and deskset parameters are not symmetric, an IWF is selected in step 1206 to provide extract/fill operations which arc designed to smooth out the asymmetry between the call and deskset parameters. If the call and deskset parameters are symmetric, an IWF transcoder is selected in step 1208. In either case, step 1210 updates the Binding State entry for the deskset TID in the Terminal Profile Table (Table 3) to BOUND ACTIVE. This completes the transition of the mobile to the Bound Active state. In the simplest case of the operation of the process of FIG. 12, the incoming call is end-to-end symmetric from its source terminal to the destination terminal, and the IWF transcoder selected in step 1208 may be a NULL IWF. However, there may be many cases in which such symmetry does not exist. For example, in the case of Transition (5) described above, both the bandwidth and voice coding scheme of the call may be inter worked using an appropriate IWF transcoder designed to match the capabilities of the destination terminal to those of the source. Another case of this type is one in which the incoming call is a multimedia call, but the destination terminal is not capable of multimedia support. In this case, the system determines that there is a service mismatch (as well as a bandwidth and coding mismatch), and initiates IWF procedures to allow the call to be delivered to the destination terminal. By way of example, assume that an H.320 call arrives at the network trunk 114, destined for a user who is bound to the wireless deskset WT2 of FIG. 1. The system decodes the H.320 transport stream, and extracts the voice samples. These voice samples are then transcoded to match the capabilities of the bound terminal WT2, and delivered to the user. 
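For illustration only, the symmetry check and IWF selection of steps 1204 through 1208 might be expressed along the following lines. The names are hypothetical and the set of coding and service types is deliberately abbreviated; the sketch simply returns a label describing the kind of interworking that would be selected.

    package iwfsketch

    // StreamParams collects the call or terminal parameters compared in step 1204.
    type StreamParams struct {
    	BandwidthKbps int
    	CodingType    string // e.g. "PCM", "ADPCM", "H.320"
    	ServiceType   string // e.g. "voice", "video"
    }

    // SelectIWF sketches steps 1204-1208. When the incoming call and the
    // destination terminal are already symmetric, a transcoder is selected
    // (possibly a NULL IWF); otherwise an IWF providing extract/fill operations
    // is selected to smooth out the asymmetry, as in the H.320-to-voice example.
    func SelectIWF(call, terminal StreamParams) string {
    	symmetric := call.BandwidthKbps == terminal.BandwidthKbps &&
    		call.CodingType == terminal.CodingType &&
    		call.ServiceType == terminal.ServiceType
    	if symmetric {
    		return "transcoder (possibly NULL IWF)" // step 1208
    	}
    	if call.ServiceType == "video" && terminal.ServiceType == "voice" {
    		// Service mismatch: decode the transport stream, extract the voice
    		// samples and transcode them to the terminal's coding type.
    		return "extract voice and transcode to " + terminal.CodingType
    	}
    	return "extract/fill IWF" // step 1206
    }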
As one possible additional feature related to this example, the IWF may be configured to provide preset video data in the reverse (i.e., outbound) direction, such that the established bandwidth between the system terminal and source of the H.320 call is bidirectionally symmetric. Since the deskset WT2 itself is not capable of generating this preset video data to insert in the reverse direction, the IWF may extract such video data from the system database 118 and insert it in the outbound transport stream. It should be noted that during the life of a call, since the call can be bridged between the mobile and a more complex terminal, and such bridged appearances can be created by proximity to the complex terminal, the IWF selection steps may have to be executed multiple times. For example, assume a user binds to a complex terminal in a particular location, and accepts a call there. During the call, the user, utilizing the bridged call appearance on the mobile, moves the call to the mobile, and moves to a different area. Upon reaching the new location, the user is detected to be in proximity to a new complex terminal. A new IWF may then be invoked when the user enters the Bound Active state at the new terminal. In one possible alternative embodiment of the invention, a given user is supplied with a device which is configured to signal user identification (UID) information for that user. Such a device could be implemented within an employee identification badge worn by the user, or in the form of a small “button” which could be attached to the user's clothing or worn as a necklace. In a wireless environment, such a device is generally referred to as a beacon. A system in accordance with the invention may be configured to utilize the beacon to track the location of the user, such that a call received for the user is directed to the deskset or other complex terminal which the system determines is closest to the user at that particular time. Although this type of proximity-based registration may be implemented in a manner similar to that described above for mobile terminals, it will not include a bridging operation if the beacon device carried by the user does not support transport channels or a user interface. A beacon-directed call processing function can be activated by user entry of a feature access code (FAC) and/or depression of a feature button. Alternatively, it may be implemented in a fully automated manner. In the feature-activated implementation, the system tracks beacon signals only when directed to do so by user commands. In the fully-automated implementation, beacon-to-terminal proximity relationships may be determined on an a priori basis, and used for all routing. The system could first check user location as defined by his or her corresponding beacon location before performing any beacon-directed call routing for that user. Tables 1, 3, 4 and 7 above include entries relating to a beacon terminal (BT) for implementing a beacon-directed call processing function. For example, the User Profile Table (Table 1) indicates that the user having UID “epf” is equipped with a beacon device. This embodiment of the invention may be used, for example, to route incoming calls to the user at the closest system terminal, to allow the user to access functions of a stored user-defined terminal profile at the closest system terminal, as well as in other applications. The above-described embodiments of the invention are intended to be illustrative only. 
These and numerous other alternative embodiments within the scope of the following claims will be apparent to those skilled in the art. What is claimed is: 1. A method for processing a call received in a switch of a communication system, the method comprising the steps of: identifying a parameter of the call; retrieving previously-stored information regarding a corresponding parameter of a destination terminal of the call, the destination terminal being determined based on an association established in the switch on a dynamic basis between the destination terminal and at least one other terminal of the system; and processing the call in accordance with at least one inter working function selected from a set of inter working functions implemented in the switch, wherein the selected interworking function is operative to provide compatibility between the parameter of the call and the corresponding parameter of the destination terminal. 2. The method of claim 1 wherein the parameter of the call is a service type associated with the call. 3. The method of claim 1 wherein the parameter of the call is a bandwidth utilized by the call. 4. The method of claim 1 wherein the parameter of the call is a transport stream characteristic of the call. 5. The method of claim 1 wherein the parameter of the call is a voice coding technique used in the call. 6. The method of claim 1 wherein the processing step includes inserting a transcoder implementing the interworking function between an input receiving the call and the destination terminal. 7. The method of claim 6 wherein the call is a voice call and the transcoder is an ADPCM-to-PCM transcoder for interworking the voice call. 8. The method of claim 1 wherein the call is moved from the destination terminal to at least one other terminal of the system during the call, and the processing step further includes the step of processing the call using a first interworking function to provide compatibility with the destination terminal and a second interworking function to provide compatibility with the other system terminal. 9. The method of claim 1 wherein the processing step includes the step of inserting additional data, for presentation in a user-perceptible manner at a source terminal of the call, the additional data being retrieved from a database of the switch, into a reverse portion of the call directed from the destination terminal to the source terminal of the call. 10. The method of claim 9 wherein the call is a video call, the destination terminal is a terminal without video generating capability, and the additional data is video data retrieved from the database and inserted in a signal delivered from the destination terminal to the source terminal as part of the reverse portion of the call. 11. The method of claim 1 wherein the call is a video call including a transport stream, and the processing step further includes: extracting voice samples from the transport stream; transcoding the voice samples to match one or more parameters of the destination terminal; and delivering the voice samples to the destination terminal. 12. The method of claim 11 further including the step of inserting additional video data, retrieved from a database of the switch, into a transport stream directed, in a reverse portion of the call, from the destination terminal to a source terminal of the call, such that the established bandwidth between the destination terminal and the source terminal is substantially bidirectionally symmetric. 13. 
An apparatus for processing a call in a switch of a communication system, comprising: a processor implementing at least one interworking function selected from a set of interworking functions implemented in the switch, wherein the selected interworking function is operative to provide compatibility between a parameter of the call and a corresponding parameter of a destination terminal of the call, the destination terminal being determined based on an association established in the switch on a dynamic basis between the destination terminal and at least one other terminal of the system; and a memory for storing information regarding the corresponding parameter of the destination terminal. 14. The apparatus of claim 13 wherein the parameter of the call is a service type associated with the call. 15. The apparatus of claim 13 wherein the parameter of the call is a bandwidth utilized by the call. 16. The apparatus of claim 13 wherein the parameter of the call is a transport stream characteristic of the call. 17. The apparatus of claim 13 wherein the parameter of the call is a voice coding technique used in the call. 18. The apparatus of claim 13 wherein the processor includes a transcoder implementing the interworking function, wherein the transcoder is inserted between an input of the switch which receives the call and the destination terminal. 19. The apparatus of claim 18 wherein the call is a voice call and the transcoder is an ADPCM-to-PCM transcoder for interworking the voice call. 20. The apparatus of claim 13 wherein the call is moved from the destination terminal to at least one other terminal of the system during the call, and the processor is further operative to process the call using a first interworking function to provide compatibility with the destination terminal and a second interworking function to provide compatibility with the other system terminal. 21. The apparatus of claim 13 wherein the processor is further operative to insert additional data, for presentation in a user-perceptible manner at a source terminal of the call, the additional data being retrieved from a database of the switch, into a reverse portion of the call directed from the destination terminal to the source terminal of the call. 22. The apparatus of claim 21 wherein the call is a video call, the destination terminal is a terminal without video generating capability, and the additional data is video data retrieved from the database and inserted in a signal delivered from the destination terminal to the source terminal as part of the reverse portion of the call. 23. The apparatus of claim 13 wherein the call is a video call including a transport stream, and the processor is further operative to extract voice samples from the transport stream, to transcode the voice samples to match one or more parameters of the destination terminal, and to deliver the voice samples to the destination terminal. 24. The apparatus of claim 23 wherein the processor is further operative to insert additional video data, retrieved from a database of the switch, into a transport stream which is directed, in a reverse portion of the call, from the destination terminal to a source terminal of the call, such that the established bandwidth between the destination terminal and the source terminal is substantially bidirectionally symmetric. 25. 
A method for processing a call received in a switch of a communication system, the method comprising the steps of: identifying a parameter of the call; retrieving previously-stored information regarding a corresponding parameter of a destination terminal of the call; and processing the call in accordance with at least one interworking function selected from a set of interworking functions implemented in the switch, wherein the selected interworking function is operative to provide compatibility between the parameter of the call and the corresponding parameter of the destination terminal; wherein the call is moved from the destination terminal to at least one other terminal of the system during the call, and the processing step further includes the step of processing the call using a first interworking function to provide compatibility with the destination terminal and a second interworking function to provide compatibility with the other system terminal. 26. A method for processing a call received in a switch of a communication system, the method comprising the steps of: identifying a parameter of the call; retrieving previously-stored information regarding a corresponding parameter of a destination terminal of the call; and processing the call in accordance with at least one interworking function selected from a set of interworking functions implemented in the switch, wherein the selected interworking function is operative to provide compatibility between the parameter of the call and the corresponding parameter of the destination terminal; wherein the call is a video call including a transport stream, and the processing step further includes: extracting voice samples from the transport stream; transcoding the voice samples to match one or more parameters of the destination terminal; and delivering the voice samples to the destination terminal. 27. An apparatus for processing a call in a switch of a communication system, comprising: a processor implementing at least one interworking function selected from a set of interworking functions implemented in the switch, wherein the selected interworking function is operative to provide compatibility between a parameter of the call and a corresponding parameter of a destination terminal of the call; and a memory for storing information regarding the corresponding parameter of the destination terminal; wherein the call is moved from the destination terminal to at least one other terminal of the system during the call, and the processor is further operative to process the call using a first interworking function to provide compatibility with the destination terminal and a second interworking function to provide compatibility with the other system terminal. 28. 
An apparatus for processing a call in a switch of a communication system, comprising: a processor implementing at least one interworking function selected from a set of interworking functions implemented in the switch, wherein the selected interworking function is operative to provide compatibility between a parameter of the call and a corresponding parameter of a destination terminal of the call; and a memory for storing information regarding the corresponding parameter of the destination terminal; wherein the call is a video call including a transport stream, and the processor is further operative to extract voice samples from the transport stream, to transcode the voice samples to match one or more parameters of the destination terminal, and to deliver the voice samples to the destination terminal.
Marie Dacke

Marie Ann-Charlotte Dacke is a professor of Sensory Biology at the Lund Vision Group in Lund University, Lund, Sweden. Her research focuses on nocturnal and diurnal compass systems, using the dung beetle as a model organism. Dacke is a Wallenberg Scholar as of 2025. In 2022 she was elected a fellow of the Royal Swedish Academy of Sciences. Dacke has a keen interest in the education of the general public and, among other things, acts as a panel member of the Swedish TV show Studio Natur. In 2013 she received an Ig Nobel Prize for her work on the navigation system of dung beetles. Since 2018, she has also been an honorary professor at the University of the Witwatersrand in Johannesburg, South Africa.

Early life and career

Dacke went to high school in Landskrona. After graduating from high school she attended Lund University where she studied biology. Here, she completed her Ph.D. on Celestial Orientation in Dim Light in 2003, under the supervision of Professor Dan-Eric Nilsson. Her thesis focused on how optical compasses are built, how they are used and how they are adapted to work at low light intensities. During her Ph.D., she discovered a unique compass organ in spiders, a study which was published in Nature in 1999. A few years later she revealed the first evidence of an animal able to use the dim pattern of polarized moon-light for orientation, a study also published in Nature in 2003. After her Ph.D., she spent two years as a postdoctoral researcher at the Centre for Visual Sciences at the Australian National University in Canberra. In 2007, she returned to Lund University as a research fellow and in 2011 she became an associate professor in Sensory Biology. She became a Professor in Sensory Biology in 2017. Dacke's research is focused on navigation and orientation in insects, in particular orientation in dung beetles. She is interested in the celestial compass (which is the use of the sky to guide navigation). By exploring the interface between behaviour, neurobiology and cognition, her research tries to understand how diurnal and nocturnal compass systems of insects work. In 2013 she, together with Marcus Byrne, Emily Baird, Clark Scholtz and Eric Warrant, received the Ig Nobel Prize in the joint astronomy and biology category for showing that nocturnal dung beetles can use the Milky Way as a compass. This research was published in Current Biology. In 2014, Dacke received an Excellent Young Researchers grant from the Swedish Research Council (Vetenskapsrådet) to continue her research on the compass systems of dung beetles, exploring the link between electrophysiology and behaviour. Part of this research was published in Proceedings of the National Academy of Sciences (PNAS) in 2015 and Current Biology in 2016. In 2018 Dacke received funding from the European Research Council to expand further on her work, and define the principles behind multimodal navigational systems, studying brain activity in dung beetles as they perform their orientation behaviour. Part of this cross-disciplinary research was published in PNAS in 2019 and iScience in 2022. Dacke has been elected a fellow of the Young Academy of Sweden (2011), Royal Physiographic Society of Lund (2017), Royal Entomological Society of London (2018), Royal Swedish Academy of Sciences (2022) and Societas Ad Sciendum (2023). From 2025 she is a Wallenberg Scholar.

Science communication

Dacke has been a panel member on the Swedish TV show Studio Natur (currently streaming on SVT Play) since 2010. 
In 2012 Dacke was named best science communicator in Sweden in the national competition Forskar Grand Prix (Science Grand Prix). The same year she was one of the scientists to appear in a series about research and researchers produced by the Swedish Foundation for Strategic Research and TV4. In 2019 she gave the Royal Entomological Society's Verrall Lecture at the Natural History Museum, London, speaking about As the crow flies, and the beetle rolls: straight-line orientation from behaviour to neurons. Dacke has authored two books: Trädgårdsdjur - myllret och mångfalden som växterna älskar (Roos & Tegnér, 2020, co-authored with Låtta Skogh) and Taggad att leva - igelkottens liv, historiska resa och hotande framtid (Roos & Tegnér, 2021).
Improve performance of serializeV1SCT BenchmarkSCTOld 1000000 2135 ns/op 448 B/op 13 allocs/op BenchmarkSCTNew 10000000 151 ns/op 64 B/op 1 allocs/op Hi, thanks for the PR, apologies it's taken so long to get to it. Few comments below. In regards to the 0xFFFF SCT size limit. After looking through the RFC it seems that limit only applies when the SCT is placed in the SCTList structure for OCSP and TLS responses (although that is the only place I can see it used). I've removed it from the serialisation code. New Patch follows. Benchmark results: BenchmarkSCTOld 1000000 2135 ns/op 448 B/op 13 allocs/op BenchmarkSCTNew 10000000 151 ns/op 64 B/op 1 allocs/op Benchmark Code: func BenchmarkSCTOld(b *testing.B) { sct := defaultSCT() for i := 0; i < b.N; i++ { SerializeSCT(sct) } } Thanks!
A banner to notify of possible problems with adblock on analytics page [OSF-6307] Purpose fix JIRA problem OSF-6307 Changes A banner shows up if the user has adblock on the analytics page Side effects There shouldn't be any. Ticket https://openscience.atlassian.net/browse/OSF-6307 Good start, but needs a couple changes. Pass done. 🐧 Pass done. 🐧
About dev mapping of partitions New Linux user. Today I had the bright idea of making a backup of two partitions on an SD card to another SD card already in use. Did this: shrunk a partition on the backup SD, thus creating unallocated space, and copy-pasted the partitions to that space with gparted. I don't have two card reader drives on the laptop, so I used a USB-attachable SD-to-USB drive. All went well, except now Debian can access and mount the partitions whenever I read the new card from the USB-attached drive (they are mapped as /dev/sdbX), but when I put the card in the card reader slot, while it does see the partitions and show them as mmcblk0pX, it refuses to mount or read them. So, now I'm guessing that I had an oversimplified understanding of disk management in Linux, and would appreciate it if someone could explain this and suggest a way to fix it. I thought that the device mapping would be done automatically from the partition mapping, which seems not to be the case... you're probably not so dumb after all. But might you elaborate a little on refuses? Maybe error messages? And probably a little lsblk output couldn't hurt... Very good question though. Also - is device-mapper actually a factor here - like did you use device-mapper to layer the partitions somehow? I'm not sure anymore, I thought device mapper was what created the /dev/* listings.. On refuses, I simply get a message: cannot mount device not found probably not then.
System and method for stabilizing high order sigma delta modulators ABSTRACT A system and method is provided for stabilizing high order sigma delta modulators. The system includes an integrator having a limiter in the feedback path of the integrator. The integrator combines an input signal with a feedback signal generated by the limiter to produce an integrated output signal. The output signal is output to the next component of the sigma delta modulator. In addition, the output signal is fed back through the limiter. When an output signal received in the feedback path by the limiter exceeds the threshold value of the limiter, the limiter is activated and clamps the output signal to produce a limited signal. The limited signal is combined with the input signal to the integrator to produce the output signal. FIELD OF THE INVENTION The present invention is related to data communications networks and modulation. More particularly, the present invention is related to techniques for stabilizing high order sigma delta modulators. BACKGROUND OF THE INVENTION Modern communications technologies rely on the ability of equipment to quickly and efficiently convert data between analog and digital formats. As a result, analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) have become central components in a wide variety of applications. As these applications have become increasingly sophisticated, the demand for greater bandwidth and resolution from their ADCs and DACs has risen dramatically. At a high level, an ADC receives an analog signal and produces a digital signal and a DAC receives a digital signal and produces an analog signal. In an ADC, the digital signal comprises a sequence of discrete quantized values that, over time, track the parameter variations of the analog signal. Quantization error is an unwanted byproduct of this quantization process. DACs and ADCs are characterized by their sampling frequency and degree of resolution. The ability of a converter to digitize an analog signal faithfully is a direct function of both of these parameters. As the sampling frequency is increased, the analog signal is sampled at more points in time. As the degree of resolution is refined, differences between the digital signal and analog signal are minimized. Many distinct architectures exist for DACs and ADCs including “flash,” “pipelined,” “successive approximation,” and “sigma delta” architectures. Each architecture has benefits and drawbacks. Paramount among these is a tradeoff between bandwidth and degree of resolution. Of these architectures, sigma delta converters have exhibited the best balance between bandwidth and resolution. A conventional sigma delta converter includes a sigma delta modulator followed by a decimator. The sigma delta modulator samples the input signal at a rate that is much faster than the Nyquist rate. The use of oversampling combined with noise shaping functionality allows a sigma delta modulator to move most of the quantization noise outside the band of the signal. The decimator then reduces the frequency of the resultant output and filters the out of band noise. FIG. 1 illustrates a conventional first-order, single-stage, single-bit sigma delta modulator 100. A sigma delta modulator can be included in either an ADC or a DAC. In addition, a sigma delta modulator can have either an analog or a digital implementation. A single converter can contain an analog or digital sigma delta modulator or both.
Sigma delta modulator 100 includes a summing node 110, an integrator 120, a single-bit quantizer 150, and a converter 160. Summing node 110, integrator 120, and quantizer 150 are connected, respectively, in series along signal path 108. Converter 160 is connected in parallel with signal path 108 between node N₀ 104 and summing node Σ₀ 110. Initially, a signal x[n] passes through summing node 110 and is sampled by integrator 120. Integrator 120 integrates signal x[n] over a given period of time to produce an integrated signal v[n]. Integrated signal v[n] is transmitted to single-bit quantizer 150. Single-bit quantizer 150 rounds integrated signal v[n] to the closest of two preset levels (i.e., a single bit) to produce a quantized signal y[n]. To minimize the difference between quantized signal y[n] and signal x[n], quantized signal y[n] is transmitted to converter 160 and converted to produce a feedback signal fbk[n], which is fed back to summing node 110. At summing node 110, feedback signal fbk[n] is subtracted from signal x[n] to produce a difference signal u[n]. Difference signal u[n] passes into integrator 120 and the process described above is repeated. Essentially, integrator 120 integrates the difference between quantized signal y[n] and signal x[n]. Over a large number of samples, integrator 120 forces this difference to approach zero. Thus, signal x[n] is received by modulator 100 and converted into quantized signal y[n], produced at node N₀ 104. The quantized signal y[n] comprises a stream of quantized values. Typically, this stream is produced at a modulator frequency that is several times greater than the carrier frequency of analog signal x[n]. The ratio of the modulator frequency to the Nyquist frequency is referred to as the oversampling ratio. Signal-to-noise ratio (SNR) is an important measurement in a sigma-delta converter because a higher SNR translates into smaller distortion between digital and analog signals. In a sigma delta modulator, the SNR improves when the oversampling ratio is increased. For example, as a “rule of thumb,” the SNR for an ADC improves by 9 dB for every doubling of its oversampling ratio. The use of high-order sigma delta modulators further improves the SNR. As a result, high-order single-loop sigma delta modulators are desirable for high SNR applications such as digital voice and audio. High-order sigma delta modulators can be implemented using a wide variety of architectures. For example, a sigma delta modulator could have either a single stage or cascaded (also known as MASH) architecture. In a cascaded architecture, two or more low-order sigma delta modulators are coupled to produce a high-order sigma delta modulator. A modulator is considered high-order if it contains 3 or more integrator segments. A detailed explanation of the various high-order architectures is provided in the book “Delta-Sigma Data Converters—Theory, Design and Simulation,” Norsworthy et al., IEEE Press, Piscataway, N.J. (1997), which is incorporated herein by reference in its entirety. A high-order single stage single-loop sigma delta modulator can either follow a multiple feed forward topology or a multiple feedback topology. FIG. 2 is a block diagram of a third-order single-loop sigma delta modulator 200 having a multiple feed forward topology. Modulator 200 has a first summing node 210, a first integrator 220, a second integrator 230, a third integrator 240, a second summing node 212, and a quantizer 150 connected, respectively, in series along signal path 208.
A first amplifier 272 is connected in parallel with signal path 208 between node N₁ 206 and the second summing node 212. A second amplifier 274 is connected in parallel with signal path 208 between node N₂ 207 and the second summing node 212. A third amplifier 276 is connected in parallel with signal path 208 between node N₀ 104 and the first summing node 210. Referring to FIG. 2, signal x[n] passes through the first summing node 210 and is sampled by the first integrator 220. First integrator 220 produces an integrated signal v₁[n]. Integrated signal v₁[n] is transmitted to second integrator 230 and to first amplifier 272. The first amplifier amplifies the signal by c₁ and generates signal v_(c1)[n]. The second integrator 230 integrates signal v₁[n] over a given period of time to produce an integrated signal v₂[n]. Integrated signal v₂[n] is transmitted to third integrator 240 and to second amplifier 274. Second amplifier 274 amplifies the signal by c₂ and generates signal v_(c2)[n]. The third integrator 240 integrates the signal v₂[n] over a given period of time to produce an integrated signal v₃[n]. At the second summing node 212, integrated signal v₃[n] is added to amplified signals v_(c1)[n] and v_(c2)[n] resulting in signal w[n]. Signal w[n] is input to single-bit quantizer 150. Single-bit quantizer 150 produces a quantized signal y[n]. Quantized signal y[n] is transmitted to third amplifier 276 and amplified by C₃ to produce a feedback signal fbk[n], which is fed back to the first summing node 210. At the first summing node 210, feedback signal fbk[n] is subtracted from signal x[n] to produce a difference signal u[n]. Difference signal u[n] passes into the first integrator 220 and the process described above is repeated. FIG. 3 is a block diagram of a third-order single-loop sigma delta modulator 300 having a multiple feedback topology. Modulator 300 comprises a first summing node 310, a first integrator 320, a first amplifier 372, a second summing node 314, a second integrator 330, a second amplifier 374, a third summing node 316, a third integrator 340, and a quantizer 150 connected, respectively, in series along signal path 308. A gain amplifier 378 is connected in parallel with signal path 308 between node N₁ and the second summing node 314. A third amplifier 376 is connected in parallel with signal path 308 and feeds back a signal to the first, second, and third summing nodes 310, 314, 316. The integrators shown in FIGS. 1 through 3 can either be digital integrators or analog integrators. Many architectures exist for implementing analog and digital integrators. FIG. 4A is a block diagram illustrating an exemplary architecture for a conventional digital integrator 420. Digital integrator 420 comprises an adder 422 and a delay 424. The delayed output of the adder 422 is fed back to the adder 422 along signal path 426. Conventional analog integrators are typically implemented using either a switch-capacitor or a continuous time design. FIG. 4B illustrates an exemplary switch-capacitor analog integrator 430. Integrator 430 includes an operational amplifier (op amp) 432, a first capacitor 434, a second capacitor 436, phase 1 switches S1 and S4, and phase 2 switches S2 and S3. Switch S1, first capacitor 434, and Switch S4 are connected, respectively, in series between the input of the integrator and the inverting (negative) terminal of op amp 432. Second capacitor 436 is connected between the inverting terminal of op amp 432 and the op amp output.
Switch S2 is connected between N₁ and ground. S4 is connected between N₂ and ground. The 3^(rd) order single-loop sigma-delta modulators shown in FIG. 2 and FIG. 3 achieve 14-bit resolution with an oversampling ratio of 40. The multiple feedback topology of FIG. 3 is especially useful for digital sigma-delta modulation since the one-bit feedback could make its implementation multiplier free. Notwithstanding these benefits, all high-order sigma delta modulators are susceptible to instability. When a large signal is received as input to the modulator, the internal filter states of the modulator exhibit large signal oscillation. The modulator then produces an output of alternating long strings of 1's or 0's. The signal-to-noise-plus-distortion ratio (SNDR) drops dramatically when the sigma delta modulator is operating in an unstable state. A detailed explanation of the stability problem is provided in the book “Delta-Sigma Data Converters—Theory, Design and Simulation,” Norsworthy et al., referenced above. A current technique for addressing the stability problem is the integrator reset technique. This technique is frequently used when an analog switch capacitor integrator is built using CMOS technology. In this technique, the modulator determines when instability exists and triggers a short pulse to reset the integrator. One method for determining the existence of instability is through the use of a comparator. In another method, modulator instability is detected when a sufficiently long string of 1's or 0's occurs at the output of the modulator. When instability is detected, the integrator is reset with a short pulse. If the frequency of the reset event is lower than the cut-off frequency of the subsequent filter, a large amount of noise may appear at the output of the modulator. Another current technique for addressing the stability problem is the state-variable clamping technique. In the state-variable clamping technique, a limiter is placed in the forward path of the integrator. FIG. 5A depicts a block diagram of an analog integrator 530 implementing the state variable stabilizing technique. Analog integrator 530 comprises an op amp 532, a capacitor 536, a limiter 538, and a resistor 533. Limiter 538 has thresholds of +V and −V. When the voltage across capacitor 536 exceeds the limiting voltage threshold, the output of the integrator clamps. Since the output voltage cannot go to the rail, the op amp 532 remains in the linear region of operation and stability improves. FIG. 5B depicts a block diagram of a digital integrator 520 implementing the state variable stabilizing technique. The digital integrator 520 comprises an adder 522, a delay 524, and a limiter 528. Similar to the analog integrator 530, the output of the digital integrator is clamped to the limiter value when the input exceeds the limiting threshold. A drawback of the state-variable clamping technique is that the signal path is blocked when the signal level exceeds a certain level, resulting in significantly deteriorated SNDR for large signals. The current state-variable clamping technique described above is viable for the multiple feed-forward topology illustrated in FIG. 2 where the last integrator is limited in the forward path. When the last integrator degrades into DC, two other feed-forward paths still exist to the output. Thus, when the limiter in the third integrator is active, the 3^(rd) order sigma delta modulator shown in FIG. 2 degrades into a 2^(nd) order sigma delta modulator.
The 2^(nd) order sigma delta modulator is stable but it has a lower SNDR. However, this limiting method is not appropriate for the multiple feedback topology shown in FIG. 3. In this topology, when the third integrator degrades to DC and blocks the forward path, no more signal paths to the output of the modulator exist. A need therefore exists for an integrator that can improve the SNDR when a high-order sigma delta modulator becomes unstable without blocking the input signal path or degrading the input signal into a DC signal. BRIEF SUMMARY OF THE INVENTION The present invention is directed to a system and method for stabilizing high order sigma delta modulators. In accordance with embodiments of the present invention, the system comprises an integrator having a limiter in the feedback path. In an embodiment of the present invention, the integrator is a digital integrator comprising an adder connected in series to a delay along the forward signal path. The digital integrator further comprises a limiter connected in parallel with the forward signal path between the output and the adder. The limiter is connected along the feedback path of the integrator. In another embodiment of the invention, the integrator is an analog integrator comprising an op amp, a capacitor, a first resistor, a second resistor, and a limiter in the feedback path. The second resistor is connected in series with the limiter. The capacitor is connected between the inverting (negative) terminal of the op amp and the output of the op amp, in parallel with the series connection of the second resistor and the limiter. The first resistor is connected between the input of the integrator and the inverting terminal of the op amp. The non-inverting (positive) terminal of the op amp is connected to ground. Voltage, V_(in), is applied to the integrator at the input. In another embodiment of the invention, the integrator is an analog integrator comprising an op amp, a capacitor, a first resistor, a second resistor, and a limiter in the feedback path. The second resistor is connected in series with the limiter. The capacitor is connected between the inverting (negative) terminal of the op amp and the output of the op amp, in parallel with the series connection of the second resistor and the limiter. The first resistor is connected between ground and the inverting terminal of the op amp. The non-inverting (positive) terminal of the op amp is connected to the input voltage, V_(in). In an embodiment of the present invention, when the stabilizing system receives an input signal, the input signal is combined with a feedback signal and is delayed by a delay to produce an output signal. The output signal is fed back along the feedback path into the limiter. If the output signal does not exceed the threshold of the limiter, the limiter is not activated and the signal passes through to the adder. The signal is then combined with the input signal and the process is repeated. If the output signal exceeds the thresholds of the limiter, the limiter is activated. When the limiter is activated, the limiter clamps the output signal to the threshold value of the limiter. The output of the limiter is then input to the adder and combined with the input signal and the process is repeated.
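For illustration only (this paraphrase is not part of the original disclosure), the limiting behavior just described can be written as a pair of difference equations, using the signal names r[n], v[n], fdbk[n] and the thresholds +V, −V that appear later in the detailed description of FIGS. 6 and 7:

v[n] = r[n−1] + fdbk[n−1]

fdbk[n] = v[n]  when −V ≤ v[n] ≤ +V;  fdbk[n] = +V  when v[n] > +V;  fdbk[n] = −V  when v[n] < −V

By contrast, the conventional digital integrator of FIG. 4A follows v[n] = r[n−1] + v[n−1], so its internal state can grow without bound. Limiting only the fed-back state keeps the accumulated value bounded while the forward path from r[n] to v[n] remains open, which is the property the feedback-limited integrator relies on.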
In an embodiment of the present invention, a method for stabilizing high-order sigma delta modulation includes the steps of combining an input signal and a feedback signal to produce a difference signal; integrating the difference signal to obtain an integrated signal; and quantizing the integrated signal to obtain a quantized signal representing a high-order sigma delta modulation of the input signal. The integrating step includes integrating with feedback and limiting the integrating to maximum and minimum voltage threshold values (+V, −V) when integrator feedback voltages exceed the maximum and minimum threshold values. BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention. FIG. 1 is a block diagram of a first-order, single-stage, sigma delta modulator. FIG. 2 is a block diagram of a third-order, single-loop sigma delta modulator having a multiple feed forward topology. FIG. 3 is a block diagram of a third-order, single-loop sigma delta modulator having a multiple feedback topology. FIG. 4A is a block diagram of a conventional digital integrator. FIG. 4B is a block diagram of a conventional analog switched capacitor integrator. FIG. 5A is a block diagram of an analog switched capacitor integrator implementing the conventional state variable stabilizing technique. FIG. 5B is a block diagram of a digital integrator implementing the conventional state variable stabilizing technique. FIG. 6 is a block diagram of a digital integrator having a feedback limiter in accordance with embodiments of the present invention. FIG. 7 is a flowchart illustrating a method for stabilizing a sigma delta modulator in accordance with embodiments of the present invention. FIGS. 8A and 8B are block diagrams of analog switched capacitor integrators in accordance with embodiments of the present invention. FIG. 9 is a graph of SNDR versus input amplitude for a conventional integrator using a limiter in the feed forward path and an integrator having a limiter in the feedback path in accordance with embodiments of the present invention. The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit in the corresponding reference number. DETAILED DESCRIPTION OF THE INVENTION FIG. 6 is a block diagram of a digital feedback limiter integrator 620 for stabilizing a sigma delta modulator in accordance with an embodiment of the present invention. Feedback limiter integrator 620 can be implemented in any high order sigma delta modulator architecture including the single loop multiple feed-forward topology shown in FIG. 2 and the multiple feedback topology shown in FIG. 3. In any architecture, one or more of the integrators is replaced with a feedback limiter integrator 620 in accordance with embodiments of the present invention. FIG. 6 shows an exemplary digital implementation of the feedback limiter integrator 620. 
Integrator 620 includes an adder 622 and a delay 624 connected in series along signal path 608. The system also comprises a limiter 628 connected along feedback path 626 between output node 625 and adder 622. In feedback limiter integrator 620, limiter 628 is in the feedback path of the integrator. Limiter 628 has threshold values +V and −V, and clamps a received signal to these threshold values when the limiter is active. The threshold values of limiter 628 are set based on the application for which the sigma delta modulator using integrator 620 is designed. Persons skilled in the relevant art(s) will recognize that configurations and arrangements other than those provided in FIG. 6 can be used without departing from the spirit and scope of the present invention. FIG. 7 depicts a flowchart 700 of a method for stabilizing a high order sigma delta modulator using a digital feedback limiter integrator in accordance with embodiments of the present invention. The invention, however, is not limited to the description provided by the flowchart 700. Rather, it will be apparent to persons skilled in the relevant art(s) from the teachings provided herein that other functional flows are within the scope and spirit of the present invention. The flowchart 700 will be described with continued reference to the exemplary digital feedback limiter integrator 620 described in reference to FIG. 6, above. However, the invention is not limited to that embodiment. The method starts when integrator 620 receives an input signal, r[n] (step 710). In step 720, the input signal, r[n], is combined with feedback signal, fdbk[n], in adder 622 to produce signal r₁[n]. Signal r₁[n] is then delayed by delay 624 to produce integrator output signal v[n] (step 730). The output signal v[n] is then output (step 740). In step 750, the output signal, v[n], is also fed back along feedback path 626 into limiter 628. The integrator next determines whether the output signal exceeds the thresholds of the limiter. If the output signal, v[n], does not exceed the threshold of the limiter, the limiter is not activated and the signal passes through to the adder 622. In this circumstance, the feedback signal fdbk[n] is equal to output signal, v[n]. If the output signal, v[n], exceeds the thresholds of the limiter, the limiter is activated. When the limiter is activated, the limiter clamps the output signal, v[n], to the threshold value of the limiter. The limiter produces a feedback signal, fdbk[n], which is the clamped output signal, v[n]. The feedback signal, fdbk[n], is then input to adder 622 and combined with the input signal, r[n], and the process is repeated. When the input signal is very large, the feedback limiter integrator 620 limits the integrator output, v[n], while allowing a signal to still pass through the integrator in the forward signal path. In this way, the present invention degrades the integrator into linear proportion when the limiter is activated. As a result, the method in accordance with the present invention improves the SNDR of the sigma delta modulator when the modulator is experiencing instability. FIG. 8A depicts a block diagram of an exemplary analog feedback limiter integrator 830. Analog feedback limiter integrator 830 comprises an op amp 832, a capacitor 836, a limiter 838, a first resistor 833, and a second resistor 835. Second resistor 835 is connected in series with limiter 838. Capacitor 836 is connected between the inverting (negative) terminal of op amp 832 and the output of op amp 832.
Capacitor 836 is connected in parallel with the series combination of the second resistor 835 and limiter 838. The first resistor 833 is connected between the input of the integrator and the inverting terminal of op amp 832. The non-inverting (positive) terminal of op amp 832 is connected to ground. Voltage, V_(in), is applied to the integrator at the input. FIG. 8B depicts a block diagram of an alternative analog feedback limiter integrator 840. Analog feedback limiter integrator 840 comprises an op amp 842, a capacitor 846, a limiter 848, a first resistor 843, and a second resistor 845. Second resistor 845 is connected in series with limiter 848. Capacitor 846 is connected between the inverting (negative) terminal of op amp 842 and the output of op amp 842. Capacitor 846 is connected in parallel with the series combination of the second resistor 845 and limiter 848. The first resistor 843 is connected between ground and the inverting terminal of op amp 842. The non-inverting (positive) terminal of op amp 842 is connected to input voltage, V_(in). FIG. 9 is a graph of SNDR versus input amplitude. Graph 910 represents a graph of the performance of an integrator having a limiter in the feed forward path according to conventional stabilization techniques. Graph 920 represents a graph of the performance of an integrator having a limiter in the feedback path in accordance with embodiments of the present invention. At low signal levels, the SNDRs are identical for both graphs because the limiters are not yet active. However, when the signal becomes large enough to activate the limiters, the feedback limiter technique depicted in graph 920 shows significantly better SNDR than the conventional stabilizing technique in graph 910. While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. 1. A sigma delta modulator comprising: one or more integrators, wherein at least one integrator comprises: an input; an output; a forward signal path between the input and the output; means for signal integration in the forward signal path, the means for signal integration coupled to the input and the output; a feedback path between the output of the integrator and the input of the integrator; and a limiter, in the feedback path, coupled to the output and the input. 2. The sigma delta modulator of claim 1 wherein the at least one integrator is a digital integrator. 3. The sigma delta modulator of claim 1 wherein the means for signal integration comprises: an adder in the forward signal path coupled to the input; and a delay coupled between the adder and the integrator output in the forward signal path. 4. The sigma delta modulator of claim 1 wherein the at least one integrator is an analog integrator. 5. The sigma delta modulator of claim 4 wherein the at least one integrator is a switched capacitor integrator. 6.
The sigma delta modulator of claim 1 wherein the means for signal integration comprises: an operational amplifier having a negative terminal, a positive terminal, and an output, wherein the positive terminal is coupled to a ground; a first resistor coupled between the integrator input and the negative terminal of the operational amplifier; a capacitor coupled between the negative terminal of the operational amplifier and the output of the operational amplifier; a second resistor coupled to the negative terminal of the operational amplifier; and the limiter coupled to the second resistor and the output of the operational amplifier. 7. The sigma delta modulator of claim 1 wherein the means for signal integration comprises: an operational amplifier having a negative terminal, a positive terminal, and an output, wherein the positive terminal is coupled to the integrator input; a first resistor coupled between a ground and the negative terminal of the operational amplifier; a capacitor coupled between the negative terminal of the operational amplifier and the output of the operational amplifier; a second resistor coupled to the negative terminal of the operational amplifier; and the limiter coupled to the second resistor and the output of the operational amplifier. 8. The sigma delta modulator of claim 1 wherein the sigma delta modulator has at least three integrators. 9. The sigma delta modulator of claim 8 wherein the sigma delta modulator has a feed forward topology. 10. The sigma delta modulator of claim 8 wherein the sigma delta modulator has a multiple feedback topology. 11. The sigma delta modulator of claim 8 wherein the sigma delta modulator has a cascaded topology. 12. A high-order single-loop sigma delta modulator, comprising: a plurality of integrators arranged along a signal path between a modulator input and modulator output; and a quantizer arranged between said integrators and the modulator output; wherein at least one of said integrators includes a limiter arranged in an integrator feedback path parallel to a forward signal path through the integrator. 13. The sigma delta modulator of claim 12, wherein said integrators include first, second and third integrators arranged in a multiple feed forward topology, each integrator having an input and output, and further comprising: a first summing node arranged along the signal path between the modulator input and an input of said first integrator, said first summing node being further coupled to a modulator feedback path in parallel with the signal path between the modulator output and the first summing node; and a second summing node coupled between the output of said third integrator and an input to said quantizer. 14. The sigma delta modulator of claim 13, further comprising: a first amplifier coupled between the output of the first integrator and said second summing node; a second amplifier coupled between the output of the second integrator and said second summing node; and a third amplifier arranged along the modulator feedback path between the modulator output and the first summing node. 15. The sigma delta modulator of claim 12 wherein the at least one integrator includes an adder and a delay in the forward signal path, the delay being coupled between the adder and the integrator output in the forward signal path, and said limiter being coupled between the adder and the integrator output in the integrator feedback path. 16.
The sigma delta modulator of claim 12 wherein the at least one integrator includes an operational amplifier in the forward signal path, and a resistor coupled in series to said limiter in the integrator feedback path. 17. The sigma delta modulator of claim 12, wherein said integrators include first, second and third integrators arranged in a multiple feedback topology, each integrator having an input and output, and further comprising: a first summing node arranged along the signal path between the modulator input and an input of said first integrator, said first summing node being further coupled to a first modulator feedback path in parallel with the signal path between the modulator output and the first summing node; a second summing node arranged along the signal path between the first and second integrators, said second summing node being further coupled to the first modulator feedback path and to a second modulator feedback path in parallel with the signal path between the output of the third integrator and the second summing node; and a third summing node arranged along the signal path between the second and third integrators, said third summing node being further coupled to the first modulator feedback path. 18. The sigma delta modulator of claim 17, further comprising: a first amplifier coupled between the output of the first integrator and said second summing node; a second amplifier coupled between the output of the second integrator and said third summing node; a third amplifier arranged along the first modulator feedback path between the modulator output and the first summing node; and a gain amplifier arranged along the second modulator feedback path in parallel with the signal path between the output of the third integrator and the second summing node. 19. A method for stabilizing high-order sigma delta modulation, comprising: combining an input signal and a feedback signal to produce a difference signal; integrating the difference signal to obtain an integrated signal; and quantizing the integrated signal to obtain a quantized signal representing a high-order sigma delta modulation of the input signal; wherein said integrating step includes integrating with feedback and limiting the integrating to maximum and minimum voltage threshold values (+V, −V) when integrator feedback voltages exceed the maximum and minimum threshold values.
const DEFAULT_BEM_CONFIG = { element: '__', modifier: '--', modifierValue: '-', }; /** * Class to contain bem methods * * @class Bem */ class Bem { constructor(settings = {}) { const { block = 'nameless', modifiers = [], config = DEFAULT_BEM_CONFIG } = settings; this.blockName = block; this.modifiers = modifiers; this.config = { ...DEFAULT_BEM_CONFIG, ...config }; } /** * Construct class from block name and modifiers * * @method block * @param {Object} props={} * @param {Object} passedModifiers={} * @return {String} */ block(props = {}, passedModifiers = {}) { const classList = []; if (props.className) { classList.push(props.className); } classList.push(this.blockName); const modifiersFromPropsAsObject = this.modifiers.reduce( (collector, modifier) => ({ ...collector, [modifier]: props[modifier] }), {}); const classListFromModifiers = modifiersFromObj(this.blockName, { ...modifiersFromPropsAsObject, ...passedModifiers }, this.config); classList.push(...classListFromModifiers); return classList.join(' '); } /** * Construct class from element name and modifiers * * @method element * @param {String} elementName * @param {Object} modifiers * @return {String} */ element(elementName, modifiers = {}) { const { element: elementDelimiter } = this.config; const elementClass = `${this.blockName}${elementDelimiter}${elementName}`; const modifiersClasses = modifiersFromObj(elementClass, modifiers, this.config); return [elementClass, ...modifiersClasses].join(' '); } } export default Bem; /* HELPERS */ /** * Create class name with modifier * * @method createModifier * @param {String} baseClass * @param {String} modifierName * @param {any} modifierValue * @param {any} config * @return {String} */ function createModifier(baseClass, modifierName, modifierValue = null, config = DEFAULT_BEM_CONFIG) { const { modifier: modifierDelimiter, modifierValue: modifierValueDelimiter, } = config; // check only null, undefined and false values to save 0 value if (modifierValue == null || modifierValue === false) { return ''; } const className = `${baseClass}${modifierDelimiter}${modifierName}`; // if no modifier value passed or it is boolean, then return modifier class itself if (modifierValue === true) { return className; } const classList = []; if (Array.isArray(modifierValue)) { const modifierClassList = modifierValue .filter( value => value != null && value !== false ) .reduce( (collector, value) => { collector.push(...createModifier(baseClass, modifierName, value, config).split(' ')); return collector; }, []); classList.push(...modifierClassList); } else { classList.push(`${className}${modifierValueDelimiter}${modifierValue}`); } return [...new Set(classList)].join(' '); } /** * Get list of modifier classes * * @method modifiersFromObj * @param {String} baseClass * @param {Object} modifiers * @param {Object} config * @return {Array} */ function modifiersFromObj(baseClass, modifiers, config) { const classList = []; Object.keys(modifiers).forEach( modifier => { modifiers[modifier] != null && modifiers[modifier] !== false && classList.push(createModifier(baseClass, modifier, modifiers[modifier], config)); }); return classList; }
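A short usage sketch may make the behaviour of the class above easier to follow. The block, modifier and element names below are invented for illustration and are not part of the module; the expected strings follow directly from the default delimiters in DEFAULT_BEM_CONFIG.

import Bem from './Bem'; // assumed file name

const bem = new Bem({ block: 'card', modifiers: ['size'] });

// Block class built from component props plus explicitly passed modifiers
bem.block({ className: 'extra', size: 'large' }, { active: true });
// -> 'extra card card--size-large card--active'

// Element class with its own modifiers; null/false values are skipped
bem.element('title', { hidden: false, theme: 'dark' });
// -> 'card__title card__title--theme-dark'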
Thread:The Deranged Umbreon/@comment-24822266-20140422232434/@comment-24569600-20140423164214 Well, Mojang said they would make a "Nether Replacement Structure", whatever that means.
#ifndef __SPI_h__ #define __SPI_h__ #include "System.h" #include "stm32f4xx_spi.h" enum SPI_Channel { spi_c1, spi_c2, spi_c3, spi_c4, spi_c5 }; class SPI_Interface { private: void configure(SPI_InitTypeDef* SPI_InitStruct){ /* SPI default settings: * - Full duplex mode * - Communicate as master * - 8 bit wide communication * - SPI Mode 0 * - Internal NSS management * - 12.5 MHz Baud Rate TODO: check this again * - MSB first */ SPI_InitStruct->SPI_Direction = SPI_Direction_2Lines_FullDuplex; SPI_InitStruct->SPI_Mode = SPI_Mode_Master; SPI_InitStruct->SPI_DataSize = SPI_DataSize_8b; SPI_InitStruct->SPI_CPOL = SPI_CPOL_Low; SPI_InitStruct->SPI_CPHA = SPI_CPHA_1Edge; SPI_InitStruct->SPI_NSS = SPI_NSS_Soft | SPI_NSSInternalSoft_Set; SPI_InitStruct->SPI_BaudRatePrescaler = SPI_BaudRatePrescaler_16; SPI_InitStruct->SPI_FirstBit = SPI_FirstBit_MSB; }; public: SPI_Channel SPIc; SPI_TypeDef* SPIx; SPI_Interface(SPI_Channel SPIc) { this->SPIc = SPIc; switch(SPIc){ case spi_c1: this->SPIx = SPI1; break; case spi_c2: this->SPIx = SPI2; break; case spi_c3: this->SPIx = SPI3; break; case spi_c4: this->SPIx = SPI4; break; case spi_c5: this->SPIx = SPI5; break; } } void begin(){ SPI_InitTypeDef SPI_InitStruct; // Initialize alternate function pins A5, A6, and A7 switch(this->SPIc){ case spi_c1: RCC_APB2PeriphClockCmd(RCC_APB2Periph_SPI1, ENABLE); configure_GPIO(PA5, NO_PU_PD, ALT); configure_GPIO(PA6, NO_PU_PD, ALT); configure_GPIO(PA7, NO_PU_PD, ALT); GPIO_PinAFConfig(GPIOA, GPIO_PinSource5, GPIO_AF_SPI1); // SCLK GPIO_PinAFConfig(GPIOA, GPIO_PinSource6, GPIO_AF_SPI1); // MISO GPIO_PinAFConfig(GPIOA, GPIO_PinSource7, GPIO_AF_SPI1); // MOSI break; case spi_c2: RCC_APB1PeriphClockCmd(RCC_APB1Periph_SPI2, ENABLE); configure_GPIO(PC7, NO_PU_PD, ALT); configure_GPIO(PB14, NO_PU_PD, ALT); configure_GPIO(PB15, NO_PU_PD, ALT); GPIO_PinAFConfig(GPIOC, GPIO_PinSource7, GPIO_AF_SPI2); // SCLK GPIO_PinAFConfig(GPIOB, GPIO_PinSource14, GPIO_AF_SPI2); // MISO GPIO_PinAFConfig(GPIOB, GPIO_PinSource15, GPIO_AF_SPI2); // MOSI break; case spi_c3: RCC_APB1PeriphClockCmd(RCC_APB1Periph_SPI3, ENABLE); break; case spi_c4: RCC_APB2PeriphClockCmd(RCC_APB2Periph_SPI4, ENABLE); configure_GPIO(PB13, NO_PU_PD, ALT); configure_GPIO(PA11, NO_PU_PD, ALT); configure_GPIO(PA1, NO_PU_PD, ALT); GPIO_PinAFConfig(GPIOB, GPIO_PinSource13, GPIO_AF6_SPI4); // SCLK GPIO_PinAFConfig(GPIOA, GPIO_PinSource11, GPIO_AF6_SPI4); // MISO GPIO_PinAFConfig(GPIOA, GPIO_PinSource1, GPIO_AF_SPI4); // MOSI break; case spi_c5: RCC_APB2PeriphClockCmd(RCC_APB2Periph_SPI5, ENABLE); break; } configure(&SPI_InitStruct); SPI_Init(this->SPIx, &SPI_InitStruct); SPI_Cmd(this->SPIx, ENABLE); }; uint8_t transfer(uint8_t data){ this->SPIx->DR = data; // write data to be transmitted to the SPI data register while( !(this->SPIx->SR & SPI_I2S_FLAG_TXE) ); // wait until transmit complete while( !(this->SPIx->SR & SPI_I2S_FLAG_RXNE) ); // wait until receive complete while( this->SPIx->SR & SPI_I2S_FLAG_BSY ); // wait until SPI is not busy anymore return this->SPIx->DR; // return received data from SPI data register }; /* void setClockDivider(uint8_t clock_divider){}; */ /* void setDataMode(){}; */ }; #endif
using Pombos.Classes; using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.IO; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows.Forms; namespace Columbus2019 { public partial class Form2 : Form { // Instances Pombo p = new Pombo(); // Variables private string lastID; // Start public Form2() { // Initialize InitializeComponent(); } private void Form2_Load(object sender, EventArgs e) { } public void LoadProfile(DataGridView Grid, int row) { // Store ID lastID = Grid.Rows[row].Cells[0].Value.ToString(); // Assign info to labels txtPopName.Text = Grid.Rows[row].Cells[2].Value.ToString(); txtPopNumber.Text = Grid.Rows[row].Cells[1].Value.ToString(); txtPopGender.Text = Grid.Rows[row].Cells[4].Value.ToString(); txtPopDad.Text = Grid.Rows[row].Cells[5].Value.ToString(); txtPopMom.Text = Grid.Rows[row].Cells[6].Value.ToString(); txtPopState.Text = Grid.Rows[row].Cells[7].Value.ToString(); txtPopNotes.Text = Grid.Rows[row].Cells[8].Value.ToString(); // Image decoding byte[] imgData = null; if (Grid.Rows[row].Cells[9].Value != DBNull.Value && (imgData = (byte[])Grid.Rows[row].Cells[9].Value) != null && imgData.Length > 0) { imgData = (byte[])Grid.Rows[row].Cells[9].Value; using (MemoryStream memoryStream = new MemoryStream(imgData, 0, imgData.Length)) { memoryStream.Write(imgData, 0, imgData.Length); pictureBox1.Image = Image.FromStream(memoryStream, true); } } } private void UpdateProfile_Click(object sender, EventArgs e) { // ID bool convN0 = int.TryParse(lastID, out int result); if (convN0 == true) { p.PomboID = result; } else { MessageBox.Show("ID Inválido!"); } // State p.State = txtPopState.Text; // Notes p.Notes = txtPopNotes.Text; // Update data into DB bool successful = p.UpdateNotes(p); if (successful == true) { MessageBox.Show("Pombo atualizado com sucesso!"); } else { MessageBox.Show("Erro ao atualizar pombo! Tente novamente."); } // Close this dialog Close(); // Avoid minimizing form1 by activating it Program.form1.Activate(); // Clear input fields from form1 Program.form1.Clear(); // Refresh GridDataView Program.form1.RefreshGrid(); } private void label1_Click(object sender, EventArgs e) { } private void label2_Click(object sender, EventArgs e) { } private void label3_Click(object sender, EventArgs e) { } private void txtPopGender_Click(object sender, EventArgs e) { } private void txtPopDad_Click(object sender, EventArgs e) { } private void txtPopMom_Click(object sender, EventArgs e) { } } }
Making Zombot Stomp a 6 is way too extreme. Petal Morphosis is questionable. Ra Zombie is way too broken as you force the opponent to play a 2-cost.
Sniffing is one of the most prominent causes of attacks in the digitized computing environment. Using the many packet analyzers, or sniffers, available free of cost, network packets can be captured and analyzed. Sensitive information of the victim, such as user credentials, passwords and PINs, which is of considerable interest to attackers, can be stolen through sniffers. This is also the primary enabler of many variations of DDoS attacks in the network, drawn from its wide catalog of attacks. An effective and trusted framework for detecting and preventing such sniffing is therefore of great significance in today's computing. A counter-measure to avoid data theft is to encrypt sensitive information. This paper provides an analysis of the most prominent sniffing attacks, which is one of the most important strides toward guaranteeing system security. In addition, a lattice structure has been derived to show that sniffing is the prominent activity behind DoS and DDoS attacks.
Talk:Anne Bracegirdle Biography assessment rating comment WikiProject Biography Summer 2007 Assessment Drive The article may be improved by following the WikiProject Biography 11 easy steps to producing at least a B article. -- Yamara 22:10, 15 July 2007 (UTC) Discussion <IP_ADDRESS> added a final sentence "Nothing further is known of her life until her death in 1748". In order to get closure for the narrative, perhaps? But I've removed it, as it's not true. Her life is known all right, she was a famous person by then, it would have been odd if she'd disappeared from sight. People have contact with her and mention her ongoingly, for instance Colley Cibber talks in his autobiography about how charming she still is (in 1740). There just wasn't anything interesting enough going on in that part of her life to put in the article, I thought. She lived quietly.--Bishonen | Talk 11:18, 1 Jan 2005 (UTC) On quitting the stage I wish people would stop reverting my "quitted the stage" to "quit the stage". Both forms are equally correct and when that's the case it's Wikiquette to respect the original author's choice of style. The reason I prefer quitted is that it's less colloquial and exclusively modern, and "quitted the stage" is by way of being an idiom (old-fashioned, but hey, it's the 18th century). In most contexts, I would prefer quit, too, just not here. It's hardly worth a revert war, though. The next time somebody "corrects" it to quit, I'll just change it to "she left the stage".--Bishonen | Talk 11:18, 1 Jan 2005 (UTC)
Multiwindow does not work with Youtube When using Multiwindow on Android N, onPause() is called when touching the other window, causing the YouTube player to pause. As specified by the Multiwindow Lifecycle, the video should be paused in onStop(). Is there any way I can make this happen on my own, or do I need to wait for the API to update? Can we see some code which you are trying? Assuming you are talking about the YouTube Player API, can you file a bug on the Android N Issue Tracker? Preferably with a small sample project.
Why does the Skype incoming call window disappear in Awesome WM? Skype used to work OK on Awesome WM for some months, but recently incoming call windows do not show up at all. I've checked all the tabs and tried to minimise all the other windows, and there is no status bar entry, so it looks like it's not floated behind any of them. Removing the Awesome WM configuration file and restarting doesn't work. This affects me as well. I always have to tell people that I'll call them back. Same for you?
Give me the complete political dialog on the topic of Agriculture and Agri-Food that ends with: ...and Canada's agricultural trade around the world?.
// @flow import { resolve, dirname, basename } from 'path' import rimraf from 'rimraf' import { isEmpty } from 'lodash' import type { StoryPaths, ImgLog, ImgTest } from '../picturebook.types' import { replaceImage, getImageDiff, writeImage } from './image' type Params = { screenshots: Array<ImgLog>, files: Array<StoryPaths>, root: string, threshold?: number, overwrite?: boolean, } const diffRoot = resolve(__dirname, '../screenshot/reports/diffs') const toPct = num => `${num * 100}%` function updateReferenceImage({ imgFileName, referencePath, baseResponse, diffThreshold, }) { return replaceImage(imgFileName, referencePath) .then(() => ({ ...baseResponse, status: 'CREATED', diffThreshold: 0, })) .catch(e => ({ ...baseResponse, status: 'FAILED', error: `Updating image failed ${e.message}`, diffThreshold, })) } function compareImageGroup({ imgFileName, name, screenshots, platform, browser, root, overwrite = false, threshold = 0, }): Promise<ImgTest> { const browserKey = `${platform}.${browser}` const referenceFolder = resolve(root, dirname(name), '__screenshots__') const referenceFile = `${basename(name)}.${browserKey}.png` const referencePath = resolve(referenceFolder, referenceFile) const baseResponse = { name, browser, platform, error: null, diffPath: null, referencePath, screenshotPath: imgFileName, } if (!platform || !browser || !imgFileName) { return Promise.resolve({ ...baseResponse, status: 'FAILED', error: 'browser, platform and imgFileName are required parameters', diffThreshold: -1, }) } if (isEmpty(screenshots) || !screenshots[browserKey]) { return replaceImage(imgFileName, referencePath) .then(() => ({ ...baseResponse, status: 'CREATED', diffThreshold: 0, })) .catch(() => ({ ...baseResponse, status: 'FAILED', error: 'Unable to copy image', diffThreshold: -1, })) } const diffPath = resolve(diffRoot, screenshots[browserKey]) const derivedRefPath = resolve(root, screenshots[browserKey]) if (referencePath !== derivedRefPath) { return Promise.resolve({ ...baseResponse, status: 'FAILED', error: `Path mismatch ${referencePath} should match ${derivedRefPath}`, diffThreshold: -1, }) } return getImageDiff(imgFileName, referencePath) .then( ({ misMatchPercentage, isSameDimensions, dimensionDifference, getBuffer, }) => { const diffThreshold = parseFloat(misMatchPercentage) || 0 const aboveThreshold = diffThreshold > threshold const hasError = !isSameDimensions || aboveThreshold if (hasError && overwrite) { return updateReferenceImage({ imgFileName, referencePath, baseResponse, diffThreshold, }) } if (!isSameDimensions) { return { ...baseResponse, status: 'FAILED', error: `Image size mismatch of ${dimensionDifference.width}x${ dimensionDifference.height }px`, diffPath, diffThreshold, } } if (diffThreshold > threshold) { writeImage(getBuffer(), diffPath) return { ...baseResponse, status: 'FAILED', error: `Pixel differences of ${toPct( diffThreshold )} are above the threshold (${toPct(threshold)})`, diffPath, diffThreshold, } } return { ...baseResponse, status: 'SUCCESS', diffThreshold, } } ) .catch(e => ({ ...baseResponse, status: 'FAILED', error: `ImageDiff failed: ${e.message}`, diffThreshold: -1, })) } export default function compareImages({ screenshots, files, root, threshold, overwrite, }: Params) { // remove old diffs rimraf.sync(diffRoot) const results = [] return screenshots .map(screenshot => ({ root, threshold, overwrite, screenshots: {}, ...screenshot, ...files.find(({ name }) => name === screenshot.name), })) .reduce( (acc, current) => acc.then(() => 
compareImageGroup(current).then(result => results.push(result)) ), Promise.resolve() ) .then(() => results) }
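A hypothetical call site may help show how the pieces above fit together. The field names on the screenshot and file entries are inferred from the destructuring in compareImageGroup; the real ImgLog and StoryPaths types from picturebook.types may carry additional fields.

import compareImages from './compareImages'; // assumed module path

compareImages({
  root: '/path/to/stories',
  threshold: 0.01,
  overwrite: false,
  // StoryPaths entries, matched to screenshots by `name`
  files: [{ name: 'Button/primary' }],
  // ImgLog entries produced by the screenshot run
  screenshots: [
    {
      name: 'Button/primary',
      platform: 'osx',
      browser: 'chrome',
      imgFileName: '/tmp/latest/Button-primary.osx.chrome.png',
      // existing reference image, keyed by `${platform}.${browser}`
      screenshots: { 'osx.chrome': 'Button/__screenshots__/primary.osx.chrome.png' },
    },
  ],
}).then(results => {
  // each entry carries status, error, diffThreshold, diffPath, referencePath, screenshotPath
  console.log(results);
});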
JAMES HOGG. As we shall now be proceeding to Pocock's Diary for 1811, in which he records the appearance of the great comet of that year, it suitably enables us to direct more especial attention to our printer's love of nature, and his ardent pursuit of natural history. This he evinced in 1809, in his
We have found a way of penetrating the space of dynamical systems towards systems of arbitrary dimension exhibiting the nonlinear mixing of a large number of oscillation modes, through which extraordinarily complex time evolutions arise. The system design is based on assuring the occurrence of a number of Hopf bifurcations in a set of fixed points of a relatively generic system of ordinary differential equations, in which the main peculiarity is that the nonlinearities appear through functions of a linear combination of the system variables. The paper presents the design procedure and a selection of numerical simulations with a variety of designed systems whose dynamical behaviors are remarkably rich and full of unknown features. For concreteness, the presentation focuses on illustrating the oscillatory mixing effects on the periodic orbits, through which the harmonic oscillation born in a Hopf bifurcation becomes successively enriched with the intermittent incorporation of other oscillation modes of higher frequencies, while the orbit remains periodic and without the need for bifurcating instabilities. Even in the absence of a proper mathematical theory covering the nonlinear mixing mechanisms, we find enough evidence to expect that the oscillatory scenario is truly scalable concerning the phase space dimension, the multiplicity of involved fixed points and the range of time scales, so that extremely complex but ordered dynamical behaviors could be sustained through it.
Thread:Pinkwolflover/@comment-22439-20160221143922/@comment-24096680-20160221172402 It's not that, the system just posts on admin's behalf whenever a new user edits. Welcome!
OpenFL/HTML5 Applying shader to bitmap I'm trying to apply a shader to a Bitmap as it's described on this link. This is the code: var shader = new Shader (); shader.glFragmentSource = "..."; <- this part is not important shader.data.useAlphaImage = [ true ]; shader.data.uAlphaImage.input = alphaBitmapData; bitmap.filters = [ new ShaderFilter (shader) ]; But if I apply the shader it gets transparent, with no errors. Do I have to configure something to get it working? I am targeting HTML5. ShaderFilter is partially disabled in current OpenFL releases. The initial implementation was too slow to work on mobile; there are plans to revisit the feature again written in a different way. In the meantime, there is a beta API you could try: bitmap.shader = shader; Also, be aware OpenFL uses premultiplied alpha, so bear that in mind within your shader when it comes to alpha values. This should be represented in the default shader code. Thanks! That works :D ... Is there any place where I can find info about this kind of things? I don't find the openfl documentation really useful Until the documentation gets updated, just asking (and looking on the forums) is your best bet
Sharing a specific window / area Sharing a specific window or an area would be a nice capability - chromium has it as a functionality, as it uses it for chromecasting. By initial research, I assume this happens by setting chromeMediaSource to "desktop" and chromeMediaSourceId to a value returned by chooseDesktopMedia - https://developer.chrome.com/extensions/desktopCapture. Technically that is not implemented in Chromium but in Chrome's 'chrome app' libraries; I opened an issue requesting support for it in Electron a while back https://github.com/atom/electron/issues/1380 Recently got merged into Electron https://github.com/atom/electron/pull/2963 OH NICE FINALLY!!! Docs for it are here: https://github.com/atom/electron/blob/9c861b9ad37a9c6335dde2e59d3005742fe75150/docs/api/desktop-capturer.md This will let us implement a UI that asks the user what part of the screen they want to share. Currently we only share the entire screen. Being able to select only 1 screen would be pretty sweet :) Got it working in https://github.com/maxogden/screencat/commit/9e93952e8f5c1c3f714636d1b47acb447602f51b Here's the UI I hacked up: Because of the way it changes screen sharing, audio chat is currently not hooked up. I'll need to hook that up again with more code. Feedback welcome! LGTM Can anyone please answer this question: https://github.com/atom/electron/issues/4432
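For anyone landing here later, a rough sketch of how the merged desktopCapturer API can be wired into getUserMedia (adapted from the docs linked above; the exact call shape has changed across Electron versions, so treat this as illustrative rather than canonical):

const { desktopCapturer } = require('electron');

// Enumerate shareable windows and screens; each source has an id, a name and a thumbnail
desktopCapturer.getSources({ types: ['window', 'screen'] }, (error, sources) => {
  if (error) throw error;
  // In a real UI you would let the user pick from `sources` (the thumbnails work well for that)
  const source = sources[0];
  navigator.webkitGetUserMedia({
    audio: false,
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: source.id,
      },
    },
  }, stream => {
    // hand the stream to the WebRTC peer connection / video element
  }, err => console.error(err));
});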
With no explanation, chose the best option from "A", "B", "C" or "D". to buck when the first shot was fired, Appellant continued firing his pistol from atop an uncontrolled horse. Indeed, it was under the circumstances just described that Samantha, seated only a few feet from Jonathan and Gabe, was shot. It is therefore easily seen that if the horse had bucked in a slightly different way as Appellant continued to fire his gun, any of the shots could have hit Jonathan or Gabe as surely as the one that hit Samantha. Appellant’s conduct, as indicated by the Commonwealth’s evidence, exhibited an extreme indifference to the value of human life and created a substantial danger of death or serious physical injury to Jonathan and Gabe. Appellant was not entitled to a directed verdict on these two charges. See Port v. Commonwealth, 906 S.W.2d 327, 334 (Ky.1995) (<HOLDING>); Combs v. Commonwealth, 652 S.W.2d 859, 860-61 A: holding that evidence was sufficient where appellant verbally threatened victim and pointed gun at him and then at a group of people causing everyone to scatter B: holding that there was sufficient evidence of wanton endangerment where defendant pointed a gun and fired two shots while in a crowded restaurant thereby creating dangerous atmosphere for other diners C: holding that evidence was sufficient to prove defendant constructively possessed the gun where although defendant denied ownership of the gun it was found near a knife of which defendant claimed ownership and where defendant was aware of the presence of the gun D: holding that there was sufficient evidence of wanton endangerment where a bullet came within fifteen feet of a bystander B.
Rewrite useResource to support custom context Some of this might be unnecessary, but I think we should clean up a couple of things now, before adoption grows even more. The core piece of this PR is the rewrite of the useResource hook to support a custom router context. Basically, we allow consumers to provide route, match and query to manipulate the resource with a different generated key instead of one based on the current route location. To be honest, I find it a bit annoying that we have to pass in all that data to generate the key, but I don't see any other option (suggestions welcome though). I also took the chance to properly handle route changes: by generating a sweet-state hook on the fly we can subscribe to route changes, but thanks to the selector the resource hook will trigger a re-render only if the key changes. const MyComponent = () => { const { data, loading } = useResource(issueResource, { routerContext: createRouterContext(issueRoute, { issueKey: 'BLA-1' }) }); // ... } A related change is removing location from the args passed in to resources: it contains duplicated information and we can export a generateLocation (like this) to easily build that if needed. This is to reduce the amount of input required when consumers want to provide a custom router context. Another thing that I always found annoying is the array destructuring, so it's time to drop it. This will require consumer changes during the version bump. We could still support both by adding a Symbol.iterator method so that we create an array-like object; however, I wonder if we can make the types happy and if it's worth the additional complexity. const output = { ...slice, update, refresh }; return { ...output, // support the deprecated [{ data }] style 0: output, length: 1, [Symbol.iterator]() { return { next: () => ({ value: output, done: false }), }; }, }; TODO: [ ] add tests [ ] rewrite subscriber [ ] fix flow types [x] export generatePath util Nice! Can you give an example or have a test for how this will get used? Given routerContext is quite a complex object, I've exported a new utility called createRouterContext that generates the proper object with match, query and route. I've also renamed the current getRouteContext to findRouterContext as it was causing confusion: this one needs a list of routes and a location to find out which one is the current one, hence I think "find" is a better verb. But happy to get more feedback on naming and APIs. @albertogasparin this looks good to me. If you update the branch I'll ✅
using Neudesic.Schoolistics.Core.Utils;
using System;
using Windows.UI.Xaml.Data;

namespace Neudesic.Schoolistics.WindowsStore.Converters
{
    // Converts a school logo file name into an absolute blob-storage URL
    // that an Image control can bind to. One-way conversion only.
    class SchoolDetailsLogoConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, string language)
        {
            // Prefix the relative logo name with the blob-storage base URL.
            return Constants.blobimage + value;
        }

        public object ConvertBack(object value, Type targetType, object parameter, string language)
        {
            // Binding back to the source is not supported.
            return null;
        }
    }
}
package com.prowidesoftware.swift.model.mx.dic; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlElement; import javax.xml.bind.annotation.XmlType; import org.apache.commons.lang3.builder.EqualsBuilder; import org.apache.commons.lang3.builder.HashCodeBuilder; import org.apache.commons.lang3.builder.ToStringBuilder; import org.apache.commons.lang3.builder.ToStringStyle; /** * Amounts linked to a securities balance, for example, holding value. * * * */ @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "BalanceAmounts3", propOrder = { "hldgVal", "prvsHldgVal", "bookVal", "elgblCollVal", "acrdIntrstAmt" }) public class BalanceAmounts3 { @XmlElement(name = "HldgVal") protected AmountAndDirection6 hldgVal; @XmlElement(name = "PrvsHldgVal") protected AmountAndDirection6 prvsHldgVal; @XmlElement(name = "BookVal") protected AmountAndDirection6 bookVal; @XmlElement(name = "ElgblCollVal") protected AmountAndDirection6 elgblCollVal; @XmlElement(name = "AcrdIntrstAmt") protected AmountAndDirection6 acrdIntrstAmt; /** * Gets the value of the hldgVal property. * * @return * possible object is * {@link AmountAndDirection6 } * */ public AmountAndDirection6 getHldgVal() { return hldgVal; } /** * Sets the value of the hldgVal property. * * @param value * allowed object is * {@link AmountAndDirection6 } * */ public BalanceAmounts3 setHldgVal(AmountAndDirection6 value) { this.hldgVal = value; return this; } /** * Gets the value of the prvsHldgVal property. * * @return * possible object is * {@link AmountAndDirection6 } * */ public AmountAndDirection6 getPrvsHldgVal() { return prvsHldgVal; } /** * Sets the value of the prvsHldgVal property. * * @param value * allowed object is * {@link AmountAndDirection6 } * */ public BalanceAmounts3 setPrvsHldgVal(AmountAndDirection6 value) { this.prvsHldgVal = value; return this; } /** * Gets the value of the bookVal property. * * @return * possible object is * {@link AmountAndDirection6 } * */ public AmountAndDirection6 getBookVal() { return bookVal; } /** * Sets the value of the bookVal property. * * @param value * allowed object is * {@link AmountAndDirection6 } * */ public BalanceAmounts3 setBookVal(AmountAndDirection6 value) { this.bookVal = value; return this; } /** * Gets the value of the elgblCollVal property. * * @return * possible object is * {@link AmountAndDirection6 } * */ public AmountAndDirection6 getElgblCollVal() { return elgblCollVal; } /** * Sets the value of the elgblCollVal property. * * @param value * allowed object is * {@link AmountAndDirection6 } * */ public BalanceAmounts3 setElgblCollVal(AmountAndDirection6 value) { this.elgblCollVal = value; return this; } /** * Gets the value of the acrdIntrstAmt property. * * @return * possible object is * {@link AmountAndDirection6 } * */ public AmountAndDirection6 getAcrdIntrstAmt() { return acrdIntrstAmt; } /** * Sets the value of the acrdIntrstAmt property. * * @param value * allowed object is * {@link AmountAndDirection6 } * */ public BalanceAmounts3 setAcrdIntrstAmt(AmountAndDirection6 value) { this.acrdIntrstAmt = value; return this; } @Override public String toString() { return ToStringBuilder.reflectionToString(this, ToStringStyle.MULTI_LINE_STYLE); } @Override public boolean equals(Object that) { return EqualsBuilder.reflectionEquals(this, that); } @Override public int hashCode() { return HashCodeBuilder.reflectionHashCode(this); } }
Adelaide Alma Caroline Pellow (1904-1987) Vital Statistics * Sex : Female * Born: 7 Aug 1904 at Cootamundra District, New South Wales, Australia * Died: 1 at Strathfield, Cumberland County, New South Wales, Australia at 82 years * Interment: at Rookwood Cemetery, Cumberland County, New South Wales, Australia Parents * Father: Thomas Pellow (Bef 1889-Aft 1904) * Mother: Louisa M (Bef 1889-Aft 1904) Siblings * Sibling: Adelaide Alma Caroline Pellow (1904-1987) Spouses * Spouse: Isaac Abraham Green (1892-1963) Offspring * Child: Lola Joyce Green (?-?) * Child: Elsa May Green (?-?) * Child: Harold Rex Green (?-?) * Child: Searle Green (?-?) * Child: Anne O L Green (?-?) Biography Birth: Date: 7 Aug 1904 Location: at Cootamundra District, New South Wales, Australia Burial: Location: at Rookwood Cemetery, Cumberland County, New South Wales, Australia Contributors Yewenyi
The risk-neutral option pricing method under the GARCH intensity model is examined. The GARCH intensity model incorporates characteristics of financial return series such as volatility clustering, the leverage effect and conditional asymmetry. The GARCH intensity option pricing model is flexible in how the volatility changes under the change of probability measure.
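For readers who want to experiment, the snippet below is a generic Monte Carlo pricer under a plain GARCH(1,1) return process with a risk-neutralized drift. It is a stand-in illustration only: the GARCH intensity specification and the measure change examined in the paper are not reproduced here, and every parameter value is hypothetical.

# Generic sketch: Monte Carlo pricing of a European call under GARCH(1,1) daily
# returns with a risk-neutralized drift (r - h/2). Not the paper's GARCH
# intensity model; omega, alpha, beta, r, strike, etc. are illustrative values.
import numpy as np

rng = np.random.default_rng(0)

def garch_mc_call_price(s0=100.0, strike=100.0, r=0.0002, n_days=60,
                        omega=2e-6, alpha=0.08, beta=0.90, n_paths=50_000):
    h = np.full(n_paths, omega / (1.0 - alpha - beta))   # start at the unconditional variance
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(n_days):
        z = rng.standard_normal(n_paths)
        log_s += r - 0.5 * h + np.sqrt(h) * z             # martingale drift under Q
        eps = np.sqrt(h) * z
        h = omega + alpha * eps**2 + beta * h              # GARCH(1,1) variance update
    payoff = np.maximum(np.exp(log_s) - strike, 0.0)
    return np.exp(-r * n_days) * payoff.mean()

print(f"simulated call price: {garch_mc_call_price():.3f}")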
Multi currency exchanges between participants ABSTRACT A method and apparatus for facilitating payment transactions in multiple currencies between participants is provided. In one embodiment, an option is provided to a user to select a currency in which to make a payment. An indication of the selected currency in which to make the payment is received. A determination is made as to whether the selected currency is a primary currency of an account of the user. Based on the selected currency being different from the primary currency of the account of the user, the payment is converted to the selected currency. RELATED APPLICATIONS This application is a continuation of U.S. patent application Ser. No. 13/567,902, filed Aug. 6, 2012, entitled “Multi Currency Exchanges Between Participants,” which is a continuation of U.S. patent application Ser. No. 13/212,994, filed Aug. 18, 2011, entitled Multi Currency Exchanges Between Participants of a Network-Based Transaction Facility,” which is a continuation of U.S. patent application Ser. No. 12/818,935, filed Jun. 18, 2010, entitled “Multi Currency Exchanges Between Participants of a Network-Based Transaction Facility,” which is a continuation of U.S. patent application Ser. No. 10/608,525, filed Jun. 26, 2003, entitled “Multi Currency Exchanges Between Participants of a Network-Based Transaction Facility,” all of which are incorporated herein by reference in their entirety. TECHNICAL FIELD The present invention relates generally to the field of e-commerce and, more specifically, to facilitating payment transactions in multiple currencies between participants. BACKGROUND Typically, an electronic payment system allows participants of a network-based transaction facility to collect payments online. For example, the payer may send money to the electronic payment system using a credit card or check, or funds in a payer account maintained by the electronic payment system. Recipients can store money in their accounts maintained by the electronic payment system, transfer the money to a separate bank account or have the electronic payment system cut them a check. With the growth in international commerce, problems arise due to different monetary systems used in different countries. That is, money is generally expressed in different currencies in different countries and the value of the different currencies varies greatly. Currency conversion is widely used to convert money from one currency into money of a different currency. However, currency conversion represents a significant economic risk to both buyers and sellers in international commerce. For example, when a buyer in the U.S. desires to buy a product in an online transaction facility from a seller in France, the buyer may use a credit card to pay the seller for the product. The credit card company may pay the seller in Euros, and then at an undetermined later date, it will bill an amount to the buyer in U.S. dollars. The amount billed to the buyer is determined by an exchange rate used at the time the credit card company settles the transaction. The time of this settlement is at the credit card company's discretion. The risk to the credit card company is minimal because the credit card company can settle the transaction when exchange rates are favorable. Thus, in this case, it is the buyer who bears the risk that the value of the buyer's currency will decline prior to this settlement. 
In another example, a seller participating in an online transaction facility may decide to accept a different currency to be able to sell the product. In this case, the seller may later sell the currency to a currency trader, usually at a discount. The price the seller charges to the buyer who pays cash reflects both the cost of currency conversion and the risk that the rate used to establish the price of the product in a particular currency may have changed. This typically results in the buyer paying a higher price for the product and the seller incurring risk due to a possible change in currency exchange rates. In yet another example, a buyer may convert from the native currency to a different second currency before the sale to be able to buy a product from a seller who only accepts payments in the second currency. In this case, the buyer can purchase goods at a price in the second currency, but cannot be certain of the value of the second currency relative to the buyer's native currency. Thus, the individual assumes the risk of devaluation of the second currency against the first currency. Further, the buyer bears the risk that the second currency may cease to be convertible into his native currency. The above problems create inconvenience and uncertainty for participants in international commerce, thus discouraging the development of international commerce over electronic networks. BRIEF DESCRIPTION OF THE DRAWINGS The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which: FIG. 1 is a block diagram of one embodiment of a system for processing online multi currency payment transactions between participants in a network-based transaction facility; FIG. 2 is a block diagram of one embodiment of a multicurrency transfer module; FIG. 3 is a block diagram of one embodiment of a send payment sub-module; FIG. 4 is a flow diagram of one embodiment of a method for processing submissions of online multi currency payments; FIG. 5 is a block diagram of one embodiment of a receive payment sub-module; FIG. 6 is a flow diagram of one embodiment of a method for processing receipts of online multicurrency payments; FIG. 7 is a block diagram of one embodiment of a user account manager; FIG. 8 is a flow diagram of one embodiment of a method for managing multicurrency balances of a user; FIG. 9 is a flow diagram of one embodiment of a method for obtaining guaranteed exchange rates; FIG. 10 is a flow diagram of one embodiment of a method for facilitating multi currency payment transactions between participants of a network-based transaction facility; FIGS. 11-20 are exemplary representations of various interfaces; and FIG. 21 is a block diagram of one embodiment of a computer system. DETAILED DESCRIPTION A method and apparatus for facilitating online payment transactions in multiple currencies between users over a communications network are described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details. System for Processing Online Payment Transactions FIG. 1 is a block diagram of one embodiment of a system for processing online payment transactions in multiple currencies between participants in a network-based transaction facility. 
In this embodiment, a client 100 is coupled to a transaction facility 130 via a communications network, including a wide area network 110 such as, for example, the Internet. Other examples of networks that the client may utilize to access the transaction facility 130 include a local area network (LAN), a wireless network (e.g., a cellular network), or the Plain Old Telephone Service (POTS) network. The client 100 represents a device that allows a user to participate in a transaction facility 130. The transaction facility 130 handles all transactions between various participants including the user of the client computer 100. In one embodiment, the transaction facility 130 may be an online auction facility represented by an auction web site visited by various participants including the user of the client computer 100. Alternatively, the transaction facility 130 may be an online retailer or wholesaler facility represented by a retailer or wholesaler web site visited by various buyers including the user of the client computer 100. In yet other embodiments, the transactions facility 130 may be any other online environment used by a participant to conduct business transactions. The transaction facility 130 is coupled to an online payment service 120. In one embodiment, the transaction facility 130 is coupled to the online payment service 120 via a communications network such as, for example, an internal network, the wide area network 110, a wireless network (e.g., a cellular network), or the Plain Old Telephone Service (POTS) network. Alternatively, the online payment service 120 is integrated with the transaction facility 130 and it is a part of the transaction facility 130. The online payment service 120 is also coupled to the client 100 via any of the described above communications networks. The online payment service 120 is a service for enabling online payment transactions between participants of the transaction facility 130, including the user of the client computer 100. In one embodiment, the online payment service 120 includes a multi-currency transfer module 150 that allows the participants to maintain account balances in different currencies and make online payments in different currencies in the course of business conducted in the transaction facility 130. The term “currency” as referred to herein may include, for example, denominations of script and coin that are issued by government authorities as a medium of exchange. In another example, a “currency” may also include a privately issued token that can be exchanged for another privately issued token or government script. For example, a company might create tokens in various denominations. This company issued “money” could be used by employees to purchase goods from sellers. In this case, an exchange rate might be provided to convert the company currency into currencies which are acceptable to merchants. As will be discussed in more detail below, in one embodiment, the multi currency transfer module 150 allows the participants to make educated decisions as to which currency to choose for sending and receiving payments. In another embodiment, the multi currency module 150 provides the participants with a mechanism for managing their account balances in different currencies. FIG. 2 is a block diagram of one embodiment of a multicurrency transfer module 200. 
The multicurrency transfer module 200 includes, in one embodiment, a send payment sub-module 202, a receive payment sub-module 204, a user account manager 206, and a rate controller 208. In one embodiment, the send payment sub-module 202 is responsible for facilitating a sender selection of a currency in which a payment to a recipient is to be made, for funding the payment, for notifying a recipient about the payment, and for handling returned or denied payments. In one embodiment, if the sender does not hold an account balance in the currency that he or she selects for the payment, the send payment sub-module 202 is responsible for automatically converting funds from an existing sender balance in a different currency into the selected currency. In one embodiment, the receive payment sub-module 204 is responsible for assisting a recipient in making a decision with respect to an acceptance of a sender payment in a specific currency, for converting the sender payment into a different currency if needed, and for notifying the sender about the recipient's decision. In one embodiment, the user account manager 206 is responsible for allowing users to hold account balances in different currencies, for opening/removing currency balances within user accounts, and for performing transfers of funds between different currency balances within a user account. In one embodiment, the rate controller 208 is responsible for periodically obtaining exchange rates from a third party system and using these rates to refresh rates stored in a database of the online payments service. In one embodiment, the multi currency transfer module 200 also includes a request money sub-module that allows users to request money in any currency using a request money user interface with a list of currencies for user selection. In one embodiment, the multicurrency transfer module 200 also includes a withdraw funds sub-module that allows users to withdraw money from any currency balance to a user bank account. If the withdrawal requires conversion, the relevant conversion data is presented to the user and the user is requested to confirm the final withdrawal. FIG. 3 is a block diagram of one embodiment of a send payment sub-module 300. The send payment sub-module 300 includes, in one embodiment, a transaction information receiver 302, a conversion calculator 304, a sender funds analyzer 306, and a recipient communicator 308. The transaction information receiver 302 is responsible for communicating to a sender a user interface that facilitates user input of transaction information such as a recipient identifier (e.g., a recipient email address), a payment amount, a currency to be used for the payment, etc. In one embodiment, the user interface presents to the sender a list of currencies supported by the online payment system (e.g., U.S. dollars, Canadian dollars, Euros, pounds sterling, yen, etc.) and the sender is asked to select a specific currency from the list. The transaction information receiver 302 is further responsible for receiving transaction information entered by the sender via the user interface. In one embodiment, if the currency selected by the sender for the payment is not a sender primary currency, the conversion calculator 304 is invoked. In another embodiment, the conversion calculator 304 is invoked only if the sender does not hold an account balance in the selected currency. 
Once invoked, the conversion calculator 304 is responsible for providing a current exchange rate between the sender-selected currency and the sender primary currency and for calculating an equivalent value in the sender primary currency for the payment amount. The primary currency may be, for example, a currency used in the majority of payment transactions that involved the sender. In another example, the primary currency is a currency that was specifically identified by the sender as primary. In yet another example, the primary currency may be a currency of a country in which the sender resides or a default currency provided by the online payment service 120. The transaction information receiver 302 displays to the sender the conversion information provided by the conversion calculator 304 and requests the sender to confirm the payment in the selected currency. Once the sender sees the conversion information, the sender may decide that the current exchange rate for the selected currency is not favorable and select another currency. Alternatively, the sender may consider the current exchange rate as favorable and confirm the payment in the selected currency. In one embodiment, the sender may request, prior to confirming the payment, to view the history of currency conversion calculations from the sender's previous payment transactions to decide whether the current exchange rate is favorable. The recipient communicator 308 is responsible for informing the recipient about the sender's payment in the selected currency, receiving data indicating whether the recipient decides to accept the payment in this currency, and communicating the recipient's decision to the sender. In one embodiment, if the recipient decides to deny the payment, the recipient communicator 308 displays to the sender a message offering to select a different currency. The sender funds analyzer 306 is responsible for analyzing the sender's funds and determining how to fund the payment in the sender-selected currency. In one embodiment, if the sender holds an account balance in the selected currency, the sender funds analyzer 306 uses this account balance to fund the payment. Alternatively, if the sender does not hold an account balance in the selected currency, the sender funds analyzer 306 may use an account balance in the sender's primary currency to fund the payment. If the funds in the sender's primary balance are not enough to cover the payment, the sender funds analyzer 306 may ask the sender to specify an additional source for funding. This additional source may be, for example, a sender credit card, a sender bank account, a sender balance other then the primary balance, etc. In one embodiment, the sender is presented with relevant conversion information before requesting the sender's confirmation of any conversion that is necessary to fund the payment. FIG. 4 is a flow diagram of one embodiment of a method 400 for processing submissions of online multicurrency payments. The method 400 may be performed by processing logic, which may comprise hardware, software, or a combination of both. Processing logic may reside either in the online payment service 120, or partially or entirely in a separate device and/or system(s). Referring to FIG. 4, the method 400 begins with processing logic communicating to a sender via a communications network a user interface that facilitates the sender input with respect to a desired currency in which a payment is to be made (processing block 402). 
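A rough sketch of the equivalent-value computation attributed to the conversion calculator above might look as follows; the rate table, function name and rounding rule are illustrative assumptions rather than the patent's implementation.

# Sketch only: convert a payment amount into the sender's primary currency.
# The hypothetical rate table and half-up rounding to cents are assumptions.
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical rates quoted as: 1 unit of <from> = rate units of <to>.
EXCHANGE_RATES = {
    ("EUR", "USD"): Decimal("1.0850"),
    ("USD", "EUR"): Decimal("0.9217"),
}

def equivalent_value(amount: Decimal, selected: str, primary: str) -> Decimal:
    """Return the payment amount expressed in the sender's primary currency."""
    if selected == primary:
        return amount
    rate = EXCHANGE_RATES[(selected, primary)]
    return (amount * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# e.g. a sender whose primary currency is USD confirming a 50 EUR payment
print(equivalent_value(Decimal("50.00"), "EUR", "USD"))   # -> 54.25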
In one embodiment, the user interface presents to the sender, for his or her selection, a list of currencies that are supported by the online payment service 120. At processing block 404, processing logic receives data identifying the sender-selected currency from the sender via the communications network. In response, in one embodiment, processing logic determines whether the sender-selected currency is the sender's primary currency. If it is not, processing logic determines the current exchange rate for conversion between the sender-selected currency and the sender primary currency. In another embodiment, processing logic determines the current exchange rate only if the sender does not hold an account in the sender-selected currency. Next, processing logic communicates to the sender via the communications network the current exchange rate for the conversion between the sender-selected currency and the sender primary currency (processing block 406). In one embodiment, processing logic also presents to the sender an equivalent value in the sender primary currency for the payment amount in the sender-selected currency. The presentation of the current conversion information (e.g., the exchange rate and the equivalent value) assist the sender in determining whether the terms for converting into the sender-selected currency are favorable at the present time. In addition, in one embodiment, the sender is provided with an opportunity to view the history of currency conversion calculations from previous transactions involving the sender to compare the current terms with prior terms. Further, if processing logic receives from the sender a confirmation of the payment in the sender-selected currency (decision box 408), processing logic notifies the recipient about the payment in the sender selected currency (processing block 410). FIG. 5 is a block diagram of one embodiment of a receive payment sub-module 500. The receive payment sub-module 500 includes, in one embodiment, a transaction information receiver 502, a conversion calculator 504, a recipient decision determinator 506, and a sender notifier 508. The transaction information receiver 302 is responsible for receiving information about a sender's payment and communicating it to the recipient. The information about the sender payment may include, for example, the identifier of the sender (e.g., sender's name or email address), the payment amount, the sender-selected currency of the payment, etc. In one embodiment, the transaction information receiver 502 is also responsible for determining whether the recipient holds an account balance in the sender-selected currency. If so, the transaction information receiver 502 is responsible for requesting a transfer of the payment amount to this account balance. If the recipient does not hold an account balance in the sender-selected currency, the conversion calculator 504 is invoked to provide a current exchange rate between the sender-selected currency and the recipient primary currency, and then the recipient decision determinator 506 communicates the current exchange rate to the recipient and requests the recipient's input with respect to an acceptance of the payment in the sender-selected currency. If the recipient accepts the payment in the sender-selected currency, the recipient decision determinator 506 requests to open a balance in the sender-selected currency within the recipient account. 
Alternatively, if the recipient accepts the payment in the sender-selected currency but also asks to convert it into the primary currency, the recipient decision determinator 506 performs the conversion and requests the addition of the resulting amount to the recipient's primary account balance. In another embodiment, the recipient decision determinator 506 is responsible for requesting the recipient's input for every payment received from any sender. If the recipient specifies that he accepts the payment and wants to convert it into a different currency, the recipient decision determinator 506 is responsible for invoking the conversion calculator 504, communicating information provided by the conversion calculator 504 to the recipient, and obtaining the recipient's final confirmation of the acceptance of the payment. In one embodiment, the conversion calculator 504 also calculates an equivalent value in a recipient primary currency (or some other currency specified by the recipient) for the payment amount in the sender-selected currency. The equivalent value is also presented to the recipient. Hence, the recipient is provided with information that can assist him in determining whether the acceptance of the payment in the sender-selected currency and/or conversion of the sender-selected currency into a different currency would be beneficial for the recipient at the present time. In addition, in one embodiment, the recipient is provided with an opportunity to view the history of currency conversion calculations from previous transactions involving the recipient to compare the current terms with prior terms. Once the recipient specifies his decision, the sender notifier 506 notifies the sender about the recipient's decision. FIG. 6 is a flow diagram of one embodiment of a method 600 for processing receipts of online multi currency payments. The method 600 may be performed by processing logic, which may comprise hardware, software, or a combination of both. Processing logic may reside either in the online payment service 120, or partially or entirely in a separate device and/or system(s). Referring to FIG. 6, the method 600 begins with processing logic communicating to a recipient via a communications network a notification of a sender payment in a sender-selected currency (processing block 602). At processing block 604, processing logic presents to the recipient via the communications network conversion data pertaining to a payment amount in the sender-selected currency. The conversion data may include an equivalent value in a recipient primary currency for the payment amount in the sender-selected currency. In one embodiment, the conversion data is communicated to the recipient if the recipient does not hold an account balance in the sender-selected currency. Alternatively, the conversion data is communicated to the recipient for every received payment. In one embodiment, the notification about the sender payment and the conversion data is presented to the sender using a single user interface. In one embodiment, this user interface also allows the recipient to provide input for the recipient's decision with respect to an acceptance of the sender payment. The presentation of the conversion data assists the recipient in determining which actions with respect to the payment in the sender-selected currency would be the most advantageous for the recipient at the present time. 
In one embodiment, the recipient may be also presented with a history of currency conversion calculations from previous transactions involving the recipient for comparison. At processing block 606, processing logic receives from the recipient via the communications network data indicating the recipient's decision with respect to an acceptance of the payment in the sender-selected currency. In one embodiment, in which the recipient does not hold an account balance in the sender-selected currency, the recipient is provided with three decision options: (1) accept the payment and create a balance in the sender-selected currency within the recipient account, (2) accept the payment and convert it into the recipient's primary balance, and (3) deny the payment. If the recipient chooses the first option, processing logic requests a creation of a new balance within the recipient account and a transfer of the payment amount to this new balance. If the recipient chooses the second option, processing logic converts the payment amount into the recipient's primary balance and requests a transfer of the resulting amount to the recipient's primary balance. In one embodiment, processing logic determines the recipient decision with respect to this payment based on payment receiving preferences previously provided by the recipient with respect to future payments in currencies for which the recipient does not hold a balance. In one embodiment, processing logic assesses a receiving fee in the sender-selected currency if the recipient accepts the payment. Afterwards, processing logic notifies the sender via the communications network of the recipient decision (processing block 608). In one embodiment, if the recipient denies the payment, processing logic presents to the sender a message offering the sender to select a different currency for the payment. FIG. 7 is a block diagram of one embodiment of a user account manager 700. The user account manager 700 includes, in one embodiment, a currency balance manager 702, a conversion calculator 704, a transfer request processor 706, and a funds transferor 708. The currency balance manager 702 is responsible for maintaining balances in different currencies within a user account, opening new balances when needed and closing existing balances when requested by a user. The conversion calculator 704 is responsible for providing current exchange rates and calculating amounts of potential and actual transfers. The transfer request processor 706 is responsible for transferring funds between different currency balances within a user account. Prior to performing a transfer, the transfer request processor 706 displays conversion data provided by the conversion calculator 704 and then requests the user to confirm the transfer. The funds transferor 708 is responsible for performing the transfer. FIG. 8 is a flow diagram of one embodiment of a method 800 for managing multicurrency balances of a user. The method 800 may be performed by processing logic, which may comprise hardware, software, or a combination of both. Processing logic may reside either in the online payment service 120, or partially or entirely in a separate device and/or system(s). Referring to FIG. 8, the method 800 begins with processing logic communicating to a recipient via a communications network information identifying a set of balances in different currencies within the user account (processing block 802). 
In one embodiment, the user is also presented with the combined total of all the balances in the user primary currency. At processing block 804, processing logic receives from the user via the communications network data indicating a user desire to transfer funds between two currency balances. In response, processing logic presents to the user via the communications network data identifying a current exchange rate for conversion between currencies of the two balances (processing block 806). Next, processing logic receives a user approval of the desired transfer (processing block 808) and performs the transfer (processing block 810). As discussed above, a current exchange rate is periodically updated based on the rates obtained from a third party system. A third party may be a financial institution or any other organization that guarantees an exchange rate to the online payment service 120 during a predefined time interval. As a result, the online payment service 120 is not affected by any market fluctuations that may occur during this time interval and can provide its users with more up-to-date exchange rates. FIG. 9 is a flow diagram of one embodiment of a method 900 for obtaining guaranteed exchange rates. The method 900 may be performed by processing logic, which may comprise hardware, software, or a combination of both. Processing logic may reside either in the online payment service 120, or partially or entirely in a separate device and/or system(s). Referring to FIG. 9, the method 900 begins with processing logic retrieving new exchange rates from a third party system (processing block 902). The new exchange rates have associated expiration dates and the online payment system is guaranteed the ability to trade against these rates within the specified window. In one embodiment, the new exchange rates are pulled via a client interface that interacts with a third party server. In one embodiment, the new exchange rates include a market exchange rate, a bid exchange rate and an ask exchange rate. Next, processing logic applies a set of business rules to the new exchange rates (processing block 904). The set of business rules include a variety of checks (e.g., whether the new exchange rates have changed by more than 5% from the previous exchange rates) that are done to ensure that the rates are correct. At decision box 906, processing logic determines whether the rates are correct. If not, processing logic generates an error message (processing block 908). If so, processing logic updates exchange rates currently stored in the live database of the online payment service with the new exchange rates (processing logic 910) and begins accumulating customer payment transactions in different currencies (processing block 912). When a predefined time period expires (decision box 914), processing logic requests the third party system to trade and settle the accumulated customer payment transactions (processing logic 916) and receives confirmation and summary reports once the trades are completed. In one embodiment, all transactions are funded and settled in a specific currency (e.g., U.S. dollars). In one embodiment, the trades are completed via a client interface that interacts with the third party server. FIG. 10 is a flow diagram of one embodiment of a method 1000 for facilitating multi currency payment transactions between participants of a network-based transaction facility. The method 900 may be performed by processing logic, which may comprise hardware, software, or a combination of both. 
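The rate-refresh validation step described above could be sketched as follows; the data shapes and function names are illustrative assumptions, with the 5% threshold taken from the one example of a business rule given in the text.

# Sketch only: new rates pulled from the third-party system are sanity-checked
# (including the "changed by more than 5%" rule mentioned above) before they
# replace the live rates. Data shapes and names are assumptions, not the design.
from datetime import datetime, timezone

def validate_new_rates(live_rates: dict, new_rates: dict, max_change=0.05):
    """Return a list of error messages; an empty list means the new rates look correct."""
    errors = []
    for pair, quote in new_rates.items():
        if quote["expires_at"] <= datetime.now(timezone.utc):
            errors.append(f"{pair}: guaranteed rate already expired")
        old = live_rates.get(pair)
        if old and abs(quote["rate"] - old["rate"]) / old["rate"] > max_change:
            errors.append(f"{pair}: rate moved more than {max_change:.0%} since last refresh")
    return errors

def refresh_rates(live_rates: dict, new_rates: dict):
    errors = validate_new_rates(live_rates, new_rates)
    if errors:
        raise ValueError("rate refresh rejected: " + "; ".join(errors))
    live_rates.update(new_rates)   # only now do the guaranteed rates go live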
Processing logic may reside either in the online payment service 120, or partially or entirely in a separate device and/or system(s). Referring to FIG. 10, the method 1000 begins with processing logic presenting to a sender a user interface that facilitates sender input of a specific currency for a payment (processing block 1002). Next, processing logic determines whether the sender-selected currency is a sender primary currency (decision box 1004). If so, the method 1000 proceeds directly to decision box 1008. If not, processing logic displays a current exchange rate for conversion between the sender-selected currency and the sender primary currency and an equivalent value in the sender primary currency (processing block 1006) and requests the sender to confirm the payment. If the sender confirms the payment (decision box 1008), processing logic notifies the recipient about the payment in the sender-selected currency and presents to the recipient an equivalent value in the recipient's primary currency for the payment amount in the sender-selected currency (processing block 1010). If the recipient denies the payment (decision box 1012), processing logic presents to the sender a message offering the sender to select a different currency. If the recipient accepts the payment, processing logic funds the payment using one or more payment instruments of the sender (processing block 1013). In one embodiment, if the sender has an account balance in the sender-selected currency, processing logic funds the payment using this account balance. If the sender does not have such account balance, processing logic funds the payment using the sender primary account balance. If the primary account balance does not cover the payment, processing logic may use a sender credit card, a sender bank account, or other account balances within the sender account to fund the payment. Further, if the recipient accepts the payment, processing logic assesses a receiving fee in the sender-selected currency (processing block 1014) and determines whether the recipient holds an account balance in the sender-selected currency (decision box 1015). If so, processing logic adds the payment to this balance (processing block 1016). If not, processing logic determines whether the recipient requested conversion of the accepted payment into the recipient primary currency (decision box 1018). If so, processing logic performs the conversion (processing block 1020), shows transaction history for the conversion (processing block 1022), and transfers the payment amount to the primary balance. If the recipient did not request conversion, processing logic creates a new currency balance (processing block 1024), transfers the payment amount to the new currency balance (processing block 1026), and presents a list of existing currency balances with the total amount value to the recipient (processing block 1028). In one embodiment, if processing logic receives a request to return the payment to the sender, processing logic performs the return in the currency in which the payment was originated using an original exchange rate. Functions of the online payment service 120 pertaining to multi currency payments will now be described within the context of user interfaces, according to one embodiment of the present invention. Exemplary representations of the various interfaces are shown in FIGS. 11-20. 
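The funding fallback order described in this passage — a balance in the selected currency first, then the primary balance, then an additional payment instrument — can be sketched compactly; the account structure and names below are illustrative assumptions.

# Sketch only: decide how to fund a payment in the sender-selected currency,
# following the fallback order described above. Account shape is hypothetical.
def choose_funding_source(account: dict, amount: float, selected_currency: str):
    balances = account["balances"]                 # e.g. {"USD": 120.0, "EUR": 10.0}
    primary = account["primary_currency"]

    if balances.get(selected_currency, 0.0) >= amount:
        return ("balance", selected_currency)

    # Conversion of `amount` into the primary currency (and the sender's
    # confirmation of that conversion) is elided here for brevity.
    converted = amount
    if balances.get(primary, 0.0) >= converted:
        return ("primary_balance", primary)

    return ("additional_source_required", None)    # e.g. credit card or bank account

account = {"primary_currency": "USD", "balances": {"USD": 80.0}}
print(choose_funding_source(account, 50.0, "EUR"))   # -> ('primary_balance', 'USD')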
While the exemplary interfaces are described as comprising markup language documents displayed by a browser, it will be appreciated that the described interfaces could comprise user interfaces presented by any Windows® client application or stand-alone application, and need not necessarily comprise markup language documents. FIG. 11 illustrates an exemplary send money interface that enables a sender to specify which currency 1102 is to be used for a payment. FIG. 12 illustrates an exemplary check payment details interface that displays a current exchange rate 1204 for conversion between the sender-selected currency and a sender primary currency and an equivalent value 1202 in the sender primary currency. The user interface also includes a send money button 1206 requesting the sender to confirm the payment. FIG. 13 is an exemplary receive money interface that notifies a recipient about the sender's payment and requests him to specify his decision with respect to the payment. The receive money interface presents to the recipient the payment amount 1304 in the sender-selected currency and an equivalent value 1302 in the recipient primary currency. FIG. 14 is an exemplary account overview interface which is presented if the recipient chose to accept the payment in the sender-selected currency. A new balance 1402 created in response to the recipient's acceptance is shown in the Balance box. The balance 1402 reflects an assessment of a receiving fee. FIG. 15 is an exemplary transaction history interface that is presented in response to the recipient's request to accept the payment in the sender-selected currency and to convert it into the recipient primary currency. The transaction history includes 3 records: (1) the payment received in its original currency, (2) the conversion from the original currency, and (3) the conversion to the recipient's primary currency. FIG. 16 is an exemplary payment receiving preferences interface that includes information 1602 specifying how the recipient wishes to handle payments that are sent in currencies that the recipient does not hold. As shown, the recipient can request that such payments be blocked, accepted and converted into a primary currency, or be asked about. FIG. 17 is an exemplary account overview interface that identifies various currency balances within a user account and provides a total amount of all the balances in the primary currency. FIG. 18 is an exemplary transfer funds interface that allows a user to transfer funds from one account balance to another. The transfer funds interface also presents a current exchange rate for the conversion, a resulting amount in the desired conversion, and a transfer button to confirm the transfer. FIG. 19 is an exemplary manage currency interface that displays all the currency in which the user may maintain a balance, allows the user to open a new balance, remove an existing balance and make a certain balance primary. FIG. 20 is an exemplary withdraw funds interface that allows a user to withdraw funds from any of his currency balances. Before completing the deposit, the funds are converted into the currency of the user bank account and the results are displayed to the user In summary, it will be appreciated that the above described interfaces, and underlying technologies, provide a convenient vehicle for facilitating multicurrency payment transactions in a transaction facility. FIG. 
21 shows a diagrammatic representation of machine in the exemplary form of a computer system 2100 within which a set of instructions, for causing the machine to perform anyone of the methodologies discussed above, may be executed. In alternative embodiments, the machine may comprise a network router, a network switch, a network bridge, Personal Digital Assistant (PDA), a cellular telephone, a web appliance or any machine capable of executing a sequence of instructions that specify actions to be taken by that machine. The computer system 2100 includes a processor 2102, a main memory 2104 and a static memory 2106, which communicate with each other via a bus 2108. The computer system 2100 may further include a video display unit 2110 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 2100 also includes an alpha-numeric input device 2112 (e.g., a keyboard), a cursor control device 2114 (e.g., a mouse), a disk drive unit 2116, a signal generation device 2120 (e.g., a speaker) and a network interface device 2122. The disk drive unit 2116 includes a computer-readable medium 2124 on which is stored a set of instructions (i.e., software) 2126 embodying anyone, or all, of the methodologies described above. The software 2126 is also shown to reside, completely or at least partially, within the main memory 2104 and/or within the processor 2102. The software 2126 may further be transmitted or received via the network interface device 2122. For the purposes of this specification, the term “computer-readable medium” shall be taken to include any medium that is capable of storing or encoding a sequence of instructions for execution by the computer and that cause the computer to perform anyone of the methodologies of the present invention. The term “computer-readable medium” shall accordingly be taken to included, but not be limited to, solid-state memories, optical and magnetic disks, and carrier wave signals. Thus, a method and apparatus for facilitating online payment transactions in a network-based transaction facility using multiple payment instruments have been described. Although the present invention has been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. 1. A method comprising: providing an option to a user to select a currency in which to make a payment, the option including a primary currency of the user and a currency of a recipient, the primary currency of the user and the currency of the recipient being different; receiving an indication of the selected currency in which to make the payment; determining, using a processor of a machine, whether the selected currency is the primary currency of the user or the currency of the recipient; and based on the selected currency being the currency of the recipient, causing funding of the payment to the recipient in the selected currency by converting the payment to the selected currency prior to an expiration time of an exchange rate. 2. The method of claim 1, further comprising causing conversion information to be presented to the user. 3. The method of claim 2, wherein the conversion information comprises the exchange rate. 4. 
The method of claim 2, further comprising calculating an equivalent value of the currency of the recipient in the primary currency of the user, wherein the conversion information to be presented further comprises the equivalent value in the primary currency of the user. 5. The method of claim 1, further comprising: requesting the user to confirm the payment in the currency of the recipient; and receiving a confirmation to proceed with processing the payment in the currency of the recipient. 6. The method of claim 1, further comprising: receiving an indication of acceptance or denial of the payment; and communicating the indication of the acceptance or denial to the user. 7. The method of claim 6, further comprising, in response to the denial of the payment, allowing the user to make the payment using a different payment mechanism. 8. The method of claim 1, further comprising: determining if a balance of an account of the user is sufficient to cover the payment; and based on the balance not being sufficient, requesting the user to provide an additional source of payment. 9. The method of claim 1, further comprising providing a user interface that facilitates input of the indication of the selected currency. 10. The method of claim 1, further comprising, based on the selected currency being the primary currency, causing a conversion from the selected currency to the currency of the recipient after the payment is funded from an account of the user. 11. The method of claim 1, further comprising assessing a fee in the primary currency based on acceptance of the payment. 12. The method of claim 1, wherein the primary currency comprises a privately issued token that can be exchanged for another privately issued token or government script. 13. A system comprising: one or more hardware processors; and at least one payment module to provide an option to a user to select a currency in which to make a payment, the option including a primary currency of the user and a currency of a recipient, the primary currency of the user and the currency of the recipient being different, receive an indication of the selected currency in which to make the payment, and determine, using the one or more hardware processors, whether the selected currency is the primary currency of the user or the currency of the recipient, and based on the selected currency being the currency of the recipient, cause funding of the payment to the recipient in the selected currency by converting the payment to the selected currency prior to an expiration time of an exchange rate. 14. A machine-readable storage medium having no transitory signals and storing instructions which, when executed by at least one processor of a machine, cause the machine to perform operations comprising: providing an option to a user to select a currency in which to make a payment, the option including a primary currency of the user and a currency of a recipient, the primary currency of the user and the currency of the recipient being different; receiving an indication of the selected currency in which to make the payment; determining whether the selected currency is the primary currency of the user or the currency of the recipient; and based on the selected currency being the currency of the recipient, causing funding of the payment to the recipient in the selected currency by converting the payment to the selected currency using an exchange rate, prior to an expiration time of the exchange rate. 15. 
The machine-readable storage medium of claim 14, wherein the operations further comprise causing conversion information to be presented to the user, the conversion information reflecting the exchange rate. 16. The machine-readable storage medium of claim 15, wherein the operations further comprise calculating an equivalent value of the currency of the recipient in the primary currency of the user, wherein the conversion information to be presented further comprises the equivalent value in the primary currency of the user. 17. The machine-readable storage medium of claim 14, wherein the operations further comprise: receiving an indication of acceptance or denial of the payment; and communicating the indication of the acceptance or denial to the user. 18. The machine-readable storage medium of claim 17, wherein the operations further comprise, in response to the denial of the payment, allowing the user to make the payment using a different payment mechanism. 19. The machine-readable storage medium of claim 14, wherein the operations further comprise: determining if a balance of an account of the user is sufficient to cover the payment; and based on the balance not being sufficient, requesting the user to provide an additional source of payment. 20. The machine-readable storage medium of claim 14, wherein the operations further comprise causing a conversion from the selected currency to the currency of the recipient after the payment is funded from an account of the user based on the selected currency being the primary currency.
How to get Dialogflow to accept any input for a required entity So I'm trying to create a dialogflow agent that works as a kind of interviewer. For instance, at one point the agent asks "Do you have any food service experience?" I've created entities "previous position," "previous employer," and "duration," and marked them as required. Using automatic expansion and providing sufficient example user input, the agent has no problem assigning things it hasn't seen before to these entities (for example, "Yes, I worked as an X at X company for X years" or "Yup, for X years I was an X at X company"). However, I'm running into problems with the prompts when a user doesn't provide all the required entities, which I assume will be quite common, for instance, a user might respond with simply "yes." If a user doesn't supply one of these entities, the prompt will ask "what was your position" and/or "Where did you work" and/or "how long did you work there." However, even with "automatic expansion" being checked, the system will not accept any user input that does not match one of the example entities I've provided ("Taco Bell", "4 years", "cook", "etc"), and it just keeps repeating the question. And of course I can't predict every possible response. I know there are other ways of fixing this (such as prompting the user to enter the three categories in the original question or breaking it up into multiple intents), but I really want to find a way to fix this specific problem if it's possible in order for a less clunky chatbot. So, is there any way to get the prompts for required entities to accept any input? It would be better if you could ask a question which does not have multiple responses. You should phrase it in such a way that makes user automatically answer in a required way as your intent is designed. The way you have put your question: "Do you have any food service experience?" Its answer will be mostly Yes or No. So I will suggest you put two follow-up intents capturing YES and NO and in the YES follow-up intent capture the other required entities. Here you should mark them as required. Also, in the original parent intent keep collecting the same entities but do not mark them as required. This way, you will automatically capture user responses like "Yes, I worked as an X at X company for X years". Now, if the user only says "YES", you will have follow-up intent to capture entities and if the information is provided upfront in the parent entity, you can set the lifespan of context for the follow-up intent to "0" so that YES follow-up is not called. You will need to play with contexts and multiple intents to capture all the scenarios. Take a look at the following links to understand how to design conversations and follow best practices. These can be applied to almost every chatbot app. Dialogflow Design-Conversation General Best Practice Thank you, this makes sense! The only part I'm struggling with is figuring out how to trigger the yes follow up ONLY if there are entities missing? You mention setting the context to zero, but how do I differentiate calling it or not calling it based on entities being filled or not? Relatedly, what if two out of the three entities are filled in the first intent...is there a way to trigger a followup that only asks the relevant questions? If you make entities as optional in first intent, Yes or similar response will trigger the "YES" follow-up intent. You will need to enable webhook slot filing for capturing entities. 
If all entities are collected, the context can be set to 0 to trigger other intents; otherwise you will need to manipulate contexts to capture all the entities across the 2 intents. This will take some effort on your side, and you may end up with 2-3 intents capturing all the entities. It is better to start with an MVP and design your conversation so that the user gets an idea of what to say. If the correct response is not provided by the user, give them hints. Ah gotcha! So there is not really a way to set the context to 0 iff all entities are collected without using a webhook? You can set the context to 0 from Dialogflow, but in that case it would happen every time. What you need is based on logic, which is better handled through a webhook.
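To make the "set the follow-up context to 0 only when everything is filled" logic concrete, here is a minimal Python fulfillment sketch. The parameter names (previous-position, previous-employer, duration) and the context name (askexperience-yes-followup) are placeholders, not taken from the question; the sketch inspects the parameters Dialogflow ES sends to the webhook and, when all three are present, returns the follow-up context with a lifespan of 0 so the YES follow-up is skipped.

# Minimal Dialogflow ES fulfillment sketch (names are placeholders).
# It reads the parameters from the webhook request and, if all required
# pieces of information are already filled, deactivates the YES follow-up
# context by returning it with lifespanCount 0.

REQUIRED_PARAMS = ["previous-position", "previous-employer", "duration"]
FOLLOWUP_CONTEXT = "askexperience-yes-followup"  # hypothetical context name


def handle_webhook(request_json: dict) -> dict:
    query_result = request_json.get("queryResult", {})
    params = query_result.get("parameters", {})
    session = request_json.get("session", "")

    missing = [p for p in REQUIRED_PARAMS if not params.get(p)]

    if not missing:
        # Everything was captured in the parent intent: kill the follow-up
        # context so the YES follow-up intent is not matched.
        return {
            "fulfillmentText": "Great, thanks - I have everything I need.",
            "outputContexts": [
                {"name": f"{session}/contexts/{FOLLOWUP_CONTEXT}",
                 "lifespanCount": 0}
            ],
        }

    # Otherwise keep the follow-up context alive and ask only for what is missing.
    prompts = {
        "previous-position": "What was your position?",
        "previous-employer": "Where did you work?",
        "duration": "How long did you work there?",
    }
    return {
        "fulfillmentText": prompts[missing[0]],
        "outputContexts": [
            {"name": f"{session}/contexts/{FOLLOWUP_CONTEXT}",
             "lifespanCount": 2,
             "parameters": params}  # carry over what was already captured
        ],
    }

In practice you would expose this handler behind a small HTTP endpoint (for example Flask or a Cloud Function) registered as the fulfillment URL, and enable the webhook call for slot filling on the intent.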
Oil Pump The ICA Oil Pump is an endgame activity that prospectors unlock by finishing quests for Description Features The Oil Pump can be called down on oil veins to gather NiC Oil Cannister. The Oil Pump takes July 27, 2024 to land and July 27, 2024 for each NiC Oil Cannister. Loot
DO NOT MERGE: testing from upstream Closing until @ckstettler and I can resume testing the test errors together
RuneScape talk:Requests for adminship * Yes we've noticed.--Whiplash 21:54, 14 October 2006 (UTC) Removing Request when done * 1) This should go at the bottom, not the top. * 2) Sign in before posting. (That way you can sign with ~ .) * Ok, ok and ok -- Carralpha 09:37, 23 December 2006 (UTC) Archives? List of users * Archive 1, and Runescape:Administrators --Eucarya 19:20, 13 November 2006 (UTC) Enough Administrators.. Oh god...That's all I have to say.. 01:21, 21 January 2007 (UTC) Stop self-nominating? * I nominate myself......I'm kidding. *stamps "OPPOSE" on his forehead* 08:04, 7 March 2007 (UTC) Bureaucrats? What exactly is a bureaucrat, in the wikia sense?--Atlantima 22:07, 7 March 2007 (UTC) Main space edits That's my view. Discuss. JalYt-Xil-Vimescarrot 08:47, 10 March 2007 (UTC) Sysops have become something else. * Sysop Requirements * .....No respones...? 15:56, 13 March 2007 (UTC) * Read the requirements that Eucarya set * Re-Think their nomination (if self elected) and decide if they truly do feel that they are ready * My thoughts about the current rules, guidelines and proposed ones: Agreed. The contributor should be nominated for doing the things mentioned in the guidelines. Agreed, two weeks seems like a good amount. My ideas for nominee requirements: * A registered user (this goes without saying) My ideas for voter requirements: * A registered user Dtm142 23:43, 14 April 2007 (UTC) Self-nominating again... If a user feels they are ready for adminship duties and no one has nominated them yet they should be free to nominate themselves. This rule was mostly put in place to stop new users from doing this. I think a guidline to prevent that could be something like you must have been here for at least a month. Then if the user dosent meet the guidline their request can be removed from the page.--Whiplash 18:14, 5 May 2007 (UTC) * I would Dtm but we need more opinions before I can or can't. --Whiplash 19:16, 5 May 2007 (UTC) * Don't forget about Total Rune O.o Chaoticar 01:07, 6 May 2007 (UTC) Activity? New set of rules Considering the size of our wiki, I think its time to tighten up these rules. our current ones are About RfA Nomination standards Decision process Expressing opinions Nominating What can admins do? See RuneScape:Administrators What can I do to help? Watch Special:Recentchanges and help to revert vandalism even before you're an admin Create good content Be part of the community All decisions should be democratic "Nomination standards Requirements to be nominated for adminship: * 3) They must never have vandalised the wiki. * 4) They must not made an attack on another user in the past 3 months. * 5) They must not have used any swearing in the past 3 months. * 6) They must have reverted at least 5 edits as vandalism. * 7) They must have edited in the last 5 days. (still be active) "Expressing opinions Nominating Aggreed, except that voters must have been here for at least a month, so we don't get sockpuppets. Alternative to current nomination system * When a user wants to be in the running for sysop, they add their name to the table below: Give this a thought and then post some intelligent criticism. Suggestion for vote indents I want to be a sysop. ~Someuser, somedate * Sure, why not? * No, I don't think so. * Nah. * But... * That's a no. * But I wanna! ((Yes, I'm also making a point about our disorganized in-general indenting. * Support by me. * I say no. * Yes * No * Hmm. * I'm commenting on "Hmm." * I'm replying to the comment about "Hmm." 
* However, I'm still replying to "Hmm." * And I'm just another comment. * I'm commenting on that comment. * Another top-level comment ((Maybe I went a bit overboard on the comments hierarchy?)) * Then we should probably give each RfA its own subpage... Oddlyoko talk 19:39, 19 May 2007 (UTC) * Good idea. That would make it easier to archive too. Dtm142 20:04, 19 May 2007 (UTC) I'm wondering Do you guys think I got what it takes to be an admin?00:01, 20 May 2007 (UTC) * Maybe. Put up a nomination and see. 00:04, 20 May 2007 (UTC) * Ehhh I can nominate myslef?00:04, 20 May 2007 (UTC) * Self nominations are allowed. Dtm142 00:08, 20 May 2007 (UTC) * Umm...last time I was active self nominating wasn't allowed lol.00:19, 20 May 2007 (UTC) Two Weeks??? I don't think so Revising the process ==User== ===Support=== ===Oppose=== ===Neutral=== ===Comments=== or something like that.--Richard (Talk - Contribs) 00:13, 25 May 2007 (UTC) * Put it in sitenotice and put a notice at the top of the page. 17:43, 27 May 2007 (UTC) * I'll do that right now. Dtm142 17:44, 27 May 2007 (UTC) --Xenogears2 18:04, 27 May 2007 (UTC) * It's our method of deciding who gets administrator powers. Dtm142 18:06, 27 May 2007 (UTC) * Wow, that's really good idea. 05:29, 1 June 2007 (UTC) Templates, st00f lets not get anymore admins * Let's see.... * Vimescarrot is not completely inactive and still does some things * I think Eucarya is the same * Hyenaste * Sacre Fi * Oddlyoko * Whiplash * Ya I guess so. 22:29, 29 May 2007 (UTC) * I know I'm counting people who are active. 22:37, 29 May 2007 (UTC) The Meaning of "Sysop" Sysops Oddlyoko talk 00:22, 31 May 2007 (UTC) * I suspect Oddlyoko speaks of me at the top. 14:29, 1 June 2007 (UTC) Wowee, this really did spark controversy. :| Oddlyoko talk 16:28, 1 June 2007 (UTC) * I'll make a list of what I'm trying to say... 17:08, 1 June 2007 (UTC) * 1) You can't purely base a person's matureness on their age. * 2) it's very hard to tell a person's age online without any personal info. * Perhaps now you understand? 17:08, 1 June 2007 (UTC) * Would being sysoped in January be considered recent by your standards? 17:16, 1 June 2007 (UTC) * 17:44, 1 June 2007 (UTC) * Total Rune was 19. Proof by example. So would I be an exception or not? 18:57, 2 June 2007 (UTC) Forums * Whiplash does. 19:05, 2 June 2007 (UTC)
Ribonuclease Z tRNase Z (also known as 3' tRNase, tRNA 3' endonuclease, or RNase Z) is an enzyme that, among other things, catalyses the reactions involved in the maturation of tRNAs. Here, it endonucleolytically cleaves the RNA and removes extra 3' nucleotides from the tRNA precursor, generating the 3' termini of tRNAs. A 3'-hydroxy group is left at the tRNA terminus and a 5'-phosphoryl group is left at the trailer molecule. Similarly, it processes tRNA-like molecules such as mascRNA.
Distribution: schema download link and corresponding documentation For a Distribution, I see the property mobilitydcatap:dataModelSchema. Its description is: “This property describes the schema of the delivered content, as applied for the data model (see property mobilitydcatap:dataModel). The schema can be individually determined by the data provider (e.g., a stakeholder-based DATEX II profile) or prescribed by other institutions (e.g., a DATEX II recommended reference profile). […]” I do not quite understand how I have to read this; it reads like “a description of the schema/XSD has to be given in this property”. But an XSD in itself is a description. If we take DATEX II as an example, would it not be more, or also, suitable to provide the option to add to a distribution a download link to the schema (XSD)? (Say, maybe dataModelSchemaDownloadURL?) Lots of elaborate documentation about DATEX II profiles (schemas) is given, for example on websites (datex2.eu). If mobilityDCAT-AP allowed giving a link to a schema for download, a property (say, maybe DataModelDocumentationURL) to point to the corresponding documentation for this schema would, in my opinion, be very good to have. Regarding "I do not quite understand...": We will refine the usage note as follows: "This property refers to a data schema associated with the applied data model, see property mobilitydcatap:dataModel." Regarding 1: You propose to change the range of this property from "rdfs:Resource" to "dcat:Distribution". If we do so, the schema information would need to include any mandatory properties from the class "dcat:Distribution", see: https://mobilitydcat-ap.github.io/mobilityDCAT-AP/drafts/latest/#properties-for-distribution This would unnecessarily explode the payload: For example, the property "mobilitydcatap:dataModel" would need to be noted again, which is equal to "mobilitydcatap:dataModel" of the original distribution. I propose to keep the generic range "rdfs:Resource". Regarding 2: If we keep the range "rdfs:Resource" (see above), the NAP system can decide how to reference the schema. It might be a URL to a NAP-internal catalogue of schemas, a URL to an individual schema (that is provided by the metadata publisher), or a URL to an external catalogue of schemas (like on the datex2.eu page). I will add this as a usage note. I propose NOT to include any additional properties for mobilityDCAT-AP 1.0. We have only a few days until the publication, and property additions would involve another review round. However, let's think about refinements of our format-related properties (which may be expressed as formal sub-properties) for mobilityDCAT-AP 2.0. Thanks for the proposals. I was not aware that there are (only) a few days left until the release of version 1.0. So, it's nice that the usage note of 1 and 2 can still be adjusted. Looking later at whether it is desirable to further develop format-related properties seems like a sensible route to me. I resolved this while resolving this issue: https://github.com/mobilityDCAT-AP/mobilityDCAT-AP/issues/12 See point 2. -> There is a dedicated new class "Mobility Data Standard", with a property "Schema". -> The range of "Schema" is kept with "rdfs:Resource", for the reasons above. -> The usage of "Schema" has been concretised, with the options for how to describe a schema, as described above.
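As a rough illustration of the agreed outcome (a schema reference kept with the generic range rdfs:Resource), here is a small Python/rdflib sketch; the namespace URI for mobilityDCAT-AP, the resource URIs, and the schema URL are assumptions made for the example, not taken from the issue or the published specification.

from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, DCAT

# Assumed namespace URI for mobilityDCAT-AP; check the published spec.
MDCAT = Namespace("https://w3id.org/mobilitydcat-ap#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("mobilitydcatap", MDCAT)

# Hypothetical distribution published on a NAP.
dist = URIRef("https://example.org/nap/distribution/traffic-data-xml")

g.add((dist, RDF.type, DCAT.Distribution))
# Because the range stays rdfs:Resource, the object can simply be a URL:
# a NAP-internal schema catalogue entry, an individually provided XSD, or
# an entry in an external catalogue such as the one on datex2.eu.
g.add((dist, MDCAT.dataModelSchema,
       URIRef("https://example.org/schemas/datex2-profile.xsd")))

print(g.serialize(format="turtle"))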
recherche du temps perdu Etymology Borrowed from French. From the title of the novel À la recherche du temps perdu. Noun * 1) Remembrance of things past.
<?php namespace Tests\amoCRM\Entity; use amoCRM\Entity\BaseCustomField; use amoCRM\Entity\CustomFieldAddress; use PHPUnit\Framework\TestCase; /** * Class CustomFieldAddressTest * @package Tests\amoCRM\Entity * @covers \amoCRM\Entity\CustomFieldAddress */ final class CustomFieldAddressTest extends TestCase { /** @var integer */ private $default_id = 25; public function testIsInstanceOfBaseField() { $this->assertInstanceOf( BaseCustomField::class, new CustomFieldAddress($this->default_id) ); } public function testSetValueToAmo() { $cf = new CustomFieldAddress($this->default_id); $value = sprintf("%s\n%s\n%s", 'some', 'multi line', 'text'); $cf->setValue($value); $this->assertEquals(['id' => $this->default_id, 'values' => [['value' => $value]]], $cf->toAmo()); } public function testSetValue() { $cf = new CustomFieldAddress($this->default_id); $value = 1; $cf->setValue($value); $data = $cf->toAmo(); $this->assertEquals(['id' => $this->default_id, 'values' => [['value' => $value]]], $data); $this->assertInternalType('string', $data['values'][0]['value']); } }
Flutter Build Failed Hi @Hixie, @stuartmorgan , @danagbemava-nc, @jaumard My Flutter Application was working fine. For the past two days when I try to run the app, I see the below error in the console. Please help me in resolving the below errors, Any help would be appreciated! FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:processQaDebugResources'. > A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade > Android resource linking failed C:\Users\mohammed.uddin\.gradle\caches\transforms-3\4ad9241f3702f85124aba07b562b6468\transformed\core-1.7.0\res\values\values.xml:105:5-114:25: AAPT: error: resource android:attr/lStar not found. Hi @mdzafar3194, have you added any dependencies to your project recently? have you made any changes to either of your build.gradle files or AndroidManifest.xml? What is the output of flutter doctor -v? can you provide your pubspec.yaml, build.gradle and AndroidManifest.xml? can you provide a complete minimal reproducible code sample? also, can you provide the output of flutter run -v? (output is likely to be huge so you can paste it into a text file and attach it here. Lastly, do bear in mind that the flutter issue tracker is meant for tracking issues with flutter itself. For help with personal code, it is better that you check out any of the flutter communities https://flutter.dev/community. There might be someone who has also faced this error and can offer you some assistance. Thank you Please do not ping random people when filing issues.
The inside dimensions of the water screen (fig. 3, B) were 6 inches by 2 inches by 0.25 inch. The two pieces of glass (c) were held securely one-fourth inch apart by a strong wooden frame (d). The running water passed through a siphon (fig. 2, P) and a glass tube (fig. 3, B, e), having a bore of one-eighth inch, into one end of the water screen and out again at the other end through another tube and siphon (fig. 2, Q; fig. 3, B, f). Fig. 3. — Diagrams of photo-geotactic box (A) and water screen (B), showing the following parts: a, glass shutter; b, wooden shutter; c, piece of glass; d, wooden frame; e, inlet glass tube; and f, outlet glass tube. The lamp (fig. 2, O), consisting of a blue daylight bulb, 100 W, 110 V, rested on the table or on a box, while the photo-geotactic box and water screen lay on a stool (R). Since the writer was not able quickly and accurately to separate the live bean beetles according to sex, sex was disregarded in all the tests conducted. The beetles, otherwise, were selected so that those in each set were of practically the same age and responded to daylight readily.
Inject component into another component What is the preferred way to inject a component into another component? I have an Object Oriented application structure where a View only knows about its parent View. Because all my components are 'dynamic' components I do not know the component structure beforehand. I tried it in two different ways with the following shared code: /** @jsx React.DOM */ var component = React.createClass({ render: function () { return ( <div> .. many elements here .. {this.props.children} </div> ); } }); var subcomponent = React.createClass({ render: function () { return ( <div>test</div> ); } }); var parentView = React.renderComponent( <component>.. subelements</component>, document.getElementById('reactContainer') ); 1. Multiple components rendered var subView = React.renderComponent( <subcomponent />, parentView.getDOMNode() ); The problem with this is that the super components inner html is replaced by the injected component. Also other errors are popping up. Seems like this is not the React-way of doing this. 2. Inject subcomponent via setProp with single renderComponent Another approach is to set the children prop. parentView.setProps({ children: <subcomponent /> }); This works almost as expected, but also has some drawbacks. It is resetting the children to only the injected component. I could work around this by: parentView.setProps({ children: [parentView.props.children, <subcomponent />] }); But now the childView is managing the children of its parent. But I could extract this to a method on the parentView. Another drawback is that when the view-depth is deeper than 2, the reference to the React component is gone because only the rootView is rendered via React.renderComponent and thus I can only do setProps on the rootview. I think I need a React.renderComponent for every view, but I don't know a way of how to inject it in the parent. I can't fully make sense of your model, but it seems you are fighting the idiomatic react way of doing things. You should perhaps re-read the docs and start again :( You don't need to inject children and they definitely shouldn't be passed as props. How to handle children is addressed here: http://facebook.github.io/react/docs/multiple-components.html I'm also extremely confused by what you're trying to do here. In order to have a child component inside a parent, you just call the child inside the parent's render function. Why you're trying to insert a child through props is completely beyond me and is absolutely not something you should generally be trying to do with React. In most cases, the preferred way to pass a component down to another component is using the special children prop (like you show in your example): var component = React.createClass({ render: function () { return ( <div> .. many elements here .. {this.props.children} </div> ); } }); you can read more here
Signal processing apparatus and method for generating a corrected image having a large number of pixels from an image having a lesser number of pixels, and recording medium having a program recorded thereon for performing such method ABSTRACT A signal processing apparatus includes a generator operable to generate a second image signal by converting a first image signal into the second image signal; a calculation unit operable to calculate a correction amount based on an evaluation of the second image signal relative to the first image signal; and a correction unit operable to correct the second image signal based on the correction amount. CROSS-REFERENCE TO RELATED APPLICATIONS The present application claims priority from Japanese Patent Application No. JP 2004-124270 filed Apr. 20, 2004, the disclosure of which is hereby incorporated by reference herein. BACKGROUND OF THE INVENTION The present invention relates to signal processing apparatuses and methods and to recording media and programs for controlling the signalprocessing apparatuses and methods, and more particularly, to a signalprocessing apparatus and method capable of generating high-quality images and to a recording medium and a program for controlling such signal processing apparatus and method. Recently, due to an increase in the size of display screens for television receivers, images often have been displayed using image signals having a large number of pixels. Thus, for example, processing for converting pixels into quadruple-density pixels in order to convert standard-definition (SD) images into high-definition (HD) images is suggested, for example, in Japanese Unexamined Patent Application Publication No. 2000-78536. Thus, viewers can view higher-quality images on large screens. However, when an output image is generated using a linear prediction coefficient, a unique output image is determined from an input image. Inaddition, for example, when a method is adopted for performing classification by adaptive dynamic range coding (ADRC), for reading an optimal prediction coefficient from among prediction coefficients learned in advance in accordance with the classification, and for generating an output image in accordance with the prediction coefficient, a unique output image is determined depending on the input image. Although in order to reduce the error between an input image and an output image, a prediction coefficient is generated by learning many supervisor images in advance, the error may not be satisfactorily reduced for some input images. In such cases, the high-resolution images that are generated have been output as they are. For example, by quadruple-density conversion, four pixels are generated from one pixel using respective independent linear prediction coefficients. Since the characteristics of the processing performed between the four pixels are different from the characteristics of the processing performed between another four pixels acquired from another input pixel, discontinuity may occur. As a result, users may not be able to view images with high quality. SUMMARY OF THE INVENTION It is desirable to provide images with higher quality. 
According to an embodiment of the present invention, a signal processing apparatus includes a generator operable to generate a second imagesignal by converting a first image signal into the second image signal;a calculation unit operable to calculate a correction amount based on an evaluation of the second image signal relative to the first imagesignal; and a correction unit operable to correct the second imagesignal based on the correction amount. A block of pixels from among pixels constituting the second image signal includes a target pixel, and the calculation unit may calculate a coefficient representing the evaluation based on the relationship between a first difference between the target pixel and pixels other than the target pixel within the block of pixels and a second difference between the target pixel and pixels outside the block of pixels. The calculation unit may include a first difference calculation unitoperable to calculate the first difference between the target pixel andthe pixels other than the target pixel within the block of pixels; a second difference calculation unit operable to calculate the second difference between the target pixel and the pixels outside the block of pixels; a first average value calculation unit operable to calculate the average of the first differences in a frame; a second average value calculation unit operable to calculate the average of the second differences in the frame; a coefficient calculation unit operable to calculate the coefficient based on a ratio of the average of the first differences in the frame and the average of the second differences inthe frame; and a correction amount calculation unit operable to calculate the correction amount based on the coefficient and the second difference. According to an embodiment of the present invention, a signal processing method includes generating a second image signal by converting a first image signal into the second image signal; calculating a correctionamount based on an evaluation of the second image signal relative to thefirst image signal; and correcting the second image signal based on the correction amount. According to an embodiment of the present invention, a recording medium is recorded with a computer-readable program for performing a signalprocessing method, the method including generating a second image signal by converting a first image signal into the second image signal;calculating a correction amount based on an evaluation of the second image signal relative to the first image signal; and correcting the second image signal based on the correction amount. According to an embodiment of the present invention, a system for performing a signal processing method includes a processor operable to execute instructions; and instructions, the instructions including generating a second image signal by converting a first image signal intothe second image signal; calculating a correction amount based on an evaluation of the second image signal relative to the first imagesignal; and correcting the second image signal based on the correctionamount. Accordingly, a correction amount is calculated in accordance with an evaluation of a second image signal relative to a first image signal,the second image signal being generated by performing pixel conversion on the first image signal, and the second image signal is corrected based on the correction amount. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 
1 is a block diagram showing an example of the functional structure of a signal processing apparatus according to an embodiment of the present invention; FIG. 2 is a flowchart of an image signal generation process performed bythe signal processing apparatus shown in FIG. 1; FIG. 3 is a block diagram showing an example of the functional structure of an HD prediction section shown in FIG. 1; FIG. 4 is a flowchart of an HD prediction value generation process; FIG. 5 is an illustration for explaining the relationship between SD pixel data and HD pixel data; FIG. 6 is a block diagram showing an example of the functional structure of a prediction value evaluation section shown in FIG. 1; FIG. 7 is an illustration for explaining modes; FIG. 8 is an illustration for explaining an intra-mode difference and an inter-mode difference; FIG. 9 is an illustration for explaining the intra-mode difference andthe inter-mode difference; FIG. 10 is an illustration for explaining the intra-mode difference andthe inter-mode difference; FIG. 11 is an illustration for explaining the intra-mode difference andthe inter-mode difference; FIG. 12 is a flowchart of a correction amount calculation process; FIG. 13 is an illustration for explaining the principle of correction;and FIG. 14 is a block diagram showing an example of the structure of a personal computer. DETAILED DESCRIPTION Embodiments of the present invention will be described with reference tothe drawings. FIG. 1 shows an example of the functional structure of a signalprocessing apparatus 1 according to an embodiment of the present invention. The signal processing apparatus 1 includes an HD prediction section 11, a prediction value evaluation section 12, and a prediction value correction section 13. An input image signal is input to the HD prediction section 11. Forexample, when the input image signal is an SD image signal, the HDprediction section 11 converts the SD image signal into an HD imagesignal and outputs the HD image signal as a signal Y₁ to the prediction value evaluation section 12 and the prediction value correction section13. The prediction value evaluation section 12 evaluates the HD imagesignal received as a prediction value from the HD prediction section 11,calculates a correction amount E, and supplies the correction amount Eto the prediction value correction section 13. The prediction value correction section 13 corrects the HD image signal received from the HDprediction section 11 in accordance with the correction amount E supplied from the prediction value evaluation section 12, and outputs an output image signal as a signal Y₂. A process for generating an image signal performed by the signalprocessing apparatus 1 shown in FIG. 1 is described next with reference to the flowchart shown in FIG. 2. In step S1, the HD prediction section 11 generates an HD prediction value from an input image signal. In other words, the HD prediction section 11 generates an HD image signal, as an HD prediction value, froma received SD image signal, and outputs the generated HD image signal tothe prediction value evaluation section 12 and the prediction value correction section 13. A process for generating the HD prediction value will be described below with reference to FIGS. 3 and 4. In step S2, the prediction value evaluation section 12 evaluates the HDprediction value received from the HD prediction section 11. The operation of the prediction value evaluation section 12 will be described below with reference to FIGS. 6 and 12. 
Accordingly, the prediction value generated by the HD prediction section 11 is evaluated,and a correction amount E is calculated in accordance with the evaluation. In step S3, the prediction value correction section 13 corrects the HDprediction value. In other words, a signal Y₂, as a corrected HD imagesignal, is calculated by subtracting the correction amount E supplied from the prediction value evaluation section 12 from the prediction value Y₁, which is the HD image signal supplied from the HD prediction section 11, in accordance with equation (1).Y ₂ =Y ₁ −E   (1) FIG. 3 is a block diagram showing the functional structure of the HDprediction section 11. As shown in FIG. 3, the HD prediction section 11includes a prediction tap extraction unit 31, a class tap extraction unit 32, a classification unit 33, a coefficient storage unit 34, and an adaptive prediction unit 35. The prediction tap extraction unit 31 extracts a prediction tap from the SD image signal, which is an input image signal, and supplies the extracted prediction tap to the adaptive prediction unit 35. The class tap extraction unit 32 extracts a class tap from the received SD imagesignal, and outputs the extracted class tap to the classification unit33. The position of a pixel in the received SD image signal that is extracted as the prediction tap by the prediction tap extraction unit 31and the position of a pixel in the received SD image signal that is extracted as the class tap by the class tap extraction unit 32 are determined in advance. The classification unit 33 determines a class in accordance with thevalue of a pixel constituting the class tap received from the class tap extraction unit 32, and outputs code corresponding to the class to the coefficient storage unit 34. Prediction coefficients generated for each class by learning many images in advance are stored in the coefficient storage unit 34. The coefficient storage unit 34 reads the prediction coefficient corresponding to the class received from the classification unit 33, and outputs the prediction coefficient to the adaptive prediction unit 35. The adaptive prediction unit 35 applies the value ofthe pixel constituting the prediction tap extracted by the prediction tap extraction unit 31 and the prediction coefficient supplied from the coefficient storage unit 34 to a first linear combination formula, and generates an HD image signal as an HD prediction value. A process for generating an HD prediction value performed by the HDprediction section 11 is described next with reference to the flowchartshown in FIG. 4. In step S31, the class tap extraction unit 32 extracts a class tap froma received SD image signal. The extracted class tap is supplied to the classification unit 33. In step 32, the classification unit 33 performs classification. In other words, a class is determined by performing, forexample, 1-bit ADRC processing on the value of the class tap received from the class tap extraction unit 32. The determined class corresponds to the characteristics of the received SD image signal. In step S33, the coefficient storage unit 34 reads a prediction coefficient. More specifically, the prediction coefficient corresponding to the class code received from the classification unit 33 is read, and the read prediction coefficient is supplied to the adaptive prediction unit 35.Since this class code corresponds to the characteristics of the received SD image signal, the prediction coefficient corresponds to the characteristics of the SD image signal. 
In step S34, the prediction tap extraction unit 31 extracts a prediction tap. The extracted prediction tap is supplied to the adaptive prediction unit 35. In step S35, the adaptive prediction unit 35 generates an HDprediction value. In other words, the HD prediction value is calculated by applying the pixel value of the prediction tap supplied from the prediction tap extraction unit 31 and the prediction coefficient read bythe coefficient storage unit 34 to a predetermined first linear prediction formula. As described above, for example, as shown in FIG. 5, HD pixel data represented by squares having four times the pixel density is generated from SD pixel data represented by circles. In this case, as shown in FIG. 5, for example, one piece of SD pixel data p1 corresponds to four pieces of HD pixel data q1 to q4 around the SD pixel data p1. As described above, a higher-density HD image signal generated from anSD image signal is supplied to the prediction value evaluation section12 and the prediction value correction section 13. FIG. 6 shows an example of the functional structure of the prediction value evaluation section 12. The prediction value evaluation section 12includes an intra-mode difference calculator 61, an average value calculator 62, a correction coefficient calculator 63, a correctionamount calculator 64, an inter-mode difference calculator 65, and an average value calculator 66. The intra-mode difference calculator 61 calculates an intra-modedifference value of HD prediction values supplied from the adaptive prediction unit 35 of the HD prediction section 11. Similarly, the inter-mode difference calculator 65 calculates an inter-mode difference value of the received HD prediction values. In this embodiment, as shown in FIG. 7, with respect to a target pixel in SD pixel data, four surrounding HD pixels are set to modes 0 to 3. In the example shown in FIG. 7, an HD pixel on the upper left of the target pixel is set to themode 0, an HD pixel on the upper right of the target pixel is set to themode 1, an HD pixel on the lower left of the target pixel is set to themode 2, and an HD pixel on the lower right of the target pixel is set tothe mode 3. As shown in FIG. 8, four pixels m0, m1, m2, and m3 in the modes 0, 1, 2,and 3 constitute a mode block. One of the pixels m0 to m3 is set as a target pixel, and the difference between the target pixel and the other pixels in the mode block is calculated as an intra-mode difference. Inthe example shown in FIG. 8, the upper-left pixel m0 in the mode 0 isset as the target pixel. Thus, the difference between the target pixelm0 in the mode 0 and the pixel m1 in the mode 1, the pixel m2 in themode 2, and the pixel m3 in the mode 3 is calculated as an intra-modedifference. In other words, the intra-mode difference D_(in) is represented by equation (2).D _(in) =|m0−m1|+|m0−m2|+|m0−m3|  (2) As is clear from equation (2), in this example, the sum of the absolute values of the differences between the target pixel and the pixels in theother three modes is obtained as the intra-mode difference D_(in). In contrast, the difference between the target pixel and pixels that arenot within the mode block for the target pixel is obtained as an inter-mode difference. In other words, in the example shown in FIG. 8,since the target pixel m0 is on the upper left corner of the mode block,the difference between the target pixel m0 and pixels s1, s2, and s3,which are on the left, above, and upper left, respectively, of thetarget pixel m0, is an inter-mode difference. 
In other words, the inter-mode difference D_(out) is calculated using equation (3) as the sum of the absolute values of the differences between the target pixelm0 and the pixel s1, which is on the left of the target pixel m0,between the target pixel m0 and the pixel s2, which is above the targetpixel m0, and between the target pixel m0 and the pixel s3, which is onthe upper left of the target pixel m0.D _(out) =|m0−s1|+|m0−s2|+|m0−s3|  (3) For example, if the pixel m0 located on the upper right corner of themode block is set as the target pixel, as shown in FIG. 9, the sum ofthe absolute values of the differences between the target pixel m0 andthe pixel s1, which is on the right of the target pixel m0, between thetarget pixel m0 and the pixel s2, which is above the target pixel m0,and between the target pixel m0 and the pixel s3, which is on the upperright of the target pixel m0, is obtained as the inter-mode differenceD_(out). In FIG. 9, the sum of the absolute values of the differences between thetarget pixel m0 and the other three pixels m1, m2, and m3 in the mode block is obtained as the intra-mode difference D_(in), as in the example shown in FIG. 8. For example, if the pixel m0 located on the lower left corner of themode block is set as the target pixel, as shown in FIG. 10, the sum ofthe absolute values of the differences between the target pixel m0 andthe pixel s1, which is on the left of the target pixel m0, between thetarget pixel m0 and the pixel s2, which is below the target pixel m0,and between the target pixel m0 and the pixel s3, which is on the lowerleft of the target pixel m0, is obtained as the inter-mode differenceD_(out). The sum of the absolute values of the differences between thetarget pixel m0 and the other three pixels m1, m2, and m3 in the mode block is obtained as the intra-mode difference D_(in). For example, if the pixel m0 located on the lower right corner of themode block is set as the target pixel, the sum of the absolute values ofthe differences between the target pixel m0 and the pixel s1, which ison the right of the target pixel m0, between the target pixel m0 and the pixel s2, which is below the target pixel m0, and between the targetpixel m0 and the pixel s3, which is on the lower right of the targetpixel m0, is obtained as the inter-mode difference D_(out), The sum ofthe absolute values of the differences between the target pixel m0 andthe other three pixels m1, m2, and m3 in the mode block is obtained asthe intra-mode difference D_(in). Referring back to FIG. 6, the average value calculator 62 calculates an intra-mode difference average D_(inav), which is the average of the intra-mode differences in a frame calculated by the intra-modedifference calculator 61. The average value calculator 66 calculates an inter-mode difference average D_(outav), which is the average of the inter-mode differences in the frame calculated by the inter-modedifference calculator 65. The correction coefficient calculator 63 calculates a correction coefficient K using equation (4) in accordance with the intra-modedifference average D_(inav) calculated by the average value calculator62 and the inter-mode difference average D_(outav) calculated by the average value calculator 66. 
$K = \frac{D_{inav}}{D_{outav}} \quad (4)$ The correction amount calculator 64 calculates a correction amount E using equation (5) in accordance with the correction coefficient K calculated by the correction coefficient calculator 63 and an inter-mode difference d_(out) calculated by the inter-mode difference calculator 65. $E = (1 - K) \times \frac{d_{out}}{2} \quad (5)$ The inter-mode difference d_(out) used in equation (5) is obtained by equation (6). $d_{out} = \frac{(m0 - s1) + (m0 - s2) + (m0 - s3)}{3} \quad (6)$ A process for calculating a correction amount performed by the prediction value evaluation section 12 is described next with reference to the flowchart shown in FIG. 12. In step S61, the intra-mode difference calculator 61 calculates an intra-mode difference. More specifically, a mode block is arranged in a predetermined position of a frame constituted by HD pixel data, and a pixel from among the four pixels constituting the mode block is set as a target pixel. The intra-mode difference D_(in) is calculated in accordance with equation (2). In step S62, the inter-mode difference calculator 65 calculates an inter-mode difference D_(out) in accordance with equation (3). Calculation of intra-mode differences and calculation of inter-mode differences are performed by the intra-mode difference calculator 61 and the inter-mode difference calculator 65, respectively, for all the pixels in the frame by sequentially moving the position of the mode block within the frame. In step S63, the average value calculator 62 calculates the intra-mode difference average D_(inav), which is the average of the intra-mode differences D_(in) in the frame calculated by the intra-mode difference calculator 61. Similarly, in step S64, the average value calculator 66 calculates the inter-mode difference average D_(outav), which is the average of the inter-mode differences D_(out) in the frame calculated by the inter-mode difference calculator 65. In step S65, the correction coefficient calculator 63 calculates a correction coefficient K. In other words, the correction coefficient calculator 63 calculates the correction coefficient K in accordance with equation (4) by dividing the intra-mode difference average D_(inav) calculated by the average value calculator 62 by the inter-mode difference average D_(outav) calculated by the average value calculator 66. In step S66, the correction amount calculator 64 calculates a correction amount E represented by equation (5) in accordance with the correction coefficient K calculated by the correction coefficient calculator 63 and the inter-mode difference d_(out) represented by equation (6), calculated by the inter-mode difference calculator 65. The correction coefficient K and the correction amount E are explained as described below. In a general natural image, an intra-mode difference average D_(inav) and an inter-mode difference average D_(outav) are equal to each other, as represented by equation (7). $D_{inav} = D_{outav} \quad (7)$ Equation (7) means that there is no direction dependency in the pixel-level gradient. In this embodiment, however, since quadruple-density conversion is performed on a pixel, gaps are generated between blocks each constituted by four pixels in the converted image.
Thus, the inter-mode difference average D_(outav) is greater than the intra-mode difference average D_(inav), as represented by condition (8). $D_{inav} < D_{outav} \quad (8)$ In other words, on average, the inter-mode difference is greater than the intra-mode difference. This is because four HD pixels are generated from one SD pixel and this output is performed independently for each set of four pixels. As described above, the values that should be equal to each other on average, as represented by equation (7), are not equal to each other in an image after pixel conversion, as represented by condition (8). Thus, correcting the values so as to be equal to each other enables the calculation result to be approximated to a desired image to be output (an image with higher accuracy). Since, as represented by equation (7), the intra-mode difference average D_(inav) and the inter-mode difference average D_(outav) are equal to each other in the original image on which pixel conversion is not performed, the correction coefficient K represented by equation (4) represents how much smaller the inter-mode difference in the original image is than the inter-mode difference serving as the output result of the quadruple-density processing. Thus, approximating the inter-mode difference D_(out) of the image, serving as the output result of quadruple-density processing, to the intra-mode difference D_(in) corrects the image in the correct direction. FIG. 13 shows this processing conceptually. As shown at the leftmost part in FIG. 13, the difference d0 between the target pixel m0 and the pixel s1 that is on the left of the target pixel m0 and that is outside the mode block for the target pixel m0 is obtained. When quadruple-density pixel conversion is performed on this image, as shown at the center in FIG. 13, the difference between the target pixel m0 and the pixel s1 increases from d0 to d1 on average. This is the state of the signal Y₁ that is output as the HD prediction value from the HD prediction section 11. Multiplying the correction coefficient K, which represents how much smaller the difference d0 in the original image is than the difference d1, by the difference d1 enables the corrected difference d2 to be approximated to the original difference d0. The amount of correction used here is the correction amount E. Since the processing for correcting the difference between the target pixel m0 and the pixel s1 is performed from two sides (correction performed when the pixel m0 functions as a target pixel and correction performed when the pixel s1 functions as a target pixel), the correction amount E is divided by two in equation (5). The prediction value correction section 13 calculates the signal Y₂, which is the corrected HD image signal, by subtracting the correction amount E from the signal Y₁ output from the HD prediction section 11 in accordance with equation (1). Thus, on average, the inter-mode difference average approximates the intra-mode difference average over the whole screen, and an image without gaps between mode blocks can be achieved. Although a case where quadruple-density pixel conversion is performed has been described, the multiplication factor is not limited to four. In addition, n-times density pixel conversion is not necessarily performed; 1/n-times density pixel conversion can also be performed. The present invention is also applicable to television receivers, hard disk recorders, and other apparatuses for processing image signals. The foregoing series of processes may be performed by hardware or software.
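The mode-block correction described around equations (2) through (6) can be sketched in a few lines of code. The following Python/NumPy snippet is an illustrative reading of those equations for a quadruple-density (2x2 block) image and is not taken from the patent; border pixels are simply left uncorrected, and the class-adaptive HD prediction step is omitted.

import numpy as np

def correct_hd_image(y1: np.ndarray) -> np.ndarray:
    """Apply the mode-block correction of equations (2)-(6) to an HD frame y1.

    Each 2x2 block of y1 is assumed to originate from one SD pixel.
    Pixels whose outside neighbours fall off the image are left as-is.
    """
    h, w = y1.shape
    d_in = np.zeros_like(y1, dtype=float)          # intra-mode difference, eq. (2)
    d_out_abs = np.zeros_like(y1, dtype=float)     # |inter-mode| difference, eq. (3)
    d_out_signed = np.zeros_like(y1, dtype=float)  # signed mean difference, eq. (6)
    valid = np.zeros_like(y1, dtype=bool)

    for i in range(h):
        for j in range(w):
            r0, c0 = i - i % 2, j - j % 2          # top-left corner of the 2x2 block
            block = [(r0 + a, c0 + b) for a in (0, 1) for b in (0, 1)]
            others = [p for p in block if p != (i, j)]
            # Equation (2): sum of absolute differences inside the block.
            d_in[i, j] = sum(abs(float(y1[i, j]) - float(y1[p])) for p in others)

            # Outside neighbours lie in the "corner" direction of the pixel.
            di = -1 if i % 2 == 0 else 1
            dj = -1 if j % 2 == 0 else 1
            outside = [(i + di, j), (i, j + dj), (i + di, j + dj)]
            if all(0 <= a < h and 0 <= b < w for a, b in outside):
                diffs = [float(y1[i, j]) - float(y1[p]) for p in outside]
                d_out_abs[i, j] = sum(abs(d) for d in diffs)   # eq. (3)
                d_out_signed[i, j] = sum(diffs) / 3.0          # eq. (6)
                valid[i, j] = True

    # Frame averages and correction coefficient K (eq. (4)).
    k = d_in[valid].mean() / d_out_abs[valid].mean()

    # Correction amount E (eq. (5)) and corrected signal Y2 (eq. (1)).
    e = np.where(valid, (1.0 - k) * d_out_signed / 2.0, 0.0)
    return y1 - e


# Tiny usage example on a random stand-in for an HD prediction frame.
rng = np.random.default_rng(0)
y1 = rng.integers(0, 256, size=(8, 8)).astype(float)
y2 = correct_hd_image(y1)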
For example, the signal processing apparatus 1 may be implemented using the personal computer shown in FIG. 14. Referring to FIG. 14, a central processing unit (CPU) 221 performs various types of processing in accordance with a program stored in a read-only memory (ROM) 222 or a program loaded into a random-access memory (RAM) 223 from a storage unit 228. Data necessary for the CPU 221 to perform the various types of processing is appropriately stored in the RAM 223. The CPU 221, the ROM 222, and the RAM 223 are connected to each other via a bus 224. An input/output interface 225 is connected to the bus 224. The input/output interface 225 is connected to an input unit 226 including a keyboard, a mouse, and the like; an output unit 227 including a display, such as a cathode-ray tube (CRT) or a liquid crystal display (LCD), and a speaker; the storage unit 228, such as a hard disk; and a communication unit 229, such as a modem. The communication unit 229 performs communication via a network including the Internet. A drive 230 is connected to the input/output interface 225 according to need. A removable medium 231, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is appropriately installed on the drive 230. A computer program read from the removable medium 231 is installed in the storage unit 228 according to need. When the series of the foregoing processes is performed by software, a program constituting the software is installed via a network or a recording medium on a computer built into dedicated hardware, or on a general-purpose personal computer or the like capable of performing various functions by installing various programs. As shown in FIG. 14, the recording medium not only includes the removable medium 231, such as a magnetic disk (including a flexible disk), an optical disk (including a compact disk-read only memory (CD-ROM) and a digital versatile disk (DVD)), a magneto-optical disk (including a MiniDisk (MD)), or a semiconductor memory, which is recorded with the program and is distributed in order to provide the program to a user independently of the apparatus main unit, but also includes the ROM 222 or the storage unit 228, such as a hard disk, which is built into the apparatus main unit to be provided to the user and which is recorded with the program. In this specification, steps for a program recorded in the recording medium are not necessarily performed in chronological order in accordance with the written order. The steps may be performed in parallel or independently without being performed in chronological order. In addition, in this specification, a system means the whole equipment including a plurality of apparatuses. The present invention is also applicable to a personal computer performing image processing. According to the foregoing embodiments, high-resolution images with higher quality can be generated. Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. 1.
A signal processing apparatus comprising: a generator operable to generate a second image signal by converting a first image signal intothe second image signal; a calculation unit operable to calculate a correction amount based on an evaluation of the second image signal relative to the first image signal, said evaluation involving calculating a first difference by using pixels within a block of pixels which includes a target pixel and calculating a second difference by using pixels outside the block of pixels; and a correction unit operableto correct the second image signal based on the correction amount. 2. A signal processing apparatus, comprising: a generator operable to generate a second image signal by converting a first image signal intothe second image signal; a calculation unit operable to calculate a correction amount based on an evaluation of the second image signal relative to the first image signal; and a correction unit operable to correct the second image signal based on the correction amount, wherein a block of pixels from among pixels constituting the second image signal includes a target pixel, and the calculation unit calculates a coefficient representing the evaluation based on a relationship between a first difference between the target pixel and pixels other than thetarget pixel within the block of pixels and a second difference betweenthe target pixel and pixels outside the block of pixels. 3. The signalprocessing apparatus according to claim 2, wherein the calculation unit includes: a first difference calculation unit operable to calculate thefirst difference between the target pixel and the pixels other than thetarget pixel within the block of pixels; a second difference calculation unit operable to calculate the second difference between the targetpixel and the pixels outside the block of pixels; a first average value calculation unit operable to calculate an average of the first differences in a frame; a second average value calculation unit operableto calculate an average of the second differences in the frame; a coefficient calculation unit operable to calculate the coefficient basedon a ratio of the average of the first differences in the frame and the average of the second differences in the frame; and a correction amount calculation unit operable to calculate the correction amount based onthe coefficient and the second difference. 4. A signal processing method, comprising: generating a second image signal by converting a first image signal into the second image signal; calculating a correction amount based on an evaluation of the second image signal relative to the first image signal, said evaluation involving calculating a first difference by using pixels within a block of pixels which includes a target pixel and calculating a second difference by using pixels outside the block of pixels; and correcting the second image signal based on the correction amount. 5. 
A recording medium recorded with a computer-readable program for performing a signalprocessing method, the method comprising: generating a second imagesignal by converting a first image signal into the second image signal;calculating a correction amount based on an evaluation of the second image signal relative to the first image signal, said evaluation involving calculating a first difference by using pixels within a block of pixels which includes a target pixel and calculating a second difference by using pixels outside the block of pixels; and correcting the second image signal based on the correction amount. 6. A system for performing a signal processing method, the system comprising: a processor operable to execute instructions; and instructions, the instructions including: generating a second image signal by converting a first image signal into the second image signal; calculating a correction amount based on an evaluation of the second image signal relative to the first image signal, said evaluation involving calculating a first difference by using pixels within a block of pixels which includes a target pixel and calculating a second difference by using pixels outside the block of pixels; and correcting the second image signal based on the correction amount. 7. A signal processing apparatus, comprising: generating means for generating a second imagesignal by converting a first image signal into the second image signal;calculation means for calculating a correction amount based on an evaluation of the second image signal relative to the first imagesignal, said evaluation involving calculating a first difference by using pixels within a block of pixels which includes a target pixel and calculating a second difference by using pixels outside the block of pixels; and correction means for correcting the second image signal based on the correction amount. 8. The signal processing apparatus according to claim 1, wherein the calculation unit calculates a coefficient representing the evaluation in accordance with a relationship between the first difference and the second difference, andin which calculating of the first difference involves obtaining a difference between the target pixel and the pixels other than the targetpixel within the block of pixels which is among pixels constituting the second image and calculating of the second difference involves obtaining a difference between the target pixel and the pixels outside the block of pixels.
Joomla extensions for photo albums I need a photo album extension for Joomla that can take selected folders from the file manager and show each one of them as an album. It should have a module that can show for example 6 albums which when clicked on can open the album's page and display pictures from that album in whatever way. Something like SIMGallery but for Joomla 1.7, it has a module for showing random albums, that's exactly what I need. It's a shame it's only available for Joomla 1.5. The thing is I have pretty much no experience in Joomla. I have no problem in learning it but I can't find decent tutorials that teach how to make components or modules. The web is filled with WordPress tutorials but finding a good article for Joomla is extremely hard. If there are available extensions out there, please link me, that would be great but I would really like to make something on my own also. Can someone guide to nice articles for component and module development and understand the J platform. A good starting point for Joomla 1.7. Thanks! I appreciate all the help. Check out the list of photo gallery plugins on the Joomla Extensions Directory - there should be at least one with your requirements. Also, I have some Joomla tutorials on my site. I don't have a module or component tutorial up yet (they're planned), but I have installation and template tutorials that you might find useful. You may find it difficult to find tutorials for Joomla 1.6 or 1.7 as the Joomla team is now moving through version numbers at a faster rate than the past. However, if you find something for Joomla 1.5, chances are that it will work for later versions as well with just a few minor adjustments (usually the install XML file). Thanks for your reply, I really like your template tutorial, very helpful. Anyway I found one extension called SIMGallery, it's exactly what I need, but it is only available for 1.5 Native. I've updated my question with a link to SIMGallery. If you know something similar please let me know. Thanks!
Structural characteristics and functional properties of fiber-rich by-products of white cabbage modified by high-energy wet media milling The recovery of residues and by-products of the food industry plays an important role in terms of sustainable management. For this reason, the aim of this study was to analyse the effect of wet milling parameters on dietary fiber concentrates of white cabbage by-products or, more precisely, the stalks of cabbage. The input of hydraulic shear energy during the wet milling process leads to a partial modification of the structure of fiber components to obtain compounds with high water- and oil-binding properties. Furthermore, the wet milling parameters affect the functional properties of the fiber concentrates. A mathematical model was developed which relates the functional properties to the parameters of the colloid mill such as slurry concentration, milling time, agitation speed and particle size distribution. A slurry of the ground material is forced into the milling gap. Grinding is autogenous as a result of collisions between rotating particles. All of the material in the process stream is ground finer than the gap setting, and grinding can be optimized by adjusting mill operating parameters. The identification of the relations between milling parameters and functional properties is necessary in order to comprehend the processing characteristics of the material in the context of fiber-enriched food products manufacturing. Introduction The intensification of food production in developed countries led to large waste streams of by-products. The inedible fraction of fresh cabbages and other brassicas, based upon literature values, is approximately 20 % and can be attributed to unavoidable wastes out of industrial processing (Laurentiis et al. 2018). The trimmings, like stalks, are rich in cell wall materials, and their high amount of dietary fiber (DF) enables their usage in modeling new natural ingredients for the food industry. DF are food components well-known for their beneficial effects on human health (EFSA 2010). The most widely accepted classification for DF is the differentiation of dietary components based on their solubility (Dhingra et al. 2012). The term 'dietary fiber concentrates' (DFC) can be used for a product whose major component is DF, but which does not exclude the presence of other components, such as digestible carbohydrates, protein, lipids, minerals and a small amount of water (Garcia-Amezquita et al. 2017). The physicochemical properties of DFC, like characteristic chemistry, dimensions, surface properties and surface charge, are decisive for their functionality and can be affected by chemical, enzymatic, mechanical, thermal or thermo-mechanical treatments (Guillon and Champ 2000). Among the most common functional properties of DFC are the water- and oil-binding capacities (WBC and OBC) and the bulk density (Garcia-Amezquita et al. 2017). Physical modification of DFC can set out the conditions that are necessary to apply them in food products. One possibility for mechanical modification is a wet milling process such as colloid milling. A colloid mill is mainly constructed of a stator and a rotor. The space between these two elements creates a gap for material passage. When the materials pass through, the material adhering to the rotating surface moves at maximum speed, while the material on the stator remains at rest relative to the rotor.
Zhu et al. (2014) reported that ultrafine grinding of buckwheat hull insoluble DF could effectively pulverize the fiber particles to the submicron scale. As the particle size decreased, the hydration properties (e.g. WBC) decreased significantly. Jongaroontaprangsee et al. (2007) examined the effects of drying temperature and particle size on the hydration properties of DF powder from lime and cabbage outer leaves. The particle size of lime residues did not affect the WBC, whereas a smaller particle size of cabbage outer leaves significantly decreased the WBC of their fiber powder. Conversely, Raghavendra et al. (2006) described that the reduction in the particle size of coconut residue DFC from 1127 to 550 µm resulted in increased hydration properties; below 550 µm, the hydration properties were reported to decrease with further reduction in particle size. The OBC was found to increase with decreasing particle size. Furthermore, Zheng and Li (2018) reported that the WBC of coconut (Cocos nucifera L.) cake DF increased when the particle size was reduced from 250 to 167 µm, while the OBC decreased with decreasing particle size. It can therefore be concluded that the effect of a grinding operation depends largely on the nature of the food material on the one hand, and on the applied technology and the distribution and intensity of the stress applied by the grinding tool on the other. Wet grinding can influence the hydration properties of DF and DFC positively, in particular the kinetics of water uptake. As a result of the increased surface area, the fibers hydrate more rapidly. However, the reduction of particle size can also lead to an adverse change and collapse of the porous structure, resulting in a reduction of (capillary) water uptake (Guillon and Champ 2000). For this reason, the aim of this study was to analyse the effect of different milling parameters on DFC from white cabbage by-products in order to improve their functional properties.

Materials and Methods

Sample collection. Stalks of white cabbage Brassica oleracea var. capitata (2018 crop year) were collected from the pickled-cabbage manufacturing process of a local producer in the federal state of Brandenburg, Germany, and were cut into pieces (approx. 3x3 cm). Samples of 500 g were vacuum packed in light- and air-tight laminated plastic bags and kept at -18°C until analysis.

Obtaining dietary fiber concentrates. DFC were obtained following a method described elsewhere (Kunzek et al. 2002). The cut stalks of white cabbage were pre-comminuted with a cutting mill, washed with distilled water and then pressed at 4.8 bar water pressure by means of a rubber bladder and filter cloth (Paul Arauner GmbH & Co. KG, Germany). The compressed mash residue was washed with distilled water, blanched in citric acid solution (1.2 %) and homogenized using a Miccra disperser with a shredder attachment (16,500 rpm for 5 minutes). Afterwards the shredded material was wet-sieved manually using a sieving tower (AS 200, Retsch, Germany). The wet sieving was carried out with distilled water on a 500 μm sieve with a 20 μm sieve underneath, until the sieve passage had a conductivity of less than 200 μS/cm.

Modification of dietary fiber concentrates. The experimental setup for obtaining and modifying the DFC is sketched in Figure 1.
Wet milling. DFC suspensions of 0.78 % and 1.04 % w/w in deionized water were size-reduced using a colloid mill (IKA Magic Lab, Staufen, Germany) under the following operating parameters: a sample of each suspension was taken after 5 and 10 min of passing through the mill at agitation speeds of 14,600 rpm and 20,000 rpm. The ground fiber concentrates had high water contents. This water was removed using a centrifuge (SIGMA 6-16 KS; max. 20,335 x g) at a rotation rate of 11,200 rpm for 10 min, and the samples were then stored frozen (-18°C).

Freeze-drying. To obtain a porous material from the colloid-milled slurry, the samples were portioned into stainless steel dishes and deep frozen (-18°C) before freeze drying (Christ alpha 1-4, Germany) for about 24 h. The set temperature of the ice condenser was -55°C and the pressure was adjusted to 0.05 mbar.

Dry grinding and fractionating. The freeze-dried samples were ground in a centrifugal mill (10,000 rpm, PULVERISETTE 14, Fritsch, Germany) fitted with a 0.5 mm screen to break the freeze-dried agglomerates into smaller units. The ground samples were separated according to particle size using a sieve shaker (AS 200, Retsch, Germany). The mesh sizes of the sieves (Retsch) were 20, 200 and 400 µm. Each sample was placed on the top sieve with the largest mesh width and shaken for 10 min at an amplitude setting of 60. The residue of each sieve was kept separately in light- and air-tight plastic containers.

Water-binding capacity. The water-binding capacity (WBC) was determined according to a method described by Robertson et al. (2000). Distilled water (20 ml) was transferred into 50 ml centrifuge tubes containing 200 mg of sample. The mixture was stirred for 5 min and left at room temperature for 10 min. After centrifugation at 5,000 rpm for 15 min, the excess supernatant was removed and the hydrated weight of the residue was recorded. WBC was expressed as g of bound water per g of dry matter (DM).

Oil-binding capacity. The oil-binding capacity (OBC) was measured using a method described by Elleuch et al. (2008). 200 mg of DFC were added to 10 ml of sunflower oil in a 50 ml centrifuge tube. The content was stirred for 5 min prior to centrifugation at 1,500 rpm for 30 min. The free oil was decanted and the absorbed oil was determined. OBC was expressed as g of sunflower oil per g DM.

Bulk density. The bulk density of the dry samples was determined as outlined elsewhere (Kaur and Singh 2005). The dry samples were gently filled into previously tared 50 ml graduated cylinders. After filling to the 5 ml mark, the bottom of the cylinder was gently tapped on a laboratory bench several times until there was no further diminution of the sample level. Bulk density was calculated as the weight of sample per unit volume (g/ml) on a DM basis.

Statistical analysis. Data analysis and optimization were performed using a two-level full factorial design, which includes all possible combinations of the factor levels. In this design, all factors were treated as numeric. Table 1 lists the levels of the independent variables used in the experimental design. The investigated responses (dependent parameters) were WBC, OBC and bulk density. Sixteen experiments in triplicate (48 runs) were used for the two-factor-interaction (2FI) full factorial model. The statistical software package Design Expert 12.0 (Stat-Ease, Minneapolis, MN) was used for regression analysis of the data and estimation of the regression equation coefficients. Significant model terms for each response were identified by analysis of variance (ANOVA) and were used to determine the regression coefficients and the statistical significance of the model terms.
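As an illustration of the workflow just described, the sketch below first encodes the response calculations from the Methods as small helper functions and then builds a coded 2^4 full factorial design matrix and fits a two-factor-interaction (2FI) model by ordinary least squares. The response values in the example are hypothetical placeholders, the coefficients it prints are not those of this study, and numpy is used here only as a stand-in for the Design Expert analysis.

# Illustrative sketch of the response calculations and the 2FI factorial model.
# The numeric responses below are HYPOTHETICAL placeholders, not measured data;
# the actual analysis in this study was carried out with Design Expert 12.0.
import itertools
import numpy as np

# Response calculations as described in the Methods (per g dry matter, DM).
def binding_capacity(loaded_weight_g: float, dry_weight_g: float) -> float:
    """WBC or OBC in g/g: (weight after water/oil uptake - dry weight) / dry weight."""
    return (loaded_weight_g - dry_weight_g) / dry_weight_g

def bulk_density(sample_weight_g: float, settled_volume_ml: float) -> float:
    """Bulk density in g/ml: sample weight per settled (tapped) volume."""
    return sample_weight_g / settled_volume_ml

# Example with hypothetical weighings: 200 mg sample hydrating to 3.2 g -> WBC = 15 g/g.
print("example WBC from raw weights:", binding_capacity(loaded_weight_g=3.2, dry_weight_g=0.2))

# Coded 2^4 full factorial design: factors A-D at levels -1 / +1 (16 runs).
factor_names = ["A_slurry_conc", "B_milling_time", "C_agitation_speed", "D_particle_fraction"]
design = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))

def model_matrix(X: np.ndarray) -> np.ndarray:
    """Intercept, main effects and all two-factor interactions (2FI model)."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(4)]
    cols += [X[:, i] * X[:, j] for i in range(4) for j in range(i + 1, 4)]
    return np.column_stack(cols)

# Hypothetical WBC responses (g/g) for the 16 coded runs, placed within the reported range.
y_wbc = np.array([13.1, 12.8, 12.6, 12.4, 14.9, 13.5, 13.8, 13.0,
                  15.2, 14.7, 14.4, 14.0, 17.1, 15.9, 16.2, 15.1])

coef, *_ = np.linalg.lstsq(model_matrix(design), y_wbc, rcond=None)
labels = (["intercept"] + factor_names +
          [f"{factor_names[i]}*{factor_names[j]}" for i in range(4) for j in range(i + 1, 4)])
for name, b in zip(labels, coef):
    print(f"{name:>45s}: {b:+.3f}")

# Example query of the fitted model at the coded settings reported as optimal
# in the Conclusions (-A, +B, +C, +D).
x_best = model_matrix(np.array([[-1.0, 1.0, 1.0, 1.0]]))
print("predicted WBC at (-A, +B, +C, +D):", (x_best @ coef).item())

In a design-of-experiments package, the same coded matrix would be replicated for the triplicate runs and accompanied by an ANOVA table and diagnostics (adjusted and predicted R², adequate precision); the least-squares fit above only shows how the coded main effects and two-factor interactions enter the regression.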
Results and Discussion

Analysis of variance, fitting the model. In order to study the effects of the four independent variables, namely (A) 'slurry concentration', (B) 'milling time', (C) 'agitation speed' and (D) 'particle fraction', on the dependent variables WBC, OBC and bulk density, a two-level factorial design methodology was applied. The model F-values for all dependent variables indicate that the models are significant (p < 0.0001) under the designated conditions. For WBC, the factors B and C and the interactions AB, ABD and BCD had significant effects (p < 0.001). For OBC, the factors B, C and D and the interactions AB, AC and ABC had significant effects (p < 0.002). Although the main effect of A was not significant, it was also included to maintain hierarchical models. The factors with significant effects on the bulk density were A, B, C and D and the interactions AC, AD, BC, CD, ABC and ABCD (p < 0.0001). In all cases, the predicted R² was in reasonable agreement with the adjusted R², i.e. the difference was less than 0.2. The adequate precision statistic indicated an adequate signal-to-noise ratio in all cases.

Water-binding capacity. The WBC of the DFC ranged between 12.4 and 17.6 g/g, which is in accordance with data typically reported for fruit and vegetable fibers in the literature (Elleuch et al. 2011). The interaction effects considered significant by ANOVA were studied. The normal probability plot and the Pareto plot showed that the most significant factor with a positive effect was the 'particle fraction' (D), followed by the 'milling time' (B). All samples of the series showed higher WBC at the higher 'particle fraction' of agglomerates. The WBC refers to the amount of water that remains bound within the fiber structure after the application of an external force such as centrifugation. Gupta and Premavalli (2016) also reported that the WBC of vegetable fibers was higher at larger particle fractions, as the coarser particles are less firmly packed than finer ones. The 'agitation speed' (C), the interaction between 'concentration and milling time' (AB) and the interaction between 'concentration, time and particle fraction' (ABD) had significant negative effects on WBC. The interaction between 'concentration and time' (AB) showed significant effects on WBC when 'particle fraction' was at its high level, for both levels of 'agitation speed', as shown in Figure 2 a. An increase in the initial 'slurry concentration' increased the WBC of the DFC at low 'milling time', while an increase in 'slurry concentration' decreased the WBC at high 'milling time'.

Oil-binding capacity. The OBC of the samples tested ranged between 9.4 g/g and 15.0 g/g. The interaction effects considered significant (p < 0.001) by ANOVA were studied. The normal probability plot and the Pareto plot showed that the most significant factor with a negative effect was the 'agitation speed' (C), followed by the interaction between 'concentration and time' (AB) and the interaction between 'concentration and agitation speed' (AC). The higher these factors, the lower the OBC of the samples. In addition, 'particle fraction' had significant positive effects on OBC.
The oil-binding capacity is related to the quality of the surface and the density or thickness of the particles, so that particles with the greatest surface area theoretically present a greater capacity to adsorb and bind components of an oily nature (Lopez et al. 1998). The interaction between 'concentration, time and agitation speed' and the factor 'time' were important but less significant (> Bonferroni limit). The interaction between 'concentration and time' (AB) showed no significant effect when 'agitation speed' was at its low (-) level. When 'agitation speed' was at its high (+) level, a change in 'slurry concentration' affected the OBC at both high and low 'milling time'. An increase in 'slurry concentration' increased the OBC of the DFC at low 'milling time', while an increase in the initial 'slurry concentration' decreased the OBC at high 'milling time'. Figure 2 b illustrates the significance of the AB interaction; the deviation of the lines from parallel reflects the degree of interaction. In other words, the effect of one factor depends on the level of the other. The interaction between 'concentration and agitation speed' (AC) showed no significant effect when 'milling time' was at its low (-) level, for both levels of 'particle fraction'. When 'milling time' was at its high level, the OBC differed significantly at high 'slurry concentration' (Fig. 3 b).

Bulk density. The determined bulk density ranged from 0.07 to 0.14 g/ml. ANOVA revealed the most significant positive effects (p < 0.001) on bulk density for the factors 'slurry concentration' (A), 'agitation speed' (C), their interaction (AC) and 'particle fraction' (D). The effect of the AC interaction on bulk density is shown in Figure 3 a. The strongest significant negative effects on bulk density were found for the interaction BC and the factor 'milling time'. The BC interaction showed significant effects with increasing 'agitation speed' for both levels of 'milling time' and 'particle fraction'. Typically, a decrease in fiber 'particle fraction' is associated with an increase in bulk density (Sangnark and Noomhorm 2003). However, the method applied in these trials led to a different conclusion. Owing to the wet milling prior to grinding and fractionation, the agglomerates consist of tightly packed particles with irregular and frayed surfaces. As a result, even the larger fiber particles can pack together more densely.

Conclusions

The present study shows that several parameters of the mechanical pre-treatment affected the structural characteristics and physicochemical properties of fiber-rich by-products of white cabbage. A high level of 'particle fraction' (+D) of the agglomerates turned out to be one of the key factors for increasing WBC and OBC. The high bulk density at the high level of 'particle fraction' (+D) suggests that the agglomerates are made of tightly packed particles with irregular and frayed surfaces as a result of the wet milling process before drying. The bulk density was found to be inversely proportional to the OBC when the DFC were produced at high levels of 'agitation speed', 'milling time' and 'particle fraction'. The highest values for both WBC and OBC were measured with the following factor settings: low level of 'slurry concentration' (-A), high level of 'milling time' (+B), high level of 'agitation speed' (+C) and high level of 'particle fraction' (+D). The high water- and oil-binding capacities open up the possibility of using the fibers as ingredients in food products.
DFC with high OBC can provide stabilization of high-fat food products and emulsions. Dietary fibers with high WBC can be applied as functional ingredients to avoid syneresis and to modify the viscosity and texture of some formulated foods (Elleuch et al. 2011). Identifying the relations between processing and functionality is necessary in order to understand the processing characteristics of the material for the manufacture of fiber-enriched food products.