content
stringlengths 228
999k
| pred_label
stringclasses 1
value | pred_score
float64 0.5
1
|
---|---|---|
SAT Math : How to find the solution to an inequality with addition
Study concepts, example questions & explanations for SAT Math
varsity tutors app store varsity tutors android store varsity tutors amazon store varsity tutors ibooks store
Example Questions
← Previous 1
Example Question #1 : Inequalities
What values of x make the following statement true?
|x – 3| < 9
Possible Answers:
x < 12
–6 < x < 12
6 < x < 12
–12 < x < 6
–3 < x < 9
Correct answer:
–6 < x < 12
Explanation:
Solve the inequality by adding 3 to both sides to get x < 12. Since it is absolute value, x – 3 > –9 must also be solved by adding 3 to both sides so: x > –6 so combined.
Example Question #19 : Inequalities
If –1 < w < 1, all of the following must also be greater than –1 and less than 1 EXCEPT for which choice?
Possible Answers:
3w/2
|w|0.5
|w|
w/2
w2
Correct answer:
3w/2
Explanation:
3w/2 will become greater than 1 as soon as w is greater than two thirds. It will likewise become less than –1 as soon as w is less than negative two thirds. All the other options always return values between –1 and 1.
Example Question #5 : How To Find The Solution To An Inequality With Addition
Solve for .
Possible Answers:
Correct answer:
Explanation:
Absolute value problems always have two sides: one positive and one negative.
First, take the problem as is and drop the absolute value signs for the positive side: z – 3 ≥ 5. When the original inequality is multiplied by –1 we get z – 3 ≤ –5.
Solve each inequality separately to get z ≤ –2 or z ≥ 8 (the inequality sign flips when multiplying or dividing by a negative number).
We can verify the solution by substituting in 0 for z to see if we get a true or false statement. Since –3 ≥ 5 is always false we know we want the two outside inequalities, rather than their intersection.
Example Question #6 : How To Find The Solution To An Inequality With Addition
What values of make the statement true?
Possible Answers:
Correct answer:
Explanation:
First, solve the inequality :
Since we are dealing with absolute value, must also be true; therefore:
Example Question #1 : How To Find The Solution To An Inequality With Addition
Solve:
Possible Answers:
Correct answer:
Explanation:
To solve , isolate .
Divide by three on both sides.
Example Question #2 : How To Find The Solution To An Inequality With Addition
Solve for .
Possible Answers:
Correct answer:
Explanation:
We want to isolate the variable on one side and numbers on another side. Treat like a normal equation.
Subtract on both sides.
Divide on both sides.
Example Question #3 : How To Find The Solution To An Inequality With Addition
Solve for .
Possible Answers:
Correct answer:
Explanation:
We want to isolate the variable on one side and numbers on another side. Treat like a normal equation.
Subtract on both sides.
Divide on both sides. Remember to flip the sign.
Example Question #2 : How To Find The Solution To An Inequality With Addition
Solve for .
Possible Answers:
Correct answer:
Explanation:
We want to isolate the variable on one side and numbers on another side. Treat like a normal equation.
Subtract on both sides.
Example Question #5 : How To Find The Solution To An Inequality With Addition
Solve for .
Possible Answers:
Correct answer:
Explanation:
We want to isolate the variable on one side and numbers on another side. Treat like a normal equation.
We need to set-up two equations since its absolute value.
Subtract on both sides.
Divide on both sides which flips the sign.
Subtract on both sides.
Since we have the 's being either greater than or less than the values, we can combine them to get .
Example Question #6 : How To Find The Solution To An Inequality With Addition
Solve for .
Possible Answers:
Correct answer:
Explanation:
We want to isolate the variable on one side and numbers on another side. Treat like a normal equation.
We need to set-up two equations since it's absolute value.
Subtract on both sides.
Distribute the negative sign to each term in the parenthesis.
Add and subtract on both sides.
Divide on both sides.
We must check each answer. Let's try .
This is true therefore is a correct answer. Let's next try .
This is not true therefore is not correct.
Final answer is just .
← Previous 1
Learning Tools by Varsity Tutors
|
__label__pos
| 0.992554 |
How Can i embed my skecthfab model view in gmail or another e-mail?
email
(Udomania) #1
Hello everyone i have a skecthfab model and i want to share in gmail to my contacts, i have copied the code (iframe, bbcode) and paste in my compose gmail or hotmail and nothing hapenned. Thank you so much for your help. Greetings from Machu Picchu - Peru
ddd
(Shaderbytes) #2
i dont think you can embed the viewer itself in an email, you can send a url to the viewer page though
(Bart) #3
Correct - no email program supports embeddable media. Same as YouTube etc. The best solution is to embed a picture of the player with the blue ‘Play’ button on it to invite people to click on it, then link that image to the page that contains the Sketchfab model (and perhaps set that to auto-play).
|
__label__pos
| 0.698611 |
5
$\begingroup$
This is a PDE taken from a Maple document. Mathematica DSolve currently unable to solve it.
I wanted to verify Maple solution using NDSolve. This is string of length 1, fixed on the left, and free to move on the right. Given an initial position and let go.
Here is the specs of the PDE
Solve for $0<x<1, t>0$ the wave PDE $$ -u_{tt} + u(x,t)= u_{xx} + 2 e^{-t} \left( x - \frac{1}{2} x^2 + \frac{1}{2} t - 1 \right) $$
With boundary condition
\begin{align*} u(0,t) &= 0 \\ \frac{\partial u(1,t)}{\partial x} &= 0 \end{align*}
And initial conditions
\begin{align*} u(x,0) &= x^2-2 x \\ u(x,1)&= u\left(x,\frac{1}{2}\right) + e^{-1} \left( \frac{1}{2} x^2-x\right) - \left( \frac{3}{4} x^2- \frac{3}{2}x \right) e^{\frac{-1}{2}} \end{align*}
The tricky part in this, is that no initial velocity is given. But only initial position at $t=0$, and then a relation on the solution at 2 different times is give instead.
NDSolve complain with that dreaded error
Boundary condition is not specified on a single edge of the boundary of the computational domain.
And I do not know how to get rid of it. Here is the code
ClearAll[u, x, t];
pde = -D[u[x, t], {t, 2}] + u[x, t] ==
D[u[x, t], {x, 2}] + 2*Exp[-t]*(x - (1/2)*x^2 + (1/2)*t - 1);
bc = {u[0, t] == 0, Derivative[1, 0][u][1, t] == 0};
ic = {u[x, 0] == x^2 - 2*x,
u[x, 1] == u[x, 1/2] + ((1/2)*x^2 - x)*Exp[-1] - ((3*x^2)/4 - (3/2)*x)* Exp[-2^(-1)]};
sol = NDSolve[{pde, ic, bc}, u, {x, 0, 1}, {t, 0, 1}]
Here is the Maple code and the analytical solution it gives
pde := -diff(u(x, t), t, t) + u(x, t) =
diff(u(x, t), x, x)+ 2*exp(-t)*(x-(1/2)*x^2+(1/2)*t-1);
ic := u(x, 0) = x^2-2*x,
u(x, 1) = u(x, 1/2)+((1/2)*x^2-x)*exp(-1)-(3/4*(x^2)-3/2*x)*exp(-1/2);
bc := u(0, t) = 0, eval(diff(u(x, t), x), {x = 1}) = 0;
pdsolve([pde, ic, bc],u(x,t))
$$ u(x,t) = -\frac{e^{-t}}{2} (x^2-2 x) (t-2) $$
Here is animation of Maple solution, which I wanted to verify
mapleSol[x_, t_] := -(Exp[-t]/2) (x^2 - 2 x) (t - 2)
Manipulate[
Plot[mapleSol[x, t], {x, 0, 1}, PlotRange -> {{0, 1}, {-1, .1}}],
{{t, 0, "time"}, 0, 10, .1}
]
enter image description here
Any suggestion how to get rid of the error from NDSolve?
Using V 12 on windows 10. ps. I solved this by hand also, but can't get Maple solution, and my solution looks wrong. I still need to find out why.
$\endgroup$
• 1
$\begingroup$ What Maple delivers here is simply excellent! $\endgroup$ – rmw Jul 23 at 10:08
• $\begingroup$ @rmw yes. Nice solution. I struggled to find out how to get same solution analytically for few hrs and could not. $\endgroup$ – Nasser Jul 23 at 10:14
5
$\begingroup$
I don't have time to make a solid answer, but this seems to work :
pde = -D[u[x, t], {t, 2}] + u[x, t] ==
D[u[x, t], {x, 2}] + 2*Exp[-t]*(x - (1/2)*x^2 + (1/2)*t - 1) +
NeumannValue[0, x == 1];
bc = {u[0, t] == 0};
ic = {u[x, 0] == x^2 - 2*x
, PeriodicBoundaryCondition[
u[x, t] - (((1/2)*x^2 - x)*Exp[-1] - ((3*x^2)/4 - (3/2)*x)*
Exp[-2^(-1)])
, t == 1 && 0 < x < 1
, Function[xy, xy - {0, 1/2}]]};
U = NDSolveValue[{pde, ic, bc}, u, {x, 0, 1}, {t, 0, 1}];
Plot3D[U[x, t], {x, 0, 1}, {t, 0, 1}, AxesLabel -> {x, t, u}]
enter image description here
Plot[{
U[x, 0]
, U[x, 1/2]
, U[x, 1]
, U[x, 1/2] + ((1/2)*x^2 - x)*Exp[-1] - ((3*x^2)/4 - (3/2)*x)*
Exp[-2^(-1)]}, {x, 0, 1},
PlotStyle -> {Red, Green, Directive[Blue, AbsoluteThickness[7]],
Directive[Black, Dashed, AbsoluteThickness[3]]},
PlotLegends -> "Expressions"]
enter image description here
Here is the error :
Plot3D[Evaluate[-D[U[x, t], {t, 2}] +
U[x, t] - (D[U[x, t], {x, 2}] +
2*Exp[-t]*(x - (1/2)*x^2 + (1/2)*t - 1))], {t, 0, 1}, {x, 0, 1}]
enter image description here
The method automaticaly chosen by NDSolve is Method -> {"PDEDiscretization" -> {"FiniteElement"}} (as opposed to Method] -> {"PDEDiscretization" -> {"MethodOfLines", "SpatialDiscretization" -> {"FiniteElement", femopts}}}) . This is the reason why one can impose boundaries condition on the variable "time".
Note also that the term "PeriodicBoundaryCondition" is a little bit misleading because the source of the "boundary condition" does not need to be a boundary.
$\endgroup$
• 1
$\begingroup$ What a tricky solution! Thanks. That means in a rectangular region it is possible to define coupled boundary conditions which aren't periodic at all. For example PeriodicBoundaryCondition[u[x, t] - f[x,t] , t == 1 && 0 < x < 1 , Function[xy, xy - {x0, t0}]] is somthing like u[x,1]+f[x,1]==u[x-x0,1-t0]+f[ x-x0,t-t0]??? $\endgroup$ – Ulrich Neumann Jul 23 at 8:48
• $\begingroup$ Nice answer (+1). @UlrichNeumann, you should have a look at the ref page of PeriodicBoundaryCondition That explains a bit better what PBC does and has examples. $\endgroup$ – user21 Jul 23 at 11:15
• $\begingroup$ @user21 Thank you for your hint, I tried to understand the documentation. Do you think my conclusion (see comment) is ok? $\endgroup$ – Ulrich Neumann Jul 23 at 11:25
• $\begingroup$ @UlrichNeumann, I think that is correct but I hesitate a bit because I have not tried it. $\endgroup$ – user21 Jul 23 at 12:07
• $\begingroup$ @user21 Thanks, I also try to build an example... $\endgroup$ – Ulrich Neumann Jul 23 at 12:10
1
$\begingroup$
Just an extended comment:
If you change the second bc to NeumannValue Mathematica is able to solve the modified initial value problem u[x, 1] ==(* u[x,1/2]+*) ((1/2)*x^2 - x)*Exp[-1] - ((3*x^2)/4 - (3/2)*x)*Exp[-2^(-1)]
pde = -D[u[x, t], {t, 2}] + u[x, t] ==D[u[x, t], {x, 2}] + 2*Exp[-t]*(x - (1/2)*x^2 +(1/2)*t - 1) +NeumannValue[0, x == 1];
bc = {u[0, t] == 0};
ic = {u[x, 0] == x^2 - 2*x,
u[x, 1] ==(* u[x,1/2]+*) ((1/2)*x^2 - x)*Exp[-1] - ((3*x^2)/4 - (3/2)*x)*Exp[-2^(-1)]};
U = NDSolveValue[{pde, ic, bc}, u, {x, 0, 1}, {t, 0, 1} ];
Plot3D[U[x, t], {x, 0, 1}, {t, 0, 1}, AxesLabel -> {x, t, u}]
enter image description here
The coupling u[x,1],u[x,1/2] still remains unsolved!
$\endgroup$
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.989799 |
Licence CC 0
Les énumérations
Jusqu’à présent, nous avons toujours employé le préprocesseur pour définir des constantes au sein de nos codes. Toutefois, une solution un peu plus commode existe pour les constantes entières : les énumérations.
Définition
Une énumération se définit à l’aide du mot-clé enum suivi du nom de l’énumération et de ses membres.
enum naturel { ZERO, UN, DEUX, TROIS, QUATRE, CINQ };
La particularité de cette définition est qu’elle crée en vérité deux choses : un type dit « énuméré » enum naturel et des constantes dites « énumérées » ZERO, UN, DEUX, etc. Le type énuméré ainsi produit peut être utilisé de la même manière que n’importe quel autre type. Quant aux constantes énumérées, il s’agit de constantes entières.
Certes me direz-vous, mais que valent ces constantes ? Eh bien, à défaut de préciser leur valeur, chaque constante énumérée se voit attribuer la valeur de celle qui la précède augmentée de un, sachant que la première constante est mise à zéro. Dans notre cas donc, la constante ZERO vaut zéro, la constante UN un et ainsi de suite jusque cinq.
L’exemple suivant illustre ce qui vient d’être dit.
#include <stdio.h>
enum naturel { ZERO, UN, DEUX, TROIS, QUATRE, CINQ };
int main(void)
{
enum naturel n = ZERO;
printf("n = %d.\n", (int)n);
printf("UN = %d.\n", UN);
return 0;
}
Résultat
n = 0.
UN = 1.
Notez qu’il n’est pas obligatoire de préciser un nom lors de la définition d’une énumération. Dans un tel cas, seules les constantes énumérées sont produites.
enum { ZERO, UN, DEUX, TROIS, QUATRE, CINQ };
Toutefois, il est possible de préciser la valeur de certaines constantes (voire de toutes les constantes) à l’aide d’une affectation.
enum naturel { DIX = 10, ONZE, DOUZE, TREIZE, QUATORZE, QUINZE };
Dans un tel cas, la règle habituelle s’applique : les constantes sans valeur se voient attribuer celle de la constante précédente augmentée de un et celle dont la valeur est spécifiée sont initialisées avec celle-ci. Dans le cas ci-dessus, la constante DIX vaut donc dix, la constante ONZE onze et ainsi de suite jusque quinze. Notez que le code ci-dessous est parfaitement équivalent.
enum naturel { DIX = 10, ONZE = 11, DOUZE = 12, TREIZE = 13, QUATORZE = 14, QUINZE = 15 };
Types entiers sous-jacents
Vous aurez sans doute remarqué que, dans notre exemple, nous avons converti la variable n vers le type int. Cela tient au fait qu’un type énuméré est un type entier (ce qui est logique puisqu’il est censé stocker des constantes entières), mais que le type sous-jacent n’est pas déterminé (cela peut donc être _Bool, char, short, int ,long ou long long) et dépend entre autres des valeurs devant être contenues. Ainsi, une conversion s’impose afin de pouvoir utiliser un format d’affichage correct.
Pour ce qui est des constantes énumérées, c’est plus simple : elles sont toujours de type int.
Utilisation
Dans la pratique, les énumérations servent essentiellement à fournir des informations supplémentaires via le typage, par exemple pour les retours d’erreurs. En effet, le plus souvent, les fonctions retournent un entier pour préciser si leur exécution s’est bien déroulée. Toutefois, indiquer un retour de type int ne fournit pas énormément d’information. Un type énuméré prend alors tout son sens.
La fonction vider_tampon() du dernier TP s’y prêterait par exemple bien.
enum erreur { E_OK, E_ERR };
static enum erreur vider_tampon(FILE *fp)
{
int c;
do
c = fgetc(fp);
while (c != '\n' && c != EOF);
return ferror(fp) ? E_ERR : E_OK;
}
De cette manière, il est plus clair à la lecture que la fonction retourne le statut de son exécution.
Dans la même idée, il est possible d’utiliser un type énuméré pour la fonction statut_jeu() (également employée dans la correction du dernier TP) afin de décrire plus amplement son type de retour.
enum statut { STATUT_OK, STATUT_GAGNE, STATUT_EGALITE };
static enum statut statut_jeu(struct position *pos, char jeton)
{
if (grille_complete())
return STATUT_EGALITE;
else if (calcule_nb_jetons_depuis(pos, jeton) >= 4)
return STATUT_GAGNE;
return STATUT_OK;
}
Dans un autre registre, un type enuméré peut être utilisé pour contenir des drapeaux. Par exemple, la fonction traitement() présentée dans le chapitre relatif aux opérateurs de manipulation des bits peut être réecrite comme suit.
enum drapeau {
PAIR = 0x01,
PUISSANCE = 0x02,
PREMIER = 0x04
};
void traitement(int nombre, enum drapeau drapeaux)
{
if (drapeaux & PAIR) /* Si le nombre est pair */
{
/* ... */
}
if (drapeaux & PUISSANCE) /* Si le nombre est une puissance de deux */
{
/* ... */
}
if (drapeaux & PREMIER) /* Si le nombre est premier */
{
/* ... */
}
}
En résumé
1. Sauf si le nom de l’énumération n’est pas renseigné, une définition d’énumération crée un type énuméré et des constantes énumérées ;
2. Sauf si une valeur leur est attribuée, la valeur de chaque constante énumérée est celle de la précédente augmentée de un et celle de la première est zéro.
3. Le type entier sous-jacent à un type énuméré est indéterminé ; les constantes énumérées sont de type int.
|
__label__pos
| 0.716991 |
chapter6questions - Prosser Career Academy, CA
tangibleassistantΛογισμικό & κατασκευή λογ/κού
3 Δεκ 2013 (πριν από 3 χρόνια και 4 μήνες)
68 εμφανίσεις
Name: ___________________
_____________________
January 30
, 2012
Chapter 6: Game Systems, Personal Computers, and Hardware
Yesterday:
1.
Go get a book! Read Chapter 6
pages 145 to 170.
2.
Define the following vocabulary words. Open WORD and find the
definitions with and example
(yes, you can use a picture)
Dedicated
Barrier to Entry
PlayStation3 (PS3)
CELL processor architecture
Virtual light source or lamp
Graphics
Blu
-
Ray
Nanometers
Market share
Porting
Xbox 360
Red Ring of Death
Laser burn
Wii
Blue
tooth
Niche market
Motion
-
based controller
PC gaming
Flash memory
3.
Research
the
current sales
of the
Xbox 360, PS3,
and
Wii.
Enter these data into a Microsoft
Excel spreadsheet. Create a
bar graph
to display the units sold. Create a
pie graph
to show the
total percentages of market share.
Do your research!
Create your excel spreadsheet.
Create your bar graph.
Create your pie graph.
Put your names in your graphs.
Print and turn in your work for credit.
January 31, 2012
For Today:
1.
Finish
yesterday’s worksheet! You have a copy of the worksheet on the other side of this
sheet.
2.
Go get a book! Chapter 6
Answer the following questions:
1.
List the three major video game consoles and their manufacturer.
2.
What is a
dedicated video game con
sole?
3.
What is the largest barrier to entry for new game systems?
Use the table from Figure 6
-
2
(page 147)
to answer questions 4 through 10.
4.
As of August 2007, what was the total market for all generation 7 video game
systems?
5.
What is
the market share of each generation 7 video game console? Provide the
figure for August 2007 and June 2009. Market share is the number of units sold by one
company divided by the total units sold multiplied by 100 to show a percentage.
A.
PlayStation 3
B.
Xbox 360
C.
Wii
6.
As of June 2009, which was the only game system without a motion
-
based controller?
7.
As of June 2009, which system did not offer online multiplayer gaming?
8.
Which system had the lowest cost at the time of its release?
9.
As of June 2009, which system had the most RAM?
10.
As of June 2009, which system uses a standard DVD player to read games?
11.
What do CPU and GPU stand for?
12.
What are FLOPS?
13.
What is the USP for the PlayStation 3?
14.
What are the two
biggest loads on the CPU?
15.
What GPU function requires a virtual light source?
16.
What two functions of 3D graphics cards resize objects as they move and change
object colors with changes in perspective?
17.
What is the process used to make a ga
me playable on other systems?
18.
What is the downside to making a game that takes full advantage of the computing
power of the PS3?
19.
What is the USP for the Xbox 360?
20.
What was the biggest drawback to the early Xbox 360 video game co
nsoles?
|
__label__pos
| 0.999929 |
How Do I See App Time on Android?
Android, Android Apps
If you’re an Android user, you might be wondering how to see the time you’ve spent on various apps. It can be helpful to monitor your app usage, especially if you want to limit your screen time or be mindful of how much time you spend on social media or other apps. In this tutorial, we’ll show you how to see app time on Android using different methods.
Method 1: Digital Wellbeing
If your Android device has Digital Wellbeing features, it’s easy to check your app usage. Here’s how:
• Go to “Settings” on your device.
• Scroll down and tap “Digital Wellbeing & parental controls.”
• Select “Dashboard.”
• You’ll see a list of apps and the amount of time you’ve spent on them.
You can also set timers for specific apps or schedule “wind down” times that reduce your screen time before bed.
Method 2: Third-Party Apps
If your device doesn’t have Digital Wellbeing features, don’t worry! There are plenty of third-party apps that can help you monitor your app usage. Here are a few popular options:
• Moment: This app tracks how much you use your phone and specific apps, and gives you reminders to take breaks.
• App Usage: This app shows detailed statistics about how much time you spend on each app per day, week, or month.
• YourHour: This app tracks phone usage and offers various tools like setting daily limits and blocking distracting apps during specific times.
Tips for Reducing Screen Time
Now that you know how to see app time on Android, you might be wondering how to reduce your screen time. Here are a few tips:
• Take breaks: Set reminders to take breaks throughout the day. Stand up, stretch, and give your eyes a rest.
• Create boundaries: Set limits for how long you can use specific apps or use your phone in general.
• Disconnect: Consider turning off notifications or putting your phone in another room during certain times of the day (like dinner or bedtime).
Conclusion
Monitoring your app usage can be helpful for reducing screen time and being more mindful about how you use your phone. Whether you use Digital Wellbeing features or third-party apps, there are plenty of options for tracking your app time on Android. Try out different methods and see what works best for you!
|
__label__pos
| 0.539081 |
Get Free SSL Certicate For Your Website
Free SSL Certificate
Get a 100% free SSL/TLS certificate issued by Global Certificate Authority (CA) – Let’s Encrypt.
Please Enter Your Domain(s) Name.
Please Enter A Valid Email Address.
You Must Accept Let’s Encrypt SA Before Requesting To Create A Free SSL/TLS Certificate.
Don’t know how to generate an SSL certificate? Just follow this easy guide on how to get free SSL certificate for my website.
What is an SSL Certificate? And why does my website needs one?
An SSL certificate is a digital document that proves the ownership of a website. When some visit and interact with your website, it’s SSL that makes the connection secure. An SSL certificate verifies that website content is coming from a verified web server. SSL is a secure socket layer. It provides end-to-end encryption between a web server and a web browser.
what is an ssl certificate?
Your website needs an SSL certificate to provide a highly secure communication channel to the website visitors. You need an SSL certificate to move your website from HTTP to HTTPS. It ensures all requests to a web server and responses from the web server are encrypted and secure. It helps your website visitors to trust your website and provide their personal information without any risk of data theft. It enhances security features like secure logins, secure payments, and other features.
why does my website needs an ssl certificate?
Why do I need to verify my domain to get a free SSL certificate?
Let’s Encrypt will issue you the free SSL certificate. It is a global certificate authority(CA) that uses ACME protocol to generate an SSL certificate. ACME is an Automated Certificate Management Environment protocol. Here, we provide you a web-based ACME client to verify your domain ownership. So, verify your domain ownership using the HTTP or DNS method and get an SSL certificate at zero cost.
let's encrypt needs domain validation to generte a free ssl for your website/domain.
What is the domain verification method?
The domain verification method is a process of verifying ownership of a domain. Let’s Encrypt use ACME protocol to generate a free SSL certificate for your website. It helps us to validate that you are the rightful owner of a domain. Hence, you can get a free SSL certificate for your domain(website). We use two types of verification method:
How to verify my domain using HTTP?
HTTP verification method is a simple and recommended method to verify your domain(website) ownership. This method checks the verification file(s) on your website using the HTTP protocol. You have to follow these simple steps to get your domain verified:
1. Fill in the SSL Request form and select http as verification method. select http as verification method
2. Download verification file(s). download http verification file(s)
3. Create a directory .well-known inside root directory. A root directory is the main directory(folder) where your website files are stored.
4. Then create a subdirectory acme-challenge inside .well-known directory.
5. Upload the downloaded verification files(s) into the subdirectory acme-challenge
6. After uploading click the verify option to verify you did it all right. If it shows verified then you are ready to generate an SSL certificate, else you need to check what you did in steps 2 to 4.
How to verfiy my domain using DNS?
DNS verification method is a comparatively complex method to verify your domain(website) ownership. This method checks the DNS TXT records of your domain. DNS verification method is recommended for a bit advanced users. You have to follow these steps to get your domain verified:
• Fill in the SSL Request form and select DNS as the verification method. select http as verification method
• Go to the DNS management console of your domain.
• Create a new TXT record in your domain DNS records.
• Copy the TXT Record (e.g. _acme-challenge.domain) from our website. Then paste it in the Name/Host/Alias field of the new TXT record entry of your DNS Manager. Please avoid the trailing white spaces. copy and paste dns txt record and values
• Now Copy the Value from our website and paste it into the value field of the new TXT record entry of your DNS Manager. Please avoid the trailing white spaces.
• Set the TTL to 60 seconds or 1 Minute or the least time your DNS manager allows.
• Wait for at least 5-10 minutes before requesting to generate a free SSL certificate. It allows the DNS server to propagate the new TXT record. The time taken to propagate DNS records depends on your DNS server.
Why I’m getting an error while verifying my domain?
There are two domain verification methods you can choose for verifying your domain.
1. Let us first talk about the HTTP verification method. Found error? The reason must be the downloaded verification file is not uploaded to the right location(http://yourdomain/.well-known/acme-challenge/). Please follow the above instructions under the section: How to verify my domain using HTTP? verification failed, please retry after uploading http verification files.
2. Now, let’s talk about the DNS verification method. Got error? The reason must be the required DNS TXT record is unavailable. You should check the DNS management console of your domain for the TXT record. It should have the TXT Record name and value generated on the domain validation page. Also, please wait for some time to let your DNS record propagate. Please follow the above instructions under the section: How to verify my domain using DNS? Finding DNS method very complicated? Then we recommend you should use the HTTP verification method. error while verifying domain
How to install the SSL certificate on my website?
After generating a free SSL certificate the next challenge is to install it on your website. First, let’s understand your SSL certificate. It contains three parts:
1. Certificate (CRT)
2. Intermediate Certificate (CA_BUNDLE CRT)
3. Private Key (KEY)
SSL certificate is generated successfully.
These 3 component forms an SSL certificate for your domain. Now you have to install it according to your web hosting provider. Following are the links to explain how to install it on your specific web hosting:
What is the validity period of my free SSL certificate?
The validity period of a free SSL certificate is limited to 90 days. You can generate a new certificate any time before the expiration of your current certificate. So, it means you can have an SSL certificate free for a lifetime.
You don’t have to pay a single penny to get your domain a valid SSL certificate. Your free SSL certificate comes with a total validity of 90 days.
Who is issuing this free SSL certificate for my domain?
The credit goes to Let’ Encrypt. It is a Certificate Authority run by the Internet Security Research Group (ISRG). Let’s Encrypt issues free SSL/TLS certificate. To get more knowledge about it, please visit this page.Your free forever SSL certificate is generated by Let's Encrypt
I still have a question regarding how to get a free SSL certificate for my website?
Still finding answer to your quetions?
Do you have any other questions related to our free SSL certificate generator? Please click the following button to submit your query.
Ask your question here
|
__label__pos
| 0.997146 |
Rich-text Reply
Is it possible to only show one variant to a user when running multiple A/B-tests?
engelhardt 12-02-14
Is it possible to only show one variant to a user when running multiple A/B-tests?
Our conversion-process (as probably many others) consists of multiple pages. We run multiple A/B-tests at a time, but only one test per page.
Lets say we have one test for each page in our conversion process.
What i want to do is to exclude one user which ran into the variant of page 1 from every other test on the following pages, since this would corrupt the results.
Or in general: I want exclude one user from the variants of all following A/B-Tests after he ran into the variant of one test.
Is this possible? And if yes, does it work properly? how can i do this?
Thanks
Daniel
0 Likes
0 Likes
JDahlinANF 12-02-14
Re: Is it possible to only show one variant to a user when running multiple A/B-tests?
[ Edited ]
The easiest way to handle this, IMO, is to set up one experiment where each variation only targets one page of the funnel.
For example, if you have 3 steps in your checkout and want to run an A/B on each page:
Variation A targets Shipping Info
Variation B targets Payment Info
Variation C targets Order Review
To achieve this you would set URL targeting to include all 3 URLs and include code in the varitions that limit which URL the code affects.
e.g.
if (window.location.pathname.indexOf('shipping') > 0) {
//Variation A's changes go here
}
This easily ensures that users are only in one version of the overall experiment.
Unfortunately, this approach means that you would not be able to use mere participation in the experiment for your funnel metrics (Users in the B and C variations are in the experiment even though they did not actually reach their page of the experiment).
A slightly more complicated way to go would be to set up the overall experiment as 3 separate experiments (let's call them "shipping", "payment', and 'review") and use conditional javascript in each experiment that excludes the other two experiments' "B" variations (using the IDs of the variations from the diagnostic editor). Like this:
First - set up the experiments:
Experiment Shipping with variations A and B
Experiment Payment with variations A and B
Experiment Review with variations A and B
Second - use the diagnostics report to find the ID numbers of each variation:
Experiment Shipping with variations A (111111) and B (222222)
Experiment Payment with variations A (333333) and B (444444)
Experiment Review with variations A (555555) and B (666666)
Third - add conditional javascript to each experiment that excludes the "B" versions of the other experiments:
For Experiment Shipping, it could look like this (the function at the top checks to see which variations a user is in, the second line from the bottom is where you place the IDs you want to check against). In this case we want to exclude users who are in either variation, so where the function returns a "true", we want the audience targeting to return "false" (to exclude them):
function checkVariation(variantID) {
var OptSegment = (function(){
var name = "optimizelyBuckets";
var ID = variantID;
var cookie = document.cookie.match(name+"=([^;]*)") || "";
if (cookie != null) {
var decodedCookie = decodeURIComponent(cookie[0]);
var value = decodedCookie.match('"([^,]*)":"'+ID+'"');
return value;
} else {
return null;
}
})();
var optlyVal = false;
if (OptSegment != null) {
optlyVal = true;
}
return optlyVal;
}
(function(){
return !checkVariation('444444') || !checkVariation('666666');
})();
The tricky part here will be getting enough sample size for your 3rd experiment...
Using a 50-50 split on each experiment would mean that only 25% of the users who hit Payment would be placed in the Payment-B varitation and only 12.5% of users who reach the review page would be in the Review-B variation.
0 Likes
0 Likes
nolanmargo 12-02-14
Re: Is it possible to only show one variant to a user when running multiple A/B-tests?
[ Edited ]
Before heading down the elegant route suggested by nap0leon, I'd suggest familiarizing yourself with this awesome article from Optimizely, which discusses the Simultaneous Testing conundrum, especially as it relates to introducing "noise" into an experiment.
https://help.optimizely.com/hc/en-us/articles/200064329-Simultaneous-Testing-Running-two-different-t...
engelhardt 12-04-14
Re: Is it possible to only show one variant to a user when running multiple A/B-tests?
Thanks a lot for this detailed answer! Will give it a try
0 Likes
0 Likes
|
__label__pos
| 0.713757 |
Why do students need help with a python assignment
added by DotNetKicks
1/31/2022 5:46:13 PM
221 Views
Here's the fundamental reasons why students have started leaning toward learning Python in college. - Article by Kunal Chowdhury on Although being released in 1991, only recently has Python become a leading programming language, and that's for a reason. Compared to other programming languages, Python is straightforward, and it doesn't require solid knowledge in programming to learn it.
0 comments
|
__label__pos
| 0.986873 |
4
votes
1answer
81 views
How to know what to grep for in dmesg?
I recently had some trouble with a wireless card, and an online forum suggested searching dmesg with grep firmware. That helped, I was able to find the problem immediately! However, the previous hour ...
2
votes
2answers
103 views
How can I find the text show in the screen when linux boot? That's not the same as dmesg shows
When gnu/linux boot message shows, that's not the same as dmesg shows. How can I find it back?
7
votes
5answers
3k views
How can I see dmesg output as it changes?
I'm writing a device driver that prints error message into ring buffer dmesg output. I want to see the output of dmesg as it changes. How can I do this?
2
votes
1answer
2k views
How can dmesg content be logged into a file?
I'm running a Linux OS that was built from scratch. I'd like to save the kernel message buffer (dmesg) to a file that will remain persistent between reboots. I've tried running syslogd but it just ...
3
votes
1answer
142 views
What do the fields in the libata device probe line in dmesg mean?
When the kernel boots, it prints out lines like this for each SATA device: [ 0.919450] ata2.00: ATA-8: ST2000DM001-1CH164, CC24, max UDMA/133 [ 0.919487] ata2.00: 3907029168 sectors, multi 16: ...
2
votes
2answers
129 views
How to test whether Linux is running on a ThinkPad?
I need to programmatically detect if Linux is running on a ThinkPad. A shell script would be ideal but I can programmatically generate any binary by downloading some source and compiling it on the ...
1
vote
1answer
574 views
How to avoid overflowing the kernel printk ring buffer?
I'm trying to debug a linux driver and a particular piece of code is behaving very strangely. In order to see what's going on I've filled the code with printk statements so I can see exactly what the ...
1
vote
1answer
2k views
Which drive had a “journal commit I/O error”?
I received a message: kernel:[123456.789012] journal commit I/O error Which disk drive had the journal error?
7
votes
3answers
3k views
How can I write to dmesg from command line?
I'd like to write a statement to dmesg. [How] can I do this?
|
__label__pos
| 0.997169 |
Home > Recover Raw Drive
3 Ways to Format RAW Drive to NTFS without Losing Data (2023)
Updated on Monday, July 10, 2023
iBoysoft author Jenny Zeng
Written by
Jenny Zeng
Professional tech editor
Approved by
Jessica Shee
English Français Deutsch
How to Format RAW Drive to NTFS without Losing Data?
Summary: This post tells you how to format RAW hard disk without losing valuable data and what to do when you are unable to format external hard drive from RAW to NTFS. You're recommended to download iBoysoft to recover files from the RAW drive first.
Hard drive showing as RAW in Disk Management
I found my hard drive showing as RAW in Disk Management and I cannot open it anymore. Is there a way to get my data back or format the RAW drive to NTFS file system without losing data?
Like this user, if you also see your disk showing as RAW in Disk Management, you've come to the right place. Formating the RAW to NTFS is the most effective way to fix the RAW drive on Windows.
But it should be done with caution as you will lose all your files on the RAW disk by doing so. Here, we will tell you how to format RAW hard drive on Windows without losing data.
Guide on how to format RAW hard drive on Windows:
What is RAW drive and why does it happen?
RAW drive refers to drives with the RAW file system. As the name implies, the RAW file system indicates the state that a disk has no or unknown file system.
Your portable storage device, such as USB flash drives, SD cards, SSDs, HDDs, etc., will turn to RAW and stops you from accessing data on it when one of the following happens:
• Missing or corrupted file system
• Sudden power outage
• Unsupported file system
• System crash
• Denied access to the drive
• Bad blocks
• Virus infection
• Force ejection
• Drive not being formatted or unsuccessfully formatted
Now that you have a rough idea of the culprit behind your RAW disk, you can convert RAW hard drive to a Windows-supported format such as NTFS, FAT32, REFS file systems, etc., with the instructions below.
How to format RAW drive to NTFS without losing data?
If you have a backup of the data on the RAW Disk or don't have essential files stored on it, you can erase your drive without second thoughts. Otherwise, you need to restore data from it before erasing the drive.
That being said, you will need a high-quality RAW data recovery tool that's safe and effective to help you recover files on the drive beforehand. It will be best if the data recovery tool can also allow you to repair the RAW Drive on Windows 10 and other Windows operating systems.
• Recover data before formatting RAW hard drive to NTFS
• Format RAW drive using File Explorer
• Format RAW drive using Disk Management
Recover data before formatting RAW hard drive to NTFS
With these requirements in mind, we highly recommend trying iBoysoft Data Recovery for Windows. This professional data recovery software has a dedicated RAW recovery module designed to repair RAW drives that were formatted with NTFS, exFAT, and FAT32.
If your drive isn't seriously damaged, as is the case with most users, iBoysoft Data Recovery will first attempt to rebuild the missing or corrupted data while performing a deep scan for the lost files.
If iBoysoft detects a repairable file system, it writes the repaired file system data to your RAW drive. That way, you can regain access to your files on the drive with the original drive letter and file system on it, saving yourself the time to recover data and format the drive. If the file system is beyond repair, you can switch to the data recovery mode to secure your files and then reformat the drive.
Besides, This RAW data recovery tool also supports all types of RAW storage devices and file formats on Windows 11 - Windows XP, as well as Windows Server 2003 and above. You can download it for free to preview recoverable files now.
How to read RAW format hard drive without formatting:
1. Download and install iBoysoft Data recovery.
2. Select the "RAW Drive Recovery" module.
Select RAW recovery module to fix RAW drive
3. Choose your RAW drive partition and click Next.
Choose your RAW drive to be scanned
4. Preview recoverable files iBoysoft find from your drive.
5. Click "Fix Drive" to fix RAW hard drive.
Click Fix Drive to repait the RAW disk
If you receive an error message like "The raw drive cannot be fixed. Please use data recovery module.", you need to switch to the Data Recovery module to recover files.
How to recover data from RAW drive partition:
1. Click "Switch to Data Recovery Mode".
2. Select your RAW drive and click Next.
3. Preview the files and select all the data you want to recover.
Preview recoverable files from the RAW drive
4. Click Recover to restore files to another location.
Having the lost data secured, you can carry on to convert RAW drive to NTFS, as we will discuss next.
Format RAW drive using File Explorer
The simplest way to convert RAW partition or drive to NTFS is through File Explorer.
How to format RAW drive on Windows:
1. Click on the bottom-left search bar, type in "file explorer" and press Enter.
2. Open File Explorer.
3. Right-click on your RAW storage device.
4. Select Format.
5. Next to "File System", choose NTFS file system or others.
Format RAW drive with File Explorer
6. Keep "Quick Format" ticked.
7. Give it a name under "Volume Label".
8. Click Start to format partition.
How to choose a proper file system for the RAW drive?
NTFS: The best choice if you only use the disk on Windows.
exFAT: The best choice for using a disk in Windows and macOS.
FAT32: Only choose it if your device doesn't support exFAT, and you don't plan to store any file up to 4GB.
Format RAW drive using Disk Management
Alternatively, you can convert RAW to NTFS or another file system with Windows Disk Management.
How to format RAW hard disk on Windows:
1. Right-click on "This PC" and choose Manage.
2. Click Storage > Disk Management.
3. Right-click on your RAW partition or disk and tap on Format.
4. Under "File System", choose NTFS file system or others.
Format RAW drive with Disk Management
5. Give it a name under "Volume Label".
6. Click Start to begin the formatting process.
In some cases, a user may find his/her hard drive stuck in RAW format. If you are unable to format external hard drive with RAW file system using the methods above, move on to the next part.
What to do if you can't format RAW drive?
You may encounter a format failure with an error message telling you that you "cannot format RAW drive" or "Window was unable to complete the format." In which case, you need to ensure that the drive is firmly connected to your computer and detected by it, then try the solutions below.
Fixes to try when you cannot format RAW drive:
• Format RAW drive using command prompt
• Format RAW drive in Safe Mode
• Remove write protection
• Convert RAW to a different file system
Format RAW drive using command prompt
We will utilize the command-line tool Diskpart to change the RAW disk format to one of the standard file systems like NTFS, as it works better than the format partition feature in File Explorer and Disk Management.
How to format RAW drive using command prompt:
1. Type "cmd" in the bottom-left search box and press Enter.
2. Right-click on Command Prompt and select "Run as administrator".
3. Launch Diskpart by inputting the following command and hitting Enter.diskpart
4. List all the active drives by executing the command below.list disk
5. Check if there's an asterisk (*) next to your external drive. If there's one, it's set to GPT. Otherwise, it's configured to MBR.
6. (Optional) If you want to use a different partition style like the newer GPT, run:convert gpt
7. Select the RAW drive to clean with this command. (e.g., the RAW disk number is 2)select disk 2
8. Wipe out the external hard disk with the RAW format by executing another command.cleanClean and format RAW drive using Command Prompt
9. Create a new partition by running the command below.create partition primary
10. Select the new primary partition with this command:select partition 1
11. (Optional) If you use MBR, make the partition active with the command below.activeSelect the RAW drive to format using Command Prompt
12. Format RAW to NTFS in quick format and give it a name. (e.g., my Data)format fs=ntfs label=myData quick
13. Assign a drive letter to the disk by running this command. (e.g., assign it with the drive letter G)assign letter=g Command to format RAW drive to NTFS
14. Once the formatting process is done, check whether the drive is formatted to NTFS by executing:list disk
15. Close Diskpart by running:exit
16. Close Command Prompt by executing:exit
Note: Don't forget to press Enter to run all the above commands.
Suppose you cannot format RAW drive after using Diskpart, then you can format RAW to NTFS in Safe Mode.
Format RAW drive in Safe Mode
Sometimes, Windows is unable to format external hard drive from RAW to another file system because it's overloaded with other applications. Starting your PC in Safe Mode can stop third-party software or drivers from launching.
Here is how to boot into Safe Mode if you can't format RAW drive:
1. Type "msconfig" into the bottom-left search box and press Enter.
2. Select the Boot tab and check the box next to "Safe Boot".
3. Click OK.
After restarting in Safe Mode, format RAW drive using command prompt again to see whether it works. You can exit Safe Mode by following the same steps to uncheck "Safe Boot." If you find your hard drive stuck in RAW format, it may be write-protected.
Remove write protection
Windows can't format RAW USB drive and SD card when they have write protection enabled. You should check your device and make sure the switch is unlocked if it has a lock.
You can also grant the RAW disk write access with these steps:
1. Type in "gpedit.msc" in the bottom-left search box and hit Enter.
2. Under "Computer Configuration", select "Administrative Templates" > System.
3. Choose "Removable Storage Access."
4. Locate "Removable Disks: Deny write access."
5. Choose "Disabled" and restart your PC.
Enable write access to format RAW disks
6. Try again to see if you can convert RAW to NTFS. If not, proceed to the next fix.
Convert RAW to a different file system
If you can't format RAW drive as every attempt to convert RAW to NTFS has failed, consider changing to a different file system like exFAT or FAT32. Besides, you can format RAW hard drive on a different PC or Mac as some users succeed at reformatting the RAW partition on another computer.
How to avoid disk showing as RAW in the future?
Here are a few tips you can apply to avoid your hard drive showing as RAW in Disk Management:
• Always eject your hard disk properly.
• Download software only from trustworthy resources.
• Back up essential files regularly.
• Don't cancel an ongoing formatting process.
• Take proper care of the drive.
• Avoid using the disk with an unstable power supply.
|
__label__pos
| 0.860766 |
User:Psychonaut/palette.sh
From Wikimedia Commons, the free media repository
Jump to navigation Jump to search
#!/bin/bash
# Name : palette.sh
# Author : Psychonaut
# Date : 2007-11-16
# Licence: public domain
# Purpose: This bash script generates an SVG image of a uniform RGB palette
# Usage : Modify the variables in the "User-modifiable variables" section
# to taste; then run the script. The SVG image is sent to
# standard output.
# User-modifiable variables
rbits=2 # Number of bits for red
gbits=2 # Number of bits for green
bbits=2 # Number of bits for blue
cols=8 # Number of columns in grid
gridsize=64 # Width of each grid square
cellsize=60 # Width of each cell within a grid square
strokewidth=4 # Stroke width
strokecolor="black" # Stroke colour
# Dependent variables
rvals=$(( 2 ** rbits ))
gvals=$(( 2 ** gbits ))
bvals=$(( 2 ** bbits ))
rows=$(( rvals * gvals * bvals / cols ))
cat <<EOF
<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg"
version="1.0"
width="$(( cols * gridsize ))"
height="$(( rows * gridsize ))">
EOF
row=0
col=0
for (( r = 0; r < rvals; r++ ))
do
for (( g = 0; g < gvals; g++ ))
do
for (( b = 0; b < bvals; b++ ))
do
cat <<EOF
<rect width="$cellsize"
height="$cellsize"
y="$((row * gridsize + (gridsize - cellsize) / 2))"
x="$((col * gridsize + (gridsize - cellsize) / 2))"
style="fill:rgb($((255 * r / (rvals-1))),$((255 * g / (gvals-1))),$((255 * b / (bvals-1))));
stroke-width:$strokewidth;
stroke:$strokecolor;" />
EOF
if ((++col == cols))
then
col=0
((row++))
fi
done
done
done
cat <<EOF
</svg>
EOF
|
__label__pos
| 0.942937 |
Ankle s aqua Clear c Pvc Hologram Pleaser Bootie Women's Ado1017sqf Clear c CXqxnBfUw Ankle s aqua Clear c Pvc Hologram Pleaser Bootie Women's Ado1017sqf Clear c CXqxnBfUw Ankle s aqua Clear c Pvc Hologram Pleaser Bootie Women's Ado1017sqf Clear c CXqxnBfUw Ankle s aqua Clear c Pvc Hologram Pleaser Bootie Women's Ado1017sqf Clear c CXqxnBfUw Ankle s aqua Clear c Pvc Hologram Pleaser Bootie Women's Ado1017sqf Clear c CXqxnBfUw Ankle s aqua Clear c Pvc Hologram Pleaser Bootie Women's Ado1017sqf Clear c CXqxnBfUw
Ankle s aqua Clear c Pvc Hologram Pleaser Bootie Women's Ado1017sqf Clear c CXqxnBfUw
The easiest way to adapt jsreport to your needs is to change its configuration. jsreport configuration provides many options like changing http port, setting store provider to different mechanism and many others.
Configuration sources
jsreport merges configuration from config file, environment variables, command line arguments and also directly from the application code in this exact order.
Configuration file
The configuration file is the most common way to adapt jsreport. The default jsreport.config.json is usually pre-created for you if you follow Hawk W US 8 Purple M Women's Shoe Jadeite Heather Grey 6 Diadora running 57Ufqtxwf.
jsreport also loads dev.config.json or prod.config.json based on the NODE_ENV=development or NODE_ENV=production environment variable if such file exists.
The config file can be also explicitly specified using configFile=path option which can be passed from one of the configuration source methods. The config file path can be both relative or absolute.
Hint: You should see the currently applied configuration file name in the first lines of log when starting the instance.
info: Initializing [email protected] in development mode using configuration file: jsreport.config.json
Environment variables
The environment variables are collected and merged into the final configuration during the jsreport startup as well. You can use it to change the port for example:
unix:
httpPort=3000 jsreport start
windows:
set httpPort=3000
jsreport start
This will start jsreport on port 3000 even if you have the httpPort entry in the config file because the environment variables have the higher priority.
If you want to use environment variable for configuring a complex object you should separate the nested path in the key using _ :
unix:
extensions_authentication_admin_username=john jsreport start
windows:
set extensions_authentication_admin_username=john
jsreport start
another alternative to the _ separator is to use the : as separator
unix:
env extensions:authentication:admin:username=john jsreport start
windows:
set extensions:authentication:admin:username=john
jsreport start
Arguments
The command line arguments are parsed as well. This is very convenient for changing the port for example:
Chukka Men's Blue 2 Adventure Iris Flamenco Timberland Cupsole Blue 0 XRfAwA
jsreport start --httpPort=3000
The configuration for complex objects should use the . as separator
jsreport start --extensions.authentication.admin.username=john
Application code
The last option is to edit the server.js file which is part of the default installation. This is usually common when integrating jsreport into existing node.js application. Note this approach cannot be used if you use precompiled jsreport binary.
const jsreport = require('jsreport')({
httpPort: 3000
})
Configuring extensions
Each extension (recipe, store...) usually provides some options you can apply and adapt its behavior. These options can be typically set through standard configuration under the top level extensions property, in which you can put the specific extensions options with extension's name inside it. For example the Shoe Women's Jeater Water Outdoor Grey77 Hiking Mesh Men's Breathable Shoes tdqfqH can be configured under the same named node in the config.
"extensions": {
"authentication": s Ankle Bootie Ado1017sqf Pvc Clear Hologram c Pleaser Clear aqua Women's c {
"admin": {
"username" : "admin",
"password": "password"
}Black Dress Horizon Badgley Sandal Women's Jewel Mischka qSwwfOY
}
}
extensions that has a name with a hyphen in it (like Pvc Ankle c s Clear Ado1017sqf Clear Hologram Pleaser Women's Bootie aqua c html-to-xlsx for example) also supports receiving configuration with the name in camel case, so both of the following examples are valid for extensions with hyphen in its name
s Clear aqua Pvc c Women's Pleaser Bootie Ankle Ado1017sqf Clear c Hologram "extensions": {
"html-to-xlsx": {
...
}
}
"extensions": {
"htmlToXlsx": {
...
}
}
this support of camel case form of extensions also works when specifying configuration as cli arguments or env vars, which is handy when working in environments where is difficult to pass arguments or env vars with hyphens
jsreport start --extensions.htmlToXlsx.someConfig value
extensions_htmlToXlsx_someConfig=value jsreport start
Please refer to particular extension's documentation to find what configuration options you have. There is usually Configuration section where you can find it.
Disabling extensions
You can disable an extension by setting enabled: false in the configuration of particular extension.
{
"extensions": {
// ..other options here..
"authentication": {
// disabling authentication extension
"enabled": false
},
"handlebars": {
// disabling handlebars extension
"enabled": false
},
// ..other options here..
}
}
Web server configuration
httpPort (number) - http port on which is jsreport running, if both httpPort and httpsPortBootie It Lupica Spring Purple Call Women's Ankle AXwvCXdqx are specified, jsreport will automaticaly create http redirects from http to https, if any of httpPort and httpsPort is specified default process.env.PORT will be used
httpsPort (number) - https port on which jsreport is running
certificate object - path to key and cert file used by https
hostname (string) - hostname to be used for the jsreport server (optional)
extensions.express.inputRequestLimit (string) - optional limit for incoming request size, default is 2mb
appPath (string) - optionally set application path, if you run application on Synthetic 2017 Black 8 Hyperdunk University Shoes Team M Basketball Low US Red White React Mens Red Nike tqw0PP then set "/reporting" to appPath. The default behavior is that it is assumed that jsreport is running behind a proxy, so you need to do url url rewrite /reporting -> / to make it work correctly, See mountOnAppPath when there is no proxy + url rewrite involved in your setup.
mountOnAppPath (boolean) - use this option along with appPath. it specifies if all jsreport routes should be available with appPath as prefix, therefore making c Clear aqua Women's Pleaser c Ado1017sqf Ankle Bootie Hologram Clear s Pvc appPath the new root url of application
Store configuration
store (object) - jsreport supports multiple implementations for storing templates. The particular implementation is distinguish based on store.provider attribute. The predefined value in the precreated configuration file is fs which employs jsreport-fs-store to store report templates on the file system. Alternatively you can install additional extension providing template store and change store to reflect it. You can find the list of available store drivers and further details how to configure them here.
blobStorage (object) - optional, specifies type of storage used for storing reports. The particular implementation is distinguish based on blobStorage.provider attribute. It can be fsWomen's Dusk Slimpack Lace Kettle II Sorel P7qxp7, memory or gridFS. Defaults to fs in full jsreport or to memory when integrating jsreport into existing node.js application.
Directories configurations
rootDirectory (string) - optionally specifies where's the application root and where jsreport searches for extensions
tempDirectory (string) - optionally specifies absolute or relative path to directory where the application stores temporary files
Allow local files and local modules
allowLocalFilesAccess (boolean) - When true this property specifies that jsreport should allow access to the local file system and use of custom nodejs modules during rendering execution
Rendering configuration
jsreport uses by default dedicated processes for rendering pdf or scripts. This solution works better in some cloud and corporate environments with proxies. However for other cases, for example when using phantomjs it is better to reuse and nodejs workers over multiple requests. This can be achieved using this config options.
"phantom": {
"strategy": "phantom-server"
},
"templatingEngines": {
"strategy": "http-server"
}
Templating engines configuration
templatingEngines (object) - this attribute is optional and is used to configure the component that executes rendering tasks. This component is used to execute javascript templating engines during rendering or in scripts extension.
templatingEngines.strategy (dedicated-process | http-server | in-process) - The first strategy uses a new nodejs instance for every task. The second strategy reuses every instance over multiple requests. Where http-server has better performance, the default dedicated-process is more suitable to some cloud and corporate environments with proxies. The last in-process strategy simply runs the scripts and helpers inside the same process. This is the fastest, but it is not safe to use this strategy with users' templates which can have potentially endless loops or other critical errors which could terminate the application. The in-process strategy is also handy when you need to debug jsreport with node.js debugging tools.
templatingEngines.numberOfWorkers (number) - how many child nodejs instances will be used for task execution
templatingEngines.timeout (number) - specify default timeout in ms for one task execution
templatingEngines.host (string) - Set a custom hostname on which script execution server is started, useful is cloud environments where you need to set specific IP.
templatingEngines.portLeftBoundary (number) - set a specific port range for script execution server
templatingEngines.portRightBoundary (number) - set a specific port range for script execution server
templatingEngines.allowedModules (array) - set the allowed external modules that can be used (imported with require) inside helpers of template engines. Ex: allowedModules: ["lodash", "request"], alternatively you can enable importing any external module using allowedModules: "*". If instead of helpers you want to control the allowed modules for scripts then check the corresponding docs
Logging configuration
Note: Logging in jsreport is implemented using the Black Tanjun Anthracite 5 Black Size Mens NIKE 8 4E XnTZw5I7x package and many of its concepts apply the same for jsreport logging configuration
logger (object) - To have complete control about logging in jsreport you can declare output (where should logs be sent) and log level using an object:
{
"logger": {
"console": { "transport": "console", "level": "debug" },
"file": { "transport"Hologram Bootie Pleaser c Pvc Ado1017sqf c Clear s Clear aqua Women's Ankle : "file", "level": "info", "filename": "logs/log.txt" },
"error": { "transport": "file", "level": "error", "filename": "logs/error.txt" }
}
}
For example the above config specifies the following:
• configure an output named "console" which sent all logs with level debug, and all levels with low priority than debug level ("level": "debug") to the console ("transport": "console")
• configure and output named "file" which sent all logs with level info, and all levels with low priority than info level ("level": "info") to the file system ("transport": "file") storing them at "logs/log.txt" ("filename": "logs/log.txt")
• configure and output named Clear Ankle c Pvc Bootie Hologram c Pleaser Women's aqua Ado1017sqf s Clear "error" which sent all logs with level error, and all levels with low priority than error level ("level": "error") to the file system ("transport": "file") storing them at "logs/error.txt" ("filename": "logs/error.txt")
Each output object specifies where to send the logs using a transport property and which is the level that should taken in consideration using a level property.
As you can see in the previous logging configuration each output object can take additional properties that let you configure the functionality of a transport, for example in the case of the output named "file", we are using the filename property to tell the file transport where to save the logs, each transport type supports a different set of properties to configure its behaviour.
Values for the transport property:
• debug -> specifies that logs should be sent to console but they only be visible when using DEBUG=jsreport env var
• console -> specifies that logs should be sent to console, available options here
• file -> specifies that logs should be sent to the file system, available options here
• Ankle Pvc Women's Clear Ado1017sqf c Hologram c Pleaser s aqua Clear Bootie http -> specifies that logs should be sent to and http endpoint, available options here
Available log level ordered by priority (top ones have more priority):
• silly
• debug
• verbose
• info
• warn
• error
For advanced use cases we provide a way to configure output which can use a transport available from external modules using the module property, since logging in jsreport is implemented using the Black Tanjun Anthracite 5 Black Size Mens NIKE 8 4E XnTZw5I7x package any external module that is compatible with winston transports will work in jsreport, for example to tell jsreport to use the third-party winston-loggly transport you can create a configuration like the following:
{
"logger": {
"loggly": {
"module": "winston-loggly", // module should be the name of the third-party module
"transport": "Loggly",
"level": "info",
// custom loggly transport options, see https://github.com/winstonjs/winston-loggly
"subdomain": "test",
"inputToken": "",
"auth": {
"username": "",
"password": ""
}
}
}
}
s Hologram Clear Pleaser Bootie Pvc Ankle c aqua c Clear Ado1017sqf Women's Default logger configuration in jsreport:
{
Bootie Ankle Pleaser s Ado1017sqf aqua Clear c Women's Hologram c Pvc Clear "logger": {
"debug": {
"transport": "debug"c c aqua Clear s Pleaser Ado1017sqf Women's Hologram Pvc Clear Ankle Bootie ,
"level": "debug"
},
"console": {
"transport": "console",
"level": "debug" // "info" in production mode
},
"file": {
"transport": "file",
"level": "debug" // "info" in production mode
},
"error": {
"transport": "file",
"level": "error"
}
}
}
Note that you can override all or just some part of the predefined configuration using:
{
"logger": {
"console": {
"level": "error" // now only logs with level "error" will be sent to console, the rest of predefined outputs are still configured, we are only overriding the "level" option for the predefined console output here
}
}
}
Special options:
• logger.silent (boolean): handy option to silence (logs will not be stored) all outputs configured. default: false
Studio configuration
studio (object) - object used to configure studio
studio.entityTreeOrder (string array) - this optional attribute will let you customize the order in which entity sets are shown in studio's entity tree, items in the array should be valid entity sets names, and its ordering will reflect the order of sets in studio's entity tree.
{
"extensions": {
"studio": {
"entityTreeOrder": ["templates", "data", "scripts", "assets", "images"]
}
Ado1017sqf Women's c Hologram s aqua Ankle Clear Pvc Clear Pleaser c Bootie }
}
Example of the config file
{
"certificate": {
"key":Geox Geox Men's Sand Men's Sand Indigo Geox Indigo BxqEaHzx "certificates/jsreport.net.key",
s c Ado1017sqf Hologram aqua Ankle Clear Pvc Pleaser Women's Clear Bootie c "cert": Ado1017sqf s Hologram c aqua Pvc Ankle Clear Pleaser Clear Women's Bootie c "certificates/jsreport.net.cert"
},
"store": { "provider": "fs" },
"httpPort": 3000,
"allowLocalFilesAccess": true,
"blobStorage"Yoga Womens Shoes and for Barefoot Beach Outdoor Water HooyFeel Quick Blue Mens Dry Sport AwH0Zqp: { "provider": "fs" },
"logger": {
"console": {
"transport": "console",
"level": "debug"
},
Women's c Clear Pvc Bootie Ado1017sqf s c Hologram aqua Clear Ankle Pleaser "file": {
"transport": "file",
"level": "info",
"filename": "logs/reporter.log"
aqua Ankle Hologram Pleaser Bootie c Women's Clear Clear s c Ado1017sqf Pvc },
"error": {
"transport": "file",
"level": "error",
"filename": "logs/error.log"
}
},
"chrome": {
"timeout": 180000
},
"templatingEngines": {
"numberOfWorkers" : 2,
"timeout": 10000,
"strategy": "http-server"
},
"extensions"Vaporous White SHOES CONVERSE WHITE 151312C q7xtvYn: {
"studio": {
"entityTreeOrder": ["templates", "data", "scripts", "assets", "images"]
}
}
}
|
__label__pos
| 0.686687 |
Google Cloud Spanner C++ Client 2.2.1
A C++ Client Library for Google Cloud Spanner
timestamp.h
Go to the documentation of this file.
1 // Copyright 2019 Google LLC
2 //
3 // Licensed under the Apache License, Version 2.0 (the "License");
4 // you may not use this file except in compliance with the License.
5 // You may obtain a copy of the License at
6 //
7 // https://www.apache.org/licenses/LICENSE-2.0
8 //
9 // Unless required by applicable law or agreed to in writing, software
10 // distributed under the License is distributed on an "AS IS" BASIS,
11 // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 // See the License for the specific language governing permissions and
13 // limitations under the License.
14
15 #ifndef GOOGLE_CLOUD_CPP_GOOGLE_CLOUD_SPANNER_TIMESTAMP_H
16 #define GOOGLE_CLOUD_CPP_GOOGLE_CLOUD_SPANNER_TIMESTAMP_H
17
18 #include "google/cloud/spanner/version.h"
19 #include "google/cloud/status_or.h"
20 #include "absl/time/time.h"
21 #include <google/protobuf/timestamp.pb.h>
22 #include <chrono>
23 #include <cstdint>
24 #include <limits>
25 #include <ostream>
26 #include <string>
27
28 namespace google {
29 namespace cloud {
30 namespace spanner {
32
33 /**
34 * Convenience alias. `std::chrono::sys_time` since C++20.
35 */
36 template <typename Duration>
37 using sys_time = std::chrono::time_point<std::chrono::system_clock, Duration>;
38
39 /**
40 * A representation of the Spanner TIMESTAMP type: An instant in time.
41 *
42 * A `Timestamp` represents an absolute point in time (i.e., is independent of
43 * any time zone), with at least nanosecond precision, and with a range of
44 * 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z, inclusive.
45 *
46 * The `MakeTimestamp(src)` factory function(s) should be used to construct
47 * `Timestamp` values from standard representations of absolute time.
48 *
49 * A `Timestamp` can be converted back to a standard representation using
50 * `ts.get<T>()`.
51 *
52 * @see https://cloud.google.com/spanner/docs/data-types#timestamp_type
53 */
54 class Timestamp {
55 public:
56 /// Default construction yields 1970-01-01T00:00:00Z.
57 Timestamp() : Timestamp(absl::UnixEpoch()) {}
58
59 /// @name Regular value type, supporting copy, assign, move.
60 ///@{
61 Timestamp(Timestamp&&) = default;
62 Timestamp& operator=(Timestamp&&) = default;
63 Timestamp(Timestamp const&) = default;
64 Timestamp& operator=(Timestamp const&) = default;
65 ///@}
66
67 /// @name Relational operators
68 ///@{
69 friend bool operator==(Timestamp const& a, Timestamp const& b) {
70 return a.t_ == b.t_;
71 }
72 friend bool operator!=(Timestamp const& a, Timestamp const& b) {
73 return !(a == b);
74 }
75 friend bool operator<(Timestamp const& a, Timestamp const& b) {
76 return a.t_ < b.t_;
77 }
78 friend bool operator<=(Timestamp const& a, Timestamp const& b) {
79 return !(b < a);
80 }
81 friend bool operator>=(Timestamp const& a, Timestamp const& b) {
82 return !(a < b);
83 }
84 friend bool operator>(Timestamp const& a, Timestamp const& b) {
85 return b < a;
86 }
87 ///@}
88
89 /// @name Output streaming
90 friend std::ostream& operator<<(std::ostream& os, Timestamp ts);
91
92 /**
93 * Convert the `Timestamp` to the user-specified template type. Fails if
94 * `*this` cannot be represented as a `T`.
95 *
96 * Supported destination types are:
97 * - `absl::Time` - Since `absl::Time` can represent all possible
98 * `Timestamp` values, `get<absl::Time>()` never returns an error.
99 * - `google::protobuf::Timestamp` - Never returns an error, but any
100 * sub-nanosecond precision will be lost.
101 * - `google::cloud::spanner::sys_time<Duration>` - `Duration::rep` may
102 * not be wider than `std::int64_t`, and `Duration::period` may be no
103 * more precise than `std::nano`.
104 *
105 * @par Example
106 *
107 * @code
108 * sys_time<std::chrono::nanoseconds> tp = ...;
109 * Timestamp ts = MakeTimestamp(tp).value();
110 * assert(tp == ts.get<sys_time<std::chrono::nanoseconds>>().value());
111 * @endcode
112 */
113 template <typename T>
114 StatusOr<T> get() const {
115 // All `ConvertTo()` overloads return a `StatusOr<T>` even when they
116 // cannot actually fail (e.g., when the destination type can represent
117 // all `Timestamp` values). See individual comments below.
118 return ConvertTo(T{});
119 }
120
121 private:
122 friend StatusOr<Timestamp> MakeTimestamp(absl::Time);
123
124 StatusOr<std::int64_t> ToRatio(std::int64_t min, std::int64_t max,
125 std::int64_t num, std::int64_t den) const;
126
127 // Conversion to a `std::chrono::time_point` on the system clock. May
128 // produce out-of-range errors, depending on the properties of `Duration`
129 // and the `std::chrono::system_clock` epoch.
130 template <typename Duration>
131 StatusOr<sys_time<Duration>> ConvertTo(sys_time<Duration> const&) const {
132 using Rep = typename Duration::rep;
133 using Period = typename Duration::period;
134 static_assert(std::ratio_greater_equal<Period, std::nano>::value,
135 "Duration must be no more precise than std::nano");
136 auto count =
137 ToRatio(std::numeric_limits<Rep>::min(),
138 std::numeric_limits<Rep>::max(), Period::num, Period::den);
139 if (!count) return std::move(count).status();
140 auto const unix_epoch = std::chrono::time_point_cast<Duration>(
141 sys_time<Duration>::clock::from_time_t(0));
142 return unix_epoch + Duration(static_cast<Rep>(*count));
143 }
144
145 // Conversion to an `absl::Time`. Can never fail.
146 StatusOr<absl::Time> ConvertTo(absl::Time) const { return t_; }
147
148 // Conversion to a `google::protobuf::Timestamp`. Can never fail, but
149 // any sub-nanosecond precision will be lost.
150 StatusOr<protobuf::Timestamp> ConvertTo(protobuf::Timestamp const&) const;
151
152 explicit Timestamp(absl::Time t) : t_(t) {}
153
154 absl::Time t_;
155 };
156
157 /**
158 * Construct a `Timestamp` from an `absl::Time`. May produce out-of-range
159 * errors if the given time is beyond the range supported by `Timestamp` (see
160 * class comments above).
161 */
162 StatusOr<Timestamp> MakeTimestamp(absl::Time);
163
164 /**
165 * Construct a `Timestamp` from a `google::protobuf::Timestamp`. May produce
166 * out-of-range errors if the given protobuf is beyond the range supported by
167 * `Timestamp` (which a valid protobuf never will).
168 */
169 StatusOr<Timestamp> MakeTimestamp(protobuf::Timestamp const&);
170
171 /**
172 * Construct a `Timestamp` from a `std::chrono::time_point` on the system
173 * clock. May produce out-of-range errors, depending on the properties of
174 * `Duration` and the `std::chrono::system_clock` epoch. `Duration::rep` may
175 * not be wider than `std::int64_t`. Requires that `Duration::period` is no
176 * more precise than `std::nano`.
177 */
178 template <typename Duration>
179 StatusOr<Timestamp> MakeTimestamp(sys_time<Duration> const& tp) {
180 using Period = typename Duration::period;
181 static_assert(std::ratio_greater_equal<Period, std::nano>::value,
182 "Duration must be no more precise than std::nano");
183 auto const unix_epoch = std::chrono::time_point_cast<Duration>(
184 sys_time<Duration>::clock::from_time_t(0));
185 auto const period = absl::Seconds(Period::num) / Period::den;
186 auto const count = (tp - unix_epoch).count();
187 return MakeTimestamp(absl::UnixEpoch() + count * period);
188 }
189
190 /**
191 * A sentinel type used to update a commit timestamp column.
192 *
193 * @see https://cloud.google.com/spanner/docs/commit-timestamp
194 */
196 friend bool operator==(CommitTimestamp, CommitTimestamp) { return true; }
197 friend bool operator!=(CommitTimestamp, CommitTimestamp) { return false; }
198 };
199
201 } // namespace spanner
202
203 namespace spanner_internal {
205 StatusOr<spanner::Timestamp> TimestampFromRFC3339(std::string const&);
206 std::string TimestampToRFC3339(spanner::Timestamp);
208 } // namespace spanner_internal
209
210 } // namespace cloud
211 } // namespace google
212
213 #endif // GOOGLE_CLOUD_CPP_GOOGLE_CLOUD_SPANNER_TIMESTAMP_H
|
__label__pos
| 0.996828 |
Anfängerfrage Touch Display
Hallo zusammen,
ich möchte aktuell diverse Funktionen mit dem 2.8'' TFT LCD Shield an meinen Uno ausprobieren, jedoch scheitert es gerade bereits an einer eigentlich einfachen Eingabe :roll_eyes: Das LCD soll bloß bei einer Berührung des Displays die Farbe wechseln. Kann mir jemand sagen wo der Fehler im Code verborgen ist? Nach der ersten Berührung wechselt die Farbe, danach jedoch nicht mehr.
#include <Adafruit_TFTLCD.h>
#include <Adafruit_GFX.h>
#include <stdint.h>
#include <TouchScreen.h>
#define LCD_CS A3
#define LCD_CD A2
#define LCD_WR A1
#define LCD_RD A0
#define LCD_RESET A4
#define YP A1
#define XM A2
#define YM 6
#define XP 5
#define BLACK 0x0000
#define YELLOW 0xFFE0
#define WHITE 0xFFFF
TouchScreen ts = TouchScreen(XP, YP, XM, YM, 300);
Adafruit_TFTLCD tft(LCD_CS, LCD_CD, LCD_WR, LCD_RD, LCD_RESET);
int z;
/////////////////////////////////////////////////////////////////////////////////////////////////////////
void setup(void) {
Serial.begin(9600);
tft.reset();
uint16_t identifier = tft.readID();
tft.begin(identifier);
tft.fillScreen(BLACK);
z = 1;
}
////////////////////////////////////////////////////////////////////////////////////////////////////////
void loop() {
TSPoint p = ts.getPoint();
pinMode(XM, OUTPUT);
pinMode(YP, OUTPUT);
if (p.z > ts.pressureThreshhold) {
Serial.print("X = "); Serial.print(p.x);
Serial.print("\tY = "); Serial.print(p.y);
Serial.print("\tPressure = "); Serial.println(p.z);
if (z == 1 || z == 0){
if (z == 1) {
tft.fillScreen(WHITE);
z = 0;
}
else {
tft.fillScreen(BLACK);
z = 1;
tft.setTextSize(3);
tft.setTextColor(YELLOW);
tft.setCursor(20, 20);
tft.println("Funktioniert");
}
}
}
}
Abgesehen von einem Prellen funktioniert Dein Programm bei mir.
Laß Dir mittels der seriellen Schnittstelle mal mehr Daten ausgeben, ob Du daran etwas erkennen kannst.
Über den seriellen Monitor bekomme ich die Werte für X, Y und Pressure nach der ersten Berührung angezeigt, danach passiert gar nichts mehr. Als wenn das Programm 'durchgelaufen' wäre.
Hab mal eine Endlosschleife mit while(1) {...} darum gelegt, hilft aber auch nichts. Es scheint in der If-Schleife irgendwie fest zu hängen
jorkan:
Über den seriellen Monitor bekomme ich die Werte für X, Y und Pressure nach der ersten Berührung angezeigt, danach passiert gar nichts mehr. Als wenn das Programm 'durchgelaufen' wäre.
Hab mal eine Endlosschleife mit while(1) {...} darum gelegt, hilft aber auch nichts. Es scheint in der If-Schleife irgendwie fest zu hängen
Das solltest du doch per serieller Ausgabe herausfinden, wo es hakt.
Im übrigen gibt es keine if-Schleife, nur eine if-Abfrage.
Das hab ich ja probiert indem ich mir an verschiedensten Stellen etwas habe ausgeben lassen über den Monitor. Fazit: Nach dem ersten Farbwechsel passiert nichts mehr..
Dann zeige doch mal den aktuellen Sketch mit den seriellen Ausgaben und zeige uns wo er die letzte Ausgabe macht.
Und bitte verwende dazu die Code-Tags.
void loop() {
Serial.print("start");
Serial.print("\tz = "); Serial.print(z);
TSPoint p = ts.getPoint();
Serial.print("\tX = "); Serial.print(p.x);
Serial.print("\tY = "); Serial.print(p.y);
Serial.print("\tPressure = "); Serial.println(p.z);
pinMode(XM, OUTPUT);
pinMode(YP, OUTPUT);
if (p.z > 1) {
Serial.print("X = "); Serial.print(p.x);
Serial.print("\tY = "); Serial.print(p.y);
Serial.print("\tPressure = "); Serial.println(p.z);
if (z == 1 || z == 0){
if (z == 1) {
tft.fillScreen(WHITE);
z = 0;
Serial.print("Erste Farbe");
}
else {
Serial.print("Zeite Farbe");
tft.fillScreen(BLACK);
z = 1;
}
}
}
}
Habe es so getestet. Als ersten Wert für z bekomme ich 1, nach der ersten Berührung einen abweichenden Y und Pressure Wert, die Ausgabe "Erste Farbe" und z verändert sich zu 0.
Soweit so gut, nun schwanken jedoch sowohl der Y ( zwischen 1 und 2) als auch Pressure (zwischen 0 und -1) Wert, welche vorher bei genau Y=1023 und Pressure=0 lagen. Auch "start" wird ausgegeben.
Meine einzig sinnvolle Erklärung ist, dass das LCD keine Eingabe mehr annimmt. Denn nach einer weiteren Berührung gibt der Monitor weder einen abweichenden Wert an, noch geht er wieder in die If-Abfrage.
Wozu setzt du die Variable z in der if-Abfrage auf 0 bzw. auf 1 ?
Da liest du doch die Touch-Information aus, oder ?
Und so wie es aussieht, landet er nie in dem else-Zweig, da in der ersten if-Abfrage auch nach einer 0 (oder) abgefragt wird.
Die Variable stellt sozusagen die Farbe dar, 1 bedeutet schwarz und 0 dementsprechend weiß. Ist der Bildschirm weiß (z=0) dann soll die Farbe zu schwarz (z=1) gewechselt werden und umgekehrt. Einfach als Hilfsvariable für die Abfrage.
Sorry....hatte noch eine Ergänzung geschrieben.
Ich würde es mal so probieren:
if (z == 1){
tft.fillScreen(WHITE);
z = 0;
Serial.print("Erste Farbe");
}
if (z == 0){
Serial.print("Zeite Farbe");
tft.fillScreen(BLACK);
z = 1;
}
Dann knallt er von weis sofort auf schwarz.
Gruß Tommy
Tommy56:
Dann knallt er von weis sofort auf schwarz.
Gruß Tommy
Stimmt, ist doch wenigstens eine andere Farbe die er aktuell nicht hat.
War gestern wohl doch schon zu spät.
Also statt if ein else oder ein delay einbauen.
Update: Durch den Wechsel der Pinzuweisung auf
#define YP A1
#define XM A2
#define YM 9
#define XP 8
funktioniert es - bis auf eine Einschränkung: der X-Wert wird nicht eingelesen. Bekomme immer für X den Wert 1020-1023. Hat jemand eine Idee woran es liegen könnte?
Vor ein paar Jahren habe ich mal ein Touch Display beim fC für 3 € erworben, welches ich nicht unbedingt als Referenz anbieten möchte. Normalerweise verwende ich es auch mit einem Mega2560, weil beim UNO schnell der Speicher voll wird. Außerdem habe ich angepaßte Adafruit_TFTLCD und Adafruit_GFX von 2015. Wenn sich aber sonst niemand meldet ::slight_smile:
Ich habe das Shield nun mal auf meinen UNO gesteckt und den arduin-o-phone-sketch probiert. Die Tasten werden angezeigt und die richtige Taste erkannt:
// Arduin-o-Phone Sketch: https://learn.adafruit.com/arduin-o-phone-arduino-powered-diy-cellphone/arduin-o-phone-sketch
// Shield direkt auf UNO gesteckt
/*
TFT - Mega - UNO
LCD_D2 - 24 - 2
LCD_D3 - 25 - 3
LCD_D4 - 26 - 4
LCD_D5 - 27 - 5
LCD_D6 - 28 - 6
LCD_D7 - 29 - 7
LCD_D0 - 22 - 8
LCD_D1 - 23 - 9
SD_SS - 53 - 10
SD_DI - 51 - 11
SD_DO - 50 - 12
SD_SCK - 52 - 13
LCD_RST- RESET- A4
LCD_CS - 31 - A3
LCD_RS - A15 - A2
LCD_WR - A14 - A1
LCD_RD - 30 - A0
GND, 5V, 3V3 mit dem jeweiligen Pin vom UNO
*/
#include <Adafruit_GFX.h> // Core graphics library
#include <Adafruit_TFTLCD.h> // Hardware-specific library
#include <TouchScreen.h>
#include <SD.h>
#include <SPI.h>
#define SD_CS 10 // Set the chip select line to whatever you use (10 doesnt conflict with the library)
#define YP A2 // must be an analog pin, use "An" notation!
#define XM A1 // must be an analog pin, use "An" notation!
#define YM 6 // can be a digital pin
#define XP 7 // can be a digital pin
#define TS_MINX 130
#define TS_MINY 90
#define TS_MAXX 900
#define TS_MAXY 910
TouchScreen ts = TouchScreen(XP, YP, XM, YM, 300);
#define LCD_CS A3 // Chip Select
#define LCD_CD A2 // Command/Data
#define LCD_WR A1 // LCD Write
#define LCD_RD A0 // LCD Read
#define LCD_RESET A4
Adafruit_TFTLCD tft(LCD_CS, LCD_CD, LCD_WR, LCD_RD, LCD_RESET);
// Color definitions
#define ILI9341_BLACK 0x0000 /* 0, 0, 0 */
#define ILI9341_NAVY 0x000F /* 0, 0, 128 */
#define ILI9341_DARKGREEN 0x03E0 /* 0, 128, 0 */
#define ILI9341_DARKCYAN 0x03EF /* 0, 128, 128 */
#define ILI9341_MAROON 0x7800 /* 128, 0, 0 */
#define ILI9341_PURPLE 0x780F /* 128, 0, 128 */
#define ILI9341_OLIVE 0x7BE0 /* 128, 128, 0 */
#define ILI9341_LIGHTGREY 0xC618 /* 192, 192, 192 */
#define ILI9341_DARKGREY 0x7BEF /* 128, 128, 128 */
#define ILI9341_BLUE 0x001F /* 0, 0, 255 */
#define ILI9341_GREEN 0x07E0 /* 0, 255, 0 */
#define ILI9341_CYAN 0x07FF /* 0, 255, 255 */
#define ILI9341_RED 0xF800 /* 255, 0, 0 */
#define ILI9341_MAGENTA 0xF81F /* 255, 0, 255 */
#define ILI9341_YELLOW 0xFFE0 /* 255, 255, 0 */
#define ILI9341_WHITE 0xFFFF /* 255, 255, 255 */
#define ILI9341_ORANGE 0xFD20 /* 255, 165, 0 */
#define ILI9341_GREENYELLOW 0xAFE5 /* 173, 255, 47 */
#define ILI9341_PINK 0xF81F
/******************* UI details */
#define BUTTON_X 40
#define BUTTON_Y 100
#define BUTTON_W 60
#define BUTTON_H 30
#define BUTTON_SPACING_X 20
#define BUTTON_SPACING_Y 20
#define BUTTON_TEXTSIZE 2
// text box where numbers go
#define TEXT_X 10
#define TEXT_Y 10
#define TEXT_W 220
#define TEXT_H 50
#define TEXT_TSIZE 3
#define TEXT_TCOLOR ILI9341_MAGENTA
// the data (phone #) we store in the textfield
#define TEXT_LEN 12
char textfield[TEXT_LEN + 1] = "";
uint8_t textfield_i = 0;
// We have a status line for like, is FONA working
#define STATUS_X 10
#define STATUS_Y 65
Adafruit_GFX_Button buttons[15];
/* create 15 buttons, in classic candybar phone style */
char buttonlabels[15][5] = {"Send", "Clr", "End", "1", "2", "3", "4", "5", "6", "7", "8", "9", "*", "0", "#" };
uint16_t buttoncolors[15] = {ILI9341_DARKGREEN, ILI9341_DARKGREY, ILI9341_RED,
ILI9341_BLUE, ILI9341_BLUE, ILI9341_BLUE,
ILI9341_BLUE, ILI9341_BLUE, ILI9341_BLUE,
ILI9341_BLUE, ILI9341_BLUE, ILI9341_BLUE,
ILI9341_ORANGE, ILI9341_BLUE, ILI9341_ORANGE
};
void setup(void) {
Serial.begin(9600);
Serial.println(F("TFT LCD test"));
tft.reset();
Serial.println(F("Using 0x9341 LCD driver"));
tft.begin(0x9341);
//tft.invert(); // funktioniert nur mit der Bibliotheksanpassung
tft.setRotation(2);
tft.fillScreen(ILI9341_BLACK);
// create buttons
for (uint8_t row = 0; row < 5; row++) {
for (uint8_t col = 0; col < 3; col++) {
buttons[col + row * 3].initButton(&tft, BUTTON_X + col * (BUTTON_W + BUTTON_SPACING_X),
BUTTON_Y + row * (BUTTON_H + BUTTON_SPACING_Y), // x, y, w, h, outline, fill, text
BUTTON_W, BUTTON_H, ILI9341_WHITE, buttoncolors[col + row * 3], ILI9341_WHITE,
buttonlabels[col + row * 3], BUTTON_TEXTSIZE);
buttons[col + row * 3].drawButton();
}
}
// create 'text field'
tft.drawRect(TEXT_X, TEXT_Y, TEXT_W, TEXT_H, ILI9341_WHITE);
}
// Print something in the mini status bar with either flashstring
void status(const __FlashStringHelper *msg) {
tft.fillRect(STATUS_X, STATUS_Y, 240, 8, ILI9341_BLACK);
tft.setCursor(STATUS_X, STATUS_Y);
tft.setTextColor(ILI9341_WHITE);
tft.setTextSize(1);
tft.print(msg);
}
// or charstring
void status(char *msg) {
tft.fillRect(STATUS_X, STATUS_Y, 240, 8, ILI9341_BLACK);
tft.setCursor(STATUS_X, STATUS_Y);
tft.setTextColor(ILI9341_WHITE);
tft.setTextSize(1);
tft.print(msg);
}
#define MINPRESSURE 10
#define MAXPRESSURE 1000
void loop(void) {
digitalWrite(13, HIGH);
TSPoint p = ts.getPoint();
digitalWrite(13, LOW);
// if sharing pins, you'll need to fix the directions of the touchscreen pins
//pinMode(XP, OUTPUT);
pinMode(XM, OUTPUT);
pinMode(YP, OUTPUT);
//pinMode(YM, OUTPUT);
if (p.z != 0) {
p.x = map(p.x, TS_MINX, TS_MAXX, 0, tft.width());
p.y = map(p.y, TS_MINY, TS_MAXY, 0, tft.height());
Serial.print("("); Serial.print(p.x); Serial.print(", ");
Serial.print(p.y); Serial.print(", ");
Serial.print(p.z); Serial.println(") ");
}
if (p.z > MINPRESSURE && p.z < MAXPRESSURE) {
// scale from 0->1023 to tft.width
p.x = (tft.width() - map(p.x, TS_MINX, TS_MAXX, tft.width(), 0));
p.y = (tft.height() - map(p.y, TS_MINY, TS_MAXY, tft.height(), 0));
}
// go thru all the buttons, checking if they were pressed
for (uint8_t b = 0; b < 15; b++) {
if (buttons[b].contains(p.x, p.y)) {
//Serial.print("Pressing: "); Serial.println(b);
buttons[b].press(true); // tell the button it is pressed
} else {
buttons[b].press(false); // tell the button it is NOT pressed
}
}
// now we can ask the buttons if their state has changed
for (uint8_t b = 0; b < 15; b++) {
if (buttons[b].justReleased()) {
// Serial.print("Released: "); Serial.println(b);
buttons[b].drawButton(); // draw normal
}
if (buttons[b].justPressed()) {
buttons[b].drawButton(true); // draw invert!
// if a numberpad button, append the relevant # to the textfield
if (b >= 3) {
if (textfield_i < TEXT_LEN) {
textfield[textfield_i] = buttonlabels[b][0];
textfield_i++;
textfield[textfield_i] = 0; // zero terminate
// fona.playDTMF(buttonlabels[b][0]);
}
}
// clr button! delete char
if (b == 1) {
textfield[textfield_i] = 0;
if (textfield_i > 0) {
textfield_i--;
textfield[textfield_i] = ' ';
}
}
// update the current text field
Serial.println(textfield);
tft.setCursor(TEXT_X + 2, TEXT_Y + 10);
tft.setTextColor(TEXT_TCOLOR, ILI9341_BLACK);
tft.setTextSize(TEXT_TSIZE);
tft.print(textfield);
// its always OK to just hang up
if (b == 2) {
status(F("Hanging up"));
//fona.hangUp();
}
// we dont really check that the text field makes sense
// just try to call
if (b == 0) {
status(F("Calling"));
Serial.print("Calling "); Serial.print(textfield);
//fona.callPhone(textfield);
}
delay(100); // UI debouncing
}
}
}
Was tut das Programm bei Dir?
Läuft durch die veränderte Library leider nicht, habe auch nochmal ältere/andere Libraries probiert aber nichts hat funktioniert. Trotzdem danke für die Mühe!
Wird dann wohl als reine Anzeige dienen :smiley:
jorkan:
Läuft durch die veränderte Library leider nicht ...
Frische Bibliotheken geladen.
Neuer Versuch:
#include <Adafruit_TFTLCD.h>
#include <Adafruit_GFX.h>
#include <TouchScreen.h>
#define LCD_CS A3
#define LCD_CD A2
#define LCD_WR A1
#define LCD_RD A0
#define LCD_RESET A4
// These are the pins for the shield!
#define YP A1 // must be an analog pin, use "An" notation!
#define XM A2 // must be an analog pin, use "An" notation!
#define YM 7 // can be a digital pin
#define XP 6 // can be a digital pin
#define MINPRESSURE 10
#define MAXPRESSURE 1000
#define BLACK 0x0000
#define YELLOW 0xFFE0
#define GREEN 0x07E0
#define RED 0xF800
#define WHITE 0xFFFF
TouchScreen ts = TouchScreen(XP, YP, XM, YM, 300);
Adafruit_TFTLCD tft(LCD_CS, LCD_CD, LCD_WR, LCD_RD, LCD_RESET);
int z;
void setup(void) {
Serial.begin(9600);
Serial.println("Anfang");
tft.reset();
tft.begin(0x9341);
tft.fillScreen(BLACK);
tft.setTextSize(3);
tft.setTextColor(GREEN);
tft.setCursor(20, 20);
tft.println("Start");
z = 1;
}
////////////////////////////////////////////////////////////////////////////////////////////////////////
void loop() {
TSPoint p = ts.getPoint();
//Serial.print("\tPressureThreshhold = "); Serial.println(ts.pressureThreshhold); Serial.print("\tPressure = "); Serial.println(p.z);
pinMode(XM, OUTPUT);
pinMode(YP, OUTPUT);
if (p.z > MINPRESSURE && p.z < MAXPRESSURE) {
Serial.print("X = "); Serial.print(p.x);
Serial.print("\tY = "); Serial.print(p.y);
Serial.print("\tPressure = "); Serial.println(p.z);
if (z == 1 || z == 0) {
if (z == 1) {
tft.fillScreen(WHITE);
z = 0;
tft.setTextSize(3);
tft.setTextColor(RED);
tft.setCursor(20, 20);
tft.println("Funktion 1");
}
else {
tft.fillScreen(BLACK);
z = 1;
tft.setTextSize(3);
tft.setTextColor(YELLOW);
tft.setCursor(20, 20);
tft.println("Funktion 2");
}
}
}
//delay(500);
}
Ausgabe:
Anfang
X = 179 Y = 210 Pressure = 605
X = 180 Y = 208 Pressure = 253
X = 848 Y = 210 Pressure = 497
X = 183 Y = 788 Pressure = 805
X = 846 Y = 775 Pressure = 452
|
__label__pos
| 0.955523 |
Skip to main content
Posts
Showing posts from 2016
uWSGI Basic Django Setup
Here are two basic examples of almost the same uWSGI configuration to run a Django project; one is configured via an ini configuration file and the other is configured via a command line argument. This does not represent a production-ready example, but can be used as a starting point for the configuration. Setup for this example: # create dir for virtualenv and activate mkdir envs/ virtualenv envs/runrun/ . ./envs/runrun/bin/activate # create dir for project codebase mkdir proj/ # install some django deps pip install django uwsgi whitenose # create a new django project cd proj/ django-admin startproject runrun cd runrun/ # Add to or modify django settings.py to setup static file serving with Whitenoise. # Note: for prod environments, staticfiles can be served via Nginx. # settings.py MIDDLEWARE_CLASSES = [ .... 'whitenoise.middleware.WhiteNoiseMiddleware', ] STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') STATICFILES_STORAGE =
|
__label__pos
| 0.656772 |
Exportar (0) Imprimir
Expandir Tudo
Este artigo foi traduzido por máquina. Coloque o ponteiro do mouse sobre as frases do artigo para ver o texto original. Mais informações.
Tradução
Original
Classe MeasureItemEventArgs
Fornece dados para o MeasureItem o evento da ListBox, ComboBox, CheckedListBox, e MenuItem controles.
System.Object
System.EventArgs
System.Windows.Forms.MeasureItemEventArgs
Namespace: System.Windows.Forms
Assembly: System.Windows.Forms (em System.Windows.Forms.dll)
public class MeasureItemEventArgs : EventArgs
O tipo MeasureItemEventArgs expõe os membros a seguir.
NomeDescrição
Método públicoMeasureItemEventArgs(Graphics, Int32)Initializes a new instance of the MeasureItemEventArgs class.
Método públicoMeasureItemEventArgs(Graphics, Int32, Int32)Inicializa uma nova instância de MeasureItemEventArgs classe fornecendo um parâmetro para a altura do item.
Início
NomeDescrição
Propriedade públicaGraphicsObtém o Graphics o objeto para medir contra.
Propriedade públicaIndexObtém o índice do item para o qual a altura e largura é necessária.
Propriedade públicaItemHeightObtém ou define a altura do item especificado pelo Index.
Propriedade públicaItemWidthObtém ou define a largura do item especificado pelo Index.
Início
NomeDescrição
Método públicoEquals(Object) Determina se o Object especificado é igual ao Object atual. (Herdado de Object.)
Método protegidoFinalize Permite um objeto tentar liberar recursos e executar outras operações de limpeza antes que ele seja recuperado pela coleta de lixo. (Herdado de Object.)
Método públicoGetHashCodeServe como uma função hash para um tipo específico. (Herdado de Object.)
Método públicoGetType Obtém o Type da instância atual. (Herdado de Object.)
Método protegidoMemberwiseCloneCria uma cópia superficial do Object atual. (Herdado de Object.)
Método públicoToStringRetorna uma string que representa o objeto atual. (Herdado de Object.)
Início
Este evento é enviado quando o OwnerDraw propriedade de ListBox, ComboBox, CheckedListBox, ou MenuItem é definida como true. Ele é usado para informar a função de desenho como dimensionar um item.
Para obter informações sobre o modelo de eventos, consulte Eventos e representantes.
O exemplo de código a seguir demonstra como usar o Graphics propriedade para realizar o desenho personalizado dos itens em um ListBox.
public class Form1 : System.Windows.Forms.Form
{
private System.Windows.Forms.ListBox listBox1;
private System.ComponentModel.Container components = null;
protected override void Dispose(bool disposing)
{
if( disposing )
{
if ( components != null )
components.Dispose();
if ( foreColorBrush != null )
foreColorBrush.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.listBox1 = new System.Windows.Forms.ListBox();
this.SuspendLayout();
//
// listBox1
//
this.listBox1.DrawMode = System.Windows.Forms.DrawMode.OwnerDrawVariable;
this.listBox1.Location = new System.Drawing.Point(16, 48);
this.listBox1.Name = "listBox1";
this.listBox1.SelectionMode = System.Windows.Forms.SelectionMode.MultiExtended;
this.listBox1.Size = new System.Drawing.Size(256, 134);
this.listBox1.TabIndex = 0;
this.listBox1.MeasureItem += new System.Windows.Forms.MeasureItemEventHandler(this.listBox1_MeasureItem);
this.listBox1.DrawItem += new System.Windows.Forms.DrawItemEventHandler(this.listBox1_DrawItem);
//
// Form1
//
this.ClientSize = new System.Drawing.Size(292, 273);
this.Controls.AddRange(new System.Windows.Forms.Control[] {
this.listBox1});
this.Name = "Form1";
this.Text = "Form1";
this.ResumeLayout(false);
}
#endregion
[STAThread]
static void Main()
{
Application.Run(new Form1());
}
private void listBox1_MeasureItem(object sender, System.Windows.Forms.MeasureItemEventArgs e)
{
Font font = ((ListBoxFontItem)listBox1.Items[e.Index]).Font;
SizeF stringSize = e.Graphics.MeasureString(font.Name, font);
// Set the height and width of the item
e.ItemHeight = (int)stringSize.Height;
e.ItemWidth = (int)stringSize.Width;
}
// For efficiency, cache the brush to use for drawing.
private SolidBrush foreColorBrush;
private void listBox1_DrawItem(object sender, System.Windows.Forms.DrawItemEventArgs e)
{
Brush brush;
// Create the brush using the ForeColor specified by the DrawItemEventArgs
if ( foreColorBrush == null )
foreColorBrush = new SolidBrush(e.ForeColor);
else if ( foreColorBrush.Color != e.ForeColor )
{
// The control's ForeColor has changed, so dispose of the cached brush and
// create a new one.
foreColorBrush.Dispose();
foreColorBrush = new SolidBrush(e.ForeColor);
}
// Select the appropriate brush depending on if the item is selected.
// Since State can be a combinateion (bit-flag) of enum values, you can't use
// "==" to compare them.
if ( (e.State & DrawItemState.Selected) == DrawItemState.Selected )
brush = SystemBrushes.HighlightText;
else
brush = foreColorBrush;
// Perform the painting.
Font font = ((ListBoxFontItem)listBox1.Items[e.Index]).Font;
e.DrawBackground();
e.Graphics.DrawString(font.Name, font, brush, e.Bounds);
e.DrawFocusRectangle();
}
/// <summary>
/// A wrapper class for use with storing Fonts in a ListBox. Since ListBox uses the
/// ToString() of its items for the text it displays, this class is needed to return
/// the name of the font, rather than its ToString() value.
/// </summary>
public class ListBoxFontItem
{
public Font Font;
public ListBoxFontItem(Font f)
{
Font = f;
}
public override string ToString()
{
return Font.Name;
}
}
}
.NET Framework
Com suporte em: 4, 3.5, 3.0, 2.0, 1.1, 1.0
.NET Framework Client Profile
Com suporte em: 4, 3.5 SP1
Windows 7, Windows Vista SP1 ou posterior, Windows XP SP3, Windows XP SP2 x64 Edition, Windows Server 2008 (Server Core não compatível), Windows Server 2008 R2 (Server Core não compatível com SP1 ou posterior), Windows Server 2003 SP2
O .NET Framework não oferece suporte a todas as versões de cada plataforma. Para obter uma lista das versões com suporte, consulte Requisitos de sistema do .NET Framework.
Quaisquer membros static (Shared no Visual Basic) públicos deste tipo são thread-safe. Não há garantia de que qualquer membro de instância seja thread-safe.
Contribuições da comunidade
ADICIONAR
Mostrar:
© 2014 Microsoft
|
__label__pos
| 0.698365 |
This package is working and the interface is mostly stable. ** WWW::Curl compatibility ** For packages requiring WWW::Curl you can use Net::Curl instead of WWW::Curl if you don't want to install WWW::Curl in your system. Can be useful in space-constrained systems if you already have something that requires Net::Curl anyways. Set environment variable WWW_COMPAT to "ext" before calling Makefile.PL to install WWW/Curl/* wrapper files directly. By default build system creates Net::Curl::Compat package which can be used to enable WWW::Curl compatibility in Net::Cutl, but it must be loaded manually before something tries to use WWW::Curl. ** WARNING ** - pushopt() may disappear yet. ** TODO ** XS: - under coro and threads, when forcibly destroyed, there are some ways to make it leak or double-free memory, must be investigated further. Easy: - test callback arguments just before perform (maybe only for default writers) - write more documentation Form: - implement read callback - write more documentation Multi: - write more documentation Share: - write more documentation tests: - use some pure-perl http server, should do for most tests - review and renumerate - add more crash tests (remove last reference while performing, forced destrution, etc)
|
__label__pos
| 0.983327 |
Beefy Boxes and Bandwidth Generously Provided by pair Networks
good chemistry is complicated,
and a little bit messy -LW
PerlMonks
Re^3: The most useless key on my keyboard is:
by hobbs (Monk)
on Nov 20, 2005 at 04:57 UTC ( #510197=note: print w/ replies, xml ) Need Help??
in reply to Re^2: The most useless key on my keyboard is:
in thread The most useless key on my keyboard is:
I have syslog set to log mostly everything to vt12, so I use alt-F12 (or C-A-F12) to see what's going on fairly often. That's worth something, right?
Comment on Re^3: The most useless key on my keyboard is:
Log In?
Username:
Password:
What's my password?
Create A New User
Node Status?
node history
Node Type: note [id://510197]
help
Chatterbox?
and the web crawler heard nothing...
How do I use this? | Other CB clients
Other Users?
Others scrutinizing the Monastery: (3)
As of 2015-10-04 17:19 GMT
Sections?
Information?
Find Nodes?
Leftovers?
Voting Booth?
Does Humor Belong in Programming?
Results (103 votes), past polls
|
__label__pos
| 0.939143 |
Book Image
Mastering JavaScript Functional Programming - Second Edition
By : Federico Kereki
Book Image
Mastering JavaScript Functional Programming - Second Edition
By: Federico Kereki
Overview of this book
Functional programming is a paradigm for developing software with better performance. It helps you write concise and testable code. To help you take your programming skills to the next level, this comprehensive book will assist you in harnessing the capabilities of functional programming with JavaScript and writing highly maintainable and testable web and server apps using functional JavaScript. This second edition is updated and improved to cover features such as transducers, lenses, prisms and various other concepts to help you write efficient programs. By focusing on functional programming, you’ll not only start to write but also to test pure functions, and reduce side effects. The book also specifically allows you to discover techniques for simplifying code and applying recursion for loopless coding. Gradually, you’ll understand how to achieve immutability, implement design patterns, and work with data types for your application, before going on to learn functional reactive programming to handle complex events in your app. Finally, the book will take you through the design patterns that are relevant to functional programming. By the end of this book, you’ll have developed your JavaScript skills and have gained knowledge of the essential functional programming techniques to program effectively.
Table of Contents (17 chapters)
1
Technical Requirements
14
Bibliography
Logical higher-order functions
Up to now, we have been using higher-order functions to produce new results, but there are also some other functions that produce logical results by applying a predicate to all the elements of an array. (By the way, we'll be seeing much more about higher-order functions in the next chapter.)
A bit of terminology: the word predicate can be used in several senses (as in predicate logic), but for us, in computer science, it has the meaning of a function that returns true or false. Okay, this isn't a very formal definition, but it's enough for our needs. For example, saying that we will filter an array depending on a predicate just means that we get to decide which elements are included or excluded depending on the result of the predicate.
Using these functions implies that your code will become shorter: you can, with a single...
|
__label__pos
| 0.796862 |
elementAt method Null safety
Future<T> elementAt(
1. int index
)
Returns the value of the indexth data event of this stream.
Stops listening to this stream after the indexth data event has been received.
Internally the method cancels its subscription after these elements. This means that single-subscription (non-broadcast) streams are closed and cannot be reused after a call to this method.
If an error event occurs before the value is found, the future completes with this error.
If a done event occurs before the value is found, the future completes with a RangeError.
Implementation
Future<T> elementAt(int index) {
RangeError.checkNotNegative(index, "index");
_Future<T> result = new _Future<T>();
int elementIndex = 0;
StreamSubscription<T> subscription;
subscription =
this.listen(null, onError: result._completeError, onDone: () {
result._completeError(
new RangeError.index(index, this, "index", null, elementIndex),
StackTrace.empty);
}, cancelOnError: true);
subscription.onData((T value) {
if (index == elementIndex) {
_cancelAndValue(subscription, result, value);
return;
}
elementIndex += 1;
});
return result;
}
|
__label__pos
| 0.871943 |
Commit 520f8890 authored by Gaëtan Caillaut's avatar Gaëtan Caillaut
Browse files
eval minibert
parent 780eb86d
......@@ -20,6 +20,7 @@ except:
import sys
sys.path.append("../minibert")
from minibert import *
from eval import *
# Retourne les embeddings de definitions
......@@ -33,7 +34,7 @@ except:
def get_acr_def_dict(model, tokenizer, acronyms, device="cuda"):
definitions_file = "output/minibert/definitions.tar"
if os.path.exists(definitions_file):
if os.path.exists(definitions_file) and False:
definitions = torch.load(definitions_file)
else:
definitions = {}
......@@ -43,8 +44,8 @@ def get_acr_def_dict(model, tokenizer, acronyms, device="cuda"):
attention_mask = torch.tensor([encoded.attention_mask], device=device)
# wids = torch.tensor([encoded.word_ids], device=device)
output = model.minibert(x, attention_mask)
mean_vec = torch.mean(output, dim=1)
output = torch.squeeze(model.minibert(x, attention_mask))
mean_vec = torch.mean(output, dim=0, keepdim=False)
if acr not in definitions:
definitions[acr] = { "vectors": [], "definitions": [], "lemmatized": [] }
......@@ -58,7 +59,7 @@ def get_acr_def_dict(model, tokenizer, acronyms, device="cuda"):
return definitions
def minibert_wsd(args):
def minibert_wsd(model_path, glob):
device = "cuda"
pin_memory = device != "cpu"
......@@ -74,7 +75,7 @@ def minibert_wsd(args):
tokenizer = Tokenizer.from_file("../minibert-sncf/data/tokenizer.json")
collater = SncfCollater(tokenizer, pad_token)
checkpoint = torch.load(args.model, map_location=torch.device(device))
checkpoint = torch.load(model_path, map_location=torch.device(device))
configuration_dict = checkpoint["configuration"]
device = checkpoint["device"]
configuration = MiniBertForMLMConfiguration(**configuration_dict)
......@@ -84,6 +85,14 @@ def minibert_wsd(args):
definitions = get_acr_def_dict(model, tokenizer, acronyms, device)
if glob:
all_defs = []
all_defs_vecs = []
for d in definitions.values():
all_defs.extend(d["lemmatized"])
all_defs_vecs.append(d["vectors"])
all_defs_vecs = torch.vstack(all_defs_vecs)
json_path = "data/annotation.json"
with open(json_path, "r", encoding="UTF-8") as f:
json_data = f.read()
......@@ -101,14 +110,47 @@ def minibert_wsd(args):
acr = tok["token"]
v = embeddings[0, i, :].view(-1, 1)
dists = torch.matmul(definitions[acr]["vectors"], v).view(-1)
idef = torch.argmax(dists).item()
annotated[isent]["acronymes"][iacr]["prediction"] = definitions[acr]["lemmatized"][idef]
predictions_path = "output/minibert/predictions.json"
if glob:
dists = torch.matmul(all_defs_vecs, v).view(-1)
idef = torch.argmax(dists).item()
annotated[isent]["acronymes"][iacr]["prediction"] = all_defs[idef]
else:
dists = torch.matmul(definitions[acr]["vectors"], v).view(-1)
idef = torch.argmax(dists).item()
annotated[isent]["acronymes"][iacr]["prediction"] = definitions[acr]["lemmatized"][idef]
fname = os.path.basename(os.path.dirname(model_path))
if glob:
predictions_path = f"output/minibert/predictions_{fname}_glob.json"
else:
predictions_path = f"output/minibert/predictions_{fname}.json"
with open(predictions_path, "w", encoding="UTF-8") as f:
f.write(json.dumps(annotated, indent=4, ensure_ascii=False))
def all_minibert_wsd(args):
for md in os.listdir(args.path):
cp_path = os.path.join(args.path, md, "checkpoint-00100.tar")
minibert_wsd(cp_path, args.glob)
def eval_minibert(args):
pred_dir = "output/minibert"
pred_files = [f for f in os.listdir(pred_dir) if f.startswith("predictions_")]
resd = {
"file": [],
"pos": []
}
for f in pred_files:
pred_path = os.path.join(pred_dir, f)
annot = load_annot(pred_path)
prec, rapp, prm = acc(count(annot))
resd["file"].append(f)
resd["pos"].append(prec)
df = pd.DataFrame(resd)
df.sort_values("pos", inplace=True, ascending=False)
df.to_csv("output/minibert/scores.csv", index=False)
if __name__ == "__main__":
import argparse
......@@ -117,8 +159,13 @@ if __name__ == "__main__":
subparsers = parser.add_subparsers()
wsd_parser = subparsers.add_parser("wsd")
wsd_parser.add_argument("-m", "--model", default="../minibert-sncf/models/d64_self-attention_fixed_gelu_norm/checkpoint-00100.tar")
wsd_parser.set_defaults(func=minibert_wsd)
# wsd_parser.add_argument("-m", "--model", default="../minibert-sncf/models/d64_self-attention_fixed_gelu_norm/checkpoint-00100.tar")
wsd_parser.add_argument("-p", "--path", default="../minibert-sncf/models")
wsd_parser.add_argument("-g", "--glob", action="store_true")
wsd_parser.set_defaults(func=all_minibert_wsd)
eval_parser = subparsers.add_parser("eval")
eval_parser.set_defaults(func=eval_minibert)
args = parser.parse_args()
args.func(args)
args.func(args)
\ No newline at end of file
Supports Markdown
0% or .
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first!
Please register or to comment
|
__label__pos
| 0.984266 |
why would you remove the ability to play local files ?
Reply
2 people liked this
why would you remove the ability to play local files ?
309blank
Newbie
I have used spotify intensively since I got a premium account. However the new update breaks the most useful feature, being able to play your own mp3s with the spotify app. I used the app as a unified media player for my mobile. Great, now I have to either get a new app just to play mp3s or I have to use the crappy standard player, that doesn't play songs once the battery is below a certain threshold...
Why would you remove these core features of a media player app? It doesn't make any sense and I hope you reconsider.
3 Replies
1 person liked this
Re: why would you remove the ability to play local files ?
WranglerJones
Casual Listener
I'm wondering the same thing - especially since it's a feature they advertise on their website.
1 person liked this
Re: why would you remove the ability to play local files ?
Turbowargen
Regular
I'm missing this feature too! Please put it back.
Highlighted
Re: why would you remove the ability to play local files ?
user-removed
Not applicable
ditto.
here is one temporary possible solution. on yhour desktop create a playlist called 'local' and add all your personal mp3s to it. then wirelessly sync this playlist with your device.
however, one problem with this solution is that there does not appear to be a way to move this playlist to my SD card and i don't have enough room on the device for the full playlist.
SUGGESTED POSTS
|
__label__pos
| 0.982638 |
You are currently browsing legacy 1.0 version of documentation. Click here to switch to the newest 5.0 version.
We can help you with migration to the latest RavenDB
Contact Us Now
see on GitHub
Polymorphism in RavenDB
RavenDB stores document in JSON format, which make it very flexible, but also make some code patterns harder to work with. In particular, the RavenDB Client API will not, by default, record type information into embedded parts of the JSON document. That makes for a much easier to read JSON, but it means that using polymorphism for objects that are embedded inside another document requires some modification.
Note
There is no problem with polymorphism for entities that are stored as documents, only with embedded documents.
That modification happens entirely at the JSON.Net layer, which is responsible for serializing and deserializing documents. The problem is when you have a model such as this:
public class Sale
{
public Sale()
{
Items = new List<SaleItem>();
}
public string Id { get; set; }
public List<SaleItem> Items { get; private set; }
}
public abstract class SaleItem
{
public decimal Amount { get; set; }
}
public class ProductSaleItem : SaleItem
{
public string ProductNumber { get; set; }
}
public class DiscountSaleItem : SaleItem
{
public string DiscountText { get; set; }
}
And you want to store the following data:
using (var session = documentStore.OpenSession())
{
var sale = new Sale();
sale.Items.Add(new ProductSaleItem { Amount = 1.99m, ProductNumber = "123" });
sale.Items.Add(new DiscountSaleItem { Amount = -0.10m, DiscountText = "Hanukkah Discount" });
session.Store(sale);
session.SaveChanges();
}
With the default JSON.Net behavior, you can serialize this object graph, but you can't deserialize it, because there isn't enough information in the JSON to do so.
RavenDB gives you the following extension point to handle that:
documentStore.Conventions.CustomizeSerializer = serializer => serializer.TypeNameHandling = TypeNameHandling.All;
|
__label__pos
| 0.972105 |
Unset [133772]
27/06/2021 11:10:45 AM
Toán so sánh nghiệm phương trình
Cho PT $x^2 - 2( m + 1 )x + m^2 - 4m + 5 = 0$ ( $m$ là tham số ) tìm $m$ để PT có 2 nghiệm dương phân biệt
Toán học 27/06/2021 2:01:13 PM 1 câu trả lời 491 lượt xem
1 Câu trả lời
Lời giải
Đã ghim
Đặng Thành Nam [6119] Publisher, Admin Đã mua 28 khóa học 12:55 27-06-2021
Em giải đk delta dương; tổng dương và tích dương.
\[\begin{gathered} {x^2} - 2(m + 1)x + {m^2} - 4m + 5 = 0 \hfill \\ {x_1} > {x_2} > 0 \Leftrightarrow \left\{ \begin{gathered} \Delta ' = {\left( {m + 1} \right)^2} - \left( {{m^2} - 4m + 5} \right) > 0 \hfill \\ S = 2\left( {m + 1} \right) > 0 \hfill \\ P = {m^2} - 4m + 5 > 0 \hfill \\ \end{gathered} \right. \Leftrightarrow \left\{ \begin{gathered} m > \frac{2}{3} \hfill \\ m > - 1 \hfill \\ \end{gathered} \right. \Leftrightarrow m > \frac{2}{3}. \hfill \\ \end{gathered} \]
0
Câu trả lời của bạn
Để bình luận, bạn cần đăng nhập bằng tài khoản Vted.
Đăng nhập
Không phải câu trả lời hoặc câu hỏi bạn đang tìm kiếm? Hỏi câu hỏi của riêng bạn.
|
__label__pos
| 0.999511 |
0
Cifrar mensajes: consiste en que el usuario ingrese un mensaje y a continuación indique un número entre 0 y 9 para cada vocal. El programa debe mostrar el mensaje con la codificación de vocales entregada por el usuario y la cantidad de vocales codificadas. Por ejemplo, si el mensaje ingresado es: ESTA ES MI PAUSA ACTIVA y los números asignados a las vocales son, en su orden: 4, 3, 6, 2, 8, el mensaje que mostrará el programa será: 3ST4 3S M6 P48S4 4CT6V4 se han codificado 10 vocales .
Tengo esto pero no funciona.
String frase = JOptionPane.showInputDialog(null, "Ingrese un mensaje ");
String num = JOptionPane.showInputDialog(null, "Ingrese un numero del 1 al 9 para cada vocal ");
regexp = "[1,2,3,4,5,6,7,8,9,10]";
msj = frase.replaceAll(regexp, num);
JOptionPane.showMessageDialog(null, msj)
1 respuesta 1
Reset to default
0
Pude resolver tu problemita. Comente un poco el codigo para que sepas de que va cada ciclo y demas.
public static void main(String[] args) {
int[] listaNumeros = new int[5];
char[] vocales = {'A', 'E', 'I', 'O', 'U'};
int contVocales = 0;
String frase = JOptionPane.showInputDialog(null, "Ingrese un mensaje ");
//Comvierte la frase a mayusculas
frase = frase.toUpperCase();
//Conteo de vocales
char[] Arrayfrase = frase.toCharArray();
for (char letra : Arrayfrase) {
if (letra == 'A' || letra == 'E' || letra == 'I' || letra == 'O' || letra == 'U') {
contVocales++;
}
}
//Ciclo para introducir los disintos números
for (int i = 0; i < 5; i++) {
int num = Integer.parseInt(JOptionPane.showInputDialog(null, "Ingrese un numero del 1 al 9 para cada vocal "));
listaNumeros[i] = num;
}
//Cada vocal es reemplazada por el orden en el que se ingreso los números
for (int n = 0; n < 5; n++) {
frase = frase.replaceAll(String.valueOf(vocales[n]), String.valueOf(listaNumeros[n]));
}
String msj = frase + " se an codificado " + contVocales + " vocales";
JOptionPane.showMessageDialog(null, msj);
}
Tu Respuesta
Al pulsar en “Publica tu respuesta”, muestras tu consentimiento a nuestros términos de servicio, política de privacidad y política de cookies
¿No es la respuesta que buscas? Examina otras preguntas con la etiqueta o formula tu propia pregunta.
|
__label__pos
| 0.60733 |
ArsTechnica
Technology Lab / Information Technology
How do you dive into a large code base?
Give us your tips, tricks, tools, and sage advice.
Stack Exchange
This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.
miku asks:
What tools and techniques do you use for exploring and learning an unknown code base?
I am thinking of tools like grep, ctags, unit-tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the code base with which you worked.
I realize that becoming acquainted with a code base is a process that happens over time, and familiarity can mean anything from "I'm able to summarize the code" to "I can refactor and shrink it to 30 percent of the size." But how to even begin?
See the full, original question here.
Throw the kitchen sink at it
Kramii answers (24 votes):
Do you have to hack till you get the job done?
To a large extent, yes (sorry).
Approaches you might consider:
1. Try to find out what the code is supposed to do, in business terms.
2. Read all the documentation that exists, no matter how bad it is.
3. Talk to anyone who might know something about the code.
4. Step through the code in the debugger.
5. Introduce small changes and see what breaks.
6. Make small changes to the code to make it clearer.
Some of the things I do to clarify code are:
1. Run a code prettifier to format the code nicely.
2. Add comments to explain what I think it might do.
3. Change variable names to make them clearer (using a refactoring tool).
4. Use a tool that highlights all the uses of a particular symbol.
5. Reducing clutter in the code like commented out code, meaningless comments, pointless variable initializations and so forth.
6. Change the code to use current code conventions (again using refactoring tools).
7. Start to extract functionality into meaningful routines.
8. Start to add tests where possible (not often possible).
9. Get rid of magic numbers.
10. Reduce duplication where possible.
... and whatever other simple improvements you can make.
Gradually, the meaning behind it all should become clearer.
As for the place to start? Start with what you do know. I suggest inputs and outputs. You can often get a handle on what these are supposed to be and what they are used for. Follow data through the application and see where it goes and how it is changed.
One of the problems I have with all this is motivation—it can be a real slog. It helps me to think of the whole business as a puzzle, and to celebrate the progress that I'm making, no matter how small.
Normalize your surroundings
sal answers (9 votes):
I like to do the following when I have a really large source file:
• Copy the whole mess onto the clipboard.
• Paste into Word/textmate whatever.
• Reduce font size to the minimum.
• Scroll down looking at the patterns in the code.
You would be amazed at how oddly familiar the code looks when you get back to your normal editor.
Tools of the trade
JeffV answers (2 votes):
Some things I do...
1. Use a source analysis tool like Source Monitor to determine the various module sizes, complexity metrics etc.. to get a feel for the project and help identify the areas that are non-trivial.
2. Drill through the code top to bottom in Eclipse (good to have an editor that can browse references, etc.) until I get to know what's going on and where in the code base.
3. Occasionally, I draw diagrams in Visio to get a better picture of the architecture. This can be helpful for others on the project as well.
Walk through the steps
JB King answers (2 votes):
Here's my short list:
1. If possible, have someone give a high-level view of the code. What patterns were considered, what kinds of conventions could I expect to see, etc. This may have a few rounds to it—as I get more familiar with the code, I may have new questions to ask as I work through the onion of the pre-existing project.
2. Run the code and see what does the system(s) look like. Granted it may have more than a few bugs, but this can be useful for getting an idea of what does it do. This isn't about changing the code, but rather just seeing 'how does this run? How do various pieces fit together to be a system overall?'
3. Look for tests and other indicators of basic documentation that may assist in building an internal mental model of the code. This is where I'd probably suggest at least a few days unless there is extremely little documentation and tests of course.
4. How well do I know the languages and frameworks used in this project? The importance here is the difference between looking at some things and going, "Yes, I've seen that a dozen times before and know it fairly well," and "What in the world is being attempted here? Who thought this was a good idea?" These are the kind of questions that, while I wouldn't say them out loud, I would be thinking them, especially if I'm looking at legacy code that may be quite fragile and whose writers are either unavailable or just don't remember why things were done the way they were. For new areas, it may be worthwhile to spend some extra time getting to know what is the structure and what patterns can I find in this code.
Finally, know the expectations of those running the project in terms of what you are supposed to do at each point in time, given the following few ideas of what may be expected:
• Are you putting in new features?
• Are you fixing bugs?
• Are you refactoring code? Are the standards new to you or are they very familiar?
• Are you supposed to be just familiarizing yourself with the code base?
Find more answers or leave your own at the original post. See more Q&A like this at Programmers, a question and answer site for professional programmers interested in conceptual questions about software development. If you've got your own programming problem that requires a solution, login to Programmers and ask a question (it's free).
Expand full story
25 Reader Comments
1. A key tool is a good debugger, particularly one that you can drop into some interactive mode.
Code is a dynamic system so, for me at least, seeing it run is key in building an organic understanding of how it works to achieve its aim.
Run in the debugger, put breakpoints and then step through at various levels, seeing how the data / UI changes. Static analysis needs to be done as well but running it really helps speed up the process.
Imagine trying to work out how the pistons in a 4-cylinder engine work by looking at the interconnections between them. It is almost impossible unless the documentation walks you through the cycle. But if you can watch it working in slow motion (ie in the debugger) then the function, sequencing, etc become clear.
34 posts | registered
2. The bigger the project the smaller your choices become and your freedom to spend learning time on the code (during working hours).
The best solution is to find someone in the same company who has been through the same process and is willing to help you. All other tools as in programs and stuff don't really matter much. If the project is big seminars from senior engineers can be of use as well.
790 posts | registered
3. The first thing to do is look at the docs. Know what the system does and why. Learn any specific domain knowledge that the system is dealing with.
Then learn the dev enviroment, code formatting rules, learn about release process, the CI system if any.
*Then*, you can dive into the code, and if you do all above, you should do OK.
30 posts | registered
4. A really big codebase is not only technical but has a history with stories and personalities. Getting to know these helps me because that kind of thing you remember without any effort. You can attach the technical details to the stories and that that makes you remember them easier.
122 posts | registered
5. pyramidic wrote:
A key tool is a good debugger, particularly one that you can drop into some interactive mode.
Code is a dynamic system so, for me at least, seeing it run is key in building an organic understanding of how it works to achieve its aim.
Run in the debugger, put breakpoints and then step through at various levels, seeing how the data / UI changes. Static analysis needs to be done as well but running it really helps speed up the process.
Imagine trying to work out how the pistons in a 4-cylinder engine work by looking at the interconnections between them. It is almost impossible unless the documentation walks you through the cycle. But if you can watch it working in slow motion (ie in the debugger) then the function, sequencing, etc become clear.
I used to use the debugger a lot before, but for getting an overview of how things work, I feel it is more in the way now than usefull. To me it feels like that because I am much more familiar with software patterns and how people code, I tend to be able to see patterns by just scanning the code. Stepping around in the debugger would just be slower.
2020 posts | registered
6. Most large software systems generally (hopefully) have a test suite which exercises most of the functionality provided by the system. Skimming through the test cases should give you a pretty good idea of what the system does and how those functions are invoked from the outside in.
Stepping through particular test cases in a debugger can help illuminate how the code works and what the assertions mean, and you can step into or step over call sites according to how deep you feel you need to wade into the details.
But simply understanding the outermost layers of the system where it interacts with its runtime environment can go a long way toward resolving one of the key obstacles to understanding large software systems: the compile-time organization of classes/files in a codebase tends to be quite different structurally from how those objects interact at run-time.
As a developer, you must understand both the compile-time and run-time structure of the system, and the test suite is often the best bridge between those two domains.
269 posts | registered
7. Doxygen
8 posts | registered
8. The first question to ask is how much do you really need to know to solve the problem at hand? The next one is did anyone document stuff or name things in a way that might help you learn.
Sometimes you are learning a big project because you are the new team member getting initiated. Sometimes you are learning a big project becuause the team left and you were brought in to fix a bug or add a new feature, with the minimal of documentation being provided and no assumputions indicated. In the former situation help is at hand, while in the latter you are going to need to put on a brave face and start by working out how the program loads and where the code section your need to address it hiding.
Sometimes you are given an unfortunate case of variables being written a foreign language or even mixed language. I have been there, but luckily I had a grasp of the languages.
In the 'team has gone' scenario, tools you will need:
- patience
- paper (you will need to note down code flow and clues)
- a debugger, for when reading the code just isn't enough
- something to document your findings in a more permanent manner (if you are staying on the project, you will be helping yourself)
- a decompiler if the application uses a library from a defunct company
1466 posts | registered
9. Only two tools are needed: Eyes and brain.
As with all apps, start with the main/starting class, and follow the trails from there.
1817 posts | registered
10. Deeviant wrote:
The first thing to do is look at the docs. Know what the system does and why. Learn any specific domain knowledge that the system is dealing with.
Then learn the dev enviroment, code formatting rules, learn about release process, the CI system if any.
*Then*, you can dive into the code, and if you do all above, you should do OK.
This 1000 fold!
plus Get a full set of the programs/codes output under all possible conditions including all messages , numeric output etc
2378 posts | registered
11. What? An interesting question on a stack exchange site that hasn't been closed?
33 posts | registered
12. brotlifst wrote:
What? An interesting question on a stack exchange site that hasn't been closed?
It's too open-ended for definitive answer. Keeps it interesting :)
479 posts | registered
13. Shannara wrote:
Only two tools are needed: Eyes and brain.
As with all apps, start with the main/starting class, and follow the trails from there.
I agree. Just start reading all the code.
If it's a basic unix style input/output start reading from the input and read every like until you get to the output.
For GUI/Event driven software pick an event, like clicking or tapping a button, and read all the code.
It helps to have a text editor that can do project wide string matching quickly. Put your code on a RAM disk if you don't have an SSD. You should be able to search 30,000 source code files in 2 or 3 seconds.
1550 posts | registered
14. brotlifst wrote:
What? An interesting question on a stack exchange site that hasn't been closed?
Pretty much every question that appears on ars each weekend is so open ended I'm surprised it wasn't closed.
I know I would have voted to close this one. Stack Overflow is not a place for open discussion. It's a perfect question for Ars though!
1550 posts | registered
15. brotlifst wrote:
What? An interesting question on a stack exchange site that hasn't been closed?
Pretty much every question that appears on ars each weekend is so open ended I'm surprised it wasn't closed.
I know I would have voted to close this one. Stack Overflow is not a place for open discussion. It's a perfect question for Ars though!
Except this question was posted on Programmers, not Stack Overflow. Therefore, it was on topic. Perhaps you should know what you're voting about before voting?
1219 posts | registered
16. brotlifst wrote:
What? An interesting question on a stack exchange site that hasn't been closed?
Pretty much every question that appears on ars each weekend is so open ended I'm surprised it wasn't closed.
I know I would have voted to close this one. Stack Overflow is not a place for open discussion. It's a perfect question for Ars though!
I think I just learned something :O So only the odd ones make it over here.
I do wish the original poster visited here to tell us whether it was about a simple feature extension in a working, compiling, well documented code; or reviving a giant undocumented mess in some ancient language that did not even compile; or whatever else in between.
Last edited by pqr on Mon Feb 17, 2014 3:06 am
479 posts | registered
17. pqr wrote:
brotlifst wrote:
What? An interesting question on a stack exchange site that hasn't been closed?
Pretty much every question that appears on ars each weekend is so open ended I'm surprised it wasn't closed.
I know I would have voted to close this one. Stack Overflow is not a place for open discussion. It's a perfect question for Ars though!
I think I just learned something :O So only the odd ones make it over here.
No. The question did not come from Stack Overflow. It came from Programmers, part of the Stack Exchange network and a sister-site of Stack Overflow. It is for more conceptual type questions and often results in more open ended discussion.
1219 posts | registered
18. msm8bball wrote:
pqr wrote:
brotlifst wrote:
What? An interesting question on a stack exchange site that hasn't been closed?
Pretty much every question that appears on ars each weekend is so open ended I'm surprised it wasn't closed.
I know I would have voted to close this one. Stack Overflow is not a place for open discussion. It's a perfect question for Ars though!
I think I just learned something :O So only the odd ones make it over here.
No. The question did not come from Stack Overflow. It came from Programmers, part of the Stack Exchange network and a sister-site of Stack Overflow. It is for more conceptual type questions and often results in more open ended discussion.
Very informative, I could just copy my earlier line here :) And indeed
https://meta.stackoverflow.com/question ... k-exchange
479 posts | registered
19. comment the stuff I don't understand , publish to dev and see where it breaks
uncomment.
hey, it was a 10000 line java file I inherited.
45 posts | registered
20. Looking at the comments it seems people have very different ideas about what a 'large' code base is. A 10.000 lines code file is not a large code base, it just a big file. For me a large code base is at least several million lines of actual code, spread across multiple projects/libraries. Usually these large code bases will be quite old, meaning they are in maintenance mode, have virtually no up to date documentation, no (unit) tests, many different people worked on them, no consistent style, etc.
A debugger won't help you, it will just cost you time, as you are focusing on non-important details. You cannot grasp all the details of a large code base, it just does not fit in your head :) In stead you should break the system down in components/layers/whatever, and determine what the responsibilities are of these parts. What are the interfaces by which they communicate. Then you can pick a single part and dig deeper. Use common sense to determine when you know enough of the details. Use time boxing, to limit the amount of time spend on this.
What has helped me in the past is generating a graph that shows all the includes between folders (doxygen + graphviz). This will show you the real (including the unintentional ones) dependencies between components.
8 posts | registered
21. elmar_1 wrote:
Looking at the comments it seems people have very different ideas about what a 'large' code base is. A 10.000 lines code file is not a large code base, it just a big file. For me a large code base is at least several million lines of actual code, spread across multiple projects/libraries. [sic]
To get a sense of scale I checked the most recent Linux kernel (3.13.3) with sloccount - roughly 11 million lines of code. It also estimates 4,000 man-years and $500M USD. I would have considered half a million lines 'large' already. Does anyone know an example project in the few million lines of code range? So you have something like the complete GCC in mind or bigger.
EDIT: rough line counts for a couple projects:
* Linux kernel (3.13.3) - 11M lines
* GNU Compiler Collection (4.8.2) - 5M lines
* Wine (1.6.2) - 2.4M lines
* GNU C library (2.19) - 1M lines
* X.org server (1.12.2) - 400k lines
* OpenSSL (1.0.1f) - 350k lines
* Postfix (2.11.0) - 120k lines
* SpamAssassin (3.4.0) - 60k lines
* Procmail (3.22) - 12k lines
Last edited by pqr on Mon Feb 17, 2014 10:40 am
479 posts | registered
22. Every project I've come in on after it was pretty much developed has been a spaghetti code rambling mess. The programmer learned as they went and no pattern emerged. I've always found that it is best to look at the end product, find out what the result is supposed to be and work back from there. Yes, I look for project wide variables and define what they are, if the code is not commented well.
I have found that if you look at what the output should be first then it is easier to find out what the previous programmer was trying to achieve, even if they did not quite do it.
401 posts | registered
23. Mazarax wrote:
Doxygen
I came here to mention Doxygen myself. I will try to add a little more value that just the name of the tool ;-)
I think Doxygen works for C, C++ and C-alikes rather well (Java too, although it has its own tool.)
I got introduced to Doxygen when working on a massive C/C++ code base that was a real time embedded system. In the end I actually used it for assisting in generating useful documentation to replace some of the [useless but necessary for process] design docs that I had to create. I think this is really its intended purpose, to allow you to embed useful documentation with the code and be able to extract it for use documenting the code base.
However, as a means to get a quick understanding of a codebase you've never been in before, its not a bad tool. It's pretty easy to add some minimal Doxygen tags to the code to get it minimally processed by Doxygen, and what you will get as a result is a cross-referenced HTML file that allows you navigate from symbol directly into the definition of the symbol in the code.
122 posts | registered
24. Mazarax wrote:
Doxygen
I came here to mention Doxygen myself. I will try to add a little more value that just the name of the tool ;-)
I think Doxygen works for C, C++ and C-alikes rather well (Java too, although it has its own tool.)
I got introduced to Doxygen when working on a massive C/C++ code base that was a real time embedded system. In the end I actually used it for assisting in generating useful documentation to replace some of the [useless but necessary for process] design docs that I had to create. I think this is really its intended purpose, to allow you to embed useful documentation with the code and be able to extract it for use documenting the code base.
However, as a means to get a quick understanding of a codebase you've never been in before, its not a bad tool. It's pretty easy to add some minimal Doxygen tags to the code to get it minimally processed by Doxygen, and what you will get as a result is a cross-referenced HTML file that allows you navigate from symbol directly into the definition of the symbol in the code.
Add C# to the list of languages supported. It can even read traditional C# XML comments and create the documentation, rather than having to switch to the Doxygen commenting style. Since C# XML style comments are well supported in Visual Studio, this works pretty well.
1219 posts | registered
25. start by fully understanding the business requirements.
1502 posts | registered
You must to comment.
Need to register for a new account?
If you don't have an account yet it's free and easy.
Register
|
__label__pos
| 0.621273 |
Red Circute India Pvt Ltd
Network Infrastructure
What is a Network Architecture?
In Information Technology, the term network means at least two computer systems connected by either a cable or a wireless connection.
The simplest form of the network consists of two computers connected through a cable. This is called a peer-to-peer network.
There is no hierarchy in this kind of network, and both systems have equal access rights. Each computer system can access data of the other system and can also share resources such as disk space, applications, and peripheral devices.
Today’s networks are a bit complex and consist of many computers linked to each other. Systems with more than ten computers usually use client-server networks. Here, a central computer system known as a server provides resources to all other connected systems in the network; they are known as clients.
blockchain, block, chain-3019120.jpg
What are the tasks of a network?
The main task of a network is to provide all clients (individual computers or devices) with a single platform for exchanging data and sharing resources. This task enables the smooth functioning of everyday life in a modern world and it would be not possible without networks.
The network is useful when a project needs to be done by many teams and each team needs to access data from a central resource and also contribute to the resource that others can find useful. Without networks, it would be impossible to update any resource in real-time and it would also be time-consuming.
Advantages of networks:
• Sharing of data across teams/departments
• Sharing of resources across teams/departments
• Central control platform of data, apps, and programs
• Central storage of data
• Central backup of data
• Shared processing power and storage capacity
• Easy access & management of authorizations and responsibilities
Components of a Network System:
A typical network has 5 basic components, namely clients, servers, channels, interface devices, and operating systems.
Servers: Servers or Host computers are powerful computers that store data or applications and all the resources that are shared by other users within a network.
Clients: The client is the individual computer used by the users within the network to access the servers for shared resources (such as hard disks and printers). Thus any personal computer or peripheral device such as a printer is a client.
Channels: Channels are known as network circuits. It is the pathway over which information/data travels between the different computers (clients and servers) that comprise the network.
Interface devices: The devices that connect clients and servers to the channel are called interface devices. Modems and network interface cards are common examples.
Operating systems: This is the software that runs the Network systems. It serves the same purpose as any normal computer. It provides the user interface for accessing the data and resources.
blockchain, block, chain-3019120.jpg
What is a Passive Network?
A passive network is one of the most common types of network. It requires pre-designing and configuring the entire network infrastructure before any operation.
A node in the passive network will only perform actions that are configured within it. When packets of data are transferred over a passive network, they can only transfer the data and not process it.
Passive Network Hardware includes:
• Cables (fibre optic cable, coaxial cables)
• Connectors.
• Switchboards.
• Clutches.
• Plugs.
What is Active Network?
Active networking means that packets of data flowing through a network can dynamically modify the operation of that network.
It consists of hardware that supports switching or routing along with executing a code within active packets over the network.
Active Network Hardware includes:
• Switches.
• Repeater.
• Hub.
• Bridge.
• Routers.
• Print Servers.
• Access points (AP)
• Power E-Net.
Active network components include elements and/or devices that are capable of providing or delivering energy in the circuit. Passive network components are only capable of storing energy in the form of current or voltage in the circuit
Differences between Active and Passive Network Components
Functions
Active components
Passive Components
Energy use
Can produce and deliver energy in the form of current or voltage
Can only utilize and store energy in the form of current or voltage
Power Gain
Capable of providing power gain
Not capable
Flow of Current
Can control the flow of current
Cannot control the flow of current
Energy
Energy donors
Energy acceptors
Power source
Require an external source
Do not require an external source
Examples
Diodes, transistors, integrated circuits, Silicon-controlled rectifiers, etc.
Resistors, capacitors, inductors, clutches, plugs, etc.
Below are some of the Network Infrastructure we built for our clients.
Red Circute Passive network
Passive network
Passive network
Red Circute Passive Network
Passive network
passive network
Why choose Red Circute India Pvt Ltd?
• Assess your network needs
• Build and integrate old systems into the network
• Regular maintenance of networks
• Setup network security
• Quick response customer service
• Budget friendly
• Free consultation
Can Red Circute help us set up our Network Infrastructure?
We have expertise in setting up Network Infrastructure for our clients. We build both passive and active networks for your office, commercial establishment, and industry.
Fix an appointment today to know how we can proceed forward.
|
__label__pos
| 0.925222 |
Using the Elastic Beanstalk Python platform - AWS Elastic Beanstalk
Using the Elastic Beanstalk Python platform
The AWS Elastic Beanstalk Python platform is a set of platform versions for Python web applications that can run behind a proxy server with WSGI. Each platform branch corresponds to a version of Python, such as Python 3.8.
Starting with Amazon Linux 2 platform branches, Elastic Beanstalk provides Gunicorn as the default WSGI server.
You can add a Procfile to your source bundle to specify and configure the WSGI server for your application. For details, see Configuring the WSGI server with a Procfile.
You can use the Pipfile and Pipfile.lock files created by Pipenv to specify Python package dependencies and other requirements. For details about specifying dependencies, see Specifying dependencies using a requirements file.
Elastic Beanstalk provides configuration options that you can use to customize the software that runs on the EC2 instances in your Elastic Beanstalk environment. You can configure environment variables needed by your application, enable log rotation to Amazon S3, and map folders in your application source that contain static files to paths served by the proxy server.
Configuration options are available in the Elastic Beanstalk console for modifying the configuration of a running environment. To avoid losing your environment's configuration when you terminate it, you can use saved configurations to save your settings and later apply them to another environment.
To save settings in your source code, you can include configuration files. Settings in configuration files are applied every time you create an environment or deploy your application. You can also use configuration files to install packages, run scripts, and perform other instance customization operations during deployments.
Settings applied in the Elastic Beanstalk console override the same settings in configuration files, if they exist. This lets you have default settings in configuration files, and override them with environment-specific settings in the console. For more information about precedence, and other methods of changing settings, see Configuration options.
For Python packages available from pip, you can include a requirements file in the root of your application source code. Elastic Beanstalk installs any dependency packages specified in a requirements file during deployment. For details, see Specifying dependencies using a requirements file.
For details about the various ways you can extend an Elastic Beanstalk Linux-based platform, see Extending Elastic Beanstalk Linux platforms.
Configuring your Python environment
The Python platform settings let you fine-tune the behavior of your Amazon EC2 instances. You can edit the Elastic Beanstalk environment's Amazon EC2 instance configuration using the Elastic Beanstalk console.
Use the Elastic Beanstalk console to configure Python process settings, enable AWS X-Ray, enable log rotation to Amazon S3, and configure variables that your application can read from the environment.
To configure your Python environment in the Elastic Beanstalk console
1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region.
2. In the navigation pane, choose Environments, and then choose the name of your environment from the list.
Note
If you have many environments, use the search bar to filter the environment list.
3. In the navigation pane, choose Configuration.
4. In the Software configuration category, choose Edit.
Python settings
• Proxy server – The proxy server to use on your environment instances. By default, nginx is used.
• WSGI Path – The name of or path to your main application file. For example, application.py, or django/wsgi.py.
• NumProcesses – The number of processes to run on each application instance.
• NumThreads – The number of threads to run in each process.
AWS X-Ray settings
Log options
The Log Options section has two settings:
• Instance profile– Specifies the instance profile that has permission to access the Amazon S3 bucket associated with your application.
• Enable log file rotation to Amazon S3 – Specifies whether log files for your application's Amazon EC2 instances are copied to the Amazon S3 bucket associated with your application.
Static files
To improve performance, you can use the Static files section to configure the proxy server to serve static files (for example, HTML or images) from a set of directories inside your web application. For each directory, you set the virtual path to directory mapping. When the proxy server receives a request for a file under the specified path, it serves the file directly instead of routing the request to your application.
For details about configuring static files using configuration files or the Elastic Beanstalk console, see Serving static files.
By default, the proxy server in a Python environment serves any files in a folder named static at the /static path. For example, if your application source contains a file named logo.png in a folder named static, the proxy server serves it to users at subdomain.elasticbeanstalk.com/static/logo.png. You can configure additional mappings as explained in this section.
Environment properties
You can use environment properties to provide information to your application and configure environment variables. For example, you can create an environment property named CONNECTION_STRING that specifies a connection string that your application can use to connect to a database.
Inside the Python environment running in Elastic Beanstalk, these values are accessible using Python's os.environ dictionary. For more information, see http://docs.python.org/library/os.html.
You can use code that looks similar to the following to access the keys and parameters:
import os endpoint = os.environ['API_ENDPOINT']
Environment properties can also provide information to a framework. For example, you can create a property named DJANGO_SETTINGS_MODULE to configure Django to use a specific settings module. Depending on the environment, the value could be development.settings, production.settings, etc.
See Environment properties and other software settings for more information.
Python configuration namespaces
You can use a configuration file to set configuration options and perform other instance configuration tasks during deployments. Configuration options can be defined by the Elastic Beanstalk service or the platform that you use and are organized into namespaces.
The Python platform defines options in the aws:elasticbeanstalk:environment:proxy, aws:elasticbeanstalk:environment:proxy:staticfiles, and aws:elasticbeanstalk:container:python namespaces.
The following example configuration file specifies configuration option settings to create an environment property named DJANGO_SETTINGS_MODULE, choose the Apache proxy server, specify two static files options that map a directory named statichtml to the path /html and a directory named staticimages to the path /images, and specify additional settings in the aws:elasticbeanstalk:container:python namespace. This namespace contains options that let you specify the location of the WSGI script in your source code, and the number of threads and processes to run in WSGI.
option_settings: aws:elasticbeanstalk:application:environment: DJANGO_SETTINGS_MODULE: production.settings aws:elasticbeanstalk:environment:proxy: ProxyServer: apache aws:elasticbeanstalk:environment:proxy:staticfiles: /html: statichtml /images: staticimages aws:elasticbeanstalk:container:python: WSGIPath: ebdjango.wsgi:application NumProcesses: 3 NumThreads: 20
Notes
• If you're using an Amazon Linux AMI Python platform version (preceding Amazon Linux 2), replace the value for WSGIPath with ebdjango/wsgi.py. The value in the example works with the Gunicorn WSGI server, which isn't supported on Amazon Linux AMI platform versions.
• In addition, these older platform versions use a different namespace for configuring static files—aws:elasticbeanstalk:container:python:staticfiles. It has the same option names and semantics as the standard static file namespace.
Configuration files also support several keys to further modify the software on your environment's instances. This example uses the packages key to install Memcached with yum and container commands to run commands that configure the server during deployment:
packages: yum: libmemcached-devel: '0.31' container_commands: collectstatic: command: "django-admin.py collectstatic --noinput" 01syncdb: command: "django-admin.py syncdb --noinput" leader_only: true 02migrate: command: "django-admin.py migrate" leader_only: true 03wsgipass: command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf' 99customize: command: "scripts/customize.sh"
Elastic Beanstalk provides many configuration options for customizing your environment. In addition to configuration files, you can also set configuration options using the console, saved configurations, the EB CLI, or the AWS CLI. See Configuration options for more information.
|
__label__pos
| 0.806535 |
How to rename a column name in the scriptfeild?
Hello Team,
how to rename a column using scripted field in kibana.
if possible need a solution or alternate solution for this.
Thanks.
Dharani
Hi,
you can add a new field with a different name using scripted fields, but not altering any existing field name with scripted fields.
If it's ok to rename it, just create a new scripted field, with the same type as the existing one and configure the painless script to return the original field like doc['user.keyword'] for example
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.
|
__label__pos
| 0.996004 |
Matthias Hoffmann - Tcl-Code-Snippets - Misc - Bgexec
Starting Process-Pipelines in the backgound, collecting their output via configurable callback. Requires the event loop to be running.
For a Object oriented variant, scroll down...
History until 1.7
• 1.8: fixed version check [expr {[info patchlevel] >= "8.4.7"}] which does of course not always gave the right result ;-)
• 1.9: Optionally signal EOF to given eofHandler
• 1.10: fixed bug: number_of_processes remain incremented even if "open |..." failed. Now incr late.
• 1.11: llength instead of string length for some tests. Calling EOF-handler when processing terminates via readhandler-break.
• 1.12: bugfix: preventing invalid processcounter w/timeout (I hope). Only used a few hours...
• 1.13: eof handler not fired if user readhandler breaks. logik of user timeout handler now equals user read handler.
• 1.14: see script header
• 1.15: Optional Err Handler. Internal changes.
BgExec-Procedure v1.16
################################################################################
# Modul : bgexec.tcl 1.16 #
# Changed : 16.10.2015 #
# Purpose : running processes in the background, catching their output via #
# event handlers #
# Author : M.Hoffmann #
# Hinweise : >&@ and 2>@stdout don't work on Windows. A work around probably #
# could be using a temporay file. Beginning with Tcl 8.4.7 / 8.5 #
# there is another (yet undocumented) way of redirection: 2>@1. #
# History : #
# 19.11.03 v1.0 1st version #
# 20.07.04 v1.1 callback via UPLEVEL #
# 08.09.04 v1.2 using 2>@1 instead of 2>@stdout if Tcl >= 8.4.7; #
# timeout-feature #
# 13.10.04 v1.3 bugfix in bgExecTimeout, readHandler is interruptable #
# 18.10.04 v1.4 bugfix: bgExecTimeout needs to be canceled when work is done; #
# some optimizations #
# 14.03.05 v1.4 comments translated to english #
# 17.11.05 v1.5 If specidied, a user defined timeout handler `toExit` runs in #
# case of a timeout to give chance to kill the PIDs given as #
# arg. Call should be compatible (optional parameter). #
# 23.11.05 v1.6 User can give additional argument to his readhandler. #
# 03.07.07 v1.7 Some Simplifications (almost compatible, unless returned #
# string where parsed): #
# - don't catch error first then returning error to main... #
# 08.10.07 v1.8 fixed buggy version check! #
# 20.02.12 v1.9 Optionally signal EOF to eofHandler. #
# 13.09.14 v1.10 bugfix: incr myCount later (in case of an (open)error it was #
# erranously incremented yet) #
# 22.02.15 v1.11 llength instead of string length for some tests. Calling EOF- #
# handler when processing terminates via readhandler-break. #
# 28.02.15 v1.12 bugfix: preventing invalid processcounter w/timeout (I hope). #
# 02.03.15 v1.13 eof handler not fired if user readhandler breaks. #
# Logic of user timeout handler now equals user read handler. #
# 21.03.15 v1.14 Testing EOF right after read (man page); -buffering line. #
# 21.03.15 v1.15 CATCHing gets. New optional errHandler. Logic changed. #
# 16.10.15 v1.16 Bugfix: missing return after user-readhandler CATCHed. #
# ATTENTION: closing a pipe leads to error broken pipe if the opened process #
# itself is a tclsh interpreter. Currently I don't know how to #
# avoid this without killing the process via toExit before closing #
# the pipeline. #
# - This Code uses one global var, the counter of currently started pipelines. #
# TODO: Namespace or OO to clean up overall design. #
################################################################################
# ATTENTION: This is the last version which maintains upward compatibility (I hope)
package provide bgexec 1.16
#-------------------------------------------------------------------------------
# If the <prog>ram successfully starts, its STDOUT and STDERR is dispatched
# line by line to the <readHandler> (via bgExecGenericHandler) as last arg. The
# global var <pCount> holds the number of processes called this way. If a <timeout>
# is specified (as msecs), the process pipeline will be automatically closed after
# that duration. If specified, and a timeout occurs, <toExit> is called with the
# PIDs of the processes right before closing the process pipeline.
# Returns the handle of the process-pipeline.
#
proc bgExec {prog readHandler pCount {timeout 0} {toExit ""} {eofHandler ""} {errHandler ""}} {
upvar #0 $pCount myCount
set p [expr {[lindex [lsort -dict [list 8.4.7 [info patchlevel]]] 0] == "8.4.7"?"| $prog 2>@1":"| $prog 2>@stdout"}]
set pH [open $p r]
# Possible Problem if both after event and fileevents are delayed (no event loop) until timeout fires;
# ProcessCount is then decremented before ever incremented. So increment ProcessCount early!
set myCount [expr {[info exists myCount]?[incr myCount]:1}]; # precaution < 8.6
fconfigure $pH -blocking 0 -buffering line
set tID [expr {$timeout?[after $timeout [list bgExecTimeout $pH $pCount $toExit]]:{}}]
fileevent $pH readable [list bgExecGenericHandler $pH $pCount $readHandler $tID $eofHandler $errHandler]
return $pH
}
#-------------------------------------------------------------------------------
proc bgExecGenericHandler {chan pCount readHandler tID eofHandler errHandler} {
upvar #0 $pCount myCount
if {[catch {gets $chan line} result]} {
# read error -> abort processing. NOTE eof-handler NOT fired!
after cancel $tID
catch {close $chan}
incr myCount -1
if {[llength $errHandler]} {
catch {uplevel $errHandler $chan $result}
}
return
} elseif {$result >= 0} {
# we got a whole line
lappend readHandler $line; # readhandler doesn't get the chan...
if {[catch {uplevel $readHandler}]} {
# user-readHandler ended with errorcode which means here
# "terminate the processing". NOTE eof-handler NOT fired!
after cancel $tID
catch {close $chan}
incr myCount -1
return
}
}; # not enough data (yet)
if {[eof $chan]} {
after cancel $tID; # terminate Timeout, no longer needed!
catch {close $chan}; # automatically deregisters the fileevent handler
incr myCount -1
if {[llength $eofHandler]} {
catch {uplevel $eofHandler $chan}; # not called on timeout or user-break
}
}
}
#-------------------------------------------------------------------------------
proc bgExecTimeout {chan pCount toExit} {
upvar #0 $pCount myCount
if {[llength $toExit]} {
# The PIDs are one arg (list)
if {[catch {uplevel [list {*}$toExit [pid $chan]]}]} {
# user-timeoutHandler ended with error which means here
# "we didn't kill the processes" (such a kill would have
# normally triggered an EOF, so no other cleanup would be
# required then), so end the processing explicitely and do
# the cleanup. NOTE eof-handler NOT fired!
catch {close $chan}
incr myCount -1
}
} else {
# No user-timeoutHandler exists, we must cleanup anyway
# NOTE eof-handler NOT fired!
catch {close $chan}
incr myCount -1
}
}
#===============================================================================
So, what is this program for at all?
• keep the user interface responding!
• parallelizing processes
A user interface could be
• A long running command line procedure
• A long running CGI proc
• A Tk interface
If you simply exec a long running external program, waiting around for it's output, the user interface will be blocked meanwhile. If you start it in the background with & and some redirection, you can continue with your process, but you have to detect and wait until the external program finishes before you can open the wanted file(s) with the program feedback (output). It would be better if the user begins receiving feedback as soon as possible, especially with webservers/cgi-scripts - that's what I wrote this module for originally. You quickly see the first output record of the called program. After all it's just a wrapper for open |prog and fileevent... However, there are some performance drawbacks driving programs this way.
Remarks
• Due to TCL's lack of process killing cababilities, it's likely that processes continue to run in some state where they lost their stdout/stderr-channels after a timeout arised. Closing the TCL-channels doesn't seem to help. So it is better to additionally use an external process killer (many of them are available on windows). The "kill" subcommand of the TclX package works fine for this. With version 1.5, this can be triggered automatically via the new UserExit toExit , so the user can decide which process killer to use.
• As of tcl 8.4.11.2, the 2>@1 still seems to be undocumented.... As of Tcl 8.4.11.2, 2>@1 is still undocumented.... (still seeing and hoping)
Remark: The following procs are not always up to date'
Testproc bgexec1.6_test.tcl
# Testing and demonstrating bgExec v1.6
# 23.11.2005
lappend auto_path .
package require bgexec 1.6
proc dummy {userdata what} {
# data from delayout1 & 3
puts >>>$what<<<$userdata
}
proc dummy2 {userdata what} {
# data from delayout2
puts >>>$what<<<$userdata
return -code error {Break triggerd via ReadHandler}
}
proc time {} {
puts [clock format [clock seconds]]
after 1000 [list time]
}
proc toExit {PIDs} {
puts "Timeoutexit: $PIDs; trying to kill process"
# avoid 'broken pipe'
foreach PID $PIDs {
# for PV, see http://www.xmlsp.com/pview/prcview.htm
catch {exec -- [auto_execok pv] -k -f -i $PID} rc
puts $rc
}
}
after 1000 [list time]
set h1 [bgExec "[info nameofexe] delayout1.tcl" [list dummy *1*] pCount]
puts "Handle: $h1"
catch {puts [pid $h1]}
set h2 [bgExec "[info nameofexe] delayout2.tcl" [list dummy2 *2*] pCount]
puts "Handle: $h2"
catch {puts [pid $h2]}
set h3 [bgExec "[info nameofexe] delayout3.tcl" [list dummy *3*] pCount 5000 toExit]
puts "Handle: $h3"
catch {puts [pid $h3]}
puts "pCount: $pCount"
# alternative: vwait pCount (problematic as pCount has to be GLOBAL)
while {$pCount > 0} {
vwait pCount
puts "pCount: $pCount"
# or: update; # not: update idletasks!
}
puts "pCount (after loop): $pCount"
And the three-Testsubprocs:
delayout1.tcl:
puts "1 output to STDOUT"
puts "1 output to STDOUT"
puts "1 output to STDOUT"
puts stderr "1 output to STDERR"
puts stderr "1 output to STDERR"
puts stderr "1 output to STDERR - a normal end via EOF"
delayout2.tcl:
puts "2 output to STDOUT - aborts after this, via UserReadHandler"
puts "2 output to STDOUT"
puts "2 output to STDOUT"
puts stderr "2 output to STDERR"
puts stderr "2 output to STDERR"
puts stderr "2 output to STDERR"
delayout3.tcl:
puts "3 output to STDOUT"
puts "3 output to STDOUT"
puts "3 output to STDOUT"
puts stderr "3 output to STDERR"
puts stderr "3 output to STDERR"
puts stderr "3 output to STDERR"
after 3000
puts "3 output to STDOUT"
puts "3 output to STDOUT"
puts "3 output to STDOUT"
puts stderr "3 output to STDERR"
puts stderr "3 output to STDERR"
puts stderr "3 output to STDERR"
after 3000
puts "3 output to STDOUT"
puts "3 output to STDOUT"
puts "3 output to STDOUT"
puts stderr "3 output to STDERR"
puts stderr "3 output to STDERR"
puts stderr "3 output to STDERR - this is not displayed anymore - script's ending somewhere before after 5000s via timeout"
To test everything:
• do tclsh bgexec1.6_test.tcl
Output should appear similar to the following:
Handle: file798570
1336
Handle: file841908
1368
2 Output To STDERR
2 Output To STDERR
2 Output To STDERR
Handle: file840d08
1380
pCount: 3
>>>2 Output To STDOUT - should be the last one due to Break via UserReadHandler<
<<*2*
pCount: 2
>>>1 Output To STDOUT<<<*1*
>>>1 Output To STDOUT<<<*1*
>>>1 Output To STDOUT<<<*1*
1 Output To STDERR
1 Output To STDERR
1 Output To STDERR - should normally end via EOF
pCount: 1
>>>3 Output To STDOUT<<<*3*
>>>3 Output To STDOUT<<<*3*
>>>3 Output To STDOUT<<<*3*
3 Output To STDERR
3 Output To STDERR
3 Output To STDERR
Wed Nov 23 09:50:17 Westeuropäische Normalzeit 2005
Wed Nov 23 09:50:18 Westeuropäische Normalzeit 2005
Wed Nov 23 09:50:19 Westeuropäische Normalzeit 2005
>>>3 Output To STDOUT<<<*3*
3 Output To STDERR
3 Output To STDERR
3 Output To STDERR
>>>3 Output To STDOUT<<<*3*
>>>3 Output To STDOUT<<<*3*
Wed Nov 23 09:50:20 Westeuropäische Normalzeit 2005
Wed Nov 23 09:50:21 Westeuropäische Normalzeit 2005
Timeoutexit: 1380; trying to kill process
Killing '1380'
tclsh.exe (1380)
pCount: 0
pCount (after loop): 0
LES: This looks interesting. Would someone be willing to translate and rewrite the comments and strings in English? M.H.: translated! I hope my english is not too bad to understand the code... If someone would review the code and we are able to make it fool proof, it would be a nice addition to tcllib...
How does this compare to BLT's bgexec function? MHo I don't know; it must be similar. But I don't need BLT for my bgExec, so my scripts keep small...
US BLT's bgexec belongs into Tcl's core. For ages. MHo But it's not there yet...
Test whats happening if calling a GUI-App through bgExec:
# Testing and demonstrating bgExec v1.6, (2)
# Test what happens if calling a 32bit-GUI-Tool
# 23.11.2005
lappend auto_path [pwd]
package require bgexec 1.6
proc dummy {what} {
puts >>>$what<<<
}
set h1 [bgExec notepad.exe dummy pCount]
vwait pCount
It seems to work! The program is blocking, though.
Remarks: unfortunally, STDERR-CATCHing with 2>@ doesn't seems to work with Windows 2000...... For 8.4.7 and above, there is or will be a fix, see https://www.tcl-lang.org/cgi-bin/tct/tip/202.html .
MB : Inspired by this package, I extended the bgexec command features and moved it into a SNIT class. The code is publicly available in the Tclrep project, in the module jobexec :
http://tclrep.cvs.sourceforge.net/viewvc/tclrep/modules/jobexec/
The jobexec class provides the method waitend, which allows to synchronize different jobs which were executed in background (this feature is not available in the current bgexec implementation). The waitend method is based on the vwait Tcl command presented earlier on this page.
I also developped the jobscheduler class, which allows to schedule a list of jobs, then to executes them all in background until all are processed. The algorithm is so that at most nbprocs jobs are running at the same time. See jobscheduler.tcl in the Tclrep project for more details.
MHo 2017-07-11: Here's my OO-variant of bgExec. There are some handling differences, see below.
oo::class create bgExec {
self variable objNr
self method nextObjNr {} {incr objNr}
self method activeObjects {} {info class instances bgExec}
self method activeObjectsCount {} {llength [my activeObjects]}; # := vwaitvar
###
# Generische Handler (werden über Fileevent gerufen, müssen also public sein...)
# $obj wird an den Userhandler übergeben, da hierüber bei Bedarf zusätzliche
# Daten gelesen werden können (siehe getInfos).
# Signatur UserHandler: proc callback {obj type {data ""}}.
self method onFileEvent {obj chan callback} {
if {[catch {gets $chan line} result]} {
$obj cancelTimeout
catch {uplevel 1 [list {*}$callback $obj error $result]}; # Fehler vor Close melden
$obj destroy
} elseif {$result >= 0} {
catch {uplevel 1 [list {*}$callback $obj data $line]} ; # Daten vorhanden
} else {
catch {uplevel 1 [list {*}$callback $obj nodata]} ; # keine Daten vorhanden (Idle)
}
if {[eof $chan]} {
$obj cancelTimeout
catch {uplevel 1 [list {*}$callback $obj eof]} ; # End-of-File vor Close melden
$obj destroy
}
}
self method onTimeout {obj callback pids} {
catch {uplevel 1 [list {*}$callback $obj timeout $pids]} ; # Timeout vor Close melden
$obj destroy
}
variable pipe cb chan timeoutID userData objNr waitvar
constructor {pipeline callback args} {
set options [dict create -timeout 0 -userdata "" -fconf "" -vwaitvar ::bgExecVwaitVar]
set keys [dict keys $options]
foreach {arg val} $args {
set key [lsearch -glob -nocase -inline $keys $arg*]
if {$key ne ""} {
dict set options $key $val
} else {
return -code error "invalid option. Allowed are: $keys."
}
}
set pipe $pipeline
set cb $callback
set fconf [dict merge {-blocking 0 -buffering line} [dict get $options -fconf]]
set chan [open "| $pipeline 2>@1" r]; # aktuell wieder nur READ-Channel
fconfigure $chan {*}$fconf
if {[dict get $options -timeout]} {
set timeoutID [after [dict get $options -timeout] [list bgExec onTimeout [self] $callback [pid $chan]]]
} else {
set timeoutID ""
}
set waitvar [dict get $options -vwaitvar]
incr $waitvar
set userData [dict get $options -userdata]
set objNr [bgExec nextObjNr]
fileevent $chan readable [list bgExec onFileEvent [self] $chan $callback]
}
destructor {
my cancelTimeout
catch {close $chan}; # falls nicht bereits explizit getätigt (catch erforderlich?)
incr $waitvar -1
}
method getInfos {} {
return [list $objNr $chan $pipe $userData $waitvar $timeoutID]
}
method cancelTimeout {} {
if {$timeoutID ne ""} {
after cancel $timeoutID
}
}
}
Noticable differences from the non-OO-bgExec (besides the fact that this uses TclOO ;-), most of which I see as enhancements, are:
• Only one usercallback. The callback can decide what to do by means of an type-argument (data, nodata, error, eof)
• For now, no clean possibility to interrupt the processing from the outside (other than calling $obj destroy...)
• Possibility to assign "userdata" to an bgExec instance (for any purpose)
• Possibility to specify fconfigure-options for the open channel
• when the user callback is called in case of error or eof, the channel isn't already closed
• From within the callback, the mainprog can read some additional state data via [getInfo] (so not every peace have to be transferred via proc args)
• The constructor takes only two required parameters, the others are optional an can be specified via -key value-syntax in any order (keys can be shortend)
• In Case of a timeout, the PID(s) of the timed out process(es) is/are delivered to the callback (for killing, etc.)
I tried to use a class variable for counting instances to vwait upon, but I didn't succeed. So again, one have to specify a global variable (default name ::bgExecVwaitVar).
Here's some test script (to be called with a milliseconds-timeout-value):
package require twapi; # optional
proc cb {dummy obj typ {data ""}} {
lassign [$obj getInfos] objNr chan pipe userData waitvar timeoutID
switch -nocase $typ {
"eof" {
set PIDs [pid $chan]
catch {twapi::get_process_handle [lindex $PIDs end]} sysHandle
catch {twapi::get_process_exit_code $sysHandle} sysRC
puts "$objNr <EOF>, SysRC=$sysRC"
}
"timeout" {
puts "$objNr <TIMEOUT>, PID(s)=$data"
}
"data" {
puts "$objNr $data (userData=$userData, objNr=$objNr, dummy=$dummy, chan=$chan, pipe=$pipe, after=$timeoutID)"
}
"nodata" {
puts "$objNr <IDLE>"
}
default {
puts "<Fehler:> $data"
}
}
}
for {set i 1} {$i <= 3} {incr i} {
set to [expr {[lindex $argv 0]+30}]
puts "handle -> [bgExec new "tclkitsh emitter.tcl $i" [list cb dummyArg] -user XYZ -t $to]"
puts "count -> [bgExec activeObjectsCount]"
puts "objects -> [bgExec activeObjects]"
puts "afterIDs -> [after info]"
puts "waitVar -> $::bgExecVwaitVar"
}
puts "Entering event loop..."
while {$::bgExecVwaitVar > 0} {
vwait ::bgExecVwaitVar
puts "count -> [bgExec activeObjectsCount]"
puts "objects -> [bgExec activeObjects]"
puts "afterIDs -> [after info]"
puts "waitVar -> $::bgExecVwaitVar"
}
bgExec new "tclkitsh emitter.tcl 99" -wrongparm falsch
Test output looks like this:
d:\home\Hoffmann\pgm\tcl\usr\Tst\ooBgExec>tclkitsh ooBgExec.tcl 170
handle -> ::oo::Obj26
count -> 1
objects -> ::oo::Obj26
afterIDs -> after#0
waitVar -> 1
handle -> ::oo::Obj27
count -> 2
objects -> ::oo::Obj26 ::oo::Obj27
afterIDs -> after#1 after#0
waitVar -> 2
handle -> ::oo::Obj28
count -> 3
objects -> ::oo::Obj26 ::oo::Obj27 ::oo::Obj28
afterIDs -> after#2 after#1 after#0
waitVar -> 3
Entering event loop...
3 3 - Zeile 1 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 3 - Zeile 2 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 3 - Zeile 3 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 3 - Zeile 4 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 3 - Zeile 5 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 3 - Zeile 6 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 3 - Zeile 7 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 3 - Zeile 8 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 3 - Zeile 9 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 <IDLE>
3 3 - Zeile 10 (userData=XYZ, objNr=3, dummy=dummyArg, chan=file3c093e0, pipe=tclkitsh emitter.tcl 3, after=after#2)
3 <EOF>, SysRC=3
count -> 2
objects -> ::oo::Obj26 ::oo::Obj27
afterIDs -> after#1 after#0
waitVar -> 2
2 2 - Zeile 1 (userData=XYZ, objNr=2, dummy=dummyArg, chan=file3c091e0, pipe=tclkitsh emitter.tcl 2, after=after#1)
2 2 - Zeile 2 (userData=XYZ, objNr=2, dummy=dummyArg, chan=file3c091e0, pipe=tclkitsh emitter.tcl 2, after=after#1)
1 <TIMEOUT>, PID(s)=6308
count -> 1
objects -> ::oo::Obj27
afterIDs -> after#1
waitVar -> 1
2 2 - Zeile 3 (userData=XYZ, objNr=2, dummy=dummyArg, chan=file3c091e0, pipe=tclkitsh emitter.tcl 2, after=after#1)
2 <TIMEOUT>, PID(s)=4604
count -> 0
objects ->
afterIDs ->
waitVar -> 0
invalid option. Allowed are: -timeout -userdata -fconf -vwaitvar.
while executing
"bgExec new "tclkitsh emitter.tcl 99" -wrongparm falsch"
(file "ooBgExec.tcl" line 122)
d:\home\Hoffmann\pgm\tcl\usr\Tst\ooBgExec>
The program emitter.tcl is as follows:
for {set i 1} {$i < 10} {incr i} {
puts [format {%s - Zeile %2i} [lindex $argv 0] $i]
}
puts -nonewline [format {%s - Zeile %2i} [lindex $argv 0] $i]
exit [lindex $argv 0]
The code is also available at http://chiselapp.com/user/MHo/repository/tcl-modules/index
[ Category Package | Category Interprocess Communication ]
|
__label__pos
| 0.96458 |
HOW-TO:Install XBMC on Apple TV 1 (Linux)
From XBMC
Revision as of 11:22, 1 May 2012 by Norwix (Talk | contribs)
Jump to: navigation, search
See also: Apple TV 1 and HOW-TO:Install XBMC on Apple TV 1 (original OS)
Apple TV 1 (silver) is no longer available from Apple, but can be purchased from alternative sources (eBay, kijiji, craigslist, etc.).
It is highly recommended that you replace the WiFi card with a Broadcom Crystal HD to enable playback of HD videos.
Contents
1 Before installing a replacement OS
Warning Before installing a Linux-based OS, you will want to make sure that your ATV1 has been updated to the original ATV OS 3.0.2 at least once in order to flash the HDMI controller firmware. You also want to change the original ATV OS settings to "RGB High" in the AV settings.
2 Crystalbuntu
See also: Crystalbuntu
Crystalbuntu is probably the best and easiest option for running a linux-based OS with XBMC on the Apple TV 1. The system updates itself and boots directly into XBMC.
2.1 Installing from Linux or Windows
If you are using a Linux or Windows computer to prepare your USB install drive, you can use the GUI installer
2.2 Installing from Mac OS X
If you are using Mac OS X to prepare your USB install drive, use the following install guides:
To run off a USB drive and not touch the internal HDD:
Great for dual booting if you want to still use original ATV OS features, or to just test Crystalbuntu out.
1. Download http://download.stmlabs.com/atv-images/ubuntu/hardy/usb/USB.img.gz
2. Open Terminal, navigate to the downloaded file, and enter the following command followed by return:
gunzip USB.img.gz
3. Once back to the command prompt, continue with:
diskutil list
4. This will list all connected memory devices. Look for the one that is your USB stick. Normally, with no other USB drives connected, it is listed as disk1. If your USB drive shows up as something else then you must replace that number in the instructions below. Failure to do so will likely result in data loss.
5. Enter these commands in your Terminal window, in sequence:
diskutil umountDisk /dev/disk1
dd if=USB.img of=/dev/rdisk1
6. Wait for the process to finish. It may take a while.
7. Remove the USB drive and stick it into the ATV1, then power the ATV1 on (or reboot it).
8. The first time the drive is ran it will do some first-time installation things. This only happens the very first time the new USB drive is ran.
9. The ATV1 should now reboot itself, and you should be now using XBMC via Crystalbuntu via USB drive.
To make an install USB drive that will erase the internal hard drive and install Crystalbuntu:
1. Download http://download.stmlabs.com/atv-images/installer/installer.img.gz
2. Open Terminal, navigate to the downloaded file, and enter the following command followed by return:
gunzip installer.img.gz
diskutil list
3. This will list all connected memory devices. Look for the one that is your USB stick. Normally, with no other USB drives connected, it is listed as disk1. If your USB drive shows up as something else then you must replace that number in the instructions below. Failure to do so will likely result in data loss.
4. Enter these commands in your Terminal window, in sequence:
diskutil umountDisk /dev/disk1
dd if=installer.img of=/dev/rdisk1
5. Wait for the process to finish. It may take a while.
6. Remove the USB drive and take it to the Apple TV 1, along with a USB hub and a USB keyboard
7. Plug in the USB hub into the Apple TV 1, and then plug the USB drive and USB keyboard into the hub
8. Boot the Apple TV 1 and wait until it gives you an error.
9. Log-in using atv as both the username and password
10. Enter the following command:
sudo -s
11. You will be asked for a password, again, which is still atv
12. Enter the following commands:
cd /
echo ubuntu > .distro
reboot
13. Wait for the initial installation process to finish. It will ask you to remove the USB drive and reboot. Do so.
14. Wait for the rest of the installation process to finish and soon you will be using XBMC via Crystalbuntu on the internal hard drive
2.3 Alternative guides
3 OpenELEC
Incomplete.png INCOMPLETE:
This page or section is incomplete. Please add information or correct uncertain data which is marked with a ?
See also: OpenELEC
Make sure to select the Apple TV build.
4 Manual installations
Personal tools
Namespaces
Variants
Actions
Navigation
Wiki help
Google Search
Toolbox
|
__label__pos
| 0.645855 |
Guest
If a,b, c, d are positive real numbers such that a + b + c + d = 2, then M = (a + b) (c + d) satisfies the relation : (A) 0 ≤ M ≤ 1 (B) 1 ≤ M ≤ 2 (C) 2 ≤ M ≤ 3 (D) 3 ≤ M ≤ 4
If a,b, c, d are positive real numbers such that a + b + c + d = 2, then M = (a + b) (c + d) satisfies the relation :
(A) 0 ≤ M ≤ 1 (B) 1 ≤ M ≤ 2
(C) 2 ≤ M ≤ 3 (D) 3 ≤ M ≤ 4
Grade:12
1 Answers
askiitian.expert- chandra sekhar
10 Points
14 years ago
Hi pallavi,
{(a+b)+(c+d)}/2 ≥√{(a+b)(c+d) since A.M>G.M
2/2 ≥ √M
√M ≤ 1
0≤ M ≤ 1
Think You Can Provide A Better Answer ?
ASK QUESTION
Get your questions answered by the expert for free
|
__label__pos
| 0.998375 |
7 terms
8th Grade Science: 1-3 Converting Measurement Units
STUDY
PLAY
Dimensional Analysis
A method of manipulating unit measures algebraically to determine the proper units for a quantity computed.
Ratio
A comparison between two quantities.
Proportion
A statement showing equivalent ratios.
Conversion Factor
A factor which one of the quantities in the ratio has "1" as a factor.
English System
Inches, Feet, Yards, Miles, Ounces, Pounds, Tons, Cups, Pints, Quarts, and Gallons.
Metric System
The decimal measuring system based on the meter, liter, gram, and seconds which are units of length, capacity, mass, and time
Rule for Solving Problems
Write what you have (on the left)
Write what you need (on the right)
Find a Conversion Factor(s)
Cancel units that are in both the numerator and denominator
Perform the math
Draw a circle or box around the final answer
YOU MIGHT ALSO LIKE...
|
__label__pos
| 0.648362 |
Creating an Automated System to Add Redirect URLs Using Webhooks and AWS Lambda
During the course of time, we keep changing the URLs of entries as and when required. And if we have several such entries, adding a redirect URL for each entry not only consumes time but also takes effort if we do it manually.
At times, we may even forget to add a redirect URL to an entry which can result in our users getting a 404 error, if they have bookmarked that page.
Therefore, in this guide, we will discuss how we can create a system that adds a redirect URL automatically when the URL field in an entry is changed or updated.
Prerequisites
Process Overview
To understand how this example works, we will create a content type named “Redirect Rules” with a specific schema and add a blank entry in it at the start. When we update the URL of any entry, of any content type of a stack, a webhook gets triggered. This webhook then invokes an AWS lambda function that creates a redirect entry automatically inside the Redirect Rules content type with the required details.
Steps for Execution
1. Download the code
2. Create the “Redirect Rules” content type
3. Add the AWS Lambda Function
4. Create an API Gateway
5. Deploy your API
6. Create a Webhook in Contentstack
7. Try it out
Let's now move ahead with the steps and create the required system.
1. Download the Code
Go to our GitHub page and download the code for this exercise. We have already created the required functions that take care of creating this system. You just have to upload this project to your AWS Lambda Function as discussed below.
2. Create the “Redirect Rules” Content Model
For this exercise, we will create a content type named Redirect Rules and add an entry inside it. Log in to your Contentstack account and perform the steps given below to create this content type:
1. Go to your stack and click the “Content Models” icon (press “C”) on the left navigation panel.
2. Click on the + New Content Type button.
3. On the Create New Content Type modal, name the content type as Redirect Rules. Add an optional description if you want but ensure that Multiple is selected as we want several entries (with the same structure) to be created for every URL change, automatically.
4. Click on Create and Add fields as depicted in the following screenshot: Creating_an_Automated_System_to_Add_Redirect_URLs_Using_Webhooks_and_Lambda_1_no_highlight.png
5. On the Content Type Builder page, add the following fields to your content type. To add a field, click on the “Insert a schema” (+) icon that appears below your default fields:
1. Title (default field)
2. URL (default field)
3. Single Line Textbox (name it “From”) to show the old URL
4. Single Line Textbox (name it “To”) to show the new (changed) URL
5. Select Field (name it “Type”). Add two choices as Permanent and Temporary in the Add Choices option as shown below: Creating_an_Automated_System_to_Add_Redirect_URLs_Using_Webhooks_and_Lambda_2_no_highlight.png
6. Multiline Textbox (name it “Notes”) if you want to add any notes for the redirect.
7. Group Field (name it “Entry”) and add 4 Single Line Textbox with the following names: Creating_an_Automated_System_to_Add_Redirect_URLs_Using_Webhooks_and_Lambda_3_no_highlight.png
6. Once you have added these fields to your content type, click on Save and Close.
With these steps, we have created our content type that will hold all our redirected URLs. The group field that we have added provides useful information about the entry that had the URL changed.
• It will provide the version number of the entry in which the URL was changed
• The content type it belonged to
• Its locale information
• Its UID
Open this content type, add a blank entry in it, and save it. Let's now move ahead and create the Lambda Function.
3. Add the AWS Lambda Function
Perform the following steps to set up the AWS Lambda function:
1. Login to your AWS Management Console, and select Lambda from the Services list.
2. Click on the Create Function button and then Author from Scratch.
3. Configure the lambda based on your requirements. Choose Node.js 12.x as your run-time language and click on the Create function button.
4. AWS Lambda offers an inline code editor. You can write your lambda code here or alternatively upload the code. For our example, we have created a sample code that you can download or clone from our GitHub repository.
After cloning the repo, move inside the project root directory and install the modules. Create a production build by running the npm run build command from the command prompt and then zip it.
5. Then, upload the zip on Lambda by selecting the Upload a .zip file option from the Code entry type drop-down. Change the Handler to index.create and click on Save.
This is how it will look when you upload the code in the editor:
AWS LF.PNG
6. Once we have uploaded the code in the editor, let's now set up the Environment variables by adding your Contentstack's credentials as follows:
BASE_URL_REGION: <<YOUR BASE URL REGION>>
MANAGEMENT_TOKEN: <<YOUR STACK's MANAGEMENT TOKEN>>
REDIRECT_CONTENT_TYPE: <<UID of the CONTENT TYPE>>
EV LF.PNG
You can name the content type as you want. However, ensure that the content type UID that you mention in the environment variable is the same that you use in the code (note the second line of the code where you need to enter the UID of the content type).
EV LF.PNG
Note: You have to disable the 2FA (if enabled) to ensure the Lambda Function works as expected.
7. Once you have added these environment variables, click on Save.
With these steps, we have set up our Lambda Function. Let's now move ahead with creating an API gateway.
4. Create an API Gateway
Execute the following steps to create the API Gateway:
1. Login to AWS Management Console (if you have logged out) and select API Gateway from the Services list. You can also type “API” in the search box to locate it quickly as shown:
API Gateway Shortcut.PNG
2. Click on the Getting started or the Create API button (depending on whether you have an API already configured or not).
3. On the Choose an API type page, go to the REST API option (the public one) and click on Build.
4. On the next screen, ensure that Choose the protocol section has REST selected and the Create new API section has New API selected. Enter the API name and a description (optional) in the Settings section and click on Create API.
5. On the next page, from the Actions drop-down in Resources, select Create Method.
6. From the resultant drop-down, select POST and click on the checkmark.
7. Select your Lambda function by typing the name of your function (it auto-populates) in the Lambda Function field. Ensure the Use Lambda Proxy integration option is checked, as shown below, and click on Save.
LF Integration.PNG
8. You'll get the Add Permission to Lambda Function pop-up, click on OK.
With these steps, we have created the required API Gateway. Let's now deploy this API.
5. Deploy your API
Next, we need to deploy our API by following the steps given below:
1. From the Actions drop-down in Resources, select the Deploy API option.
2. Select [New Stage] in the Deployment stage and enter “prod” (or anything you like to identify this stage with) in the Stage name.
3. Click on the Deploy button.
4. On the next screen, you will get your deployed API under Invoke URL. We will use this URL in the next step when we create a webhook to initiate notifications to our Lambda function.
Once you have deployed your API, your Lambda Function will look similar to this:
LF_with_API.PNG
That's it! We have created the Lambda Function and have deployed the API to invoke it.
6. Create a Webhook in Contentstack
Let's now create a webhook in Contentstack to invoke the Lambda Function through the API that we just created. To do this, follow the steps given below:
1. Log in to your Contentstack account, go to your stack, and click on the Settings icon (press “S”) on the left navigation panel.
2. Then, click on Webhooks (press “alt + W” for Windows OS, and “option + W” for Mac OS). On the Webhook page, click on + New Webhook.
3. On the Create Webhook page, fill up the Name field (for example, Redirect1). In the URL to notify field, enter the URL that you generated when you deployed your APIs, in the previous step.
4. Scroll down to the When section for creating a trigger for the webhook as shown below: Creating_an_Automated_System_to_Add_Redirect_URLs_Using_Webhooks_and_AWS_Lambda_4_highlighted.png
5. Click on Save to save your Webhook settings.
Your webhook is now ready to invoke the Lambda Function when the URL field of any entry in any content type of that stack is updated.
7. Note: It is possible to limit the webhook to only content types needed - the example runs on all entries from all content types.
8. Try it Out
The entire set up is done and you are ready to try this out.
1. Navigate to your entry and try changing the URL of an entry.
2. Save your entry after changing the URL. Upon saving, the webhook triggers. It then invokes the Lambda Function through the APIs and creates a new entry in the Redirect Rules content type.
3. Go to the Redirect Rules content type, you should see a new entry inside it.
4. Open, the entry that was just created, you should see details of the old and new URL. And, in the group field named Entry, you will see the entry details.
Tip: To manage redirects from the application side, set up a middleware function to your application. It will make a specific call to find if the Redirect Rules content type has the current URL. If it finds an entry in Redirect Rules content type with the current URL, then you can redirect your current page to the ‘from’ URL of the entry.
Additional Resource: To know more about how to integrate Webhooks with AWS Lambda, refer our Webhook Integrations page for more details.
Was this article helpful?
Thanks for your feedbackSmile-icon
On This Page
^
|
__label__pos
| 0.989611 |
/* * linux/kernel/softirq.c * * Copyright (C) 1992 Linus Torvalds * * Rewritten. Old one was good in 2.2, but in 2.3 it was immoral. --ANK (990903) */ #include #include #include #include #include #include #include #include #include #include #include #include /* - No shared variables, all the data are CPU local. - If a softirq needs serialization, let it serialize itself by its own spinlocks. - Even if softirq is serialized, only local cpu is marked for execution. Hence, we get something sort of weak cpu binding. Though it is still not clear, will it result in better locality or will not. Examples: - NET RX softirq. It is multithreaded and does not require any global serialization. - NET TX softirq. It kicks software netdevice queues, hence it is logically serialized per device, but this serialization is invisible to common code. - Tasklets: serialized wrt itself. */ #ifndef __ARCH_IRQ_STAT irq_cpustat_t irq_stat[NR_CPUS] ____cacheline_aligned; EXPORT_SYMBOL(irq_stat); #endif static struct softirq_action softirq_vec[32] __cacheline_aligned_in_smp; static DEFINE_PER_CPU(struct task_struct *, ksoftirqd); /* * we cannot loop indefinitely here to avoid userspace starvation, * but we also don't want to introduce a worst case 1/HZ latency * to the pending events, so lets the scheduler to balance * the softirq load for us. */ static inline void wakeup_softirqd(void) { /* Interrupts are disabled: no need to stop preemption */ struct task_struct *tsk = __get_cpu_var(ksoftirqd); if (tsk && tsk->state != TASK_RUNNING) wake_up_process(tsk); } /* * This one is for softirq.c-internal use, * where hardirqs are disabled legitimately: */ #ifdef CONFIG_TRACE_IRQFLAGS static void __local_bh_disable(unsigned long ip) { unsigned long flags; WARN_ON_ONCE(in_irq()); raw_local_irq_save(flags); add_preempt_count(SOFTIRQ_OFFSET); /* * Were softirqs turned off above: */ if (softirq_count() == SOFTIRQ_OFFSET) trace_softirqs_off(ip); raw_local_irq_restore(flags); } #else /* !CONFIG_TRACE_IRQFLAGS */ static inline void __local_bh_disable(unsigned long ip) { add_preempt_count(SOFTIRQ_OFFSET); barrier(); } #endif /* CONFIG_TRACE_IRQFLAGS */ void local_bh_disable(void) { __local_bh_disable((unsigned long)__builtin_return_address(0)); } EXPORT_SYMBOL(local_bh_disable); void __local_bh_enable(void) { WARN_ON_ONCE(in_irq()); /* * softirqs should never be enabled by __local_bh_enable(), * it always nests inside local_bh_enable() sections: */ WARN_ON_ONCE(softirq_count() == SOFTIRQ_OFFSET); sub_preempt_count(SOFTIRQ_OFFSET); } EXPORT_SYMBOL_GPL(__local_bh_enable); /* * Special-case - softirqs can safely be enabled in * cond_resched_softirq(), or by __do_softirq(), * without processing still-pending softirqs: */ void _local_bh_enable(void) { WARN_ON_ONCE(in_irq()); WARN_ON_ONCE(!irqs_disabled()); if (softirq_count() == SOFTIRQ_OFFSET) trace_softirqs_on((unsigned long)__builtin_return_address(0)); sub_preempt_count(SOFTIRQ_OFFSET); } EXPORT_SYMBOL(_local_bh_enable); void local_bh_enable(void) { #ifdef CONFIG_TRACE_IRQFLAGS unsigned long flags; WARN_ON_ONCE(in_irq()); #endif WARN_ON_ONCE(irqs_disabled()); #ifdef CONFIG_TRACE_IRQFLAGS local_irq_save(flags); #endif /* * Are softirqs going to be turned on now: */ if (softirq_count() == SOFTIRQ_OFFSET) trace_softirqs_on((unsigned long)__builtin_return_address(0)); /* * Keep preemption disabled until we are done with * softirq processing: */ sub_preempt_count(SOFTIRQ_OFFSET - 1); if (unlikely(!in_interrupt() && local_softirq_pending())) do_softirq(); dec_preempt_count(); #ifdef CONFIG_TRACE_IRQFLAGS local_irq_restore(flags); #endif preempt_check_resched(); } EXPORT_SYMBOL(local_bh_enable); void local_bh_enable_ip(unsigned long ip) { #ifdef CONFIG_TRACE_IRQFLAGS unsigned long flags; WARN_ON_ONCE(in_irq()); local_irq_save(flags); #endif /* * Are softirqs going to be turned on now: */ if (softirq_count() == SOFTIRQ_OFFSET) trace_softirqs_on(ip); /* * Keep preemption disabled until we are done with * softirq processing: */ sub_preempt_count(SOFTIRQ_OFFSET - 1); if (unlikely(!in_interrupt() && local_softirq_pending())) do_softirq(); dec_preempt_count(); #ifdef CONFIG_TRACE_IRQFLAGS local_irq_restore(flags); #endif preempt_check_resched(); } EXPORT_SYMBOL(local_bh_enable_ip); /* * We restart softirq processing MAX_SOFTIRQ_RESTART times, * and we fall back to softirqd after that. * * This number has been established via experimentation. * The two things to balance is latency against fairness - * we want to handle softirqs as soon as possible, but they * should not be able to lock up the box. */ #define MAX_SOFTIRQ_RESTART 10 asmlinkage void __do_softirq(void) { struct softirq_action *h; __u32 pending; int max_restart = MAX_SOFTIRQ_RESTART; int cpu; pending = local_softirq_pending(); account_system_vtime(current); __local_bh_disable((unsigned long)__builtin_return_address(0)); trace_softirq_enter(); cpu = smp_processor_id(); restart: /* Reset the pending bitmask before enabling irqs */ set_softirq_pending(0); local_irq_enable(); h = softirq_vec; do { if (pending & 1) { h->action(h); rcu_bh_qsctr_inc(cpu); } h++; pending >>= 1; } while (pending); local_irq_disable(); pending = local_softirq_pending(); if (pending && --max_restart) goto restart; if (pending) wakeup_softirqd(); trace_softirq_exit(); account_system_vtime(current); _local_bh_enable(); } #ifndef __ARCH_HAS_DO_SOFTIRQ asmlinkage void do_softirq(void) { __u32 pending; unsigned long flags; if (in_interrupt()) return; local_irq_save(flags); pending = local_softirq_pending(); if (pending) __do_softirq(); local_irq_restore(flags); } EXPORT_SYMBOL(do_softirq); #endif #ifdef __ARCH_IRQ_EXIT_IRQS_DISABLED # define invoke_softirq() __do_softirq() #else # define invoke_softirq() do_softirq() #endif /* * Exit an interrupt context. Process softirqs if needed and possible: */ void irq_exit(void) { account_system_vtime(current); trace_hardirq_exit(); sub_preempt_count(IRQ_EXIT_OFFSET); if (!in_interrupt() && local_softirq_pending()) invoke_softirq(); preempt_enable_no_resched(); } /* * This function must run with irqs disabled! */ inline fastcall void raise_softirq_irqoff(unsigned int nr) { __raise_softirq_irqoff(nr); /* * If we're in an interrupt or softirq, we're done * (this also catches softirq-disabled code). We will * actually run the softirq once we return from * the irq or softirq. * * Otherwise we wake up ksoftirqd to make sure we * schedule the softirq soon. */ if (!in_interrupt()) wakeup_softirqd(); } EXPORT_SYMBOL(raise_softirq_irqoff); void fastcall raise_softirq(unsigned int nr) { unsigned long flags; local_irq_save(flags); raise_softirq_irqoff(nr); local_irq_restore(flags); } void open_softirq(int nr, void (*action)(struct softirq_action*), void *data) { softirq_vec[nr].data = data; softirq_vec[nr].action = action; } /* Tasklets */ struct tasklet_head { struct tasklet_struct *list; }; /* Some compilers disobey section attribute on statics when not initialized -- RR */ static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec) = { NULL }; static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec) = { NULL }; void fastcall __tasklet_schedule(struct tasklet_struct *t) { unsigned long flags; local_irq_save(flags); t->next = __get_cpu_var(tasklet_vec).list; __get_cpu_var(tasklet_vec).list = t; raise_softirq_irqoff(TASKLET_SOFTIRQ); local_irq_restore(flags); } EXPORT_SYMBOL(__tasklet_schedule); void fastcall __tasklet_hi_schedule(struct tasklet_struct *t) { unsigned long flags; local_irq_save(flags); t->next = __get_cpu_var(tasklet_hi_vec).list; __get_cpu_var(tasklet_hi_vec).list = t; raise_softirq_irqoff(HI_SOFTIRQ); local_irq_restore(flags); } EXPORT_SYMBOL(__tasklet_hi_schedule); static void tasklet_action(struct softirq_action *a) { struct tasklet_struct *list; local_irq_disable(); list = __get_cpu_var(tasklet_vec).list; __get_cpu_var(tasklet_vec).list = NULL; local_irq_enable(); while (list) { struct tasklet_struct *t = list; list = list->next; if (tasklet_trylock(t)) { if (!atomic_read(&t->count)) { if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state)) BUG(); t->func(t->data); tasklet_unlock(t); continue; } tasklet_unlock(t); } local_irq_disable(); t->next = __get_cpu_var(tasklet_vec).list; __get_cpu_var(tasklet_vec).list = t; __raise_softirq_irqoff(TASKLET_SOFTIRQ); local_irq_enable(); } } static void tasklet_hi_action(struct softirq_action *a) { struct tasklet_struct *list; local_irq_disable(); list = __get_cpu_var(tasklet_hi_vec).list; __get_cpu_var(tasklet_hi_vec).list = NULL; local_irq_enable(); while (list) { struct tasklet_struct *t = list; list = list->next; if (tasklet_trylock(t)) { if (!atomic_read(&t->count)) { if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state)) BUG(); t->func(t->data); tasklet_unlock(t); continue; } tasklet_unlock(t); } local_irq_disable(); t->next = __get_cpu_var(tasklet_hi_vec).list; __get_cpu_var(tasklet_hi_vec).list = t; __raise_softirq_irqoff(HI_SOFTIRQ); local_irq_enable(); } } void tasklet_init(struct tasklet_struct *t, void (*func)(unsigned long), unsigned long data) { t->next = NULL; t->state = 0; atomic_set(&t->count, 0); t->func = func; t->data = data; } EXPORT_SYMBOL(tasklet_init); void tasklet_kill(struct tasklet_struct *t) { if (in_interrupt()) printk("Attempt to kill tasklet from interrupt\n"); while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) { do yield(); while (test_bit(TASKLET_STATE_SCHED, &t->state)); } tasklet_unlock_wait(t); clear_bit(TASKLET_STATE_SCHED, &t->state); } EXPORT_SYMBOL(tasklet_kill); void __init softirq_init(void) { open_softirq(TASKLET_SOFTIRQ, tasklet_action, NULL); open_softirq(HI_SOFTIRQ, tasklet_hi_action, NULL); } static int ksoftirqd(void * __bind_cpu) { set_user_nice(current, 19); current->flags |= PF_NOFREEZE; set_current_state(TASK_INTERRUPTIBLE); while (!kthread_should_stop()) { preempt_disable(); if (!local_softirq_pending()) { preempt_enable_no_resched(); schedule(); preempt_disable(); } __set_current_state(TASK_RUNNING); while (local_softirq_pending()) { /* Preempt disable stops cpu going offline. If already offline, we'll be on wrong CPU: don't process */ if (cpu_is_offline((long)__bind_cpu)) goto wait_to_die; do_softirq(); preempt_enable_no_resched(); cond_resched(); preempt_disable(); } preempt_enable(); set_current_state(TASK_INTERRUPTIBLE); } __set_current_state(TASK_RUNNING); return 0; wait_to_die: preempt_enable(); /* Wait for kthread_stop */ set_current_state(TASK_INTERRUPTIBLE); while (!kthread_should_stop()) { schedule(); set_current_state(TASK_INTERRUPTIBLE); } __set_current_state(TASK_RUNNING); return 0; } #ifdef CONFIG_HOTPLUG_CPU /* * tasklet_kill_immediate is called to remove a tasklet which can already be * scheduled for execution on @cpu. * * Unlike tasklet_kill, this function removes the tasklet * _immediately_, even if the tasklet is in TASKLET_STATE_SCHED state. * * When this function is called, @cpu must be in the CPU_DEAD state. */ void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu) { struct tasklet_struct **i; BUG_ON(cpu_online(cpu)); BUG_ON(test_bit(TASKLET_STATE_RUN, &t->state)); if (!test_bit(TASKLET_STATE_SCHED, &t->state)) return; /* CPU is dead, so no lock needed. */ for (i = &per_cpu(tasklet_vec, cpu).list; *i; i = &(*i)->next) { if (*i == t) { *i = t->next; return; } } BUG(); } static void takeover_tasklets(unsigned int cpu) { struct tasklet_struct **i; /* CPU is dead, so no lock needed. */ local_irq_disable(); /* Find end, append list for that CPU. */ for (i = &__get_cpu_var(tasklet_vec).list; *i; i = &(*i)->next); *i = per_cpu(tasklet_vec, cpu).list; per_cpu(tasklet_vec, cpu).list = NULL; raise_softirq_irqoff(TASKLET_SOFTIRQ); for (i = &__get_cpu_var(tasklet_hi_vec).list; *i; i = &(*i)->next); *i = per_cpu(tasklet_hi_vec, cpu).list; per_cpu(tasklet_hi_vec, cpu).list = NULL; raise_softirq_irqoff(HI_SOFTIRQ); local_irq_enable(); } #endif /* CONFIG_HOTPLUG_CPU */ static int __cpuinit cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu) { int hotcpu = (unsigned long)hcpu; struct task_struct *p; switch (action) { case CPU_UP_PREPARE: p = kthread_create(ksoftirqd, hcpu, "ksoftirqd/%d", hotcpu); if (IS_ERR(p)) { printk("ksoftirqd for %i failed\n", hotcpu); return NOTIFY_BAD; } kthread_bind(p, hotcpu); per_cpu(ksoftirqd, hotcpu) = p; break; case CPU_ONLINE: wake_up_process(per_cpu(ksoftirqd, hotcpu)); break; #ifdef CONFIG_HOTPLUG_CPU case CPU_UP_CANCELED: if (!per_cpu(ksoftirqd, hotcpu)) break; /* Unbind so it can run. Fall thru. */ kthread_bind(per_cpu(ksoftirqd, hotcpu), any_online_cpu(cpu_online_map)); case CPU_DEAD: p = per_cpu(ksoftirqd, hotcpu); per_cpu(ksoftirqd, hotcpu) = NULL; kthread_stop(p); takeover_tasklets(hotcpu); break; #endif /* CONFIG_HOTPLUG_CPU */ } return NOTIFY_OK; } static struct notifier_block __cpuinitdata cpu_nfb = { .notifier_call = cpu_callback }; __init int spawn_ksoftirqd(void) { void *cpu = (void *)(long)smp_processor_id(); int err = cpu_callback(&cpu_nfb, CPU_UP_PREPARE, cpu); BUG_ON(err == NOTIFY_BAD); cpu_callback(&cpu_nfb, CPU_ONLINE, cpu); register_cpu_notifier(&cpu_nfb); return 0; } #ifdef CONFIG_SMP /* * Call a function on all processors */ int on_each_cpu(void (*func) (void *info), void *info, int retry, int wait) { int ret = 0; preempt_disable(); ret = smp_call_function(func, info, retry, wait); local_irq_disable(); func(info); local_irq_enable(); preempt_enable(); return ret; } EXPORT_SYMBOL(on_each_cpu); #endif
|
__label__pos
| 0.998631 |
欢迎来到雅狐站长网!
兔子CDN
Javascript/Ajax
当前位置:主页 > 网页制作 > Javascript/Ajax >
原生js canvas实现简单贪吃蛇
时间:2021-04-23 11:13:27|栏目:Javascript/Ajax|点击:
本文实例为大家分享了js canvas实现简单贪吃蛇的具体代码,供大家参考,具体内容如下
Js原生贪吃蛇:canvas
HTML
<canvas id="can"></canvas>
CSS
#can{
background: #000000;
height: 400px;
width: 850px;
}
JS
//公共板块
var blockSize = 10;
//地图的高宽
var mapW = 300;
var mapH = 150;
//历史食物var
var historyfood = new Array();
//食物数组
var img = new Image()
var arrFood = ["ananas.png","hamburg.png","watermelon.png"]
historyfood =[{x: 0,y:0}];
//贪吃蛇默认值
var snake = [{x:3,y:0},{x:2,y:0},{x:1,y:0}]
//贪吃蛇方向
var directionX = 1;
var directionY = 0;
//添加一个标记,标记食物是否被吃掉
//默认值:没有被吃掉
var isFood = false;
//判断游戏状态
//默认游戏继续
var gameState = false;
//限制贪吃蛇所移动的方向不能反方向操作
var lockleft = true;
var lockright = true;
var lockup = true;
var lockdown = true;
//贪吃蛇分数
var count = 0;
//贪吃蛇速度
var speed = 1000 - (count + 5);
$(function () {
$("#divContainer").height($("#can").height());
//1,获取到画布对象
var can = document.getElementById("can");
//2,获取到画图工具箱
var tools = can.getContext('2d');
tools.beginPath();
//设置食物默认值
var XY = Randomfood();
console.log(XY);
var x1 = Randomfood().x;
var y1 = Randomfood().y;
img.src = "/aimless/img/GluttonousSnakeFood/"+ arrFood[Math.floor(Math.random() * arrFood.length)];
//控制贪吃蛇移动
document.addEventListener('keydown',function (e){
switch (e.keyCode) {
case 38:
if (lockup){
directionX = 0;
directionY = -1;
lockdown = false;
lockleft = true;
lockright = true;
}
break;
case 40:
if (lockdown){
directionX = 0;
directionY = 1;
lockup = false;
lockleft = true;
lockright = true;
}
break;
case 37:
if (lockleft){
directionX = - 1;
directionY = 0;
lockright = false;
lockup = true;
lockdown = true;
}
break;
case 39:
if (lockright){
directionX = 1;
directionY = 0;
lockleft = false;
lockup = true;
lockdown = true;
}
break;
}
})
setIntervalSnake(tools,x1,y1);
//4,找位置
})
function setIntervalSnake(tools,x1,y1){
setInterval(function (){
if (gameState){
return;
}
//1,擦除画板
tools.clearRect(0,0,850,420);
var oldHead = snake[0];
if (oldHead.x < 0 || oldHead.y < 0 || oldHead.x * blockSize >= mapW || oldHead.y * blockSize >= mapH){
gameState = true;
alert("游戏结束");
}else {
//蛇移动
if (snake[0].x * blockSize === x1 && snake[0].y * blockSize === y1){
isFood = true;
} else {
snake.pop()
}
var newHead = {
x: oldHead.x + directionX,
y: oldHead.y + directionY
}
snake.unshift(newHead);
}
//2,判断食物是否被吃掉,吃掉刷新,不吃掉不刷新
if (isFood){
count = count + 1;
$("#count").html(count);
x1 = Randomfood().x;
y1 = Randomfood().y;
img.src = "/aimless/img/GluttonousSnakeFood/"+ arrFood[Math.floor(Math.random() * arrFood.length)];
isFood = false;
}
tools.drawImage(img,x1,y1,blockSize,blockSize);
//蛇身数组
var Thesnakebody = new Array();
//3,画蛇
for (var i = 0; i < snake.length; i++){
if (i == 0){
tools.fillStyle = "#9933CC";
} else {
var newHead1 = {
x: snake[i].x,
y: snake[i].y
}
Thesnakebody.unshift(newHead1);
tools.fillStyle = "#33adcc";
}
tools.fillRect(snake[i].x * blockSize,snake[i].y * blockSize,blockSize,blockSize);
}
// //判断蛇头咬到了蛇身
Thesnakebody.forEach(item=>{
if(item.x == snake[0].x && item.y == snake[0].y){
gameState = true;
setIntervalSnake(tools,x1,y1);
}
})
//4,画地图
var width = Math.round($("#can").width() / blockSize);
var height = Math.round($("#can").height() / blockSize);
for (var i = 1; i < width;i++){
tools.moveTo(0,blockSize * i);
tools.lineTo($("#can").width(),blockSize * i);
}
for (var i = 1; i < height;i++){
tools.moveTo(blockSize * i,0);
tools.lineTo(blockSize * i,$("#can").height());
}
tools.strokeStyle = "#FFFFFF";
//5,绘制
tools.stroke();
},speed / 3);
}
//随机食物不
function Randomfood() {
var RandomX = Math.floor(Math.random() * mapW / blockSize) * blockSize;
var RandomY = Math.floor(Math.random() * mapH / blockSize) * blockSize;
setInterval(function (){
snake.forEach(item=>{
console.log(RandomX / blockSize,RandomY / blockSize);
// console.log(item.x,item.y);
if(item.x == RandomX / blockSize && item.y == RandomY / blockSize){
return Randomfood();
} else {
return ;
}
})
}, 10 / 3);
var newRandom = {
x: RandomX,
y: RandomY
}
return newRandom;
}
上一篇:Vue如何使用Dayjs计算常用日期详解
栏 目:Javascript/Ajax
下一篇:JavaScript实现二叉搜索树
本文标题:原生js canvas实现简单贪吃蛇
本文地址:http://www.cnzzww.cn/JavaScript/56445.html
广告投放 | 联系我们 | 版权申明
重要申明:本站所有的文章、图片、评论等,均由网友发表或上传并维护或收集自网络,属个人行为,与本站立场无关。
如果侵犯了您的权利,请与我们联系,我们将在24小时内进行处理、任何非本站因素导致的法律后果,本站均不负任何责任。
联系QQ: | 邮箱:111#qq.com(#换成@)
Copyright © 2015-2020 雅狐站长网 版权所有 苏ICP备20040415号
|
__label__pos
| 0.999161 |
Hide a column
Contents[Hide]
1. Overview
This article shows the various ways you can hide a column in a table visualization.
2. Using properties
You can show or hide a table column by using its Visible property. You cannot hide a row header column using this method.
Note
Hiding a column only in the visualization properties does not remove or hide the data from the metric set, so viewers can see a hidden column using the Show Columns right-click option and the data can also be exported.
A table with two columns
A table with two columns
In Properties, on the Main tab and under Columns, click the column that you want to hide and uncheck the Visible property.
Visible property
Visible property
The table column is now hidden.
Column is hidden
Column is hidden
To make it visible again, click the column in Properties and select the Visible property.
3. Using the Data Analysis Panel
You can remove a column from a table visualization using the following method, but you can still use that column in filtering, state rules, etc.
On the Data Analysis Panel, click Visualization.
Edit Visualization
Edit Visualization
Under Column, click the X icon for the column(s) you want to remove.
Remove the columns
Remove the columns
You can also remove a column from Properties by clicking the Main tab and then clicking the delete icon next to the column that you want to remove.
Remove columns from Properties
Remove columns from Properties
3.1. Hiding a measure
When you want to use a measure for a formula, sorting, or other metric set features but it must be hidden in the visualization and after export, then you can hide the measure itself, rather than just hiding the table column. Access the measure options by clicking its green tile or the edit icon in the Data Analysis Panel and select the Hidden box to prevent the measure's values from being visible.
4. Show and hide columns when viewing
Viewers can select which columns to view by right-clicking the table visualization when viewing and selecting the Show Columns option.
Select columns to show/hide
Select columns to show/hide
4.1. Using scripting
If you need to use script rather than the built-in context menu option, you can set the isVisible property:
// To hide 1st column
table1.control.columns[0].isVisible = false;
// To show 1st column
table1.control.columns[0].isVisible = true;
To remove a table column entirely, call removeColumnByIndex:
table1.removeColumnByIndex(0);
5. See also
Dundas Data Visualization, Inc.
500-250 Ferrand Drive
Toronto, ON, Canada
M3C 3G8
North America: 1.800.463.1492
International: 1.416.467.5100
Dundas Support Hours:
Phone: 9am-6pm, ET, Mon-Fri
Email: 7am-6pm, ET, Mon-Fri
|
__label__pos
| 0.501826 |
mmd: rename dbinfo to selector_info
[paraslash.git] / sdl_gui.c
1 /*
2 * Copyright (C) 2003-2006 Andre Noll <[email protected]>
3 *
4 * This program is free software; you can redistribute it and/or modify
5 * it under the terms of the GNU General Public License as published by
6 * the Free Software Foundation; either version 2 of the License, or
7 * (at your option) any later version.
8 *
9 * This program is distributed in the hope that it will be useful,
10 * but WITHOUT ANY WARRANTY; without even the implied warranty of
11 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 * GNU General Public License for more details.
13 *
14 * You should have received a copy of the GNU General Public License
15 * along with this program; if not, write to the Free Software
16 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111, USA.
17 */
18
19 /** \file sdl_gui.c SDL-based interface for paraslash */
20
21 #include "para.h"
22 #include "string.h"
23
24
25 #include <SDL/SDL.h>
26 #include "SFont.h"
27
28 #include <SDL/SDL_image.h>
29 #include <SDL/SDL_events.h>
30 #include <SDL/SDL_keyboard.h>
31 #include <SDL/SDL_keysym.h>
32 #include <sys/time.h> /* timeval, needed for select */
33
34 #include "sdl_gui.cmdline.h"
35
36 #define FRAME_RED 100
37 #define FRAME_GREEN 200
38 #define FRAME_BLUE 200
39
40 SDL_Surface *screen;
41 static int width = 0;
42 static int height = 0;
43
44 extern const char *status_item_list[NUM_STAT_ITEMS];
45 struct gengetopt_args_info args_info;
46
47 #define FRAME_WIDTH 10
48 #define FONT_HEIGHT 36
49
50 #define INPUT_WIDTH width - 2 * FRAME_WIDTH
51 #define INPUT_X FRAME_WIDTH
52 #define INPUT_HEIGHT (args_info.interactive_flag? FONT_HEIGHT : 0)
53 #define INPUT_Y height - FRAME_WIDTH - INPUT_HEIGHT
54
55 #define OUTPUT_WIDTH width - 2 * FRAME_WIDTH
56 #define OUTPUT_X FRAME_WIDTH
57 #define OUTPUT_Y FRAME_WIDTH
58 #define OUTPUT_HEIGHT (height - INPUT_HEIGHT - 2 * FRAME_WIDTH)
59
60 #define NUM_LINES (height - 2 * FRAME_WIDTH - INPUT_HEIGHT) / FONT_HEIGHT
61
62 #define M_YELLOW 0
63 #define N_YELLOW 1
64
65 #define LEFT 1
66 #define RIGHT 2
67 #define CENTER 3
68
69
70 struct stat_item{
71 char *name;
72 char *prefix;
73 char *postfix;
74 char *content;
75 unsigned x;
76 unsigned y;
77 unsigned w;
78 unsigned h;
79 Uint8 r;
80 Uint8 g;
81 Uint8 b;
82 int font;
83 int align;
84 };
85
86 struct font {
87 char name[MAXLINE];
88 SFont_FontInfo fontinfo;
89 };
90
91 struct font fonts[] = {
92 {
93 .name = "24P_Arial_Metallic_Yellow.png",
94 .fontinfo = {NULL, {0}, 0}
95 },
96 {
97 .name = "24P_Arial_NeonYellow.png",
98 .fontinfo = {NULL, {0}, 0}
99 },
100 {
101 .name = "24P_Copperplate_Blue.png",
102 .fontinfo = {NULL, {0}, 0}
103 },
104 {
105 .name = "",
106 .fontinfo = {NULL, {0}, 0}
107 }
108 };
109
110
111 #define PIC_WIDTH width * 3 / 10
112 #define PIC_HEIGHT height / 3
113
114 #define INPUT_FONT 0
115 #define OUTPUT_FONT 1
116 #define MSG_FONT 2
117
118
119 static struct stat_item stat_items[NUM_STAT_ITEMS];
120
121 void para_log(__unused int ll, __unused char* fmt,...) /* no logging */
122 {
123 }
124
125 static void init_stat_items(void)
126 {
127 int i;
128 struct stat_item *s = stat_items;
129
130 for (i = 0; i < NUM_STAT_ITEMS; i++) {
131 s[i].w = 0;
132 s[i].content = NULL;
133 }
134
135
136 s[SI_STATUS_BAR].prefix = "";
137 s[SI_STATUS_BAR].postfix = "";
138 s[SI_STATUS_BAR].content = "";
139 s[SI_STATUS_BAR].x = 0;
140 s[SI_STATUS_BAR].y = 10;
141 s[SI_STATUS_BAR].w = 100;
142 s[SI_STATUS_BAR].h = FONT_HEIGHT;
143 s[SI_STATUS_BAR].r = 0;
144 s[SI_STATUS_BAR].g = 0;
145 s[SI_STATUS_BAR].b = 0;
146 s[SI_STATUS_BAR].font = M_YELLOW;
147 s[SI_STATUS_BAR].align = CENTER;
148
149 s[SI_PLAY_TIME].prefix = "";
150 s[SI_PLAY_TIME].postfix = "";
151 s[SI_PLAY_TIME].content = "";
152 s[SI_PLAY_TIME].x = 35;
153 s[SI_PLAY_TIME].y = 20;
154 s[SI_PLAY_TIME].w = 65;
155 s[SI_PLAY_TIME].h = FONT_HEIGHT;
156 s[SI_PLAY_TIME].r = 0;
157 s[SI_PLAY_TIME].g = 0;
158 s[SI_PLAY_TIME].b = 0;
159 s[SI_PLAY_TIME].font = M_YELLOW;
160 s[SI_PLAY_TIME].align = CENTER;
161
162 s[SI_STATUS].prefix = "";
163 s[SI_STATUS].postfix = "";
164 s[SI_STATUS].content = "";
165 s[SI_STATUS].x = 35;
166 s[SI_STATUS].y = 28;
167 s[SI_STATUS].w = 12;
168 s[SI_STATUS].h = FONT_HEIGHT;
169 s[SI_STATUS].r = 0;
170 s[SI_STATUS].g = 0;
171 s[SI_STATUS].b = 0;
172 s[SI_STATUS].font = N_YELLOW;
173 s[SI_STATUS].align = LEFT;
174
175 s[SI_STATUS_FLAGS].prefix = " (";
176 s[SI_STATUS_FLAGS].postfix = ")";
177 s[SI_STATUS_FLAGS].content = "";
178 s[SI_STATUS_FLAGS].x = 47;
179 s[SI_STATUS_FLAGS].y = 28;
180 s[SI_STATUS_FLAGS].w = 15;
181 s[SI_STATUS_FLAGS].h = FONT_HEIGHT;
182 s[SI_STATUS_FLAGS].r = 0;
183 s[SI_STATUS_FLAGS].g = 0;
184 s[SI_STATUS_FLAGS].b = 0;
185 s[SI_STATUS_FLAGS].font = N_YELLOW;
186 s[SI_STATUS_FLAGS].align = CENTER;
187
188 s[SI_NUM_PLAYED].prefix = "#";
189 s[SI_NUM_PLAYED].postfix = "";
190 s[SI_NUM_PLAYED].content = "0";
191 s[SI_NUM_PLAYED].x = 62;
192 s[SI_NUM_PLAYED].y = 28;
193 s[SI_NUM_PLAYED].w = 13;
194 s[SI_NUM_PLAYED].h = FONT_HEIGHT;
195 s[SI_NUM_PLAYED].r = 0;
196 s[SI_NUM_PLAYED].g = 0;
197 s[SI_NUM_PLAYED].b = 0;
198 s[SI_NUM_PLAYED].font = N_YELLOW;
199 s[SI_NUM_PLAYED].align = CENTER;
200
201 s[SI_UPTIME].prefix = "Up: ";
202 s[SI_UPTIME].postfix = "";
203 s[SI_UPTIME].content = "";
204 s[SI_UPTIME].x = 75;
205 s[SI_UPTIME].y = 28;
206 s[SI_UPTIME].w = 25;
207 s[SI_UPTIME].h = FONT_HEIGHT;
208 s[SI_UPTIME].r = 0;
209 s[SI_UPTIME].g = 0;
210 s[SI_UPTIME].b = 0;
211 s[SI_UPTIME].font = N_YELLOW;
212 s[SI_UPTIME].align = RIGHT;
213
214 s[SI_SELECTOR].prefix = "selector: ";
215 s[SI_SELECTOR].postfix = "";
216 s[SI_SELECTOR].content = "no content yet";
217 s[SI_SELECTOR].x = 35;
218 s[SI_SELECTOR].y = 48;
219 s[SI_SELECTOR].w = 35;
220 s[SI_SELECTOR].h = FONT_HEIGHT;
221 s[SI_SELECTOR].r = 0;
222 s[SI_SELECTOR].g = 0;
223 s[SI_SELECTOR].b = 0;
224 s[SI_SELECTOR].font = N_YELLOW;
225 s[SI_SELECTOR].align = LEFT;
226
227 s[SI_FORMAT].prefix = "Format: ";
228 s[SI_FORMAT].postfix = "";
229 s[SI_FORMAT].content = "";
230 s[SI_FORMAT].x = 70;
231 s[SI_FORMAT].y = 48;
232 s[SI_FORMAT].w = 30;
233 s[SI_FORMAT].h = FONT_HEIGHT;
234 s[SI_FORMAT].r = 0;
235 s[SI_FORMAT].g = 0;
236 s[SI_FORMAT].b = 0;
237 s[SI_FORMAT].font = N_YELLOW;
238 s[SI_FORMAT].align = RIGHT;
239
240 s[SI_MTIME].prefix = "MTime: ";
241 s[SI_MTIME].postfix = "";
242 s[SI_MTIME].content = "";
243 s[SI_MTIME].x = 35;
244 s[SI_MTIME].y = 35;
245 s[SI_MTIME].w = 65;
246 s[SI_MTIME].h = FONT_HEIGHT;
247 s[SI_MTIME].r = 0;
248 s[SI_MTIME].g = 0;
249 s[SI_MTIME].b = 0;
250 s[SI_MTIME].font = N_YELLOW;
251 s[SI_MTIME].align = LEFT;
252
253 s[SI_FILE_SIZE].prefix = "Size: ";
254 s[SI_FILE_SIZE].postfix = "kb";
255 s[SI_FILE_SIZE].content = "";
256 s[SI_FILE_SIZE].x = 35;
257 s[SI_FILE_SIZE].y = 42;
258 s[SI_FILE_SIZE].w = 20;
259 s[SI_FILE_SIZE].h = FONT_HEIGHT;
260 s[SI_FILE_SIZE].r = 0;
261 s[SI_FILE_SIZE].g = 0;
262 s[SI_FILE_SIZE].b = 0;
263 s[SI_FILE_SIZE].font = N_YELLOW;
264 s[SI_FILE_SIZE].align = LEFT;
265
266 s[SI_AUDIO_INFO1].prefix = "";
267 s[SI_AUDIO_INFO1].postfix = "";
268 s[SI_AUDIO_INFO1].content = "";
269 s[SI_AUDIO_INFO1].x = 0;
270 s[SI_AUDIO_INFO1].y = 60;
271 s[SI_AUDIO_INFO1].w = 100;
272 s[SI_AUDIO_INFO1].h = FONT_HEIGHT;
273 s[SI_AUDIO_INFO1].r = 0;
274 s[SI_AUDIO_INFO1].g = 0;
275 s[SI_AUDIO_INFO1].b = 0;
276 s[SI_AUDIO_INFO1].font = N_YELLOW;
277 s[SI_AUDIO_INFO1].align = CENTER;
278
279 s[SI_AUDIO_INFO2].prefix = "";
280 s[SI_AUDIO_INFO2].postfix = "";
281 s[SI_AUDIO_INFO2].content = "";
282 s[SI_AUDIO_INFO2].x = 0;
283 s[SI_AUDIO_INFO2].y = 65;
284 s[SI_AUDIO_INFO2].w = 100;
285 s[SI_AUDIO_INFO2].h = FONT_HEIGHT;
286 s[SI_AUDIO_INFO2].r = 0;
287 s[SI_AUDIO_INFO2].g = 0;
288 s[SI_AUDIO_INFO2].b = 0;
289 s[SI_AUDIO_INFO2].font = N_YELLOW;
290 s[SI_AUDIO_INFO2].align = CENTER;
291
292 s[SI_AUDIO_INFO3].prefix = "";
293 s[SI_AUDIO_INFO3].postfix = "";
294 s[SI_AUDIO_INFO3].content = "";
295 s[SI_AUDIO_INFO3].x = 0;
296 s[SI_AUDIO_INFO3].y = 70;
297 s[SI_AUDIO_INFO3].w = 100;
298 s[SI_AUDIO_INFO3].h = FONT_HEIGHT;
299 s[SI_AUDIO_INFO3].r = 0;
300 s[SI_AUDIO_INFO3].g = 0;
301 s[SI_AUDIO_INFO3].b = 0;
302 s[SI_AUDIO_INFO3].font = N_YELLOW;
303 s[SI_AUDIO_INFO3].align = CENTER;
304
305 s[SI_DBINFO1].name = "dbinfo1:";
306 s[SI_DBINFO1].prefix = "";
307 s[SI_DBINFO1].postfix = "";
308 s[SI_DBINFO1].content = "";
309 s[SI_DBINFO1].x = 0;
310 s[SI_DBINFO1].y = 83;
311 s[SI_DBINFO1].w = 100;
312 s[SI_DBINFO1].h = FONT_HEIGHT;
313 s[SI_DBINFO1].r = 0;
314 s[SI_DBINFO1].g = 0;
315 s[SI_DBINFO1].b = 0;
316 s[SI_DBINFO1].font = N_YELLOW;
317 s[SI_DBINFO1].align = CENTER;
318
319 s[SI_DBINFO2].prefix = "";
320 s[SI_DBINFO2].postfix = "";
321 s[SI_DBINFO2].content = "";
322 s[SI_DBINFO2].x = 0;
323 s[SI_DBINFO2].y = 88;
324 s[SI_DBINFO2].w = 100;
325 s[SI_DBINFO2].h = FONT_HEIGHT;
326 s[SI_DBINFO2].r = 0;
327 s[SI_DBINFO2].g = 0;
328 s[SI_DBINFO2].b = 0;
329 s[SI_DBINFO2].font = N_YELLOW;
330 s[SI_DBINFO2].align = CENTER;
331
332 s[SI_DBINFO3].name = "dbinfo3:";
333 s[SI_DBINFO3].prefix = "";
334 s[SI_DBINFO3].postfix = "";
335 s[SI_DBINFO3].content = "";
336 s[SI_DBINFO3].x = 0;
337 s[SI_DBINFO3].y = 93;
338 s[SI_DBINFO3].w = 100;
339 s[SI_DBINFO3].h = FONT_HEIGHT;
340 s[SI_DBINFO3].r = 0;
341 s[SI_DBINFO3].g = 0;
342 s[SI_DBINFO3].b = 0;
343 s[SI_DBINFO3].font = N_YELLOW;
344 s[SI_DBINFO3].align = CENTER;
345 }
346
347 /*
348 * init SDL libary and set window title
349 */
350 static void init_SDL(void)
351 {
352 if (SDL_Init(SDL_INIT_VIDEO) == -1) {
353 fprintf(stderr,
354 "Couldn't initialize SDL: %s\n", SDL_GetError());
355 exit(1);
356 }
357 /* Clean up on exit */
358 atexit(SDL_Quit);
359 /* Initialize the display */
360 if (args_info.fullscreen_flag)
361 screen = SDL_SetVideoMode(width, height, 0, SDL_FULLSCREEN);
362 else
363 screen = SDL_SetVideoMode(width, height, 0, 0);
364 if (!screen) {
365 fprintf(stderr, "Couldn't set video mode: %s\n",
366 SDL_GetError());
367 exit(1);
368 }
369 SDL_EventState(SDL_MOUSEMOTION, SDL_IGNORE);
370 SDL_EventState(SDL_MOUSEBUTTONDOWN, SDL_IGNORE);
371 SDL_EventState(SDL_MOUSEBUTTONUP, SDL_IGNORE);
372 /* Set the window manager title bar */
373 SDL_WM_SetCaption("The Gui of death that makes you blind (paraslash "
374 VERSION ")", "SFont");
375 }
376
377 /*
378 * draw rectangular frame of width FRAME_WIDTH
379 */
380 static void draw_frame(Uint8 r, Uint8 g, Uint8 b) {
381 SDL_Rect rect;
382
383 rect.x = 0;
384 rect.y = 0;
385 rect.w = width;
386 rect.h = FRAME_WIDTH;
387 SDL_FillRect(screen, &rect, SDL_MapRGB(screen->format, r, g, b));
388 SDL_UpdateRect(screen, rect.x, rect.y, rect.w, rect.h);
389
390 rect.x = 0;
391 rect.y = height - FRAME_WIDTH;
392 rect.w = width;
393 rect.h = FRAME_WIDTH;
394 SDL_FillRect(screen, &rect, SDL_MapRGB(screen->format, r, g, b));
395 SDL_UpdateRect(screen, rect.x, rect.y, rect.w, rect.h);
396
397 rect.x = 0;
398 rect.y = FRAME_WIDTH;
399 rect.w = FRAME_WIDTH;
400 rect.h = height - 2 * FRAME_WIDTH;
401 SDL_FillRect(screen, &rect, SDL_MapRGB(screen->format, r, g, b));
402 SDL_UpdateRect(screen, rect.x, rect.y, rect.w, rect.h);
403
404 rect.x = width - FRAME_WIDTH;
405 rect.y = FRAME_WIDTH;
406 rect.w = FRAME_WIDTH;
407 rect.h = height - 2 * FRAME_WIDTH;
408 SDL_FillRect(screen, &rect, SDL_MapRGB(screen->format, r, g, b));
409 SDL_UpdateRect(screen, rect.x, rect.y, rect.w, rect.h);
410 }
411
412 /*
413 * fill input rect with color
414 */
415 static void fill_input_rect(void)
416 {
417 SDL_Rect rect;
418
419 rect.x = INPUT_X;
420 rect.y = INPUT_Y;
421 rect.w = INPUT_WIDTH;
422 rect.h = INPUT_HEIGHT;
423 SDL_FillRect(screen, &rect, SDL_MapRGB(screen->format, 10, 150, 10));
424 }
425
426 /*
427 * fill output rect with color
428 */
429 static void fill_output_rect(void)
430 {
431 SDL_Rect rect;
432
433 rect.x = OUTPUT_X;
434 rect.y = OUTPUT_Y;
435 rect.w = OUTPUT_WIDTH;
436 rect.h = OUTPUT_HEIGHT;
437 SDL_FillRect(screen, &rect, SDL_MapRGB(screen->format, 0, 0, 0));
438 }
439
440 /*
441 * convert tab to space
442 */
443 static void tab2space(char *text)
444 {
445 char *p = text;
446 while (*p) {
447 if (*p == '\t')
448 *p = ' ';
449 p++;
450 }
451 }
452
453 static void print_msg(char *msg)
454 {
455 SFont_FontInfo *font = &(fonts[MSG_FONT].fontinfo);
456 char *buf = strdup(msg);
457 int len = strlen(buf);
458
459 if (!buf)
460 return;
461 while (TextWidth2(font, buf) > INPUT_WIDTH && len > 0) {
462 *(buf + len) = '\0';
463 len--;
464 }
465 fill_input_rect();
466 PutString2(screen, font, INPUT_X, INPUT_Y, buf);
467 free(buf);
468 }
469
470 static void update_all(void)
471 {
472 SDL_UpdateRect(screen, 0, 0, 0, 0);
473 }
474
475 static void update_input(void)
476 {
477 SDL_UpdateRect(screen, INPUT_X, INPUT_Y, INPUT_WIDTH, INPUT_HEIGHT);
478
479 }
480
481 /*
482 * wait for key, ignore all other events, return 0 if there is no key event
483 * pending. Otherwise return keysym of key
484 */
485 SDLKey get_key(void)
486 {
487 SDL_Event event;
488
489 while (SDL_PollEvent(&event) > 0) {
490 if(event.type != SDL_KEYDOWN)
491 continue;
492 // printf("Key pressed, scancode: 0x%x\n",
493 // event.key.keysym.scancode);
494 return event.key.keysym.sym;
495 }
496 return 0;
497 }
498
499 /*
500 * print message, wait for key (blocking), return 1 for 'q', 0 else
501 */
502 static SDLKey hit_key(char *msg)
503 {
504 SDLKey sym;
505
506 print_msg(msg);
507 update_input();
508 while (!(sym = get_key()))
509 ;
510 fill_input_rect();
511 update_input();
512 if (sym == SDLK_q)
513 return 1;
514 else
515 return 0;
516 }
517
518 /*
519 * read paraslash command from input, execute it and print results
520 */
521 static int command_handler(void)
522 {
523 FILE *pipe;
524 unsigned count = 0;
525 char text[MAXLINE]="";
526 char buf[MAXLINE]="";
527 SFont_FontInfo *font = &fonts[OUTPUT_FONT].fontinfo;
528
529 // printf("string input\n");
530 SFont_Input2(screen, &fonts[INPUT_FONT].fontinfo,
531 INPUT_X, INPUT_Y - 5, INPUT_WIDTH, text);
532 if (!strlen(text))
533 return 1;
534 if (!strcmp(text, "exit") || !strcmp(text, "quit"))
535 return 0;
536 if (text[0] == '!') {
537 if (text[1] == '\0')
538 return 1;
539 pipe = popen(text + 1, "r");
540 } else {
541 sprintf(buf, BINDIR "/para_client %s 2>&1", text);
542 pipe = popen(buf, "r");
543 }
544 if (!pipe)
545 return 0;
546 fill_output_rect();
547 while(fgets(text, MAXLINE - 1, pipe)) {
548 int len;
549
550 tab2space(text);
551 len = strlen(text);
552 // printf("string: %s\n", dest);
553 while (TextWidth2(font, text) > width - 2 * FRAME_WIDTH &&
554 len > 0) {
555 text[len] = '\0';
556 len--;
557 }
558 PutString2(screen, font, OUTPUT_X,
559 OUTPUT_Y + count * FONT_HEIGHT, text);
560 count++;
561 if (count >= NUM_LINES) {
562 update_all();
563 if (hit_key("Hit any key to continue, q to return"))
564 goto out;
565 count = 0;
566 fill_output_rect();
567 }
568 }
569 update_all();
570 hit_key("Hit any key to return");
571 out: fill_output_rect();
572 pclose(pipe);
573 return 1;
574 }
575
576
577 /*
578 * Add prefix and postfix to string, delete characters from the end
579 * if its length exceeds the max length defined in stat_items[item]
580 */
581 char *transform_string(int item)
582 {
583 struct stat_item s = stat_items[item];
584 size_t len;
585 char *ret;
586 unsigned pixels = s.w * (width - 2 * FRAME_WIDTH) / 100;
587 SFont_FontInfo *font = &(fonts[s.font].fontinfo);
588
589 ret = make_message("%s%s%s", s.prefix, s.content, s.postfix);
590 len = strlen(ret);
591 while (TextWidth2(font, ret) > pixels && len > 0) {
592 *(ret + len) = '\0';
593 len--;
594 }
595 return ret;
596 }
597
598 SDL_Surface *load_jpg(void)
599 {
600 SDL_RWops *rwop;
601 int fds[3] = {0, 1, 0};
602 pid_t pid;
603 FILE *pipe;
604
605 if (para_exec_cmdline_pid(&pid, args_info.pic_cmd_arg, fds) < 0)
606 return NULL;
607 pipe = fdopen(fds[1], "r");
608 if (!pipe)
609 return NULL;
610 if (!(rwop = SDL_RWFromFP(pipe, 0)))
611 return NULL;
612 return IMG_LoadJPG_RW(rwop);
613 }
614
615 void update_pic(void)
616 {
617 SDL_Surface *img;
618 SDL_Rect src_pic_rect = {
619 .x = 0,
620 .y = 0,
621 .w = PIC_WIDTH,
622 .h = PIC_HEIGHT,
623 };
624 SDL_Rect dest_pic_rect = {
625 .x = FRAME_WIDTH,
626 .y = OUTPUT_HEIGHT / 5,
627 .w = PIC_WIDTH,
628 .h = PIC_HEIGHT,
629 };
630
631 if (!screen)
632 return;
633
634 if (!(img = load_jpg()))
635 return;
636 SDL_FillRect(screen, &dest_pic_rect, SDL_MapRGB(screen->format,
637 0, 0, 0));
638 SDL_BlitSurface(img, &src_pic_rect, screen, &dest_pic_rect);
639 SDL_Flip(screen);
640 SDL_FreeSurface(img);
641 }
642
643 /*
644 * update status item number i.
645 */
646 static void do_update(int i)
647 {
648 static int last_played = -1;
649 SDL_Rect rect;
650 char *buf;
651 SFont_FontInfo *font = &(fonts[stat_items[i].font].fontinfo);
652 if (!stat_items[i].w)
653 return;
654
655 rect.x = stat_items[i].x * (width - FRAME_WIDTH * 2) / 100
656 + FRAME_WIDTH;
657 rect.y = stat_items[i].y * (height - 2 * FRAME_WIDTH - INPUT_HEIGHT)
658 / 100;
659 rect.w = stat_items[i].w * (width - 2 * FRAME_WIDTH) / 100;
660 rect.h = stat_items[i].h;
661 buf = transform_string(i);
662 SDL_FillRect(screen, &rect, SDL_MapRGB(screen->format,
663 stat_items[i].r, stat_items[i].g, stat_items[i].b));
664 switch(stat_items[i].align) {
665 case CENTER:
666 PutString2(screen, font,
667 rect.x + (rect.w - TextWidth2(font, buf)) / 2,
668 rect.y, buf);
669 break;
670 case LEFT:
671 PutString2(screen, font, rect.x, rect.y, buf);
672 break;
673 case RIGHT:
674 PutString2(screen, font, rect.x + (rect.w -
675 TextWidth2(font, buf)), rect.y, buf);
676 break;
677 }
678 free(buf);
679 SDL_UpdateRect(screen, rect.x, rect.y, rect.w, rect.h);
680 if (i == SI_NUM_PLAYED && atoi(stat_items[i].content) != last_played) {
681 update_pic();
682 last_played = atoi(stat_items[i].content);
683 };
684 }
685
686 /*
687 * Check if buf is a known status line. If so call do_update and return 1.
688 * Return 0 otherwise.
689 */
690 void update_status(char *buf)
691 {
692 int i;
693
694 i = stat_line_valid(buf);
695 if (i < 0)
696 return;
697 //free(stat_items[i].content);
698 stat_items[i].content = para_strdup(buf +
699 strlen(status_item_list[i]) + 1);
700 do_update(i);
701 }
702
703 /*
704 * Read stat line from pipe if pipe is ready, call update_status to
705 * display information.
706 */
707 static int draw_status(int pipe)
708 {
709 fd_set rfds;
710 int ret;
711 struct timeval tv;
712
713 tv.tv_sec = 0;
714 tv.tv_usec = 3000000;
715 FD_ZERO(&rfds);
716 FD_SET(pipe, &rfds);
717 ret = select(pipe + 1, &rfds, NULL, NULL, &tv);
718 // printf("select returned %d\n", ret);
719 if (ret <= 0)
720 return 0;
721 if (read_audiod_pipe(pipe, update_status) > 0)
722 return 1;
723 // clear_all_items();
724 free(stat_items[SI_STATUS_BAR].content);
725 stat_items[SI_STATUS_BAR].content =
726 para_strdup("audiod not running!?\n");
727 update_all();
728 sleep(1);
729 return -1;
730 }
731
732 static void clean_exit(int ret)
733 {
734 SDL_Quit();
735 exit(ret);
736 }
737
738 static void print_help(void)
739 {
740 print_msg("Hit q to quit, any other key to enter command mode");
741 }
742
743 static int configfile_exists(void)
744 {
745 if (!args_info.config_file_given) {
746 char *home = para_homedir();
747 args_info.config_file_arg = make_message(
748 "%s/.paraslash/sdl_gui.conf", home);
749 free(home);
750 }
751 return file_exists(args_info.config_file_arg);
752 }
753
754 /*
755 * MAIN
756 */
757 int main(int argc, char *argv[])
758 {
759 int i, ret, pipe;
760 SDLKey sym;
761
762 cmdline_parser(argc, argv, &args_info);
763 ret = configfile_exists();
764 // printf("w=%i,h=%i,ret=%i, cf=%s\n", width, height, ret, args_info.config_file_arg);
765
766 if (!ret && args_info.config_file_given) {
767 fprintf(stderr, "Can't read config file %s\n",
768 args_info.config_file_arg);
769 exit(EXIT_FAILURE);
770 }
771 if (ret)
772 cmdline_parser_configfile(args_info.config_file_arg,
773 &args_info, 0, 0, 0);
774 signal(SIGCHLD, SIG_IGN);
775 width = args_info.width_arg;
776 height = args_info.height_arg;
777 // printf("w=%i,h=%i,ret=%i, cf=%s\n", width, height, ret, args_info.config_file_arg);
778 init_stat_items();
779 pipe = para_open_audiod_pipe(args_info.stat_cmd_arg);
780 init_SDL();
781 for (i = 0; fonts[i].name[0]; i++) {
782 char buf[MAXLINE];
783 sprintf(buf, "%s/%s", FONTDIR, fonts[i].name);
784 /* Load the font - You don't have to use the IMGlib for this */
785 fonts[i].fontinfo.Surface = IMG_Load(buf);
786 /* Prepare the font for use */
787 InitFont2(&fonts[i].fontinfo);
788 }
789 draw_frame(FRAME_RED, FRAME_GREEN, FRAME_BLUE);
790 if (args_info.interactive_flag) {
791 print_help();
792 update_input();
793 }
794 for (;;) {
795 ret = draw_status(pipe);
796 if (ret < 0) {
797 close(pipe);
798 pipe = -1;
799 }
800 if (SDL_QuitRequested())
801 clean_exit(0);
802 while ((sym = get_key())) {
803 if (!args_info.interactive_flag)
804 clean_exit(0);
805 if (sym == SDLK_q)
806 clean_exit(0);
807 if ( sym == SDLK_LSHIFT
808 || sym == SDLK_RSHIFT
809 || sym == SDLK_LMETA
810 || sym == SDLK_RMETA
811 || sym == SDLK_RCTRL
812 || sym == SDLK_LCTRL
813 || sym == SDLK_MODE
814 || sym == SDLK_CAPSLOCK
815 || sym == SDLK_LALT
816 || sym == SDLK_RALT
817 || sym == SDLK_RSUPER
818 || sym == SDLK_LSUPER
819 || sym == SDLK_COMPOSE
820 )
821 continue;
822 if (pipe < 0) {
823 // printf("closing pipe\n");
824 kill(0, SIGINT);
825 close(pipe);
826 // printf("pipe closed\n");
827 }
828 fill_input_rect();
829 update_input();
830 if (!command_handler())
831 clean_exit(0);
832 fill_output_rect();
833 print_help();
834 update_pic();
835 SDL_UpdateRect(screen, 0, 0, 0, 0);
836 pipe = para_open_audiod_pipe(args_info.stat_cmd_arg);
837 break;
838 }
839 }
840 }
|
__label__pos
| 0.997432 |
Sim card questions...
Discussion in 'iPhone' started by harrison, Aug 21, 2010.
1. harrison
harrison New Member
Joined:
Jan 16, 2010
Messages:
138
Likes Received:
0
Device:
3G iPod touch
Hey folks
I got a couple quick questions to ask...
If I get a cap with an iphone can I use its sim card in an unlocked phone with an adapter (micro sim to sim) or is the sim locked to the iphone?
If I chop a normal sim card into a micro sim card will it work in a iphone 4?
Finally is android phone better than an iphone?
Cheers Harrison
2. leadergo
leadergo Active Member
Joined:
Aug 2, 2008
Messages:
2,384
Likes Received:
6
1. If the phone is unlocked, it stays unlocked (unless unlocked with Ultrasn0w)
2. Yes. Make sure you cut it properly.
3. IMO, no.
3. Honda
Honda Member
Joined:
Aug 12, 2010
Messages:
327
Likes Received:
0
Device:
iPhone 4 (Black)
1. I don't know.
2. Buy a Micro Sim Cutter
3. NO, all those phones are wannabes iPhones.
4. harrison
harrison New Member
Joined:
Jan 16, 2010
Messages:
138
Likes Received:
0
Device:
3G iPod touch
If I an iPhone 4 can i use the sim card in another phone.
5. Honda
Honda Member
Joined:
Aug 12, 2010
Messages:
327
Likes Received:
0
Device:
iPhone 4 (Black)
iP4 sim card is a micro sim, so no you can't.
6. harrison
harrison New Member
Joined:
Jan 16, 2010
Messages:
138
Likes Received:
0
Device:
3G iPod touch
What about using a adapter? "Micro sim to sim" / like those on ebay?
7. RandomEskimo
RandomEskimo Active Member
Joined:
Apr 27, 2010
Messages:
1,198
Likes Received:
2
Device:
iPad 2 (Black)
I see no reason why you couldn't, they don't exactly lock the sim to the phone, just the phone to the network.
8. harrison
harrison New Member
Joined:
Jan 16, 2010
Messages:
138
Likes Received:
0
Device:
3G iPod touch
Well I need to be sure because I'll be signing up for 24 months on a 79 dollar plan. It's a uk based carrier called 3 but is also in austraila where I live.
9. TakenAppleDownAPeg
TakenAppleDownAPeg Member
Joined:
Oct 6, 2009
Messages:
871
Likes Received:
5
Device:
iPhone 4 (Black)
if your getting an iPhone4 why are you going to be using another phone? If you're going to be selling it that is a rip off if your carrier makes you pay for a higher data plan than what normal phones get(thats how it is in that States at least). I would recommend a android based phone with 2.0 or later at least.
10. harrison
harrison New Member
Joined:
Jan 16, 2010
Messages:
138
Likes Received:
0
Device:
3G iPod touch
Well heres the deal my mums getting the iphone and I convinced her to give it to me. So I'll give her my phone which is some crappy phone.
Share This Page
|
__label__pos
| 0.979488 |
blob: b751f43d76ed5c9d8e168389a1238b9218bee46f [file] [log] [blame]
/*
* Mediatek Watchdog Driver
*
* Copyright (C) 2014 Matthias Brugger
*
* Matthias Brugger <[email protected]>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Based on sunxi_wdt.c
*/
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/types.h>
#include <linux/watchdog.h>
#include <linux/notifier.h>
#include <linux/reboot.h>
#include <linux/delay.h>
#define WDT_MAX_TIMEOUT 31
#define WDT_MIN_TIMEOUT 1
#define WDT_LENGTH_TIMEOUT(n) ((n) << 5)
#define WDT_LENGTH 0x04
#define WDT_LENGTH_KEY 0x8
#define WDT_RST 0x08
#define WDT_RST_RELOAD 0x1971
#define WDT_MODE 0x00
#define WDT_MODE_EN (1 << 0)
#define WDT_MODE_EXT_POL_LOW (0 << 1)
#define WDT_MODE_EXT_POL_HIGH (1 << 1)
#define WDT_MODE_EXRST_EN (1 << 2)
#define WDT_MODE_IRQ_EN (1 << 3)
#define WDT_MODE_AUTO_START (1 << 4)
#define WDT_MODE_DUAL_EN (1 << 6)
#define WDT_MODE_KEY 0x22000000
#define WDT_SWRST 0x14
#define WDT_SWRST_KEY 0x1209
#define DRV_NAME "mtk-wdt"
#define DRV_VERSION "1.0"
static bool nowayout = WATCHDOG_NOWAYOUT;
static unsigned int timeout = WDT_MAX_TIMEOUT;
struct mtk_wdt_dev {
struct watchdog_device wdt_dev;
void __iomem *wdt_base;
struct notifier_block restart_handler;
};
static int mtk_reset_handler(struct notifier_block *this, unsigned long mode,
void *cmd)
{
struct mtk_wdt_dev *mtk_wdt;
void __iomem *wdt_base;
mtk_wdt = container_of(this, struct mtk_wdt_dev, restart_handler);
wdt_base = mtk_wdt->wdt_base;
while (1) {
writel(WDT_SWRST_KEY, wdt_base + WDT_SWRST);
mdelay(5);
}
return NOTIFY_DONE;
}
static int mtk_wdt_ping(struct watchdog_device *wdt_dev)
{
struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev);
void __iomem *wdt_base = mtk_wdt->wdt_base;
iowrite32(WDT_RST_RELOAD, wdt_base + WDT_RST);
return 0;
}
static int mtk_wdt_set_timeout(struct watchdog_device *wdt_dev,
unsigned int timeout)
{
struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev);
void __iomem *wdt_base = mtk_wdt->wdt_base;
u32 reg;
wdt_dev->timeout = timeout;
/*
* One bit is the value of 512 ticks
* The clock has 32 KHz
*/
reg = WDT_LENGTH_TIMEOUT(timeout << 6) | WDT_LENGTH_KEY;
iowrite32(reg, wdt_base + WDT_LENGTH);
mtk_wdt_ping(wdt_dev);
return 0;
}
static int mtk_wdt_stop(struct watchdog_device *wdt_dev)
{
struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev);
void __iomem *wdt_base = mtk_wdt->wdt_base;
u32 reg;
reg = readl(wdt_base + WDT_MODE);
reg &= ~WDT_MODE_EN;
reg |= WDT_MODE_KEY;
iowrite32(reg, wdt_base + WDT_MODE);
return 0;
}
static int mtk_wdt_start(struct watchdog_device *wdt_dev)
{
u32 reg;
struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev);
void __iomem *wdt_base = mtk_wdt->wdt_base;
int ret;
ret = mtk_wdt_set_timeout(wdt_dev, wdt_dev->timeout);
if (ret < 0)
return ret;
reg = ioread32(wdt_base + WDT_MODE);
reg &= ~(WDT_MODE_IRQ_EN | WDT_MODE_DUAL_EN);
reg |= (WDT_MODE_EN | WDT_MODE_KEY);
iowrite32(reg, wdt_base + WDT_MODE);
return 0;
}
static const struct watchdog_info mtk_wdt_info = {
.identity = DRV_NAME,
.options = WDIOF_SETTIMEOUT |
WDIOF_KEEPALIVEPING |
WDIOF_MAGICCLOSE,
};
static const struct watchdog_ops mtk_wdt_ops = {
.owner = THIS_MODULE,
.start = mtk_wdt_start,
.stop = mtk_wdt_stop,
.ping = mtk_wdt_ping,
.set_timeout = mtk_wdt_set_timeout,
};
static int mtk_wdt_probe(struct platform_device *pdev)
{
struct mtk_wdt_dev *mtk_wdt;
struct resource *res;
int err;
mtk_wdt = devm_kzalloc(&pdev->dev, sizeof(*mtk_wdt), GFP_KERNEL);
if (!mtk_wdt)
return -ENOMEM;
platform_set_drvdata(pdev, mtk_wdt);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mtk_wdt->wdt_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(mtk_wdt->wdt_base))
return PTR_ERR(mtk_wdt->wdt_base);
mtk_wdt->wdt_dev.info = &mtk_wdt_info;
mtk_wdt->wdt_dev.ops = &mtk_wdt_ops;
mtk_wdt->wdt_dev.timeout = WDT_MAX_TIMEOUT;
mtk_wdt->wdt_dev.max_timeout = WDT_MAX_TIMEOUT;
mtk_wdt->wdt_dev.min_timeout = WDT_MIN_TIMEOUT;
mtk_wdt->wdt_dev.parent = &pdev->dev;
watchdog_init_timeout(&mtk_wdt->wdt_dev, timeout, &pdev->dev);
watchdog_set_nowayout(&mtk_wdt->wdt_dev, nowayout);
watchdog_set_drvdata(&mtk_wdt->wdt_dev, mtk_wdt);
mtk_wdt_stop(&mtk_wdt->wdt_dev);
err = watchdog_register_device(&mtk_wdt->wdt_dev);
if (unlikely(err))
return err;
mtk_wdt->restart_handler.notifier_call = mtk_reset_handler;
mtk_wdt->restart_handler.priority = 128;
err = register_restart_handler(&mtk_wdt->restart_handler);
if (err)
dev_warn(&pdev->dev,
"cannot register restart handler (err=%d)\n", err);
dev_info(&pdev->dev, "Watchdog enabled (timeout=%d sec, nowayout=%d)\n",
mtk_wdt->wdt_dev.timeout, nowayout);
return 0;
}
static void mtk_wdt_shutdown(struct platform_device *pdev)
{
struct mtk_wdt_dev *mtk_wdt = platform_get_drvdata(pdev);
if (watchdog_active(&mtk_wdt->wdt_dev))
mtk_wdt_stop(&mtk_wdt->wdt_dev);
}
static int mtk_wdt_remove(struct platform_device *pdev)
{
struct mtk_wdt_dev *mtk_wdt = platform_get_drvdata(pdev);
unregister_restart_handler(&mtk_wdt->restart_handler);
watchdog_unregister_device(&mtk_wdt->wdt_dev);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int mtk_wdt_suspend(struct device *dev)
{
struct mtk_wdt_dev *mtk_wdt = dev_get_drvdata(dev);
if (watchdog_active(&mtk_wdt->wdt_dev))
mtk_wdt_stop(&mtk_wdt->wdt_dev);
return 0;
}
static int mtk_wdt_resume(struct device *dev)
{
struct mtk_wdt_dev *mtk_wdt = dev_get_drvdata(dev);
if (watchdog_active(&mtk_wdt->wdt_dev)) {
mtk_wdt_start(&mtk_wdt->wdt_dev);
mtk_wdt_ping(&mtk_wdt->wdt_dev);
}
return 0;
}
#endif
static const struct of_device_id mtk_wdt_dt_ids[] = {
{ .compatible = "mediatek,mt6589-wdt" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, mtk_wdt_dt_ids);
static const struct dev_pm_ops mtk_wdt_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(mtk_wdt_suspend,
mtk_wdt_resume)
};
static struct platform_driver mtk_wdt_driver = {
.probe = mtk_wdt_probe,
.remove = mtk_wdt_remove,
.shutdown = mtk_wdt_shutdown,
.driver = {
.name = DRV_NAME,
.pm = &mtk_wdt_pm_ops,
.of_match_table = mtk_wdt_dt_ids,
},
};
module_platform_driver(mtk_wdt_driver);
module_param(timeout, uint, 0);
MODULE_PARM_DESC(timeout, "Watchdog heartbeat in seconds");
module_param(nowayout, bool, 0);
MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default="
__MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Matthias Brugger <[email protected]>");
MODULE_DESCRIPTION("Mediatek WatchDog Timer Driver");
MODULE_VERSION(DRV_VERSION);
|
__label__pos
| 0.979204 |
Information and Communinication Technologies (ICT)
Information and communication technologies (ICT) include any tools used to create, store, transmit, or share information. Some examples of communication technology are computers, the Internet, television, radio, phones and podcasts.
Level B1/B2
The examples of communication technologies, which continue to advance at a rapid pace, include:
• Social media platforms which allow people to create personal pages, post profile images and updates on their lives, and create a friend list of people who can see their updates; Facebook, Twitter, Instagram are the most used social media platforms.
• Blogs are personal websites where people can publish or ‘log’ information for others.
• Vlogs are “video logs” that enable people to post video online.
• Live video stream is an extension of vlogging and has the benefit of synchronicity in communication.The live vlogger can read live community comments appearing on-screen in real time and respond to their comments or questions mid-stream.
• Conferencing technology helps workplaces communicate across long distances.
• Live lecture technologies such as Blackboard collaborate, a teacher can be speaking to hundreds of students around the world at once.
• Wikis allow collaborative crowdsourcing of information. This can help members of the wikis to amass a lot of information in a short period of time.
• A group forum allows people to post questions and answers for others to respond to.
• Podcasts are packets of audio information that can be uploaded and stored on cloud technology ready for anyone to download and listen to at-will.
• A wearable technology is any information technology that is carried on the body (smart watches, smart glasses, exercise bracelets).
• Smart speakers are computerized personal assistants placed around offices and homes in order to help people complete tasks hands-free. They are usually activated using a hot word, like ‘Hey Computer’ or ‘OK Google’. Then the user asks the device questions or provides voice commands such as ‘turn out the lights’, ‘add this to the shopping list’ or ‘play a song’.
• Web chat an increasingly popular form of instant communication between friends.
• Email messages are distributed by electronic means from one computer user to one or more recipients via a network
Word List Communication Technologies
a desktop computer – настільний комп’ютер
A desktop computer is a computer that fits on or under a desk.
vital – життєво важливий
Computers have become a vital part of everyday life. Good communication is vital in a large organization.
data – дані
Data are individual facts, statistics, or items of information. A computer is an electronic machine which can be used to store, process and display data.
Data can be either singular or plural, though it is now more often used as a singular word. The word data came into English as the plural of the Latin word datum, which means “a single piece of information.” Over time, data became synonymous with “information”: it then became a singular word:
• Singular: The data is unreliable. Our data indicates that dogs like ice-cream.
• Plural: The data are all from the same source. Our data indicate that cats like country music.
a personal computer (PC) – персональний комп’ютер
Even a small personal computer can store vast amounts of information. The abbreviation PC stands for “personal computer“.
a laptop – ноутбук
You can take your laptop on the plane as hand luggage. This kind of laptop doesn’t come cheap.
embedded computer – вбудований комп’ютер
Embedded computers are found inside other machines such as fridges and cars.
a tablet – планшет
Tablets are available in many sizes and styles. Most tablets are touch operated and are between the size of a smartphone and a laptop. Tablets can be used to browse the Internet, check email, download and read books, play games, watch videos, organize content, and much more.
a tower case – системний блок
A computer case, also known as a computer chassis, tower, system unit, or cabinet, is the enclosure that contains most of the components of a personal computer (usually excluding the display, keyboard, and mouse).
hardware – комп’ютерне обладнання
Hardware refers to the physical components of the computer system. These components are mechanical and electronic.
software – програмне забезпечення
Software refers to the programs and other operating information used by a computer. A computer system consists of two main elements: the machine and programmes, or hardware and software.
device – пристрій
A computer is a device for processing information. The television receiver is an electronic device.
input – введення; вводити дані
In computer science, the general meaning of input is to provide or give something to the computer, in other words, when a computer or device is receiving a command or signal from outer sources.
input device – пристрій введення
The input devices are: mouse, keyboard, touchscreen, touchpad, microphone, webcam, scanner.
output – вихід; виводити інформацію
The central idea of a computing system is that input is processed into output.
output device – вихідний пристрій
Output devices include monitors, printers, speakers, headphones, projectors, GPS devices.
screen – екран
Move your cursor to the top of the screen. Our television has a 19-inch screen. The movie helped boost her screen career.
access – доступ; мати доступ
This account gives you instant access to your money. The only access to the city is across the bridge. People use the Internet to access goods and services.
surf the Internet – шукати в iнтернеті
Many towns and cities have cybercafes where you can surf the Internet. I surf the Internet every day.
attachment – вкладення
You can send photos to family and friends through email attachments. I wasn’t able to open that attachment. I’ll email my report to you as an attachment.
update – оновлення; оновлювати
We need an update to the mailing list. They decided to update the computer systems. We do not have the resources to update our computer software.
podcast – подкаст
A podcast is a digital audio file made available on the internet for downloading to a computer or mobile device, typically available as a series, new installments of which can be received by subscribers automatically.
e-commerce – електронна комерція
E-commerce (business conducted on the Internet) is an important part of our lives. Many companies offer new technologies like online banking and e-commerce to their clients.
download – завантажити
It would be wise to download your program to another computer before testing it. The software makes it easier to download music from the net. When you download a file, you take it from another location, e.g. a web server, and save it on a computer.
upload – переслати, вивантажити
Upload videos from your web camera or camcorder. I want to upload data to the computer network storage from my office computer. To upload means to transfer data from one computer to another, typically to one that is larger or remote from the user or functioning as a server.
post a message – опублікувати повідомлення
I posted a message about my graduation on my Fascebook page.
pay through the web / make online payment – оплатити через Інтернет
He paid for his train ticket through the Web. Do not shop or make online payments from public computers or any computer other than your own.
delete – видалити
The delete key doesn’t work. Delete the word ‘it’ and insert ‘them’. Delete her name from the list. Can I delete these old files?
cyberspace – кіберпростір
In cyberspace, newsgroups are the public bulletin-board areas where people talk about whatever interests them. You can find the answer to almost any question in cyberspace.
word processor – текстовий редактор
Word processor is software like Microsoft Word used to create texts. Which word processor do you have on your computer? Most reports are produced on a word processor.
What does it mean in computing?
• A file is a collection of data, programs, etc. stored in a computer’s memory or on a storage device under a single identifying name. We can create, save, rename, open, close, copy, move and delete files. Files can be organized into folders. We can compress files, so that they use less space.
• A folder is the virtual location for applications, documents, data or other sub-folders. Folders help in storing and organizing files and data in the computer.
• Menu is a list of computer operations.
• An icon is a small picture or symbol.
• A cursor is a little arrow on the screen that moves when you move the mouse.
• To click means to press and release the button on the mouse.
Communication Technologies: Computers and the Internet
Read and speak about communication technologies.
Computers have become a vital part of everyday life. You can find them in business, science, medicine, education, entertainment and at home.
A computer is an electronic machine which can be used to store, process and display data. There are many types of computers and among them are: a personal computer (PC) which can be a desktop with a tower case, a laptop, a netbook, a tablet, a smartphone. There are embedded computers which are found inside other machines such as fridges and cars.
A computer system consists of two main elements: the machine and programmes, or hardware and software. The central idea of a computing system is that input is processed into output. Input is data which is entered into the computer, and output is the result of processing done by the computer, usually printed out or displayed on the screen. The potential uses of computers are infinite. The most common current uses of computers in everyday life are personal, educational and commercial.
People use the Internet to
The Internet is an important educatonal tool and is used in distance learning. A Virtual Learning Environment (VLE) is a software system designed to help teachers in the management of educational courses for their students by creating a virtual classroom. Teachers and students use electronic learning tools such as videoconferences, online classrooms, whiteboards, chat rooms and so on.
There are many career choices that cannot be available without computer skills. The growing use of computers increases the need for employees with computer knowledge and training. If you are a computer literate person the career opportunities are limitless for you.
|
__label__pos
| 0.642943 |
How to Read Large File in Java
How to Read Large File in Java
In our last article, we cover How to read file in Java.This post will cover how to read large file in Java efficiently.
Reading the large file in Java efficiently is always a challenge, with new enhancements coming to Java IO package, it is becoming more and more efficient.
We have used sample file with size 1GB for all these. Reading such a large file in memory is not a good option, we will covering various methods outlining How to read large file in Java line by line.
1 Using Java API
We will cover various options how to read a file in Java efficiently using plain Java API.
1.1 Using Java BufferReader
public class ReadLargeFileByBufferReader {
public static void main(String[] args) throws IOException {
String fileName = "/tutorials/fileread/file.txt"; //this path is on my local
try (BufferedReader fileBufferReader = new BufferedReader(new FileReader(fileName))) {
String fileLineContent;
while ((fileLineContent = fileBufferReader.readLine()) != null) {
// process the line.
}
}
}
}
Output
Max Memory Used : 258MB
Time Take : 100 Seconds
1.2 Using Java 8 Stream API
public class ReadLargeFIleUsingStream {
public static void main(String[] args) throws IOException {
String fileName = "/tutorials/fileread/file.txt"; //this path is on my local
// lines(Path path, Charset cs)
try (Stream inputStream = Files.lines(Paths.get(fileName), StandardCharsets.UTF8)) {
inputStream.forEach(System.out::println);
}
}
}
Output
Max Memory Used : 390MB
Time Take : 60 Seconds
1.3 Using Java Scanner
Java Scanner API also provides a way to read large file line by line.
public class ReadLargeFileByScanner {
public static void main(String[] args) throws FileNotFoundException {
String fileName = "/Users/umesh/personal/tutorials/fileread/file.txt"; //this path is on my local
InputStream inputStream = new FileInputStream(fileName);
try(Scanner fileScanner = new Scanner(inputStream, StandardCharsets.UTF_8.name())){
while (fileScanner.hasNextLine()){
System.out.println(fileScanner.nextLine());
}
}
}
}
Output
Max Memory Used : 460MB
Time Take : 60 Seconds
2 Streaming File Using Apache Commons IO
This can also be achieved by using Apache Commons IO FileUtils.lineIterator () Method
public class ReadLargeFileUsingApacheCommonIO {
public static void main(String[] args) throws IOException {
String fileName = "/Users/umesh/personal/tutorials/fileread/file.txt"; //this path is on my local
LineIterator fileContents= FileUtils.lineIterator(new File(fileName), StandardCharsets.UTF_8.name());
while(fileContents.hasNext()){
System.out.println(fileContents.nextLine());
}
}
}
Output
Max Memory Used : 400MB
Time Take : 60 Seconds
As we saw how to read a large file in Java efficiently. Few things which you need to pay close attention
1. Reading the large file in one go will not be a good option (You will get OutOfMemoryError ).
2. We Adapted technique to read large file line by line to keep memory footprint low.
I used VisualVM to monitoring Memory, CPU and Threadpool information while running these programmes.
based on our test, BufferReader has the lowest memory footprint, though the overall execution was slow.
All the code of this article is available Over on Github. This is a Maven-based project.
References
1. Apache Commons IO
|
__label__pos
| 0.979546 |
Algorithms are at the core of computing. To be able to write an algorithm once and for all to work with any type of sequence makes your programs both simpler and safer. The ability to customize algorithms at runtime has revolutionalized software development.
The subset of the standard C++ library known as the Standard Template Library (STL) was originally designed around generic algorithms’ code that processes sequences of any type of values in a type-safe manner. The goal was to use predefined algorithms for almost every task, instead of hand-coding loops every time you need to process a collection of data. This power comes with a bit of a learning curve, however. By the time you get to the end of this chapter, you should be able to decide for yourself whether you find the algorithms addictive or too confusing to remember. If you’re like most people, you’ll resist them at first but then tend to use them more and more.
The Standard C++ Library: Generic algorithms (35.5 KiB, 8,288 hits)
Share
Tweet
Share
Pin
|
__label__pos
| 0.615361 |
Delegators
Zend\ServiceManager can instantiate delegators of requested services, decorating them as specified in a delegate factory implementing the delegator factory interface.
The delegate pattern is useful in cases when you want to wrap a real service in a decorator, or generally intercept actions being performed on the delegate in an AOP fashioned way.
Delegator factory signature
A delegator factory has the following signature:
use Interop\Container\ContainerInterface;
public function __invoke(
ContainerInterface $container,
$name,
callable $callback,
array $options = null
);
The parameters passed to the delegator factory are the following:
A Delegator factory use case
A typical use case for delegators is to handle logic before or after a method is called.
In the following example, an event is being triggered before Buzzer::buzz() is called and some output text is prepended.
The delegated object Buzzer (original object) is defined as following:
class Buzzer
{
public function buzz()
{
return 'Buzz!';
}
}
The delegator class BuzzerDelegator has the following structure:
use Zend\EventManager\EventManagerInterface;
class BuzzerDelegator extends Buzzer
{
protected $realBuzzer;
protected $eventManager;
public function __construct(Buzzer $realBuzzer, EventManagerInterface $eventManager)
{
$this->realBuzzer = $realBuzzer;
$this->eventManager = $eventManager;
}
public function buzz()
{
$this->eventManager->trigger('buzz', $this);
return $this->realBuzzer->buzz();
}
}
To use the BuzzerDelegator, you can run the following code:
$wrappedBuzzer = new Buzzer();
$eventManager = new Zend\EventManager\EventManager();
$eventManager->attach('buzz', function () { echo "Stare at the art!\n"; });
$buzzer = new BuzzerDelegator($wrappedBuzzer, $eventManager);
echo $buzzer->buzz(); // "Stare at the art!\nBuzz!"
This logic is fairly simple as long as you have access to the instantiation logic of the $wrappedBuzzer object.
You may not always be able to define how $wrappedBuzzer is created, since a factory for it may be defined by some code to which you don't have access, or which you cannot modify without introducing further complexity.
Delegator factories solve this specific problem by allowing you to wrap, decorate or modify any existing service.
A simple delegator factory for the buzzer service can be implemented as following:
use Interop\Container\ContainerInterface;
use Zend\ServiceManager\Factory\DelegatorFactoryInterface;
class BuzzerDelegatorFactory implements DelegatorFactoryInterface
{
public function __invoke(ContainerInterface $container, $name, callable $callback, array $options = null)
{
$realBuzzer = call_user_func($callback);
$eventManager = $serviceLocator->get('EventManager');
$eventManager->attach('buzz', function () { echo "Stare at the art!\n"; });
return new BuzzerDelegator($realBuzzer, $eventManager);
}
}
You can then instruct the service manager to handle the service buzzer as a delegate:
use Zend\ServiceManager\Factory\InvokableClass;
use Zend\ServiceManager\ServiceManager;
$serviceManager = new Zend\ServiceManager\ServiceManager([
'factories' => [
Buzzer::class => InvokableClass::class,
],
'delegators' => [
Buzzer::class => [
BuzzerDelegatorFactory::class,
],
],
]);
// now, when fetching Buzzer, we get a BuzzerDelegator instead
$buzzer = $serviceManager->get(Buzzer::class);
$buzzer->buzz(); // "Stare at the art!\nBuzz!"
You can specify multiple delegators for a service. Each will add one decorator around the instantiation logic of that particular service.
This latter point is the primary use case for delegators: decorating the instantiation logic for a service.
|
__label__pos
| 0.948389 |
summaryrefslogtreecommitdiffstats
path: root/.pylintrc
diff options
context:
space:
mode:
authorJelle van der Waa <[email protected]>2018-05-20 16:39:08 +0200
committerJelle van der Waa <[email protected]>2018-05-20 16:39:08 +0200
commitb811d43d61b1a6f2683fa9d5ee568e04c278dbd5 (patch)
tree5db0038bf4785f47b06ba443a4166af5f3eb0ff4 /.pylintrc
parent5194d0f242715014087c5726138522f5eadb1b45 (diff)
downloadarchweb-b811d43d61b1a6f2683fa9d5ee568e04c278dbd5.tar.gz
archweb-b811d43d61b1a6f2683fa9d5ee568e04c278dbd5.zip
pylintrc: Fix the build
Remove the unnecessary lambda check and add more pylint warnings
Diffstat (limited to '.pylintrc')
-rw-r--r--.pylintrc12
1 files changed, 6 insertions, 6 deletions
diff --git a/.pylintrc b/.pylintrc
index 95f388bb..b72ddb38 100644
--- a/.pylintrc
+++ b/.pylintrc
@@ -85,7 +85,6 @@ enable=import-self,
continue-in-finally,
not-in-loop,
return-outside-function,
- unnecessary-lambda,
unnecessary-pass,
deprecated-lambda,
deprecated-method,
@@ -93,7 +92,13 @@ enable=import-self,
useless-else-on-loop,
duplicate-argument-name,
return-outside-function,
+ missing-super-argument,
+ duplicate-key,
+ eval-used,
+ unused-format-string-argument,
+ unused-format-string-key,
+ # unnecessary-lambda,
# anomalous-unicode-escape-in-string,
# anomalous-backslash-in-string,
# function-redefined,
@@ -112,19 +117,16 @@ enable=import-self,
# lost-exception,
# assert-on-tuple,
# dangerous-default-value,
- # duplicate-key,
# useless-else-on-loop,
# expression-not-assigned,
# confusing-with-statement,
# pointless-statement,
# pointless-string-statement,
- # eval-used,
# exec-used,
# bad-builtin,
# using-constant-test,
# deprecated-lambda,
# bad-super-call,
- # missing-super-argument,
# slots-on-old-class,
# super-on-old-class,
# property-on-old-class,
@@ -143,8 +145,6 @@ enable=import-self,
# bad-format-string,
# missing-format-attribute,
# missing-format-argument-key,
- # unused-format-string-argument,
- # unused-format-string-key,
# invalid-format-index,
# bad-indentation,
# mixed-indentation,
|
__label__pos
| 0.996487 |
Prev NEXT
How Ethernet Works
Ethernet or 802.3?
You may have heard the term 802.3 used in place of or in conjunction with the term Ethernet. "Ethernet" originally referred to a networking implementation standardized by Digital, Intel and Xerox. (For this reason, it is also known as the DIX standard.)
In February 1980, the Institute of Electrical and Electronics Engineers, or IEEE (pronounced "I triple E"), created a committee to standardize network technologies. The IEEE titled this the 802 working group, named after the year and month of its formation. Subcommittees of the 802 working group separately addressed different aspects of networking. The IEEE distinguished each subcommittee by numbering it 802.X, with X representing a unique number for each subcommittee. The 802.3 group standardized the operation of a CSMA/CD network that was functionally equivalent to the DIX Ethernet.
Advertisement
Ethernet and 802.3 differ slightly in their terminology and the data format for their frames, but are in most respects identical. Today, the term Ethernet refers generically to both the DIX Ethernet implementation and the IEEE 802.3 standard.
|
__label__pos
| 0.86942 |
Fixed a possible bug in TextureAtlas.java
I think there may be a small bug in that file. Git changes here.
I think the code says it all:
if (normal != null && normal.getKey() != null) {
- addTexture(diffuse, "NormalMap", keyName);
+ addTexture(normal, "NormalMap", keyName);
}
if (specular != null && specular.getKey() != null) {
addTexture(specular, "SpecularMap", keyName);
Waiting for verification to make a pull request.
Well, the full involved OLD code is:
public boolean addGeometry(Geometry geometry) {
Texture diffuse = getMaterialTexture(geometry, "DiffuseMap");
Texture normal = getMaterialTexture(geometry, "NormalMap");
Texture specular = getMaterialTexture(geometry, "SpecularMap");
if (diffuse == null) {
diffuse = getMaterialTexture(geometry, "ColorMap");
}
if (diffuse != null && diffuse.getKey() != null) {
String keyName = diffuse.getKey().toString();
if (!addTexture(diffuse, "DiffuseMap")) {
return false;
} else {
if (normal != null && normal.getKey() != null) {
addTexture(diffuse, "NormalMap", keyName);
}
if (specular != null && specular.getKey() != null) {
addTexture(specular, "SpecularMap", keyName);
}
}
return true;
}
return true;
}
1 Like
Yeah, looks right.
|
__label__pos
| 0.999873 |
Entity risk scoring
edit
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
Entity risk scoring is an advanced Elastic Security analytics feature that helps security analysts detect changes in an entity’s risk posture, hunt for new threats, and prioritize incident response.
Entity risk scoring allows you to monitor risk score changes of hosts and users in your environment. When generating advanced scoring analytics, the risk scoring engine utilizes threats from its end-to-end XDR use cases, such as SIEM, cloud, and endpoint. It leverages the Elastic SIEM detection engine to generate host and user risk scores from the last 30 days.
It also generates risk scores on a recurring interval, and allows for easy onboarding and management. The engine is built to factor in risks from all Elastic Security use cases, and allows you to customize and control how and when risk is calculated.
Risk scoring inputs
edit
Entity risk scores are determined by the following risk inputs:
Risk input Storage location
Alerts
.alerts-security.alerts-<space-id> index alias
Asset criticality level
.asset-criticality.asset-criticality-<space-id> index alias
The resulting entity risk scores are stored in the risk-score.risk-score-<space-id> data stream alias.
• Entities without any alerts, or with only Closed alerts, are not assigned a risk score.
• To use asset criticality, you must enable the securitySolution:enableAssetCriticality advanced setting.
How is risk score calculated?
edit
1. The risk scoring engine runs hourly to aggregate Open and Acknowledged alerts from the last 30 days. For each entity, the engine processes up to 10,000 alerts.
2. The engine groups alerts by host.name or user.name, and aggregates the individual alert risk scores (kibana.alert.risk_score) such that alerts with higher risk scores contribute more than alerts with lower risk scores. The resulting aggregated risk score is assigned to the Alerts category in the entity’s risk summary.
3. The engine then verifies the entity’s asset criticality level. If there is no asset criticality assigned, the entity risk score remains equal to the aggregated score from the Alerts category. If a criticality level is assigned, the engine updates the risk score based on the default risk weight for each criticality level. The asset criticality risk input is assigned to the Asset Criticality category in the entity’s risk summary.
Asset criticality level Default risk weight
Low impact
0.5
Medium impact
1
High impact
1.5
Extreme impact
2
Asset criticality levels and default risk weights are subject to change.
4. Based on the two risk inputs, the risk scoring engine generates a single entity risk score of 0-100. It assigns a risk level by mapping the risk score to one of these levels:
Risk level Risk score
Unknown
< 20
Low
20-40
Moderate
40-70
High
70-90
Critical
> 90
Click for a risk score calculation example
This example shows how the risk scoring engine calculates the user risk score for User_A, whose asset criticality level is Extreme impact.
There are 5 open alerts associated with User_A:
• Alert 1 with alert risk score 21
• Alert 2 with alert risk score 45
• Alert 3 with alert risk score 21
• Alert 4 with alert risk score 70
• Alert 5 with alert risk score 21
To calculate the user risk score, the risk scoring engine:
1. Sorts the associated alerts in descending order of alert risk score:
• Alert 4 with alert risk score 70
• Alert 2 with alert risk score 45
• Alert 1 with alert risk score 21
• Alert 3 with alert risk score 21
• Alert 5 with alert risk score 21
2. Generates an aggregated risk score of 36.16, and assigns it to User_A's Alerts risk category.
3. Looks up User_A's asset criticality level, and identifies it as Extreme impact.
4. Generates a new risk input under the Asset Criticality risk category, with a risk contribution score of 16.95.
5. Increases the user risk score to 53.11, and assigns User_A a Moderate user risk level.
If User_A had no asset criticality level assigned, the user risk score would remain unchanged at 36.16.
Learn how to turn on the latest risk scoring engine.
|
__label__pos
| 0.941879 |
6
\$\begingroup\$
I recently wrote this code as a more versatile stand-in for Convert.ChangeType. I have a nagging feeling that there's something I might be overlooking, or that there might be a more efficient algorithm for this.
/// <summary>
/// Returns an object of type <typeparamref name="T"/> whose value is equivalent to that of the specified
/// object.
/// </summary>
/// <typeparam name="T">
/// The output type.
/// </typeparam>
/// <param name="value">
/// An object that implements <see cref="IConvertible"/> or is <see cref="Nullable{T}"/> where the underlying
/// type implements <see cref="IConvertible"/>.
/// </param>
/// <returns>
/// An object whose type is <typeparamref name="T"/> and whose value is equivalent to <paramref name="value"/>.
/// </returns>
/// <exception cref="System.ArgumentException">
/// The specified value is not defined by the enumeration (when <typeparamref name="T"/> is an enum, or Nullable{T}
/// where the underlying type is an enum).
/// </exception>
/// <exception cref="System.InvalidCastException"
/// <remarks>
/// This method works similarly to <see cref="Convert.ChangeType(object, Type)"/> with the addition of support
/// for enumerations and <see cref="Nullable{T}"/> where the underlying type is <see cref="IConvertible"/>.
/// </remarks>
internal static T ChangeType<T>(object value) {
Type type = typeof(T);
Type underlyingNullableType = Nullable.GetUnderlyingType(type);
if ((underlyingNullableType ?? type).IsEnum) {
// The specified type is an enum or Nullable{T} where T is an enum.
T convertedEnum = (T)Enum.ToObject(underlyingNullableType ?? type, value);
if (!Enum.IsDefined(underlyingNullableType ?? type, convertedEnum)) {
throw new ArgumentException("The specified value is not defined by the enumeration.", "value");
}
return convertedEnum;
} else if (type.IsValueType && underlyingNullableType == null) {
// The specified type is a non-nullable value type.
if (value == null || DBNull.Value.Equals(value)) {
throw new InvalidCastException("Cannot convert a null value to a non-nullable type.");
}
return (T)Convert.ChangeType(value, type);
}
// The specified type is a reference type or Nullable{T} where T is not an enum.
return (value == null || DBNull.Value.Equals(value)) ? default(T) : (T)Convert.ChangeType(value, underlyingNullableType ?? type);
}
\$\endgroup\$
3
\$\begingroup\$
Doc Comments
Well done! I rarely see a single method so thoroughly documented with XML doc comments. There's a catch though. Be careful about just how much you do this. It can really obstruct the readability of the actual code. I count roughly 20 lines of documentations here. Is all of that really necessary? I don't think it is.
For example:
/// <typeparam name="T">
/// The output type.
/// </typeparam>
That's..... useless. It's obvious. Don't document the obvious.
Style
I don't know a C# dev on this site that doesn't prefer new line braces to the "Egyptian" style braces that you use. If you're working with others, I would recommend you stick with the "C# style", but really, it doesn't matter. You were 100% consistent and that is what really matters at the end of the day.
Null Coalescence
if ((underlyingNullableType ?? type).IsEnum) {
WTFs per Minute
I'm sorry, but wtf? How do you expect anyone to wrap their head around that?
Null Coalescence has a time and place. This isn't it, and it's all over the place in this code. It's seriously harming readability/understandability.
I'll be clear about it. There's nothing wrong with underlyingNullableType ?? type, but once you put that inside of an if statement and call a method on it... it's.... meaningless. It becomes completely ungrokkable.
| improve this answer | |
\$\endgroup\$
• 1
\$\begingroup\$ I agreed and I think for example: Type currentType = (underlyingNullableType ?? type); - would be a good start. \$\endgroup\$ – t3chb0t Dec 21 '14 at 14:55
• \$\begingroup\$ It would be a very good start I think @t3chb0t. \$\endgroup\$ – RubberDuck Dec 21 '14 at 15:06
4
\$\begingroup\$
/// <summary>
/// Returns an object of type <typeparamref name="T"/> whose value is equivalent to that of the specified
/// object.
/// </summary>
...
/// <returns>
/// An object whose type is <typeparamref name="T"/> and whose value is equivalent to <paramref name="value"/>.
/// </returns>
I don't see any reason to repeat what was said in <summary> in <returns>. I wouldn't write <returns> here at all. (Unless you're writing a library with very high demands on documentation, like the .Net framework itself.)
underlyingNullableType ?? type
You're repeating this expression several times. You should probably extract it into a variable.
You should be consistent: either use if-else if-else and ignore the returns, or acknowledge the returns and use if-if-nothing.
| improve this answer | |
\$\endgroup\$
2
\$\begingroup\$
Review
• re-organize your code to avoid redundant code segments underlyingNullableType ?? type or DBNull.Value checks
• I just found out you have a follow-up question: pitty I didn't notice it before :) That one is also review-worthy / can be improved
Bugs
Additional reason why you can't use Enum.IsDefined: suppose we have the following enum..
[Flags]
enum A : uint
{
None = 0,
X = 1,
Y = 2
}
Conversion fails because of not defined, even though A.X | A.Y is perfectly valid.
Refactored code
To be honest, I had to completely rewrite the flow to get rid of the redundant code blocks. First check the edge case when value is null. Then extract a nonNullableType to continue to work with. Process the other edge case with Enum and its underlying type. Use the exiting API Convert.ChangeType to map the normal cases. I have made some inline comments to explain what I'm doing.
public static T ChangeType<T>(object value)
{
var isNull = IsNull(value);
var type = typeof(T);
if (isNull)
{
if (!type.IsNullAssignable())
{
throw new InvalidCastException($"Cannot cast null to {type}");
}
// null-assignable types (reference types and nullable types) can deal with null
return default;
}
// use this type from here on to avoid the redundant 'if nullable .. else ..'
var nonNullableType = type.AsNonNullable();
if (nonNullableType.IsEnum)
{
// convert the value to the underlying type of the enum and
// convert that result to the enum
var enumUnderlyingType = Enum.GetUnderlyingType(nonNullableType);
var enumUnderlyingValue = Convert.ChangeType(value, enumUnderlyingType);
return (T)Enum.ToObject(nonNullableType, enumUnderlyingValue);
}
// let .NET handle remaining convertions
return (T)Convert.ChangeType(value, nonNullableType);
}
public static bool IsNull(object value)
{
// - value == null uses the type's equality operator (usefull for Nullable)
// - ReferenceEquals checks for actual null references
// - DBNull is a special null value
return value == null
|| ReferenceEquals(null, value)
|| value is DBNull;
}
Helper class
argument checks are left out for brevity
public static class TypeExtension
{
public static bool IsNullable(this Type type)
{
return type.IsGenericType && type.GetGenericTypeDefinition().Equals(typeof(Nullable<>));
}
public static bool IsNullAssignable(this Type type)
{
return IsNullable(type) || !type.IsValueType;
}
public static Type AsNonNullable(this Type type)
{
return type.IsNullable() ? Nullable.GetUnderlyingType(type) : type;
}
}
Use Cases / Tests
[TestMethod]
public void TestConversions()
{
// Positive tests
var enumFlags = ChangeType<A>(A.X | A.Y);
var enumFlagsUnderlying = ChangeType<A>(4);
var enumFlagsUnderlyingDifferentType = ChangeType<A>(4d);
var enumFlagsNullable = ChangeType<A?>(A.X | A.Y);
var enumFlagsNullableNull = ChangeType<A?>(null);
var enumFlagsNullableDBNull = ChangeType<A?>(DBNull.Value);
var referenceType = ChangeType<RefType>(null);
var valueType = ChangeType<ValType>(default(ValType));
var valueTypeNullable = ChangeType<ValType?>(default(ValType));
var valueTypeNullableNull = ChangeType<ValType?>(null);
var enumTypeWithDifferentUnderlyingTypes = ChangeType<A>(B.Z);
var enumTypeWithDifferentUnderlyingTypesWithoutConstant = ChangeType<A>(B.ZZ);
// Negative tests
Assert.ThrowsException<InvalidCastException>(() => ChangeType<ValType>(null));
}
| improve this answer | |
\$\endgroup\$
1
\$\begingroup\$
You should be using the standard C# bracing style
if (Condition)
{
//Operations
}
instead of the Java Style bracing you have in your code, it makes it read funny.
| improve this answer | |
\$\endgroup\$
• 5
\$\begingroup\$ As a C# dev, I agree with this, but I will point out that it looks like OP was consistent. And really, consistency is more important than which style braces he chose. \$\endgroup\$ – RubberDuck Dec 20 '14 at 20:46
• \$\begingroup\$ @RubberDuck I agree. \$\endgroup\$ – Malachi Dec 20 '14 at 20:47
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.667886 |
JS 学习笔记 (七) 面向对象编程OOP
本文最后更新于 216 天前,其中的信息可能已经有所发展或是发生改变。
1、前言
创建对象有很多种方法,最常见的是字面量创建和new Object()创建。但是在需要创建多个相同结构的对象时,这两种方法就不太方便了。
如:创建多个学生信息的对象
let tom = {
name: "Tom",
age: 20,
sex: "boy",
height: 175
};
let marry = {
name: "Marry",
age: 22,
sex: "girl",
height: 165
}
2、对象工厂
2.1 实例
使用对象工厂改进上述代码,如:
function person(name, age, sex, height) {
return {
name, age, sex, height
}
}
let tom = new person("Tom", 20, "boy", 175)
let marry = new person("Marry", 22, "girl", 165)
console.log(tom);
console.log(marry);
打印出结果:
{ name: ‘Tom’, age: 20, sex: ‘boy’, height: 175 }
{ name: ‘Marry’, age: 22, sex: ‘girl’, height: 165 }
对象工厂函数创建返回的是一个新对象。
2.2 缺陷以及解决方法
• 对象工厂本身是一个普通函数,用于表达对象结构时,描述性不强
• 对象工厂没有解决对象标识的问题,即创建的对象是什么类型。
• 利用构造函数可以解决这些问题
3、构造函数
3.1 实例详解
更改上面的代码,并添加一个say函数:
function person(name, age, sex, height) {
this.name = name;
this.age = age;
this.sex = sex;
this.height = height;
this.say = function(){
console.log(`你好,我是${this.name}`);
}
}
let tom = new person("Tom", 20, "boy", 175)
let marry = new person("Marry", 22, "girl", 165)
tom.say()
marry.say()
打印出结果:
你好,我是Tom
你好,我是Marry
其中,代码中的this是对new Person的空对象进行扩展。
每个对象都具有constructor属性,用于标识对象的“类型”,如:
console.log(Tom.constructor == Person); // true
console.log(Tom.constructor == Object); // false
若要判tom和marry对象的类型,推荐使用instanceof方法。如:
// 使用instanceof方法判断对象类型
console.log(Tom instanceof Person); // true
console.log(Tom instanceof Object); // true
但是这个解决方法还是有一些缺陷。
// 判断实例所对应的方法是否相同
console.log(Tom.say == Jerry.say); // false
输出是false的原因是因为同一个构造函数的实例都会创建一个自己的方法。这样可能会极大的增加的内存的负荷。而且,同一个方法应该是完成相同的任务,没有必要创建多个相同的方法。
我们可能会想到的解决方案是将它的say()方法提取出来,如:
function Person(name, age, sex, height) {
this.name = name;
this.age = age;
this.sex = sex;
this.height = height;
}
function say() {
console.log(`你好,我是${this.name}`);
}
let tom = new person("Tom", 20, "boy", 175)
let marry = new person("Marry", 22, "girl", 165)
tom.say()
marry.say()
输出结果:
你好,我是Tom
你好,我是Marry
这样处理的好处就是say()方法不会被多次创建,但会产生一定的问题。即:
say() 为全局函数,会导致作用域混乱。而且只有Person创建的对象才能调用该方法,由于该方法是放在全局的,可能会产生内存泄漏,会让全局的函数更加臃肿。
解决方法:
利用原型模式,将方法定义在构造函数的原型对象上,可以解决这个问题
• 每个函数都有一个prototype属性,指向一个对象。
该对象包含应该由特定引用类型的实例共享的属性和方法。
该对象就是通过调用构造函数创建的对象的原型。
看下列代码:
function Person(name, age) {
this.name = name
this.age = age
}
Person.prototype.say = function () {
console.log(`你好,我是${this.name}`);
}
let Tom = new Person("Tom", 12)
Tom.say()
let Marry= new Person("Jerry", 10)
Marry.say()
console.log(Tom.say == Marry.say); // true
输出结果:
你好,我是Tom
你好,我是Marry
若要对person的原型进行属性扩展,可直接使用Person.prototype。因为当在构造函数原型上创建属性(或方法)时,会被改构造函数的额所有对象所共享。如:
Person.prototype.from = "China"
console.log(Tom.from); // China
console.log(Marry.from); // China
若Marry对象本身有from属性,则继承自Person的from就不起作用了(原型的from属性被屏蔽掉了),起作用的是Marry自身的from属性,如:
Marry.from = "America"
console.log(Tom.from); // China
console.log(Marry.from); // America
若要判断对象的原型,对象原型的构造函数,可枚举的属性:
console.log(Object.keys(Marry)); // 可枚举的自身的属性 [ 'name', 'age' ]
console.log("from" in Marry); // 可枚举的自身的属性和继承的属性 true
console.log(Object.getOwnPropertyNames(Marry)); // 可枚举的自身的属性 [ 'name', 'age' ]
console.log(Object.getPrototypeOf(Marry).constructor == Person); // true
3.2 基本原理总结
• 在内存中创建一个新对象。
• 这个新对象内部的[[Prototype]]特性被赋值为构造函数的prototype属性。
• 构造函数内部的this被赋值为这个新对象(即this指向新对象)。
• 执行构造函数内部的代码(给新对象添加属性)。
• 如果构造函数返回非空对象,则返回该对象;否则,返回刚创建的新对象。
4、原型继承
4.1 概要
• 利用构造函数构建对象结构(模板,近似于类),从语义上较为清晰的表达对象结构。
• 利用构造函数原型扩展,能方便的为该构造函数所创建的对象进行基于原型的扩展。
• 利用构造函数还可以进行基于原型对象的继承
4.2 构造函数、原型和实例的关系
• 每个构造函数都有一个原型对象。
• 原型有一个属性指回构造函数
• 实例有一个内部指针[[Prototype]]指向原型。
4.3 原型链
当对象原型是另一个构造函数的实例,如此迭代,形成了一连串的继承关系,即为原型链。原型链表达了对象与对象之间的继承关系。
举个例子:
function Person(name, age) {
this.name = name
this.age = age
}
let Marry = new Person("Marry", 10)
console.log(Marry instanceof Person); // true
console.log(Marry instanceof Object); // true
console.log(Object.getPrototypeOf(Marry)); // {}
console.log(Object.getPrototypeOf(Object.getPrototypeOf(Marry))); // [Object: null prototype] {}
4.4 原型链的问题
function Animal() {
this.colors = ["white", "black"];
}
function Mouse(name, age) {
this.name = name;
this.age = age;
}
Mouse.prototype = new Animal();
let m1 = new Mouse("Mickey", 10);
console.log(m1.name, m1.colors);
m1.colors.push("red");
let m2 = new Mouse("Miney", 9);
console.log(m2.colors);
输出结果:
Mickey [ ‘white’, ‘black’ ]
[ ‘white’, ‘black’, ‘red’ ]
这是因为当原型中包含引用值,在实例件共享的是该引用值的引用,当修改实例中的该属性时,会影响全部实例。
存在的问题:子类型在实例化时不能给父类型传递参数。
解决方案:盗用构造函数
4.5 盗用构造函数
在子类构造函数中调用父类构造函数,并将子类当前实例只定为构造函数的上下文。
function Animal(type) {
this.colors = ["white", "black"];
this.type = type
}
function Mouse(name, age, type = "Mouse") {
Animal.call(this, type) // 父类构造函数"盗用"(仅仅把Animal当做普通函数调用)
this.name = name;
this.age = age;
}
Mouse.prototype = new Animal()
let m1 = new Mouse("Mickey", 20)
m1.colors.push("red")
console.log(m1.name, m1.colors);
let m2 = new Mouse("Miney", 18)
console.log(m2.name, m2.colors);
console.log(m1 instanceof Mouse);
console.log(m1 instanceof Animal);
输出结果:
Mickey [ ‘white’, ‘black’, ‘red’ ]
Miney [ ‘white’, ‘black’ ]
true
true
存在的问题:
无法访问父类原型上的方法,或者说是没有父类
解决方法:
将原型链与盗用构造函数结合起来
4.6 原型链与盗用构造函数的组合
将两者优点集中起来,如:
function Animal(type) {
this.colors = ["white", "black"];
this.type = type
}
Animal.prototype.show = function () {
console.log(this.type, this.colors); // Mouse [ 'white', 'black', 'red' ]
}
function Mouse(name, age, type = "Mouse") {
Animal.call(this, type) // 父类构造函数“盗用”,解决传参问题
this.name = name;
this.age = age;
}
Mouse.prototype = new Animal() // 强制指定原型对象,表达继承关系
let m1 = new Mouse("Mickey", 20)
m1.colors.push("red")
console.log(m1.name, m1.colors); // Mickey [ 'white', 'black', 'red' ]
m1.show() // 通过原型继承获取
let m2 = new Mouse("Miney", 18)
console.log(m2.name, m2.colors); // Miney [ 'white', 'black' ]
m2.show()
console.log(Object.keys(m1));// [ 'colors', 'type', 'name', 'age' ]
存在的问题:
console.log(m1 instanceof Mouse); // false
console.log(m1 instanceof Animal);// true
构造函数的指向不正确,问题在于Mouse.prototype = new Animal(),将构造函数指向了Animal
解决方法:强制指定构造函数的指向为原型构造函数
Mouse.prototype.constructor = Mouse
console.log(m1.constructor == Mouse); // true
console.log(m1 instanceof Mouse); // true
4、类
4.1 概述
• ECMAScript 6 新引入的 class 关键字具有正式定义类的能力。
• 类( class)是ECMAScript 中新的基础性语法糖结构
• 虽然 ECMAScript 6 类表面上看起来可以支持正式的面向对象编程,但实际上它背后使用的仍然是原型和构造函数的概念
4.2 实例一
• [ ] 要求:
• 使用ES5实现
• 创建一个Person类,其中实例属性包括:姓名name,年龄age。
• 创建一个数组people,向该列表中存入4个Person的实例 按照年龄升序对people列表进行排序,显示排序后的姓名和年龄。
• 为Person类添加一个方法setAttr(attr, value),可以动态的为Person类的实例添加属性和属性值。**
function Person(name, age) {
this.name = name;
this.age = age
}
// 在Person的原型上复写toString方法
Person.prototype.toString = function () {
return `${this.name}:${this.age}`
}
// 在Person的原型上定义setAttr方法
Person.prototype.setAttr = function (attr, value) {
this[attr] = value
}
let people = [];
// 定义people空数组,实例化4个Person对象,并添加到people数组中
let p1 = new Person("tom", 20)
let p2 = new Person("henry", 19)
let p3 = new Person("mark", 21)
let p4 = new Person("jeorge", 23)
people.push(p1, p2, p3, p4)
// 对people数组的age属性的值从大到小的排序
people.sort((a, b) => a.age - b.age)
console.log(people);
// 输出每个实例的name和age组成的语句
people.forEach((item) => console.log(item.toString()))
// 给第一个实例添加gender属性并赋值
people[0].setAttr("gender", "Male")
console.log(people[0]);
// 输出第二个实例化对象与第一个对象进行比较,看添加gender属性是否成功
console.log(people[1]);
输出结果:
[
Person { name: ‘henry’, age: 19 },
Person { name: ‘tom’, age: 20 },
Person { name: ‘mark’, age: 21 },
Person { name: ‘jeorge’, age: 23 }
]
henry:19
tom:20
mark:21
jeorge:23
Person { name: ‘henry’, age: 19, gender: ‘Male’ }
Person { name: ‘tom’, age: 20 }
4.2 实例二
• [ ] 要求:
• 使用ES6实现实例一的功能
// 定义一个Person类
class Person {
// 在Person类里面定义的constructor构造函数,并传入参数name,age
constructor(name, age) {
// 定义name属性和age属性,并把传入的值赋给它
this.name = name;
this.age = age
};
// 定义的toString方法
toString() {
// 直接返回name和age的属性值
return `${this.name}:${this.age}`
};
// 定义setAttr方法,传入参数attr和value,表示属性和属性值
setAttr(attr, value) {
// 给当前的attr属性赋value值
this[attr] = value
}
}
let people = [
new Person("tom", 20),
new Person("henry", 19),
new Person("mark", 21),
new Person("jeorge", 23)
];
// 从大到小排序
people.sort((a, b) => a.age - b.age)
console.log(people);
// 返回每个实例的toString方法return的值
people.forEach((item) => console.log(item.toString()))
// 给第一个person实例对象添加gender属性并赋Male值
people[0].setAttr("gender", "Male")
// 打印出添加gender属性后的实例和未添加的实例进行比较
console.log(people[0]);
console.log(people[1]);
输出结果:
[
Person { name: ‘henry’, age: 19 },
Person { name: ‘tom’, age: 20 },
Person { name: ‘mark’, age: 21 },
Person { name: ‘jeorge’, age: 23 }
]
henry:19
tom:20
mark:21
jeorge:23
Person { name: ‘henry’, age: 19, gender: ‘Male’ }
Person { name: ‘tom’, age: 20 }
4.3 实例三
• [ ] 要求:
• 使用ES6实现,在 实例二 Person类的基础上,编写以下两个类继承自Person。
• Teacher类除了具有Person类的姓名和性别,还具有一个“课程”course属性。
• Student类除了具有Person类的姓名和性别,还具有一个“分数”score属性
• 通过代码分析一个Teacher实例与Person类、Teacher类、Student类以及object之间的关系
class Person {
constructor(name, age) {
this.name = name;
this.age = age
};
toString() {
return `${this.name}:${this.age}`
}
}
// 这里实现Teacher子类继承Person父类
class Teacher extends Person {
constructor(name, age, course) {
// super 子类Teacher调用父类Person的属性
super(name,age)
// 添加course属性
this.course = course
}
}
// 实例化一个Teacher对象
let Liming = new Teacher("Liming",22,"语文")
console.log(Liming);
// 这里实现Student子类继承Person父类
class Student extends Person{
constructor(name, age, score) {
// super 子类Student调用父类Person的属性
super(name,age)
// 添加score属性
this.score = score
}
}
// 实例化一个Student对象
let Tom = new Student("Tom",22,100)
console.log(Tom);
// 判断Tom 是否是Student的实例
console.log(Tom instanceof Student);
console.log(Tom instanceof Person);
console.log(Tom instanceof Object);
// 判断实例化对象Liming和Tom的原型对象
console.log(Object.getPrototypeOf(Liming));
console.log(Object.getPrototypeOf(Tom));
// 判断Student类和Teacher类的原型对象
console.log(Object.getPrototypeOf(Student));
console.log(Object.getPrototypeOf(Teacher));
输出结果:
Teacher { name: ‘Liming’, age: 22, course: ‘语文’ }
Student { name: ‘Tom’, age: 22, score: 100 }
true
true
true
Person {}
Person {}
[class Person]
[class Person]
4.4 实例四
• [ ] 要求
• 使用ES6实现,在 实例3 中Student类的基础上进行修改。
• 为Student类添加属性grade,用于表示该学生实例的年级(如:Grade One等)。
• 为学生实例的grade属性赋值时,能自动的将输入的信息转换为所有字母大写进行保存在对象中。
• 读取学生实例的grade属性时,如果该属性没有信息,则返回“NO GRADE”。
解法一:
class Person {
constructor(name, age) {
this.name = name;
this.age = age
}
}
class Student extends Person {
constructor(name, age, score) {
super(name, age)
this.score = score;
};
// 存取器属性取出数据,若没有则显示NO GRADE
get grade() {
return this.__level || "NO GRADE"
};
// 存取器属性存入数据,并转为大写
set grade(value) {
this.__level = value.toUpperCase()
}
}
let Tom = new Student("Tom", 22, 100)
let Jerry = new Student("Jerry", 22, 100)
Tom.grade = "grade one"
console.log(Tom.grade);
console.log(Jerry.grade);
输出结果:
GRADE ONE
NO GRADE
解法二:
// 在es5构造函数中定义存储器方法
let Person = function(){
return function(name,age){
this.name = name;
this.age = age;
}
}
class Students extends Person(){
#gd; //实例私有字段
constructor(name,age,score){
super(name,age)
this.score = score
}
get grade(){
return this.#gd || "No GRADES"
}
set grade(value){
this.#gd = value.toUpperCase().trim()
}
}
let p1 = new Person("Jerry",20)
let p2 = new Person("Jerry")
p1.grade = "abc"
console.log(p1);
console.log(p2);
输出:
[Function (anonymous)] { grade: ‘abc’ }
[Function (anonymous)]
本文链接:https://likepoems.com/articles/js-learning-notes-for-oop/
转载说明:本站文章若无特别说明,皆为原创,转载请注明来源:likepoems,谢谢!^^
暂无评论
发送评论 编辑评论
|´・ω・)ノ
ヾ(≧∇≦*)ゝ
(☆ω☆)
(╯‵□′)╯︵┴─┴
 ̄﹃ ̄
(/ω\)
∠( ᐛ 」∠)_
(๑•̀ㅁ•́ฅ)
→_→
୧(๑•̀⌄•́๑)૭
٩(ˊᗜˋ*)و
(ノ°ο°)ノ
(´இ皿இ`)
⌇●﹏●⌇
(ฅ´ω`ฅ)
(╯°A°)╯︵○○○
φ( ̄∇ ̄o)
ヾ(´・ ・`。)ノ"
( ง ᵒ̌皿ᵒ̌)ง⁼³₌₃
(ó﹏ò。)
Σ(っ °Д °;)っ
( ,,´・ω・)ノ"(´っω・`。)
╮(╯▽╰)╭
o(*////▽////*)q
>﹏<
( ๑´•ω•) "(ㆆᴗㆆ)
😂
😀
😅
😊
🙂
🙃
😌
😍
😘
😜
😝
😏
😒
🙄
😳
😡
😔
😫
😱
😭
💩
👻
🙌
🖕
👍
👫
👬
👭
🌚
🌝
🙈
💊
😶
🙏
🍦
🍉
😣
Source: github.com/k4yt3x/flowerhd
颜文字
Emoji
小恐龙
花!
上一篇
下一篇
|
__label__pos
| 0.999451 |
首页
加入收藏
您现在的位置 : 首页 > 最新资讯
解构数字的公式:如何找出某个数所有可能的除数?
时间:02-11 来源:最新资讯 访问次数:18
解构数字的公式:如何找出某个数所有可能的除数?
在数学中,除数是执行除法运算时用来除以另一个数的数。如果有两个整数 a 和 b(b ≠ 0),那么 b 是 a 的除数,如果存在某个整数 c 使得 a = b × c。在这种情况下,b 能够“整除” a,记作 b ∣ a。当我们说 b 是 a 的除数时,也可以称 b 为 a 的因子(因数)。这几个术语在数学中可互换来使用,都描述了数 a 与数 b 之间的这种特定关系。当我们在研究一个数,首先会想到它由哪些更小的因子乘积组成。这个过程称为因数分解。这种分解,特别是分解到无法再分解的数——素数,揭示了一个数的许多重要性质。比如 60,以下是所有可能的因子乘积组成方式:请注意,由于 60 只有 2 的平方、3 的一次幂和 5 的一次幂作为素因子,所以不会有五因子或更多因子的组合,因为那会导致乘积超过 60。探索数字的秘密:素因数分解让我们举个更合适的例子来探索这个概念,下图展示了如何找到 2520 的素因数分解的过程:对于任何正整数 n,它都可以唯一地分解为素数的幂的乘积,这些素数称为 n 的素因数。一旦我们获得了 n 的素因数分解,就能够轻松回答许多关于 n 的除数的问题。如何确定一个数的所有除数素因数分解为我们提供了一个优雅的方式来确定一个数的所有可能除数。例如,为了找到 2520 的除数,我们可以考虑其素数及其幂指数。任何能整除 2520 的数,其素因数只能包括 2、3、5 和 7。我们可以将这个数的素因数分解表示为:在这里,δ₁、δ₂、δ₃和δ₄是指数,代表对应素数可能出现的次数。针对每个素因子,这个次数可以是从0开始到该素数在2520中出现的最高次幂。对于素因数 2,有 4 种可能的值(0, 1, 2, 3);对于素因数 3,有 3 种可能(0, 1, 2);素因数 5 和 7 每个都有 2 种可能(0, 1)。因此,可以得出 2520 的除数总数:(3 + 1) × (2 + 1) × (1 + 1) × (1 + 1) = 4 × 3 × 2 × 2 = 48.实际上可以尝试将这些指数的所有不同组合列出来,下面是除数的列表:可以从每个素因数的 0 次幂开始,一直到该素因数在 2520 中的最高次幂,为了方便,这里列出除数的完整列表:1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 24, 28, 30, 35, 36, 40, 42, 45, 48, 56, 60, 63, 70, 72, 80, 84, 90, 105, 112, 120, 126, 140, 144, 168, 180, 210, 252, 280, 315, 360, 420, 504, 560, 630, 720, 840, 1260, 2520总结一下,一个数的因数可以通过将其素因数分解,然后列出这些素因数所有可能的组合以求得。2520 共有 48 个正因数。更普遍的规律这个方法不仅适用于 2520,它适用于任何正整数 n。如果知道了 n 的素因数分解式:其中 p₁, p₂, ..., pᵣ 是 n 的素因数,a₁, a₂, ..., aᵣ 是对应的指数,那么任何数 d 如果是 n 的除数,它的素因数分解也只能包含 p₁, p₂, ..., pᵣ 这些素数。对于每个素因数 pᵢ,它在除数 d 中出现的次数最多与它在 n 中出现的次数相同。因此,对于 p₁,我们有 a₁ + 1 种选择(从 0 到 a₁),对于 p₂,我们有 a₂ + 1 种选择,以此类推。由于每个素因数的每种可能的指数都可以与其他素因数的指数任意组合,我们可以得到一个公式来计算 n 的所有除数的数量,即 τ(n):通过上面这些内容,我们可以更深入地理解数的内在结构和它们的分解性质。这不仅仅是数学上的技巧,它反映了数学的一个核心思想:通过分解和构建来理解和发现数的本质特征。
本信息由网络用户发布,本站只提供信息展示,内容详情请与官方联系确认。
标签 : 最新资讯
|
__label__pos
| 0.81143 |
Converting Numeric Values to Character Strings in MySQL
Q
How To Convert Numeric Values to Character Strings in MySQL?
✍: FYIcenter.com
A
You can convert numeric values to character strings by using the CAST(value AS CHAR) function as shown in the following examples:
SELECT CAST(4123.45700 AS CHAR) FROM DUAL;
4123.45700
-- How to get rid of the last 2 '0's?
SELECT CAST(4.12345700E+3 AS CHAR) FROM DUAL;
4123.457
SELECT CAST(1/3 AS CHAR);
0.3333
-- Very poor conversion
Introduction to SQL Basics in MySQL
⇒⇒MySQL Database Tutorials
2016-10-16, 242👍, 0💬
|
__label__pos
| 0.960743 |
Part 4
Introduction to object-oriented programming
We'll now begin our journey into the world of object-oriented programming. We'll start with focusing on describing concepts and data using objects. From there on, we'll learn how to add functionality, i.e., methods to our program.
Object-oriented programming is concerned with isolating concepts of a problem domain into separate entities and then using those entities to solve problems. Concepts related to a problem can only be considered once they've been identified. In other words, we can form abstractions from problems that make those problems easier to approach.
Once concepts related to a given problem have been identified, we can also begin to build constructs that represent them into programs. These constructs, and the individual instances that are formed from them, i.e., objects, are used in solving the problem. The statement "programs are built from small, clear, and cooperative objects" may not make much sense yet. However, it will appear more sensible as we progress through the course, perhaps even self-evident.
Classes and Objects
We've already used some of the classes and objects provided by Java. A class defines the attributes of objects, i.e., the information related to them (instance variables), and their commands, i.e., their methods. The values of instance (i.e., object) variables define the internal state of an individual object, whereas methods define the functionality it offers.
A Method is a piece of source code written inside a class that's been named and has the ability to be called. A method is always part of some class and is often used to modify the internal state of an object instantiated from a class.
As an example, ArrayList is a class offered by Java, and we've made use of objects instantiated from it in our programs. Below, an ArrayList object named integers is created and some integers are added to it.
// we create an object from the ArrayList class named integers
ArrayList<Integer> integers = new ArrayList<>();
// let's add the values 15, 34, 65, 111 to the integers object
integers.add(15);
integers.add(34);
integers.add(65);
integers.add(111);
// we print the size of the integers object
System.out.println(integers.size());
An object is always instantiated by calling a method that created an object, i.e., a constructor by using the new keyword.
Loading
Loading
:
Loading interface...
:
Loading interface...
Login to view the exercise
Creating Classes
A class specifies what the objects instantiated from it are like.
• The object's variables (instance variables) specify the internal state of the object
• The object's methods specify what the object does
We'll now familiarize ourselves with creating our own classes and defining the variable that belong to them.
A class is defined to represent some meaningful entity, where a "meaningful entity" often refers to a real-world object or concept. If a computer program had to process personal information, it would perhaps be meaningful to define a seperate class Person consisting of methods and attributes related to an individual.
Let's begin. We'll assume that we have a project template that has an empty main program:
public class Main {
public static void main(String[] args) {
}
}
Let's create a class named Person. For this class, we create a separate file named Person.java. Our program now consists of two separate files since the main program is also in its own file. The Person.java file initially contains the class definition public class Person and the curly brackets that confine the contents of the class.
public class Person {
}
After creating a new file in NetBeans, the current state is as follows. In the image below, the class Person has been added to the SandboxExercise.
part4 1 class created
You can also draw a class diagram to depict a class. We'll become familiar with its notations as we go along. An empty person-named class looks like this:
part4 1 classdiagram person
A class defines the attributes and behaviors of objects that are created from it. Let's decide that each person object has a name and an age. It's natural to represent the name as a string, and the age as an integer. We'll go ahead and add these to our blueprint:
public class Person {
private String name;
private int age;
}
We specify above that each object created from the Person class has a name and an age. Variables defined inside a class are called instance variables, or object fields or object attributes. Other names also seem to exist.
Instance variables are written on the lines following the class definition public class Person {. Each variable is preceded by the keyword private. The keyword private means that the variables are "hidden" inside the object. This is known as encapsulation.
In the class diagram, the variables associated with the class are defined as "variableName: variableType". The minus sign before the variable name indicates that the variable is encapsulated (it has the keyword private).
part4 1 classdiagram person name age
We have now defined a blueprint — a class — for the person object. Each new person object has the variables name and age, which are able to hold object-specific values. The "state" of a person consists of the values assigned to their name and age.
Loading
Defining a Constructor
We want to set an initial state for an object that's created. Custom objects are created the same way as objects from pre-made Java classes, such as ArrayList, using the new keyword. It'd be convenient to pass values to the variables of that object as it's being created. For example, when creating a new person object, it's useful to be able to provide it with a name:
public static void main(String[] args) {
Person ada = new Person("Ada");
// ...
}
This is achieved by defining the method that creates the object, i.e., its constructor. The constructor is defined after the instance variables. In the following example, a constructor is defined for the Person class, which can be used to create a new Person object. The constructor sets the age of the object being created to 0, and the string passed to the constructor as a parameter as its name:
public class Person {
private String name;
private int age;
public Person(String initialName) {
this.age = 0;
this.name = initialName;
}
}
The constructor's name is always the same as the class name. The class in the example above is named Person, so the constructor will also have to be named Person. The constructor is also provided, as a parameter, the name of the person object to be created. The parameter is enclosed in parentheses and follows the constructor's name. The parentheses that contain optional parameters are followed by curly brackets. In between these brackets is the source code that the program executes when the constructor is called (e.g., new Person ("Ada")).
Objects are always created using a constructor.
A few things to note: the constructor contains the expression this.age = 0. This expression sets the instance variable age of the newly created object (i.e., "this" object's age) to 0. The second expression this.name = initialName likewise assigns the string passed as a parameter to the instance variable name of the object created.
part4 1 classdiagram person name age constructor
Loading
Defining Methods For an Object
We know how to create an object and initialize its variables. However, an object also needs methods to be able to do anything. As we've learned, a method is a named section of source code inside a class which can be invoked.
public class Person {
private String name;
private int age;
public Person(String initialName) {
this.age = 0;
this.name = initialName;
}
public void printPerson() {
System.out.println(this.name + ", age " + this.age + " years");
}
}
A method is written inside of the class beneath the constructor. The method name is preceded by public void, since the method is intended to be visible to the outside world (public), and it does not return a value (void).
In addition to the class name, instance variables and constructor, the class diagram now also includes the method printPerson. Since the method comes with the public modifier, the method name is prefixed with a plus sign. No parameters are defined for the method, so nothing is put inside the method's parentheses. The method is also marked with information indicating that it does not return a value, here void.
part4 1 classdiagram person name age constructor print
The method printPerson contains one line of code that makes use of the instance variables name and age — the class diagram says nothing about its internal implementations. Instance variables are referred to with the prefix this. All of the object's variables are visible and available from within the method.
Let's create three persons in the main program and request them to print themselves:
public class Main {
public static void main(String[] args) {
Person ada = new Person("Ada");
Person antti = new Person("Antti");
Person martin = new Person("Martin");
ada.printPerson();
antti.printPerson();
martin.printPerson();
}
}
Prints:
Sample output
Ada, age 0 years Antti, age 0 years Martin, age 0 years
This as a screencast:
Loading
Loading
Loading
Changing an Instance Variable's Value in a Method
Let's add a method to the previously created person class that increments the age of the person by a year.
public class Person {
private String name;
private int age;
public Person(String initialName) {
this.age = 0;
this.name = initialName;
}
public void printPerson() {
System.out.println(this.name + ", age " + this.age + " years");
}
// growOlder() method has been added
public void growOlder() {
this.age = this.age + 1;
}
}
The method is written inside the Person class just as the printPerson method was. The method increments the value of the instance variable age by one.
The class diagram also gets an update.
[Henkilo|-nimi:String;-ika:int|+Henkilo(String);+tulostaHenkilo():void;+vanhene():void]
Let's call the method and see what happens:
public class Main {
public static void main(String[] args) {
Person ada = new Person("Ada");
Person antti = new Person("Antti");
ada.printPerson();
antti.printPerson();
System.out.println("");
ada.growOlder();
ada.growOlder();
ada.printPerson();
antti.printPerson();
}
}
The program's print output is as follows:
Sample output
Ada, age 0 years Antti, age 0 years
Ada, age 2 years Antti, age 0 years
That is to say that when the two objects are "born" they're both zero-years old (this.age = 0; is executed in the constructor). The ada object's growOlder method is called twice. As the print output demonstrates, the age of Ada is 2 years after growing older. Calling the method on an object corresponding to Ada has no impact on the age of the other person object since each object instantiated from a class has its own instance variables.
The method can also contain conditional statements and loops. The growOlder method below limits aging to 30 years.
public class Person {
private String name;
private int age;
public Person(String initialName) {
this.age = 0;
this.name = initialName;
}
public void printPerson() {
System.out.println(this.name + ", age " + this.age + " years");
}
// no one exceeds the age of 30
public void growOlder() {
if (this.age < 30) {
this.age = this.age + 1;
}
}
}
Loading
Loading
Returning a Value From a Method
A method can return a value. The methods we've created in our objects haven't so far returned anything. This has been marked by typing the keyword void in the method definition.
public class Door {
public void knock() {
// ...
}
}
The keyword void means that the method does not return a value.
If we want the method to return a value, we need to replace the void keyword with the type of the variable to be returned. In the following example, the Teacher class has a method grade that always returns an integer-type (int) variable (in this case, the value 10). The value is always returned with the return command:
public class Teacher {
public int grade() {
return 10;
}
}
The method above returns an int type variable of value 10 when called. For the return value to be used, it needs to be assigned to a variable. This happens the same way as regular value assignment, i.e., by using the equals sign:
public static void main(String[] args) {
Teacher teacher = new Teacher();
int grading = teacher.grade();
System.out.println("The grade received is " + grading);
}
Sample output
The grade received is 10
The method's return value is assigned to a variable of type int value just as any other int value would be. The return value could also be used to form part of an expression.
public static void main(String[] args) {
Teacher first = new Teacher();
Teacher second = new Teacher();
Teacher third = new Teacher();
double average = (first.grade() + second.grade() + third.grade()) / 3.0;
System.out.println("Grading average " + average);
}
Sample output
Grading average 10.0
All the variables we've encountered so far can also be returned by a method. To sum:
• A method that returns nothing has the void modifier as the type of variable to be returned.
public void methodThatReturnsNothing() {
// the method body
}
• A method that returns an integer variable has the int modifier as the type of variable to be returned.
public int methodThatReturnsAnInteger() {
// the method body, requires a return statement
}
• A method that returns a string has the String modifier as the type of the variable to be returned
public String methodThatReturnsAString() {
// the method body, requires a return statement
}
• A method that returns a double-precision number has the double modifier as the type of the variable to be returned.
public double methodThatReturnsADouble() {
// the method body, requires a return statement
}
Let's continue with the Person class and add a returnAge method that returns the person's age.
public class Person {
private String name;
private int age;
public Person(String initialName) {
this.age = 0;
this.name = initialName;
}
public void printPerson() {
System.out.println(this.name + ", age " + this.age + " years");
}
public void growOlder() {
if (this.age < 30) {
this.age = this.age + 1;
}
}
// the added method
public int returnAge() {
return this.age;
}
The class in its entirety:
[Henkilo|-nimi:String;-ika:int|+Henkilo(String);+tulostaHenkilo():void;+vanhene():void;+palautaIka():int]
Let's illustrate how the method works:
public class Main {
public static void main(String[] args) {
Person pekka = new Person("Pekka");
Person antti = new Person("Antti");
pekka.growOlder();
pekka.growOlder();
antti.growOlder();
System.out.println("Pekka's age: " + pekka.returnAge());
System.out.println("Antti's age: " + antti.returnAge())
int combined = pekka.returnAge() + antti.returnAge();
System.out.println("Pekka's and Antti's combined age " + combined + " years");
}
}
Sample output
Pekka's age 2 Antti's age 1
Pekka's and Antti's combined age 3 years
:
Loading interface...
:
Loading interface...
Login to view the exercise
Loading
Loading
As we came to notice, methods can contain source code in the same way as other parts of our program. Methods can have conditionals or loops, and other methods can also be called from them.
Let's now write a method for the person that determines if the person is of legal age. The method returns a boolean - either true or false:
public class Person {
// ...
public boolean isOfLegalAge() {
if (this.age < 18) {
return false;
}
return true;
}
/*
The method could have been written more succinctly in the following way:
public boolean isOfLegalAge() {
return this.age >= 18;
}
*/
}
And let's test it out:
public static void main(String[] args) {
Person pekka = new Person("Pekka");
Person antti = new Person("Antti");
int i = 0;
while (i < 30) {
pekka.growOlder();
i = i + 1;
}
antti.growOlder();
System.out.println("");
if (antti.isOfLegalAge()) {
System.out.print("of legal age: ");
antti.printPerson();
} else {
System.out.print("underage: ");
antti.printPerson();
}
if (pekka.isOfLegalAge()) {
System.out.print("of legal age: ");
pekka.printPerson();
} else {
System.out.print("underage: ");
pekka.printPerson();
}
}
Sample output
underage: Antti, age 1 years of legal age: Pekka, age 30 years
Let's fine-tune the solution a bit more. In its current form, a person can only be "printed" in a way that includes both the name and the age. Situations exist, however, where we may only want to know the name of an object. Let's write a separate method for this use case:
public class Person {
// ...
public String getName() {
return this.name;
}
}
The getName method returns the instance variable name to the caller. The name of this method is somewhat strange. It is the convention in Java to name a method that returns an instance variable exactly this way, i.e., getVariableName. Such methods are often referred to as "getters".
The class as a whole:
[Henkilo|-nimi:String;-ika:int|+Henkilo(String);+tulostaHenkilo():void;+vanhene():void;+palautaIka():int;+taysiIkainen():boolean;+getNimi():String]
Let's mould the main program to use the new "getter" method:
public static void main(String[] args) {
Person pekka = new Person("Pekka");
Person antti = new Person("Antti");
int i = 0;
while (i < 30) {
pekka.growOlder();
i = i + 1;
}
antti.growOlder();
System.out.println("");
if (antti.isOfLegalAge()) {
System.out.println(antti.getName() + " is of legal age");
} else {
System.out.println(antti.getName() + " is underage");
}
if (pekka.isOfLegalAge()) {
System.out.println(pekka.getName() + " is of legal age");
} else {
System.out.println(pekka.getName() + " is underage ");
}
}
The print output is starting to turn out quit neat:
Sample output
Antti is underage Pekka is of legal age
Loading
A string representation of an object and the toString-method
We are guilty of programming in a somewhat poor style by creating a method for printing the object, i.e., the printPerson method. A preferred way is to define a method for the object that returns a "string representation" of the object. The method returning the string representation is always toString in Java. Let's define this method for the person in the following example:
public class Person {
// ...
public String toString() {
return this.name + ", age " + this.age + " years";
}
}
The toString functions as printPerson does. However, it doesn't itself print anything but instead returns a string representation, which the calling method can execute for printing as needed.
The method is used in a somewhat surprising way:
public static void main(String[] args) {
Person pekka = new Person("Pekka");
Person antti = new Person("Antti");
int i = 0;
while (i < 30) {
pekka.growOlder();
i = i + 1;
}
antti.growOlder();
System.out.println(antti); // same as System.out.println(antti.toString());
System.out.println(pekka); // same as System.out.println(pekka.toString());
}
In principle, the System.out.println method requests the object's string representation and prints it. The call to the toString method returning the string representation does not have to be written explicitly, as Java adds it automatically. When a programmer writes:
System.out.println(antti);
Java extends the call at run time to the following form:
System.out.println(antti.toString());
As such, the call System.out.println(antti) calls the toString method of the antti object and prints the string returned by it.
We can remove the now obsolete printPerson method from the Person class.
The second part of the screencast:
Loading
Method parameters
Let's continue with the Person class once more. We've decided that we want to calculate people's body mass indexes. To do this, we write methods for the person to set both the height and the weight, and also a method to calculate the body mass index. The new and changed parts of the Person object are as follows:
public class Person {
private String name;
private int age;
private int weight;
private int height;
public Person(String initialName) {
this.age = 0;
this.weight = 0;
this.height = 0;
this.name = initialName;
}
public void setHeight(int newHeight) {
this.height = newHeight;
}
public void setWeight(int newWeight) {
this.weight = newWeight;
}
public double bodyMassIndex() {
double heigthPerHundred = this.height / 100.0;
return this.weight / (heigthPerHundred * heigthPerHundred);
}
// ...
}
The instance variables height and weight were added to the person. Values for these can be set using the setHeight and setWeight methods. Java's standard naming convention is used once again, that is, if the method's only purpose is to set a value to an instance variable, then it's named as setVariableName. Value-setting methods are often called "setters". The new methods are put to use in the following case:
public static void main(String[] args) {
Person matti = new Person("Matti");
Person juhana = new Person("Juhana");
matti.setHeight(180);
matti.setWeight(86);
juhana.setHeight(175);
juhana.setWeight(64);
System.out.println(matti.getName() + ", body mass index is " + matti.bodyMassIndex());
System.out.println(juhana.getName() + ", body mass index is " + juhana.bodyMassIndex());
}
Prints:
Sample output
Matti, body mass index is 26.54320987654321 Juhana, body mass index is 20.897959183673468
A parameter and instance variable having the same name!
In the preceding example, the setHeight method sets the value of the parameter newHeight to the instance variable height:
public void setHeight(int newHeight) {
this.height = newHeight;
}
The parameter's name could also be the same as the instance variable's, so the following would also work:
public void setHeight(int height) {
this.height = height;
}
In this case, height in the method refers specifically to a parameter named height and this.height to an instance variable of the same name. For example, the following example would not work as the code does not refer to the instance variable height at all. What the code does in effect is set the height variable received as a parameter to the value it already contains:
public void setHeight(int height) {
// DON'T DO THIS!!!
height = height;
}
public void setHeight(int height) {
// DO THIS INSTEAD!!!
this.height = height;
}
Loading
Calling an internal method
The object may also call its methods. For example, if we wanted the string representation returned by toString to also tell of a person's body mass index, the object's own bodyMassIndex method should be called in the toString method:
public String toString() {
return this.name + ", age " + this.age + " years, my body mass index is " + this.bodyMassIndex();
}
So, when an object calls an internal method, the name of the method and this prefix suffice. An alternative way is to call the object's own method in the form bodyMassIndex(), whereby no emphasis is placed on the fact that the object's own bodyMassIndex method is being called:
public String toString() {
return this.name + ", age " + this.age + " years, my body mass index is " + bodyMassIndex();
}
The screencast's third part:
Loading
Loading
You have reached the end of this section! Continue to the next section:
Remember to check your points from the ball on the bottom-right corner of the material!
|
__label__pos
| 0.999687 |
Operation aging
If you see operations listed one day but you don't see the same operations listed the next, operation aging is almost certainly the reason. Operation (URL) aging is the process by which the NAM Server limits the number of unique operations it retains in its database.
Why NAM ages out operations
Aging improves reporting by making frequently seen operations more readily viewable in reports, and by deemphasizing less important information.
Aging also improves NAM processing:
• It reduces the amount of data populating memory cache, so the NAM Server is more responsive and requires less memory.
• It helps to keep the database at a manageable size.
• It shortens nightly tasks by reducing the volume of data the NAM Server needs to perform tasks against.
How aging works
1. Every time a data sample is processed, the NAM Server checks to see how many unique operations (Unique URLs) exist in the database against the number it's configured to automatically retain (RTM_URLAGING_COUNT).
2. If Unique URLs is greater than RTM_URLAGING_COUNT, the NAM Server checks to see if any operations can be aged.
An operation can be aged if:
• The operation isn't statically defined in the NAM Console under URL/Query Monitoring
• The operation hasn't been reported by the NAM Probe in N number of minutes
If a URL is used in a business unit definition, it is not aged even if the operation is not present during the period defined in RTM_URLAGING_LIFETIME.
See Managing operation aging below for instructions on configuring relevant properties.
Operations are aged FIFO (first in, first out).
How reports and metrics are affected
If you see a high volume of operations and you have a relatively low setting for URL count, it is likely that operations age out regularly and become unavailable for reports displaying data for later in the day or in previous days. You do not lose the overall data aggregated at the software service level, just data associated with that operation. Any report based on an operation or set of operations that are candidates to be aged and that cover a date range or date in the past may show different values depending on whether those operations have been aged or not.
Aging may thus lead to apparent inconsistencies in related metrics:
• Metrics reported per server or per software services may not agree with the metric reported for all operations reported under this server or software service.
• When metrics reported per business unit are defined with operations
• Aged operations are not counted within the business unit reported in the Software service, operation, and site data view
• Aged operations are counted in the Application, transaction, and tier data view
Operations that are not present in traffic for some period of time are no longer displayed on reports if they're based on the following dimensions:
• Operation
• Task
• Module
• Service
If the business unit definition is based on the above dimensions, the following dimensions also age:
• Application
• Transaction
• Transaction step
• Tier
Managing operation aging
You can turn aging on and off, and you can tweak the parameters that govern the aging process.
Checking current unique operations
The aging algorithm compares Unique URLs to RTM_URLAGING_COUNT (see below) to determine whether it is time to look for operations that should be aged.
To check the count of unique operations (Unique URLs) in the database
1. Select Diagnostics > System status in the NAM Server navigation menu.
2. Scroll to the bottom of the System status report and select the Advanced settings link.
3. Find the Unique URLs row.
Configuring aging parameters
Check and set aging parameters through the Advanced properties editor on the NAM Server.
1. Select Tools > Admin console in the NAM Server navigation menu.
2. Select Advanced properties editor.
3. Search for the property name:
• Property: RTM_URLAGING_ENABLE
Default: ON
Turns aging on or off. No server restart is needed.
• Property: RTM_URLAGING_COUNT
Default: 500 URLs/operations
Determines the number of operations/URLs stored before aging is triggered. No server restart is needed.
• Property: RTM_URLAGING_LIFETIME
Default: 420 minutes (7 hours)
Determines the lifetime of auto-learned operations/URLs. This is the number of minutes an auto-learned URL has to be inactive before it's removed from the database. This aging is performed for every processed sample. No server restart is needed.
• If a URL is used in a business unit definition, it is not aged even if the operation is not present during the period defined in RTM_URLAGING_LIFETIME.
• Property: AMD_STORAGE_PERIOD
Default: 10 days
Determines the number of days the NAM Probe stores raw data, with a time resolution of 1 interval. This aging is performed during the nightly tasks.
How to check if an operation has aged out
If your log files go back far enough, you can find the name of the operation there.
1. Open Tools > Admin console in the NAM Server navigation menu.
2. Select Browse server log.
3. Set View to url_aging.
4. Set Include lines for a search string.
5. Select Update to search for the operation.
• To focus on a specific time, you can set Begin and End to timestamps such as 20-01-29 01:25:27.298 (YY-MM-DD HH:MM:SS.sss)
|
__label__pos
| 0.725197 |
72
C++ has std::vector and Java has ArrayList, and many other languages have their own form of dynamically allocated array. When a dynamic array runs out of space, it gets reallocated into a larger area and the old values are copied into the new array. A question central to the performance of such an array is how fast the array grows in size. If you always only grow large enough to fit the current push, you'll end up reallocating every time. So it makes sense to double the array size, or multiply it by say 1.5x.
Is there an ideal growth factor? 2x? 1.5x? By ideal I mean mathematically justified, best balancing performance and wasted memory. I realize that theoretically, given that your application could have any potential distribution of pushes that this is somewhat application dependent. But I'm curious to know if there's a value that's "usually" best, or is considered best within some rigorous constraint.
I've heard there's a paper on this somewhere, but I've been unable to find it.
10 Answers 10
39
It will entirely depend on the use case. Do you care more about the time wasted copying data around (and reallocating arrays) or the extra memory? How long is the array going to last? If it's not going to be around for long, using a bigger buffer may well be a good idea - the penalty is short-lived. If it's going to hang around (e.g. in Java, going into older and older generations) that's obviously more of a penalty.
There's no such thing as an "ideal growth factor." It's not just theoretically application dependent, it's definitely application dependent.
2 is a pretty common growth factor - I'm pretty sure that's what ArrayList and List<T> in .NET uses. ArrayList<T> in Java uses 1.5.
EDIT: As Erich points out, Dictionary<,> in .NET uses "double the size then increase to the next prime number" so that hash values can be distributed reasonably between buckets. (I'm sure I've recently seen documentation suggesting that primes aren't actually that great for distributing hash buckets, but that's an argument for another answer.)
88
I remember reading many years ago why 1.5 is preferred over two, at least as applied to C++ (this probably doesn't apply to managed languages, where the runtime system can relocate objects at will).
The reasoning is this:
1. Say you start with a 16-byte allocation.
2. When you need more, you allocate 32 bytes, then free up 16 bytes. This leaves a 16-byte hole in memory.
3. When you need more, you allocate 64 bytes, freeing up the 32 bytes. This leaves a 48-byte hole (if the 16 and 32 were adjacent).
4. When you need more, you allocate 128 bytes, freeing up the 64 bytes. This leaves a 112-byte hole (assuming all previous allocations are adjacent).
5. And so and and so forth.
The idea is that, with a 2x expansion, there is no point in time that the resulting hole is ever going to be large enough to reuse for the next allocation. Using a 1.5x allocation, we have this instead:
1. Start with 16 bytes.
2. When you need more, allocate 24 bytes, then free up the 16, leaving a 16-byte hole.
3. When you need more, allocate 36 bytes, then free up the 24, leaving a 40-byte hole.
4. When you need more, allocate 54 bytes, then free up the 36, leaving a 76-byte hole.
5. When you need more, allocate 81 bytes, then free up the 54, leaving a 130-byte hole.
6. When you need more, use 122 bytes (rounding up) from the 130-byte hole.
• 2
A random forum post I found (objectmix.com/c/…) reasons similarly. A poster claims that (1+sqrt(5))/2 is the upper limit for reuse. – Naaff Jul 8 '09 at 21:05
• 16
If that claim is correct, then phi (== (1 + sqrt(5)) / 2) is indeed the optimal number to use. – Chris Jester-Young Jul 8 '09 at 21:07
• 1
I like this answer because it reveals the rationale of 1.5x versus 2x, but Jon's is technically most correct for the way I stated it. I should have just asked why 1.5 has been recommended in the past :p – Joseph Garvin Apr 15 '10 at 15:00
• 4
Facebook uses 1.5 in it's FBVector implementation, article here explains why 1.5 is optimal for FBVector. – csharpfolk Nov 4 '14 at 18:13
• 1
@jackmott Right, exactly as my answer noted: "this probably doesn't apply to managed languages, where the runtime system can relocate objects at will". – Chris Jester-Young Aug 16 '16 at 13:12
37
Ideally (in the limit as n → ∞), it's the golden ratio: ϕ = 1.618...
In practice, you want something close, like 1.5.
The reason is that you want to be able to reuse older memory blocks, to take advantage of caching and avoid constantly making the OS give you more memory pages. The equation you'd solve to ensure this reduces to xn − 1 − 1 = xn + 1xn, whose solution approaches x = ϕ for large n.
• +1, I hope you don't mind removing the overly bold font. – 2501 Nov 4 '14 at 19:45
11
One approach when answering questions like this is to just "cheat" and look at what popular libraries do, under the assumption that a widely used library is, at the very least, not doing something horrible.
So just checking very quickly, Ruby (1.9.1-p129) appears to use 1.5x when appending to an array, and Python (2.6.2) uses 1.125x plus a constant (in Objects/listobject.c):
/* This over-allocates proportional to the list size, making room
* for additional growth. The over-allocation is mild, but is
* enough to give linear-time amortized behavior over a long
* sequence of appends() in the presence of a poorly-performing
* system realloc().
* The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
*/
new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);
/* check for integer overflow */
if (new_allocated > PY_SIZE_MAX - newsize) {
PyErr_NoMemory();
return -1;
} else {
new_allocated += newsize;
}
newsize above is the number of elements in the array. Note well that newsize is added to new_allocated, so the expression with the bitshifts and ternary operator is really just calculating the over-allocation.
• So it grows the array from n to n + (n/8 + (n<9?3:6)), which means the growth factor, in the question's terminology, is 1.25x (plus a constant). – ShreevatsaR Jul 8 '09 at 20:58
• Wouldn't it be 1.125x plus a constant? – Jason Creighton Jul 8 '09 at 21:15
• Er right, 1/8=0.125. My mistake. – ShreevatsaR Jul 9 '09 at 16:39
7
Let's say you grow the array size by x. So assume you start with size T. The next time you grow the array its size will be T*x. Then it will be T*x^2 and so on.
If your goal is to be able to reuse the memory that has been created before, then you want to make sure the new memory you allocate is less than the sum of previous memory you deallocated. Therefore, we have this inequality:
T*x^n <= T + T*x + T*x^2 + ... + T*x^(n-2)
We can remove T from both sides. So we get this:
x^n <= 1 + x + x^2 + ... + x^(n-2)
Informally, what we say is that at nth allocation, we want our all previously deallocated memory to be greater than or equal to the memory need at the nth allocation so that we can reuse the previously deallocated memory.
For instance, if we want to be able to do this at the 3rd step (i.e., n=3), then we have
x^3 <= 1 + x
This equation is true for all x such that 0 < x <= 1.3 (roughly)
See what x we get for different n's below:
n maximum-x (roughly)
3 1.3
4 1.4
5 1.53
6 1.57
7 1.59
22 1.61
Note that the growing factor has to be less than 2 since x^n > x^(n-2) + ... + x^2 + x + 1 for all x>=2.
• You seem to claim that you can already reuse the previously deallocated memory at the 2nd allocation with a factor of 1.5. This is not true (see above). Let me know if I misunderstood you. – awx Feb 22 '13 at 12:54
• At 2nd allocation you are allocating 1.5*1.5*T = 2.25*T while total deallocation you will be doing until then is T + 1.5*T = 2.5*T. So 2.5 is greater than 2.25. – CEGRD Feb 23 '13 at 16:22
• Ah, I should read more carefully; all you say is that the total deallocated memory will be more than the allocated memory at the nth allocation, not that you can reuse it at the nth allocation. – awx Feb 28 '13 at 14:11
4
It really depends. Some people analyze common usage cases to find the optimal number.
I've seen 1.5x 2.0x phi x, and power of 2 used before.
• Phi! That's a nice number to use. I should start using it from now on. Thanks! +1 – Chris Jester-Young Jul 8 '09 at 20:40
• I don't understand...why phi? What properties does it have that makes it suitable for this? – Jason Creighton Jul 8 '09 at 20:55
• 4
@Jason: phi makes for a Fibonacci sequence, so the next allocation size is the sum of the current size and the previous size. This allows for moderate rate of growth, faster than 1.5 but not 2 (see my post as to why >= 2 is not a good idea, at least for unmanaged languages). – Chris Jester-Young Jul 8 '09 at 21:03
• 1
@Jason: Also, according to a commenter to my post, any number > phi is in fact a bad idea. I haven't done the math myself to confirm this, so take it with a grain of salt. – Chris Jester-Young Jul 8 '09 at 21:09
2
If you have a distribution over array lengths, and you have a utility function that says how much you like wasting space vs. wasting time, then you can definitely choose an optimal resizing (and initial sizing) strategy.
The reason the simple constant multiple is used, is obviously so that each append has amortized constant time. But that doesn't mean you can't use a different (larger) ratio for small sizes.
In Scala, you can override loadFactor for the standard library hash tables with a function that looks at the current size. Oddly, the resizable arrays just double, which is what most people do in practice.
I don't know of any doubling (or 1.5*ing) arrays that actually catch out of memory errors and grow less in that case. It seems that if you had a huge single array, you'd want to do that.
I'd further add that if you're keeping the resizable arrays around long enough, and you favor space over time, it might make sense to dramatically overallocate (for most cases) initially and then reallocate to exactly the right size when you're done.
1
I agree with Jon Skeet, even my theorycrafter friend insists that this can be proven to be O(1) when setting the factor to 2x.
The ratio between cpu time and memory is different on each machine, and so the factor will vary just as much. If you have a machine with gigabytes of ram, and a slow CPU, copying the elements to a new array is a lot more expensive than on a fast machine, which might in turn have less memory. It's a question that can be answered in theory, for a uniform computer, which in real scenarios doesnt help you at all.
• 1
To elaborate, doubling the array size means that you get amotized O(1) inserts. The idea is that every time you insert an element, you copy an element from the old array as well. Lets say you have an array of size m, with m elements in it. When adding element m+1, there is no space, so you allocate a new array of size 2m. Instead of copying all the first m elements, you copy one every time you insert a new element. This minimize the variance (save for the allocation of the memory), and once you have inserted 2m elements, you will have copied all elements from the old array. – hvidgaard Nov 4 '14 at 10:06
0
I know it is an old question, but there are several things that everyone seems to be missing.
First, this is multiplication by 2: size << 1. This is multiplication by anything between 1 and 2: int(float(size) * x), where x is the number, the * is floating point math, and the processor has to run additional instructions for casting between float and int. In other words, at the machine level, doubling takes a single, very fast instruction to find the new size. Multiplying by something between 1 and 2 requires at least one instruction to cast size to a float, one instruction to multiply (which is float multiplication, so it probably takes at least twice as many cycles, if not 4 or even 8 times as many), and one instruction to cast back to int, and that assumes that your platform can perform float math on the general purpose registers, instead of requiring the use of special registers. In short, you should expect the math for each allocation to take at least 10 times as long as a simple left shift. If you are copying a lot of data during the reallocation though, this might not make much of a difference.
Second, and probably the big kicker: Everyone seems to assume that the memory that is being freed is both contiguous with itself, as well as contiguous with the newly allocated memory. Unless you are pre-allocating all of the memory yourself and then using it as a pool, this is almost certainly not the case. The OS might occasionally end up doing this, but most of the time, there is going to be enough free space fragmentation that any half decent memory management system will be able to find a small hole where your memory will just fit. Once you get to really bit chunks, you are more likely to end up with contiguous pieces, but by then, your allocations are big enough that you are not doing them frequently enough for it to matter anymore. In short, it is fun to imagine that using some ideal number will allow the most efficient use of free memory space, but in reality, it is not going to happen unless your program is running on bare metal (as in, there is no OS underneath it making all of the decisions).
My answer to the question? Nope, there is no ideal number. It is so application specific that no one really even tries. If your goal is ideal memory usage, you are pretty much out of luck. For performance, less frequent allocations are better, but if we went just with that, we could multiply by 4 or even 8! Of course, when Firefox jumps from using 1GB to 8GB in one shot, people are going to complain, so that does not even make sense. Here are some rules of thumb I would go by though:
If you cannot optimize memory usage, at least don't waste processor cycles. Multiplying by 2 is at least an order of magnitude faster than doing floating point math. It might not make a huge difference, but it will make some difference at least (especially early on, during the more frequent and smaller allocations).
Don't overthink it. If you just spent 4 hours trying to figure out how to do something that has already been done, you just wasted your time. Totally honestly, if there was a better option than *2, it would have been done in the C++ vector class (and many other places) decades ago.
Lastly, if you really want to optimize, don't sweat the small stuff. Now days, no one cares about 4KB of memory being wasted, unless they are working on embedded systems. When you get to 1GB of objects that are between 1MB and 10MB each, doubling is probably way too much (I mean, that is between 100 and 1,000 objects). If you can estimate expected expansion rate, you can level it out to a linear growth rate at a certain point. If you expect around 10 objects per minute, then growing at 5 to 10 object sizes per step (once every 30 seconds to a minute) is probably fine.
What it all comes down to is, don't over think it, optimize what you can, and customize to your application (and platform) if you must.
• 9
Of course n + n >> 1 is the same as 1.5 * n. It is fairly easy to come up with similar tricks for every practical growth factor you can think of. – Björn Lindqvist Oct 13 '16 at 1:11
• This is a good point. Note, however, that outside of ARM, this at least doubles the number of instructions. (Many ARM instructions, including the add instruction, can do an optional shift on one of the arguments, allowing your example to work in a single instruction. Most architectures can't do this though.) No, in most cases, doubling the number of instructions from one to two is not a significant issue, but for more complex growth factors where the math is more complex, it could make a performance difference for a sensitive program. – Rybec Arethdar Oct 23 '18 at 3:40
0
Another two cents
• Most computers have virtual memory! In the physical memory you can have random pages everywhere which are displayed as a single contiguous space in your program's virtual memory. The resolving of the indirection is done by the hardware. Virtual memory exhaustion was a problem on 32 bit systems, but it is really not a problem anymore. So filling the hole is not a concern anymore (except special environments). Since Windows 7 even Microsoft supports 64 bit without extra effort. @ 2011
• O(1) is reached with any r > 1 factor. Same mathematical proof works not only for 2 as parameter.
• r = 1.5 can be calculated with old*3/2 so there is no need for floating point operations. (I say /2 because compilers will replace it with bit shifting in the generated assembly code if they see fit.)
• MSVC went for r = 1.5, so there is at least one major compiler that does not use 2 as ratio.
As mentioned by someone 2 feels better than 8. And also 2 feels better than 1.1.
My feeling is that 1.5 is a good default. Other than that it depends on the specific case.
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.658818 |
Export (0) Print
Expand All
MemoryStream Class
Creates a stream whose backing store is memory.
For a list of all members of this type, see MemoryStream Members.
System.Object
System.MarshalByRefObject
System.IO.Stream
System.IO.MemoryStream
[Visual Basic]
<Serializable>
Public Class MemoryStream
Inherits Stream
[C#]
[Serializable]
public class MemoryStream : Stream
[C++]
[Serializable]
public __gc class MemoryStream : public Stream
[JScript]
public
Serializable
class MemoryStream extends Stream
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Remarks
For an example of creating a file and writing text to a file, see Writing Text to a File. For an example of reading text from a file, see Reading Text from a File. For an example of reading from and writing to a binary file, see Reading and Writing to a Newly Created Data File.
The MemoryStream class creates streams that have memory as a backing store instead of a disk or a network connection. MemoryStream encapsulates data stored as an unsigned byte array that is initialized upon creation of a MemoryStream object, or the array can be created as empty. The encapsulated data is directly accessible in memory. Memory streams can reduce the need for temporary buffers and files in an application.
The current position of a stream is the position at which the next read or write operation could take place. The current position can be retrieved or set through the Seek method. When a new instance of MemoryStream is created, the current position is set to zero.
Memory streams created with an unsigned byte array provide a non-resizable stream view of the data, and can only be written to. When using a byte array, you can neither append to nor shrink the stream, although you might be able to modify the existing contents depending on the parameters passed into the constructor. Empty memory streams are resizable, and can be written to and read from.
Example
[Visual Basic, C#, C++] The following code example shows how to read and write data using memory as a backing store.
[Visual Basic]
Imports System
Imports System.IO
Imports System.Text
Module MemStream
Sub Main()
Dim count As Integer
Dim byteArray As Byte()
Dim charArray As Char()
Dim uniEncoding As New UnicodeEncoding()
' Create the data to write to the stream.
Dim firstString As Byte() = _
uniEncoding.GetBytes("Invalid file path characters are: ")
Dim secondString As Byte() = _
uniEncoding.GetBytes(Path.InvalidPathChars)
Dim memStream As New MemoryStream(100)
Try
' Write the first string to the stream.
memStream.Write(firstString, 0 , firstString.Length)
' Write the second string to the stream, byte by byte.
count = 0
While(count < secondString.Length)
memStream.WriteByte(secondString(count))
count += 1
End While
' Write the stream properties to the console.
Console.WriteLine( _
"Capacity = {0}, Length = {1}, Position = {2}", _
memStream.Capacity.ToString(), _
memStream.Length.ToString(), _
memStream.Position.ToString())
' Set the stream position to the beginning of the stream.
memStream.Seek(0, SeekOrigin.Begin)
' Read the first 20 bytes from the stream.
byteArray = _
New Byte(CType(memStream.Length, Integer)){}
count = memStream.Read(byteArray, 0, 20)
' Read the remaining Bytes, Byte by Byte.
While(count < memStream.Length)
byteArray(count) = _
Convert.ToByte(memStream.ReadByte())
count += 1
End While
' Decode the Byte array into a Char array
' and write it to the console.
charArray = _
New Char(uniEncoding.GetCharCount( _
byteArray, 0, count)){}
uniEncoding.GetDecoder().GetChars( _
byteArray, 0, count, charArray, 0)
Console.WriteLine(charArray)
Finally
memStream.Close()
End Try
End Sub
End Module
[C#]
using System;
using System.IO;
using System.Text;
class MemStream
{
static void Main()
{
int count;
byte[] byteArray;
char[] charArray;
UnicodeEncoding uniEncoding = new UnicodeEncoding();
// Create the data to write to the stream.
byte[] firstString = uniEncoding.GetBytes(
"Invalid file path characters are: ");
byte[] secondString = uniEncoding.GetBytes(
Path.InvalidPathChars);
using(MemoryStream memStream = new MemoryStream(100))
{
// Write the first string to the stream.
memStream.Write(firstString, 0 , firstString.Length);
// Write the second string to the stream, byte by byte.
count = 0;
while(count < secondString.Length)
{
memStream.WriteByte(secondString[count++]);
}
// Write the stream properties to the console.
Console.WriteLine(
"Capacity = {0}, Length = {1}, Position = {2}\n",
memStream.Capacity.ToString(),
memStream.Length.ToString(),
memStream.Position.ToString());
// Set the position to the beginning of the stream.
memStream.Seek(0, SeekOrigin.Begin);
// Read the first 20 bytes from the stream.
byteArray = new byte[memStream.Length];
count = memStream.Read(byteArray, 0, 20);
// Read the remaining bytes, byte by byte.
while(count < memStream.Length)
{
byteArray[count++] =
Convert.ToByte(memStream.ReadByte());
}
// Decode the byte array into a char array
// and write it to the console.
charArray = new char[uniEncoding.GetCharCount(
byteArray, 0, count)];
uniEncoding.GetDecoder().GetChars(
byteArray, 0, count, charArray, 0);
Console.WriteLine(charArray);
}
}
}
[C++]
#using <mscorlib.dll>
using namespace System;
using namespace System::IO;
using namespace System::Text;
void main()
{
int count;
Byte byteArray __gc[];
Char charArray __gc[];
UnicodeEncoding* uniEncoding = new UnicodeEncoding();
// Create the data to write to the stream.
Byte firstString __gc[] =
uniEncoding->GetBytes(S"Invalid file path characters are: ");
Byte secondString __gc[] =
uniEncoding->GetBytes(Path::InvalidPathChars);
MemoryStream* memStream = new MemoryStream(100);
try
{
// Write the first string to the stream.
memStream->Write(firstString, 0 , firstString->Length);
// Write the second string to the stream, byte by byte.
count = 0;
while(count < secondString->Length)
{
memStream->WriteByte(secondString[count++]);
}
// Write the stream properties to the console.
Console::WriteLine(S"Capacity = {0}, Length = {1}, "
S"Position = {2}\n", memStream->Capacity.ToString(),
memStream->Length.ToString(),
memStream->Position.ToString());
// Set the stream position to the beginning of the stream.
memStream->Seek(0, SeekOrigin::Begin);
// Read the first 20 bytes from the stream.
byteArray = new Byte __gc[memStream->Length];
count = memStream->Read(byteArray, 0, 20);
// Read the remaining bytes, byte by byte.
while(count < memStream->Length)
{
byteArray[count++] =
Convert::ToByte(memStream->ReadByte());
}
// Decode the Byte array into a Char array
// and write it to the console.
charArray =
new Char __gc[uniEncoding->GetCharCount(
byteArray, 0, count)];
uniEncoding->GetDecoder()->GetChars(
byteArray, 0, count, charArray, 0);
Console::WriteLine(charArray);
}
__finally
{
memStream->Close();
}
}
[JScript] No example is available for JScript. To view a Visual Basic, C#, or C++ example, click the Language Filter button Language Filter in the upper-left corner of the page.
Requirements
Namespace: System.IO
Platforms: Windows 98, Windows NT 4.0, Windows Millennium Edition, Windows 2000, Windows XP Home Edition, Windows XP Professional, Windows Server 2003 family, .NET Compact Framework
Assembly: Mscorlib (in Mscorlib.dll)
See Also
MemoryStream Members | System.IO Namespace | Working with I/O | Reading Text from a File | Writing Text to a File
Show:
© 2014 Microsoft
|
__label__pos
| 0.990384 |
Stack Overflow is a community of 4.7 million programmers, just like you, helping each other.
Join them; it only takes a minute:
Sign up
Join the Stack Overflow community to:
1. Ask programming questions
2. Answer and help your peers
3. Get recognized for your expertise
Do you know of any way to delete all of the entries stored in Core Data? My schema should stay the same; I just want to reset it to blank.
Edit
I'm looking to do this programmatically so that a user can essentially hit a reset button.
share|improve this question
1
Many of the answers below are dated. Use NSBatchDeleteRequest. stackoverflow.com/a/31961330/3681880 – Suragch Aug 12 '15 at 9:36
21 Answers 21
up vote 176 down vote accepted
You can still delete the file programmatically, using the NSFileManager:removeItemAtPath:: method.
NSPersistentStore *store = ...;
NSError *error;
NSURL *storeURL = store.URL;
NSPersistentStoreCoordinator *storeCoordinator = ...;
[storeCoordinator removePersistentStore:store error:&error];
[[NSFileManager defaultManager] removeItemAtPath:storeURL.path error:&error];
Then, just add the persistent store back to ensure it is recreated properly.
The programmatic way for iterating through each entity is both slower and prone to error. The use for doing it that way is if you want to delete some entities and not others. However you still need to make sure you retain referential integrity or you won't be able to persist your changes.
Just removing the store and recreating it is both fast and safe, and can certainly be done programatically at runtime.
Update for iOS5+
With the introduction of external binary storage (allowsExternalBinaryDataStorage or Store in External Record File) in iOS 5 and OS X 10.7, simply deleting files pointed by storeURLs is not enough. You'll leave the external record files behind. Since the naming scheme of these external record files is not public, I don't have a universal solution yet. – an0 May 8 '12 at 23:00
share|improve this answer
9
I know how to properly retrieve the storeCoordinator. However I dont know how to get the persistentStore. So could you please give a proper example instead of just: NSPersistentStore * store = ...; – Pascal Klein Jun 14 '11 at 12:33
10
[[NSFileManager defaultManager] removeItemAtURL:storeURL error:&error] is better. – an0 Jun 23 '11 at 18:22
4
NSError *error = nil; is better – Tony Oct 14 '11 at 10:02
2
@Pascal If you can get the store coordinator then yo have access to all its persistent stores through the persistentStores property. – Mihai Damian Feb 9 '12 at 9:53
2
Example code including how to recreate a new empty store here: stackoverflow.com/a/8467628 – Joshua C. Lerner Nov 2 '12 at 2:35
You can delete the sqllite file - but I choose to do it by purging the tables individually with a functions:
- (void) deleteAllObjects: (NSString *) entityDescription {
NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:entityDescription inManagedObjectContext:_managedObjectContext];
[fetchRequest setEntity:entity];
NSError *error;
NSArray *items = [_managedObjectContext executeFetchRequest:fetchRequest error:&error];
[fetchRequest release];
for (NSManagedObject *managedObject in items) {
[_managedObjectContext deleteObject:managedObject];
DLog(@"%@ object deleted",entityDescription);
}
if (![_managedObjectContext save:&error]) {
DLog(@"Error deleting %@ - error:%@",entityDescription,error);
}
}
The reason I chose to do it table by table is that it makes me confirm as I am doing the programming that deleting the contents of the table is sensible and there is not data that I would rather keep.
Doing it this will is much slower than just deleting the file and I will change to a file delete if I this method takes too long.
share|improve this answer
Great solution. Thanks. What's DLog()? – Michael Grinich Jul 3 '09 at 12:15
Ah yes - sorry that is a special function I use that only does an NSLog when the build is a DEBUG build - just replace with NSLog. – Grouchal Jul 3 '09 at 14:33
5
You can see an implementation of DLog here: cimgf.com/2009/01/24/dropping-nslog-in-release-builds – Matt Long Oct 1 '09 at 17:28
3
This works nicely for me. But to make it go faster, is there a way to delete all the objects of a certain entity with one command? Like in SQL you could do something like, DROP TABLE entity_name. I don't want to delete the whole SQL file because I only want to delete all objects of a specific entity, not other entities. – MattDiPasquale Aug 29 '10 at 5:51
8
Use NSDictionary *allEntities = _managedObjectModel.entitiesByName; to get all entities in your model and then you can iterate over the keys in this NSDictionary to purge all entities in the store. – adam0101 Feb 25 '12 at 20:04
I've written a clearStores method that goes through every store and delete it both from the coordinator and the filesystem (error handling left aside):
NSArray *stores = [persistentStoreCoordinator persistentStores];
for(NSPersistentStore *store in stores) {
[persistentStoreCoordinator removePersistentStore:store error:nil];
[[NSFileManager defaultManager] removeItemAtPath:store.URL.path error:nil];
}
[persistentStoreCoordinator release], persistentStoreCoordinator = nil;
This method is inside a coreDataHelper class that takes care of (among other things) creating the persistentStore when it's nil.
share|improve this answer
Is the complete source code available elsewhere ? – onmyway133 Jul 27 '14 at 10:46
"no known class method for selector 'persistentStores'" – Aviram Netanel Jun 14 '15 at 15:55
I remove all data from core data on a button Event in a HomeViewController class: This article helped me so much I figured I'd contribute.
-(IBAction)buttonReset:(id)sender
{
NSLog(@"buttonReset Pressed");
//Erase the persistent store from coordinator and also file manager.
NSPersistentStore *store = [self.persistentStoreCoordinator.persistentStores lastObject];
NSError *error = nil;
NSURL *storeURL = store.URL;
[self.persistentStoreCoordinator removePersistentStore:store error:&error];
[[NSFileManager defaultManager] removeItemAtURL:storeURL error:&error];
NSLog(@"Data Reset");
//Make new persistent store for future saves (Taken From Above Answer)
if (![self.persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error]) {
// do something with the error
}
}
Note that in order to call self.persistentStoreCoordinator I declared a property in the Home View Controller. (Don't worry about the managedObjectContext that I use for saving and loading.)
@property (nonatomic, retain) NSManagedObjectContext * managedObjectContext;
@property (nonatomic, retain) NSPersistentStoreCoordinator * persistentStoreCoordinator;
Then in the AppDelegate ApplicationDidFinishLaunching right below creating a HomeViewController I have :
homeViewController = [[HomeViewController alloc] initWithNibName:@"HomeViewController" bundle:nil];
homeViewController.managedObjectContext = self.managedObjectContext;
homeViewController.persistentStoreCoordinator = self.persistentStoreCoordinator;
share|improve this answer
Thank you for posting the FULL explanation with all of the proper code. – Michael D. Dec 20 '11 at 17:59
Nice, this is the how you post code. – RyeMAC3 Aug 1 '12 at 14:35
@ayteat, did this work for you?. for me its not working, please have a look at this stackoverflow.com/questions/14646595/… – Ranjit Feb 4 '13 at 7:00
1
THIS IS THE ANSWER except use "AppDelegate *ad = [[UIApplication sharedApplication] delegate];" and replace self with ad. and dont copy last two bits of code – CescSergey Feb 26 '14 at 5:13
1
Why are you not calling reset on managedObjectContext? What if you have some strong reference to managedObject? – Parag Bafna Jan 27 '15 at 5:29
MagicalRecord makes this very easy.
[MyCoreDataObject MR_truncateAll];
share|improve this answer
13
this is cool, but off topic since I specified a CoreData solution – Michael Grinich Apr 7 '11 at 3:11
8
Active Record Fetching is a core data solution. – casademora Apr 15 '11 at 19:18
5
But an answer like this goes beyond the scope of the question. There is no reason to assume he wants to use an addt'l framework to do this. – orange80 Jul 27 '11 at 16:13
5
update the link: github.com/magicalpanda/MagicalRecord – malaba Feb 21 '12 at 11:22
2
MagicalRecord is used by many people, and that answer IS helpful. If you don't use MagicalRecord, it's best just to ignore the answer rather than be pedantic / King of the Forum. Thanks! – horseshoe7 Mar 12 '15 at 8:54
If you want to delete all objects and do not want to delete the backing files, you can use following methods:
- (void)deleteAllObjectsInContext:(NSManagedObjectContext *)context
usingModel:(NSManagedObjectModel *)model
{
NSArray *entities = model.entities;
for (NSEntityDescription *entityDescription in entities) {
[self deleteAllObjectsWithEntityName:entityDescription.name
inContext:context];
}
}
- (void)deleteAllObjectsWithEntityName:(NSString *)entityName
inContext:(NSManagedObjectContext *)context
{
NSFetchRequest *fetchRequest =
[NSFetchRequest fetchRequestWithEntityName:entityName];
fetchRequest.includesPropertyValues = NO;
fetchRequest.includesSubentities = NO;
NSError *error;
NSArray *items = [context executeFetchRequest:fetchRequest error:&error];
for (NSManagedObject *managedObject in items) {
[context deleteObject:managedObject];
NSLog(@"Deleted %@", entityName);
}
}
Beware that it may be very slow (depends on how many objects are in your object graph).
share|improve this answer
how to remove the older data (say three tables, from one table I want to clear data)when app updates – Madan Mohan Jun 11 '13 at 16:44
[Late answer in response to a bounty asking for newer responses]
Looking over earlier answers,
• Fetching and deleting all items, as suggested by @Grouchal and others, is still an effective and useful solution. If you have very large data stores then it might be slow, but it still works very well.
• Simply removing the data store is, as you and @groundhog note, no longer effective. It's obsolete even if you don't use external binary storage because iOS 7 uses WAL mode for SQLite journalling. With WAL mode there may be (potentially large) journal files sitting around for any Core Data persistent store.
But there's a different, similar approach to removing the persistent store that does work. The key is to put your persistent store file in its own sub-directory that doesn't contain anything else. Don't just stick it in the documents directory (or wherever), create a new sub-directory just for the persistent store. The contents of that directory will end up being the persistent store file, the journal files, and the external binary files. If you want to nuke the entire data store, delete that directory and they'll all disappear.
You'd do something like this when setting up your persistent store:
NSURL *storeDirectoryURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"persistent-store"];
if ([[NSFileManager defaultManager] createDirectoryAtURL:storeDirectoryURL
withIntermediateDirectories:NO
attributes:nil
error:nil]) {
NSURL *storeURL = [storeDirectoryURL URLByAppendingPathComponent:@"MyApp.sqlite"];
// continue with storeURL as usual...
}
Then when you wanted to remove the store,
[[NSFileManager defaultManager] removeItemAtURL:storeDirectoryURL error:nil];
That recursively removes both the custom sub-directory and all of the Core Data files in it.
This only works if you don't already have your persistent store in the same folder as other, important data. Like the documents directory, which probably has other useful stuff in it. If that's your situation, you could get the same effect by looking for files that you do want to keep and removing everything else. Something like:
NSString *docsDirectoryPath = [[self applicationDocumentsDirectory] path];
NSArray *docsDirectoryContents = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:docsDirectoryPath error:nil];
for (NSString *docsDirectoryItem in docsDirectoryContents) {
// Look at docsDirectoryItem. If it's something you want to keep, do nothing.
// If it's something you don't recognize, remove it.
}
This approach may be error prone. You've got to be absolutely sure that you know every file you want to keep, because otherwise you might remove important data. On the other hand, you can remove the external binary files without actually knowing the file/directory name used to store them.
share|improve this answer
if you're afraid of the wal file, just disable it – onmyway133 Jul 27 '14 at 10:48
Updated Solution for iOS 9+
Use NSBatchDeleteRequest to delete all the objects in the entity without having to load them into memory or iterate through them.
// fetch all items in entity and request to delete them
let fetchRequest = NSFetchRequest(entityName: "MyEntity")
let deleteRequest = NSBatchDeleteRequest(fetchRequest: fetchRequest)
// delegate objects
let myManagedObjectContext = (UIApplication.sharedApplication().delegate as! AppDelegate).managedObjectContext
let myPersistentStoreCoordinator = (UIApplication.sharedApplication().delegate as! AppDelegate).persistentStoreCoordinator
// perform the delete
do {
try myPersistentStoreCoordinator.executeRequest(deleteRequest, withContext: myManagedObjectContext)
} catch let error as NSError {
print(error)
}
Sources:
share|improve this answer
Let me know with any downvotes what the problems are and I will try to fix them. Using this method I was able to delete 400,000 entity objects almost instantly. – Suragch Aug 18 '15 at 14:18
1
I would place that entire block inside a moc.performBlockAndWait({ () -> Void in ... }). – SwiftArchitect Oct 10 '15 at 6:48
Here is combined solution for purging Core Data.
- (void)deleteAllObjectsInCoreData
{
NSArray *allEntities = self.managedObjectModel.entities;
for (NSEntityDescription *entityDescription in allEntities)
{
NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
[fetchRequest setEntity:entityDescription];
fetchRequest.includesPropertyValues = NO;
fetchRequest.includesSubentities = NO;
NSError *error;
NSArray *items = [self.managedObjectContext executeFetchRequest:fetchRequest error:&error];
if (error) {
NSLog(@"Error requesting items from Core Data: %@", [error localizedDescription]);
}
for (NSManagedObject *managedObject in items) {
[self.managedObjectContext deleteObject:managedObject];
}
if (![self.managedObjectContext save:&error]) {
NSLog(@"Error deleting %@ - error:%@", entityDescription, [error localizedDescription]);
}
}
}
share|improve this answer
If you want to go the delete all objects route (which is much simpler than tearing down the Core Data stack, but less performant), than this is a better implementation:
- (void)deleteAllManagedObjectsInModel:(NSManagedObjectModel *)managedObjectModel context:(NSManagedObjectContext *)managedObjectContext
{
NSBlockOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
[managedObjectContext performBlockAndWait:^{
for (NSEntityDescription *entity in managedObjectModel) {
NSFetchRequest *fetchRequest = [NSFetchRequest new];
[fetchRequest setEntity:entity];
[fetchRequest setIncludesSubentities:NO];
NSArray *objects = [managedObjectContext executeFetchRequest:fetchRequest error:nil];
for (NSManagedObject *managedObject in objects) {
[managedObjectContext deleteObject:managedObject];
}
}
[managedObjectContext save:nil];
}];
}];
[operation setCompletionBlock:^{
// Do stuff once the truncation is complete
}];
[operation start];
}
This implementation leverages NSOperation to perform the deletion off of the main thread and notify on completion. You may want to emit a notification or something within the completion block to bubble the status back to the main thread.
share|improve this answer
Note that your NSManagedObjectContext must be initialized like NSManagedObjectContext *context = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType]; in order to use this method or else you get the error: Can only use -performBlock: on an NSManagedObjectContext that was created with a queue. – Will Oct 28 '15 at 13:16
Here is a somewhat simplified version with less calls to AppDelegate self and the last bit of code that was left out of the top rated answer. Also I was getting an error "Object's persistent store is not reachable from this NSManagedObjectContext's coordinator" so just needed to add that back.
NSPersistentStoreCoordinator *storeCoordinator = [self persistentStoreCoordinator];
NSPersistentStore *store = [[storeCoordinator persistentStores] lastObject];
NSURL *storeURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"dataModel"];
NSError *error;
[storeCoordinator removePersistentStore:store error:&error];
[[NSFileManager defaultManager] removeItemAtPath:storeURL.path error:&error];
[_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error];
if (storeCoordinator != nil) {
_managedObjectContext = [[NSManagedObjectContext alloc] init];
[_managedObjectContext setPersistentStoreCoordinator:storeCoordinator];
}
share|improve this answer
swift solution:
class func deleteAllManagedObjects() {
let modelURL = NSBundle.mainBundle().URLForResource("some string", withExtension: "mom")
let mom = NSManagedObjectModel(contentsOfURL: modelURL)
for entityName in mom.entitiesByName.keys {
let fr = NSFetchRequest(entityName: entityName as String)
let a = Utility.managedObjectContext().executeFetchRequest(fr, error: nil) as [NSManagedObject]
for mo in a {
Utility.managedObjectContext().deleteObject(mo)
}
}
Utility.managedObjectContext().save(nil)
}
share|improve this answer
For swift 2 let modelURL = NSBundle.mainBundle().URLForResource("some string", withExtension: "momd")! – Z.Neeson Jan 26 at 2:24
As a quick reference to save searching elsewhere - recreating the persistent store after deleting it can be done with:
if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error]) {
// do something with the error
}
share|improve this answer
I tried your code, but xcode throws an exception on this line,So what you have to say about this. – Ranjit Sep 2 '13 at 10:47
Thanks for the post. I followed it and it worked for me. But I had another issue that was not mentioned in any of the replies. So I am not sure if it was just me.
Anyway, thought I would post here the problem and my way that solved it.
I had a few records in the database, I wanted to purge everything clean before write new data to the db, so I did everything including
[[NSFileManager defaultManager] removeItemAtURL:storeURL error:&error];
and then used managedObjectContext to access the database (supposed to be empty by now), somehow the data was still there. After a while of troubleshooting, I found that I need to reset managedObjectContext, managedObject, managedObjectModel and persistentStoreCoordinator, before I use managedObjectContext to access the dabase. Now I have a clean database to write to.
share|improve this answer
Several good answers to this question. Here's a nice concise one. The first two lines delete the sqlite database. Then the for: loop deletes any objects in the managedObjectContext memory.
NSURL *storeURL = [[(FXYAppDelegate*)[[UIApplication sharedApplication] delegate] applicationDocumentsDirectory] URLByAppendingPathComponent:@"AppName.sqlite"];
[[NSFileManager defaultManager] removeItemAtURL:storeURL error:nil];
for (NSManagedObject *ct in [self.managedObjectContext registeredObjects]) {
[self.managedObjectContext deleteObject:ct];
}
share|improve this answer
1
Don't abuse the delegation for this purpose: hollance.com/2012/02/dont-abuse-the-app-delegate – Michael Dorner Sep 5 '13 at 13:57
1
I agree with @MichaelDorner. Adding to much into AppDelegate can impact performance and bloat the size of your binary with an interconnected spiderweb of dependencies where AppDelegate suddenly needs to be included in every class. If you find this cropping up, create a seperate controller specific to this purpose. AppDelegate should remain for basic initialization and handling state changes in the application, not much more. – jcpennypincher Jul 17 '15 at 18:31
you can also find all the entity names, and delete them by name. Its a longer version but works well, that way you dont have to work with persistence store
- (void)clearCoreData
{
NSError *error;
NSEntityDescription *des = [NSEntityDescription entityForName:@"Any_Entity_Name" inManagedObjectContext:_managedObjectContext];
NSManagedObjectModel *model = [des managedObjectModel];
NSArray *entityNames = [[model entities] valueForKey:@"name"];
for (NSString *entityName in entityNames){
NSFetchRequest *deleteAll = [NSFetchRequest fetchRequestWithEntityName:entityName];
NSArray *matches = [self.database.managedObjectContext executeFetchRequest:deleteAll error:&error];
}
if (matches.count > 0){
for (id obj in matches){
[_managedObjectContext deleteObject:obj];
}
[self.database.managedObjectContext save:&error];
}
}
for "Any_Entity_Name" just give any one of your entity's name, we only need to figure out the entity description your entities are within. ValueForKey@"name" will return all the entity names. Finally, dont forget to save.
share|improve this answer
The accepted answer is correct with removing URL by NSFileManager is correct, but as stated in iOS 5+ edit, the persistent store is not represented only by one file. For SQLite store it's *.sqlite, *.sqlite-shm and *.sqlite-wal ... fortunately since iOS 7+ we can use method
[NSPersistentStoreCoordinator +removeUbiquitousContentAndPersistentStoreAtURL:options:error:]
to take care of removal, so the code should be something like this:
NSPersistentStore *store = ...;
NSError *error;
NSURL *storeURL = store.URL;
NSString *storeName = ...;
NSPersistentStoreCoordinator *storeCoordinator = ...;
[storeCoordinator removePersistentStore:store error:&error];
[NSPersistentStoreCoordinator removeUbiquitousContentAndPersistentStoreAtURL:storeURL.path options:@{NSPersistentStoreUbiquitousContentNameKey: storeName} error:&error];
share|improve this answer
2
You need to pass the options dict, in particular the store name, e.g.: @{NSPersistentStoreUbiquitousContentNameKey: @"MyData"}; – tomi44g Feb 13 '15 at 20:29
Thanks @tomi44g, I've updated the answer. – JakubKnejzlik Apr 21 '15 at 11:13
Delete sqlite from your fileURLPath and then build.
share|improve this answer
I meant when the app is installed. – Michael Grinich Apr 7 '11 at 3:08
Assuming you are using MagicalRecord and have a default persistence store:
I don't like all the solutions that assume certain files to exist and/or demand entering the entities names or classes. This is a Swift(2), safe way to delete all the data from all the entities. After deleting it will recreate a fresh stack too (I am actually not sure as to how neccessery this part is).
It's godo for "logout" style situations when you want to delete everything but have a working store and moc to get new data in (once the user logs in...)
extension NSManagedObject {
class func dropAllData() {
MagicalRecord.saveWithBlock({ context in
for name in NSManagedObjectModel.MR_defaultManagedObjectModel().entitiesByName.keys {
do { try self.deleteAll(name, context: context) }
catch { print("⚠️ ✏️ Error when deleting \(name): \(error)") }
}
}) { done, err in
MagicalRecord.cleanUp()
MagicalRecord.setupCoreDataStackWithStoreNamed("myStoreName")
}
}
private class func deleteAll(name: String, context ctx: NSManagedObjectContext) throws {
let all = NSFetchRequest(entityName: name)
all.includesPropertyValues = false
let allObjs = try ctx.executeFetchRequest(all)
for obj in allObjs {
obj.MR_deleteEntityInContext(ctx)
}
}
}
share|improve this answer
Delete the persistent store file and setup a new persistent store coordinator?
share|improve this answer
and do a build>clean. – John Ballinger Jul 3 '09 at 5:14
6
Doing a clean will not remove the persistent store files, thankfully. That would be a recipe for disaster if true. – Hunter Jul 12 '09 at 18:05
you're all making this seem complicated. You can just send your NSManagedObjectContext the reset method
share|improve this answer
8
That only resets unsaved changes instead of removing all objects. – Sam Soffes May 23 '12 at 0:41
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.974738 |
Bulk import with IDs creates multiple individual records with the same ID
What Wildbook are you working in? ACW
What is the entire URL out of the browser, exactly where the error occurred?
Can you describe what the issue is you’re experiencing?
A user uploaded 6 bulk imports that contained both ID’d and un-ID’d images. Her process was as follows: “I uploaded them one right after the other, and once they were all uploaded I sent them to detection one after the other - not all at once to avoid completely clogging up the system but definitely with some overlap.”
The affected IDs are LL_TZ_SN_leop_0001 to LL_TZ_SN_leop_0013 (13 individuals).
This appears to be similar to an issue reported here by a Flukebook user: Questions about matching - #7 by CMKonrad
I’ve just sent the 6 spreadsheets to [email protected]
If it helps, the user’s platform is:
Windows 11 Home 22621.1265 and Chrome Version 110.0.5481.104
cheers
Maureen
Thanks for flagging this as a wider-spread issue, Maureen. I have this on our bug-report to-do list already, but I’m updating it to reflect that it’s not limited to Flukebook.
1 Like
This was found to be a pretty widespread issue in ACW and we’re actively working on a resolution. I’ll let you know when there’s an update on a fix.
1 Like
|
__label__pos
| 0.513692 |
Isosceles triangle
Calculate the area of an isosceles triangle, the base of which measures 16 cm and the arms 10 cm.
Correct result:
S = 48 cm2
Solution:
a=16 cm r=10 cm h=r2(a/2)2=102(16/2)2=6 cm S=a h2=16 62=48 cm2
We would be very happy if you find an error in the example, spelling mistakes, or inaccuracies, and please send it to us. We thank you!
Showing 0 comments:
avatar
Tips to related online calculators
Pythagorean theorem is the base for the right triangle calculator.
See also our trigonometric triangle calculator.
You need to know the following knowledge to solve this word math problem:
We encourage you to watch this tutorial video on this math problem: video1 video2
Next similar math problems:
• Isosceles trapezoid
licho_1 Calculate the area of an isosceles trapezoid whose bases are in the ratio of 4:3; leg b = 13 cm and height = 12 cm.
• Equilateral triangle
rs_triangle_1 The equilateral triangle has a 23 cm long side. Calculate its content area.
• The right triangle
rt_tr540 The right triangle ABC has a leg a = 36 cm and an area S = 540 cm2. Calculate the length of the leg b and the median t2 to side b.
• Right triangle
triangles_1 Right triangle ABC with side a = 19 and the area S = 95. Calculate the length of the remaining sides.
• Median in right triangle
rt_triangle In the rectangular triangle ABC has known the length of the legs a = 15cm and b = 36cm. Calculate the length of the median to side c (to hypotenuse).
• Stairway
schody Stairway has 20 steps. Each step has a length of 22 cm and a height of 15 cm. Calculate the length of the handrail of staircases if on the top and bottom exceeds 10 cm.
• Cone 15
cone_9 The radius of the base of a right circular cone is 14 inches and it's height 18 inches. What is the slant height?
• Oil rig
oil_rig_tower Oil drilling rig is 23 meters height and fix the ropes which ends are 7 meters away from the foot of the tower. How long are these ropes?
• Center traverse
trianles It is true that the middle traverse bisects the triangle?
• Double ladder
dvojak The double ladder is 8.5m long. It is built so that its lower ends are 3.5 meters apart. How high does the upper end of the ladder reach?
• Double ladder
rr_rebrik The double ladder shoulders should be 3 meters long. What height will the upper top of the ladder reach if the lower ends are 1.8 meters apart?
• The double ladder
dvojity_rebrik The double ladder has 3 meters long shoulders. What is the height of the upper of the ladder reach if the lower ends are 1.8 meters apart?
• The ladder
rebrik The ladder has a length of 3 m and is leaning against the wall, and its inclination to the wall is 45°. How high does it reach?
• Windbreak
vichrica A tree at a height of 3 meters broke in the windbreak. Its peak fell 4.5 m from the tree. How tall was the tree?
• Right angle
triangles_1 If a, b and c are two sides of a triangle ABC, a right angle in A, find the value on each missing side. If b=10, c=6
• Is right triangle
triangle_1111_4 Decide if the triangle XYZ is rectangular: x = 4 m, y = 6 m, z = 4 m
• Right triangles
PT How many right triangles we can construct from line segments 3,4,5,6,8,10,12,13,15,17 cm long? (Do not forget to the triangle inequality).
|
__label__pos
| 0.994734 |
Term of the Moment
DNS
Look Up Another Term
Redirected from: plasma sidechain
Definition: Layer 2 blockchain
An independent blockchain acting in concert with Bitcoin, Ethereum or other major chain, which retroactively became known as "Layer 1 chains" or "main chains." Layer 2 chains process new transactions faster while reducing the load on Layer 1 and typically taking lower fees.
A connection (channel, bridge, etc.) is opened between Layer 1 and Layer 2, and at periodic intervals, a summary of Layer 2 transactions is added to Layer 1 for a permanent record. A major consideration of Layer 2 chains is how transactions are validated before they are "cast in concrete" on the main chain. New Layer 2 solutions are being devised all the time, making this is a major topic in the crypto world. See blockchain.
Sidechains
Although sidechains and Layer 2 chains are often considered synonymous, their only similarity is that they function as auxiliary networks to a Layer 1 chain. A sidechain has its own consensus method for adding blocks, and like a Layer 1 chain, a sidechain requires a sufficient number of validators. See sidechain, consensus mechanism and Polygon blockchain.
Plasma Chains
Like sidechains, plasma chains have their own consensus method. Plasma provides a framework for building Layer 2 chains on Ethereum. At a fixed interval, a compressed representation of each block is committed to a smart contract on Ethereum.
State Channels (Payment Channels)
With state channels, crypto is deposited in a smart contract on Layer 1, and a connection is opened between two parties. Payments are made on Layer 2, and when completed, a ticket is signed on Layer 1. State channels can be bi-directional and also handle another party if channels have previously been opened. Bitcoin's Layer 2 Lightning chain uses state channels (see Lightning Network).
Rollup Chains
Rollup chains accumulate and settle transactions. At some interval, a summary of the transactions is "rolled up" and posted on the main Layer 1 chain. Optimistic Rollups assume people are honest but transactions can be disputed (see Optimism). Zero-Knowledge Rollups (ZK Rollups) provide cryptographic proof without divulging every detail (see zero-knowledge proof). See Layer 3 blockchain.
Layer 2 Solutions
As Bitcoin and Ethereum became more popular, Layer 2 chains were developed to handle thousands of small-value transactions and store them as summaries on the main chain. Layer 2 chains also keep Layer 1 chains from growing too large and cumbersome.
|
__label__pos
| 0.883955 |
java小说租阅信息管理系统,源码行行注释,免费分享
发表于 2019-10-14 408 次阅读
全微毕设擅长JAVA(SSM,SSH,SPRINGBOOT)、PYTHON(DJANGO/FLASK)、THINKPHP、C#、安卓、微信小程序、MYSQL、SQLSERVER等,欢迎咨询
每天都要认真学习,才能更加进步。└(^o^)┘
在工作和学习的过程中要善于思考,勤于学习。并做出适当的记录,才能最快速的学习并掌握一项知识。希望在这个平台和大家一起共同成长,和大家分享一个SSM(MYECLIPSE)项目,该项目名称为基于javaweb的小说租阅信息管理系统。采用当前非常流行的B/S体系结构,以JAVA作为开发技术,主要依赖SSM技术框架,mysql数据库建立本系统。为了最大限度地发挥其价值和优势,我们应该对该信息系统作出全面的了解和分析,有关方面的研究颇受关注。本文以信息化时代为背景,主要从优势、特性、功能以及发展四个方面就图书租阅管理信息系统进行了研究。
大家在学习中编写SSM(MYECLIPSE)框架的项目时常选用的开发工具是MYECLIPSE,从上面的项目背景中,我们可以得出基于javaweb的小说租阅信息管理系统拥有 后台。才能让用户完整的使用该项目。
基于javaweb的小说租阅信息管理系统项目的登录角色包括了管理员、用户,系统中所有的用户都是拥有账号密码字段的。其中管理员只能在后台和数据库进行添加。而其余的登录角色可以通过注册的方式成功拥有系统账号密码。
系统中小说、用户之间存在关联关系,我们将其关联关系保存在记录表中。他们的关联关系是这样的记录的小说与小说的mibgzi字段对应、记录的小说id与小说的id字段对应、记录的用户与用户的mingzi字段对应、记录的用户id与用户的id字段对应
总结得出基于javaweb的小说租阅信息管理系统项目所有数据为:管理员(admin)、用户(yonghu)、小说(xiaoshuo)、记录(jilu)
基于javaweb的小说租阅信息管理系统之管理员表
字段名 | 类型 | 属性 | 描述
id | INT(11) | PRIMARY KEY | 管理员id
username | VARCHAR(255) | | 账号
password | VARCHAR(255) | | 密码
基于javaweb的小说租阅信息管理系统之用户表
字段名 | 类型 | 属性 | 描述
id | INT(11) | PRIMARY KEY | 用户id
username | VARCHAR(255) | | 账号
password | VARCHAR(255) | | 密码
基于javaweb的小说租阅信息管理系统之小说表
字段名 | 类型 | 属性 | 描述
id | INT(11) | PRIMARY KEY | 小说id
mingzi | VARCHAR(255) | | 名字
jieshao | VARCHAR(255) | | 介绍
zuozhe | VARCHAR(255) | | 作者
jiage | VARCHAR(255) | | 价格
shuliang | VARCHAR(255) | | 数量
基于javaweb的小说租阅信息管理系统之记录表
字段名 | 类型 | 属性 | 描述
id | INT(11) | PRIMARY KEY | 记录id
xiaoshuo | VARCHAR(255) | | 小说
xiaoshuoid | VARCHAR(255) | | 小说id
yonghu | VARCHAR(255) | | 用户
yonghuid | VARCHAR(255) | | 用户id
jine | VARCHAR(255) | | 金额
shijian | VARCHAR(255) | | 时间
SET FOREIGN_KEY_CHECKS=0;
-- ----------------------------
-- ----------------------------
-- Table structure for ggjyjavawebdxszyxxglxt
-- ----------------------------
DROP TABLE IF EXISTS `t_admin`;
CREATE TABLE `t_admin` (`id` INT(11) NOT NULL AUTO_INCREMENT COMMENT '管理员id',`username` VARCHAR(255) DEFAULT NULL COMMENT '账号',`password` VARCHAR(255) DEFAULT NULL COMMENT '密码',PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='管理员';
-- ----------------------------
DROP TABLE IF EXISTS `t_yonghu`;
CREATE TABLE `t_yonghu` (`id` INT(11) NOT NULL AUTO_INCREMENT COMMENT '用户id',`username` VARCHAR(255) DEFAULT NULL COMMENT '账号',`password` VARCHAR(255) DEFAULT NULL COMMENT '密码',PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='用户';
-- ----------------------------
DROP TABLE IF EXISTS `t_xiaoshuo`;
CREATE TABLE `t_xiaoshuo` (`id` INT(11) NOT NULL AUTO_INCREMENT COMMENT '小说id',`mingzi` VARCHAR(255) DEFAULT NULL COMMENT '名字',`jieshao` VARCHAR(5000) DEFAULT NULL COMMENT '介绍',`zuozhe` VARCHAR(255) DEFAULT NULL COMMENT '作者',`jiage` VARCHAR(255) DEFAULT NULL COMMENT '价格',`shuliang` VARCHAR(255) DEFAULT NULL COMMENT '数量',PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='小说';
-- ----------------------------
DROP TABLE IF EXISTS `t_jilu`;
CREATE TABLE `t_jilu` (`id` INT(11) NOT NULL AUTO_INCREMENT COMMENT '记录id',`xiaoshuo` VARCHAR(255) DEFAULT NULL COMMENT '小说',`xiaoshuoid` INT(11) DEFAULT NULL COMMENT '小说id',`yonghu` VARCHAR(255) DEFAULT NULL COMMENT '用户',`yonghuid` INT(11) DEFAULT NULL COMMENT '用户id',`jine` VARCHAR(255) DEFAULT NULL COMMENT '金额',`shijian` VARCHAR(255) DEFAULT NULL COMMENT '时间',PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='记录';
添加小说模块:
系统中存在添加小说功能,通过点击添加小说可以跳转到该功能模块,在该功能模块中,填写对应的小说信息。小说包含信息名字,介绍,作者,价格,数量,填写完所有信息后,通过post方法将数据提交到tianjiaxiaoshuo.action中,该地址将在服务器中xiaoshuoController类中的tianjiaxiaoshuoact方法中进行响应。响应结果为,获取所有的小说信息,封装一个xiaoshuo类,使用xiaoshuoController类中定义的xiaoshuodao的insert方法,将小说数据插入到数据库的xiaoshuo表中。并给出用户提示信息,添加小说成功,将该信息保存到request的message中,该信息将在页面中进行展示。该部分核心代码如下:
通过xiaoshuodao的insert方法将页面传输的小说添加到数据库中 xiaoshuodao.insert(xiaoshuo);
将添加小说成功信息,保存到request的message中,在页面中给出用户提示 request.setAttribute("message", "添加小说成功");
返回小说管理界面
return "forward:/tianjiaxiaoshuo.action";
查询小说模块:
小说的查询模块实现方式为,在页面中发起xiaoshuoguanli.action请求。通过该请求,响应服务器xiaoshuoController类中的xiaoshuoguanli,在该方法中通过selectByexample进行数据的查询操作。将所有的小说信息查询后,保存到request中的xiaoshuoall中,在页面中进行展示,返回xiaoshuoguanli.jsp,该部分核心代码如下所示:
生成小说样例类,通过example定义查询条件 XiaoshuoExample example = new XiaoshuoExample();
通过xiaoshuodao的selectByExample方法查询出所有的小说信息 List xiaoshuoall = xiaoshuodao.selectByExample(example);
将小说信息,保存到request中,在页面通过foreach方法进行展示 request.setAttribute("xiaoshuoall", xiaoshuoall);
返回小说管理界面
return "forward:/xiaoshuoguanli.action";
修改小说模块:
对已经上传的小说信息可以进行修改操作,该部分操作在小说管理界面中点击修改按钮可以跳转到小说修改页面。在修改页面中,将初始化所有的小说字段信息,字段信息包括名字,介绍,作者,价格,数量。字段信息内容通过小说id获取。修改后的信息传入到xiaoshuoController中接收为xiaoshuo。在xiaoshuoController中包含有提前定义好的xiaoshuodao,该参数为xiaoshuoMapper是实现。xiaoshuoMapper中定义了修改方法,此处使用修改方法为updateByPrimaryKeySelective,该方法可以将修改后信息同步到数据库中,最终将修改成功信息返回页面中。该部分代码如下:
通过xiaoshuodao的修改方法根据id修改对应的小说 xiaoshuodao.updateByPrimaryKeySelective(xiaoshuo);
将修改小说成功信息,保存到request的message中,在页面中给出用户提示 request.setAttribute("message", "修改小说信息成功");
返回小说管理界面
return "forward:/xiaoshuoguanli.action";
删除小说模块:
在管理页面中,点击删除。页面将通过a标签的href属性,使用get方法将该小说
的id上传到服务器中,在服务器中通过xiaoshuoController类中的shanchuxiaoshuo进行接收,之后调用xiaoshuoMapper中的deleteByPrimaryKey方法根据ID进行删除。将删除信息保存到request的message中,在页面给出用户删除成功的提示信息,该部分核心代码如下:
通过xiaoshuodao的删除方法根据id删除对应的小说 xiaoshuodao.deleteByPrimaryKey(id);
将删除小说成功信息,保存到request的message中,在页面中给出用户提示 request.setAttribute("message", "删除小说成功");
返回小说管理界面
return "forward:/xiaoshuoguanli.action";
如需源码,请留下邮箱或联系站长
当前位置:全微程序设计 > JAVA > java小说租阅信息管理系统,源码行行注释,免费分享
毕设联系Q:2196316269 定做微信:13265346583
|
__label__pos
| 0.518499 |
Corey Ford Corey Ford - 8 months ago 42
C Question
Modified Binary Search Array
EDIT: included improved code.
My current logic is incorrect. I want to have a binary search that will find the integer "wantToFind", and if it is not in the array will subtract 1 from wantToFind until it is found.
I have subtracted this from a larger program, that guarantees that the 1st item in the array will be the lowest wantToFind (the value we want to find) can ever be.
However, despite following binary search conventions the program is still getting stuck when looking for higher numbers such as 88.
float list[15] = {60,62,64,65,67,69,71,72,74,76,77,79,81,83,84};
//binary search
int wantToFind = 88; //other tests are 65, 61, 55
bool itemFound = false;
int current = 0;
int low = 0;
int high = 14;
current = (low+high)/2;
//int previousCurrent = -1;
do {
do {
//if 61 < 72
if (wantToFind < list[current])
{
//smaller
//previousCurrent = current;
high = current - 1;
current = (low+high/2);
}
else if (wantToFind > list[current])
{
//bigger
//previousCurrent = current;
low = current + 1;
current = (low+high/2);
}
else{
if(wantToFind == list[current])
{
itemFound = true;
}
}
} while (low >= high);
if (itemFound == false)
{
wantToFind--;
}
} while (itemFound == false);
printf("\n%d", wantToFind); //which will be a number within the list?
return 0;
}
Answer
I can't imagine why you'd want while (low >= high). This will cause the loop to terminate the first time through. I'm pretty sure you want while (low <= high).
Also, when the item isn't found, there are three possibilities:
1. wantToFind is smaller than the smallest item in the list.
2. wantToFind is larger than the largest item in the list.
3. wantTofind would be located somewhere in the list.
In cases 2 and 3, the value of current when the inner loop exits will be one more than the index that contains the first item smaller than wantToFind.
In case 1 above, current will be equal to 0.
The point is that there's no need for the outer loop. When the binary search fails, the value of current tells you the insertion point.
Also, you probably want to early-out when the item is found.
Finally, do yourself a favor and turn that do...while into a while loop. That is:
while (!itemFound && low <= high)
You'll find that a lot easier to reason about.
|
__label__pos
| 0.909866 |
Light Propagation Volumes
Light Propagation Volumes (LPV) is an algorithm for achieving an indirect light bounce developed by Crytek and (previously) used in CryEngine 3. Having read numerous papers and articles (listed below), looked through code on GitHub (also listed below), I was disapppointed that there was a lack of clear information on how the algorithm works technically. This article aims to provide an insight in how the algorithm works and how I have implemented it in the engine I am working on for a school project.
What does LPV do?
Light Propagation Volumes stores lighting information from a light in a 3D grid. Every light stores which points in the world they light up. These points have a coordinate in the world, which means you can stratify those coordinates in a grid. In that way you save lit points (Virtual Point Lights) in a 3D grid and can use those initial points to spread light across the scene. You can imagine the spreading of lights as covering your sandwich in Nutella. You start off with an initial lump of Nutella (virtual point lights) and use a knife to spread (propagate) this across the entire sandwich (the entire scene). It’s a bit more complex than that, but that will become clear very soon. The images below demonstrates what LPV adds to a scene.
lpv_comparison
How does it do that?
Injection
The first step is to gather the Virtual Point Lights (VPLs). In a previous article I describe Reflective Shadow Maps. For my implementation I used the flux, world-space position, and world-space normal map resulting from rendering the Reflective Shadow Map. The flux is the color of the indirect light getting injected, the world-space position is used to determine the grid cell, and the normal determines the initial propagation direction.
Not every pixel in the RSM will be used for injection in the grid, because this would mean you inject 2048×2048=4194304 VPLs for a decently sized RSM for a directional light. Performance was decent, but that was only with one light. Some intelligent down-sampling (to 512×512) still gives pretty and stable results.
The image below demonstrates the resulting 3D grid after the Injection phase. With a white light, the colors of the VPLs are very similar to the surface they are bounced off of.
Demo_2016-06-27_20-29-44
Storage
To maximize efficiency and frame rate, the algorithm stores the lighting information in Spherical Harmonics (SH). Those are mathematical objects used to record an incoming signal (like light intensity) over a sphere. The internal workings of this are still a bit of a mystery to me, but important is to know that you can encode light intensities and directions in Spherical Harmonics, allowing for propagation using just 4 floats per color channel. Implementation details will become apparent soon.
Propagation
So, we have a grid filled with Virtual Point Lights and we want to “smear” them across the scene to get pretty light bleeding. As I mentioned earlier, we store SH coefficients in grid cells. These SH coefficients represent a signal traveling in a certain direction. In the propagation phase, you calculate how much of that signal could spread to a neighbour cell using the direction towards that cell. Then you multiply this by a cosine lobe, a commonly used way to spread light in a diffuse manner. The image below shows such an object. Directions pointing “up” which is forward, has 100% intensity and directions pointing sideways or backwards have an intensity of 0%, because the light would not physically bounce in that direction.
l015-cosinelaw2
Rendering
Rendering is actually the easy part. We have a set of SH coefficients per color component in each cell of our grid. We have a G-Buffer with world space positions and world space normals. We get the grid cell for the world space position (trilinear interpolation gives better results though) and evaluate the coefficients stored in the grid cells against the SH representation for the world space normal. This gives an intensity per color component, which you can multiply by the albedo (from the G-Buffer) and by the ambient occlusion factor, and then you have a pretty image.
How do you implement it?
Theoretically, the above is easy to understand. For the implementation it took me quite a while to know how I should do it. Below, I will explain how to set it up, how to execute each step, and what pitfalls I ran into. My code is written in a rendering API agnostic engine, but heavily inspired by DirectX 11.
Preparation
Start off by creating three 3D textures of float4, one per color channel, of dimensions 32 x 32 x 32 (this can be anything you want, but you will get pretty results with as small dimensions as this). These buffers need read and write access, so I created Unordered Access Views, Render Target Views, and Shader Resource Views for them. RTVs for the injection phase, UAVs for clearing and the propagation phase, and SRVs for the rendering phase. Make sure you clear these textures every frame.
Injection
Injecting the lights was something I struggled with for quite some time. I saw people use a combination of Vertex, Geometry, and Pixel Shaders to inject lights into a 3D texture and I thought, why not just use a single Compute Shader to pull this off? You will run into race conditions if you do this and there is no straight forward way to pull it off in Compute Shaders. A less straight forward solution was using a GPU linked list and solve that list in a separate pass. Unfortunately, this was too slow for me and I am always a bit cautious to have a while loop in a shader.
So, the fastest way to get light injection done is by setting up the VS/GS/PS draw call. The reason this works is because the fixed-function blending on the GPU is thread-safe and performed in the same order every frame. This means you can blend VPLs in a grid cell without race conditions!
Vertex Shader
Using DrawArray you can specify the amount of vertices you would like the application to render. I down-sample a 2048×2048 texture to 512×512 “vertices” and that is the count I give the DrawArray call. These vertices don’t need any CPU-side info, you will only need the vertex ID. I have listed the entire shader below. This shader passes VPLs through to the Geometry Shader.
[code lang=”cpp”]#define LPV_DIM 32
#define LPV_DIMH 16
#define LPV_CELL_SIZE 4.0
// https://github.com/mafian89/Light-Propagation-Volumes/blob/master/shaders/lightInject.frag and
// https://github.com/djbozkosz/Light-Propagation-Volumes/blob/master/data/shaders/lpvInjection.cs seem
// to use the same coefficients, which differ from the RSM paper. Due to completeness of their code, I will stick to their solutions.
/*Spherical harmonics coefficients – precomputed*/
#define SH_C0 0.282094792f // 1 / 2sqrt(pi)
#define SH_C1 0.488602512f // sqrt(3/pi) / 2
/*Cosine lobe coeff*/
#define SH_cosLobe_C0 0.886226925f // sqrt(pi)/2
#define SH_cosLobe_C1 1.02332671f // sqrt(pi/3)
#define PI 3.1415926f
#define POSWS_BIAS_NORMAL 2.0
#define POSWS_BIAS_LIGHT 1.0
struct Light
{
float3 position;
float range;
//————————16 bytes
float3 direction;
float spotAngle;
//————————16 bytes
float3 color;
uint type;
};
cbuffer b0 : register(b0)
{
float4x4 vpMatrix;
float4x4 RsmToWorldMatrix;
Light light;
};
int3 getGridPos(float3 worldPos)
{
return (worldPos / LPV_CELL_SIZE) + int3(LPV_DIMH, LPV_DIMH, LPV_DIMH);
}
struct VS_IN {
uint posIndex : SV_VertexID;
};
struct GS_IN {
float4 cellIndex : SV_POSITION;
float3 normal : WORLD_NORMAL;
float3 flux : LIGHT_FLUX;
};
Texture2D rsmFluxMap : register(t0);
Texture2D rsmWsPosMap : register(t1);
Texture2D rsmWsNorMap : register(t2);
struct RsmTexel
{
float4 flux;
float3 normalWS;
float3 positionWS;
};
float Luminance(RsmTexel rsmTexel)
{
return (rsmTexel.flux.r * 0.299f + rsmTexel.flux.g * 0.587f + rsmTexel.flux.b * 0.114f)
+ max(0.0f, dot(rsmTexel.normalWS, -light.direction));
}
RsmTexel GetRsmTexel(int2 coords)
{
RsmTexel tx = (RsmTexel)0;
tx.flux = rsmFluxMap.Load(int3(coords, 0));
tx.normalWS = rsmWsNorMap.Load(int3(coords, 0)).xyz;
tx.positionWS = rsmWsPosMap.Load(int3(coords, 0)).xyz + (tx.normalWS * POSWS_BIAS_NORMAL);
return tx;
}
#define KERNEL_SIZE 4
#define STEP_SIZE 1
GS_IN main(VS_IN input) {
uint2 RSMsize;
rsmWsPosMap.GetDimensions(RSMsize.x, RSMsize.y);
RSMsize /= KERNEL_SIZE;
int3 rsmCoords = int3(input.posIndex % RSMsize.x, input.posIndex / RSMsize.x, 0);
// Pick brightest cell in KERNEL_SIZExKERNEL_SIZE grid
float3 brightestCellIndex = 0;
float maxLuminance = 0;
{
for (uint y = 0; y < KERNEL_SIZE; y += STEP_SIZE)
{
for (uint x = 0; x < KERNEL_SIZE; x += STEP_SIZE)
{
int2 texIdx = rsmCoords.xy * KERNEL_SIZE + int2(x, y);
RsmTexel rsmTexel = GetRsmTexel(texIdx);
float texLum = Luminance(rsmTexel);
if (texLum > maxLuminance)
{
brightestCellIndex = getGridPos(rsmTexel.positionWS);
maxLuminance = texLum;
}
}
}
}
RsmTexel result = (RsmTexel)0;
float numSamples = 0;
for (uint y = 0; y < KERNEL_SIZE; y += STEP_SIZE)
{
for (uint x = 0; x < KERNEL_SIZE; x += STEP_SIZE)
{
int2 texIdx = rsmCoords.xy * KERNEL_SIZE + int2(x, y);
RsmTexel rsmTexel = GetRsmTexel(texIdx);
int3 texelIndex = getGridPos(rsmTexel.positionWS);
float3 deltaGrid = texelIndex – brightestCellIndex;
if (dot(deltaGrid, deltaGrid) < 10) // If cell proximity is good enough
{
// Sample from texel
result.flux += rsmTexel.flux;
result.positionWS += rsmTexel.positionWS;
result.normalWS += rsmTexel.normalWS;
++numSamples;
}
}
}
//if (numSamples > 0) // This is always true due to picking a brightestCell, however, not all cells have light
//{
result.positionWS /= numSamples;
result.normalWS /= numSamples;
result.normalWS = normalize(result.normalWS);
result.flux /= numSamples;
//RsmTexel result = GetRsmTexel(rsmCoords.xy);
GS_IN output;
output.cellIndex = float4(getGridPos(result.positionWS), 1.0);
output.normal = result.normalWS;
output.flux = result.flux.rgb;
return output;
}[/code]
Geometry shader
Because of the way DirectX11 handles Render Target Views for 3D textures, you need to pass the vertices to a Geometry Shader, where a depth slice of the 3D texture will be determined based on the grid position. In the Geometry Shader you specify the SV_renderTargetArrayIndex, a variable you have to pass to the Pixel Shader and a variable that is not accessible in the Vertex Shader. This explains why you need the Geometry Shader and not just do a VS->PS call. I have listed the Geometry Shader below.
[code lang=”cpp”]#define LPV_DIM 32
#define LPV_DIMH 16
#define LPV_CELL_SIZE 4.0
struct GS_IN {
float4 cellIndex : SV_POSITION;
float3 normal : WORLD_NORMAL;
float3 flux : LIGHT_FLUX;
};
struct PS_IN {
float4 screenPos : SV_POSITION;
float3 normal : WORLD_NORMAL;
float3 flux : LIGHT_FLUX;
uint depthIndex : SV_RenderTargetArrayIndex;
};
[maxvertexcount(1)]
void main(point GS_IN input[1], inout PointStream<PS_IN> OutputStream) {
PS_IN output;
output.depthIndex = input[0].cellIndex.z;
output.screenPos.xy = (float2(input[0].cellIndex.xy) + 0.5) / float2(LPV_DIM, LPV_DIM) * 2.0 – 1.0;
// invert y direction because y points downwards in the viewport?
output.screenPos.y = -output.screenPos.y;
output.screenPos.zw = float2(0, 1);
output.normal = input[0].normal;
output.flux = input[0].flux;
OutputStream.Append(output);
}[/code]
Pixel Shader
The Pixel Shader for injection is not much more than scaling the SH coefficients resulting from the input world space normal by the input flux per color component. It writes to three separate render targets, one per color component.
[code lang=”cpp”]
// https://github.com/mafian89/Light-Propagation-Volumes/blob/master/shaders/lightInject.frag and
// https://github.com/djbozkosz/Light-Propagation-Volumes/blob/master/data/shaders/lpvInjection.cs seem
// to use the same coefficients, which differ from the RSM paper. Due to completeness of their code, I will stick to their solutions.
/*Spherical harmonics coefficients – precomputed*/
#define SH_C0 0.282094792f // 1 / 2sqrt(pi)
#define SH_C1 0.488602512f // sqrt(3/pi) / 2
/*Cosine lobe coeff*/
#define SH_cosLobe_C0 0.886226925f // sqrt(pi)/2
#define SH_cosLobe_C1 1.02332671f // sqrt(pi/3)
#define PI 3.1415926f
struct PS_IN {
float4 screenPosition : SV_POSITION;
float3 normal : WORLD_NORMAL;
float3 flux : LIGHT_FLUX;
uint depthIndex : SV_RenderTargetArrayIndex;
};
struct PS_OUT {
float4 redSH : SV_Target0;
float4 greenSH : SV_Target1;
float4 blueSH : SV_Target2;
};
float4 dirToCosineLobe(float3 dir) {
//dir = normalize(dir);
return float4(SH_cosLobe_C0, -SH_cosLobe_C1 * dir.y, SH_cosLobe_C1 * dir.z, -SH_cosLobe_C1 * dir.x);
}
float4 dirToSH(float3 dir) {
return float4(SH_C0, -SH_C1 * dir.y, SH_C1 * dir.z, -SH_C1 * dir.x);
}
PS_OUT main(PS_IN input)
{
PS_OUT output;
const static float surfelWeight = 0.015;
float4 coeffs = (dirToCosineLobe(input.normal) / PI) * surfelWeight;
output.redSH = coeffs * input.flux.r;
output.greenSH = coeffs * input.flux.g;
output.blueSH = coeffs * input.flux.b;
return output;
}
[/code]
Propagation
So, now we have a grid partially filled with VPLs resulting from the Injection phase. It’s time to distribute that light. When you think about distribution, you spread something from a central point. In the propagation Compute Shader, you would spread light to all surrounding directions per cell. However, this is horribly cache inefficient and prone to race conditions. This is because you sample (read) from one cell and propagate (write) to surrounding cells. This means cells are being accessed by multiple threads simultaneously. Instead of this approach, we use a Gathering algorithm. This means you sample from all surrounding directions and write to only one. This guarantees only one thread is accessing one grid cell at the same time.
gathering
Now, for the propagation itself. I will describe how the distribution would work, not how the gathering would work. This is because it is easier to explain. The code below will show how it works for gathering. This process describes how it works for propagation to one cell. To other cells is the same process with other directions.
The direction towards the neighbour cell is projected into Spherical Harmonics and evaluated against the stored SH coefficients in the current cell. This cancels out lighting going in other directions. There is a problem with this approach though. We are propagating spherical lighting information through a cubic grid. Try imagining a sphere inside a cube, there are quite some gaps. These gaps will also be visible in the final rendering as unlit spots. The way to fix this is by projecting the light onto the faces of the cell you are propagating it to.
The image below (borrowed from CLPV paper) demonstrates this. The yellow part is the solid angle, which is the part of the sphere used to light, and also what determines how much light is distributed towards a certain face. You need to do this for 5 faces (not the front one, all others) so you cover the entire cube. This preserves directional information (for improved spreading) so results are better lit and you don’t have those ugly unlit spots.
solidangle
The (compute) shader below demonstrates how propagation works in a gathering way, and how the side-faces are used for the accumulation as well.
[code lang=”cpp”]#define LPV_DIM 32
#define LPV_DIMH 16
#define LPV_CELL_SIZE 4.0
int3 getGridPos(float3 worldPos)
{
return (worldPos / LPV_CELL_SIZE) + int3(LPV_DIMH, LPV_DIMH, LPV_DIMH);
}
// https://github.com/mafian89/Light-Propagation-Volumes/blob/master/shaders/lightInject.frag and
// https://github.com/djbozkosz/Light-Propagation-Volumes/blob/master/data/shaders/lpvInjection.cs seem
// to use the same coefficients, which differ from the RSM paper. Due to completeness of their code, I will stick to their solutions.
/*Spherical harmonics coefficients – precomputed*/
#define SH_c0 0.282094792f // 1 / 2sqrt(pi)
#define SH_c1 0.488602512f // sqrt(3/pi) / 2
/*Cosine lobe coeff*/
#define SH_cosLobe_c0 0.886226925f // sqrt(pi)/2
#define SH_cosLobe_c1 1.02332671f // sqrt(pi/3)
#define Pi 3.1415926f
float4 dirToCosineLobe(float3 dir) {
//dir = normalize(dir);
return float4(SH_cosLobe_c0, -SH_cosLobe_c1 * dir.y, SH_cosLobe_c1 * dir.z, -SH_cosLobe_c1 * dir.x);
}
float4 dirToSH(float3 dir) {
return float4(SH_c0, -SH_c1 * dir.y, SH_c1 * dir.z, -SH_c1 * dir.x);
}
// End of common.hlsl.inc
RWTexture3D<float4> lpvR : register(u0);
RWTexture3D<float4> lpvG : register(u1);
RWTexture3D<float4> lpvB : register(u2);
static const float3 directions[] =
{ float3(0,0,1), float3(0,0,-1), float3(1,0,0), float3(-1,0,0) , float3(0,1,0), float3(0,-1,0)};
// With a lot of help from: http://blog.blackhc.net/2010/07/light-propagation-volumes/
// This is a fully functioning LPV implementation
// right up
float2 side[4] = { float2(1.0, 0.0), float2(0.0, 1.0), float2(-1.0, 0.0), float2(0.0, -1.0) };
// orientation = [ right | up | forward ] = [ x | y | z ]
float3 getEvalSideDirection(uint index, float3x3 orientation) {
const float smallComponent = 0.4472135; // 1 / sqrt(5)
const float bigComponent = 0.894427; // 2 / sqrt(5)
const float2 s = side[index];
// *either* x = 0 or y = 0
return mul(orientation, float3(s.x * smallComponent, s.y * smallComponent, bigComponent));
}
float3 getReprojSideDirection(uint index, float3x3 orientation) {
const float2 s = side[index];
return mul(orientation, float3(s.x, s.y, 0));
}
// orientation = [ right | up | forward ] = [ x | y | z ]
float3x3 neighbourOrientations[6] = {
// Z+
float3x3(1, 0, 0,0, 1, 0,0, 0, 1),
// Z-
float3x3(-1, 0, 0,0, 1, 0,0, 0, -1),
// X+
float3x3(0, 0, 1,0, 1, 0,-1, 0, 0
),
// X-
float3x3(0, 0, -1,0, 1, 0,1, 0, 0),
// Y+
float3x3(1, 0, 0,0, 0, 1,0, -1, 0),
// Y-
float3x3(1, 0, 0,0, 0, -1,0, 1, 0)
};
[numthreads(16, 2, 1)]
void main(uint3 dispatchThreadID: SV_DispatchThreadID, uint3 groupThreadID : SV_GroupThreadID)
{
uint3 cellIndex = dispatchThreadID.xyz;
// contribution
float4 cR = (float4)0;
float4 cG = (float4)0;
float4 cB = (float4)0;
for (uint neighbour = 0; neighbour < 6; ++neighbour)
{
float3x3 orientation = neighbourOrientations[neighbour];
// TODO: transpose all orientation matrices and use row indexing instead? ie int3( orientation[2] )
float3 mainDirection = mul(orientation, float3(0, 0, 1));
uint3 neighbourIndex = cellIndex – directions[neighbour];
float4 rCoeffsNeighbour = lpvR[neighbourIndex];
float4 gCoeffsNeighbour = lpvG[neighbourIndex];
float4 bCoeffsNeighbour = lpvB[neighbourIndex];
const float directFaceSubtendedSolidAngle = 0.4006696846f / Pi / 2;
const float sideFaceSubtendedSolidAngle = 0.4234413544f / Pi / 3;
for (uint sideFace = 0; sideFace < 4; ++sideFace)
{
float3 evalDirection = getEvalSideDirection(sideFace, orientation);
float3 reprojDirection = getReprojSideDirection(sideFace, orientation);
float4 reprojDirectionCosineLobeSH = dirToCosineLobe(reprojDirection);
float4 evalDirectionSH = dirToSH(evalDirection);
cR += sideFaceSubtendedSolidAngle * dot(rCoeffsNeighbour, evalDirectionSH) * reprojDirectionCosineLobeSH;
cG += sideFaceSubtendedSolidAngle * dot(gCoeffsNeighbour, evalDirectionSH) * reprojDirectionCosineLobeSH;
cB += sideFaceSubtendedSolidAngle * dot(bCoeffsNeighbour, evalDirectionSH) * reprojDirectionCosineLobeSH;
}
float3 curDir = directions[neighbour];
float4 curCosLobe = dirToCosineLobe(curDir);
float4 curDirSH = dirToSH(curDir);
int3 neighbourCellIndex = (int3)cellIndex + (int3)curDir;
cR += directFaceSubtendedSolidAngle * max(0.0f, dot(rCoeffsNeighbour, curDirSH)) * curCosLobe;
cG += directFaceSubtendedSolidAngle * max(0.0f, dot(gCoeffsNeighbour, curDirSH)) * curCosLobe;
cB += directFaceSubtendedSolidAngle * max(0.0f, dot(bCoeffsNeighbour, curDirSH)) * curCosLobe;
}
lpvR[dispatchThreadID.xyz] += cR;
lpvG[dispatchThreadID.xyz] += cG;
lpvB[dispatchThreadID.xyz] += cB;
}[/code]
Rendering
Rendering is pretty straight forward. You use the G-Buffer’s world space position to get a grid position. If you have those positions as floats, you can easily do trilinear sampling on the three 3D textures used for LPV. With that sampling result you have a set of Spherical Harmonics and you project the world space normal from the G-Buffer into SH and do dot products against the three textures to get scalar values per color component. Multiply that by the albedo and the Ambient Occlusion factor and you have indirect lighting.
The pixel shader below demonstrates this. This is done by rendering a full screen quad and evaluating all pixels. If you have occlusion culling, you can optimize the indirect light rendering by adding it in the G-Buffer phase in a light accumulation buffer, but in my implementation I have a lot of overdraw and no early-Z/occlusion culling.
[code lang=”cpp”]
<pre>// Start of common.hlsl.inc
#define LPV_DIM 32
#define LPV_DIMH 16
#define LPV_CELL_SIZE 4.0
int3 getGridPos(float3 worldPos)
{
return (worldPos / LPV_CELL_SIZE) + int3(LPV_DIMH, LPV_DIMH, LPV_DIMH);
}
float3 getGridPosAsFloat(float3 worldPos)
{
return (worldPos / LPV_CELL_SIZE) + float3(LPV_DIMH, LPV_DIMH, LPV_DIMH);
}
// https://github.com/mafian89/Light-Propagation-Volumes/blob/master/shaders/lightInject.frag and
// https://github.com/djbozkosz/Light-Propagation-Volumes/blob/master/data/shaders/lpvInjection.cs seem
// to use the same coefficients, which differ from the RSM paper. Due to completeness of their code, I will stick to their solutions.
/*Spherical harmonics coefficients – precomputed*/
#define SH_C0 0.282094792f // 1 / 2sqrt(pi)
#define SH_C1 0.488602512f // sqrt(3/pi) / 2
/*Cosine lobe coeff*/
#define SH_cosLobe_C0 0.886226925f // sqrt(pi)/2
#define SH_cosLobe_C1 1.02332671f // sqrt(pi/3)
#define PI 3.1415926f
float4 dirToCosineLobe(float3 dir) {
//dir = normalize(dir);
return float4(SH_cosLobe_C0, -SH_cosLobe_C1 * dir.y, SH_cosLobe_C1 * dir.z, -SH_cosLobe_C1 * dir.x);
}
float4 dirToSH(float3 dir) {
return float4(SH_C0, -SH_C1 * dir.y, SH_C1 * dir.z, -SH_C1 * dir.x);
}
// End of common.hlsl.inc
struct PSIn
{
float4 pos : SV_POSITION;
float3 normal : NORMAL;
float3 tangent : TANGENT;
float3 bitangent : BITANGENT;
float2 texcoord : TEXCOORD0;
float3 posWS : POSITION;
};
sampler trilinearSampler : register(s0);
Texture3D lpvR : register(t0);
Texture3D lpvG: register(t1);
Texture3D lpvB : register(t2);
Texture2D wsPosMap : register(t3);
Texture2D wsNorMap : register(t4);
Texture2D albedoMap : register(t5);
Texture2D ambientOcclusionMap : register(t6);
float4 main(PSIn IN) : SV_Target
{
float3 albedo = albedoMap.Sample(trilinearSampler, IN.texcoord).xyz;
float3 pxPosWS = wsPosMap.Sample(trilinearSampler, IN.texcoord).xyz;
float3 pxNorWS = wsNorMap.Sample(trilinearSampler, IN.texcoord).xyz;
float3 gridPos = getGridPosAsFloat(pxPosWS);
// https://github.com/mafian89/Light-Propagation-Volumes/blob/master/shaders/basicShader.frag
float4 SHintensity = dirToSH(-pxNorWS);
float3 lpvIntensity = (float3)0;
float4 lpvRtex = lpvR.SampleLevel(trilinearSampler, gridPos / float3(LPV_DIM, LPV_DIM, LPV_DIM), 0);
float4 lpvGtex = lpvG.SampleLevel(trilinearSampler, gridPos / float3(LPV_DIM, LPV_DIM, LPV_DIM), 0);
float4 lpvBtex = lpvB.SampleLevel(trilinearSampler, gridPos / float3(LPV_DIM, LPV_DIM, LPV_DIM), 0);
lpvIntensity = float3(
dot(SHintensity, lpvRtex),
dot(SHintensity, lpvGtex),
dot(SHintensity, lpvBtex));
float3 finalLPVRadiance = max(0, lpvIntensity) / PI;
float4 result = float4(finalLPVRadiance, 1.0) * ambientOcclusionMap.Load(int3(IN.pos.xy, 0)).r * float4(albedo, 1.0);
return result;
}</pre>
[/code]
Pros and cons
This algorithm, like any other, has its pros and cons.
Pros
• The algorithm is very fast
• One data structure which can support multiple lights
• Completely dynamic and real-time
Cons
• The amount of SH coefficients used (4) has only about 75% accuracy. This means more objects will get incorrect lighting and light bleeding will happen in wrong places.
• Trade-off between more local reflections or more global reflection as the size of the grid influences this. Can be solved by using Cascaded LPVs.
• Does not allow for specular reflections. Can be added with Screen-Spaced Reflections.
• Only allows for one light bounce. There are workarounds, but that also has its tradeoffs.
Resources
Spherical Harmonics papers and posts:
Reflective Shadow Maps: Part 2 – The implementation
I managed to implement a very naive version of Reflective Shadow Maps (an algorithm described in this paper). This post will explain how I did that and what the pitfalls were. It will also cover some possible optimizations.
rsm_impl
Figure 1: From left to right: Render without Reflective Shadow Maps, render with reflective shadow maps, difference
The result
In figure 1 you see one of the results produced by RSM. The images you see use the Stanford Bunny and three differently colored quads. In the left image, you see the result of a render without RSM, using just a spot light. Whatever falls in the shadow is completely black. In the middle image you see the same image, but rendered using RSM. Notable differences are the brighter colors everywhere, the pink color bleeding onto the floor and the bunny, the shadow not being completely black. The last image shows the differene between the two images, thus what RSM contributed to the image. You might see some harder edges and artifacts in the middle and righter image, but that can be solved by tweaking the sample kernel size, indirect light intensity, and the amount of samples taken.
The implementation
The engine I implemented this algorithm in has a cross-platform rendering architecture allowing us to create rendering techniques (like deferred shading, shadow mapping, etc.) that will theoretically work on any platform we support. The architecture was set up to be multi-threading compatible and as stateless as possible. It also uses a lot of terminology found in DirectX 11 and 12. The shaders were written in HLSL and the renders made with DirectX 11. Keep this in mind when I talk about implementation details.
I had already set up a deferred renderer with shadow maps for directional lights prior to writing this article. Then I implemented RSM for directional lights. After that, I added spot light shadow maps and added support for RSM to them.
Expanding the shadow map
Traditionally, Shadow Maps (SM) are no more than a depth map. This means you don’t even need a pixel/fragment shader for filling an SM. However, for RSM, you need a few extra buffers. You need to store the world space positions, world space normals, and the flux. This means you need multiple render targets and a pixel/fragment shader to fill them. Keep in mind that you need to cull back faces instead of front faces for this technique. Using front face culling is a commonly used technique to avoid shadow artifacts, but this does not work with RSM.
You pass the world space normal and position to the pixel shader and pass those through to the corresponding buffers. If you have normal mapping, you calculate that in the pixel shader as well. The flux is calculated in the pixel shaders and is the albedo of the material multiplied by the light’s color. For spot lights, you multiply this by the falloff. For directional lights, this will simply look like an unshaded image.
Preparing the shading pass
For the shading pass, you need to do a few things. You need to bind all buffers used in the shadow pass as textures. You also need random numbers. The paper tells you to precalculate those numbers and store them into a buffer in order to save operations for the sampling pass. Since the algorithm is heavy in terms of performance, I thoroughly agree with the paper. They also recommend this to have temporal coherency. This means it will avoid flickering images when every frame uses different shadows.
You need two random floats in the [0, 1] range per sample you take. These random numbers will be used to determine the coordinates of a sample. You will also need the same matrix you use transform world space positions to shadow map texture space positions. Further than that, a non-comparison sampler that clamps with black border colors will also be necessary.
Performing the shading pass
This is the hard part, especially to get it right. I recommend doing the indirect shading pass after you have done the direct shading for a particular light. This is because you need a full screen quad to do this and this works fine for directional lights. However, for spot and point lights you generally want to use shaped meshes with some form of culling to fill less pixels.
I will show a piece of code below that calculates the indirect shading per pixel. After that, I will step through the code and explain what is happening.
[code language=”cpp”]float3 DoReflectiveShadowMapping(float3 P, bool divideByW, float3 N)
{
float4 textureSpacePosition = mul(lightViewProjectionTextureMatrix, float4(P, 1.0));
if (divideByW) textureSpacePosition.xyz /= textureSpacePosition.w;
float3 indirectIllumination = float3(0, 0, 0);
float rMax = rsmRMax;
for (uint i = 0; i < rsmSampleCount; ++i)
{
float2 rnd = rsmSamples[i].xy;
float2 coords = textureSpacePosition.xy + rMax * rnd;
float3 vplPositionWS = g_rsmPositionWsMap.Sample(g_clampedSampler, coords.xy).xyz;
float3 vplNormalWS = g_rsmNormalWsMap.Sample(g_clampedSampler, coords.xy).xyz;
float3 flux = g_rsmFluxMap.Sample(g_clampedSampler, coords.xy).xyz;
float3 result = flux
* ((max(0, dot(vplNormalWS, P – vplPositionWS))
* max(0, dot(N, vplPositionWS – P)))
/ pow(length(P – vplPositionWS), 4));
result *= rnd.x * rnd.x;
indirectIllumination += result;
}
return saturate(indirectIllumination * rsmIntensity);
}[/code]
The first argument in the function is P, which is the world space position for a specific pixel. DivideByW is used for the perspective divide required to get a correct Z value. N is the world space normal at a pixel.
[code language=”cpp”]
float4 textureSpacePosition = mul(lightViewProjectionTextureMatrix, float4(P, 1.0));
if (divideByW) textureSpacePosition.xyz /= textureSpacePosition.w;
float3 indirectIllumination = float3(0, 0, 0);
float rMax = rsmRMax;
[/code]
This section sets up the texture space position, initializes the indirect lighting contribution where samples will accumulate into, and set the rMax variable found in the lighting equation in the paper which I will cover in the next section. Basically, rMax is the maximum distance the random sample can be from the texture space position.
[code language=”cpp”]
for (uint i = 0; i < rsmSampleCount; ++i)
{
float2 rnd = rsmSamples[i].xy;
float2 coords = textureSpacePosition.xy + rMax * rnd;
float3 vplPositionWS = g_rsmPositionWsMap.Sample(g_clampedSampler, coords.xy).xyz;
float3 vplNormalWS = g_rsmNormalWsMap.Sample(g_clampedSampler, coords.xy).xyz;
float3 flux = g_rsmFluxMap.Sample(g_clampedSampler, coords.xy).xyz;
[/code]
Here we open the loop and prepare our variables for the equation. In order to optimize it a bit further, the random samples that I calculated are already coordinate offsets, meaning I only have to add rMax * rnd to the texture space coordinates to get my UV coordinates. If the UV coordinates fall outside of the [0,1] range, the samples will be black. Which is logical, since it falls outside of the light’s range, thus does not have any shadow map point to sample from.
[code language=”cpp”]
float3 result = flux
* ((max(0, dot(vplNormalWS, P – vplPositionWS))
* max(0, dot(N, vplPositionWS – P)))
/ pow(length(P – vplPositionWS), 4));
result *= rnd.x * rnd.x;
indirectIllumination += result;
}
return saturate(indirectIllumination * rsmIntensity);
[/code]
This is the part where the indirect lighting equation (displayed in figure 2) is evaluated and weighted by the distance between the point and the pixel light. The equation looks daunting and the code doesn’t really tell you what’s going on either, so I will explain. The variable Φ (phi) is the flux, which is the radiant intensity. The previous article describes this in more detail.
The flux (Φ) is scaled by two dot products. The first dot product is between the pixel light normal and the direction from the pixel light to the surface point. The second dot product is between the surface normal and the direction from the surface point to pixel light. In order to not get inverted light contributions, those dot products are clamped between [0, ∞]. In this equation they do the normalization step last, I assume for performance reasons. It is equally valid to normalize the directions before doing the dot products.
equation
Figure 2: The equation for irradiance at a point in space by pixel light
p
The result from this shader pass can be blended on a backbuffer and will give results as seen in figure 1.
Pitfalls
While implementing this algorithm, I ran into some issues. I will cover these issues to avoid people from making the same mistakes.
Incorrect sampler
I spent a considerable amount of time figuring out why my indirect light seemed to repeat itself. Crytek’s Sponza does not have their UV coordinates in the [0,1] range, so we needed a sampler which wrapped. This is however a horrible property when you are sampling from (reflective) shadow maps.
Tweakable values
To improve my workflow, it was vital to have some variables tunable at the touch of a button. I can increase the intensity of the indirect lighting and the sampling range (rMax). For reflective shadow mapping, these variables should be tweakable per light. If you sample in a big range, you get lighting from everywhere, which is useful for big scenes. For more local indirect lighting, you will need a smaller range. Figure 3 shows global and local indirect lighting.
rMax
Figure 3: Demonstration of rMax sensitivity.
Separate pass
Initially I thought I could do the indirect lighting in the shader that does the light gathering for deferred rendering. For directional lights, this works, because you render a full screen quad anyway. However, for spot- and pointlights, you try to minimize the fill rate. I decided to move the indirect lighting to a separate pass, something that is necessary if you want to do the screen space interpolation as wel.
Cache inefficient by nature
The algorithm is horribly cache inefficient. The algorithm samples randomly around a point in multiple textures. The amount of samples taken without optimization is unacceptably high as well. With a resolution of 1280 * 720 and a sample count of 400 you take 1.105.920.000 samples per light.
Pros & cons
I will list the pros and cons of this indirect lighting algorithm that I have encountered. I do not have a lot to compare it to, since this is the first that I am implementing.
Pros Cons
Easy to understand algorithm Very cache inefficient
Integrates neatly with a deferred renderer Requires tweaking of variables
Can be used in other algorithms (LPV) Forced choice between local and global indirect light
Optimizations
I have made some attempts to increase the speed of this algorithm. As they discuss in the paper (link at the top of this page) they perform a screen space interpolation. I got this to work and it sped up the rendering quite a bit. Below I will describe what I have done and make a comparison (in frames per second) between the following states using my 3-walls-with-bunny scene; no RSM, naïve RSM, and interpolated RSM .
Z-check
One reason why my RSM was underperforming was because I was also testing for the pixels that were part of the skybox. A skybox definitely does not need indirect lighting. The speedup this gives depends on how much of the skybox you would actually see.
Pre-calculating random samples on the CPU
Pre-calculating the random samples not only gives you more temporal coherency, it also saves you from having to regenerate those samples in the shaders.
Screen space interpolation
The article proposes to use a low resolution render target for evaluating the indirect lighting. For scenes with a lot of smooth normals and straight walls, lighting information can easily be interpolated between lower resolution points. I am not going to describe this interpolation in detail, to keep this article a bit shorter.
Results and conclusion
Below are my results for a few different sample counts. I have a few observations on these results.
• Logically, the FPS stays around 700 for different sample counts when there is no RSM calculation done.
• Interpolation brings some overhead and becomes less useful with low sample counts.
• Even with 100 samples, the resulting image looked pretty good. This might be due to the interpolation which is “blurring” the indirect light. This makes it look more diffuse.
Sample count FPS for No RSM FPS for Naive RSM FPS for Interpolated RSM
100 ~700 152 264
200 ~700 89 179
300 ~700 62 138
400 ~700 44 116
Reflective Shadow Maps
Reflective Shadow Maps (RSM) is an algorithm that extends “simple” Shadow Maps. The algorithm allows for a single diffuse light bounce. This means that, besides direct illumination, you get indirect illumination. This article breaks down the algorithm from the paper to explain it in a more human-friendly way. I will also briefly cover Shadow Mapping.
Shadow Mapping
Shadow Mapping (SM) is a shadow generation algorithm. This algorithm stores the distance from a light to an object in a depth map. Figure 1 shows an example of a depth map. It stores the distance (depth) per pixel. So, when you have a depth map from a light’s point of view, you then draw the scene from the camera’s point of view. To determine if an object is lit, you check the distance from the light to that object. If the distance to the object is greater than what is stored in the shadow (depth) map, the object is in the shadow. This means the object must not be lit. Figure 2 shows an example. You do these checks per pixel.
5394470202_309d6fe655
Figure 1: This image shows a depth map. The closer the pixel is, the brighter it appears.
shadowmapexample
Figure 2: The distance from the light to the pixel in the shadow is greater than the distance stored in the depth map.
Reflective Shadow Mapping
Now that you understand the basic concept of Shadow Mapping, we continue with Reflective Shadow Mapping (RSM). This algorithm extends the functionality of “simple” Shadow Maps. Besides depth data, you also store world space coordinates,the world space normals, and the flux. I will explain why you store these pieces of data.
The data
World space coordinates
You store the world space coordinates in a Reflective Shadow Map to determine the world space distance between pixels. This is useful for calculating light attenuation. Light attenuates (becomes less concentrated) based on the distance it travels. The distance between two points in space are used to calculate how intense the lighting is.
Normals
The (world space) normal is used to calculate the light bouncing off of a surface. In case of the RSM, it is also used to determine the validity of the pixel as a light for another pixel. If two normals are very similar, they will not contribute much bouncing light for each other.
(Luminous) Flux
Flux is the luminous intensity of a light source. The unit of measurement for this is Lumen, a term you see on light bulb packages nowadays. The algorithm stores the flux for every pixel checked while drawing the shadow map. Flux is calculated by multiplying the reflected light intensity by a reflection coefficient. For directional lights, this would give a uniformly lit image. For spot lights, you take the angle falloff into consideration. Attenuation and receiver cosine are left out of this calculation, because this is taken into account when you calculate the indirect lighting. The article does not go into detail about this. Figure 3 shows an image from the RSM paper displaying the flux for a spot light in the fourth image.
rsm_maps
Figure 3: This image shows the four maps contained in an RSM. From left to right; depth map, world space coordinates, world space normals, flux
Applying the data
Now that we have generated the data (theoretically), it is time to apply it to a final image. When you draw the final image, you test all lights for every pixel. Besides just lighting the pixels using the lights, you now also use the Reflective Shadow Map.
A naive approach to calculating the light contribution is to test all texels in the RSM. You check if the normal of the texel in the RSM is not pointing away from the pixel you are evaluating. This is done using the world space coordinates and the world space normals. You calculate the direction from the RSM texel’s world space coordinates to that of the pixel. You then compare that to the direction the normal is pointing to, using the vector dot product. Any positive value means the pixel should be lit by the flux stored in the RSM. Figure 4 demonstrates this algorithm.
indirect_lighting
Figure 4: Demonstration of indirect light contribution based on world space positions and normals
Shadow maps (and RSMs) are large by nature (512×512=262144 pixels), so doing a test for every texel is far from optimal. Instead, it is best to take a set amount of samples from the map. The amount of samples you take depends on how powerful your hardware is. An insufficient amount of samples might give artifacts like banding or flickering colors.
The texels that will contribute most to the lighting result are the ones closest to the pixel you are evaluating. A sampling pattern that takes the most samples near the pixel’s coordinates will give the best results. The paper describes that the sampling density decreases with the squared distance from the pixel we are testing. They achieve this by converting the texture coordinates to polar coordinates relative to the point we are testing.
Since we are doing “importance sampling” here, we need to scale the intensity of the samples by a factor related to the distance. This is because samples that are further away get sampled less often, but would in reality still contribute the same amount of flux. Samples closer by get sampled more often, but samples farther away are more intense. This evens out an inequality while keeping the sample count low. Figure 5 shows how this works.
importance_sampling
Figure 5: Importance sampling. More samples are taken from the center and samples are scaled by a factor related to their distance from the center point. Borrowed from the RSM paper.
When you have a sample, you treat that sample the same as you would treat a point light. You use the flux value as the light’s color and only light objects in front of this sample.
The paper goes into more detail on how to further optimize this algorithm, but I will stop here. The section on Screen-Space Interpolation describes how you can gain more speed, but for now I think importance sampling will suffice. I am glad to say that I understand the RSM algorithm enough to build an implementation in C++/DirectX11. I will do my best to answer any questions you may have.
Lots of balls
A new week, a new assignment. This week I get to optimize a piece of code that renders a “glass” ball on a background, letting through some of the color hidden in the background image. Puzzling on my own and talking to classmates helped in creating code which I couldn’t get any faster. I’ve learned a lot doing this and discussing it with peers.
The initial code draws a colorful landscape with a black & white overdraw on it, making it a greyscale and darker image. The ball samples pixels from the colorful version of the image. The goal of this week’s assignment is to optimize using several rules of thumb, “Get rid of expensive operations”, “Precalculate”, “Use the power of 2”, “Avoid conditional jumps”, and “Get out early”. High-level optimizations were not specifically required, but makes the code a lot faster. I will show both versions of “Game.cpp”, mine and the unoptimized one.
glassbal
This is how the glass-ball effect looks.
The initial code was written by my teacher, Jacco Bikker, for the NHTV, and is intended to be optimized. The most relevant bit is in the “Game.cpp” and is as follows.
[code language=”cpp”]#include "game.h"
#include "surface.h"
using namespace Tmpl8;
// ———————————————————–
// Scale a color (range: 0..128, where 128 is 100%)
// ———————————————————–
inline unsigned int ScaleColor( unsigned int orig, char scale )
{
const Pixel rb = ((scale * (orig & ((255 << 16) + 255))) >> 7) & ((255 << 16) + 255);
const Pixel g = ((scale * (orig & (255 << 8))) >> 7) & (255 << 8);
return (Pixel)rb + g;
}
// ———————————————————–
// Draw a glass ball using fake reflection & refraction
// ———————————————————–
void Game::DrawBall( int bx, int by )
{
Pixel* dst = m_Surface->GetBuffer() + bx + by * m_Surface->GetPitch();
Pixel* src = m_Image->GetBuffer();
for ( int x = 0; x < 128; x++ )
{
for ( int y = 0; y < 128; y++ )
{
float dx = (float)(x – 64);
float dy = (float)(y – 64);
int dist = (int)sqrt( dx * dx + dy * dy );
if (dist < 64)
{
int xoffs = (int)((dist / 2 + 10) * sin( (float)(x – 50) / 40.0 ) );
int yoffs = (int)((dist / 2 + 10) * sin( (float)(y – 50) / 40.0 ) );
int u1 = (((bx + x) – 4 * xoffs) + SCRWIDTH) % SCRWIDTH;
int v1 = (((by + y) – 4 * yoffs) + SCRHEIGHT) % SCRHEIGHT;
int u2 = (((bx + x) + 2 * xoffs) + SCRWIDTH) % SCRWIDTH;
int v2 = (((by + y) + 2 * yoffs) + SCRHEIGHT) % SCRHEIGHT;
Pixel refl = src[u1 + v1 * m_Image->GetPitch()];
Pixel refr = src[u2 + v2 * m_Image->GetPitch()];
int reflscale = (int)(63.0f – 0.015f * (1 – dist) * (1 – dist));
int refrscale = (int)(0.015f * (1 – dist) * (1 – dist));
dst[x + y * m_Surface->GetPitch()] =
ScaleColor( refl, 41 – (int)(reflscale * 0.5f) ) + ScaleColor( refr, 63 – refrscale );
float3 L = Normalize( float3( 60, -90, 85 ) );
float3 p = float3( (x – 64) / 64.0f, (y – 64) / 64.0f, 0 );
p.z = sqrt( 1.0f – (p.x * p.x + p.y * p.y) );
float d = min( 1, max( 0, Dot( L, p ) ) );
d = powf( d, 140 );
Pixel highlight = ((int)(d * 255.0f) << 16) + ((int)(d * 255.0f) << 8) + (int)(d * 255.0f);
dst[x + y * m_Surface->GetPitch()] = AddBlend( dst[x + y * m_Surface->GetPitch()], highlight );
}
}
}
}
// ———————————————————–
// Initialize the game
// ———————————————————–
void Game::Init()
{
m_Image = new Surface( "testdata/mountains.png" );
m_BallX = 100;
m_BallY = 100;
m_VX = 1.6f;
m_VY = 0;
}
// ———————————————————–
// Draw the backdrop and make it a bit darker
// ———————————————————–
void Game::DrawBackdrop()
{
m_Image->CopyTo( m_Surface, 0, 0 );
Pixel* src = m_Surface->GetBuffer();
unsigned int count = m_Surface->GetPitch() * m_Surface->GetHeight();
for ( unsigned int i = 0; i < count; i++ )
{
src[i] = ScaleColor( src[i], 20 );
int grey = src[i] & 255;
src[i] = grey + (grey << 8) + (grey << 16);
}
}
// ———————————————————–
// Main game tick function
// ———————————————————–
void Game::Tick( float a_DT )
{
m_Surface->Clear( 0 );
DrawBackdrop();
DrawBall( (int)m_BallX, (int)m_BallY );
m_BallY += m_VY;
m_BallX += m_VX;
m_VY += 0.2f;
if (m_BallY > (SCRHEIGHT – 128))
{
m_BallY = SCRHEIGHT – 128;
m_VY = -0.96f * m_VY;
}
if (m_BallX > (SCRWIDTH – 138))
{
m_BallX = SCRWIDTH – 138;
m_VX = -m_VX;
}
if (m_BallX < 10)
{
m_BallX = 10;
m_VX = -m_VX;
}
}[/code]
I will not dive in too much depth on what the pieces of the code does, but Init runs once, Tick runs every frame, DrawBackdrop draws the dark and grey overlay on top of the bright coloured landscape, which is visible through the ball drawn with DrawBall. The things to optimize will be drawing the background (with currently DrawBackdrop) and DrawBall. I did not pay attention to memory usage, as that isn’t the intention of the excersize. This is the code that I ended up with:
[code language=”cpp”]#include "game.h"
#include "surface.h"
#include <iostream>
#include <time.h>
using namespace Tmpl8;
Surface* imageWithBackdrop;
// ———————————————————–
// Scale a color (range: 0..128, where 128 is 100%)
// ———————————————————–
inline unsigned int ScaleColor( unsigned int orig, int scale )
{
const Pixel rb = ((scale * (orig & ((255 << 16) | 0xFF))) >> 7) & ((255 << 16) + 255);
const Pixel g = ((scale * (orig & (255 << 8))) >> 7) & (255 << 8);
return (Pixel)rb + g;
}
// ———————————————————–
// Draw a glass ball using fake reflection & refraction
// ———————————————————–
const float3 L = Normalize(float3(60, -90, 85));
unsigned int imageWidth;
unsigned int imageHeight;
int distances[128 * 128];
Pixel highlights[128 * 128];
int xOffs[128 * 128];
int yOffs[128 * 128];
int reflScales[128 * 128];
int refrScales[128 * 128];
const static int pitch = SCRWIDTH;
#define BALLS 100
struct Ball
{
float m_X, m_Y, m_VX, m_VY;
};
Ball balls[BALLS];
void Game::DrawBall( int bx, int by )
{
static Pixel* src = m_Image->GetBuffer();
Pixel* dst = m_Surface->GetBuffer() + bx + by * m_Surface->GetPitch();
for (int y = 0; y < 128; ++y )
{
for ( int x = 0; x < 128; ++x )
{
unsigned int index = (y << 7) + x;
if (distances[index] < 64)
{
unsigned int u1 = ((((bx + x) – (xOffs[index] << 1)) + imageWidth) << 22) >> 22;
unsigned int v1 = ((((by + y) – (yOffs[index] << 1)) + imageHeight) << 22) >> 12;
unsigned int u2 = (((bx + x) + xOffs[index] + imageWidth) << 22) >> 22;
unsigned int v2 = (((by + y) + yOffs[index] + imageHeight) << 22) >> 12;
Pixel refl = src[u1 + v1]; //Reflection
Pixel refr = src[u2 + v2]; //Refraction
dst[x + y * pitch] =
AddBlend(
ScaleColor(refl, reflScales[index]) + ScaleColor(refr, refrScales[index]),
highlights[index]
);
}
}
}
}
// ———————————————————–
// Draw the backdrop and make it a bit darker
// ———————————————————–
void Game::DrawBackdrop()
{
m_Image->CopyTo(m_Surface, 0, 0);
Pixel* src = m_Surface->GetBuffer();
unsigned int count = m_Surface->GetPitch() * m_Surface->GetHeight();
for (unsigned int i = 0; i < count; i++)
{
src[i] = ScaleColor(src[i], 20);
int grey = src[i] & 255;
src[i] = grey + (grey << 8) + (grey << 16);
}
}
void DrawBackground(Surface* a_Target)
{
memcpy(a_Target->GetBuffer(), imageWithBackdrop->GetBuffer(),
imageWithBackdrop->GetPitch() * imageWithBackdrop->GetHeight() * sizeof(Pixel));
}
// ———————————————————–
// Initialize the game
// ———————————————————–
#define TEST_ITERATIONS 512
void Game::Init()
{
srand((unsigned int)time(0));
m_Image = new Surface("testdata/mountains.png");
imageWidth = m_Image->GetWidth();
imageHeight = m_Image->GetHeight();
for (int y = 0; y < 128; y++)
{
float dy = (float)(y – 64);
for (int x = 0; x < 128; x++)
{
float dx = (float)(x – 64);
int dist = (int)sqrt(dx * dx + dy * dy);
distances[y * 128 + x] = dist;
}
}
for(int y = 0; y < 128; y++)
{
int dy = y – 64;
float pY = dy / 64.0f;
for (int x = 0; x < 128; x++)
{
xOffs[y * 128 + x] = (int)(((distances[y * 128 + x] >> 1) + 10) * sin((x – 50) / 40.0f));
xOffs[y * 128 + x] <<= 1;
yOffs[y * 128 + x] = (int)(((distances[y * 128 + x] >> 1) + 10) * sin((y – 50) / 40.0f));
yOffs[y * 128 + x] <<= 1;
reflScales[y * 128 + x] = (int)(63.0f – 0.015f * (1 – distances[y * 128 + x]) * (1 – distances[y * 128 + x]));
reflScales[y * 128 + x] = 41 – (reflScales[y * 128 + x] >> 1);
refrScales[y * 128 + x] = (int)(0.015f * (1 – distances[y * 128 + x]) * (1 – distances[y * 128 + x]));
refrScales[y * 128 + x] = 63 – refrScales[y * 128 + x];
float3 p((x – 64) / 64.0f, pY, 0);
p.z = sqrt(1.0f – (p.x * p.x + p.y * p.y));
float d = Dot(L, p);
if (d > 0.96f) //Approximate value, anything below this doesn’t affect the highlight.
{
d = min(1, max(0, d));
d = pow(d, 140);
auto di = (int)(d * 255.0f);
Pixel highlight = (di << 16) + (di << 8) + di;
highlights[y * 128 + x] = highlight;
}
else
{
highlights[y * 128 + x] = 0;
}
}
}
for (int i = 0; i < BALLS; i++)
{
balls[i].m_X = (float)(80 + rand()%(SCRWIDTH-230));
balls[i].m_Y = (float)(16 + rand() % (SCRHEIGHT – 230));
balls[i].m_VX = 1.6f;
balls[i].m_VY = 0;
}
DrawBackdrop();
imageWithBackdrop = new Surface(m_Surface->GetPitch(), m_Surface->GetHeight());
memcpy(imageWithBackdrop->GetBuffer(), m_Surface->GetBuffer(),
m_Surface->GetPitch() * m_Surface->GetHeight() * sizeof(Pixel));
unsigned long long timings[TEST_ITERATIONS];
for (int iterations = 0; iterations < TEST_ITERATIONS; iterations++)
{
TimerRDTSC timer;
timer.Start();
DrawBall(300, 300);
timer.Stop();
timings[iterations] = timer.Interval();
}
unsigned long long total = 0;
for (int i = 0; i < TEST_ITERATIONS; i++)
{
total += timings[i];
}
total /= TEST_ITERATIONS;
std::cout << total << " is average for DrawBall\n";
TimerRDTSC timer;
timer.Start();
DrawBackground(m_Surface);
timer.Stop();
std::cout << timer.Interval() << std::endl;
}
// ———————————————————–
// Main game tick function
// ———————————————————–
void Game::Tick( float a_DT )
{
DrawBackground(m_Surface);
for (int i = 0; i < BALLS; i++)
{
DrawBall((int)balls[i].m_X, (int)balls[i].m_Y);
balls[i].m_X += balls[i].m_VX;
balls[i].m_Y += balls[i].m_VY;
balls[i].m_VY += 0.2f;
if (balls[i].m_Y >(SCRHEIGHT – 128))
{
balls[i].m_Y = SCRHEIGHT – 128;
balls[i].m_VY = -0.96f * balls[i].m_VY;
}
if (balls[i].m_X > (SCRWIDTH – 138))
{
balls[i].m_X = SCRWIDTH – 138;
balls[i].m_VX = -balls[i].m_VX;
}
if (balls[i].m_X < 10)
{
balls[i].m_X = 10;
balls[i].m_VX = -balls[i].m_VX;
}
}
}[/code]
Background
I started off with the background, which was rendering slow due to the DrawBackdrop every frame. The background image never changes, which means drawing the backdrop only once on a copy of the landscape and then drawing that pre-rendered surface as background would save me quite some cycles. I moved the call to DrawBackdrop to the Init-function and drew that to a new surface.
This saved me a lot of cycles. The cycles on itself don’t say a lot, but in comparison to optimized results, they do. Going from 100 to 10 cycles would be 10 times faster, while 100 itself doesn’t give any valuable information away. The rendering of the background with the backdrop costed me about 1871784 cycles on average (on 10 tests). The optimized DrawBackground, using the pre-rendered surface and memcpy, uses roughly 285000 cycles on average (on 512 tests). This is about 15.2% of the unoptimized version. Clearing the screen wasn’t necessary, so that line is removed in the final code.
The Ball
For the optimizations performed on drawing the ball, I will discuss the before mentioned topics. The original code uses 4912748 cycles on average for 512 tests. The optimized version uses about 433500 on average, with 512 tests. That means there’s only 8.8% left of the cost!
Get rid of expensive operations (and use the power of 2)
The original code uses a lot of expensive operations. Most of that stuff can be precalculated, which is what I went for. I did try some other ideas before I started to move those calculations to the Init function. Here’s a short list of things I’ve figured out, some with help of my classmates.
• Changing “powf” to “pow” gives an enormous reduction in cycles. This is because “pow” takes an integer as power and “powf” takes a float, which is a lot harder to calculate. Since the input is 140, it could just be an integer.
• Reducing the calls to “pow” increases performance a lot. “pow” is still quite expensive, so we’d want to reduce its use as much as we can. The code using “pow” draws a specular highlight on the ball, but that’s just a small dot. Doing “pow” for every pixel while only about 5% should need it is a waste. I approximated a number of about 0.96 (depending on the power), where every result of the Dot-product higher than that will draw a pixel for the specular highlight. 96% less calls to “pow” is great.
• The distance calculations were faster with floats. I didn’t expect this, since integer operations are practically always faster. But, I read somewhere that the compiler might use SIMD to optimize floating-point operations, making it a lot faster. This is why you should always test and record, instead of blindly changing code which you think might run faster.
• Modulo is slow. It can be used to have looping textures, but the operation is slow. The original code used modulo on the window width, but it sampled colours (for refraction and reflection) from the image provided with the project. The fortunate thing is that the background image has a dimension of 1024×640. Since 1024 is a power of 2, you can use a logical and operator (&) to scrape off the bits not included in the 1024-range, making it wrap neatly. This speeds up the code enormously, since it saves two modulo calls per pixel per ball. The height isn’t a power of two, but we can adjust the image to make it so. Using PhotoShop, I padded the height to 1024, repeating the image for sampling purposes. Now I can do the same thing with the height, removing all uses of modulo.
• Bitshifting is faster than logical and (&). In the previous list item, I removed the modulo calls, by replacing them with logical ands, greatly increasing speed. But, since we’re only using the first 10 bits, we can shift 22 bits to the left, truncating all those bits and then shifting them back to get the wrapped value between 0 and 1023. This saved several thousand cycles on average.
• int-float conversions are slow. “(int)(reflscale * 0.5f)” converts “reflscale” to a float to do the multiplication and then converts that back to an int. Since 0.5 is a reciprocal of a power of 2, we can just use a bitshift to divide the number by 2. “reflscale >> 1” does the job perfectly and is a lot faster.
Precalculate
Precalculating most stuff was what gave the biggest improvements in cycle reduction. I’ve moved almost everything, apart from the sampling from the background, to the Init functions and made it accessible by array indices.
I started precalculating stuff when I noticed the “sin()” calls always used a number of 0 to 128 as arguments. I created an array of precalculated sines and referenced to that, giving a great boost in speed. I wanted to do the same for the distances, using an array of 64×64, because the distances are the same for all quadrants of the circle. This gave me an off-by-one error, making the code draw 2 unwanted lines. Since memory usage wasn’t the focus, I figured I could just use 128×128 arrays for the calculations, avoiding calls to “abs()”.
After this, I sat down with a classmate, talking about which optimizations we used and soon figured out that practically everything can be cached quite easily. The entire specular highlight can be saved in an array, getting rid of “sqrt()”-calls, “pow”-calls, “sin()”-calls, and floats in the rendering of each ball every frame. The highlight-array is basically just an overlay for the ball, blended with “AddBlend()”. All the heavy code is now in the Init-function, basically leaving only integer arithmetic, sampling from an image, blending, and bitshifting in the DrawBall-function. Pre-rendering every possible position for a ball could also be a solution, but having 800×600 pre-rendered balls in memory isn’t a neat solution in my opinion.
Avoid conditional jumps
Unrolling my loops would probably speed things up, but I read that the compiler unrolls the loops automatically. An old forum post a user states that the Visual C++ compiler unwinds loops where possible to increase performance. My project settings are set to produce the fastest code (opposed to the smaller code option).
Get out early
The “Get out early”-principle basically means to break the loop if the rest of the iterations aren’t helpful anymore. I tested code which broke the X-loop after it drew the last pixel of that row. Unfortunately, the checks for these last pixels increased the amount of cycles more than it saved, which is why I don’t break the loop.
Conclusion
The core of this optimization challenge was precalculation (caching) and getting rid of expensive function calls. I got the ball to render more than 10 times as fast and I can run the program drawing 100 balls on screen with over 30 FPS on my laptop. I’m happy with the result, yet it frustrates me a little bit that I can’t find anything else in that piece of code to optimize, but I guess I’ll learn more ways to do that in future lectures. Thanks for reading!
Getting things sorted
For an assignment for Programming 3 at IGAD, I have to optimize a piece of code. The code transforms, sorts, and renders an amount of sprites. Optimization of the rendering was done in class, so the sorting is the biggest bottleneck at the moment. Which is what I will dedicate this post to. I’ve found some sorting algorithms that I think are very interesting. The ones I will talk about are QuickSort, Radix Sort, and Tree Sort. The algorithm used in the code at first is an unoptimized Bubble Sort, which is very slow.
The sorting algorithm will have to deal with a high amount of sprites to sort, based on their Z-value, which is a floating-point number. The amount of sprites ranges is at least above 5000 and the assignment is to make the code render as many sprites on screen as I can.
Tree Sort
The first decent algorithm I imagined without research was a tree structure, where you would put all data in a binary tree. This automatically sorts the tree, which you could flatten back to an array quite easily. Apparently, this exists and isn’t a terrible solution for sorting. The only problem you face is when you have an unbalanced tree. If you have the maximum value as a first value, anything that gets added will get added to the left child nodes. With a bit of bad luck, this could be the case for a lot of elements, making you traverse every element before adding a new one.
The ease of implementation is what would make this an option for the simulation that I have to optimize. I will not use this one, but I will give example pseudo-code of how I would implement it. It can probably optimized in really simple ways, but since I’m not using this algorithm, I see no need in working on that at the moment. Adding to the tree would be as follows.
[code language=”cpp”]struct TreeNode
{
/*
* I’m using float3-vectors, this could be a Sprite as well.
* The code just draws sprites at positions.
*/
float3 value;
TreeNode* leftNode;
TreeNode* rightNode;
TreeNode(float3 a_value)
: value(a_value), leftNode(nullptr), rightNode(nullptr){}
~TreeNode()
{
if(leftNode) delete leftNode;
if(rightNode) delete rightNode;
}
void Insert(float3& a_other)
{
if(a_other.z <= value.z)
{
if(leftNode == nullptr)
leftNode = new TreeNode(a_other);
else
leftNode->Insert(a_other);
}
else
{
if(rightNode == nullptr)
rightNode = new TreeNode(a_other);
else
rightNode->Insert(a_other);
}
}
//float3 pointer to array, int pointer
void Flatten(float3* a_array, int& a_index)
{
if(leftNode != nullptr)
leftNode->Flatten(a_array, a_index);
a_array[a_index++] = value;
if(rightNode != nullptr)
rightNode->Flatten(a_array, a_index);
}
};
//Setting the first position vector as RootNode
TreeNode* root = new TreeNode(ListOfPositions[0]);
//DOTS is a define which is the number of sprites drawn
for(int i = 1; i < DOTS; i++)
{
root->Insert(ListOfPositions[i]);
}
//Now, to flatten the tree back to an array.
int index = 0;
root->Flatten(ListOfPositions, index);
//Don’t forget to clean up the tree!
delete root;[/code]
I tried this code in the application and quite quickly discovered that this is a slow method of sorting, compared to Quicksort. This might be because it is a completely unbalanced tree. The results vary, but it sometimes uses 100 times the amount of cycles that Quicksort uses.
Radix Sort
The first thing I did when I started this assignment was to look for sorting algorithms. There is an awesome visualization of 15 algorithms on YouTube. This video features the Radix Sort (LSD) and Radix Sort (MSD). I will focus on LSD (Least Significant Digit) here, because that is what I tried to implement.
The idea of Radix Sort is to sort numbers in several passes. Every pass sorts the list by a digit in the number. If you have an array with {105, 401, 828, 976}, you can sort on the last digit. {105, 401, 828, 976} would become {401, 105, 976, 828}. After this, you sort on the second digit, making {401, 105, 976, 828} into {401, 105, 828, 976}. The last digit makes {401, 105, 828, 976} into {105, 401, 828, 976}, and we’re done sorting. The beauty of this algorithm is that you can do it without comparing numbers to eachother. I will explain how in a while.
Another cool property of this algorithm is that it is a stable sorting algorithm. This means that if you have two objects in an array with the same comparing value (e.g. {Vector3(100, 20, 3), Vector3(526, -10, 3)} when comparing Z-values), they are guaranteed to appear in the same order in the array. The second element will never be inserted before the first element. This is quite useful, because two objects with the same Z-value and similar X and Y positions might end up Z-fighting, a problem I am facing with Quicksort at the moment.
For this algorithm, you don’t compare values to eachother. The idea is to use buckets to put the objects in. For the previous example, you can have 10 buckets. A bucket per digit (0..9). When you are done filling the buckets, you loop through the buckets, adding all the items in the bucket to an array, making it all sorted on the current radix. For the previous example, this required three passes, one pass per digit. This makes the complexity of Radix Sort O(n*k) where k is equal to the amount of passes. This is very fast, but not applicable to all sorting problems. The good thing is, I can use this to sort floating-point numbers.
Floating-point numbers are difficult to sort this way, since the radices aren’t obvious. What you can do, however, is convert the float to an unsigned integer and using the hexadecimal values to sort them. I will not go in to detail on how to do this, since there are amazing articles by Pierre Terdiman and Michael Herf on Radix Sorting (negative) floating-point numbers. The idea is that most compilers comply to the IEEE 754 standard for representation of floating-point numbers in memory, making the least significant bits in the memory representation also the least significant digits in the floating-point number.
Quicksort
Quicksort is the algorithm I ended up using, since I couldn’t get the Radix Sort to work properly and Quicksort is easier to implement. Quicksort has the advantage that the sorting is in-place, meaning you don’t need additional memory for the sorting process. A downside to Quicksort is that the algorithm isn’t stable, giving Z-fighting issues in this demo.
The concept of Quicksort is to select a pivot point in the array (any random number, the median would be ideal, but costs more time to compute) and put lower numbers to the left of this pivot and higher numbers to the right. When this is done, you partition the array into several smaller partitions to sort the smaller and higher blocks. This process is recursive and ends when the partition size is too small to swap things over the pivot (for example, just the pivot left in the partition would indicate that that partition is sorted, or at least doesn’t require sorting).
Conclusion
This concludes my post about sorting algorithms. The one I picked was Quicksort, since it is easy to understand, implement, and find reference for. It is incredibly fast, but I consider the Z-fighting issues annoying. I’d require a stable sorting algorithm to get rid of that. Radix Sort would be a viable candidate for that, but for the time I had, it took too much to fully comprehend and implement it. The Tree Sort is really easy to implement, since I made up the code as I wrote this post, but it’s slow as well. It could use some clever optimizing, but I wanted to focus on getting the program fast first. The downside to Radix Sort and Tree Sort is that it isn’t in-place, meaning that you need extra memory for sorting the arrays. Memory wasn’t an issue in this assignment, but it can be an issue in future projects, which is why I took it into consideration when picking Quicksort.
Post-mortem: Micro Machines in 3D
This is my first post-mortem post. I will write about Micro Machines in 3D, an assignment I handed in last week for the second block of Programming at IGAD. This project features a lot of stuff I haven’t tried before. For instance, it uses QuadTrees for collision checking, it uses OpenGL and GLSL to render the models, and it uses a lot more smart pointers than I did before. In this post I will talk about several subjects that were relevant for the making of this project. I will talk about writing code for drawing 3D models using .OBJ’s, OpenGL, and GLSL. I will talk about optimizations, especially in collision testing and cull testing. I also ended up with a neat Singleton base-class which I would like to share. The first thing I’d like to talk about is Planning. Something I’ve worked on a lot this block. Planning | 3D Models | Shaders | Optimizing | Smart Pointers | Singletons | Conclusion
Planning
Planning has always been a thing I’ve neglected and thought of as something that would never work for me, since I’m quite chaotic. A lecture on planning triggered something in me to think “let’s just give it a shot”. I started off with listing all deadlines I had that block and then creating a few incomplete sprints, like the ones in SCRUM. I used Scrumy to keep an overview of things I still had to do and things that I should be doing. When those were complete, I started to fill in things in Google Calendar. When I received the assignment for PR2 (Programming), I immediately decided to make a 3D game. I knew this would cost me quite a large amount of time, which is why I wanted to plan this out from the start. I made a list of all things I wanted in the game and proceeded to create User Stories for them. User stories Apart from the racing against an opponent, everything on the image is in the game. After this, I created Sprint Backlogs, with hour estimations. I knew I had to spend about 20 hours a week to complete the project in time, with the estimations I made back then. The first few weeks, I stuck to my planning, getting things done in time and moving things forward. After about 3 weeks, I doubted if I was able to fit in an AI with pathfinding and I wanted to finish a cool, working project instead of just attempts at getting stuff to work. This led me to skip the AI and focus more on collisions, the art, and culling. This messed up the planning though. The Sprint Backlogs lost their use and I just made a planning every week, stating “Work on this assignment”, instead of specifying which task had to be done at what moment. The lack of specification made it easier for me to say “Not today! Time for video games.” I learnt that the effective part of planning is the specification. If I tell myself to do task X between 19:00 and 23:00, my planning will depend on me having done that. This forces me to work on it and actually keeps me motivated. I found sticking to my planning to be very motivating. The short bursts of excitement when you finish a task way in time is awesome.
3D models
The primary personal goal of this assignment was creating the game in 3D, instead of 2D. The framework I had to use was basically a software renderer at first. It did all the calculations on an array of integers and pushed that to the GPU as a texture on a quad. This is quite slow, so the first step was to edit this framework to use OpenGL quads for sprites. The framework used a quad for its rendering, which made it an excellent case study. I used OpenGL’s deprecated Immediate Mode at first for rendering those sprite quads. When I finally got a quad to draw with a texture on it, rotating along an arbitrary axis, it was time to start using perspective instead of an orthographic view. Prior to this assignment, I read parts of Beginning OpenGL Game Programming, a book I would recommend to anyone starting OpenGL. This book shows how easy it is to set up a perspective view. The problems I faced were in the clipping planes and in the aspect ratio. The code I had was:
[code language=”cpp”]gluPerspective(60, 800 / 600, -1, 1000);[/code]
The problem in the aspect ratio was (not only the lack of variable names) that integer division results in an integer. This means 800/600 results to 1. The second problem was the near-clipping plane, set to -1. When this is negative, the graphics become quite glitchy, because certain calculations can’t be done. Some calculations use a ratio between the near and far clipping plane for determining the depth. If it is set to -1, the ratio can’t be calculated correctly. Setting this to 0.001 fixed the problem. The correct version was:
[code language=”cpp”]gluPerspective(60, 800.0 / 600, 0.001, 1000);[/code]
By this time, I got a fancy quad rotating around the Y-axis, in perspective. The next step was loading a model. An easy-to-use and readable format is the .OBJ-format. This format is saved in ASCII (plain text) and there are some great tutorials for parsing these formats. After following this tutorial, I ran into a small issue. Drawing a 3D-object in Immediate Mode really pushes down the Frames Per Second (FPS). The usual solution would be to use a Vertex Buffer Object (VBO) accompanied with an Index Buffer Object (IBO) for the indices. After trying several different tutorials and functions, I couldn’t get the model to draw on screen. This led me to look for other solutions. I found the, by now deprecated, Display List feature in OpenGL. As far as I know, this function records the Immediate Mode steps in a list and makes it possible to call the list using
[code language=”cpp”]glCallList(m_listId);[/code]
This was quite fast as well, making it easy for me to load several models in the game with still a really high FPS. For the upcoming projects, I want to start using VBO’s and IBO’s though. For the bubbles in this game, I used a cone billboarded towards the camera. The shader uses the normals to determine transparency and the colors. This seemed a bit more efficient than having 1000 spheres in a game.
Shaders
I found programming the shaders quite an enjoyable thing to do. I used GLSL for this project, and with it I made a shader for my bubbles, a shader for explosions (illuminating), some standard diffuse shaders with texture capabilities, and a shader for the wavy water. I will talk about the bubble shader and the wavy water.
Bubble Shader
The bubble shader uses the camera position and checks how much the normal points towards the camera using the Dot-product. It then adds more red and green the less it points towards the camera. It also reduces alpha if it points towards the camera. This gives the impression of a bubble which you can see through, but is still visible by its edges. It then adds some lighting to these bubbles. I tweaked this to make them look a bit less dull and overall, I’m quite satisfied with the result.
Wavy Water Shader
The wavy water shader makes vertices on an object translate upwards depending on a given time value and it’s vertex Z-position. At first, I multiplied the sine of the time value by the Z-position, but this led to water waving a lot faster the further away you were from Z = 0. This is why I added the sine of Z/10 to the sine of time. This always gives an offset, making the wave work correctly. The amplitude of the waves is quite subtle in my opinion. This is because the objects will not go up and down depending on the waves and this covers it up a bit. It’s subtle enough to not be annoying and visible enough to give it an extra touch to the game.
Optimizing
Culling
Apparently, trying to render a complete scene with thousand bubbles in it isn’t really efficient. This bogged down the FPS to something that was unplayable. To fix this, I needed some form of culling. Frustum culling was a bit too hard for me at that moment. I used two forms of determining if I should draw something. At first, check if it is behind the camera. If that’s the case, don’t draw it. If it isn’t the case, check for the distance. If the distance (squared, to keep it a bit faster) is greater than a certain amount, don’t draw it as well. These methods worked fairly well in keeping my FPS decent in areas where there weren’t a lot of bubbles. Whenever I ran into areas with lots of bubbles, the FPS started to drop. Tweaking my waypoints (where the bubbles and mines spawn around) fixed this issue.
QuadTree
Not drawing all those objects helps for maintaining a decent FPS, but having a thousand bubbles constantly checking collision with eachother and the scene isn’t a practical solution as well. The first thing I did was create a QuadTree containing all static objects. The walls and mines are put into this tree and every object is checked against this quad tree. This helps quite a lot, but not enough. There are still a lot of dynamic objects in the scene. I couldn’t really find a good solution for this problem, which is why I create a QuadTree containing the dynamic objects every frame. This can be terribly slow, because of the amount of bubbles. This led me to use a hacky solution. If the dynamic object isn’t in range of the player, it isn’t updated nor pushed in the dynamic QuadTree. It’s hardly noticeable if you don’t know this and really helps for stabilizing the FPS. These solutions combined keeps my FPS above 50 in intense parts on my laptop and even more on other computers I’ve tested it on.
Smart Pointers
This was the first project where I’ve tried to use Smart Pointers intensively. There are still some leaks in the game, which I couldn’t find, but I’ve managed to reduce them a lot. I used unique pointers for the Singletons and shared pointers for any static or dynamic object. It is such a delight to only have to erase objects from a vector to get them removed and not constantly filling in destructors. The next project will be a challenge for me, to make it contain as few raw pointers as possible.
Singletons
For my projects, I use Singletons extensively. I have Singletons for containing textures, sprites, entities, audio, and input. During this project, for every Singleton I had to copy/paste several things to get it to work. I now have a Singleton base-class using templates. The only thing that you need to do in a derived class is inherit from it and declaring the Singleton<type> a friend, to be able to call the constructor, which you will want private to prevent multiple instances. Example of how to inherit from it:
[code language=”cpp”]class ShaderManager : public Singleton {
friend class Singleton;[/code]
And to use the Singleton you just use:
[code language=”cpp”]ShaderManager::GetInstance()->DoStuff();[/code]
So, the header file containing this class can be used in any project. It uses a unique pointer, which you could reset to get rid of the instance. This is the Singleton-class I’m talking about:
[code language=”cpp”]#pragma once
#include <memory>
template <class Type>
class Singleton
{
public:
virtual ~Singleton(void){}
static Type* GetInstance(void)
{
if(single.get() == nullptr)
{
single = std::unique_ptr<Type>(new Type());
return single.get();
}
else
{
return single.get();
}
}
static void Reset(){single = nullptr;}
protected:
static std::unique_ptr<Type> single;
};
template <class Type>
std::unique_ptr<Type> Singleton<Type>::single = nullptr;[/code]
Conclusion
This was a project in which I did a lot of stuff I haven’t done before. It was a great experience and I learned a lot about OpenGL, shaders, Smart Pointers, and an overall decent architecture. In the end, the code base got a bit messier than I had hoped, but the overall result is something I am quite proud of. The next project will feature the use of VBO’s, IBO’s, more math (since I won’t be using deprecated OpenGL anymore), and perhaps more fancy shaders. I will also try to stick to using only Smart Pointers and avoid raw pointers overall. This was also my first post-mortem, which is an experience itself. Normally, writing such a piece of text requires me to force myself to do it. I believe this will help me in overcoming the hurdle of having to write documents and larger pieces of text. Writing these posts will hopefully also improve my English over time. I hope you’ve enjoyed reading my first post-mortem. If you have made it to this point, thanks for reading! You can check out the code at GitHub and check the portfolio piece here.
Getting back to blogging.
Welcome to this new blog of mine.
Not my first
This isn’t the first blog I’ve started. Recently, I’ve started a study in Breda at the NHTV called International Game Architecture & Design (IGAD), with a focus on Programming. Before this, I studied somewhere else (Game Design & Development at USAT in Hilversum), but quit after two years. I discovered that a focus on game design isn’t set aside for me. From the start of that study, I enjoyed programming more than anything. When I quit, I decided to spend time preparing for IGAD. Improving my math skills, working on my C++, and do some blogging. Blogging forces me to think about the software I’ve written and gives an insight into my way of getting tasks done. Blogging also improves my skills in writing and understanding English. English isn’t my native language, Dutch is.
My old blog features the intake assignment I had to create for my application to IGAD, a project I started in XNA/C#. It also has my old portfolio before starting IGAD, summarized into one blog-post.
The revival
This website will function as a portfolio and a blog. It will also feature a short bio and links to social networks like LinkedIn, Github, and Twitter. The reason for having this website is to have a central hub for everything that has to do with my study and my upcoming career as a programmer. While developing software, I am also developing myself in a way that I keep getting closer to what I really want to do on a daily basis. What that is, I can’t really tell at the moment, but I am tending towards Graphics Programming.
My education offers me general game programming and principles at the moment, but in upcoming years, I will specialize myself into a subject. Not knowing what that exactly is, is somewhat exciting, because I already love working with the broad subjects, and going into the depths of one of those sounds like a lot of fun.
The blog?
The blog on this site will feature post-mortems for projects I have done, posts on graphics programming, attempts at clearing my mind of annoying programming-related issues, and software architecture related stuff. Probably more, but that’s the fun of this journey, I have no clear idea where I’m going!
The portfolio?
The portfolio is, at the moment of writing, still empty. I have a few projects made at school which I will add to it as soon as possible. You can expect several 2D games, including a small clone of Super Mario for the NES, a “clone” of Zelda: A Link to the Past (it has the graphics and parts of the mechanics, but a different game), and a clone of Galaxian. Those were the first three graded assignments for Programming this year. Then there’s the game we got to make for the second block of Programming. This was my first attempt at making a 3D game using OpenGL, shaders, and models.
I’m also considering adding the first GameLab game I had to make. That was a group project made in Unity. I’m not quite sure if I find it portfolio-worthy enough, but it would make a nice post-mortem with references to the portfolio.
The end… for today
Next week, I will fill in the portfolio and write a post-mortem on the 3D game I made. I have some other stuff to write about, so that might be posted next week as well. Thanks for reading!
|
__label__pos
| 0.905619 |
/ Artifact [b5a3e30f]
Login
Artifact b5a3e30f538a9ffe81538b3063b4d5963f9bb422:
/*
** 2005 December 14
**
** The author disclaims copyright to this source code. In place of
** a legal notice, here is a blessing:
**
** May you do good and not evil.
** May you find forgiveness for yourself and forgive others.
** May you share freely, never taking more than you give.
**
*************************************************************************
**
** $Id: sqlite3async.c,v 1.7 2009/07/18 11:52:04 danielk1977 Exp $
**
** This file contains the implementation of an asynchronous IO backend
** for SQLite.
*/
#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_ASYNCIO)
#include "sqlite3async.h"
#include "sqlite3.h"
#include <stdarg.h>
#include <string.h>
#include <assert.h>
/* Useful macros used in several places */
#define MIN(x,y) ((x)<(y)?(x):(y))
#define MAX(x,y) ((x)>(y)?(x):(y))
#ifndef SQLITE_AMALGAMATION
/* Macro to mark parameters as unused and silence compiler warnings. */
#define UNUSED_PARAMETER(x) (void)(x)
#endif
/* Forward references */
typedef struct AsyncWrite AsyncWrite;
typedef struct AsyncFile AsyncFile;
typedef struct AsyncFileData AsyncFileData;
typedef struct AsyncFileLock AsyncFileLock;
typedef struct AsyncLock AsyncLock;
/* Enable for debugging */
#ifndef NDEBUG
#include <stdio.h>
static int sqlite3async_trace = 0;
# define ASYNC_TRACE(X) if( sqlite3async_trace ) asyncTrace X
static void asyncTrace(const char *zFormat, ...){
char *z;
va_list ap;
va_start(ap, zFormat);
z = sqlite3_vmprintf(zFormat, ap);
va_end(ap);
fprintf(stderr, "[%d] %s", 0 /* (int)pthread_self() */, z);
sqlite3_free(z);
}
#else
# define ASYNC_TRACE(X)
#endif
/*
** THREAD SAFETY NOTES
**
** Basic rules:
**
** * Both read and write access to the global write-op queue must be
** protected by the async.queueMutex. As are the async.ioError and
** async.nFile variables.
**
** * The async.pLock list and all AsyncLock and AsyncFileLock
** structures must be protected by the async.lockMutex mutex.
**
** * The file handles from the underlying system are not assumed to
** be thread safe.
**
** * See the last two paragraphs under "The Writer Thread" for
** an assumption to do with file-handle synchronization by the Os.
**
** Deadlock prevention:
**
** There are three mutex used by the system: the "writer" mutex,
** the "queue" mutex and the "lock" mutex. Rules are:
**
** * It is illegal to block on the writer mutex when any other mutex
** are held, and
**
** * It is illegal to block on the queue mutex when the lock mutex
** is held.
**
** i.e. mutex's must be grabbed in the order "writer", "queue", "lock".
**
** File system operations (invoked by SQLite thread):
**
** xOpen
** xDelete
** xFileExists
**
** File handle operations (invoked by SQLite thread):
**
** asyncWrite, asyncClose, asyncTruncate, asyncSync
**
** The operations above add an entry to the global write-op list. They
** prepare the entry, acquire the async.queueMutex momentarily while
** list pointers are manipulated to insert the new entry, then release
** the mutex and signal the writer thread to wake up in case it happens
** to be asleep.
**
**
** asyncRead, asyncFileSize.
**
** Read operations. Both of these read from both the underlying file
** first then adjust their result based on pending writes in the
** write-op queue. So async.queueMutex is held for the duration
** of these operations to prevent other threads from changing the
** queue in mid operation.
**
**
** asyncLock, asyncUnlock, asyncCheckReservedLock
**
** These primitives implement in-process locking using a hash table
** on the file name. Files are locked correctly for connections coming
** from the same process. But other processes cannot see these locks
** and will therefore not honor them.
**
**
** The writer thread:
**
** The async.writerMutex is used to make sure only there is only
** a single writer thread running at a time.
**
** Inside the writer thread is a loop that works like this:
**
** WHILE (write-op list is not empty)
** Do IO operation at head of write-op list
** Remove entry from head of write-op list
** END WHILE
**
** The async.queueMutex is always held during the <write-op list is
** not empty> test, and when the entry is removed from the head
** of the write-op list. Sometimes it is held for the interim
** period (while the IO is performed), and sometimes it is
** relinquished. It is relinquished if (a) the IO op is an
** ASYNC_CLOSE or (b) when the file handle was opened, two of
** the underlying systems handles were opened on the same
** file-system entry.
**
** If condition (b) above is true, then one file-handle
** (AsyncFile.pBaseRead) is used exclusively by sqlite threads to read the
** file, the other (AsyncFile.pBaseWrite) by sqlite3_async_flush()
** threads to perform write() operations. This means that read
** operations are not blocked by asynchronous writes (although
** asynchronous writes may still be blocked by reads).
**
** This assumes that the OS keeps two handles open on the same file
** properly in sync. That is, any read operation that starts after a
** write operation on the same file system entry has completed returns
** data consistent with the write. We also assume that if one thread
** reads a file while another is writing it all bytes other than the
** ones actually being written contain valid data.
**
** If the above assumptions are not true, set the preprocessor symbol
** SQLITE_ASYNC_TWO_FILEHANDLES to 0.
*/
#ifndef NDEBUG
# define TESTONLY( X ) X
#else
# define TESTONLY( X )
#endif
/*
** PORTING FUNCTIONS
**
** There are two definitions of the following functions. One for pthreads
** compatible systems and one for Win32. These functions isolate the OS
** specific code required by each platform.
**
** The system uses three mutexes and a single condition variable. To
** block on a mutex, async_mutex_enter() is called. The parameter passed
** to async_mutex_enter(), which must be one of ASYNC_MUTEX_LOCK,
** ASYNC_MUTEX_QUEUE or ASYNC_MUTEX_WRITER, identifies which of the three
** mutexes to lock. Similarly, to unlock a mutex, async_mutex_leave() is
** called with a parameter identifying the mutex being unlocked. Mutexes
** are not recursive - it is an error to call async_mutex_enter() to
** lock a mutex that is already locked, or to call async_mutex_leave()
** to unlock a mutex that is not currently locked.
**
** The async_cond_wait() and async_cond_signal() functions are modelled
** on the pthreads functions with similar names. The first parameter to
** both functions is always ASYNC_COND_QUEUE. When async_cond_wait()
** is called the mutex identified by the second parameter must be held.
** The mutex is unlocked, and the calling thread simultaneously begins
** waiting for the condition variable to be signalled by another thread.
** After another thread signals the condition variable, the calling
** thread stops waiting, locks mutex eMutex and returns. The
** async_cond_signal() function is used to signal the condition variable.
** It is assumed that the mutex used by the thread calling async_cond_wait()
** is held by the caller of async_cond_signal() (otherwise there would be
** a race condition).
**
** It is guaranteed that no other thread will call async_cond_wait() when
** there is already a thread waiting on the condition variable.
**
** The async_sched_yield() function is called to suggest to the operating
** system that it would be a good time to shift the current thread off the
** CPU. The system will still work if this function is not implemented
** (it is not currently implemented for win32), but it might be marginally
** more efficient if it is.
*/
static void async_mutex_enter(int eMutex);
static void async_mutex_leave(int eMutex);
static void async_cond_wait(int eCond, int eMutex);
static void async_cond_signal(int eCond);
static void async_sched_yield(void);
/*
** There are also two definitions of the following. async_os_initialize()
** is called when the asynchronous VFS is first installed, and os_shutdown()
** is called when it is uninstalled (from within sqlite3async_shutdown()).
**
** For pthreads builds, both of these functions are no-ops. For win32,
** they provide an opportunity to initialize and finalize the required
** mutex and condition variables.
**
** If async_os_initialize() returns other than zero, then the initialization
** fails and SQLITE_ERROR is returned to the user.
*/
static int async_os_initialize(void);
static void async_os_shutdown(void);
/* Values for use as the 'eMutex' argument of the above functions. The
** integer values assigned to these constants are important for assert()
** statements that verify that mutexes are locked in the correct order.
** Specifically, it is unsafe to try to lock mutex N while holding a lock
** on mutex M if (M<=N).
*/
#define ASYNC_MUTEX_LOCK 0
#define ASYNC_MUTEX_QUEUE 1
#define ASYNC_MUTEX_WRITER 2
/* Values for use as the 'eCond' argument of the above functions. */
#define ASYNC_COND_QUEUE 0
/*************************************************************************
** Start of OS specific code.
*/
#if SQLITE_OS_WIN || defined(_WIN32) || defined(WIN32) || defined(__CYGWIN__) || defined(__MINGW32__) || defined(__BORLANDC__)
#include <windows.h>
/* The following block contains the win32 specific code. */
#define mutex_held(X) (GetCurrentThreadId()==primitives.aHolder[X])
static struct AsyncPrimitives {
int isInit;
DWORD aHolder[3];
CRITICAL_SECTION aMutex[3];
HANDLE aCond[1];
} primitives = { 0 };
static int async_os_initialize(void){
if( !primitives.isInit ){
primitives.aCond[0] = CreateEvent(NULL, TRUE, FALSE, 0);
if( primitives.aCond[0]==NULL ){
return 1;
}
InitializeCriticalSection(&primitives.aMutex[0]);
InitializeCriticalSection(&primitives.aMutex[1]);
InitializeCriticalSection(&primitives.aMutex[2]);
primitives.isInit = 1;
}
return 0;
}
static void async_os_shutdown(void){
if( primitives.isInit ){
DeleteCriticalSection(&primitives.aMutex[0]);
DeleteCriticalSection(&primitives.aMutex[1]);
DeleteCriticalSection(&primitives.aMutex[2]);
CloseHandle(primitives.aCond[0]);
primitives.isInit = 0;
}
}
/* The following block contains the Win32 specific code. */
static void async_mutex_enter(int eMutex){
assert( eMutex==0 || eMutex==1 || eMutex==2 );
assert( eMutex!=2 || (!mutex_held(0) && !mutex_held(1) && !mutex_held(2)) );
assert( eMutex!=1 || (!mutex_held(0) && !mutex_held(1)) );
assert( eMutex!=0 || (!mutex_held(0)) );
EnterCriticalSection(&primitives.aMutex[eMutex]);
TESTONLY( primitives.aHolder[eMutex] = GetCurrentThreadId(); )
}
static void async_mutex_leave(int eMutex){
assert( eMutex==0 || eMutex==1 || eMutex==2 );
assert( mutex_held(eMutex) );
TESTONLY( primitives.aHolder[eMutex] = 0; )
LeaveCriticalSection(&primitives.aMutex[eMutex]);
}
static void async_cond_wait(int eCond, int eMutex){
ResetEvent(primitives.aCond[eCond]);
async_mutex_leave(eMutex);
WaitForSingleObject(primitives.aCond[eCond], INFINITE);
async_mutex_enter(eMutex);
}
static void async_cond_signal(int eCond){
assert( mutex_held(ASYNC_MUTEX_QUEUE) );
SetEvent(primitives.aCond[eCond]);
}
static void async_sched_yield(void){
Sleep(0);
}
#else
/* The following block contains the pthreads specific code. */
#include <pthread.h>
#include <sched.h>
#define mutex_held(X) pthread_equal(primitives.aHolder[X], pthread_self())
static int async_os_initialize(void) {return 0;}
static void async_os_shutdown(void) {}
static struct AsyncPrimitives {
pthread_mutex_t aMutex[3];
pthread_cond_t aCond[1];
pthread_t aHolder[3];
} primitives = {
{ PTHREAD_MUTEX_INITIALIZER,
PTHREAD_MUTEX_INITIALIZER,
PTHREAD_MUTEX_INITIALIZER
} , {
PTHREAD_COND_INITIALIZER
} , { 0, 0, 0 }
};
static void async_mutex_enter(int eMutex){
assert( eMutex==0 || eMutex==1 || eMutex==2 );
assert( eMutex!=2 || (!mutex_held(0) && !mutex_held(1) && !mutex_held(2)) );
assert( eMutex!=1 || (!mutex_held(0) && !mutex_held(1)) );
assert( eMutex!=0 || (!mutex_held(0)) );
pthread_mutex_lock(&primitives.aMutex[eMutex]);
TESTONLY( primitives.aHolder[eMutex] = pthread_self(); )
}
static void async_mutex_leave(int eMutex){
assert( eMutex==0 || eMutex==1 || eMutex==2 );
assert( mutex_held(eMutex) );
TESTONLY( primitives.aHolder[eMutex] = 0; )
pthread_mutex_unlock(&primitives.aMutex[eMutex]);
}
static void async_cond_wait(int eCond, int eMutex){
assert( eMutex==0 || eMutex==1 || eMutex==2 );
assert( mutex_held(eMutex) );
TESTONLY( primitives.aHolder[eMutex] = 0; )
pthread_cond_wait(&primitives.aCond[eCond], &primitives.aMutex[eMutex]);
TESTONLY( primitives.aHolder[eMutex] = pthread_self(); )
}
static void async_cond_signal(int eCond){
assert( mutex_held(ASYNC_MUTEX_QUEUE) );
pthread_cond_signal(&primitives.aCond[eCond]);
}
static void async_sched_yield(void){
sched_yield();
}
#endif
/*
** End of OS specific code.
*************************************************************************/
#define assert_mutex_is_held(X) assert( mutex_held(X) )
#ifndef SQLITE_ASYNC_TWO_FILEHANDLES
/* #define SQLITE_ASYNC_TWO_FILEHANDLES 0 */
#define SQLITE_ASYNC_TWO_FILEHANDLES 1
#endif
/*
** State information is held in the static variable "async" defined
** as the following structure.
**
** Both async.ioError and async.nFile are protected by async.queueMutex.
*/
static struct TestAsyncStaticData {
AsyncWrite *pQueueFirst; /* Next write operation to be processed */
AsyncWrite *pQueueLast; /* Last write operation on the list */
AsyncLock *pLock; /* Linked list of all AsyncLock structures */
volatile int ioDelay; /* Extra delay between write operations */
volatile int eHalt; /* One of the SQLITEASYNC_HALT_XXX values */
volatile int bLockFiles; /* Current value of "lockfiles" parameter */
int ioError; /* True if an IO error has occurred */
int nFile; /* Number of open files (from sqlite pov) */
} async = { 0,0,0,0,0,1,0,0 };
/* Possible values of AsyncWrite.op */
#define ASYNC_NOOP 0
#define ASYNC_WRITE 1
#define ASYNC_SYNC 2
#define ASYNC_TRUNCATE 3
#define ASYNC_CLOSE 4
#define ASYNC_DELETE 5
#define ASYNC_OPENEXCLUSIVE 6
#define ASYNC_UNLOCK 7
/* Names of opcodes. Used for debugging only.
** Make sure these stay in sync with the macros above!
*/
static const char *azOpcodeName[] = {
"NOOP", "WRITE", "SYNC", "TRUNCATE", "CLOSE", "DELETE", "OPENEX", "UNLOCK"
};
/*
** Entries on the write-op queue are instances of the AsyncWrite
** structure, defined here.
**
** The interpretation of the iOffset and nByte variables varies depending
** on the value of AsyncWrite.op:
**
** ASYNC_NOOP:
** No values used.
**
** ASYNC_WRITE:
** iOffset -> Offset in file to write to.
** nByte -> Number of bytes of data to write (pointed to by zBuf).
**
** ASYNC_SYNC:
** nByte -> flags to pass to sqlite3OsSync().
**
** ASYNC_TRUNCATE:
** iOffset -> Size to truncate file to.
** nByte -> Unused.
**
** ASYNC_CLOSE:
** iOffset -> Unused.
** nByte -> Unused.
**
** ASYNC_DELETE:
** iOffset -> Contains the "syncDir" flag.
** nByte -> Number of bytes of zBuf points to (file name).
**
** ASYNC_OPENEXCLUSIVE:
** iOffset -> Value of "delflag".
** nByte -> Number of bytes of zBuf points to (file name).
**
** ASYNC_UNLOCK:
** nByte -> Argument to sqlite3OsUnlock().
**
**
** For an ASYNC_WRITE operation, zBuf points to the data to write to the file.
** This space is sqlite3_malloc()d along with the AsyncWrite structure in a
** single blob, so is deleted when sqlite3_free() is called on the parent
** structure.
*/
struct AsyncWrite {
AsyncFileData *pFileData; /* File to write data to or sync */
int op; /* One of ASYNC_xxx etc. */
sqlite_int64 iOffset; /* See above */
int nByte; /* See above */
char *zBuf; /* Data to write to file (or NULL if op!=ASYNC_WRITE) */
AsyncWrite *pNext; /* Next write operation (to any file) */
};
/*
** An instance of this structure is created for each distinct open file
** (i.e. if two handles are opened on the one file, only one of these
** structures is allocated) and stored in the async.aLock hash table. The
** keys for async.aLock are the full pathnames of the opened files.
**
** AsyncLock.pList points to the head of a linked list of AsyncFileLock
** structures, one for each handle currently open on the file.
**
** If the opened file is not a main-database (the SQLITE_OPEN_MAIN_DB is
** not passed to the sqlite3OsOpen() call), or if async.bLockFiles is
** false, variables AsyncLock.pFile and AsyncLock.eLock are never used.
** Otherwise, pFile is a file handle opened on the file in question and
** used to obtain the file-system locks required by database connections
** within this process.
**
** See comments above the asyncLock() function for more details on
** the implementation of database locking used by this backend.
*/
struct AsyncLock {
char *zFile;
int nFile;
sqlite3_file *pFile;
int eLock;
AsyncFileLock *pList;
AsyncLock *pNext; /* Next in linked list headed by async.pLock */
};
/*
** An instance of the following structure is allocated along with each
** AsyncFileData structure (see AsyncFileData.lock), but is only used if the
** file was opened with the SQLITE_OPEN_MAIN_DB.
*/
struct AsyncFileLock {
int eLock; /* Internally visible lock state (sqlite pov) */
int eAsyncLock; /* Lock-state with write-queue unlock */
AsyncFileLock *pNext;
};
/*
** The AsyncFile structure is a subclass of sqlite3_file used for
** asynchronous IO.
**
** All of the actual data for the structure is stored in the structure
** pointed to by AsyncFile.pData, which is allocated as part of the
** sqlite3OsOpen() using sqlite3_malloc(). The reason for this is that the
** lifetime of the AsyncFile structure is ended by the caller after OsClose()
** is called, but the data in AsyncFileData may be required by the
** writer thread after that point.
*/
struct AsyncFile {
sqlite3_io_methods *pMethod;
AsyncFileData *pData;
};
struct AsyncFileData {
char *zName; /* Underlying OS filename - used for debugging */
int nName; /* Number of characters in zName */
sqlite3_file *pBaseRead; /* Read handle to the underlying Os file */
sqlite3_file *pBaseWrite; /* Write handle to the underlying Os file */
AsyncFileLock lock; /* Lock state for this handle */
AsyncLock *pLock; /* AsyncLock object for this file system entry */
AsyncWrite closeOp; /* Preallocated close operation */
};
/*
** Add an entry to the end of the global write-op list. pWrite should point
** to an AsyncWrite structure allocated using sqlite3_malloc(). The writer
** thread will call sqlite3_free() to free the structure after the specified
** operation has been completed.
**
** Once an AsyncWrite structure has been added to the list, it becomes the
** property of the writer thread and must not be read or modified by the
** caller.
*/
static void addAsyncWrite(AsyncWrite *pWrite){
/* We must hold the queue mutex in order to modify the queue pointers */
if( pWrite->op!=ASYNC_UNLOCK ){
async_mutex_enter(ASYNC_MUTEX_QUEUE);
}
/* Add the record to the end of the write-op queue */
assert( !pWrite->pNext );
if( async.pQueueLast ){
assert( async.pQueueFirst );
async.pQueueLast->pNext = pWrite;
}else{
async.pQueueFirst = pWrite;
}
async.pQueueLast = pWrite;
ASYNC_TRACE(("PUSH %p (%s %s %d)\n", pWrite, azOpcodeName[pWrite->op],
pWrite->pFileData ? pWrite->pFileData->zName : "-", pWrite->iOffset));
if( pWrite->op==ASYNC_CLOSE ){
async.nFile--;
}
/* The writer thread might have been idle because there was nothing
** on the write-op queue for it to do. So wake it up. */
async_cond_signal(ASYNC_COND_QUEUE);
/* Drop the queue mutex */
if( pWrite->op!=ASYNC_UNLOCK ){
async_mutex_leave(ASYNC_MUTEX_QUEUE);
}
}
/*
** Increment async.nFile in a thread-safe manner.
*/
static void incrOpenFileCount(void){
/* We must hold the queue mutex in order to modify async.nFile */
async_mutex_enter(ASYNC_MUTEX_QUEUE);
if( async.nFile==0 ){
async.ioError = SQLITE_OK;
}
async.nFile++;
async_mutex_leave(ASYNC_MUTEX_QUEUE);
}
/*
** This is a utility function to allocate and populate a new AsyncWrite
** structure and insert it (via addAsyncWrite() ) into the global list.
*/
static int addNewAsyncWrite(
AsyncFileData *pFileData,
int op,
sqlite3_int64 iOffset,
int nByte,
const char *zByte
){
AsyncWrite *p;
if( op!=ASYNC_CLOSE && async.ioError ){
return async.ioError;
}
p = sqlite3_malloc(sizeof(AsyncWrite) + (zByte?nByte:0));
if( !p ){
/* The upper layer does not expect operations like OsWrite() to
** return SQLITE_NOMEM. This is partly because under normal conditions
** SQLite is required to do rollback without calling malloc(). So
** if malloc() fails here, treat it as an I/O error. The above
** layer knows how to handle that.
*/
return SQLITE_IOERR;
}
p->op = op;
p->iOffset = iOffset;
p->nByte = nByte;
p->pFileData = pFileData;
p->pNext = 0;
if( zByte ){
p->zBuf = (char *)&p[1];
memcpy(p->zBuf, zByte, nByte);
}else{
p->zBuf = 0;
}
addAsyncWrite(p);
return SQLITE_OK;
}
/*
** Close the file. This just adds an entry to the write-op list, the file is
** not actually closed.
*/
static int asyncClose(sqlite3_file *pFile){
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
/* Unlock the file, if it is locked */
async_mutex_enter(ASYNC_MUTEX_LOCK);
p->lock.eLock = 0;
async_mutex_leave(ASYNC_MUTEX_LOCK);
addAsyncWrite(&p->closeOp);
return SQLITE_OK;
}
/*
** Implementation of sqlite3OsWrite() for asynchronous files. Instead of
** writing to the underlying file, this function adds an entry to the end of
** the global AsyncWrite list. Either SQLITE_OK or SQLITE_NOMEM may be
** returned.
*/
static int asyncWrite(
sqlite3_file *pFile,
const void *pBuf,
int amt,
sqlite3_int64 iOff
){
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
return addNewAsyncWrite(p, ASYNC_WRITE, iOff, amt, pBuf);
}
/*
** Read data from the file. First we read from the filesystem, then adjust
** the contents of the buffer based on ASYNC_WRITE operations in the
** write-op queue.
**
** This method holds the mutex from start to finish.
*/
static int asyncRead(
sqlite3_file *pFile,
void *zOut,
int iAmt,
sqlite3_int64 iOffset
){
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
int rc = SQLITE_OK;
sqlite3_int64 filesize = 0;
sqlite3_file *pBase = p->pBaseRead;
sqlite3_int64 iAmt64 = (sqlite3_int64)iAmt;
/* Grab the write queue mutex for the duration of the call */
async_mutex_enter(ASYNC_MUTEX_QUEUE);
/* If an I/O error has previously occurred in this virtual file
** system, then all subsequent operations fail.
*/
if( async.ioError!=SQLITE_OK ){
rc = async.ioError;
goto asyncread_out;
}
if( pBase->pMethods ){
sqlite3_int64 nRead;
rc = pBase->pMethods->xFileSize(pBase, &filesize);
if( rc!=SQLITE_OK ){
goto asyncread_out;
}
nRead = MIN(filesize - iOffset, iAmt64);
if( nRead>0 ){
rc = pBase->pMethods->xRead(pBase, zOut, (int)nRead, iOffset);
ASYNC_TRACE(("READ %s %d bytes at %d\n", p->zName, nRead, iOffset));
}
}
if( rc==SQLITE_OK ){
AsyncWrite *pWrite;
char *zName = p->zName;
for(pWrite=async.pQueueFirst; pWrite; pWrite = pWrite->pNext){
if( pWrite->op==ASYNC_WRITE && (
(pWrite->pFileData==p) ||
(zName && pWrite->pFileData->zName==zName)
)){
sqlite3_int64 nCopy;
sqlite3_int64 nByte64 = (sqlite3_int64)pWrite->nByte;
/* Set variable iBeginIn to the offset in buffer pWrite->zBuf[] from
** which data should be copied. Set iBeginOut to the offset within
** the output buffer to which data should be copied. If either of
** these offsets is a negative number, set them to 0.
*/
sqlite3_int64 iBeginOut = (pWrite->iOffset-iOffset);
sqlite3_int64 iBeginIn = -iBeginOut;
if( iBeginIn<0 ) iBeginIn = 0;
if( iBeginOut<0 ) iBeginOut = 0;
filesize = MAX(filesize, pWrite->iOffset+nByte64);
nCopy = MIN(nByte64-iBeginIn, iAmt64-iBeginOut);
if( nCopy>0 ){
memcpy(&((char *)zOut)[iBeginOut], &pWrite->zBuf[iBeginIn], (size_t)nCopy);
ASYNC_TRACE(("OVERREAD %d bytes at %d\n", nCopy, iBeginOut+iOffset));
}
}
}
}
asyncread_out:
async_mutex_leave(ASYNC_MUTEX_QUEUE);
if( rc==SQLITE_OK && filesize<(iOffset+iAmt) ){
rc = SQLITE_IOERR_SHORT_READ;
}
return rc;
}
/*
** Truncate the file to nByte bytes in length. This just adds an entry to
** the write-op list, no IO actually takes place.
*/
static int asyncTruncate(sqlite3_file *pFile, sqlite3_int64 nByte){
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
return addNewAsyncWrite(p, ASYNC_TRUNCATE, nByte, 0, 0);
}
/*
** Sync the file. This just adds an entry to the write-op list, the
** sync() is done later by sqlite3_async_flush().
*/
static int asyncSync(sqlite3_file *pFile, int flags){
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
return addNewAsyncWrite(p, ASYNC_SYNC, 0, flags, 0);
}
/*
** Read the size of the file. First we read the size of the file system
** entry, then adjust for any ASYNC_WRITE or ASYNC_TRUNCATE operations
** currently in the write-op list.
**
** This method holds the mutex from start to finish.
*/
int asyncFileSize(sqlite3_file *pFile, sqlite3_int64 *piSize){
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
int rc = SQLITE_OK;
sqlite3_int64 s = 0;
sqlite3_file *pBase;
async_mutex_enter(ASYNC_MUTEX_QUEUE);
/* Read the filesystem size from the base file. If pMethods is NULL, this
** means the file hasn't been opened yet. In this case all relevant data
** must be in the write-op queue anyway, so we can omit reading from the
** file-system.
*/
pBase = p->pBaseRead;
if( pBase->pMethods ){
rc = pBase->pMethods->xFileSize(pBase, &s);
}
if( rc==SQLITE_OK ){
AsyncWrite *pWrite;
for(pWrite=async.pQueueFirst; pWrite; pWrite = pWrite->pNext){
if( pWrite->op==ASYNC_DELETE
&& p->zName
&& strcmp(p->zName, pWrite->zBuf)==0
){
s = 0;
}else if( pWrite->pFileData && (
(pWrite->pFileData==p)
|| (p->zName && pWrite->pFileData->zName==p->zName)
)){
switch( pWrite->op ){
case ASYNC_WRITE:
s = MAX(pWrite->iOffset + (sqlite3_int64)(pWrite->nByte), s);
break;
case ASYNC_TRUNCATE:
s = MIN(s, pWrite->iOffset);
break;
}
}
}
*piSize = s;
}
async_mutex_leave(ASYNC_MUTEX_QUEUE);
return rc;
}
/*
** Lock or unlock the actual file-system entry.
*/
static int getFileLock(AsyncLock *pLock){
int rc = SQLITE_OK;
AsyncFileLock *pIter;
int eRequired = 0;
if( pLock->pFile ){
for(pIter=pLock->pList; pIter; pIter=pIter->pNext){
assert(pIter->eAsyncLock>=pIter->eLock);
if( pIter->eAsyncLock>eRequired ){
eRequired = pIter->eAsyncLock;
assert(eRequired>=0 && eRequired<=SQLITE_LOCK_EXCLUSIVE);
}
}
if( eRequired>pLock->eLock ){
rc = pLock->pFile->pMethods->xLock(pLock->pFile, eRequired);
if( rc==SQLITE_OK ){
pLock->eLock = eRequired;
}
}
else if( eRequired<pLock->eLock && eRequired<=SQLITE_LOCK_SHARED ){
rc = pLock->pFile->pMethods->xUnlock(pLock->pFile, eRequired);
if( rc==SQLITE_OK ){
pLock->eLock = eRequired;
}
}
}
return rc;
}
/*
** Return the AsyncLock structure from the global async.pLock list
** associated with the file-system entry identified by path zName
** (a string of nName bytes). If no such structure exists, return 0.
*/
static AsyncLock *findLock(const char *zName, int nName){
AsyncLock *p = async.pLock;
while( p && (p->nFile!=nName || memcmp(p->zFile, zName, nName)) ){
p = p->pNext;
}
return p;
}
/*
** The following two methods - asyncLock() and asyncUnlock() - are used
** to obtain and release locks on database files opened with the
** asynchronous backend.
*/
static int asyncLock(sqlite3_file *pFile, int eLock){
int rc = SQLITE_OK;
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
if( p->zName ){
async_mutex_enter(ASYNC_MUTEX_LOCK);
if( p->lock.eLock<eLock ){
AsyncLock *pLock = p->pLock;
AsyncFileLock *pIter;
assert(pLock && pLock->pList);
for(pIter=pLock->pList; pIter; pIter=pIter->pNext){
if( pIter!=&p->lock && (
(eLock==SQLITE_LOCK_EXCLUSIVE && pIter->eLock>=SQLITE_LOCK_SHARED) ||
(eLock==SQLITE_LOCK_PENDING && pIter->eLock>=SQLITE_LOCK_RESERVED) ||
(eLock==SQLITE_LOCK_RESERVED && pIter->eLock>=SQLITE_LOCK_RESERVED) ||
(eLock==SQLITE_LOCK_SHARED && pIter->eLock>=SQLITE_LOCK_PENDING)
)){
rc = SQLITE_BUSY;
}
}
if( rc==SQLITE_OK ){
p->lock.eLock = eLock;
p->lock.eAsyncLock = MAX(p->lock.eAsyncLock, eLock);
}
assert(p->lock.eAsyncLock>=p->lock.eLock);
if( rc==SQLITE_OK ){
rc = getFileLock(pLock);
}
}
async_mutex_leave(ASYNC_MUTEX_LOCK);
}
ASYNC_TRACE(("LOCK %d (%s) rc=%d\n", eLock, p->zName, rc));
return rc;
}
static int asyncUnlock(sqlite3_file *pFile, int eLock){
int rc = SQLITE_OK;
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
if( p->zName ){
AsyncFileLock *pLock = &p->lock;
async_mutex_enter(ASYNC_MUTEX_QUEUE);
async_mutex_enter(ASYNC_MUTEX_LOCK);
pLock->eLock = MIN(pLock->eLock, eLock);
rc = addNewAsyncWrite(p, ASYNC_UNLOCK, 0, eLock, 0);
async_mutex_leave(ASYNC_MUTEX_LOCK);
async_mutex_leave(ASYNC_MUTEX_QUEUE);
}
return rc;
}
/*
** This function is called when the pager layer first opens a database file
** and is checking for a hot-journal.
*/
static int asyncCheckReservedLock(sqlite3_file *pFile, int *pResOut){
int ret = 0;
AsyncFileLock *pIter;
AsyncFileData *p = ((AsyncFile *)pFile)->pData;
async_mutex_enter(ASYNC_MUTEX_LOCK);
for(pIter=p->pLock->pList; pIter; pIter=pIter->pNext){
if( pIter->eLock>=SQLITE_LOCK_RESERVED ){
ret = 1;
break;
}
}
async_mutex_leave(ASYNC_MUTEX_LOCK);
ASYNC_TRACE(("CHECK-LOCK %d (%s)\n", ret, p->zName));
*pResOut = ret;
return SQLITE_OK;
}
/*
** sqlite3_file_control() implementation.
*/
static int asyncFileControl(sqlite3_file *id, int op, void *pArg){
switch( op ){
case SQLITE_FCNTL_LOCKSTATE: {
async_mutex_enter(ASYNC_MUTEX_LOCK);
*(int*)pArg = ((AsyncFile*)id)->pData->lock.eLock;
async_mutex_leave(ASYNC_MUTEX_LOCK);
return SQLITE_OK;
}
}
return SQLITE_NOTFOUND;
}
/*
** Return the device characteristics and sector-size of the device. It
** is tricky to implement these correctly, as this backend might
** not have an open file handle at this point.
*/
static int asyncSectorSize(sqlite3_file *pFile){
UNUSED_PARAMETER(pFile);
return 512;
}
static int asyncDeviceCharacteristics(sqlite3_file *pFile){
UNUSED_PARAMETER(pFile);
return 0;
}
static int unlinkAsyncFile(AsyncFileData *pData){
AsyncFileLock **ppIter;
int rc = SQLITE_OK;
if( pData->zName ){
AsyncLock *pLock = pData->pLock;
for(ppIter=&pLock->pList; *ppIter; ppIter=&((*ppIter)->pNext)){
if( (*ppIter)==&pData->lock ){
*ppIter = pData->lock.pNext;
break;
}
}
if( !pLock->pList ){
AsyncLock **pp;
if( pLock->pFile ){
pLock->pFile->pMethods->xClose(pLock->pFile);
}
for(pp=&async.pLock; *pp!=pLock; pp=&((*pp)->pNext));
*pp = pLock->pNext;
sqlite3_free(pLock);
}else{
rc = getFileLock(pLock);
}
}
return rc;
}
/*
** The parameter passed to this function is a copy of a 'flags' parameter
** passed to this modules xOpen() method. This function returns true
** if the file should be opened asynchronously, or false if it should
** be opened immediately.
**
** If the file is to be opened asynchronously, then asyncOpen() will add
** an entry to the event queue and the file will not actually be opened
** until the event is processed. Otherwise, the file is opened directly
** by the caller.
*/
static int doAsynchronousOpen(int flags){
return (flags&SQLITE_OPEN_CREATE) && (
(flags&SQLITE_OPEN_MAIN_JOURNAL) ||
(flags&SQLITE_OPEN_TEMP_JOURNAL) ||
(flags&SQLITE_OPEN_DELETEONCLOSE)
);
}
/*
** Open a file.
*/
static int asyncOpen(
sqlite3_vfs *pAsyncVfs,
const char *zName,
sqlite3_file *pFile,
int flags,
int *pOutFlags
){
static sqlite3_io_methods async_methods = {
1, /* iVersion */
asyncClose, /* xClose */
asyncRead, /* xRead */
asyncWrite, /* xWrite */
asyncTruncate, /* xTruncate */
asyncSync, /* xSync */
asyncFileSize, /* xFileSize */
asyncLock, /* xLock */
asyncUnlock, /* xUnlock */
asyncCheckReservedLock, /* xCheckReservedLock */
asyncFileControl, /* xFileControl */
asyncSectorSize, /* xSectorSize */
asyncDeviceCharacteristics /* xDeviceCharacteristics */
};
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
AsyncFile *p = (AsyncFile *)pFile;
int nName = 0;
int rc = SQLITE_OK;
int nByte;
AsyncFileData *pData;
AsyncLock *pLock = 0;
char *z;
int isAsyncOpen = doAsynchronousOpen(flags);
/* If zName is NULL, then the upper layer is requesting an anonymous file.
** Otherwise, allocate enough space to make a copy of the file name (along
** with the second nul-terminator byte required by xOpen).
*/
if( zName ){
nName = (int)strlen(zName);
}
nByte = (
sizeof(AsyncFileData) + /* AsyncFileData structure */
2 * pVfs->szOsFile + /* AsyncFileData.pBaseRead and pBaseWrite */
nName + 2 /* AsyncFileData.zName */
);
z = sqlite3_malloc(nByte);
if( !z ){
return SQLITE_NOMEM;
}
memset(z, 0, nByte);
pData = (AsyncFileData*)z;
z += sizeof(pData[0]);
pData->pBaseRead = (sqlite3_file*)z;
z += pVfs->szOsFile;
pData->pBaseWrite = (sqlite3_file*)z;
pData->closeOp.pFileData = pData;
pData->closeOp.op = ASYNC_CLOSE;
if( zName ){
z += pVfs->szOsFile;
pData->zName = z;
pData->nName = nName;
memcpy(pData->zName, zName, nName);
}
if( !isAsyncOpen ){
int flagsout;
rc = pVfs->xOpen(pVfs, pData->zName, pData->pBaseRead, flags, &flagsout);
if( rc==SQLITE_OK
&& (flagsout&SQLITE_OPEN_READWRITE)
&& (flags&SQLITE_OPEN_EXCLUSIVE)==0
){
rc = pVfs->xOpen(pVfs, pData->zName, pData->pBaseWrite, flags, 0);
}
if( pOutFlags ){
*pOutFlags = flagsout;
}
}
async_mutex_enter(ASYNC_MUTEX_LOCK);
if( zName && rc==SQLITE_OK ){
pLock = findLock(pData->zName, pData->nName);
if( !pLock ){
int nByte = pVfs->szOsFile + sizeof(AsyncLock) + pData->nName + 1;
pLock = (AsyncLock *)sqlite3_malloc(nByte);
if( pLock ){
memset(pLock, 0, nByte);
if( async.bLockFiles && (flags&SQLITE_OPEN_MAIN_DB) ){
pLock->pFile = (sqlite3_file *)&pLock[1];
rc = pVfs->xOpen(pVfs, pData->zName, pLock->pFile, flags, 0);
if( rc!=SQLITE_OK ){
sqlite3_free(pLock);
pLock = 0;
}
}
if( pLock ){
pLock->nFile = pData->nName;
pLock->zFile = &((char *)(&pLock[1]))[pVfs->szOsFile];
memcpy(pLock->zFile, pData->zName, pLock->nFile);
pLock->pNext = async.pLock;
async.pLock = pLock;
}
}else{
rc = SQLITE_NOMEM;
}
}
}
if( rc==SQLITE_OK ){
p->pMethod = &async_methods;
p->pData = pData;
/* Link AsyncFileData.lock into the linked list of
** AsyncFileLock structures for this file.
*/
if( zName ){
pData->lock.pNext = pLock->pList;
pLock->pList = &pData->lock;
pData->zName = pLock->zFile;
}
}else{
if( pData->pBaseRead->pMethods ){
pData->pBaseRead->pMethods->xClose(pData->pBaseRead);
}
if( pData->pBaseWrite->pMethods ){
pData->pBaseWrite->pMethods->xClose(pData->pBaseWrite);
}
sqlite3_free(pData);
}
async_mutex_leave(ASYNC_MUTEX_LOCK);
if( rc==SQLITE_OK ){
pData->pLock = pLock;
}
if( rc==SQLITE_OK && isAsyncOpen ){
rc = addNewAsyncWrite(pData, ASYNC_OPENEXCLUSIVE, (sqlite3_int64)flags,0,0);
if( rc==SQLITE_OK ){
if( pOutFlags ) *pOutFlags = flags;
}else{
async_mutex_enter(ASYNC_MUTEX_LOCK);
unlinkAsyncFile(pData);
async_mutex_leave(ASYNC_MUTEX_LOCK);
sqlite3_free(pData);
}
}
if( rc!=SQLITE_OK ){
p->pMethod = 0;
}else{
incrOpenFileCount();
}
return rc;
}
/*
** Implementation of sqlite3OsDelete. Add an entry to the end of the
** write-op queue to perform the delete.
*/
static int asyncDelete(sqlite3_vfs *pAsyncVfs, const char *z, int syncDir){
UNUSED_PARAMETER(pAsyncVfs);
return addNewAsyncWrite(0, ASYNC_DELETE, syncDir, (int)strlen(z)+1, z);
}
/*
** Implementation of sqlite3OsAccess. This method holds the mutex from
** start to finish.
*/
static int asyncAccess(
sqlite3_vfs *pAsyncVfs,
const char *zName,
int flags,
int *pResOut
){
int rc;
int ret;
AsyncWrite *p;
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
assert(flags==SQLITE_ACCESS_READWRITE
|| flags==SQLITE_ACCESS_READ
|| flags==SQLITE_ACCESS_EXISTS
);
async_mutex_enter(ASYNC_MUTEX_QUEUE);
rc = pVfs->xAccess(pVfs, zName, flags, &ret);
if( rc==SQLITE_OK && flags==SQLITE_ACCESS_EXISTS ){
for(p=async.pQueueFirst; p; p = p->pNext){
if( p->op==ASYNC_DELETE && 0==strcmp(p->zBuf, zName) ){
ret = 0;
}else if( p->op==ASYNC_OPENEXCLUSIVE
&& p->pFileData->zName
&& 0==strcmp(p->pFileData->zName, zName)
){
ret = 1;
}
}
}
ASYNC_TRACE(("ACCESS(%s): %s = %d\n",
flags==SQLITE_ACCESS_READWRITE?"read-write":
flags==SQLITE_ACCESS_READ?"read":"exists"
, zName, ret)
);
async_mutex_leave(ASYNC_MUTEX_QUEUE);
*pResOut = ret;
return rc;
}
/*
** Fill in zPathOut with the full path to the file identified by zPath.
*/
static int asyncFullPathname(
sqlite3_vfs *pAsyncVfs,
const char *zPath,
int nPathOut,
char *zPathOut
){
int rc;
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
rc = pVfs->xFullPathname(pVfs, zPath, nPathOut, zPathOut);
/* Because of the way intra-process file locking works, this backend
** needs to return a canonical path. The following block assumes the
** file-system uses unix style paths.
*/
if( rc==SQLITE_OK ){
int i, j;
char *z = zPathOut;
int n = (int)strlen(z);
while( n>1 && z[n-1]=='/' ){ n--; }
for(i=j=0; i<n; i++){
if( z[i]=='/' ){
if( z[i+1]=='/' ) continue;
if( z[i+1]=='.' && i+2<n && z[i+2]=='/' ){
i += 1;
continue;
}
if( z[i+1]=='.' && i+3<n && z[i+2]=='.' && z[i+3]=='/' ){
while( j>0 && z[j-1]!='/' ){ j--; }
if( j>0 ){ j--; }
i += 2;
continue;
}
}
z[j++] = z[i];
}
z[j] = 0;
}
return rc;
}
static void *asyncDlOpen(sqlite3_vfs *pAsyncVfs, const char *zPath){
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
return pVfs->xDlOpen(pVfs, zPath);
}
static void asyncDlError(sqlite3_vfs *pAsyncVfs, int nByte, char *zErrMsg){
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
pVfs->xDlError(pVfs, nByte, zErrMsg);
}
static void (*asyncDlSym(
sqlite3_vfs *pAsyncVfs,
void *pHandle,
const char *zSymbol
))(void){
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
return pVfs->xDlSym(pVfs, pHandle, zSymbol);
}
static void asyncDlClose(sqlite3_vfs *pAsyncVfs, void *pHandle){
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
pVfs->xDlClose(pVfs, pHandle);
}
static int asyncRandomness(sqlite3_vfs *pAsyncVfs, int nByte, char *zBufOut){
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
return pVfs->xRandomness(pVfs, nByte, zBufOut);
}
static int asyncSleep(sqlite3_vfs *pAsyncVfs, int nMicro){
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
return pVfs->xSleep(pVfs, nMicro);
}
static int asyncCurrentTime(sqlite3_vfs *pAsyncVfs, double *pTimeOut){
sqlite3_vfs *pVfs = (sqlite3_vfs *)pAsyncVfs->pAppData;
return pVfs->xCurrentTime(pVfs, pTimeOut);
}
static sqlite3_vfs async_vfs = {
1, /* iVersion */
sizeof(AsyncFile), /* szOsFile */
0, /* mxPathname */
0, /* pNext */
SQLITEASYNC_VFSNAME, /* zName */
0, /* pAppData */
asyncOpen, /* xOpen */
asyncDelete, /* xDelete */
asyncAccess, /* xAccess */
asyncFullPathname, /* xFullPathname */
asyncDlOpen, /* xDlOpen */
asyncDlError, /* xDlError */
asyncDlSym, /* xDlSym */
asyncDlClose, /* xDlClose */
asyncRandomness, /* xDlError */
asyncSleep, /* xDlSym */
asyncCurrentTime /* xDlClose */
};
/*
** This procedure runs in a separate thread, reading messages off of the
** write queue and processing them one by one.
**
** If async.writerHaltNow is true, then this procedure exits
** after processing a single message.
**
** If async.writerHaltWhenIdle is true, then this procedure exits when
** the write queue is empty.
**
** If both of the above variables are false, this procedure runs
** indefinately, waiting for operations to be added to the write queue
** and processing them in the order in which they arrive.
**
** An artifical delay of async.ioDelay milliseconds is inserted before
** each write operation in order to simulate the effect of a slow disk.
**
** Only one instance of this procedure may be running at a time.
*/
static void asyncWriterThread(void){
sqlite3_vfs *pVfs = (sqlite3_vfs *)(async_vfs.pAppData);
AsyncWrite *p = 0;
int rc = SQLITE_OK;
int holdingMutex = 0;
async_mutex_enter(ASYNC_MUTEX_WRITER);
while( async.eHalt!=SQLITEASYNC_HALT_NOW ){
int doNotFree = 0;
sqlite3_file *pBase = 0;
if( !holdingMutex ){
async_mutex_enter(ASYNC_MUTEX_QUEUE);
}
while( (p = async.pQueueFirst)==0 ){
if( async.eHalt!=SQLITEASYNC_HALT_NEVER ){
async_mutex_leave(ASYNC_MUTEX_QUEUE);
break;
}else{
ASYNC_TRACE(("IDLE\n"));
async_cond_wait(ASYNC_COND_QUEUE, ASYNC_MUTEX_QUEUE);
ASYNC_TRACE(("WAKEUP\n"));
}
}
if( p==0 ) break;
holdingMutex = 1;
/* Right now this thread is holding the mutex on the write-op queue.
** Variable 'p' points to the first entry in the write-op queue. In
** the general case, we hold on to the mutex for the entire body of
** the loop.
**
** However in the cases enumerated below, we relinquish the mutex,
** perform the IO, and then re-request the mutex before removing 'p' from
** the head of the write-op queue. The idea is to increase concurrency with
** sqlite threads.
**
** * An ASYNC_CLOSE operation.
** * An ASYNC_OPENEXCLUSIVE operation. For this one, we relinquish
** the mutex, call the underlying xOpenExclusive() function, then
** re-aquire the mutex before seting the AsyncFile.pBaseRead
** variable.
** * ASYNC_SYNC and ASYNC_WRITE operations, if
** SQLITE_ASYNC_TWO_FILEHANDLES was set at compile time and two
** file-handles are open for the particular file being "synced".
*/
if( async.ioError!=SQLITE_OK && p->op!=ASYNC_CLOSE ){
p->op = ASYNC_NOOP;
}
if( p->pFileData ){
pBase = p->pFileData->pBaseWrite;
if(
p->op==ASYNC_CLOSE ||
p->op==ASYNC_OPENEXCLUSIVE ||
(pBase->pMethods && (p->op==ASYNC_SYNC || p->op==ASYNC_WRITE) )
){
async_mutex_leave(ASYNC_MUTEX_QUEUE);
holdingMutex = 0;
}
if( !pBase->pMethods ){
pBase = p->pFileData->pBaseRead;
}
}
switch( p->op ){
case ASYNC_NOOP:
break;
case ASYNC_WRITE:
assert( pBase );
ASYNC_TRACE(("WRITE %s %d bytes at %d\n",
p->pFileData->zName, p->nByte, p->iOffset));
rc = pBase->pMethods->xWrite(pBase, (void *)(p->zBuf), p->nByte, p->iOffset);
break;
case ASYNC_SYNC:
assert( pBase );
ASYNC_TRACE(("SYNC %s\n", p->pFileData->zName));
rc = pBase->pMethods->xSync(pBase, p->nByte);
break;
case ASYNC_TRUNCATE:
assert( pBase );
ASYNC_TRACE(("TRUNCATE %s to %d bytes\n",
p->pFileData->zName, p->iOffset));
rc = pBase->pMethods->xTruncate(pBase, p->iOffset);
break;
case ASYNC_CLOSE: {
AsyncFileData *pData = p->pFileData;
ASYNC_TRACE(("CLOSE %s\n", p->pFileData->zName));
if( pData->pBaseWrite->pMethods ){
pData->pBaseWrite->pMethods->xClose(pData->pBaseWrite);
}
if( pData->pBaseRead->pMethods ){
pData->pBaseRead->pMethods->xClose(pData->pBaseRead);
}
/* Unlink AsyncFileData.lock from the linked list of AsyncFileLock
** structures for this file. Obtain the async.lockMutex mutex
** before doing so.
*/
async_mutex_enter(ASYNC_MUTEX_LOCK);
rc = unlinkAsyncFile(pData);
async_mutex_leave(ASYNC_MUTEX_LOCK);
if( !holdingMutex ){
async_mutex_enter(ASYNC_MUTEX_QUEUE);
holdingMutex = 1;
}
assert_mutex_is_held(ASYNC_MUTEX_QUEUE);
async.pQueueFirst = p->pNext;
sqlite3_free(pData);
doNotFree = 1;
break;
}
case ASYNC_UNLOCK: {
AsyncWrite *pIter;
AsyncFileData *pData = p->pFileData;
int eLock = p->nByte;
/* When a file is locked by SQLite using the async backend, it is
** locked within the 'real' file-system synchronously. When it is
** unlocked, an ASYNC_UNLOCK event is added to the write-queue to
** unlock the file asynchronously. The design of the async backend
** requires that the 'real' file-system file be locked from the
** time that SQLite first locks it (and probably reads from it)
** until all asynchronous write events that were scheduled before
** SQLite unlocked the file have been processed.
**
** This is more complex if SQLite locks and unlocks the file multiple
** times in quick succession. For example, if SQLite does:
**
** lock, write, unlock, lock, write, unlock
**
** Each "lock" operation locks the file immediately. Each "write"
** and "unlock" operation adds an event to the event queue. If the
** second "lock" operation is performed before the first "unlock"
** operation has been processed asynchronously, then the first
** "unlock" cannot be safely processed as is, since this would mean
** the file was unlocked when the second "write" operation is
** processed. To work around this, when processing an ASYNC_UNLOCK
** operation, SQLite:
**
** 1) Unlocks the file to the minimum of the argument passed to
** the xUnlock() call and the current lock from SQLite's point
** of view, and
**
** 2) Only unlocks the file at all if this event is the last
** ASYNC_UNLOCK event on this file in the write-queue.
*/
assert( holdingMutex==1 );
assert( async.pQueueFirst==p );
for(pIter=async.pQueueFirst->pNext; pIter; pIter=pIter->pNext){
if( pIter->pFileData==pData && pIter->op==ASYNC_UNLOCK ) break;
}
if( !pIter ){
async_mutex_enter(ASYNC_MUTEX_LOCK);
pData->lock.eAsyncLock = MIN(
pData->lock.eAsyncLock, MAX(pData->lock.eLock, eLock)
);
assert(pData->lock.eAsyncLock>=pData->lock.eLock);
rc = getFileLock(pData->pLock);
async_mutex_leave(ASYNC_MUTEX_LOCK);
}
break;
}
case ASYNC_DELETE:
ASYNC_TRACE(("DELETE %s\n", p->zBuf));
rc = pVfs->xDelete(pVfs, p->zBuf, (int)p->iOffset);
if( rc==SQLITE_IOERR_DELETE_NOENT ) rc = SQLITE_OK;
break;
case ASYNC_OPENEXCLUSIVE: {
int flags = (int)p->iOffset;
AsyncFileData *pData = p->pFileData;
ASYNC_TRACE(("OPEN %s flags=%d\n", p->zBuf, (int)p->iOffset));
assert(pData->pBaseRead->pMethods==0 && pData->pBaseWrite->pMethods==0);
rc = pVfs->xOpen(pVfs, pData->zName, pData->pBaseRead, flags, 0);
assert( holdingMutex==0 );
async_mutex_enter(ASYNC_MUTEX_QUEUE);
holdingMutex = 1;
break;
}
default: assert(!"Illegal value for AsyncWrite.op");
}
/* If we didn't hang on to the mutex during the IO op, obtain it now
** so that the AsyncWrite structure can be safely removed from the
** global write-op queue.
*/
if( !holdingMutex ){
async_mutex_enter(ASYNC_MUTEX_QUEUE);
holdingMutex = 1;
}
/* ASYNC_TRACE(("UNLINK %p\n", p)); */
if( p==async.pQueueLast ){
async.pQueueLast = 0;
}
if( !doNotFree ){
assert_mutex_is_held(ASYNC_MUTEX_QUEUE);
async.pQueueFirst = p->pNext;
sqlite3_free(p);
}
assert( holdingMutex );
/* An IO error has occurred. We cannot report the error back to the
** connection that requested the I/O since the error happened
** asynchronously. The connection has already moved on. There
** really is nobody to report the error to.
**
** The file for which the error occurred may have been a database or
** journal file. Regardless, none of the currently queued operations
** associated with the same database should now be performed. Nor should
** any subsequently requested IO on either a database or journal file
** handle for the same database be accepted until the main database
** file handle has been closed and reopened.
**
** Furthermore, no further IO should be queued or performed on any file
** handle associated with a database that may have been part of a
** multi-file transaction that included the database associated with
** the IO error (i.e. a database ATTACHed to the same handle at some
** point in time).
*/
if( rc!=SQLITE_OK ){
async.ioError = rc;
}
if( async.ioError && !async.pQueueFirst ){
async_mutex_enter(ASYNC_MUTEX_LOCK);
if( 0==async.pLock ){
async.ioError = SQLITE_OK;
}
async_mutex_leave(ASYNC_MUTEX_LOCK);
}
/* Drop the queue mutex before continuing to the next write operation
** in order to give other threads a chance to work with the write queue.
*/
if( !async.pQueueFirst || !async.ioError ){
async_mutex_leave(ASYNC_MUTEX_QUEUE);
holdingMutex = 0;
if( async.ioDelay>0 ){
pVfs->xSleep(pVfs, async.ioDelay*1000);
}else{
async_sched_yield();
}
}
}
async_mutex_leave(ASYNC_MUTEX_WRITER);
return;
}
/*
** Install the asynchronous VFS.
*/
int sqlite3async_initialize(const char *zParent, int isDefault){
int rc = SQLITE_OK;
if( async_vfs.pAppData==0 ){
sqlite3_vfs *pParent = sqlite3_vfs_find(zParent);
if( !pParent || async_os_initialize() ){
rc = SQLITE_ERROR;
}else if( SQLITE_OK!=(rc = sqlite3_vfs_register(&async_vfs, isDefault)) ){
async_os_shutdown();
}else{
async_vfs.pAppData = (void *)pParent;
async_vfs.mxPathname = ((sqlite3_vfs *)async_vfs.pAppData)->mxPathname;
}
}
return rc;
}
/*
** Uninstall the asynchronous VFS.
*/
void sqlite3async_shutdown(void){
if( async_vfs.pAppData ){
async_os_shutdown();
sqlite3_vfs_unregister((sqlite3_vfs *)&async_vfs);
async_vfs.pAppData = 0;
}
}
/*
** Process events on the write-queue.
*/
void sqlite3async_run(void){
asyncWriterThread();
}
/*
** Control/configure the asynchronous IO system.
*/
int sqlite3async_control(int op, ...){
va_list ap;
va_start(ap, op);
switch( op ){
case SQLITEASYNC_HALT: {
int eWhen = va_arg(ap, int);
if( eWhen!=SQLITEASYNC_HALT_NEVER
&& eWhen!=SQLITEASYNC_HALT_NOW
&& eWhen!=SQLITEASYNC_HALT_IDLE
){
return SQLITE_MISUSE;
}
async.eHalt = eWhen;
async_mutex_enter(ASYNC_MUTEX_QUEUE);
async_cond_signal(ASYNC_COND_QUEUE);
async_mutex_leave(ASYNC_MUTEX_QUEUE);
break;
}
case SQLITEASYNC_DELAY: {
int iDelay = va_arg(ap, int);
if( iDelay<0 ){
return SQLITE_MISUSE;
}
async.ioDelay = iDelay;
break;
}
case SQLITEASYNC_LOCKFILES: {
int bLock = va_arg(ap, int);
async_mutex_enter(ASYNC_MUTEX_QUEUE);
if( async.nFile || async.pQueueFirst ){
async_mutex_leave(ASYNC_MUTEX_QUEUE);
return SQLITE_MISUSE;
}
async.bLockFiles = bLock;
async_mutex_leave(ASYNC_MUTEX_QUEUE);
break;
}
case SQLITEASYNC_GET_HALT: {
int *peWhen = va_arg(ap, int *);
*peWhen = async.eHalt;
break;
}
case SQLITEASYNC_GET_DELAY: {
int *piDelay = va_arg(ap, int *);
*piDelay = async.ioDelay;
break;
}
case SQLITEASYNC_GET_LOCKFILES: {
int *piDelay = va_arg(ap, int *);
*piDelay = async.bLockFiles;
break;
}
default:
return SQLITE_ERROR;
}
return SQLITE_OK;
}
#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_ASYNCIO) */
|
__label__pos
| 0.999084 |
File: make_dist
package info (click to toggle)
qpdf 8.0.2-2~bpo9+1
• links: PTS
• area: main
• in suites: stretch-backports
• size: 20,056 kB
• sloc: cpp: 29,209; perl: 6,102; xml: 5,043; sh: 3,932; ansic: 2,743; makefile: 92
file content (149 lines) | stat: -rwxr-xr-x 2,809 bytes parent folder | download
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
#!/usr/bin/env perl
#
# This program creates a source distribution of qpdf. For details,
# see README-maintainer.md.
#
require 5.008;
use warnings;
use strict;
use File::Basename;
use Cwd;
use Cwd 'abs_path';
use IO::File;
my $whoami = basename($0);
my $srcdir = basename(dirname($0));
my $pwd = getcwd();
usage() unless $pwd eq abs_path(dirname(dirname($0)));
my $run_tests = 1;
foreach my $arg (@ARGV)
{
if ($arg eq '--no-tests')
{
$run_tests = 0;
}
else
{
usage();
}
}
usage() unless $srcdir =~ m/^qpdf-(\d+\.\d+(?:\.(a|b|rc)?\d+)?)$/;
my $version = $1;
cd($srcdir);
# Check versions
my $fh = safe_open("configure.ac");
my $config_version = 'unknown';
while (<$fh>)
{
if (m/^AC_INIT\(\[qpdf\],\[([^\)]+)\]\)/)
{
$config_version = $1;
last;
}
}
$fh->close();
$fh = safe_open("libqpdf/QPDF.cc");
my $code_version = 'unknown';
while (<$fh>)
{
if (m/QPDF::qpdf_version = \"([^\"]+)\"/)
{
$code_version = $1;
last;
}
}
$fh->close();
$fh = safe_open("manual/qpdf-manual.xml");
my $doc_version = 'unknown';
while (<$fh>)
{
if (m/swversion "([^\"]+)\"/)
{
$doc_version = $1;
last;
}
}
$fh->close();
my $version_error = 0;
if ($version ne $config_version)
{
print "$whoami: configure.ac version = $config_version\n";
$version_error = 1;
}
if ($version ne $code_version)
{
print "$whoami: QPDF.cc version = $code_version\n";
$version_error = 1;
}
if ($version ne $doc_version)
{
print "$whoami: qpdf-manual.xml version = $doc_version\n";
$version_error = 1;
}
if ($version_error)
{
die "$whoami: version numbers are not consistent\n";
}
run("./autogen.sh");
run("./configure --enable-doc-maintenance --enable-werror");
run("make -j8 build_manual");
run("make distclean");
cd($pwd);
run("tar czvf $srcdir.tar.gz-candidate $srcdir");
if ($run_tests)
{
cd($srcdir);
run("./configure");
run("make -j8");
run("make check");
cd($pwd);
}
rename "$srcdir.tar.gz-candidate", "$srcdir.tar.gz" or die;
print "
Source distribution created as $srcdir.tar.gz
You can now remove $srcdir.
If this is a release, don't forget to tag the version control system and
make a backup of the release tar file.
";
sub safe_open
{
my $file = shift;
my $fh = new IO::File("<$file") or die "$whoami: can't open $file: $!";
$fh;
}
sub run
{
my $cmd = shift;
system($cmd) == 0 or die "$whoami: $cmd failed\n";
}
sub cd
{
my $dir = shift;
chdir($dir) or die;
}
sub usage
{
die "
Usage: $whoami [ --no-tests ]
$whoami must be run from the parent of a directory called
qpdf-<version> which must contain a pristine export of that version of
qpdf from the version control system. Use of --no-tests can be used
for internally testing releases, but do not use it for a real release.
";
}
|
__label__pos
| 0.908931 |
Is it a good idea to use Temporal to manage provisioning of its own workers and infrastructure?
I am in the process of implementing a new workflow system, and I am considering using Temporal to manage the provisioning of both the workers and the infrastructure they run on. However, I have been advised against it, and I would like to ask the community for their thoughts on this matter.
My initial thoughts were that since Temporal is designed to handle workflows, it would be a good fit for managing the provisioning of workers and the infrastructure they run on. I was considering using workflows and activities to drive tools like Terraform, Waypoint and other infrastructure provisioning tools.
However, I have been advised against using Temporal in this way for a few reasons. Mainly that it is generally not recommended to use the same system to manage itself.
I would like to know the community’s thoughts on this. Have you used Temporal to manage the provisioning of workers and infrastructure? What has been your experience? Do you have any advice or best practices to share?
I want to clarify my thought process a bit more. I have already created manager workflows to handle the creation of workflows through signals sent by our CLI or API. My idea is that, since Temporal allows us to code workflows as if failure doesn’t exist, even starting a workflow or a worker should be an activity. Starting these processes manually can introduce potential failure points, race conditions, and duplication. Instead, using Temporal’s activities to manage the provisioning of workers and infrastructure could provide a more seamless and reliable solution.
This approach makes a lot of sense. Using Temporal for infra provisioning is a very common use case. Companies like Datadog and Hashicorp rely on it.
1 Like
Thank you for your guidance.
Does it makes sense to add a future to the selector from within the receive callback function? I had assumed that not doing so would block the callback, but I’m not sure if it is an anti-pattern.
Is there an issue with accessing the campaign variable from within the future function?
workflow.Go(ctx, func(ctx workflow.Context) {
for {
selector := workflow.NewSelector(ctx)
selector.AddReceive(
workflow.GetSignalChannel(ctx, CampaignCreateSignal),
func(c workflow.ReceiveChannel, _ bool) {
var campaign Campaign
c.Receive(ctx, &campaign)
log := workflow.GetLogger(ctx)
var childID string
if err = workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
return fmt.Sprintf("campaign:%v", uuid.New().String())
}).Get(&childID); err != nil {
return
}
future := workflow.ExecuteChildWorkflow(
workflow.WithChildOptions(ctx, workflow.ChildWorkflowOptions{
WorkflowID: childID,
TaskQueue: CampaignTaskQueue,
}),
CampaignWorkflow,
campaign,
)
log.Info("Creating campaign workflow", campaign.Name)
selector.AddFuture(future, func(f workflow.Future) {
name := campaign.Name
log := workflow.GetLogger(ctx)
log.Info("Getting campaign workflow", name)
err := f.Get(ctx, nil)
if err != nil {
log.Error("Campaign creation failed", name, "Error", err)
return
}
})
})
selector.Select(ctx)
}
})
Check out the Await Signals sample for a simpler pattern.
|
__label__pos
| 0.991111 |
9 Replies Latest reply: Dec 3, 2012 5:32 AM by c.dellabruna RSS
Qlikviews handling of Strings as numbers
Ok I am having some issues with this.
I discovered this quite some time ago that there were part numbers that Qlikview was reading as a number when it should always be a string (For us). This was causing issues with linking and duplicate numbers and lost data:
Example:
Part:
8329488
08329488
Qlikview read both of these values as 08329488.
I discovered that this could be fixed by applying the text function to the field to force Qlikview to read this as text. Since this happens after the data is loaded this needed to be done in the load script.
I use QVD's across several of my dashboards. Since this happens the first time data is pulled into Qlikview I have been creating a full load for these tables and have applied the text function to every field that is classified as a string. I then store this table into the QVD for other dashboards.
There are then cases where I need to use this table several times, so I then just resident load this original load where I have applied this function.
Example:
TempPart:
Load
Text(stringfield) as stringfield,
numberfield,
datefield;
SQL Select
stringfield,
numberfield,
datefield
From ......part;
STORE TempPart into [\\...\Part.qvd(qvd)];
part:
LOAD
stringfield,
numberfield,
datefield
Resident TempPart;
part2:
LOAD
stringfield as Newstring,
numberfield as Newnumber,
datefield as Newdate
Resident TempPart where stringfield<>'A';
Drop Table TempPart;
Ok so now finally to my issue. I was assuming that the original Load with the temp table would apply text to this field anywhere that this field is referenced(IE resident loaded). I have been looking at my data and this is not happening.
Do I really have to apply the Text() function EVERY place that I load this field?
As I said this is causing major issues with duplicate data and incorrect linking.
Can anyone shed some light on this for me?
EDIT:
I left out something because I thought it was unrelated. Our system requires preserving leading spaces. In order to keep these I have also used Set Verbatim='1';
I then also apply rtrim() to every text field to drop only the trailing spaces. (This was a TON of work to do to every table that I call in every dashboard hence the frustration)
I was able to at least get the Load section created for every table using a field output from SQL in Excel using formulas to check the field type. An example would be:
rtrim(text(field)) as field,
What it now seems to be is that Qlikview is ignoring the text function when it is inside of the rtrim function....Do I really need to rewrite this EVERYWHERE that it is used to text(rtrim(field)) ???
Sorry for any hostility as I said this is extremely frustrating as I spend around a week going through and rewriting these scripts to use these functions.
|
__label__pos
| 0.591813 |
memtestosx vs. ubuntu's memtest
Discussion in 'Mac Basics and Help' started by MarsianMan, Feb 23, 2009.
1. MarsianMan macrumors member
Joined:
Oct 19, 2007
#1
I ran Ubuntu's bundled memtest86 and had it error but ran memtestosx (in single user mode) without any errors. Can anyone tell me which version I should pay attention to and why there is a discrepancy?
(Run from an early 2008 MacBook Pro)
2. BlueRevolution macrumors 603
BlueRevolution
Joined:
Jul 26, 2004
Location:
Montreal, QC
#2
I'd use Apple Hardware Test as the definitive authority on the subject.
3. MarsianMan thread starter macrumors member
Joined:
Oct 19, 2007
#3
Meh, that seems silly. Apple Hardware Test requires too much graphics to be boot up. How can it test the entire memory if a significant portion is being used by the OS? Besides which, a simple google search shows people with memory problems that pass the Hardware Test.
Thanks for trying to help. :)
4. BlueRevolution macrumors 603
BlueRevolution
Joined:
Jul 26, 2004
Location:
Montreal, QC
#4
Um... what? Apple Hardware Test is extremely lightweight as it hasn't been updated in years. :rolleyes: Memtest OS X, on the other hand, runs on top of the operating system, which is using a significant amount of memory.
5. MarsianMan thread starter macrumors member
Joined:
Oct 19, 2007
#5
Not if you run it from single user mode. And not being updated in years is /not/ a good thing.
Quoted from the Mac Rumors' Testing RAM guide
Share This Page
|
__label__pos
| 0.859848 |
summaryrefslogtreecommitdiffstats
path: root/kpovmodeler/pmsqe.cpp
blob: 612792517bde6e4951e05f56281e664b062cb2b9 (plain)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
/*
**************************************************************************
description
--------------------
copyright : (C) 2002 by Andreas Zehender
email : [email protected]
**************************************************************************
**************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
**************************************************************************/
#include "pmsqe.h"
#include "pmxmlhelper.h"
#include "pmsqeedit.h"
#include "pmmemento.h"
#include "pmviewstructure.h"
#include "pmdefaults.h"
#include "pmmath.h"
#include <klocale.h>
const double c_defaultEastWestExponent = 1.0;
const double c_defaultNorthSouthExponent = 1.0;
PMViewStructure* PMSuperquadricEllipsoid::s_pDefaultViewStructure = 0;
int PMSuperquadricEllipsoid::s_vStep = c_defaultSuperquadricEllipsoidVSteps;
int PMSuperquadricEllipsoid::s_uStep = c_defaultSuperquadricEllipsoidUSteps;
int PMSuperquadricEllipsoid::s_parameterKey = 0;
PMDefinePropertyClass( PMSuperquadricEllipsoid, PMSuperquadricEllipsoidProperty );
PMMetaObject* PMSuperquadricEllipsoid::s_pMetaObject = 0;
PMObject* createNewSuperquadricEllipsoid( PMPart* part )
{
return new PMSuperquadricEllipsoid( part );
}
PMSuperquadricEllipsoid::PMSuperquadricEllipsoid( PMPart* part )
: Base( part )
{
m_eastWestExponent = c_defaultEastWestExponent;
m_northSouthExponent = c_defaultNorthSouthExponent;
}
PMSuperquadricEllipsoid::PMSuperquadricEllipsoid( const PMSuperquadricEllipsoid& s )
: Base( s )
{
m_eastWestExponent = s.m_eastWestExponent;
m_northSouthExponent = s.m_northSouthExponent;
}
PMSuperquadricEllipsoid::~PMSuperquadricEllipsoid( )
{
}
TQString PMSuperquadricEllipsoid::description( ) const
{
return i18n( "superquadric ellipsoid" );
}
void PMSuperquadricEllipsoid::serialize( TQDomElement& e, TQDomDocument& doc ) const
{
e.setAttribute( "value_e", m_eastWestExponent );
e.setAttribute( "value_n", m_northSouthExponent );
Base::serialize( e, doc );
}
void PMSuperquadricEllipsoid::readAttributes( const PMXMLHelper& h )
{
m_eastWestExponent = h.doubleAttribute( "value_e", c_defaultEastWestExponent );
m_northSouthExponent = h.doubleAttribute( "value_n", c_defaultNorthSouthExponent );
Base::readAttributes( h );
}
PMMetaObject* PMSuperquadricEllipsoid::metaObject( ) const
{
if( !s_pMetaObject )
{
s_pMetaObject = new PMMetaObject( "SuperquadricEllipsoid", Base::metaObject( ),
createNewSuperquadricEllipsoid );
s_pMetaObject->addProperty(
new PMSuperquadricEllipsoidProperty( "eastWestExponent",
&PMSuperquadricEllipsoid::setEastWestExponent,
&PMSuperquadricEllipsoid::eastWestExponent ) );
s_pMetaObject->addProperty(
new PMSuperquadricEllipsoidProperty( "northSouthExponent",
&PMSuperquadricEllipsoid::setNorthSouthExponent,
&PMSuperquadricEllipsoid::northSouthExponent ) );
}
return s_pMetaObject;
}
void PMSuperquadricEllipsoid::setEastWestExponent( double e )
{
if( e != m_eastWestExponent )
{
if( m_pMemento )
m_pMemento->addData( s_pMetaObject, PMEastWestExponentID,
m_eastWestExponent );
if( e < 0.001 )
{
kdError( PMArea ) << "EastWestExponent < 0.001 in PMSuperquadricEllipsoid::setEastWestExponent\n";
e = 0.001;
}
m_eastWestExponent = e;
setViewStructureChanged( );
}
}
void PMSuperquadricEllipsoid::setNorthSouthExponent( double n )
{
if( n != m_northSouthExponent )
{
if( m_pMemento )
m_pMemento->addData( s_pMetaObject, PMNorthSouthExponentID,
m_northSouthExponent );
if( n < 0.001 )
{
kdError( PMArea ) << "NorthSouthExponent < 0.001 in PMSuperquadricEllipsoid::setNorthSouthExponent\n";
n = 0.001;
}
m_northSouthExponent = n;
setViewStructureChanged( );
}
}
PMDialogEditBase* PMSuperquadricEllipsoid::editWidget( TQWidget* parent ) const
{
return new PMSuperquadricEllipsoidEdit( parent );
}
void PMSuperquadricEllipsoid::restoreMemento( PMMemento* s )
{
PMMementoDataIterator it( s );
PMMementoData* data;
for( ; it.current( ); ++it )
{
data = it.current( );
if( data->objectType( ) == s_pMetaObject )
{
switch( data->valueID( ) )
{
case PMEastWestExponentID:
setEastWestExponent( data->doubleData( ) );
break;
case PMNorthSouthExponentID:
setNorthSouthExponent( data->doubleData( ) );
break;
default:
kdError( PMArea ) << "Wrong ID in PMSuperquadricEllipsoid::restoreMemento\n";
break;
}
}
}
Base::restoreMemento( s );
}
bool PMSuperquadricEllipsoid::isDefault( )
{
if( ( m_eastWestExponent == c_defaultEastWestExponent ) &&
( m_northSouthExponent == c_defaultNorthSouthExponent )
&& globalDetail( ) )
return true;
return false;
}
void PMSuperquadricEllipsoid::createViewStructure( )
{
if( !m_pViewStructure )
{
m_pViewStructure = new PMViewStructure( defaultViewStructure( ) );
m_pViewStructure->points( ).detach( );
}
int uStep = (int)( ( (float)s_uStep / 2 ) * ( displayDetail( ) + 1 ) );
int vStep = (int)( ( (float)s_vStep / 2 ) * ( displayDetail( ) + 1 ) );
int uStep2 = uStep * 4;
int vStep2 = vStep * 8;
unsigned ptsSize = vStep2 * ( uStep2 - 1 ) + 2;
unsigned lineSize = vStep2 * ( uStep2 - 1 ) * 2 + vStep2;
if( ptsSize != m_pViewStructure->points( ).size( ) )
m_pViewStructure->points( ).resize( ptsSize );
createPoints( m_pViewStructure->points( ), m_eastWestExponent,
m_northSouthExponent, uStep, vStep );
if( lineSize != m_pViewStructure->lines( ).size( ) )
{
m_pViewStructure->lines( ).detach( );
m_pViewStructure->lines( ).resize( lineSize );
createLines( m_pViewStructure->lines( ), uStep2, vStep2 );
}
}
PMViewStructure* PMSuperquadricEllipsoid::defaultViewStructure( ) const
{
if( !s_pDefaultViewStructure || s_pDefaultViewStructure->parameterKey( ) != viewStructureParameterKey( ) )
{
delete s_pDefaultViewStructure;
s_pDefaultViewStructure = 0;
int uStep = (int)( ( (float)s_uStep / 2 ) * ( globalDetailLevel( ) + 1 ) );
int vStep = (int)( ( (float)s_vStep / 2 ) * ( globalDetailLevel( ) + 1 ) );
// transform u and v steps to sphere u/v steps
int uStep2 = uStep * 4;
int vStep2 = vStep * 8;
s_pDefaultViewStructure =
new PMViewStructure( vStep2 * ( uStep2 - 1 ) + 2,
vStep2 * ( uStep2 - 1 ) * 2 + vStep2 );
// points
createPoints( s_pDefaultViewStructure->points( ),
c_defaultEastWestExponent, c_defaultNorthSouthExponent, uStep, vStep );
createLines( s_pDefaultViewStructure->lines( ), uStep2, vStep2 );
}
return s_pDefaultViewStructure;
}
void PMSuperquadricEllipsoid::createLines( PMLineArray& lines, int uStep, int vStep )
{
int u, v;
int offset = 0;
// horizontal lines
for( u = 0; u < ( uStep - 1 ); u++ )
{
for( v = 0; v < ( vStep - 1 ); v++ )
lines[offset + v] =
PMLine( u * vStep + v + 1, u * vStep + v + 2 );
lines[offset + vStep - 1] =
PMLine( u * vStep + 1, u * vStep + vStep );
offset += vStep;
}
// vertical lines
// lines at the "north pole"
for( v = 0; v < vStep; v++ )
lines[offset + v] = PMLine( 0, v + 1 );
offset += vStep;
for( v = 0; v < vStep; v++ )
{
for( u = 0; u < ( uStep - 2 ); u++ )
{
lines[offset + u] =
PMLine( u * vStep + v + 1, ( u + 1 ) * vStep + v + 1 );
}
offset += ( uStep - 2 );
}
// lines at the "south pole"
for( v = 0; v < vStep; v++ )
lines[offset + v] = PMLine( ( uStep - 2 ) * vStep + v + 1,
( uStep - 1 ) * vStep + 1 );
// offset += vStep;
}
void PMSuperquadricEllipsoid::createPoints( PMPointArray& points,
double e, double n, int uStep, int vStep )
{
int u, v;
int zi;
int pbase = 0, pref = 0;
if( e <= 0.001 )
e = 0.001;
if( n <= 0.001 )
n = 0.001;
double c2_e = 2.0 / e;
double c2_n = 2.0 / n;
double cn_2 = n / 2.0;
double ce_2 = e / 2.0;
double cn_e = n / e;
// double ce_n = e / n;
double z = 0.0, c = 0.0, a = 0.0, a2 = 0.0, x = 0.0, y = 0.0;
double k = 0.0, k2 = 0.0, du = 0.0, dv = 0.0;
PMPoint p;
points[0] = PMPoint( 0, 0, 1 );
pbase++;
for( zi = 0; zi < 2; zi++ )
{
for( u = 0; u < uStep; u++ )
{
du = ( double ) ( u + 1 ) / ( double ) uStep;
if( zi == 1 )
du = 1.0 - du;
k = tan( M_PI / 4.0 * pow( du, n < 1.0 ? n : sqrt( n ) ) );
k2 = 1 / ( pow( k, c2_n ) + 1 );
z = pow( k2, cn_2 );
if( zi == 1 )
z *= k;
c = pow( 1 - pow( z, c2_n ), cn_e );
for( v = 0; v < ( vStep + 1 ); v++ )
{
dv = ( double ) v / ( double ) vStep;
a = tan( M_PI / 4.0 * pow( dv, e < 1.0 ? e : sqrt( e ) ) );
a2 = 1 + pow( a, c2_e );
x = pow( c / a2, ce_2 );
y = x * a;
points[pbase+v] = PMPoint( x, y, z );
}
// 1/8
pref = pbase + 2 * vStep;
for( v = 0; v < vStep; v++, pref-- )
{
p = points[pbase+v];
x = p[0];
p[0] = p[1];
p[1] = x;
points[pref] = p;
}
// 1/4
pref = pbase + 4 * vStep;
for( v = 0; v < ( 2 * vStep ); v++, pref-- )
{
p = points[pbase+v];
p[0] = -p[0];
points[pref] = p;
}
// 1/2
pref = pbase + 8 * vStep - 1;
for( v = 1; v < ( 4 * vStep ); v++, pref-- )
{
p = points[pbase+v];
p[1] = -p[1];
points[pref] = p;
}
pbase += 8 * vStep;
}
}
for( u = 0; u < ( uStep * 2 - 1 ); u++ )
{
pbase = 1 + u * vStep * 8;
pref = 1 + ( uStep * 4 - 2 - u ) * vStep * 8;
for( v = 0; v < ( vStep * 8 ); v++, pref++ )
{
p = points[pbase + v];
p[2] = -p[2];
points[pref] = p;
}
}
points[ vStep * 8 * ( uStep * 4 - 1 ) + 1 ] = PMPoint( 0, 0, -1 );
}
void PMSuperquadricEllipsoid::setUSteps( int u )
{
if( u >= 2 )
{
s_uStep = u;
if( s_pDefaultViewStructure )
{
delete s_pDefaultViewStructure;
s_pDefaultViewStructure = 0;
}
}
else
kdDebug( PMArea ) << "PMSuperquadricEllipsoid::setUSteps: U must be greater than 1\n";
s_parameterKey++;
}
void PMSuperquadricEllipsoid::setVSteps( int v )
{
if( v >= 2 )
{
s_vStep = v;
if( s_pDefaultViewStructure )
{
delete s_pDefaultViewStructure;
s_pDefaultViewStructure = 0;
}
}
else
kdDebug( PMArea ) << "PMSuperquadricEllipsoid::setVSteps: V must be greater than 1\n";
s_parameterKey++;
}
void PMSuperquadricEllipsoid::cleanUp( ) const
{
if( s_pDefaultViewStructure )
delete s_pDefaultViewStructure;
s_pDefaultViewStructure = 0;
if( s_pMetaObject )
{
delete s_pMetaObject;
s_pMetaObject = 0;
}
Base::cleanUp( );
}
|
__label__pos
| 0.986869 |
10
$\begingroup$
The category of fibrations in a combinatorial model category is accessible, accessibly embedded in the arrow category. How about the cofibrations?
More generally, let $C$ be a locally presentable category and let $(L,R)$ be a weak factorization system on $C$. If $(L,R)$ is cofibrantly-generated (i.e. there is a set $I \subseteq L$ such that $R$ consists precisely of those morphisms with the right lifting property with respect to $I$), then $R$, considered as a full subcategory of $C^\to$, is accessible and accessibly embedded.
Question 1: Suppose that $(L,R)$ is cofibrantly-generated. Is $L$ accessible and accessibly embedded (as a full subcategory of $C^\to$)?
Question 2: Conversely, if $L$ is accessible and accessibly embedded, then is $(L,R)$ cofibrantly-generated?
Question 3: Similar to the above two, but use the notion of "small-generated" coming from Garner's small object argument (where $I$ can be a category rather than a set).
The proof that $R$ is accessible and accessibly embedded is not completely straightforward: it relies on the fact that the small object argument provides a functorial factorization system which preserves $\lambda$-filtered colimits and $\lambda$-presentable objects for some $\lambda$ to exhibit every $R$-morphism as a retract of a colimit of $\lambda$-presentable $R$-morphisms and to see that fibrations are closed under $\lambda$-filtered colimits.
The fact that $L$ is closed under transfinite composition sounds tantalizingly close to saying that it is closed under filtered colimits, but I'm not sure the latter is actually true.
Motivation: If the answer to both questions is yes, then it becomes very easy to prove Jeff Smith's theorem since an intersection of accessible, accessibly-embedded, replete subcategories is accessible and accessibly-embedded.
$\endgroup$
6
• 1
$\begingroup$ Do you know if "L accessible and accessibly embedded" follow from having an accessible factorization system (I.e there is a functorial choice of factorization, given by an accessible functor). That's sound reasonable and if it is true the answer are yes and no: cofibrantly generated wfs are accessible because of the factorization given by (Garner's version of) the small object argument, and I know some exemple of accessible wfs that are not cofibrantly generated. $\endgroup$ Jun 29, 2018 at 14:44
• $\begingroup$ Read: "Cofibrantly generated wfs in presentable categories are..." $\endgroup$ Jun 29, 2018 at 14:50
• 1
$\begingroup$ @SimonHenry I see. I misread Theorem 4.3 in Rosicky's Accessible model categories, eliding the difference between small-generated and cofibrantly-generated. I would really like to believe that "L accessible and accessibly-embedded" is equivalent having an accessible wfs, but I don't know if either implication holds. I suppose Rosicky's Theorem 5.3 is a version of Jeff Smith's theorem along the lines I'm suggesting, although as he remarks, it's not clear if it's optimal. $\endgroup$
– Tim Campion
Jun 29, 2018 at 14:54
• 1
$\begingroup$ Take a look at Lemma 2.11 in arxiv.org/abs/1802.09889. This doesn't answer your question, but is relevant. $\endgroup$ Jun 30, 2018 at 5:26
• $\begingroup$ @TimCampion: So, is it correct to say (based on my understanding of the answers given below) that we still do not know any examples of combinatorial model categories where the class of cofibrations is not accessible? $\endgroup$ Jul 30, 2021 at 20:51
2 Answers 2
9
$\begingroup$
A cofibrantly generated $(L,R)$ does not need to have $L$ accessible, see Example 3.5 in my paper "On combinatorial model categories." Also, $L$ accessible does not imply that $(L,R)$ is cofibrantly generated, even accessible. Take regular monos in Boolean algebras. This $L$ is accessible but $(L,R)$ cannot be accessible because regular injectives are complete Boolean algebras which are not accessible.
$\endgroup$
4
• $\begingroup$ Thanks! The second example in Example 3.5 -- the independence of the accessibility of free abelian groups -- is even one that I knew at some point, and was trying to remember -- I looked for it in your On projectivity in locally presentable categories but didn't find it there. Apparently the original reference is 5.5.1 in Makkai and Pare. $\endgroup$
– Tim Campion
Jun 30, 2018 at 15:00
• $\begingroup$ Just for completeness, let me record the first example in Example 3.5. The split monos in the category of posets form the cofibrant closure of the split monos between finite posets, but they are not closed under $\lambda$-filtered colimits for any $\lambda$, and so not accessibly embedded. There is no dependence on set theory in this example. $\endgroup$
– Tim Campion
Jul 1, 2018 at 16:14
• $\begingroup$ I'm suddenly doubtful. Let $S$ be a set, regarded as a discrete poset, let $S_1$ be $S$ with a top element added, and let $S_2 = S_1 \cup_S S_1$. Then the inclusion $S_1 \to S_2$ is a split mono. But if $S$ is infinite, I think that $S_1 \to S_2$ is not cellular in split monos between finite posets (and I believe the cellular maps in split monos between finite posets are closed under retracts). $\endgroup$
– Tim Campion
Jul 1, 2018 at 18:39
• $\begingroup$ After all, transfinite composition doesn't buy us anything here, since we're only adding one element. So we would have to exhibit $S_1 \to S_2$ as the pushout of a split mono between finite posets. But by construction, any pushout of a split mono between finite posets which factors $S_1 \to S_2$ can only make finitely many of the elements of $S$ lie below the new top element, so we cannot achieve $S_1 \to S_2$. $\endgroup$
– Tim Campion
Jul 1, 2018 at 18:47
1
$\begingroup$
Here's an elaboration on the example in Professor Rosický's paper. I'll make it community-wiki.
Let $Pos$ be the category of posets, and let $L$ be the class of split monomorphisms in $Pos$. Let $L_\omega$ be the set of split monomorphisms between finite posets.
Claim 1: $L$ is the cofibrant closure of $L_\omega$.
Proof: One can check that in any category the class of split monomorphisms is closed under coproduct, cobase-change, transfinite composition, and retracts. Conversely, if $P \to Q$ is a split mono, one can add the elements of $Q$ one at a time in a chain, so we may assume without loss of generality that $Q$ has only one element $q$ which is not in $P$. Now we may express $P \to Q$ as the colimit of a chain, each link of which adds one relation $p \leq q$ or $q \leq p$ for some $p \in P$. Each of these links is a pushout by a split mono between 2-element posets. I'm not sure how do do this!
Claim 2: $L$ is not closed in $Pos^{\to}$ under $\lambda$-filtered colimits for any $\lambda$.
*Proof:** The closure of $L$ under $\lambda$-filtered colimits consists of the $\lambda$-pure monomorphisms in $Pos$. So we just need an example of a $\lambda$-pure monomorphism which doesn't split, for each regular cardinal $\lambda$. The inclusion $\lambda \to \lambda+1$ fits the bill -- see Example 2.28(3) in Adamek and Rosicky's Locally Presentable and Accessible Categories.
Thus $L$ is cofibrantly generated, but not accessibly embedded.
In the other direction, I don't know a source for Professor Rosický's claim that regular monos in Boolean algebras are a counterexample. But I'm pretty sure that in any locally presentable category, both (epi, strong mono) and (strong epi, mono) are accessible orthogonal factorization systems. And Example 4.4(2) in the same book says that complete Boolean algebras are the injective objects in the category of distributive lattices, citing
Banaschewski, B. and G. Bruns (1967): Categorical characterization of MacNellie completion. Arch. Math. 18, 369-377.
I think it's well-known that complete Boolean algebras don't form an accessible category. To show this it suffices to construct a Boolean algebra of cardinality $\kappa$ which is $\kappa$-complete but not $\kappa^+$ complete, for arbitrarily large $\kappa$. The set of $<\kappa$-sized subsets of a set of size $\kappa$ works (where $\kappa$ is regular).
$\endgroup$
3
• $\begingroup$ Tim, you are right. Split monos in posets are not cofibrantly generated. The resulting weak factorization system is only accessible. This follows from 1.6 in my joint paper with Adámek, Herrlich and Tholen "Weak factorization systems and topological functors". So, we are missing an example in ZFC. Concerning regular monos in Boolean algebras, you gave a proof. This and other examples can be found in my recent paper On the uniqueness of cellular injectives. $\endgroup$ Jul 2, 2018 at 7:38
• $\begingroup$ @JiříRosický Thanks, that's helpful. I see that 1.6 in "Weak factorization systems and topological functors" implies that split monos in posets are the left half of a weak factorization system, but why is it accessible? $\endgroup$
– Tim Campion
Jul 2, 2018 at 12:57
• 1
$\begingroup$ The factorization of $f:A\to B$ is $A\to A\times B\to B$ and $colim (A_i\times B_i)\cong (colim A_i)\times (colim B_i)$. $\endgroup$ Jul 2, 2018 at 13:54
You must log in to answer this question.
Not the answer you're looking for? Browse other questions tagged .
|
__label__pos
| 0.756169 |
IDL
CREATE_STRUCT
CREATE_STRUCT
The CREATE_STRUCT function creates a structure given pairs of tag names and values. CREATE_STRUCT can also be used to concatenate structures.
Examples
To create the anonymous structure { A: 1, B: 'xxx'} in the variable P, enter:
p = CREATE_STRUCT('A', 1, 'B', 'xxx')
To add the fields “FIRST” and “LAST” to the structure, enter the following:
p = CREATE_STRUCT('FIRST', 0, p, 'LAST', 3)
The resulting structure contains { FIRST: 0, A: 1, B: 'xxx', LAST: 3}.
Finally, consider the following statements:
s1 = {Struct1, Tag1:'AAA', Tag2:'BBB'}
s2 = {Struct2, TagA:100, TagB:200}
s3 = CREATE_STRUCT(NAME='Struct3', ['A','B','C'], 1, 2, s1, s2)
Here, the variable s3 contains the following named structure:
{Struct3, A: 1, B: 2, C:{Struct1, Tag1: 'AAA', Tag2: 'BBB'}, TagA: 100, TagB: 200}
Note that the value of s3.C is itself a “Struct1” structure, since the structure variable s1 was interpreted as a Values argument, whereas the structure variable s2 was interpreted as a Structures argument, thus including the tags from the “Struct2” structure directly in the new structure.
Syntax
Result = CREATE_STRUCT( [Tag1, Values1, ..., Tagn, Valuesn] [, Structuresn] [, NAME=string])
or
Result = CREATE_STRUCT( [Tags, Values1, ..., Valuesn][, Structuresn] [, NAME=string])
Return Value
Returns a structure composed of given pairs of tag names and values.
Arguments
Tags
The structure tag names. Tag names may be specified either as scalar strings or a single string array. If scalar strings are specified, values alternate with the tag names. If a string array is provided, values must still be specified individually. Tag names must be enclosed in quotes. Tag names may not be IDL Reserved Words, and must be unique within a given structure, although the same tag name can be used in more than one structure. Tag names follow the rules of IDL identifiers: they must begin with a letter; following characters can be letters, digits, or the underscore or dollar sign characters; and case is ignored.
Note: If a tag name contains spaces, CREATE_STRUCT will replace the spaces with underscores. For example, if you specify a tag name of 'my tag', the tag will be created with the name 'my_tag'.
Values
The values for the structure fields. The number of Values arguments must match the number of Tags arguments (if tags are specified as scalar strings) or the number of elements of the Tags array (if tags are specified as a single array.)
Structures
One or more existing structure variables whose tags and values will be inserted into the new structure. When concatenating structures in this manner, the following rules apply:
• All tag names, whether specified via the Tags argument or in an existing structure variable, must be unique.
• Names of named structures included via the Structures arguments are not used in the newly-created structure.
• Structures arguments can be interspersed with groups of Tags and Values arguments in the call to CREATE_STRUCT. Use caution, however, to ensure that the number of Tags and Values in each group are equal, to avoid inserting a structure variable as the value of a single tag when you mean to include the structure’s data as individual tags and values.
Keywords
NAME
To create a named structure, set this keyword equal to a string specifying the structure name. If this keyword is not present, an anonymous structure is created. Structure names must begin with a letter; following characters can be letters, digits, or the underscore or dollar sign characters; and case is ignored.
If NAME is specified and no plain arguments (tags, values, or structures) are present, then CREATE_STRUCT will return a structure of known type, either from IDL's internal table of already known named structures or by locating the appropriate __define.pro file for that structure in the current IDL search path (!PATH) and executing it. Hence, the following IDL statements are equivalent:
Result = { mystruct }
Result = CREATE_STRUCT(NAME='mystruct')
The CREATE_STRUCT version can be convenient in situations where the name of the structure is computed at runtime, and the EXECUTE function is not available (e.g., code running in the free IDL Virtual Machine environment, in which EXECUTE is disallowed).
Version History
Pre 4.0
Introduced
6.1
Added NAME keyword
See Also
IDL_VALIDNAME, N_TAGS, TAG_NAMES.
Notes
This page has no user notes yet. Be the first one!
This information is not subject to the controls of the International Traffic in Arms Regulations (ITAR) or the Export Administration Regulations (EAR). However, it may be restricted from transfer to various embargoed countries under U.S. laws and regulations.
© 2014 Exelis Visual Information Solutions
|
__label__pos
| 0.978301 |
9217857fc634efd7c7295db447218b7adef68dd7
[mirror_edk2.git] / Tools / Java / Source / MigrationTools / org / tianocore / migration / MsaOwner.java
1 /** @file
2
3 Copyright (c) 2006, Intel Corporation
4 All rights reserved. This program and the accompanying materials
5 are licensed and made available under the terms and conditions of the BSD License
6 which accompanies this distribution. The full text of the license may be found at
7 http://opensource.org/licenses/bsd-license.php
8
9 THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
10 WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
11
12 **/
13 package org.tianocore.migration;
14
15 import java.io.BufferedWriter;
16 import java.io.FileWriter;
17 import java.util.*;
18
19 import org.apache.xmlbeans.XmlOptions;
20 import org.tianocore.*;
21 import org.tianocore.SupportedArchitectures.Enum;
22
23 public class MsaOwner {
24 public static final String COPYRIGHT = "Copyright (c) 2006, Intel Corporation";
25 public static final String VERSION = "1.0";
26 public static final String ABSTRACT = "Component name for module ";
27 public static final String DESCRIPTION = "FIX ME!";
28 public static final String LICENSE = "All rights reserved.\n" +
29 " This software and associated documentation (if any) is furnished\n" +
30 " under a license and may only be used or copied in accordance\n" +
31 " with the terms of the license. Except as permitted by such\n" +
32 " license, no part of this software or documentation may be\n" +
33 " reproduced, stored in a retrieval system, or transmitted in any\n" +
34 " form or by any means without the express written consent of\n" +
35 " Intel Corporation.";
36 public static final String SPECIFICATION = "FRAMEWORK_BUILD_PACKAGING_SPECIFICATION 0x00000052";
37
38 public static final Enum IA32 = SupportedArchitectures.IA_32;
39 public static final Enum X64 = SupportedArchitectures.X_64;
40 public static final Enum IPF = SupportedArchitectures.IPF;
41 public static final Enum EBC = SupportedArchitectures.EBC;
42
43 private ModuleSurfaceAreaDocument msadoc = ModuleSurfaceAreaDocument.Factory.newInstance();
44
45 private ModuleSurfaceAreaDocument.ModuleSurfaceArea msa = null;
46 private MsaHeaderDocument.MsaHeader msaheader = null;
47 private LicenseDocument.License license = null;
48 private ModuleDefinitionsDocument.ModuleDefinitions moduledefinitions = null;
49 private SourceFilesDocument.SourceFiles sourcefiles = null; //found local .h files are not written
50 private GuidsDocument.Guids guids = null;
51 private ProtocolsDocument.Protocols protocols = null;
52 private PPIsDocument.PPIs ppis = null;
53 private PackageDependenciesDocument.PackageDependencies packagedependencies = null;
54 private LibraryClassDefinitionsDocument.LibraryClassDefinitions libclassdefs = null;
55 private ExternsDocument.Externs externs = null;
56
57 private List<Enum> listarch = new ArrayList<Enum>();
58 //private Map<String, Enum> mapfilenames = new HashMap<String, Enum>(); //this need to be installed manually when msa is to be written
59 //private Map<String, UsageTypes.Enum> mapprotocols = new HashMap<String, UsageTypes.Enum>();
60
61 //-----------------------------msaheader-------------------------------------//
62
63 public final boolean addLibraryClass (String name, UsageTypes.Enum usage) {
64 Iterator<LibraryClassDocument.LibraryClass> classit = libclassdefs.getLibraryClassList().iterator();
65 while (classit.hasNext()) {
66 if (classit.next().getKeyword() == name) {
67 MigrationTool.ui.println ("Warning: Duplicate LibraryClass");
68 return false;
69 }
70 }
71
72 LibraryClassDocument.LibraryClass classname;
73 classname = libclassdefs.addNewLibraryClass();
74 classname.setKeyword(name);
75 classname.setUsage(usage);
76 return true;
77 }
78
79 public final boolean addGuid (String guidname, UsageTypes.Enum usage) {
80 if (guids == null) {
81 guids = msa.addNewGuids();
82 }
83
84 Iterator<GuidsDocument.Guids.GuidCNames> guidit = guids.getGuidCNamesList().iterator();
85 while (guidit.hasNext()) {
86 if (guidit.next().getGuidCName() == guidname) {
87 MigrationTool.ui.println ("Warning: Duplicate Guid");
88 return false;
89 }
90 }
91
92 GuidsDocument.Guids.GuidCNames guid;
93 guid = guids.addNewGuidCNames();
94 guid.setGuidCName(guidname);
95 guid.setUsage(usage);
96 return true;
97 }
98
99
100 public final boolean addPpi (String ppiname, UsageTypes.Enum usage) {
101 if (ppis == null) {
102 ppis = msa.addNewPPIs();
103 }
104
105 Iterator<PPIsDocument.PPIs.Ppi> ppiit = ppis.getPpiList().iterator();
106 while (ppiit.hasNext()) {
107 if (ppiit.next().getPpiCName() == ppiname) {
108 MigrationTool.ui.println ("Warning: Duplicate Ppi");
109 return false;
110 }
111 }
112
113 PPIsDocument.PPIs.Ppi ppi;
114 ppi = ppis.addNewPpi();
115 ppi.setPpiCName(ppiname);
116 ppi.setUsage(usage);
117 return true;
118 }
119
120 /*
121 private final boolean installProtocols () {
122 if (mapprotocols.isEmpty()) {
123 return false;
124 }
125 Set<String> setprotocols = mapprotocols.keySet();
126 ProtocolsDocument.Protocols.Protocol protocol;
127 Iterator<String> it = setprotocols.iterator();
128 while (it.hasNext()) {
129 protocol = protocols.addNewProtocol();
130 protocol.setProtocolCName(it.next());
131 protocol.setUsage(mapprotocols.get(protocol.getProtocolCName()));
132 }
133 return true;
134 }
135
136 public final boolean addProtocols (String protocol, UsageTypes.Enum usage) {
137 if (mapprotocols.containsKey(protocol)) {
138 return false;
139 } else {
140 mapprotocols.put(protocol, usage);
141 return true;
142 }
143 }
144 */
145 public final boolean addProtocol (String proname, UsageTypes.Enum usage) {
146 if (protocols == null) {
147 protocols = msa.addNewProtocols();
148 }
149
150 Iterator<ProtocolsDocument.Protocols.Protocol> proit = protocols.getProtocolList().iterator();
151 while (proit.hasNext()) {
152 if (proit.next().getProtocolCName() == proname) {
153 MigrationTool.ui.println ("Warning: Duplicate Protocol");
154 return false;
155 }
156 }
157
158 ProtocolsDocument.Protocols.Protocol protocol;
159 protocol = protocols.addNewProtocol();
160 protocol.setProtocolCName(proname);
161 protocol.setUsage(usage);
162 return true;
163 }
164
165 /*
166 private final boolean installHashFilename () {
167 if (mapfilenames.isEmpty()) {
168 return false;
169 }
170 Set<String> setfilename = mapfilenames.keySet();
171 FilenameDocument.Filename filename;
172 List<Enum> arch = new ArrayList<Enum>();
173 Iterator<String> it = setfilename.iterator();
174 while (it.hasNext()) {
175 filename = sourcefiles.addNewFilename();
176 filename.setStringValue(it.next());
177 arch.add(mapfilenames.get(filename.getStringValue()));
178 filename.setSupArchList(arch);
179 }
180 return true;
181 }
182
183 public final boolean addSourceFile (String filename, Enum arch) { // dummy & null how to imply?
184 if (mapfilenames.containsKey(filename)) {
185 return false;
186 } else {
187 mapfilenames.put(filename, arch);
188 return true;
189 }
190 }
191 */
192 public final boolean addSourceFile (String name, Enum en) {
193 Iterator<FilenameDocument.Filename> fileit = sourcefiles.getFilenameList().iterator();
194 while (fileit.hasNext()) {
195 if (fileit.next().getStringValue() == name) {
196 MigrationTool.ui.println ("Warning: Duplicate SourceFileName");
197 return false;
198 }
199 }
200
201 FilenameDocument.Filename filename;
202 List<Enum> arch = new ArrayList<Enum>();
203 filename = sourcefiles.addNewFilename();
204 filename.setStringValue(name);
205 arch.add(en);
206 filename.setSupArchList(arch);
207 return true;
208 }
209
210 // entry point todo
211
212 public final boolean setupExternSpecification () {
213 addExternSpecification("EFI_SPECIFICATION_VERSION 0x00020000");
214 addExternSpecification("EDK_RELEASE_VERSION 0x00020000");
215 return true;
216 }
217
218 public final boolean addExternSpecification (String specification) {
219 if (externs.getSpecificationList().contains(specification)) {
220 return false;
221 } else {
222 externs.addSpecification(specification);
223 return true;
224 }
225 }
226
227 public final boolean setupPackageDependencies() {
228 addPackage("5e0e9358-46b6-4ae2-8218-4ab8b9bbdcec");
229 addPackage("68169ab0-d41b-4009-9060-292c253ac43d");
230 return true;
231 }
232
233 public final boolean addPackage (String guid) {
234 if (packagedependencies.getPackageList().contains(guid)) {
235 return false;
236 } else {
237 packagedependencies.addNewPackage().setPackageGuid(guid);
238 return true;
239 }
240 }
241
242 public final boolean setupModuleDefinitions () { //????????? give this job to moduleinfo
243 moduledefinitions.setBinaryModule(false);
244 moduledefinitions.setOutputFileBasename(msaheader.getModuleName());
245 return true;
246 }
247 public final boolean addSupportedArchitectures (Enum arch) {
248 if (listarch.contains(arch)) {
249 return false;
250 } else {
251 listarch.add(arch);
252 return true;
253 }
254 }
255
256 public final boolean addSpecification (String specification) {
257 if (msaheader.getSpecification() == null) {
258 if (specification == null) {
259 msaheader.setSpecification(SPECIFICATION);
260 } else {
261 msaheader.setSpecification(specification);
262 }
263 return true;
264 } else {
265 MigrationTool.ui.println ("Warning: Duplicate Specification");
266 return false;
267 }
268 }
269
270 public final boolean addLicense (String licensecontent) {
271 if (msaheader.getLicense() == null) {
272 license = msaheader.addNewLicense();
273 if (licensecontent == null) {
274 license.setStringValue(LICENSE);
275 } else {
276 license.setStringValue(licensecontent);
277 }
278 return true;
279 } else {
280 MigrationTool.ui.println ("Warning: Duplicate License");
281 return false;
282 }
283 }
284
285 public final boolean addDescription (String description) {
286 if (msaheader.getDescription() == null) {
287 if (description == null) {
288 msaheader.setDescription(DESCRIPTION);
289 } else {
290 msaheader.setDescription(description);
291 }
292 return true;
293 } else {
294 MigrationTool.ui.println ("Warning: Duplicate Description");
295 return false;
296 }
297 }
298
299 public final boolean addAbstract (String abs) {
300 if (msaheader.getAbstract() == null) {
301 if (abs == null) {
302 msaheader.setAbstract(ABSTRACT + msaheader.getModuleName());
303 } else {
304 msaheader.setVersion(abs);
305 }
306 return true;
307 } else {
308 MigrationTool.ui.println ("Warning: Duplicate Abstract");
309 return false;
310 }
311 }
312
313 public final boolean addVersion (String version) {
314 if (msaheader.getVersion() == null) {
315 if (version == null) {
316 msaheader.setVersion(VERSION);
317 } else {
318 msaheader.setVersion(version);
319 }
320 return true;
321 } else {
322 MigrationTool.ui.println ("Warning: Duplicate Version");
323 return false;
324 }
325 }
326
327 public final boolean addCopyRight (String copyright) {
328 if (msaheader.getCopyright() == null) {
329 if (copyright == null) {
330 msaheader.setCopyright(COPYRIGHT);
331 } else {
332 msaheader.setCopyright(copyright);
333 }
334 return true;
335 } else {
336 MigrationTool.ui.println ("Warning: Duplicate CopyRight");
337 return false;
338 }
339 }
340
341 public final boolean addModuleType (String moduletype) {
342 if (msaheader.getModuleType() == null) {
343 msaheader.setModuleType(ModuleTypeDef.Enum.forString(moduletype));
344 return true;
345 } else {
346 MigrationTool.ui.println ("Warning: Duplicate ModuleType");
347 return false;
348 }
349 }
350
351 public final boolean addGuidValue (String guidvalue) {
352 if (msaheader.getGuidValue() == null) {
353 msaheader.setGuidValue(guidvalue);
354 return true;
355 } else {
356 MigrationTool.ui.println ("Warning: Duplicate GuidValue");
357 return false;
358 }
359 }
360
361 public final boolean addModuleName (String modulename) {
362 if (msaheader.getModuleName() == null) {
363 msaheader.setModuleName(modulename);
364 return true;
365 } else {
366 MigrationTool.ui.println ("Warning: Duplicate ModuleName");
367 return false;
368 }
369 }
370 //-----------------------------msaheader-------------------------------------//
371
372 public final void flush(String outputpath) throws Exception {
373 XmlOptions options = new XmlOptions();
374
375 options.setCharacterEncoding("UTF-8");
376 options.setSavePrettyPrint();
377 options.setSavePrettyPrintIndent(2);
378 options.setUseDefaultNamespace();
379
380 BufferedWriter bw = new BufferedWriter(new FileWriter(outputpath));
381 msadoc.save(bw, options);
382 bw.flush();
383 bw.close();
384 }
385
386 private final MsaOwner init () {
387 msa = msadoc.addNewModuleSurfaceArea();
388 msaheader = msa.addNewMsaHeader();
389 moduledefinitions = msa.addNewModuleDefinitions();
390 moduledefinitions.setSupportedArchitectures(listarch);
391
392 sourcefiles = msa.addNewSourceFiles();
393 packagedependencies = msa.addNewPackageDependencies();
394 libclassdefs = msa.addNewLibraryClassDefinitions();
395 externs = msa.addNewExterns();
396 return this;
397 }
398
399 public static final MsaOwner initNewMsaOwner() {
400 return new MsaOwner().init();
401 }
402 }
|
__label__pos
| 0.995854 |
COIN-OR::LEMON - Graph Library
source: lemon-1.2/lemon/matching.h @ 705:39a5b48bcace
Last change on this file since 705:39a5b48bcace was 651:3adf5e2d1e62, checked in by Peter Kovacs <kpeter@…>, 11 years ago
Small doc improvements (#257)
File size: 102.2 KB
Line
1/* -*- mode: C++; indent-tabs-mode: nil; -*-
2 *
3 * This file is a part of LEMON, a generic C++ optimization library.
4 *
5 * Copyright (C) 2003-2009
6 * Egervary Jeno Kombinatorikus Optimalizalasi Kutatocsoport
7 * (Egervary Research Group on Combinatorial Optimization, EGRES).
8 *
9 * Permission to use, modify and distribute this software is granted
10 * provided that this copyright notice appears in all copies. For
11 * precise terms see the accompanying LICENSE file.
12 *
13 * This software is provided "AS IS" with no warranty of any kind,
14 * express or implied, and with no claim as to its suitability for any
15 * purpose.
16 *
17 */
18
19#ifndef LEMON_MAX_MATCHING_H
20#define LEMON_MAX_MATCHING_H
21
22#include <vector>
23#include <queue>
24#include <set>
25#include <limits>
26
27#include <lemon/core.h>
28#include <lemon/unionfind.h>
29#include <lemon/bin_heap.h>
30#include <lemon/maps.h>
31
32///\ingroup matching
33///\file
34///\brief Maximum matching algorithms in general graphs.
35
36namespace lemon {
37
38 /// \ingroup matching
39 ///
40 /// \brief Maximum cardinality matching in general graphs
41 ///
42 /// This class implements Edmonds' alternating forest matching algorithm
43 /// for finding a maximum cardinality matching in a general undirected graph.
44 /// It can be started from an arbitrary initial matching
45 /// (the default is the empty one).
46 ///
47 /// The dual solution of the problem is a map of the nodes to
48 /// \ref MaxMatching::Status "Status", having values \c EVEN (or \c D),
49 /// \c ODD (or \c A) and \c MATCHED (or \c C) defining the Gallai-Edmonds
50 /// decomposition of the graph. The nodes in \c EVEN/D induce a subgraph
51 /// with factor-critical components, the nodes in \c ODD/A form the
52 /// canonical barrier, and the nodes in \c MATCHED/C induce a graph having
53 /// a perfect matching. The number of the factor-critical components
54 /// minus the number of barrier nodes is a lower bound on the
55 /// unmatched nodes, and the matching is optimal if and only if this bound is
56 /// tight. This decomposition can be obtained using \ref status() or
57 /// \ref statusMap() after running the algorithm.
58 ///
59 /// \tparam GR The undirected graph type the algorithm runs on.
60 template <typename GR>
61 class MaxMatching {
62 public:
63
64 /// The graph type of the algorithm
65 typedef GR Graph;
66 /// The type of the matching map
67 typedef typename Graph::template NodeMap<typename Graph::Arc>
68 MatchingMap;
69
70 ///\brief Status constants for Gallai-Edmonds decomposition.
71 ///
72 ///These constants are used for indicating the Gallai-Edmonds
73 ///decomposition of a graph. The nodes with status \c EVEN (or \c D)
74 ///induce a subgraph with factor-critical components, the nodes with
75 ///status \c ODD (or \c A) form the canonical barrier, and the nodes
76 ///with status \c MATCHED (or \c C) induce a subgraph having a
77 ///perfect matching.
78 enum Status {
79 EVEN = 1, ///< = 1. (\c D is an alias for \c EVEN.)
80 D = 1,
81 MATCHED = 0, ///< = 0. (\c C is an alias for \c MATCHED.)
82 C = 0,
83 ODD = -1, ///< = -1. (\c A is an alias for \c ODD.)
84 A = -1,
85 UNMATCHED = -2 ///< = -2.
86 };
87
88 /// The type of the status map
89 typedef typename Graph::template NodeMap<Status> StatusMap;
90
91 private:
92
93 TEMPLATE_GRAPH_TYPEDEFS(Graph);
94
95 typedef UnionFindEnum<IntNodeMap> BlossomSet;
96 typedef ExtendFindEnum<IntNodeMap> TreeSet;
97 typedef RangeMap<Node> NodeIntMap;
98 typedef MatchingMap EarMap;
99 typedef std::vector<Node> NodeQueue;
100
101 const Graph& _graph;
102 MatchingMap* _matching;
103 StatusMap* _status;
104
105 EarMap* _ear;
106
107 IntNodeMap* _blossom_set_index;
108 BlossomSet* _blossom_set;
109 NodeIntMap* _blossom_rep;
110
111 IntNodeMap* _tree_set_index;
112 TreeSet* _tree_set;
113
114 NodeQueue _node_queue;
115 int _process, _postpone, _last;
116
117 int _node_num;
118
119 private:
120
121 void createStructures() {
122 _node_num = countNodes(_graph);
123 if (!_matching) {
124 _matching = new MatchingMap(_graph);
125 }
126 if (!_status) {
127 _status = new StatusMap(_graph);
128 }
129 if (!_ear) {
130 _ear = new EarMap(_graph);
131 }
132 if (!_blossom_set) {
133 _blossom_set_index = new IntNodeMap(_graph);
134 _blossom_set = new BlossomSet(*_blossom_set_index);
135 }
136 if (!_blossom_rep) {
137 _blossom_rep = new NodeIntMap(_node_num);
138 }
139 if (!_tree_set) {
140 _tree_set_index = new IntNodeMap(_graph);
141 _tree_set = new TreeSet(*_tree_set_index);
142 }
143 _node_queue.resize(_node_num);
144 }
145
146 void destroyStructures() {
147 if (_matching) {
148 delete _matching;
149 }
150 if (_status) {
151 delete _status;
152 }
153 if (_ear) {
154 delete _ear;
155 }
156 if (_blossom_set) {
157 delete _blossom_set;
158 delete _blossom_set_index;
159 }
160 if (_blossom_rep) {
161 delete _blossom_rep;
162 }
163 if (_tree_set) {
164 delete _tree_set_index;
165 delete _tree_set;
166 }
167 }
168
169 void processDense(const Node& n) {
170 _process = _postpone = _last = 0;
171 _node_queue[_last++] = n;
172
173 while (_process != _last) {
174 Node u = _node_queue[_process++];
175 for (OutArcIt a(_graph, u); a != INVALID; ++a) {
176 Node v = _graph.target(a);
177 if ((*_status)[v] == MATCHED) {
178 extendOnArc(a);
179 } else if ((*_status)[v] == UNMATCHED) {
180 augmentOnArc(a);
181 return;
182 }
183 }
184 }
185
186 while (_postpone != _last) {
187 Node u = _node_queue[_postpone++];
188
189 for (OutArcIt a(_graph, u); a != INVALID ; ++a) {
190 Node v = _graph.target(a);
191
192 if ((*_status)[v] == EVEN) {
193 if (_blossom_set->find(u) != _blossom_set->find(v)) {
194 shrinkOnEdge(a);
195 }
196 }
197
198 while (_process != _last) {
199 Node w = _node_queue[_process++];
200 for (OutArcIt b(_graph, w); b != INVALID; ++b) {
201 Node x = _graph.target(b);
202 if ((*_status)[x] == MATCHED) {
203 extendOnArc(b);
204 } else if ((*_status)[x] == UNMATCHED) {
205 augmentOnArc(b);
206 return;
207 }
208 }
209 }
210 }
211 }
212 }
213
214 void processSparse(const Node& n) {
215 _process = _last = 0;
216 _node_queue[_last++] = n;
217 while (_process != _last) {
218 Node u = _node_queue[_process++];
219 for (OutArcIt a(_graph, u); a != INVALID; ++a) {
220 Node v = _graph.target(a);
221
222 if ((*_status)[v] == EVEN) {
223 if (_blossom_set->find(u) != _blossom_set->find(v)) {
224 shrinkOnEdge(a);
225 }
226 } else if ((*_status)[v] == MATCHED) {
227 extendOnArc(a);
228 } else if ((*_status)[v] == UNMATCHED) {
229 augmentOnArc(a);
230 return;
231 }
232 }
233 }
234 }
235
236 void shrinkOnEdge(const Edge& e) {
237 Node nca = INVALID;
238
239 {
240 std::set<Node> left_set, right_set;
241
242 Node left = (*_blossom_rep)[_blossom_set->find(_graph.u(e))];
243 left_set.insert(left);
244
245 Node right = (*_blossom_rep)[_blossom_set->find(_graph.v(e))];
246 right_set.insert(right);
247
248 while (true) {
249 if ((*_matching)[left] == INVALID) break;
250 left = _graph.target((*_matching)[left]);
251 left = (*_blossom_rep)[_blossom_set->
252 find(_graph.target((*_ear)[left]))];
253 if (right_set.find(left) != right_set.end()) {
254 nca = left;
255 break;
256 }
257 left_set.insert(left);
258
259 if ((*_matching)[right] == INVALID) break;
260 right = _graph.target((*_matching)[right]);
261 right = (*_blossom_rep)[_blossom_set->
262 find(_graph.target((*_ear)[right]))];
263 if (left_set.find(right) != left_set.end()) {
264 nca = right;
265 break;
266 }
267 right_set.insert(right);
268 }
269
270 if (nca == INVALID) {
271 if ((*_matching)[left] == INVALID) {
272 nca = right;
273 while (left_set.find(nca) == left_set.end()) {
274 nca = _graph.target((*_matching)[nca]);
275 nca =(*_blossom_rep)[_blossom_set->
276 find(_graph.target((*_ear)[nca]))];
277 }
278 } else {
279 nca = left;
280 while (right_set.find(nca) == right_set.end()) {
281 nca = _graph.target((*_matching)[nca]);
282 nca = (*_blossom_rep)[_blossom_set->
283 find(_graph.target((*_ear)[nca]))];
284 }
285 }
286 }
287 }
288
289 {
290
291 Node node = _graph.u(e);
292 Arc arc = _graph.direct(e, true);
293 Node base = (*_blossom_rep)[_blossom_set->find(node)];
294
295 while (base != nca) {
296 (*_ear)[node] = arc;
297
298 Node n = node;
299 while (n != base) {
300 n = _graph.target((*_matching)[n]);
301 Arc a = (*_ear)[n];
302 n = _graph.target(a);
303 (*_ear)[n] = _graph.oppositeArc(a);
304 }
305 node = _graph.target((*_matching)[base]);
306 _tree_set->erase(base);
307 _tree_set->erase(node);
308 _blossom_set->insert(node, _blossom_set->find(base));
309 (*_status)[node] = EVEN;
310 _node_queue[_last++] = node;
311 arc = _graph.oppositeArc((*_ear)[node]);
312 node = _graph.target((*_ear)[node]);
313 base = (*_blossom_rep)[_blossom_set->find(node)];
314 _blossom_set->join(_graph.target(arc), base);
315 }
316 }
317
318 (*_blossom_rep)[_blossom_set->find(nca)] = nca;
319
320 {
321
322 Node node = _graph.v(e);
323 Arc arc = _graph.direct(e, false);
324 Node base = (*_blossom_rep)[_blossom_set->find(node)];
325
326 while (base != nca) {
327 (*_ear)[node] = arc;
328
329 Node n = node;
330 while (n != base) {
331 n = _graph.target((*_matching)[n]);
332 Arc a = (*_ear)[n];
333 n = _graph.target(a);
334 (*_ear)[n] = _graph.oppositeArc(a);
335 }
336 node = _graph.target((*_matching)[base]);
337 _tree_set->erase(base);
338 _tree_set->erase(node);
339 _blossom_set->insert(node, _blossom_set->find(base));
340 (*_status)[node] = EVEN;
341 _node_queue[_last++] = node;
342 arc = _graph.oppositeArc((*_ear)[node]);
343 node = _graph.target((*_ear)[node]);
344 base = (*_blossom_rep)[_blossom_set->find(node)];
345 _blossom_set->join(_graph.target(arc), base);
346 }
347 }
348
349 (*_blossom_rep)[_blossom_set->find(nca)] = nca;
350 }
351
352 void extendOnArc(const Arc& a) {
353 Node base = _graph.source(a);
354 Node odd = _graph.target(a);
355
356 (*_ear)[odd] = _graph.oppositeArc(a);
357 Node even = _graph.target((*_matching)[odd]);
358 (*_blossom_rep)[_blossom_set->insert(even)] = even;
359 (*_status)[odd] = ODD;
360 (*_status)[even] = EVEN;
361 int tree = _tree_set->find((*_blossom_rep)[_blossom_set->find(base)]);
362 _tree_set->insert(odd, tree);
363 _tree_set->insert(even, tree);
364 _node_queue[_last++] = even;
365
366 }
367
368 void augmentOnArc(const Arc& a) {
369 Node even = _graph.source(a);
370 Node odd = _graph.target(a);
371
372 int tree = _tree_set->find((*_blossom_rep)[_blossom_set->find(even)]);
373
374 (*_matching)[odd] = _graph.oppositeArc(a);
375 (*_status)[odd] = MATCHED;
376
377 Arc arc = (*_matching)[even];
378 (*_matching)[even] = a;
379
380 while (arc != INVALID) {
381 odd = _graph.target(arc);
382 arc = (*_ear)[odd];
383 even = _graph.target(arc);
384 (*_matching)[odd] = arc;
385 arc = (*_matching)[even];
386 (*_matching)[even] = _graph.oppositeArc((*_matching)[odd]);
387 }
388
389 for (typename TreeSet::ItemIt it(*_tree_set, tree);
390 it != INVALID; ++it) {
391 if ((*_status)[it] == ODD) {
392 (*_status)[it] = MATCHED;
393 } else {
394 int blossom = _blossom_set->find(it);
395 for (typename BlossomSet::ItemIt jt(*_blossom_set, blossom);
396 jt != INVALID; ++jt) {
397 (*_status)[jt] = MATCHED;
398 }
399 _blossom_set->eraseClass(blossom);
400 }
401 }
402 _tree_set->eraseClass(tree);
403
404 }
405
406 public:
407
408 /// \brief Constructor
409 ///
410 /// Constructor.
411 MaxMatching(const Graph& graph)
412 : _graph(graph), _matching(0), _status(0), _ear(0),
413 _blossom_set_index(0), _blossom_set(0), _blossom_rep(0),
414 _tree_set_index(0), _tree_set(0) {}
415
416 ~MaxMatching() {
417 destroyStructures();
418 }
419
420 /// \name Execution Control
421 /// The simplest way to execute the algorithm is to use the
422 /// \c run() member function.\n
423 /// If you need better control on the execution, you have to call
424 /// one of the functions \ref init(), \ref greedyInit() or
425 /// \ref matchingInit() first, then you can start the algorithm with
426 /// \ref startSparse() or \ref startDense().
427
428 ///@{
429
430 /// \brief Set the initial matching to the empty matching.
431 ///
432 /// This function sets the initial matching to the empty matching.
433 void init() {
434 createStructures();
435 for(NodeIt n(_graph); n != INVALID; ++n) {
436 (*_matching)[n] = INVALID;
437 (*_status)[n] = UNMATCHED;
438 }
439 }
440
441 /// \brief Find an initial matching in a greedy way.
442 ///
443 /// This function finds an initial matching in a greedy way.
444 void greedyInit() {
445 createStructures();
446 for (NodeIt n(_graph); n != INVALID; ++n) {
447 (*_matching)[n] = INVALID;
448 (*_status)[n] = UNMATCHED;
449 }
450 for (NodeIt n(_graph); n != INVALID; ++n) {
451 if ((*_matching)[n] == INVALID) {
452 for (OutArcIt a(_graph, n); a != INVALID ; ++a) {
453 Node v = _graph.target(a);
454 if ((*_matching)[v] == INVALID && v != n) {
455 (*_matching)[n] = a;
456 (*_status)[n] = MATCHED;
457 (*_matching)[v] = _graph.oppositeArc(a);
458 (*_status)[v] = MATCHED;
459 break;
460 }
461 }
462 }
463 }
464 }
465
466
467 /// \brief Initialize the matching from a map.
468 ///
469 /// This function initializes the matching from a \c bool valued edge
470 /// map. This map should have the property that there are no two incident
471 /// edges with \c true value, i.e. it really contains a matching.
472 /// \return \c true if the map contains a matching.
473 template <typename MatchingMap>
474 bool matchingInit(const MatchingMap& matching) {
475 createStructures();
476
477 for (NodeIt n(_graph); n != INVALID; ++n) {
478 (*_matching)[n] = INVALID;
479 (*_status)[n] = UNMATCHED;
480 }
481 for(EdgeIt e(_graph); e!=INVALID; ++e) {
482 if (matching[e]) {
483
484 Node u = _graph.u(e);
485 if ((*_matching)[u] != INVALID) return false;
486 (*_matching)[u] = _graph.direct(e, true);
487 (*_status)[u] = MATCHED;
488
489 Node v = _graph.v(e);
490 if ((*_matching)[v] != INVALID) return false;
491 (*_matching)[v] = _graph.direct(e, false);
492 (*_status)[v] = MATCHED;
493 }
494 }
495 return true;
496 }
497
498 /// \brief Start Edmonds' algorithm
499 ///
500 /// This function runs the original Edmonds' algorithm.
501 ///
502 /// \pre \ref init(), \ref greedyInit() or \ref matchingInit() must be
503 /// called before using this function.
504 void startSparse() {
505 for(NodeIt n(_graph); n != INVALID; ++n) {
506 if ((*_status)[n] == UNMATCHED) {
507 (*_blossom_rep)[_blossom_set->insert(n)] = n;
508 _tree_set->insert(n);
509 (*_status)[n] = EVEN;
510 processSparse(n);
511 }
512 }
513 }
514
515 /// \brief Start Edmonds' algorithm with a heuristic improvement
516 /// for dense graphs
517 ///
518 /// This function runs Edmonds' algorithm with a heuristic of postponing
519 /// shrinks, therefore resulting in a faster algorithm for dense graphs.
520 ///
521 /// \pre \ref init(), \ref greedyInit() or \ref matchingInit() must be
522 /// called before using this function.
523 void startDense() {
524 for(NodeIt n(_graph); n != INVALID; ++n) {
525 if ((*_status)[n] == UNMATCHED) {
526 (*_blossom_rep)[_blossom_set->insert(n)] = n;
527 _tree_set->insert(n);
528 (*_status)[n] = EVEN;
529 processDense(n);
530 }
531 }
532 }
533
534
535 /// \brief Run Edmonds' algorithm
536 ///
537 /// This function runs Edmonds' algorithm. An additional heuristic of
538 /// postponing shrinks is used for relatively dense graphs
539 /// (for which <tt>m>=2*n</tt> holds).
540 void run() {
541 if (countEdges(_graph) < 2 * countNodes(_graph)) {
542 greedyInit();
543 startSparse();
544 } else {
545 init();
546 startDense();
547 }
548 }
549
550 /// @}
551
552 /// \name Primal Solution
553 /// Functions to get the primal solution, i.e. the maximum matching.
554
555 /// @{
556
557 /// \brief Return the size (cardinality) of the matching.
558 ///
559 /// This function returns the size (cardinality) of the current matching.
560 /// After run() it returns the size of the maximum matching in the graph.
561 int matchingSize() const {
562 int size = 0;
563 for (NodeIt n(_graph); n != INVALID; ++n) {
564 if ((*_matching)[n] != INVALID) {
565 ++size;
566 }
567 }
568 return size / 2;
569 }
570
571 /// \brief Return \c true if the given edge is in the matching.
572 ///
573 /// This function returns \c true if the given edge is in the current
574 /// matching.
575 bool matching(const Edge& edge) const {
576 return edge == (*_matching)[_graph.u(edge)];
577 }
578
579 /// \brief Return the matching arc (or edge) incident to the given node.
580 ///
581 /// This function returns the matching arc (or edge) incident to the
582 /// given node in the current matching or \c INVALID if the node is
583 /// not covered by the matching.
584 Arc matching(const Node& n) const {
585 return (*_matching)[n];
586 }
587
588 /// \brief Return a const reference to the matching map.
589 ///
590 /// This function returns a const reference to a node map that stores
591 /// the matching arc (or edge) incident to each node.
592 const MatchingMap& matchingMap() const {
593 return *_matching;
594 }
595
596 /// \brief Return the mate of the given node.
597 ///
598 /// This function returns the mate of the given node in the current
599 /// matching or \c INVALID if the node is not covered by the matching.
600 Node mate(const Node& n) const {
601 return (*_matching)[n] != INVALID ?
602 _graph.target((*_matching)[n]) : INVALID;
603 }
604
605 /// @}
606
607 /// \name Dual Solution
608 /// Functions to get the dual solution, i.e. the Gallai-Edmonds
609 /// decomposition.
610
611 /// @{
612
613 /// \brief Return the status of the given node in the Edmonds-Gallai
614 /// decomposition.
615 ///
616 /// This function returns the \ref Status "status" of the given node
617 /// in the Edmonds-Gallai decomposition.
618 Status status(const Node& n) const {
619 return (*_status)[n];
620 }
621
622 /// \brief Return a const reference to the status map, which stores
623 /// the Edmonds-Gallai decomposition.
624 ///
625 /// This function returns a const reference to a node map that stores the
626 /// \ref Status "status" of each node in the Edmonds-Gallai decomposition.
627 const StatusMap& statusMap() const {
628 return *_status;
629 }
630
631 /// \brief Return \c true if the given node is in the barrier.
632 ///
633 /// This function returns \c true if the given node is in the barrier.
634 bool barrier(const Node& n) const {
635 return (*_status)[n] == ODD;
636 }
637
638 /// @}
639
640 };
641
642 /// \ingroup matching
643 ///
644 /// \brief Weighted matching in general graphs
645 ///
646 /// This class provides an efficient implementation of Edmond's
647 /// maximum weighted matching algorithm. The implementation is based
648 /// on extensive use of priority queues and provides
649 /// \f$O(nm\log n)\f$ time complexity.
650 ///
651 /// The maximum weighted matching problem is to find a subset of the
652 /// edges in an undirected graph with maximum overall weight for which
653 /// each node has at most one incident edge.
654 /// It can be formulated with the following linear program.
655 /// \f[ \sum_{e \in \delta(u)}x_e \le 1 \quad \forall u\in V\f]
656 /** \f[ \sum_{e \in \gamma(B)}x_e \le \frac{\vert B \vert - 1}{2}
657 \quad \forall B\in\mathcal{O}\f] */
658 /// \f[x_e \ge 0\quad \forall e\in E\f]
659 /// \f[\max \sum_{e\in E}x_ew_e\f]
660 /// where \f$\delta(X)\f$ is the set of edges incident to a node in
661 /// \f$X\f$, \f$\gamma(X)\f$ is the set of edges with both ends in
662 /// \f$X\f$ and \f$\mathcal{O}\f$ is the set of odd cardinality
663 /// subsets of the nodes.
664 ///
665 /// The algorithm calculates an optimal matching and a proof of the
666 /// optimality. The solution of the dual problem can be used to check
667 /// the result of the algorithm. The dual linear problem is the
668 /// following.
669 /** \f[ y_u + y_v + \sum_{B \in \mathcal{O}, uv \in \gamma(B)}
670 z_B \ge w_{uv} \quad \forall uv\in E\f] */
671 /// \f[y_u \ge 0 \quad \forall u \in V\f]
672 /// \f[z_B \ge 0 \quad \forall B \in \mathcal{O}\f]
673 /** \f[\min \sum_{u \in V}y_u + \sum_{B \in \mathcal{O}}
674 \frac{\vert B \vert - 1}{2}z_B\f] */
675 ///
676 /// The algorithm can be executed with the run() function.
677 /// After it the matching (the primal solution) and the dual solution
678 /// can be obtained using the query functions and the
679 /// \ref MaxWeightedMatching::BlossomIt "BlossomIt" nested class,
680 /// which is able to iterate on the nodes of a blossom.
681 /// If the value type is integer, then the dual solution is multiplied
682 /// by \ref MaxWeightedMatching::dualScale "4".
683 ///
684 /// \tparam GR The undirected graph type the algorithm runs on.
685 /// \tparam WM The type edge weight map. The default type is
686 /// \ref concepts::Graph::EdgeMap "GR::EdgeMap<int>".
687#ifdef DOXYGEN
688 template <typename GR, typename WM>
689#else
690 template <typename GR,
691 typename WM = typename GR::template EdgeMap<int> >
692#endif
693 class MaxWeightedMatching {
694 public:
695
696 /// The graph type of the algorithm
697 typedef GR Graph;
698 /// The type of the edge weight map
699 typedef WM WeightMap;
700 /// The value type of the edge weights
701 typedef typename WeightMap::Value Value;
702
703 /// The type of the matching map
704 typedef typename Graph::template NodeMap<typename Graph::Arc>
705 MatchingMap;
706
707 /// \brief Scaling factor for dual solution
708 ///
709 /// Scaling factor for dual solution. It is equal to 4 or 1
710 /// according to the value type.
711 static const int dualScale =
712 std::numeric_limits<Value>::is_integer ? 4 : 1;
713
714 private:
715
716 TEMPLATE_GRAPH_TYPEDEFS(Graph);
717
718 typedef typename Graph::template NodeMap<Value> NodePotential;
719 typedef std::vector<Node> BlossomNodeList;
720
721 struct BlossomVariable {
722 int begin, end;
723 Value value;
724
725 BlossomVariable(int _begin, int _end, Value _value)
726 : begin(_begin), end(_end), value(_value) {}
727
728 };
729
730 typedef std::vector<BlossomVariable> BlossomPotential;
731
732 const Graph& _graph;
733 const WeightMap& _weight;
734
735 MatchingMap* _matching;
736
737 NodePotential* _node_potential;
738
739 BlossomPotential _blossom_potential;
740 BlossomNodeList _blossom_node_list;
741
742 int _node_num;
743 int _blossom_num;
744
745 typedef RangeMap<int> IntIntMap;
746
747 enum Status {
748 EVEN = -1, MATCHED = 0, ODD = 1, UNMATCHED = -2
749 };
750
751 typedef HeapUnionFind<Value, IntNodeMap> BlossomSet;
752 struct BlossomData {
753 int tree;
754 Status status;
755 Arc pred, next;
756 Value pot, offset;
757 Node base;
758 };
759
760 IntNodeMap *_blossom_index;
761 BlossomSet *_blossom_set;
762 RangeMap<BlossomData>* _blossom_data;
763
764 IntNodeMap *_node_index;
765 IntArcMap *_node_heap_index;
766
767 struct NodeData {
768
769 NodeData(IntArcMap& node_heap_index)
770 : heap(node_heap_index) {}
771
772 int blossom;
773 Value pot;
774 BinHeap<Value, IntArcMap> heap;
775 std::map<int, Arc> heap_index;
776
777 int tree;
778 };
779
780 RangeMap<NodeData>* _node_data;
781
782 typedef ExtendFindEnum<IntIntMap> TreeSet;
783
784 IntIntMap *_tree_set_index;
785 TreeSet *_tree_set;
786
787 IntNodeMap *_delta1_index;
788 BinHeap<Value, IntNodeMap> *_delta1;
789
790 IntIntMap *_delta2_index;
791 BinHeap<Value, IntIntMap> *_delta2;
792
793 IntEdgeMap *_delta3_index;
794 BinHeap<Value, IntEdgeMap> *_delta3;
795
796 IntIntMap *_delta4_index;
797 BinHeap<Value, IntIntMap> *_delta4;
798
799 Value _delta_sum;
800
801 void createStructures() {
802 _node_num = countNodes(_graph);
803 _blossom_num = _node_num * 3 / 2;
804
805 if (!_matching) {
806 _matching = new MatchingMap(_graph);
807 }
808 if (!_node_potential) {
809 _node_potential = new NodePotential(_graph);
810 }
811 if (!_blossom_set) {
812 _blossom_index = new IntNodeMap(_graph);
813 _blossom_set = new BlossomSet(*_blossom_index);
814 _blossom_data = new RangeMap<BlossomData>(_blossom_num);
815 }
816
817 if (!_node_index) {
818 _node_index = new IntNodeMap(_graph);
819 _node_heap_index = new IntArcMap(_graph);
820 _node_data = new RangeMap<NodeData>(_node_num,
821 NodeData(*_node_heap_index));
822 }
823
824 if (!_tree_set) {
825 _tree_set_index = new IntIntMap(_blossom_num);
826 _tree_set = new TreeSet(*_tree_set_index);
827 }
828 if (!_delta1) {
829 _delta1_index = new IntNodeMap(_graph);
830 _delta1 = new BinHeap<Value, IntNodeMap>(*_delta1_index);
831 }
832 if (!_delta2) {
833 _delta2_index = new IntIntMap(_blossom_num);
834 _delta2 = new BinHeap<Value, IntIntMap>(*_delta2_index);
835 }
836 if (!_delta3) {
837 _delta3_index = new IntEdgeMap(_graph);
838 _delta3 = new BinHeap<Value, IntEdgeMap>(*_delta3_index);
839 }
840 if (!_delta4) {
841 _delta4_index = new IntIntMap(_blossom_num);
842 _delta4 = new BinHeap<Value, IntIntMap>(*_delta4_index);
843 }
844 }
845
846 void destroyStructures() {
847 _node_num = countNodes(_graph);
848 _blossom_num = _node_num * 3 / 2;
849
850 if (_matching) {
851 delete _matching;
852 }
853 if (_node_potential) {
854 delete _node_potential;
855 }
856 if (_blossom_set) {
857 delete _blossom_index;
858 delete _blossom_set;
859 delete _blossom_data;
860 }
861
862 if (_node_index) {
863 delete _node_index;
864 delete _node_heap_index;
865 delete _node_data;
866 }
867
868 if (_tree_set) {
869 delete _tree_set_index;
870 delete _tree_set;
871 }
872 if (_delta1) {
873 delete _delta1_index;
874 delete _delta1;
875 }
876 if (_delta2) {
877 delete _delta2_index;
878 delete _delta2;
879 }
880 if (_delta3) {
881 delete _delta3_index;
882 delete _delta3;
883 }
884 if (_delta4) {
885 delete _delta4_index;
886 delete _delta4;
887 }
888 }
889
890 void matchedToEven(int blossom, int tree) {
891 if (_delta2->state(blossom) == _delta2->IN_HEAP) {
892 _delta2->erase(blossom);
893 }
894
895 if (!_blossom_set->trivial(blossom)) {
896 (*_blossom_data)[blossom].pot -=
897 2 * (_delta_sum - (*_blossom_data)[blossom].offset);
898 }
899
900 for (typename BlossomSet::ItemIt n(*_blossom_set, blossom);
901 n != INVALID; ++n) {
902
903 _blossom_set->increase(n, std::numeric_limits<Value>::max());
904 int ni = (*_node_index)[n];
905
906 (*_node_data)[ni].heap.clear();
907 (*_node_data)[ni].heap_index.clear();
908
909 (*_node_data)[ni].pot += _delta_sum - (*_blossom_data)[blossom].offset;
910
911 _delta1->push(n, (*_node_data)[ni].pot);
912
913 for (InArcIt e(_graph, n); e != INVALID; ++e) {
914 Node v = _graph.source(e);
915 int vb = _blossom_set->find(v);
916 int vi = (*_node_index)[v];
917
918 Value rw = (*_node_data)[ni].pot + (*_node_data)[vi].pot -
919 dualScale * _weight[e];
920
921 if ((*_blossom_data)[vb].status == EVEN) {
922 if (_delta3->state(e) != _delta3->IN_HEAP && blossom != vb) {
923 _delta3->push(e, rw / 2);
924 }
925 } else if ((*_blossom_data)[vb].status == UNMATCHED) {
926 if (_delta3->state(e) != _delta3->IN_HEAP) {
927 _delta3->push(e, rw);
928 }
929 } else {
930 typename std::map<int, Arc>::iterator it =
931 (*_node_data)[vi].heap_index.find(tree);
932
933 if (it != (*_node_data)[vi].heap_index.end()) {
934 if ((*_node_data)[vi].heap[it->second] > rw) {
935 (*_node_data)[vi].heap.replace(it->second, e);
936 (*_node_data)[vi].heap.decrease(e, rw);
937 it->second = e;
938 }
939 } else {
940 (*_node_data)[vi].heap.push(e, rw);
941 (*_node_data)[vi].heap_index.insert(std::make_pair(tree, e));
942 }
943
944 if ((*_blossom_set)[v] > (*_node_data)[vi].heap.prio()) {
945 _blossom_set->decrease(v, (*_node_data)[vi].heap.prio());
946
947 if ((*_blossom_data)[vb].status == MATCHED) {
948 if (_delta2->state(vb) != _delta2->IN_HEAP) {
949 _delta2->push(vb, _blossom_set->classPrio(vb) -
950 (*_blossom_data)[vb].offset);
951 } else if ((*_delta2)[vb] > _blossom_set->classPrio(vb) -
952 (*_blossom_data)[vb].offset){
953 _delta2->decrease(vb, _blossom_set->classPrio(vb) -
954 (*_blossom_data)[vb].offset);
955 }
956 }
957 }
958 }
959 }
960 }
961 (*_blossom_data)[blossom].offset = 0;
962 }
963
964 void matchedToOdd(int blossom) {
965 if (_delta2->state(blossom) == _delta2->IN_HEAP) {
966 _delta2->erase(blossom);
967 }
968 (*_blossom_data)[blossom].offset += _delta_sum;
969 if (!_blossom_set->trivial(blossom)) {
970 _delta4->push(blossom, (*_blossom_data)[blossom].pot / 2 +
971 (*_blossom_data)[blossom].offset);
972 }
973 }
974
975 void evenToMatched(int blossom, int tree) {
976 if (!_blossom_set->trivial(blossom)) {
977 (*_blossom_data)[blossom].pot += 2 * _delta_sum;
978 }
979
980 for (typename BlossomSet::ItemIt n(*_blossom_set, blossom);
981 n != INVALID; ++n) {
982 int ni = (*_node_index)[n];
983 (*_node_data)[ni].pot -= _delta_sum;
984
985 _delta1->erase(n);
986
987 for (InArcIt e(_graph, n); e != INVALID; ++e) {
988 Node v = _graph.source(e);
989 int vb = _blossom_set->find(v);
990 int vi = (*_node_index)[v];
991
992 Value rw = (*_node_data)[ni].pot + (*_node_data)[vi].pot -
993 dualScale * _weight[e];
994
995 if (vb == blossom) {
996 if (_delta3->state(e) == _delta3->IN_HEAP) {
997 _delta3->erase(e);
998 }
999 } else if ((*_blossom_data)[vb].status == EVEN) {
1000
1001 if (_delta3->state(e) == _delta3->IN_HEAP) {
1002 _delta3->erase(e);
1003 }
1004
1005 int vt = _tree_set->find(vb);
1006
1007 if (vt != tree) {
1008
1009 Arc r = _graph.oppositeArc(e);
1010
1011 typename std::map<int, Arc>::iterator it =
1012 (*_node_data)[ni].heap_index.find(vt);
1013
1014 if (it != (*_node_data)[ni].heap_index.end()) {
1015 if ((*_node_data)[ni].heap[it->second] > rw) {
1016 (*_node_data)[ni].heap.replace(it->second, r);
1017 (*_node_data)[ni].heap.decrease(r, rw);
1018 it->second = r;
1019 }
1020 } else {
1021 (*_node_data)[ni].heap.push(r, rw);
1022 (*_node_data)[ni].heap_index.insert(std::make_pair(vt, r));
1023 }
1024
1025 if ((*_blossom_set)[n] > (*_node_data)[ni].heap.prio()) {
1026 _blossom_set->decrease(n, (*_node_data)[ni].heap.prio());
1027
1028 if (_delta2->state(blossom) != _delta2->IN_HEAP) {
1029 _delta2->push(blossom, _blossom_set->classPrio(blossom) -
1030 (*_blossom_data)[blossom].offset);
1031 } else if ((*_delta2)[blossom] >
1032 _blossom_set->classPrio(blossom) -
1033 (*_blossom_data)[blossom].offset){
1034 _delta2->decrease(blossom, _blossom_set->classPrio(blossom) -
1035 (*_blossom_data)[blossom].offset);
1036 }
1037 }
1038 }
1039
1040 } else if ((*_blossom_data)[vb].status == UNMATCHED) {
1041 if (_delta3->state(e) == _delta3->IN_HEAP) {
1042 _delta3->erase(e);
1043 }
1044 } else {
1045
1046 typename std::map<int, Arc>::iterator it =
1047 (*_node_data)[vi].heap_index.find(tree);
1048
1049 if (it != (*_node_data)[vi].heap_index.end()) {
1050 (*_node_data)[vi].heap.erase(it->second);
1051 (*_node_data)[vi].heap_index.erase(it);
1052 if ((*_node_data)[vi].heap.empty()) {
1053 _blossom_set->increase(v, std::numeric_limits<Value>::max());
1054 } else if ((*_blossom_set)[v] < (*_node_data)[vi].heap.prio()) {
1055 _blossom_set->increase(v, (*_node_data)[vi].heap.prio());
1056 }
1057
1058 if ((*_blossom_data)[vb].status == MATCHED) {
1059 if (_blossom_set->classPrio(vb) ==
1060 std::numeric_limits<Value>::max()) {
1061 _delta2->erase(vb);
1062 } else if ((*_delta2)[vb] < _blossom_set->classPrio(vb) -
1063 (*_blossom_data)[vb].offset) {
1064 _delta2->increase(vb, _blossom_set->classPrio(vb) -
1065 (*_blossom_data)[vb].offset);
1066 }
1067 }
1068 }
1069 }
1070 }
1071 }
1072 }
1073
1074 void oddToMatched(int blossom) {
1075 (*_blossom_data)[blossom].offset -= _delta_sum;
1076
1077 if (_blossom_set->classPrio(blossom) !=
1078 std::numeric_limits<Value>::max()) {
1079 _delta2->push(blossom, _blossom_set->classPrio(blossom) -
1080 (*_blossom_data)[blossom].offset);
1081 }
1082
1083 if (!_blossom_set->trivial(blossom)) {
1084 _delta4->erase(blossom);
1085 }
1086 }
1087
1088 void oddToEven(int blossom, int tree) {
1089 if (!_blossom_set->trivial(blossom)) {
1090 _delta4->erase(blossom);
1091 (*_blossom_data)[blossom].pot -=
1092 2 * (2 * _delta_sum - (*_blossom_data)[blossom].offset);
1093 }
1094
1095 for (typename BlossomSet::ItemIt n(*_blossom_set, blossom);
1096 n != INVALID; ++n) {
1097 int ni = (*_node_index)[n];
1098
1099 _blossom_set->increase(n, std::numeric_limits<Value>::max());
1100
1101 (*_node_data)[ni].heap.clear();
1102 (*_node_data)[ni].heap_index.clear();
1103 (*_node_data)[ni].pot +=
1104 2 * _delta_sum - (*_blossom_data)[blossom].offset;
1105
1106 _delta1->push(n, (*_node_data)[ni].pot);
1107
1108 for (InArcIt e(_graph, n); e != INVALID; ++e) {
1109 Node v = _graph.source(e);
1110 int vb = _blossom_set->find(v);
1111 int vi = (*_node_index)[v];
1112
1113 Value rw = (*_node_data)[ni].pot + (*_node_data)[vi].pot -
1114 dualScale * _weight[e];
1115
1116 if ((*_blossom_data)[vb].status == EVEN) {
1117 if (_delta3->state(e) != _delta3->IN_HEAP && blossom != vb) {
1118 _delta3->push(e, rw / 2);
1119 }
1120 } else if ((*_blossom_data)[vb].status == UNMATCHED) {
1121 if (_delta3->state(e) != _delta3->IN_HEAP) {
1122 _delta3->push(e, rw);
1123 }
1124 } else {
1125
1126 typename std::map<int, Arc>::iterator it =
1127 (*_node_data)[vi].heap_index.find(tree);
1128
1129 if (it != (*_node_data)[vi].heap_index.end()) {
1130 if ((*_node_data)[vi].heap[it->second] > rw) {
1131 (*_node_data)[vi].heap.replace(it->second, e);
1132 (*_node_data)[vi].heap.decrease(e, rw);
1133 it->second = e;
1134 }
1135 } else {
1136 (*_node_data)[vi].heap.push(e, rw);
1137 (*_node_data)[vi].heap_index.insert(std::make_pair(tree, e));
1138 }
1139
1140 if ((*_blossom_set)[v] > (*_node_data)[vi].heap.prio()) {
1141 _blossom_set->decrease(v, (*_node_data)[vi].heap.prio());
1142
1143 if ((*_blossom_data)[vb].status == MATCHED) {
1144 if (_delta2->state(vb) != _delta2->IN_HEAP) {
1145 _delta2->push(vb, _blossom_set->classPrio(vb) -
1146 (*_blossom_data)[vb].offset);
1147 } else if ((*_delta2)[vb] > _blossom_set->classPrio(vb) -
1148 (*_blossom_data)[vb].offset) {
1149 _delta2->decrease(vb, _blossom_set->classPrio(vb) -
1150 (*_blossom_data)[vb].offset);
1151 }
1152 }
1153 }
1154 }
1155 }
1156 }
1157 (*_blossom_data)[blossom].offset = 0;
1158 }
1159
1160
1161 void matchedToUnmatched(int blossom) {
1162 if (_delta2->state(blossom) == _delta2->IN_HEAP) {
1163 _delta2->erase(blossom);
1164 }
1165
1166 for (typename BlossomSet::ItemIt n(*_blossom_set, blossom);
1167 n != INVALID; ++n) {
1168 int ni = (*_node_index)[n];
1169
1170 _blossom_set->increase(n, std::numeric_limits<Value>::max());
1171
1172 (*_node_data)[ni].heap.clear();
1173 (*_node_data)[ni].heap_index.clear();
1174
1175 for (OutArcIt e(_graph, n); e != INVALID; ++e) {
1176 Node v = _graph.target(e);
1177 int vb = _blossom_set->find(v);
1178 int vi = (*_node_index)[v];
1179
1180 Value rw = (*_node_data)[ni].pot + (*_node_data)[vi].pot -
1181 dualScale * _weight[e];
1182
1183 if ((*_blossom_data)[vb].status == EVEN) {
1184 if (_delta3->state(e) != _delta3->IN_HEAP) {
1185 _delta3->push(e, rw);
1186 }
1187 }
1188 }
1189 }
1190 }
1191
1192 void unmatchedToMatched(int blossom) {
1193 for (typename BlossomSet::ItemIt n(*_blossom_set, blossom);
1194 n != INVALID; ++n) {
1195 int ni = (*_node_index)[n];
1196
1197 for (InArcIt e(_graph, n); e != INVALID; ++e) {
1198 Node v = _graph.source(e);
1199 int vb = _blossom_set->find(v);
1200 int vi = (*_node_index)[v];
1201
1202 Value rw = (*_node_data)[ni].pot + (*_node_data)[vi].pot -
1203 dualScale * _weight[e];
1204
1205 if (vb == blossom) {
1206 if (_delta3->state(e) == _delta3->IN_HEAP) {
1207 _delta3->erase(e);
1208 }
1209 } else if ((*_blossom_data)[vb].status == EVEN) {
1210
1211 if (_delta3->state(e) == _delta3->IN_HEAP) {
1212 _delta3->erase(e);
1213 }
1214
1215 int vt = _tree_set->find(vb);
1216
1217 Arc r = _graph.oppositeArc(e);
1218
1219 typename std::map<int, Arc>::iterator it =
1220 (*_node_data)[ni].heap_index.find(vt);
1221
1222 if (it != (*_node_data)[ni].heap_index.end()) {
1223 if ((*_node_data)[ni].heap[it->second] > rw) {
1224 (*_node_data)[ni].heap.replace(it->second, r);
1225 (*_node_data)[ni].heap.decrease(r, rw);
1226 it->second = r;
1227 }
1228 } else {
1229 (*_node_data)[ni].heap.push(r, rw);
1230 (*_node_data)[ni].heap_index.insert(std::make_pair(vt, r));
1231 }
1232
1233 if ((*_blossom_set)[n] > (*_node_data)[ni].heap.prio()) {
1234 _blossom_set->decrease(n, (*_node_data)[ni].heap.prio());
1235
1236 if (_delta2->state(blossom) != _delta2->IN_HEAP) {
1237 _delta2->push(blossom, _blossom_set->classPrio(blossom) -
1238 (*_blossom_data)[blossom].offset);
1239 } else if ((*_delta2)[blossom] > _blossom_set->classPrio(blossom)-
1240 (*_blossom_data)[blossom].offset){
1241 _delta2->decrease(blossom, _blossom_set->classPrio(blossom) -
1242 (*_blossom_data)[blossom].offset);
1243 }
1244 }
1245
1246 } else if ((*_blossom_data)[vb].status == UNMATCHED) {
1247 if (_delta3->state(e) == _delta3->IN_HEAP) {
1248 _delta3->erase(e);
1249 }
1250 }
1251 }
1252 }
1253 }
1254
1255 void alternatePath(int even, int tree) {
1256 int odd;
1257
1258 evenToMatched(even, tree);
1259 (*_blossom_data)[even].status = MATCHED;
1260
1261 while ((*_blossom_data)[even].pred != INVALID) {
1262 odd = _blossom_set->find(_graph.target((*_blossom_data)[even].pred));
1263 (*_blossom_data)[odd].status = MATCHED;
1264 oddToMatched(odd);
1265 (*_blossom_data)[odd].next = (*_blossom_data)[odd].pred;
1266
1267 even = _blossom_set->find(_graph.target((*_blossom_data)[odd].pred));
1268 (*_blossom_data)[even].status = MATCHED;
1269 evenToMatched(even, tree);
1270 (*_blossom_data)[even].next =
1271 _graph.oppositeArc((*_blossom_data)[odd].pred);
1272 }
1273
1274 }
1275
1276 void destroyTree(int tree) {
1277 for (TreeSet::ItemIt b(*_tree_set, tree); b != INVALID; ++b) {
1278 if ((*_blossom_data)[b].status == EVEN) {
1279 (*_blossom_data)[b].status = MATCHED;
1280 evenToMatched(b, tree);
1281 } else if ((*_blossom_data)[b].status == ODD) {
1282 (*_blossom_data)[b].status = MATCHED;
1283 oddToMatched(b);
1284 }
1285 }
1286 _tree_set->eraseClass(tree);
1287 }
1288
1289
1290 void unmatchNode(const Node& node) {
1291 int blossom = _blossom_set->find(node);
1292 int tree = _tree_set->find(blossom);
1293
1294 alternatePath(blossom, tree);
1295 destroyTree(tree);
1296
1297 (*_blossom_data)[blossom].status = UNMATCHED;
1298 (*_blossom_data)[blossom].base = node;
1299 matchedToUnmatched(blossom);
1300 }
1301
1302
1303 void augmentOnEdge(const Edge& edge) {
1304
1305 int left = _blossom_set->find(_graph.u(edge));
1306 int right = _blossom_set->find(_graph.v(edge));
1307
1308 if ((*_blossom_data)[left].status == EVEN) {
1309 int left_tree = _tree_set->find(left);
1310 alternatePath(left, left_tree);
1311 destroyTree(left_tree);
1312 } else {
1313 (*_blossom_data)[left].status = MATCHED;
1314 unmatchedToMatched(left);
1315 }
1316
1317 if ((*_blossom_data)[right].status == EVEN) {
1318 int right_tree = _tree_set->find(right);
1319 alternatePath(right, right_tree);
1320 destroyTree(right_tree);
1321 } else {
1322 (*_blossom_data)[right].status = MATCHED;
1323 unmatchedToMatched(right);
1324 }
1325
1326 (*_blossom_data)[left].next = _graph.direct(edge, true);
1327 (*_blossom_data)[right].next = _graph.direct(edge, false);
1328 }
1329
1330 void extendOnArc(const Arc& arc) {
1331 int base = _blossom_set->find(_graph.target(arc));
1332 int tree = _tree_set->find(base);
1333
1334 int odd = _blossom_set->find(_graph.source(arc));
1335 _tree_set->insert(odd, tree);
1336 (*_blossom_data)[odd].status = ODD;
1337 matchedToOdd(odd);
1338 (*_blossom_data)[odd].pred = arc;
1339
1340 int even = _blossom_set->find(_graph.target((*_blossom_data)[odd].next));
1341 (*_blossom_data)[even].pred = (*_blossom_data)[even].next;
1342 _tree_set->insert(even, tree);
1343 (*_blossom_data)[even].status = EVEN;
1344 matchedToEven(even, tree);
1345 }
1346
1347 void shrinkOnEdge(const Edge& edge, int tree) {
1348 int nca = -1;
1349 std::vector<int> left_path, right_path;
1350
1351 {
1352 std::set<int> left_set, right_set;
1353 int left = _blossom_set->find(_graph.u(edge));
1354 left_path.push_back(left);
1355 left_set.insert(left);
1356
1357 int right = _blossom_set->find(_graph.v(edge));
1358 right_path.push_back(right);
1359 right_set.insert(right);
1360
1361 while (true) {
1362
1363 if ((*_blossom_data)[left].pred == INVALID) break;
1364
1365 left =
1366 _blossom_set->find(_graph.target((*_blossom_data)[left].pred));
1367 left_path.push_back(left);
1368 left =
1369 _blossom_set->find(_graph.target((*_blossom_data)[left].pred));
1370 left_path.push_back(left);
1371
1372 left_set.insert(left);
1373
1374 if (right_set.find(left) != right_set.end()) {
1375 nca = left;
1376 break;
1377 }
1378
1379 if ((*_blossom_data)[right].pred == INVALID) break;
1380
1381 right =
1382 _blossom_set->find(_graph.target((*_blossom_data)[right].pred));
1383 right_path.push_back(right);
1384 right =
1385 _blossom_set->find(_graph.target((*_blossom_data)[right].pred));
1386 right_path.push_back(right);
1387
1388 right_set.insert(right);
1389
1390 if (left_set.find(right) != left_set.end()) {
1391 nca = right;
1392 break;
1393 }
1394
1395 }
1396
1397 if (nca == -1) {
1398 if ((*_blossom_data)[left].pred == INVALID) {
1399 nca = right;
1400 while (left_set.find(nca) == left_set.end()) {
1401 nca =
1402 _blossom_set->find(_graph.target((*_blossom_data)[nca].pred));
1403 right_path.push_back(nca);
1404 nca =
1405 _blossom_set->find(_graph.target((*_blossom_data)[nca].pred));
1406 right_path.push_back(nca);
1407 }
1408 } else {
1409 nca = left;
1410 while (right_set.find(nca) == right_set.end()) {
1411 nca =
1412 _blossom_set->find(_graph.target((*_blossom_data)[nca].pred));
1413 left_path.push_back(nca);
1414 nca =
1415 _blossom_set->find(_graph.target((*_blossom_data)[nca].pred));
1416 left_path.push_back(nca);
1417 }
1418 }
1419 }
1420 }
1421
1422 std::vector<int> subblossoms;
1423 Arc prev;
1424
1425 prev = _graph.direct(edge, true);
1426 for (int i = 0; left_path[i] != nca; i += 2) {
1427 subblossoms.push_back(left_path[i]);
1428 (*_blossom_data)[left_path[i]].next = prev;
1429 _tree_set->erase(left_path[i]);
1430
1431 subblossoms.push_back(left_path[i + 1]);
1432 (*_blossom_data)[left_path[i + 1]].status = EVEN;
1433 oddToEven(left_path[i + 1], tree);
1434 _tree_set->erase(left_path[i + 1]);
1435 prev = _graph.oppositeArc((*_blossom_data)[left_path[i + 1]].pred);
1436 }
1437
1438 int k = 0;
1439 while (right_path[k] != nca) ++k;
1440
1441 subblossoms.push_back(nca);
1442 (*_blossom_data)[nca].next = prev;
1443
1444 for (int i = k - 2; i >= 0; i -= 2) {
1445 subblossoms.push_back(right_path[i + 1]);
1446 (*_blossom_data)[right_path[i + 1]].status = EVEN;
1447 oddToEven(right_path[i + 1], tree);
1448 _tree_set->erase(right_path[i + 1]);
1449
1450 (*_blossom_data)[right_path[i + 1]].next =
1451 (*_blossom_data)[right_path[i + 1]].pred;
1452
1453 subblossoms.push_back(right_path[i]);
1454 _tree_set->erase(right_path[i]);
1455 }
1456
1457 int surface =
1458 _blossom_set->join(subblossoms.begin(), subblossoms.end());
1459
1460 for (int i = 0; i < int(subblossoms.size()); ++i) {
1461 if (!_blossom_set->trivial(subblossoms[i])) {
1462 (*_blossom_data)[subblossoms[i]].pot += 2 * _delta_sum;
1463 }
1464 (*_blossom_data)[subblossoms[i]].status = MATCHED;
1465 }
1466
1467 (*_blossom_data)[surface].pot = -2 * _delta_sum;
1468 (*_blossom_data)[surface].offset = 0;
1469 (*_blossom_data)[surface].status = EVEN;
1470 (*_blossom_data)[surface].pred = (*_blossom_data)[nca].pred;
1471 (*_blossom_data)[surface].next = (*_blossom_data)[nca].pred;
1472
1473 _tree_set->insert(surface, tree);
1474 _tree_set->erase(nca);
1475 }
1476
1477 void splitBlossom(int blossom) {
1478 Arc next = (*_blossom_data)[blossom].next;
1479 Arc pred = (*_blossom_data)[blossom].pred;
1480
1481 int tree = _tree_set->find(blossom);
1482
1483 (*_blossom_data)[blossom].status = MATCHED;
1484 oddToMatched(blossom);
1485 if (_delta2->state(blossom) == _delta2->IN_HEAP) {
1486 _delta2->erase(blossom);
1487 }
1488
1489 std::vector<int> subblossoms;
1490 _blossom_set->split(blossom, std::back_inserter(subblossoms));
1491
1492 Value offset = (*_blossom_data)[blossom].offset;
1493 int b = _blossom_set->find(_graph.source(pred));
1494 int d = _blossom_set->find(_graph.source(next));
1495
1496 int ib = -1, id = -1;
1497 for (int i = 0; i < int(subblossoms.size()); ++i) {
1498 if (subblossoms[i] == b) ib = i;
1499 if (subblossoms[i] == d) id = i;
1500
1501 (*_blossom_data)[subblossoms[i]].offset = offset;
1502 if (!_blossom_set->trivial(subblossoms[i])) {
1503 (*_blossom_data)[subblossoms[i]].pot -= 2 * offset;
1504 }
1505 if (_blossom_set->classPrio(subblossoms[i]) !=
1506 std::numeric_limits<Value>::max()) {
1507 _delta2->push(subblossoms[i],
1508 _blossom_set->classPrio(subblossoms[i]) -
1509 (*_blossom_data)[subblossoms[i]].offset);
1510 }
1511 }
1512
1513 if (id > ib ? ((id - ib) % 2 == 0) : ((ib - id) % 2 == 1)) {
1514 for (int i = (id + 1) % subblossoms.size();
1515 i != ib; i = (i + 2) % subblossoms.size()) {
1516 int sb = subblossoms[i];
1517 int tb = subblossoms[(i + 1) % subblossoms.size()];
1518 (*_blossom_data)[sb].next =
1519 _graph.oppositeArc((*_blossom_data)[tb].next);
1520 }
1521
1522 for (int i = ib; i != id; i = (i + 2) % subblossoms.size()) {
1523 int sb = subblossoms[i];
1524 int tb = subblossoms[(i + 1) % subblossoms.size()];
1525 int ub = subblossoms[(i + 2) % subblossoms.size()];
1526
1527 (*_blossom_data)[sb].status = ODD;
1528 matchedToOdd(sb);
1529 _tree_set->insert(sb, tree);
1530 (*_blossom_data)[sb].pred = pred;
1531 (*_blossom_data)[sb].next =
1532 _graph.oppositeArc((*_blossom_data)[tb].next);
1533
1534 pred = (*_blossom_data)[ub].next;
1535
1536 (*_blossom_data)[tb].status = EVEN;
1537 matchedToEven(tb, tree);
1538 _tree_set->insert(tb, tree);
1539 (*_blossom_data)[tb].pred = (*_blossom_data)[tb].next;
1540 }
1541
1542 (*_blossom_data)[subblossoms[id]].status = ODD;
1543 matchedToOdd(subblossoms[id]);
1544 _tree_set->insert(subblossoms[id], tree);
1545 (*_blossom_data)[subblossoms[id]].next = next;
1546 (*_blossom_data)[subblossoms[id]].pred = pred;
1547
1548 } else {
1549
1550 for (int i = (ib + 1) % subblossoms.size();
1551 i != id; i = (i + 2) % subblossoms.size()) {
1552 int sb = subblossoms[i];
1553 int tb = subblossoms[(i + 1) % subblossoms.size()];
1554 (*_blossom_data)[sb].next =
1555 _graph.oppositeArc((*_blossom_data)[tb].next);
1556 }
1557
1558 for (int i = id; i != ib; i = (i + 2) % subblossoms.size()) {
1559 int sb = subblossoms[i];
1560 int tb = subblossoms[(i + 1) % subblossoms.size()];
1561 int ub = subblossoms[(i + 2) % subblossoms.size()];
1562
1563 (*_blossom_data)[sb].status = ODD;
1564 matchedToOdd(sb);
1565 _tree_set->insert(sb, tree);
1566 (*_blossom_data)[sb].next = next;
1567 (*_blossom_data)[sb].pred =
1568 _graph.oppositeArc((*_blossom_data)[tb].next);
1569
1570 (*_blossom_data)[tb].status = EVEN;
1571 matchedToEven(tb, tree);
1572 _tree_set->insert(tb, tree);
1573 (*_blossom_data)[tb].pred =
1574 (*_blossom_data)[tb].next =
1575 _graph.oppositeArc((*_blossom_data)[ub].next);
1576 next = (*_blossom_data)[ub].next;
1577 }
1578
1579 (*_blossom_data)[subblossoms[ib]].status = ODD;
1580 matchedToOdd(subblossoms[ib]);
1581 _tree_set->insert(subblossoms[ib], tree);
1582 (*_blossom_data)[subblossoms[ib]].next = next;
1583 (*_blossom_data)[subblossoms[ib]].pred = pred;
1584 }
1585 _tree_set->erase(blossom);
1586 }
1587
1588 void extractBlossom(int blossom, const Node& base, const Arc& matching) {
1589 if (_blossom_set->trivial(blossom)) {
1590 int bi = (*_node_index)[base];
1591 Value pot = (*_node_data)[bi].pot;
1592
1593 (*_matching)[base] = matching;
1594 _blossom_node_list.push_back(base);
1595 (*_node_potential)[base] = pot;
1596 } else {
1597
1598 Value pot = (*_blossom_data)[blossom].pot;
1599 int bn = _blossom_node_list.size();
1600
1601 std::vector<int> subblossoms;
1602 _blossom_set->split(blossom, std::back_inserter(subblossoms));
1603 int b = _blossom_set->find(base);
1604 int ib = -1;
1605 for (int i = 0; i < int(subblossoms.size()); ++i) {
1606 if (subblossoms[i] == b) { ib = i; break; }
1607 }
1608
1609 for (int i = 1; i < int(subblossoms.size()); i += 2) {
1610 int sb = subblossoms[(ib + i) % subblossoms.size()];
1611 int tb = subblossoms[(ib + i + 1) % subblossoms.size()];
1612
1613 Arc m = (*_blossom_data)[tb].next;
1614 extractBlossom(sb, _graph.target(m), _graph.oppositeArc(m));
1615 extractBlossom(tb, _graph.source(m), m);
1616 }
1617 extractBlossom(subblossoms[ib], base, matching);
1618
1619 int en = _blossom_node_list.size();
1620
1621 _blossom_potential.push_back(BlossomVariable(bn, en, pot));
1622 }
1623 }
1624
1625 void extractMatching() {
1626 std::vector<int> blossoms;
1627 for (typename BlossomSet::ClassIt c(*_blossom_set); c != INVALID; ++c) {
1628 blossoms.push_back(c);
1629 }
1630
1631 for (int i = 0; i < int(blossoms.size()); ++i) {
1632 if ((*_blossom_data)[blossoms[i]].status == MATCHED) {
1633
1634 Value offset = (*_blossom_data)[blossoms[i]].offset;
1635 (*_blossom_data)[blossoms[i]].pot += 2 * offset;
1636 for (typename BlossomSet::ItemIt n(*_blossom_set, blossoms[i]);
1637 n != INVALID; ++n) {
1638 (*_node_data)[(*_node_index)[n]].pot -= offset;
1639 }
1640
1641 Arc matching = (*_blossom_data)[blossoms[i]].next;
1642 Node base = _graph.source(matching);
1643 extractBlossom(blossoms[i], base, matching);
1644 } else {
1645 Node base = (*_blossom_data)[blossoms[i]].base;
1646 extractBlossom(blossoms[i], base, INVALID);
1647 }
1648 }
1649 }
1650
1651 public:
1652
1653 /// \brief Constructor
1654 ///
1655 /// Constructor.
1656 MaxWeightedMatching(const Graph& graph, const WeightMap& weight)
1657 : _graph(graph), _weight(weight), _matching(0),
1658 _node_potential(0), _blossom_potential(), _blossom_node_list(),
1659 _node_num(0), _blossom_num(0),
1660
1661 _blossom_index(0), _blossom_set(0), _blossom_data(0),
1662 _node_index(0), _node_heap_index(0), _node_data(0),
1663 _tree_set_index(0), _tree_set(0),
1664
1665 _delta1_index(0), _delta1(0),
1666 _delta2_index(0), _delta2(0),
1667 _delta3_index(0), _delta3(0),
1668 _delta4_index(0), _delta4(0),
1669
1670 _delta_sum() {}
1671
1672 ~MaxWeightedMatching() {
1673 destroyStructures();
1674 }
1675
1676 /// \name Execution Control
1677 /// The simplest way to execute the algorithm is to use the
1678 /// \ref run() member function.
1679
1680 ///@{
1681
1682 /// \brief Initialize the algorithm
1683 ///
1684 /// This function initializes the algorithm.
1685 void init() {
1686 createStructures();
1687
1688 for (ArcIt e(_graph); e != INVALID; ++e) {
1689 (*_node_heap_index)[e] = BinHeap<Value, IntArcMap>::PRE_HEAP;
1690 }
1691 for (NodeIt n(_graph); n != INVALID; ++n) {
1692 (*_delta1_index)[n] = _delta1->PRE_HEAP;
1693 }
1694 for (EdgeIt e(_graph); e != INVALID; ++e) {
1695 (*_delta3_index)[e] = _delta3->PRE_HEAP;
1696 }
1697 for (int i = 0; i < _blossom_num; ++i) {
1698 (*_delta2_index)[i] = _delta2->PRE_HEAP;
1699 (*_delta4_index)[i] = _delta4->PRE_HEAP;
1700 }
1701
1702 int index = 0;
1703 for (NodeIt n(_graph); n != INVALID; ++n) {
1704 Value max = 0;
1705 for (OutArcIt e(_graph, n); e != INVALID; ++e) {
1706 if (_graph.target(e) == n) continue;
1707 if ((dualScale * _weight[e]) / 2 > max) {
1708 max = (dualScale * _weight[e]) / 2;
1709 }
1710 }
1711 (*_node_index)[n] = index;
1712 (*_node_data)[index].pot = max;
1713 _delta1->push(n, max);
1714 int blossom =
1715 _blossom_set->insert(n, std::numeric_limits<Value>::max());
1716
1717 _tree_set->insert(blossom);
1718
1719 (*_blossom_data)[blossom].status = EVEN;
1720 (*_blossom_data)[blossom].pred = INVALID;
1721 (*_blossom_data)[blossom].next = INVALID;
1722 (*_blossom_data)[blossom].pot = 0;
1723 (*_blossom_data)[blossom].offset = 0;
1724 ++index;
1725 }
1726 for (EdgeIt e(_graph); e != INVALID; ++e) {
1727 int si = (*_node_index)[_graph.u(e)];
1728 int ti = (*_node_index)[_graph.v(e)];
1729 if (_graph.u(e) != _graph.v(e)) {
1730 _delta3->push(e, ((*_node_data)[si].pot + (*_node_data)[ti].pot -
1731 dualScale * _weight[e]) / 2);
1732 }
1733 }
1734 }
1735
1736 /// \brief Start the algorithm
1737 ///
1738 /// This function starts the algorithm.
1739 ///
1740 /// \pre \ref init() must be called before using this function.
1741 void start() {
1742 enum OpType {
1743 D1, D2, D3, D4
1744 };
1745
1746 int unmatched = _node_num;
1747 while (unmatched > 0) {
1748 Value d1 = !_delta1->empty() ?
1749 _delta1->prio() : std::numeric_limits<Value>::max();
1750
1751 Value d2 = !_delta2->empty() ?
1752 _delta2->prio() : std::numeric_limits<Value>::max();
1753
1754 Value d3 = !_delta3->empty() ?
1755 _delta3->prio() : std::numeric_limits<Value>::max();
1756
1757 Value d4 = !_delta4->empty() ?
1758 _delta4->prio() : std::numeric_limits<Value>::max();
1759
1760 _delta_sum = d1; OpType ot = D1;
1761 if (d2 < _delta_sum) { _delta_sum = d2; ot = D2; }
1762 if (d3 < _delta_sum) { _delta_sum = d3; ot = D3; }
1763 if (d4 < _delta_sum) { _delta_sum = d4; ot = D4; }
1764
1765
1766 switch (ot) {
1767 case D1:
1768 {
1769 Node n = _delta1->top();
1770 unmatchNode(n);
1771 --unmatched;
1772 }
1773 break;
1774 case D2:
1775 {
1776 int blossom = _delta2->top();
1777 Node n = _blossom_set->classTop(blossom);
1778 Arc e = (*_node_data)[(*_node_index)[n]].heap.top();
1779 extendOnArc(e);
1780 }
1781 break;
1782 case D3:
1783 {
1784 Edge e = _delta3->top();
1785
1786 int left_blossom = _blossom_set->find(_graph.u(e));
1787 int right_blossom = _blossom_set->find(_graph.v(e));
1788
1789 if (left_blossom == right_blossom) {
1790 _delta3->pop();
1791 } else {
1792 int left_tree;
1793 if ((*_blossom_data)[left_blossom].status == EVEN) {
1794 left_tree = _tree_set->find(left_blossom);
1795 } else {
1796 left_tree = -1;
1797 ++unmatched;
1798 }
1799 int right_tree;
1800 if ((*_blossom_data)[right_blossom].status == EVEN) {
1801 right_tree = _tree_set->find(right_blossom);
1802 } else {
1803 right_tree = -1;
1804 ++unmatched;
1805 }
1806
1807 if (left_tree == right_tree) {
1808 shrinkOnEdge(e, left_tree);
1809 } else {
1810 augmentOnEdge(e);
1811 unmatched -= 2;
1812 }
1813 }
1814 } break;
1815 case D4:
1816 splitBlossom(_delta4->top());
1817 break;
1818 }
1819 }
1820 extractMatching();
1821 }
1822
1823 /// \brief Run the algorithm.
1824 ///
1825 /// This method runs the \c %MaxWeightedMatching algorithm.
1826 ///
1827 /// \note mwm.run() is just a shortcut of the following code.
1828 /// \code
1829 /// mwm.init();
1830 /// mwm.start();
1831 /// \endcode
1832 void run() {
1833 init();
1834 start();
1835 }
1836
1837 /// @}
1838
1839 /// \name Primal Solution
1840 /// Functions to get the primal solution, i.e. the maximum weighted
1841 /// matching.\n
1842 /// Either \ref run() or \ref start() function should be called before
1843 /// using them.
1844
1845 /// @{
1846
1847 /// \brief Return the weight of the matching.
1848 ///
1849 /// This function returns the weight of the found matching.
1850 ///
1851 /// \pre Either run() or start() must be called before using this function.
1852 Value matchingWeight() const {
1853 Value sum = 0;
1854 for (NodeIt n(_graph); n != INVALID; ++n) {
1855 if ((*_matching)[n] != INVALID) {
1856 sum += _weight[(*_matching)[n]];
1857 }
1858 }
1859 return sum /= 2;
1860 }
1861
1862 /// \brief Return the size (cardinality) of the matching.
1863 ///
1864 /// This function returns the size (cardinality) of the found matching.
1865 ///
1866 /// \pre Either run() or start() must be called before using this function.
1867 int matchingSize() const {
1868 int num = 0;
1869 for (NodeIt n(_graph); n != INVALID; ++n) {
1870 if ((*_matching)[n] != INVALID) {
1871 ++num;
1872 }
1873 }
1874 return num /= 2;
1875 }
1876
1877 /// \brief Return \c true if the given edge is in the matching.
1878 ///
1879 /// This function returns \c true if the given edge is in the found
1880 /// matching.
1881 ///
1882 /// \pre Either run() or start() must be called before using this function.
1883 bool matching(const Edge& edge) const {
1884 return edge == (*_matching)[_graph.u(edge)];
1885 }
1886
1887 /// \brief Return the matching arc (or edge) incident to the given node.
1888 ///
1889 /// This function returns the matching arc (or edge) incident to the
1890 /// given node in the found matching or \c INVALID if the node is
1891 /// not covered by the matching.
1892 ///
1893 /// \pre Either run() or start() must be called before using this function.
1894 Arc matching(const Node& node) const {
1895 return (*_matching)[node];
1896 }
1897
1898 /// \brief Return a const reference to the matching map.
1899 ///
1900 /// This function returns a const reference to a node map that stores
1901 /// the matching arc (or edge) incident to each node.
1902 const MatchingMap& matchingMap() const {
1903 return *_matching;
1904 }
1905
1906 /// \brief Return the mate of the given node.
1907 ///
1908 /// This function returns the mate of the given node in the found
1909 /// matching or \c INVALID if the node is not covered by the matching.
1910 ///
1911 /// \pre Either run() or start() must be called before using this function.
1912 Node mate(const Node& node) const {
1913 return (*_matching)[node] != INVALID ?
1914 _graph.target((*_matching)[node]) : INVALID;
1915 }
1916
1917 /// @}
1918
1919 /// \name Dual Solution
1920 /// Functions to get the dual solution.\n
1921 /// Either \ref run() or \ref start() function should be called before
1922 /// using them.
1923
1924 /// @{
1925
1926 /// \brief Return the value of the dual solution.
1927 ///
1928 /// This function returns the value of the dual solution.
1929 /// It should be equal to the primal value scaled by \ref dualScale
1930 /// "dual scale".
1931 ///
1932 /// \pre Either run() or start() must be called before using this function.
1933 Value dualValue() const {
1934 Value sum = 0;
1935 for (NodeIt n(_graph); n != INVALID; ++n) {
1936 sum += nodeValue(n);
1937 }
1938 for (int i = 0; i < blossomNum(); ++i) {
1939 sum += blossomValue(i) * (blossomSize(i) / 2);
1940 }
1941 return sum;
1942 }
1943
1944 /// \brief Return the dual value (potential) of the given node.
1945 ///
1946 /// This function returns the dual value (potential) of the given node.
1947 ///
1948 /// \pre Either run() or start() must be called before using this function.
1949 Value nodeValue(const Node& n) const {
1950 return (*_node_potential)[n];
1951 }
1952
1953 /// \brief Return the number of the blossoms in the basis.
1954 ///
1955 /// This function returns the number of the blossoms in the basis.
1956 ///
1957 /// \pre Either run() or start() must be called before using this function.
1958 /// \see BlossomIt
1959 int blossomNum() const {
1960 return _blossom_potential.size();
1961 }
1962
1963 /// \brief Return the number of the nodes in the given blossom.
1964 ///
1965 /// This function returns the number of the nodes in the given blossom.
1966 ///
1967 /// \pre Either run() or start() must be called before using this function.
1968 /// \see BlossomIt
1969 int blossomSize(int k) const {
1970 return _blossom_potential[k].end - _blossom_potential[k].begin;
1971 }
1972
1973 /// \brief Return the dual value (ptential) of the given blossom.
1974 ///
1975 /// This function returns the dual value (ptential) of the given blossom.
1976 ///
1977 /// \pre Either run() or start() must be called before using this function.
1978 Value blossomValue(int k) const {
1979 return _blossom_potential[k].value;
1980 }
1981
1982 /// \brief Iterator for obtaining the nodes of a blossom.
1983 ///
1984 /// This class provides an iterator for obtaining the nodes of the
1985 /// given blossom. It lists a subset of the nodes.
1986 /// Before using this iterator, you must allocate a
1987 /// MaxWeightedMatching class and execute it.
1988 class BlossomIt {
1989 public:
1990
1991 /// \brief Constructor.
1992 ///
1993 /// Constructor to get the nodes of the given variable.
1994 ///
1995 /// \pre Either \ref MaxWeightedMatching::run() "algorithm.run()" or
1996 /// \ref MaxWeightedMatching::start() "algorithm.start()" must be
1997 /// called before initializing this iterator.
1998 BlossomIt(const MaxWeightedMatching& algorithm, int variable)
1999 : _algorithm(&algorithm)
2000 {
2001 _index = _algorithm->_blossom_potential[variable].begin;
2002 _last = _algorithm->_blossom_potential[variable].end;
2003 }
2004
2005 /// \brief Conversion to \c Node.
2006 ///
2007 /// Conversion to \c Node.
2008 operator Node() const {
2009 return _algorithm->_blossom_node_list[_index];
2010 }
2011
2012 /// \brief Increment operator.
2013 ///
2014 /// Increment operator.
2015 BlossomIt& operator++() {
2016 ++_index;
2017 return *this;
2018 }
2019
2020 /// \brief Validity checking
2021 ///
2022 /// Checks whether the iterator is invalid.
2023 bool operator==(Invalid) const { return _index == _last; }
2024
2025 /// \brief Validity checking
2026 ///
2027 /// Checks whether the iterator is valid.
2028 bool operator!=(Invalid) const { return _index != _last; }
2029
2030 private:
2031 const MaxWeightedMatching* _algorithm;
2032 int _last;
2033 int _index;
2034 };
2035
2036 /// @}
2037
2038 };
2039
2040 /// \ingroup matching
2041 ///
2042 /// \brief Weighted perfect matching in general graphs
2043 ///
2044 /// This class provides an efficient implementation of Edmond's
2045 /// maximum weighted perfect matching algorithm. The implementation
2046 /// is based on extensive use of priority queues and provides
2047 /// \f$O(nm\log n)\f$ time complexity.
2048 ///
2049 /// The maximum weighted perfect matching problem is to find a subset of
2050 /// the edges in an undirected graph with maximum overall weight for which
2051 /// each node has exactly one incident edge.
2052 /// It can be formulated with the following linear program.
2053 /// \f[ \sum_{e \in \delta(u)}x_e = 1 \quad \forall u\in V\f]
2054 /** \f[ \sum_{e \in \gamma(B)}x_e \le \frac{\vert B \vert - 1}{2}
2055 \quad \forall B\in\mathcal{O}\f] */
2056 /// \f[x_e \ge 0\quad \forall e\in E\f]
2057 /// \f[\max \sum_{e\in E}x_ew_e\f]
2058 /// where \f$\delta(X)\f$ is the set of edges incident to a node in
2059 /// \f$X\f$, \f$\gamma(X)\f$ is the set of edges with both ends in
2060 /// \f$X\f$ and \f$\mathcal{O}\f$ is the set of odd cardinality
2061 /// subsets of the nodes.
2062 ///
2063 /// The algorithm calculates an optimal matching and a proof of the
2064 /// optimality. The solution of the dual problem can be used to check
2065 /// the result of the algorithm. The dual linear problem is the
2066 /// following.
2067 /** \f[ y_u + y_v + \sum_{B \in \mathcal{O}, uv \in \gamma(B)}z_B \ge
2068 w_{uv} \quad \forall uv\in E\f] */
2069 /// \f[z_B \ge 0 \quad \forall B \in \mathcal{O}\f]
2070 /** \f[\min \sum_{u \in V}y_u + \sum_{B \in \mathcal{O}}
2071 \frac{\vert B \vert - 1}{2}z_B\f] */
2072 ///
2073 /// The algorithm can be executed with the run() function.
2074 /// After it the matching (the primal solution) and the dual solution
2075 /// can be obtained using the query functions and the
2076 /// \ref MaxWeightedPerfectMatching::BlossomIt "BlossomIt" nested class,
2077 /// which is able to iterate on the nodes of a blossom.
2078 /// If the value type is integer, then the dual solution is multiplied
2079 /// by \ref MaxWeightedMatching::dualScale "4".
2080 ///
2081 /// \tparam GR The undirected graph type the algorithm runs on.
2082 /// \tparam WM The type edge weight map. The default type is
2083 /// \ref concepts::Graph::EdgeMap "GR::EdgeMap<int>".
2084#ifdef DOXYGEN
2085 template <typename GR, typename WM>
2086#else
2087 template <typename GR,
2088 typename WM = typename GR::template EdgeMap<int> >
2089#endif
2090 class MaxWeightedPerfectMatching {
2091 public:
2092
2093 /// The graph type of the algorithm
2094 typedef GR Graph;
2095 /// The type of the edge weight map
2096 typedef WM WeightMap;
2097 /// The value type of the edge weights
2098 typedef typename WeightMap::Value Value;
2099
2100 /// \brief Scaling factor for dual solution
2101 ///
2102 /// Scaling factor for dual solution, it is equal to 4 or 1
2103 /// according to the value type.
2104 static const int dualScale =
2105 std::numeric_limits<Value>::is_integer ? 4 : 1;
2106
2107 /// The type of the matching map
2108 typedef typename Graph::template NodeMap<typename Graph::Arc>
2109 MatchingMap;
2110
2111 private:
2112
2113 TEMPLATE_GRAPH_TYPEDEFS(Graph);
2114
2115 typedef typename Graph::template NodeMap<Value> NodePotential;
2116 typedef std::vector<Node> BlossomNodeList;
2117
2118 struct BlossomVariable {
2119 int begin, end;
2120 Value value;
2121
2122 BlossomVariable(int _begin, int _end, Value _value)
2123 : begin(_begin), end(_end), value(_value) {}
2124
2125 };
2126
2127 typedef std::vector<BlossomVariable> BlossomPotential;
2128
2129 const Graph& _graph;
2130 const WeightMap& _weight;
2131
2132 MatchingMap* _matching;
2133
2134 NodePotential* _node_potential;
2135
2136 BlossomPotential _blossom_potential;
2137 BlossomNodeList _blossom_node_list;
2138
2139 int _node_num;
2140 int _blossom_num;
2141
2142 typedef RangeMap<int> IntIntMap;
2143
2144 enum Status {
2145 EVEN = -1, MATCHED = 0, ODD = 1
2146 };
2147
2148 typedef HeapUnionFind<Value, IntNodeMap> BlossomSet;
2149 struct BlossomData {
2150 int tree;
2151 Status status;
2152 Arc pred, next;
2153 Value pot, offset;
2154 };
2155
2156 IntNodeMap *_blossom_index;
2157 BlossomSet *_blossom_set;
2158 RangeMap<BlossomData>* _blossom_data;
2159
2160 IntNodeMap *_node_index;
2161 IntArcMap *_node_heap_index;
2162
2163 struct NodeData {
2164
2165 NodeData(IntArcMap& node_heap_index)
2166 : heap(node_heap_index) {}
2167
2168 int blossom;
2169 Value pot;
2170 BinHeap<Value, IntArcMap> heap;
2171 std::map<int, Arc> heap_index;
2172
2173 int tree;
2174 };
2175
2176 RangeMap<NodeData>* _node_data;
2177
2178 typedef ExtendFindEnum<IntIntMap> TreeSet;
2179
2180 IntIntMap *_tree_set_index;
2181 TreeSet *_tree_set;
2182
2183 IntIntMap *_delta2_index;
2184 BinHeap<Value, IntIntMap> *_delta2;
2185
2186 IntEdgeMap *_delta3_index;
2187 BinHeap<Value, IntEdgeMap> *_delta3;
2188
2189 IntIntMap *_delta4_index;
2190 BinHeap<Value, IntIntMap> *_delta4;
2191
2192 Value _delta_sum;
2193
2194 void createStructures() {
2195 _node_num = countNodes(_graph);
2196 _blossom_num = _node_num * 3 / 2;
2197
2198 if (!_matching) {
2199 _matching = new MatchingMap(_graph);
2200 }
2201 if (!_node_potential) {
2202 _node_potential = new NodePotential(_graph);
2203 }
2204 if (!_blossom_set) {
2205 _blossom_index = new IntNodeMap(_graph);
2206 _blossom_set = new BlossomSet(*_blossom_index);
2207 _blossom_data = new RangeMap<BlossomData>(_blossom_num);
2208 }
2209
2210 if (!_node_index) {
2211 _node_index = new IntNodeMap(_graph);
2212 _node_heap_index = new IntArcMap(_graph);
2213 _node_data = new RangeMap<NodeData>(_node_num,
2214 NodeData(*_node_heap_index));
2215 }
2216
2217 if (!_tree_set) {
2218 _tree_set_index = new IntIntMap(_blossom_num);
2219 _tree_set = new TreeSet(*_tree_set_index);
2220 }
2221 if (!_delta2) {
2222 _delta2_index = new IntIntMap(_blossom_num);
2223 _delta2 = new BinHeap<Value, IntIntMap>(*_delta2_index);
2224 }
2225 if (!_delta3) {
2226 _delta3_index = new IntEdgeMap(_graph);
2227 _delta3 = new BinHeap<Value, IntEdgeMap>(*_delta3_index);
2228 }
2229 if (!_delta4) {
2230 _delta4_index = new IntIntMap(_blossom_num);
2231 _delta4 = new BinHeap<Value, IntIntMap>(*_delta4_index);
2232 }
2233 }
2234
2235 void destroyStructures() {
2236 _node_num = countNodes(_graph);
2237 _blossom_num = _node_num * 3 / 2;
2238
2239 if (_matching) {
2240 delete _matching;
2241 }
2242 if (_node_potential) {
2243 delete _node_potential;
2244 }
2245 if (_blossom_set) {
2246 delete _blossom_index;
2247 delete _blossom_set;
2248 delete _blossom_data;
2249 }
2250
2251 if (_node_index) {
2252 delete _node_index;
2253 delete _node_heap_index;
2254 delete _node_data;
2255 }
2256
2257 if (_tree_set) {
2258 delete _tree_set_index;
2259 delete _tree_set;
2260 }
2261 if (_delta2) {
2262 delete _delta2_index;
2263 delete _delta2;
2264 }
2265 if (_delta3) {
2266 delete _delta3_index;
2267 delete _delta3;
2268 }
2269 if (_delta4) {
2270 delete _delta4_index;
2271 delete _delta4;
2272 }
2273 }
2274
2275 void matchedToEven(int blossom, int tree) {
2276 if (_delta2->state(blossom) == _delta2->IN_HEAP) {
2277 _delta2->erase(blossom);
2278 }
2279
2280 if (!_blossom_set->trivial(blossom)) {
2281 (*_blossom_data)[blossom].pot -=
2282 2 * (_delta_sum - (*_blossom_data)[blossom].offset);
2283 }
2284
2285 for (typename BlossomSet::ItemIt n(*_blossom_set, blossom);
2286 n != INVALID; ++n) {
2287
2288 _blossom_set->increase(n, std::numeric_limits<Value>::max());
2289 int ni = (*_node_index)[n];
2290
2291 (*_node_data)[ni].heap.clear();
2292 (*_node_data)[ni].heap_index.clear();
2293
2294 (*_node_data)[ni].pot += _delta_sum - (*_blossom_data)[blossom].offset;
2295
2296 for (InArcIt e(_graph, n); e != INVALID; ++e) {
2297 Node v = _graph.source(e);
2298 int vb = _blossom_set->find(v);
2299 int vi = (*_node_index)[v];
2300
2301 Value rw = (*_node_data)[ni].pot + (*_node_data)[vi].pot -
2302 dualScale * _weight[e];
2303
2304 if ((*_blossom_data)[vb].status == EVEN) {
2305 if (_delta3->state(e) != _delta3->IN_HEAP && blossom != vb) {
2306 _delta3->push(e, rw / 2);
2307 }
2308 } else {
2309 typename std::map<int, Arc>::iterator it =
2310 (*_node_data)[vi].heap_index.find(tree);
2311
2312 if (it != (*_node_data)[vi].heap_index.end()) {
2313 if ((*_node_data)[vi].heap[it->second] > rw) {
2314 (*_node_data)[vi].heap.replace(it->second, e);
2315 (*_node_data)[vi].heap.decrease(e, rw);
2316 it->second = e;
2317 }
2318 } else {
2319 (*_node_data)[vi].heap.push(e, rw);
2320 (*_node_data)[vi].heap_index.insert(std::make_pair(tree, e));
2321 }
2322
2323 if ((*_blossom_set)[v] > (*_node_data)[vi].heap.prio()) {
2324 _blossom_set->decrease(v, (*_node_data)[vi].heap.prio());
2325
2326 if ((*_blossom_data)[vb].status == MATCHED) {
2327 if (_delta2->state(vb) != _delta2->IN_HEAP) {
2328 _delta2->push(vb, _blossom_set->classPrio(vb) -
2329 (*_blossom_data)[vb].offset);
2330 } else if ((*_delta2)[vb] > _blossom_set->classPrio(vb) -
2331 (*_blossom_data)[vb].offset){
2332 _delta2->decrease(vb, _blossom_set->classPrio(vb) -
2333 (*_blossom_data)[vb].offset);
2334 }
2335 }
2336 }
2337 }
2338 }
2339 }
2340 (*_blossom_data)[blossom].offset = 0;
2341 }
2342
2343 void matchedToOdd(int blossom) {
2344 if (_delta2->state(blossom) == _delta2->IN_HEAP) {
2345 _delta2->erase(blossom);
2346 }
2347 (*_blossom_data)[blossom].offset += _delta_sum;
2348 if (!_blossom_set->trivial(blossom)) {
2349 _delta4->push(blossom, (*_blossom_data)[blossom].pot / 2 +
2350 (*_blossom_data)[blossom].offset);
2351 }
2352 }
2353
2354 void evenToMatched(int blossom, int tree) {
2355 if (!_blossom_set->trivial(blossom)) {
2356 (*_blossom_data)[blossom].pot += 2 * _delta_sum;
2357 }
2358
2359 for (typename BlossomSet::ItemIt n(*_blossom_set, blossom);
2360 n != INVALID; ++n) {
2361 int ni = (*_node_index)[n];
2362 (*_node_data)[ni].pot -= _delta_sum;
2363
2364 for (InArcIt e(_graph, n); e != INVALID; ++e) {
2365 Node v = _graph.source(e);
2366 int vb = _blossom_set->find(v);
2367 int vi = (*_node_index)[v];
2368
2369 Value rw = (*_node_data)[ni].pot + (*_node_data)[vi].pot -
2370 dualScale * _weight[e];
2371
2372 if (vb == blossom) {
2373 if (_delta3->state(e) == _delta3->IN_HEAP) {
2374 _delta3->erase(e);
2375 }
2376 } else if ((*_blossom_data)[vb].status == EVEN) {
2377
2378 if (_delta3->state(e) == _delta3->IN_HEAP) {
2379 _delta3->erase(e);
2380 }
2381
2382 int vt = _tree_set->find(vb);
2383
2384 if (vt != tree) {
2385
2386 Arc r = _graph.oppositeArc(e);
2387
2388 typename std::map<int, Arc>::iterator it =
2389 (*_node_data)[ni].heap_index.find(vt);
2390
2391 if (it != (*_node_data)[ni].heap_index.end()) {
2392 if ((*_node_data)[ni].heap[it->second] > rw) {
2393 (*_node_data)[ni].heap.replace(it->second, r);
2394 (*_node_data)[ni].heap.decrease(r, rw);
2395 it->second = r;
2396 }
2397 } else {
2398 (*_node_data)[ni].heap.push(r, rw);
2399 (*_node_data)[ni].heap_index.insert(std::make_pair(vt, r));
2400 }
2401
2402 if ((*_blossom_set)[n] > (*_node_data)[ni].heap.prio()) {
2403 _blossom_set->decrease(n, (*_node_data)[ni].heap.prio());
2404
2405 if (_delta2->state(blossom) != _delta2->IN_HEAP) {
2406 _delta2->push(blossom, _blossom_set->classPrio(blossom) -
2407 (*_blossom_data)[blossom].offset);
2408 } else if ((*_delta2)[blossom] >
2409 _blossom_set->classPrio(blossom) -
2410 (*_blossom_data)[blossom].offset){
2411 _delta2->decrease(blossom, _blossom_set->classPrio(blossom) -
2412 (*_blossom_data)[blossom].offset);
2413 }
2414 }
2415 }
2416 } else {
2417
2418 typename std::map<int, Arc>::iterator it =
2419 (*_node_data)[vi].heap_index.find(tree);
2420
2421 if (it != (*_node_data)[vi].heap_index.end()) {
2422 (*_node_data)[vi].heap.erase(it->second);
2423 (*_node_data)[vi].heap_index.erase(it);
2424 if ((*_node_data)[vi].heap.empty()) {
2425 _blossom_set->increase(v, std::numeric_limits<Value>::max());
2426 } else if ((*_blossom_set)[v] < (*_node_data)[vi].heap.prio()) {
2427 _blossom_set->increase(v, (*_node_data)[vi].heap.prio());
2428 }
2429
2430 if ((*_blossom_data)[vb].status == MATCHED) {
2431 if (_blossom_set->classPrio(vb) ==
2432 std::numeric_limits<Value>::max()) {
2433 _delta2->erase(vb);
2434 } else if ((*_delta2)[vb] < _blossom_set->classPrio(vb) -
2435 (*_blossom_data)[vb].offset) {
2436 _delta2->increase(vb, _blossom_set->classPrio(vb) -
2437 (*_blossom_data)[vb].offset);
2438 }
2439 }
2440 }
2441 }
2442 }
2443 }
2444 }
2445
2446 void oddToMatched(int blossom) {
2447 (*_blossom_data)[blossom].offset -= _delta_sum;
2448
2449 if (_blossom_set->classPrio(blossom) !=
2450 std::numeric_limits<Value>::max()) {
2451 _delta2->push(blossom, _blossom_set->classPrio(blossom) -
2452 (*_blossom_data)[blossom].offset);
2453 }
2454
2455 if (!_blossom_set->trivial(blossom)) {
2456 _delta4->erase(blossom);
2457 }
2458 }
2459
2460 void oddToEven(int blossom, int tree) {
2461 if (!_blossom_set->trivial(blossom)) {
2462 _delta4->erase(blossom);
2463 (*_blossom_data)[blossom].pot -=
2464 2 * (2 * _delta_sum - (*_blossom_data)[blossom].offset);
2465 }
2466
2467 for (typename BlossomSet::ItemIt n(*_blossom_set, blossom);
2468 n != INVALID; ++n) {
2469 int ni = (*_node_index)[n];
2470
2471 _blossom_set->increase(n, std::numeric_limits<Value>::max());
2472
2473 (*_node_data)[ni].heap.clear();
2474 (*_node_data)[ni].heap_index.clear();
2475 (*_node_data)[ni].pot +=
2476 2 * _delta_sum - (*_blossom_data)[blossom].offset;
2477
2478 for (InArcIt e(_graph, n); e != INVALID; ++e) {
2479 Node v = _graph.source(e);
2480 int vb = _blossom_set->find(v);
2481 int vi = (*_node_index)[v];
2482
2483 Value rw = (*_node_data)[ni].pot + (*_node_data)[vi].pot -
2484 dualScale * _weight[e];
2485
2486 if ((*_blossom_data)[vb].status == EVEN) {
2487 if (_delta3->state(e) != _delta3->IN_HEAP && blossom != vb) {
2488 _delta3->push(e, rw / 2);
2489 }
2490 } else {
2491
2492 typename std::map<int, Arc>::iterator it =
2493 (*_node_data)[vi].heap_index.find(tree);
2494
2495 if (it != (*_node_data)[vi].heap_index.end()) {
2496 if ((*_node_data)[vi].heap[it->second] > rw) {
2497 (*_node_data)[vi].heap.replace(it->second, e);
2498 (*_node_data)[vi].heap.decrease(e, rw);
2499 it->second = e;
2500 }
2501 } else {
2502 (*_node_data)[vi].heap.push(e, rw);
2503 (*_node_data)[vi].heap_index.insert(std::make_pair(tree, e));
2504 }
2505
2506 if ((*_blossom_set)[v] > (*_node_data)[vi].heap.prio()) {
2507 _blossom_set->decrease(v, (*_node_data)[vi].heap.prio());
2508
2509 if ((*_blossom_data)[vb].status == MATCHED) {
2510 if (_delta2->state(vb) != _delta2->IN_HEAP) {
2511 _delta2->push(vb, _blossom_set->classPrio(vb) -
2512 (*_blossom_data)[vb].offset);
2513 } else if ((*_delta2)[vb] > _blossom_set->classPrio(vb) -
2514 (*_blossom_data)[vb].offset) {
2515 _delta2->decrease(vb, _blossom_set->classPrio(vb) -
2516 (*_blossom_data)[vb].offset);
2517 }
2518 }
2519 }
2520 }
2521 }
2522 }
2523 (*_blossom_data)[blossom].offset = 0;
2524 }
2525
2526 void alternatePath(int even, int tree) {
2527 int odd;
2528
2529 evenToMatched(even, tree);
2530 (*_blossom_data)[even].status = MATCHED;
2531
2532 while ((*_blossom_data)[even].pred != INVALID) {
2533 odd = _blossom_set->find(_graph.target((*_blossom_data)[even].pred));
2534 (*_blossom_data)[odd].status = MATCHED;
2535 oddToMatched(odd);
2536 (*_blossom_data)[odd].next = (*_blossom_data)[odd].pred;
2537
2538 even = _blossom_set->find(_graph.target((*_blossom_data)[odd].pred));
2539 (*_blossom_data)[even].status = MATCHED;
2540 evenToMatched(even, tree);
2541 (*_blossom_data)[even].next =
2542 _graph.oppositeArc((*_blossom_data)[odd].pred);
2543 }
2544
2545 }
2546
2547 void destroyTree(int tree) {
2548 for (TreeSet::ItemIt b(*_tree_set, tree); b != INVALID; ++b) {
2549 if ((*_blossom_data)[b].status == EVEN) {
2550 (*_blossom_data)[b].status = MATCHED;
2551 evenToMatched(b, tree);
2552 } else if ((*_blossom_data)[b].status == ODD) {
2553 (*_blossom_data)[b].status = MATCHED;
2554 oddToMatched(b);
2555 }
2556 }
2557 _tree_set->eraseClass(tree);
2558 }
2559
2560 void augmentOnEdge(const Edge& edge) {
2561
2562 int left = _blossom_set->find(_graph.u(edge));
2563 int right = _blossom_set->find(_graph.v(edge));
2564
2565 int left_tree = _tree_set->find(left);
2566 alternatePath(left, left_tree);
2567 destroyTree(left_tree);
2568
2569 int right_tree = _tree_set->find(right);
2570 alternatePath(right, right_tree);
2571 destroyTree(right_tree);
2572
2573 (*_blossom_data)[left].next = _graph.direct(edge, true);
2574 (*_blossom_data)[right].next = _graph.direct(edge, false);
2575 }
2576
2577 void extendOnArc(const Arc& arc) {
2578 int base = _blossom_set->find(_graph.target(arc));
2579 int tree = _tree_set->find(base);
2580
2581 int odd = _blossom_set->find(_graph.source(arc));
2582 _tree_set->insert(odd, tree);
2583 (*_blossom_data)[odd].status = ODD;
2584 matchedToOdd(odd);
2585 (*_blossom_data)[odd].pred = arc;
2586
2587 int even = _blossom_set->find(_graph.target((*_blossom_data)[odd].next));
2588 (*_blossom_data)[even].pred = (*_blossom_data)[even].next;
2589 _tree_set->insert(even, tree);
2590 (*_blossom_data)[even].status = EVEN;
2591 matchedToEven(even, tree);
2592 }
2593
2594 void shrinkOnEdge(const Edge& edge, int tree) {
2595 int nca = -1;
2596 std::vector<int> left_path, right_path;
2597
2598 {
2599 std::set<int> left_set, right_set;
2600 int left = _blossom_set->find(_graph.u(edge));
2601 left_path.push_back(left);
2602 left_set.insert(left);
2603
2604 int right = _blossom_set->find(_graph.v(edge));
2605 right_path.push_back(right);
2606 right_set.insert(right);
2607
2608 while (true) {
2609
2610 if ((*_blossom_data)[left].pred == INVALID) break;
2611
2612 left =
2613 _blossom_set->find(_graph.target((*_blossom_data)[left].pred));
2614 left_path.push_back(left);
2615 left =
2616 _blossom_set->find(_graph.target((*_blossom_data)[left].pred));
2617 left_path.push_back(left);
2618
2619 left_set.insert(left);
2620
2621 if (right_set.find(left) != right_set.end()) {
2622 nca = left;
2623 break;
2624 }
2625
2626 if ((*_blossom_data)[right].pred == INVALID) break;
2627
2628 right =
2629 _blossom_set->find(_graph.target((*_blossom_data)[right].pred));
2630 right_path.push_back(right);
2631 right =
2632 _blossom_set->find(_graph.target((*_blossom_data)[right].pred));
2633 right_path.push_back(right);
2634
2635 right_set.insert(right);
2636
2637 if (left_set.find(right) != left_set.end()) {
2638 nca = right;
2639 break;
2640 }
2641
2642 }
2643
2644 if (nca == -1) {
2645 if ((*_blossom_data)[left].pred == INVALID) {
2646 nca = right;
2647 while (left_set.find(nca) == left_set.end()) {
2648 nca =
2649 _blossom_set->find(_graph.target((*_blossom_data)[nca].pred));
2650 right_path.push_back(nca);
2651 nca =
2652 _blossom_set->find(_graph.target((*_blossom_data)[nca].pred));
2653 right_path.push_back(nca);
2654 }
2655 } else {
2656 nca = left;
2657 while (right_set.find(nca) == right_set.end()) {
2658 nca =
2659 _blossom_set->find(_graph.target((*_blossom_data)[nca].pred));
2660 left_path.push_back(nca);
2661 nca =
2662 _blossom_set->find(_graph.target((*_blossom_data)[nca].pred));
2663 left_path.push_back(nca);
2664 }
2665 }
2666 }
2667 }
2668
2669 std::vector<int> subblossoms;
2670 Arc prev;
2671
2672 prev = _graph.direct(edge, true);
2673 for (int i = 0; left_path[i] != nca; i += 2) {
2674 subblossoms.push_back(left_path[i]);
2675 (*_blossom_data)[left_path[i]].next = prev;
2676 _tree_set->erase(left_path[i]);
2677
2678 subblossoms.push_back(left_path[i + 1]);
2679 (*_blossom_data)[left_path[i + 1]].status = EVEN;
2680 oddToEven(left_path[i + 1], tree);
2681 _tree_set->erase(left_path[i + 1]);
2682 prev = _graph.oppositeArc((*_blossom_data)[left_path[i + 1]].pred);
2683 }
2684
2685 int k = 0;
2686 while (right_path[k] != nca) ++k;
2687
2688 subblossoms.push_back(nca);
2689 (*_blossom_data)[nca].next = prev;
2690
2691 for (int i = k - 2; i >= 0; i -= 2) {
2692 subblossoms.push_back(right_path[i + 1]);
2693 (*_blossom_data)[right_path[i + 1]].status = EVEN;
2694 oddToEven(right_path[i + 1], tree);
2695 _tree_set->erase(right_path[i + 1]);
2696
2697 (*_blossom_data)[right_path[i + 1]].next =
2698 (*_blossom_data)[right_path[i + 1]].pred;
2699
2700 subblossoms.push_back(right_path[i]);
2701 _tree_set->erase(right_path[i]);
2702 }
2703
2704 int surface =
2705 _blossom_set->join(subblossoms.begin(), subblossoms.end());
2706
2707 for (int i = 0; i < int(subblossoms.size()); ++i) {
2708 if (!_blossom_set->trivial(subblossoms[i])) {
2709 (*_blossom_data)[subblossoms[i]].pot += 2 * _delta_sum;
2710 }
2711 (*_blossom_data)[subblossoms[i]].status = MATCHED;
2712 }
2713
2714 (*_blossom_data)[surface].pot = -2 * _delta_sum;
2715 (*_blossom_data)[surface].offset = 0;
2716 (*_blossom_data)[surface].status = EVEN;
2717 (*_blossom_data)[surface].pred = (*_blossom_data)[nca].pred;
2718 (*_blossom_data)[surface].next = (*_blossom_data)[nca].pred;
2719
2720 _tree_set->insert(surface, tree);
2721 _tree_set->erase(nca);
2722 }
2723
2724 void splitBlossom(int blossom) {
2725 Arc next = (*_blossom_data)[blossom].next;
2726 Arc pred = (*_blossom_data)[blossom].pred;
2727
2728 int tree = _tree_set->find(blossom);
2729
2730 (*_blossom_data)[blossom].status = MATCHED;
2731 oddToMatched(blossom);
2732 if (_delta2->state(blossom) == _delta2->IN_HEAP) {
2733 _delta2->erase(blossom);
2734 }
2735
2736 std::vector<int> subblossoms;
2737 _blossom_set->split(blossom, std::back_inserter(subblossoms));
2738
2739 Value offset = (*_blossom_data)[blossom].offset;
2740 int b = _blossom_set->find(_graph.source(pred));
2741 int d = _blossom_set->find(_graph.source(next));
2742
2743 int ib = -1, id = -1;
2744 for (int i = 0; i < int(subblossoms.size()); ++i) {
2745 if (subblossoms[i] == b) ib = i;
2746 if (subblossoms[i] == d) id = i;
2747
2748 (*_blossom_data)[subblossoms[i]].offset = offset;
2749 if (!_blossom_set->trivial(subblossoms[i])) {
2750 (*_blossom_data)[subblossoms[i]].pot -= 2 * offset;
2751 }
2752 if (_blossom_set->classPrio(subblossoms[i]) !=
2753 std::numeric_limits<Value>::max()) {
2754 _delta2->push(subblossoms[i],
2755 _blossom_set->classPrio(subblossoms[i]) -
2756 (*_blossom_data)[subblossoms[i]].offset);
2757 }
2758 }
2759
2760 if (id > ib ? ((id - ib) % 2 == 0) : ((ib - id) % 2 == 1)) {
2761 for (int i = (id + 1) % subblossoms.size();
2762 i != ib; i = (i + 2) % subblossoms.size()) {
2763 int sb = subblossoms[i];
2764 int tb = subblossoms[(i + 1) % subblossoms.size()];
2765 (*_blossom_data)[sb].next =
2766 _graph.oppositeArc((*_blossom_data)[tb].next);
2767 }
2768
2769 for (int i = ib; i != id; i = (i + 2) % subblossoms.size()) {
2770 int sb = subblossoms[i];
2771 int tb = subblossoms[(i + 1) % subblossoms.size()];
2772 int ub = subblossoms[(i + 2) % subblossoms.size()];
2773
2774 (*_blossom_data)[sb].status = ODD;
2775 matchedToOdd(sb);
2776 _tree_set->insert(sb, tree);
2777 (*_blossom_data)[sb].pred = pred;
2778 (*_blossom_data)[sb].next =
2779 _graph.oppositeArc((*_blossom_data)[tb].next);
2780
2781 pred = (*_blossom_data)[ub].next;
2782
2783 (*_blossom_data)[tb].status = EVEN;
2784 matchedToEven(tb, tree);
2785 _tree_set->insert(tb, tree);
2786 (*_blossom_data)[tb].pred = (*_blossom_data)[tb].next;
2787 }
2788
2789 (*_blossom_data)[subblossoms[id]].status = ODD;
2790 matchedToOdd(subblossoms[id]);
2791 _tree_set->insert(subblossoms[id], tree);
2792 (*_blossom_data)[subblossoms[id]].next = next;
2793 (*_blossom_data)[subblossoms[id]].pred = pred;
2794
2795 } else {
2796
2797 for (int i = (ib + 1) % subblossoms.size();
2798 i != id; i = (i + 2) % subblossoms.size()) {
2799 int sb = subblossoms[i];
2800 int tb = subblossoms[(i + 1) % subblossoms.size()];
2801 (*_blossom_data)[sb].next =
2802 _graph.oppositeArc((*_blossom_data)[tb].next);
2803 }
2804
2805 for (int i = id; i != ib; i = (i + 2) % subblossoms.size()) {
2806 int sb = subblossoms[i];
2807 int tb = subblossoms[(i + 1) % subblossoms.size()];
2808 int ub = subblossoms[(i + 2) % subblossoms.size()];
2809
2810 (*_blossom_data)[sb].status = ODD;
2811 matchedToOdd(sb);
2812 _tree_set->insert(sb, tree);
2813 (*_blossom_data)[sb].next = next;
2814 (*_blossom_data)[sb].pred =
2815 _graph.oppositeArc((*_blossom_data)[tb].next);
2816
2817 (*_blossom_data)[tb].status = EVEN;
2818 matchedToEven(tb, tree);
2819 _tree_set->insert(tb, tree);
2820 (*_blossom_data)[tb].pred =
2821 (*_blossom_data)[tb].next =
2822 _graph.oppositeArc((*_blossom_data)[ub].next);
2823 next = (*_blossom_data)[ub].next;
2824 }
2825
2826 (*_blossom_data)[subblossoms[ib]].status = ODD;
2827 matchedToOdd(subblossoms[ib]);
2828 _tree_set->insert(subblossoms[ib], tree);
2829 (*_blossom_data)[subblossoms[ib]].next = next;
2830 (*_blossom_data)[subblossoms[ib]].pred = pred;
2831 }
2832 _tree_set->erase(blossom);
2833 }
2834
2835 void extractBlossom(int blossom, const Node& base, const Arc& matching) {
2836 if (_blossom_set->trivial(blossom)) {
2837 int bi = (*_node_index)[base];
2838 Value pot = (*_node_data)[bi].pot;
2839
2840 (*_matching)[base] = matching;
2841 _blossom_node_list.push_back(base);
2842 (*_node_potential)[base] = pot;
2843 } else {
2844
2845 Value pot = (*_blossom_data)[blossom].pot;
2846 int bn = _blossom_node_list.size();
2847
2848 std::vector<int> subblossoms;
2849 _blossom_set->split(blossom, std::back_inserter(subblossoms));
2850 int b = _blossom_set->find(base);
2851 int ib = -1;
2852 for (int i = 0; i < int(subblossoms.size()); ++i) {
2853 if (subblossoms[i] == b) { ib = i; break; }
2854 }
2855
2856 for (int i = 1; i < int(subblossoms.size()); i += 2) {
2857 int sb = subblossoms[(ib + i) % subblossoms.size()];
2858 int tb = subblossoms[(ib + i + 1) % subblossoms.size()];
2859
2860 Arc m = (*_blossom_data)[tb].next;
2861 extractBlossom(sb, _graph.target(m), _graph.oppositeArc(m));
2862 extractBlossom(tb, _graph.source(m), m);
2863 }
2864 extractBlossom(subblossoms[ib], base, matching);
2865
2866 int en = _blossom_node_list.size();
2867
2868 _blossom_potential.push_back(BlossomVariable(bn, en, pot));
2869 }
2870 }
2871
2872 void extractMatching() {
2873 std::vector<int> blossoms;
2874 for (typename BlossomSet::ClassIt c(*_blossom_set); c != INVALID; ++c) {
2875 blossoms.push_back(c);
2876 }
2877
2878 for (int i = 0; i < int(blossoms.size()); ++i) {
2879
2880 Value offset = (*_blossom_data)[blossoms[i]].offset;
2881 (*_blossom_data)[blossoms[i]].pot += 2 * offset;
2882 for (typename BlossomSet::ItemIt n(*_blossom_set, blossoms[i]);
2883 n != INVALID; ++n) {
2884 (*_node_data)[(*_node_index)[n]].pot -= offset;
2885 }
2886
2887 Arc matching = (*_blossom_data)[blossoms[i]].next;
2888 Node base = _graph.source(matching);
2889 extractBlossom(blossoms[i], base, matching);
2890 }
2891 }
2892
2893 public:
2894
2895 /// \brief Constructor
2896 ///
2897 /// Constructor.
2898 MaxWeightedPerfectMatching(const Graph& graph, const WeightMap& weight)
2899 : _graph(graph), _weight(weight), _matching(0),
2900 _node_potential(0), _blossom_potential(), _blossom_node_list(),
2901 _node_num(0), _blossom_num(0),
2902
2903 _blossom_index(0), _blossom_set(0), _blossom_data(0),
2904 _node_index(0), _node_heap_index(0), _node_data(0),
2905 _tree_set_index(0), _tree_set(0),
2906
2907 _delta2_index(0), _delta2(0),
2908 _delta3_index(0), _delta3(0),
2909 _delta4_index(0), _delta4(0),
2910
2911 _delta_sum() {}
2912
2913 ~MaxWeightedPerfectMatching() {
2914 destroyStructures();
2915 }
2916
2917 /// \name Execution Control
2918 /// The simplest way to execute the algorithm is to use the
2919 /// \ref run() member function.
2920
2921 ///@{
2922
2923 /// \brief Initialize the algorithm
2924 ///
2925 /// This function initializes the algorithm.
2926 void init() {
2927 createStructures();
2928
2929 for (ArcIt e(_graph); e != INVALID; ++e) {
2930 (*_node_heap_index)[e] = BinHeap<Value, IntArcMap>::PRE_HEAP;
2931 }
2932 for (EdgeIt e(_graph); e != INVALID; ++e) {
2933 (*_delta3_index)[e] = _delta3->PRE_HEAP;
2934 }
2935 for (int i = 0; i < _blossom_num; ++i) {
2936 (*_delta2_index)[i] = _delta2->PRE_HEAP;
2937 (*_delta4_index)[i] = _delta4->PRE_HEAP;
2938 }
2939
2940 int index = 0;
2941 for (NodeIt n(_graph); n != INVALID; ++n) {
2942 Value max = - std::numeric_limits<Value>::max();
2943 for (OutArcIt e(_graph, n); e != INVALID; ++e) {
2944 if (_graph.target(e) == n) continue;
2945 if ((dualScale * _weight[e]) / 2 > max) {
2946 max = (dualScale * _weight[e]) / 2;
2947 }
2948 }
2949 (*_node_index)[n] = index;
2950 (*_node_data)[index].pot = max;
2951 int blossom =
2952 _blossom_set->insert(n, std::numeric_limits<Value>::max());
2953
2954 _tree_set->insert(blossom);
2955
2956 (*_blossom_data)[blossom].status = EVEN;
2957 (*_blossom_data)[blossom].pred = INVALID;
2958 (*_blossom_data)[blossom].next = INVALID;
2959 (*_blossom_data)[blossom].pot = 0;
2960 (*_blossom_data)[blossom].offset = 0;
2961 ++index;
2962 }
2963 for (EdgeIt e(_graph); e != INVALID; ++e) {
2964 int si = (*_node_index)[_graph.u(e)];
2965 int ti = (*_node_index)[_graph.v(e)];
2966 if (_graph.u(e) != _graph.v(e)) {
2967 _delta3->push(e, ((*_node_data)[si].pot + (*_node_data)[ti].pot -
2968 dualScale * _weight[e]) / 2);
2969 }
2970 }
2971 }
2972
2973 /// \brief Start the algorithm
2974 ///
2975 /// This function starts the algorithm.
2976 ///
2977 /// \pre \ref init() must be called before using this function.
2978 bool start() {
2979 enum OpType {
2980 D2, D3, D4
2981 };
2982
2983 int unmatched = _node_num;
2984 while (unmatched > 0) {
2985 Value d2 = !_delta2->empty() ?
2986 _delta2->prio() : std::numeric_limits<Value>::max();
2987
2988 Value d3 = !_delta3->empty() ?
2989 _delta3->prio() : std::numeric_limits<Value>::max();
2990
2991 Value d4 = !_delta4->empty() ?
2992 _delta4->prio() : std::numeric_limits<Value>::max();
2993
2994 _delta_sum = d2; OpType ot = D2;
2995 if (d3 < _delta_sum) { _delta_sum = d3; ot = D3; }
2996 if (d4 < _delta_sum) { _delta_sum = d4; ot = D4; }
2997
2998 if (_delta_sum == std::numeric_limits<Value>::max()) {
2999 return false;
3000 }
3001
3002 switch (ot) {
3003 case D2:
3004 {
3005 int blossom = _delta2->top();
3006 Node n = _blossom_set->classTop(blossom);
3007 Arc e = (*_node_data)[(*_node_index)[n]].heap.top();
3008 extendOnArc(e);
3009 }
3010 break;
3011 case D3:
3012 {
3013 Edge e = _delta3->top();
3014
3015 int left_blossom = _blossom_set->find(_graph.u(e));
3016 int right_blossom = _blossom_set->find(_graph.v(e));
3017
3018 if (left_blossom == right_blossom) {
3019 _delta3->pop();
3020 } else {
3021 int left_tree = _tree_set->find(left_blossom);
3022 int right_tree = _tree_set->find(right_blossom);
3023
3024 if (left_tree == right_tree) {
3025 shrinkOnEdge(e, left_tree);
3026 } else {
3027 augmentOnEdge(e);
3028 unmatched -= 2;
3029 }
3030 }
3031 } break;
3032 case D4:
3033 splitBlossom(_delta4->top());
3034 break;
3035 }
3036 }
3037 extractMatching();
3038 return true;
3039 }
3040
3041 /// \brief Run the algorithm.
3042 ///
3043 /// This method runs the \c %MaxWeightedPerfectMatching algorithm.
3044 ///
3045 /// \note mwpm.run() is just a shortcut of the following code.
3046 /// \code
3047 /// mwpm.init();
3048 /// mwpm.start();
3049 /// \endcode
3050 bool run() {
3051 init();
3052 return start();
3053 }
3054
3055 /// @}
3056
3057 /// \name Primal Solution
3058 /// Functions to get the primal solution, i.e. the maximum weighted
3059 /// perfect matching.\n
3060 /// Either \ref run() or \ref start() function should be called before
3061 /// using them.
3062
3063 /// @{
3064
3065 /// \brief Return the weight of the matching.
3066 ///
3067 /// This function returns the weight of the found matching.
3068 ///
3069 /// \pre Either run() or start() must be called before using this function.
3070 Value matchingWeight() const {
3071 Value sum = 0;
3072 for (NodeIt n(_graph); n != INVALID; ++n) {
3073 if ((*_matching)[n] != INVALID) {
3074 sum += _weight[(*_matching)[n]];
3075 }
3076 }
3077 return sum /= 2;
3078 }
3079
3080 /// \brief Return \c true if the given edge is in the matching.
3081 ///
3082 /// This function returns \c true if the given edge is in the found
3083 /// matching.
3084 ///
3085 /// \pre Either run() or start() must be called before using this function.
3086 bool matching(const Edge& edge) const {
3087 return static_cast<const Edge&>((*_matching)[_graph.u(edge)]) == edge;
3088 }
3089
3090 /// \brief Return the matching arc (or edge) incident to the given node.
3091 ///
3092 /// This function returns the matching arc (or edge) incident to the
3093 /// given node in the found matching or \c INVALID if the node is
3094 /// not covered by the matching.
3095 ///
3096 /// \pre Either run() or start() must be called before using this function.
3097 Arc matching(const Node& node) const {
3098 return (*_matching)[node];
3099 }
3100
3101 /// \brief Return a const reference to the matching map.
3102 ///
3103 /// This function returns a const reference to a node map that stores
3104 /// the matching arc (or edge) incident to each node.
3105 const MatchingMap& matchingMap() const {
3106 return *_matching;
3107 }
3108
3109 /// \brief Return the mate of the given node.
3110 ///
3111 /// This function returns the mate of the given node in the found
3112 /// matching or \c INVALID if the node is not covered by the matching.
3113 ///
3114 /// \pre Either run() or start() must be called before using this function.
3115 Node mate(const Node& node) const {
3116 return _graph.target((*_matching)[node]);
3117 }
3118
3119 /// @}
3120
3121 /// \name Dual Solution
3122 /// Functions to get the dual solution.\n
3123 /// Either \ref run() or \ref start() function should be called before
3124 /// using them.
3125
3126 /// @{
3127
3128 /// \brief Return the value of the dual solution.
3129 ///
3130 /// This function returns the value of the dual solution.
3131 /// It should be equal to the primal value scaled by \ref dualScale
3132 /// "dual scale".
3133 ///
3134 /// \pre Either run() or start() must be called before using this function.
3135 Value dualValue() const {
3136 Value sum = 0;
3137 for (NodeIt n(_graph); n != INVALID; ++n) {
3138 sum += nodeValue(n);
3139 }
3140 for (int i = 0; i < blossomNum(); ++i) {
3141 sum += blossomValue(i) * (blossomSize(i) / 2);
3142 }
3143 return sum;
3144 }
3145
3146 /// \brief Return the dual value (potential) of the given node.
3147 ///
3148 /// This function returns the dual value (potential) of the given node.
3149 ///
3150 /// \pre Either run() or start() must be called before using this function.
3151 Value nodeValue(const Node& n) const {
3152 return (*_node_potential)[n];
3153 }
3154
3155 /// \brief Return the number of the blossoms in the basis.
3156 ///
3157 /// This function returns the number of the blossoms in the basis.
3158 ///
3159 /// \pre Either run() or start() must be called before using this function.
3160 /// \see BlossomIt
3161 int blossomNum() const {
3162 return _blossom_potential.size();
3163 }
3164
3165 /// \brief Return the number of the nodes in the given blossom.
3166 ///
3167 /// This function returns the number of the nodes in the given blossom.
3168 ///
3169 /// \pre Either run() or start() must be called before using this function.
3170 /// \see BlossomIt
3171 int blossomSize(int k) const {
3172 return _blossom_potential[k].end - _blossom_potential[k].begin;
3173 }
3174
3175 /// \brief Return the dual value (ptential) of the given blossom.
3176 ///
3177 /// This function returns the dual value (ptential) of the given blossom.
3178 ///
3179 /// \pre Either run() or start() must be called before using this function.
3180 Value blossomValue(int k) const {
3181 return _blossom_potential[k].value;
3182 }
3183
3184 /// \brief Iterator for obtaining the nodes of a blossom.
3185 ///
3186 /// This class provides an iterator for obtaining the nodes of the
3187 /// given blossom. It lists a subset of the nodes.
3188 /// Before using this iterator, you must allocate a
3189 /// MaxWeightedPerfectMatching class and execute it.
3190 class BlossomIt {
3191 public:
3192
3193 /// \brief Constructor.
3194 ///
3195 /// Constructor to get the nodes of the given variable.
3196 ///
3197 /// \pre Either \ref MaxWeightedPerfectMatching::run() "algorithm.run()"
3198 /// or \ref MaxWeightedPerfectMatching::start() "algorithm.start()"
3199 /// must be called before initializing this iterator.
3200 BlossomIt(const MaxWeightedPerfectMatching& algorithm, int variable)
3201 : _algorithm(&algorithm)
3202 {
3203 _index = _algorithm->_blossom_potential[variable].begin;
3204 _last = _algorithm->_blossom_potential[variable].end;
3205 }
3206
3207 /// \brief Conversion to \c Node.
3208 ///
3209 /// Conversion to \c Node.
3210 operator Node() const {
3211 return _algorithm->_blossom_node_list[_index];
3212 }
3213
3214 /// \brief Increment operator.
3215 ///
3216 /// Increment operator.
3217 BlossomIt& operator++() {
3218 ++_index;
3219 return *this;
3220 }
3221
3222 /// \brief Validity checking
3223 ///
3224 /// This function checks whether the iterator is invalid.
3225 bool operator==(Invalid) const { return _index == _last; }
3226
3227 /// \brief Validity checking
3228 ///
3229 /// This function checks whether the iterator is valid.
3230 bool operator!=(Invalid) const { return _index != _last; }
3231
3232 private:
3233 const MaxWeightedPerfectMatching* _algorithm;
3234 int _last;
3235 int _index;
3236 };
3237
3238 /// @}
3239
3240 };
3241
3242} //END OF NAMESPACE LEMON
3243
3244#endif //LEMON_MAX_MATCHING_H
Note: See TracBrowser for help on using the repository browser.
|
__label__pos
| 0.996717 |
翻译或纠错本页面
db.createRole()
Definition
db.createRole(role, writeConcern)
Creates a role in a database. You can specify privileges for the role by explicitly listing the privileges or by having the role inherit privileges from other roles or both. The role applies to the database on which you run the method.
The db.createRole() method takes the following arguments:
Parameter Type Description
role document A document containing the name of the role and the role definition.
writeConcern document Optional. The level of write concern to apply to this operation. The writeConcern document uses the same fields as the getLastError command.
The role document has the following form:
{
role: "<name>",
privileges: [
{ resource: { <resource> }, actions: [ "<action>", ... ] },
...
],
roles: [
{ role: "<role>", db: "<database>" } | "<role>",
...
]
}
The role document has the following fields:
Field Type Description
role string The name of the new role.
privileges array
The privileges to grant the role. A privilege consists of a resource and permitted actions. For the syntax of a privilege, see the privileges array.
You must include the privileges field. Use an empty array to specify no privileges.
roles array
An array of roles from which this role inherits privileges.
You must include the roles field. Use an empty array to specify no roles to inherit from.
In the roles field, you can specify both built-in roles and user-defined role.
To specify a role that exists in the same database where db.createRole() runs, you can either specify the role with the name of the role:
"readWrite"
Or you can specify the role with a document, as in:
{ role: "<role>", db: "<database>" }
To specify a role that exists in a different database, specify the role with a document.
The db.createRole() method wraps the createRole command.
Behavior
Except for roles created in the admin database, a role can only include privileges that apply to its database and can only inherit from other roles in its database.
A role created in the admin database can include privileges that apply to the admin database, other databases or to the cluster resource, and can inherit from roles in other databases as well as the admin database.
The db.createRole() method returns a duplicate role error if the role already exists in the database.
Required Access
To create a role in a database, you must have:
Built-in roles userAdmin and userAdminAnyDatabase provide createRole and grantRole actions on their respective resources.
Example
The following db.createRole() method creates the myClusterwideAdmin role on the admin database:
use admin
db.createRole(
{
role: "myClusterwideAdmin",
privileges: [
{ resource: { cluster: true }, actions: [ "addShard" ] },
{ resource: { db: "config", collection: "" }, actions: [ "find", "update", "insert", "remove" ] },
{ resource: { db: "users", collection: "usersCollection" }, actions: [ "update", "insert", "remove" ] },
{ resource: { db: "", collection: "" }, actions: [ "find" ] }
],
roles: [
{ role: "read", db: "admin" }
]
},
{ w: "majority" , wtimeout: 5000 }
)
|
__label__pos
| 0.984121 |
• Status: Solved
• Priority: Medium
• Security: Public
• Views: 310
• Last Modified:
Adding appointments to multiple accounts in Exchange
using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;
using Microsoft.Exchange.WebServices.Data;
namespace EwsManagedTest
{
class Program
{
static void Main(string[] args)
{
ServicePointManager.ServerCertificateValidationCallback +=
delegate(
object sender,
X509Certificate certificate,
X509Chain chain,
SslPolicyErrors sslPolicyErrors)
{
return true;
};
string userName = "login";
string password = "password";
string domain = "domain.com";
string exchangeWebServiceUrl = "https://domain.com/ews/exchange.asmx";
ExchangeService service = new ExchangeService();
service.Credentials = new WebCredentials(userName, password, domain);
service.Url = new Uri(exchangeWebServiceUrl);
Appointment appointment = new Appointment(service);
appointment.Subject = "Testing";
appointment.Start = DateTime.Now;
appointment.End = appointment.Start.AddHours(1);
appointment.Save();
}
}
}
Open in new window
How do I modify this code to add new appointments to exchange calendar for multiple accounts using just one master login credentials?
0
joein610
Asked:
joein610
• 2
2 Solutions
Glen KnightCommented:
Its down to the permissions on the mailbox. If the user you are authenticating with has access to all the mailboxes then it will be able to create a calendar entry.
0
joein610Author Commented:
How do I mention it in the code? Let's say that my login is xxx and I want to add a calendar item for the user yyy. How do I do that? Do I still log in with the same credentials?
0
Glen KnightCommented:
Yes, you just create a service account that has access to all the mailboxes and then use this account to create the appointments.
0
Question has a verified solution.
Are you are experiencing a similar issue? Get a personalized answer when you ask a related question.
Have a better answer? Share it in a comment.
Join & Write a Comment
Featured Post
Free tool for managing users' photos in Office 365
Easily upload multiple users’ photos to Office 365. Manage them with an intuitive GUI and use handy built-in cropping and resizing options. Link photos with users based on Azure AD attributes. Free tool!
• 2
Tackle projects and never again get stuck behind a technical roadblock.
Join Now
|
__label__pos
| 0.578609 |
Code covered by the BSD License
Highlights from
Generation of Random Variates
image thumbnail
Generation of Random Variates
by
James Huntley (view profile)
generates random variates from over 870 univariate distributions
paspois_pdf(n, mu, a, c)
% paspois_pdf.m - evaluates a Pascal Poisson Probability Density.
% See "Univariate Discrete Distributions", Johnson, Kemp, and Kotz,
% J. Wiley, 2005, p.244.
%
% Created by Jim Huntley, 8/15/07
%
function [pdf] = paspois_pdf(n, mu, a, c)
persistent coef P0 jmax term logc
if(isempty(coef))
coef = (a*log(a*c/(a*c+mu)) - gammaln(a));
P0 = (1 - mu*(exp(-c)-1)/(a*c))^(-a);
jmax = fix(20 * (mu/c)^0.7); % heuristic.
term = log(mu/(a*c+mu));
logc = log(c);
end
pdf = P0;
if(n > 0)
sumj = 0;
for j = 1:jmax
%sumj = sumj + gamma(a+j) * (mu/(a*c+mu))^j * j^n * exp(-j*c) / gamma(j+1);
sumj = sumj + exp(gammaln(a+j) + j*term + n*log(j) - j*c - gammaln(j+1));
end
%pdf = coef * c^n * sumj / gamma(n+1);
pdf = exp(coef + n*logc + log(sumj) - gammaln(n+1));
end
return
Contact us
|
__label__pos
| 0.948519 |
Pricing of a Virtual Power Plant on a GPU
Even the pricing of a simple virtual power plant (VPP) is challenging. Main reasons are the high number of possible states of the VPP and the large number of possible exercise dates because often a VPP is priced as a bermudan-style option with hourly exercise rights. The implementation effort for an exact pricing engine based on finite difference methods (see e.g.[1]) or based on least squares Monte-Carlo is comparable large. As shown in [1] Monte-Carlo combined with perfect foresight optimization can result in a very good approximation. The algorithm consists of a Monte-Carlo path generator and a dynamic programming optimization part, which calculates the optimal load schedule plan for each path separately. The stochastic processes involved are outlined in [1].
The CUDA based GPU implementation is available here. It depends on the latest QuantLib version from the SVN trunk or the upcoming QuantLib 1.2 release and CUDA 4.0. The corresponding C++ implementation is a speed-optimized version of the test-case VPPTest::testVPPPricing. This version also supports multi-threading. The following hardware was used to compare both implementations:
• CPU: Core [email protected] GHz, quad-core
• GPU: GTX560@810/1620MHz, 336 cores
As can be seen in the diagram below the GPU outperforms the CPU roughly by a factor 100 for single precision and a factor of 50 if the GPU is using double precision.
The CUDA implementation consists of the following files:
gpuvpppricingengine.hpp / gpuvpppricingengine.cpp
A QuantLib pricing engine for a simple VPP based on a Monte-Carlo simulation and perfect foresight optimization via dynamic programming. The physical size of the Monte-Carlo simulation is controlled by the following parameters of the constructor
1. Size nSimulations: number of Monte-Carlo simulations carried out.
2. bool antithetic: enables/disables antithetic sampling
3. Size blockSize: number of threads in a CUDA block.
4. Size gridSize: number of CUDA blocks that are grouped together in a simulation kernel.
gpuvpppricingengine_kernel.hpp
gpuvpppricingengine_kernel.cu/ gpuvpppricingengine_kernel.def
The CUDA implementation consists of two kernels. The first kernel is the Monte-Carlo path generator, which calculates the paths on hourly granularity and stores them in the global memory of the graphic card.. The technics used are outlined e.g. in [2], [3] and [4]. The second kernel performs the optimization of the load schedule based on dynamic programming. The memory layout of this step depends on the number of possible states of the VPP because every possible state is stored in the shared memory of the GPU. The number of states is given by N_{states} = 2t_{up}+t_{down}. CUDA does not support efficient dynamic shared memory allocation. Therefore the sizes of all shared memory arrays must be given at compile time. To allow an optimal use of the limited shared memory capacity different kernels with different N_{states} values are generated using X-macros and the appropriate kernel is chosen at runtime.
cudatype.hpp
defines basic CUDA types, especially the typedef for the type “real” can be used to compile the code either for single or double precision.
gpurand.hpp
C++ interface for a GPU random number generator
gpucurand.hpp / gpucurand.cpp / gpucurand_kernel.hpp / gpucurand_kernel.cu
implementation of the GPURand interface based on the CURAND library, which is part of CUDA 4.0.
[1] this blog, VPP Pricing III: Exact Pricing based on Finite Difference Methods.
[2] L. Howes, D. Thomas, Efficient Random Number Generation and Application Using CUDA.
[3] A. Bernemann, R. Schreyer, K. Spanderen, Accelerating Exotic Option Pricing and Model Calibration Using GPUs
[4] M. Joshi, Graphical Asian Options
One thought on “Pricing of a Virtual Power Plant on a GPU
Leave a Reply
Fill in your details below or click an icon to log in:
WordPress.com Logo
You are commenting using your WordPress.com account. Log Out / Change )
Google photo
You are commenting using your Google account. Log Out / Change )
Twitter picture
You are commenting using your Twitter account. Log Out / Change )
Facebook photo
You are commenting using your Facebook account. Log Out / Change )
Connecting to %s
|
__label__pos
| 0.893843 |
Implemented stable React components and functions to be added to any future pages. The render function creates a node tree of the React components and then updates this node tree in response to the mutations in the data model caused by various actions done by the user or by the system. In the Tree component each node with the exception of the root item has a parent and can have children. but I only want the tree to reload with it because i lose every other information. The TreeView widget displays a hierarchical collection of items using a traditional tree structure. Above all they have been designed with intention to improve user navigation experience in the application. Built on top of SVG elements with a lightweight dependency on D3 submodules. But it's dynamic, so whatever "own" properties are in props are included. Ruby, Rails, RubyMotion, iOS, Reactjs and programming. 8, 2019 Title 40 Protection of Environment Parts 50 to 51 Revised as of July 1, 2019 Containing a codification of documents of general applicability and future effect As of July 1, 2019. In order to display the tree, the TreeView component will travesse all visible nodes and create react components from them. Drag and Drop for React Last updated 6 days ago by darthtrevino. Have your own style guide? No problem. This isn’t really surprising. WPF Tree List Control A feature-complete, data-aware TreeView-ListView hybrid that can display information as a TREE, a GRID, or a combination of both - in either data bound or unbound mode with full data editing support. When the menu is a item then I will render a image and when you click on a label the taskflowurl value is passed on to the dynamic region bean and the page will show this task flow In some cases these cookies improve the speed with which we can process your request, allow us to remember site preferences you’ve selected. The next day, Jem and Scout find that the knothole has been filled with cement. A polyfill is a way to provide new functionality available in modern JavaScript or a modern browser API to older browsers. A common example is having a set of similar inputs that are backed by an array. Dynamic Loading in TreeGrid. Today with Dynamic, Miles works behind the scenes overseeing company operations. Overview of Needed react-pdf Components. represents the PDF document itself. But nowadays, if someone have the same problem should use dynamic import, in order to load only the component needed and avoid to load all the different ones. This introduced the concept of “renderers” to React internals. Learn the different ways a developer can manipulate the DOM tree using common web technologies. React’s render function creates a node tree out of the React components. Install the React components and choose a theme that suits your needs. Learn Blazor - Blazor by example. In this examples used string array structure. Tree-shaking, also known as "live code inclusion", is Rollup's process of eliminating code that is not actually used in a given project. key - as you already know, each dynamically created React component instance needs a key property that React uses to uniquely identify that instance. After graduating in 1995 with a Bachelor of Commerce, Miles started his own reforestation company. Storybook - GitHub Pages. js Examples Ui Scroll A clear, easy JSON tree view component made with vue. It is available on npm as the react package. Similar to React, Vue. React’s react-router and react-redux are maintained by community members and aren’t ‘officially’ under the Facebook/React umbrella. If we used this in React, displaying 1000 elements would require in the order of one billion comparisons. Dynamic Heatmaps for the Web heatmap. Included resources allow you to get up and running quickly even with no prior behavior tree experience. js You can…. For this to happen, the air must either compress or speed up where its flow narrows. Now we can use the Document object to access to all HTML elements (as node objects) in a page, using any script such as JavaScript. React Tree View UI Component and Libraries We will see 12 examples of react tree which will cover topics like html5 treeview, react tree table and relevant libraries. Dynamic components are useful when we want to generate components on the fly, for example we could assume a server response tells us to display a particular view and/or message, and handling this with built-in structural directives (such as a big ngIf tree) is not really the best approach - we can do better!. Demo WPF reactive tree demo on GitHub 3. Considering the complexity of dynamic web apps these days, editing the tree structure of the HTML DOM can be quite problematic in terms of performance (and developer sanity). disabledKeys: Set < Key > A set of keys for items that are disabled. Create a Modern Dynamic Sidebar Menu in React Using Recursion The higher the depth is, the more deep down in the tree they're located in. A label can position itself in the corner of an element. Yup is a JavaScript object schema validator and object parser inspired by Joi ( a validator for node). js is a lightweight, easy to use JavaScript library to help you visualize your three dimensional data! Use it to add new value to your project, build a business based on it, study and visualize user behaviour, or why not build something completely crazy/awesome?. Theme Designer is the easiest way to design and implement your own themes for the PrimeReact components. Read the upgrade guide. Available for pure JavaScript, React, Vue, and Angular. Since children is an "own" property in props , spread will include it. js, and it exports a global called React. The virtual DOM is one of the main reasons why React is very fast. Testing React. React core only includes the APIs necessary to define components. Yup is a JavaScript object schema validator and object parser inspired by Joi ( a validator for node). The figures are followed in turn by chewing gum, a spelling bee medal, and an old pocket watch. js in your project directory. Join to Connect. Implemented stable React components and functions to be added to any future pages. Component is one of the possible ways React allows you to represent a component: via class. —Donald Norman. React Native Picker also known as Spinner in android and iOS applications is used to show and pick a single value from a Set of Values. Due to this, engineers started working around React Router to improve performance, but these workarounds themselves caused some side effects. × Sign up for our newsletter. They support many different use cases (editable, filtering, grouping, sorting, selection, i18n, tree data and more). Install the library using your favorite dependency manager: Using yarn: yarn add react-dynamic-checkbox-tree Using npm: npm install react-dynamic-checkbox-tree --save Render Component. You will get hands-on practice and see how MST lets you solve problems with its out-of. The next day, Jem and Scout find that the knothole has been filled with cement. We are going to use react-native init to make our React Native App. Shows how you can use Wijmo controls in dynamic, user-customizable dashboards. So we would have this kind of dynamic tree structure. The React component tree. render() within the main() entrypoint of your Dart application. React uses a virtual DOM while Angular uses a regular DOM. Simple examples of dynamic HTML capabilities include having the color of a text heading change when a. It should only contain children of type component. disabledKeys: Set < Key > A set of keys for items that are disabled. React/Angular website with. Build apps with flowcharts, org charts, BPMN, UML, modeling, and other visual graph types. Learning React is hard enough, so I have tried to shy away from forcing you to use ES6 techniques by default. iOS Developer at Triple Tree Solutions. The general rendering process of Pug is simple. Thus React JS improves the server based performance. React+D3 started as a bet in April 2015. Most of the code in this example is boiler plate so below I will explain only the key parts of the code. React can compare two Virtual DOM trees to determine the fewest actions required to transform the first tree into the second. Next, open a separate terminal window and install styled-components: yarn add styled-components. Preact is a fast 3kB alternative to React with the same modern API. (Web app vs native app using React). Unstated is a new library that makes state management in React dead simple. Nike React Vision; £115. In the end it turned out to be a very easy fix and very important lesson on understanding react. whenever the component's props or state gets updated, whether to make an actual DOM update or not, React will decide by comparing the newly returned element with previously rendered one. Indeed, when we only have a fixed set of fields, it's pretty easy to make them into code. Navigating to pages using Link and NavLink component. JavaScript. Inputs may need to be shown or hidden depending on the state of another input, or input controls may need to be created on-the-fly in response to user input. Yup is a JavaScript object schema validator and object parser inspired by Joi ( a validator for node). As usual spin a new react project using whatever template you prefer. the whole java script should be passed to web page as a single string. Angular uses a regular DOM. When To Use # Almost anything can be represented in a tree structure. The formatted text field can check the value either continuously while the user is typing or. Component is one of the possible ways React allows you to represent a component: via class. If you want to do code-splitting in a server rendered app, we recommend Loadable Components. DayPilot Scheduler can load the resource tree children dynamically (upon clicking the expand [+] icon). The other way to optimize is to find a library or other module of significant size which is used only under certain conditions. That’s the only. Similar to the logic above, creating a decision tree and parsing it with the user inputs solved our problem of field selection. The Grid layout component provides a semi-automated, responsive grid layout which is both easy to learn and easy to use. Much of dynamic HTML is specified in HTML 4. You can also display the Tree. A tree view represents a hierarchical view of information, where each item can have a number of subitems. Note: This tutorial is for React Router. When something changes, React JS updates the Virtual DOM and figures out how it differs from the actual DOM, updating the actual DOM only with what has actually. js is how templating is done. This code, a trendy combination of es6 and jsx, defines in the DOM a standalone graph from the json data in confidence_band. locale preference, UI theme) that are required by many components within an application. Whether you're building highly interactive web applications or you just need to add a date picker to a form control, jQuery UI is the perfect choice. Radley replies that he plugged the knothole because the tree is dying. The state tree will be managed by redux. DayPilot Scheduler can load the resource tree children dynamically (upon clicking the expand [+] icon). If we used this in React, displaying 1000 elements would require in the order of one billion comparisons. react-dynamic-charts is a React library for creating dynamic charts visualizations based on dynamic data. represents a single page within a PDF document. Get an answer for 'How is Francie Nolan, the protagonist, a dynamic (changing) character, like how does she generally change in A Tree Grows in Brooklyn?' and find homework help for other A Tree. I will be using create-react-app. 3 $ react-scripts start 'react-scripts' is not recognized as an. WTT leading Enterprise Web & Mobile App Development Company, we offer business application development services and solutions across the globe. react-sortable-tree. react-json-tree. A common example is having a set of similar inputs that are backed by an array. Flareact allows you to render your React apps at the edge rather than on the server. Skip to main content LinkedIn Learning Search skills, subjects, or software. React Sight displays your React app with a live component hierarchy tree of the entire app. When something changes, React JS updates the Virtual DOM and figures out how it differs from the actual DOM, updating the actual DOM only with what has actually. Features¶ Zero upfront configuration necessary to start developing and building a React web app; Modern Babel compilation adding JSX, object rest spread syntax, and class properties. Great way to get started and practice some ES6, however at the end of this brief guide I was left with only one list item, so I had some fun digging around to figure out how to render a full list of question and answers. Nestjs dynamic module. This returns a Promise that, once resolved, will become a React component. html file. The React component tree. material-table is a simple and powerful Datatable for React based on Material-UI Table with some additional features. In EmployeesViewModel. An interactive geographic dashboard for analyzing demographic data. React, however, does not include any pre-built components. Installation of Dependencies. Similar to React, Vue. , you cannot render sibling elements). A polyfill is a particular kind of shim. Component states - Dynamic user-interface 28 Node where you want the React tree to be rendered: import React from 'react' import ReactDOM from 'react-dom'. But it's dynamic, so whatever "own" properties are in props are included. js components with Jest in Rails+Webpacker+Webpack environment Around a month ago, I worked on a task, which required a more dynamic frontend behavior. react-json-tree. Option 1: Package Manager. So you include a component somewhere in HTML tree and an attribute higher on tree changes this component behavior. Navigating to pages using Link and NavLink component. js is how templating is done. So you include a component somewhere in HTML tree and an attribute higher on tree changes this component behavior. 15 September 2018. Now we can use the Document object to access to all HTML elements (as node objects) in a page, using any script such as JavaScript. It might be advantageous to realize that React is a very helpful library for building out dynamic trees. A major benefit of this approach is that you only have one mental model to keep in your head while building the DOM. reactstrap - easy to use React Bootstrap 4 components compatible with React 16+. Install the React components and choose a theme that suits your needs. According to the official React-JSS docs, the following are the benefits of using React-JSS instead of the core JSS library in your React components. Build apps with flowcharts, org charts, BPMN, UML, modeling, and other visual graph types. If the tree is dynamic you can insert and delete nodes in this way. dhtmlxTree is intended to build intuitive hierarchical navigation interfaces for web apps. Route transitions in React are notoriously fiddly. Since children is an "own" property in props , spread will include it. Customization 🎨 If you are looking for inspiration, you can check MUI Treasury's customization examples. I will be using create-react-app. png')} /> The image name is resolved the same way JS modules are resolved. lazy and Suspense are not yet available for server-side rendering. The state tree will be managed by redux. • Combines React Technology™ with Trophy Ridge's Vertical In-Line Pins giving you precision and an un-obstructed view of your target • React Technology™ accurately predicts what each pin gap will be based on the distance between your 20 yard pin and any other yardage • React Technology™ effective from 330-250 fps. React Pose has been deprecated in favour of Framer Motion. Demo WPF reactive tree demo on GitHub 3. A label can appear as a tag. Unstated is a new library that makes state management in React dead simple. See full list on reactjs. It will automatically manage the component state. The dropdown menu is most common and integral part of every mobile application, that helps user to move or navigate between different windows/screens of mobile application by selecting the option from the dropdown menu list. No fancy subscriptions or observables under the hood, just plain React state and props. Install the React components and choose a theme that suits your needs. This Combobox UI example shows how you can easily apply an editable text box with flexible auto-searchable functionality of the dropdown list into your web app. This UI library component includes dynamic data rendering, keyboard shortcuts, key navigation, and inline editing options. The code for React core is located in packages/react in the source tree. Subscribe to the blog updates If you enjoyed this page, consider subscribing to the mailing list below or following @survivejs for occasional updates. react-table. 3 $ react-scripts start 'react-scripts' is not recognized as an. react-virtualized-tree: A reactive tree component that aims to render large sets of tree structured data in an elegant and performant way react-timeline-9000 : A calendar timeline component that is capable of displaying and interacting with a large number of items. Features Uses the native AngularJS scope for data binding. React Studio has an advanced visual layout engine that lets you create smart keylines, use relative sizes together with device-independent offsets, and position elements on screen relative to screen edges, keylines, or preceding elements. React DnD HTML5 Backend. I decided to implement it. It has great performance combined with advanced features like load on demand, checkbox support, multiple selection, tree navigation, drag and drop, tree node editing, and template support. TreeView nodes can be expanded and collapsed to display sub-items. A lightweight, fully featured JavaScript table generation library. react-virtualized-sticky-tree uses a similar API to react-virtualized. See more Hide details. A hierarchical list structure component. React Native Packager ships with some niceties for managing static images and the docs do a decent job of getting things started for small scale applications. Learn the best React JS Practices and about the latest updates without leaving your home. Every component of the app has some data associated with it. If the player right clicks a tree with the staff while sneaking then only the tree species will be copied, leaving the JoCode unchanged. Basically, build Bootstrap with React. March 29, 2019 Exploring the squarified tree map algorithm with ReasonML (part 1). React Router is one of the most popular routing libraries in the React ecosystem. Supported JavaScript The Aura Components programming model supports ES5 syntax and ES6 Promises. Note: This tutorial is for React Router. Not when it’s called, which is something that happens with the alternative, dynamic scoping (used in some other programming languages). You can reach for it when you need a little more than. To have dynamic rows in react, its important to have unique key, we can’t go with the index value from an array of values. Even with fast client platforms and JavaScript engines, extensive DOM manipulation can be a performance bottle-neck and even result in an annoying user experience. 12219; 1681; 24 ⚛️ Hooks for building fast and extendable tables and datagrids for React x-spreadsheet. TreeView nodes can be expanded and collapsed to display sub-items. The React Tree View is a graphical user interface component that allows you to represent hierarchical data in a tree structure. Inside the DOM tree a child element is inserted in different locations and outside the DOM tree with portals. In real apps, however, forms are often a bit more lively and dynamic. The code for React core is located in packages/react in the source tree. See full list on reactjs. Chart, D3, Tree, SVG. Indeed, when we only have a fixed set of fields, it's pretty easy to make them into code. A tree view represents a hierarchical view of information, where each item can have a number of subitems. React DnD HTML5 Backend. The react-table library is very lightweight and offers all the basic features necessary for any simple table. Mixed Types import React from 'react' import Tree from 'react-json-tree' // import useChartConfig from 'hooks/useChartConfig' import. const component = React. Worse, because the DOM is tree-structured, simple changes at the top level can cause huge ripples to the user interface. The use of react-window when possible is encouraged. Originally from Saskatchewan, Miles started as a tree-planter in 1990 to help pay his way through University. Part 1: Overview and Analytics Backend This is the first part from a guide on building dynamic analytics dashboards and applications with React, GraphQL, and Cube. A scalable and less error-prone approach to managing dynamic z-index values that leverages React, emotion and TypeScript. Now you can greet anyone you want just by calling greet with their name! Awesome. With this template, you can acquire different versions of the dashboard including Bootstrap, React, Angular and Vue. A collection of items in the tree. Expose the data as a service 2. I recommend skimming through the official React documentation on Context before continuing. Next, open a separate terminal window and install styled-components: yarn add styled-components. React-vis is a library that offers an extensive collection of charts for React apps. View These cookies are required for basic site functionality and are therefore always. DayPilot Scheduler can load the resource tree children dynamically (upon clicking the expand [+] icon). Most of the code in this example is boiler plate so below I will explain only the key parts of the code. Styles customizable; Usage Installation. (I warned you this part was basic) Using Props as Arguments to a React Component. MobX State Tree (MST) is a library that helps you organize your application states in a very structured manner. Every component of the app has some data associated with it. js components with Jest in Rails+Webpacker+Webpack environment Around a month ago, I worked on a task, which required a more dynamic frontend behavior. Maintain your code quality with ease. 21 April 2018 React Accordion UI Component. The other way to optimize is to find a library or other module of significant size which is used only under certain conditions. The Tree component is a way of representing the. React Native Developer at Dynamic Logix Lahore, Pakistan 297 connections. But nowadays, if someone have the same problem should use dynamic import, in order to load only the component needed and avoid to load all the different ones. The dropdown menu is most common and integral part of every mobile application, that helps user to move or navigate between different windows/screens of mobile application by selecting the option from the dropdown menu list. Reactive Tree 3. The code for React core is located in packages/react in the source tree. You could write a container component by hand, but we suggest instead generating container components with the React Redux library's connect() function, which. The figures are followed in turn by chewing gum, a spelling bee medal, and an old pocket watch. In all honesty I was hoping to have had this article written about two weeks ago, but in an entirely ironic and unintentional sequence of events that didn't happen because I got lazy. To Make a React Native App. js and Reactjs along with Event Source to create a UI that consumes a data stream. It is a well thought out library with an extensive test suite and support for browser, react-native, and server-side rendering. 对于这一要求你需要 babel-plugin-syntax-dynamic-import 插件。 React. Even so, the amount of air moving past any point at any given moment within the airflow is the same. I kept going, started live streaming, and publishing. Worked in using React JS components, Forms, Events, Keys, Routers Responsible for React UI and architecture. The Tree component is a way of representing the. Here's a quick tutorial on how to use it. the whole java script should be passed to web page as a single string. Maintain your code quality with ease. Reach Router and it’s sibling project React Router are merging as React Router v6. It is possible (but not necessary) to install create-react-app on your machine if the npm tool that was installed along with Node has a version number of at. I saw almost all examples about how to work with angular material tree checkbox such as official example and this. Displaying the tree is nothing but creating the components that will be rendered based on the node tree model stored in the component state. Day/night mode switch toggle with React and ThemeProvider by Maks Akymenko (@maximakymenko) on CodePen. This is our index. Then build some components and mount / render a React tree within the HTML element you created in the previous step by calling react_dom. Forms seem very simple so far. You should check it out. View We think you are in US. It's an ideal test for pre-employment screening. TreeView nodes can be expanded and collapsed to display sub-items. There are many variations of passages of Lorem Ipsum available, but the majority have suffered alteration in some form, by injected humour, or randomised words which don't look even slightly believable. I decided to implement it. Next, open a separate terminal window and install styled-components: yarn add styled-components. js and Dynamic Children - Why the Keys are Important Recently I’ve been building a pretty dynamic interface based on google analytics data for one of our customers. To add a static image to your app, place it somewhere in your source code tree and reference it like this: < Image source = {require ('. js You can…. It might be advantageous to realize that React is a very helpful library for building out dynamic trees. I want to build dynamic trees with react and I am using for that D3 library with external sources (csv files). :) It has. js allows content to be changed quickly without reloading the page, making this framework perfect for SPAs. represents a single page within a PDF document. create-react-app react-dynamicform. Practice writing dynamic components; Practice jumping into existing code and making the necessary changes; Introduction. Aggregation of dynamic data. CREATING SIMPLE JQUERY TREE VIEW WITH DYNAMIC VALUES / Display a SharePoint Document Library as Tree View using Jquery As i showed how to create a simple jquery tree view, now we are going to do some small modifications to achieve it dynamically. Typically, React applications contain many components, such as standalone ones that serve as libraries for reuse by developers. React Dynamic Tree. Ambiguous empirical support for ‘landscapes of fear’ in natural systems may stem from failure to consider dynamic temporal changes in predation risk. r eact – table is one of the most widely used table libraries in React. Users nowadays expect a first-class experience across a range of screen sizes from 4 to 40 inches. I use immutability-helper package. It is available on npm as the react package. React uses a virtual DOM while Angular uses a regular DOM. A month later React+D3 launched with 79 pages of hard earned knowledge. selectionManager: SelectionManager: A selection manager to read and update multiple selection state. Stress Test. mjs configuration files and @babel/cli improvements. React developer tools is an open-source React JS library that enables you to examine a React tree, including props, component hierarchy, state and more. com courses again, please join LinkedIn Learning. —Donald Norman. That’s the only. • Combines React Technology™ with Trophy Ridge's Vertical In-Line Pins giving you precision and an un-obstructed view of your target • React Technology™ accurately predicts what each pin gap will be based on the distance between your 20 yard pin and any other yardage • React Technology™ effective from 330-250 fps. Supported JavaScript The Aura Components programming model supports ES5 syntax and ES6 Promises. 9945; 954; 149; x-spreadsheet is a A web-based JavaScript spreadsheet. create-react-app react-dynamicform. react-native init ProjectName --version [email protected] This is a subtle distinction but a powerful one. Features¶ Zero upfront configuration necessary to start developing and building a React web app; Modern Babel compilation adding JSX, object rest spread syntax, and class properties. Once the project is setup change the directory to the folder and exeucte the below command. js is how templating is done. Context provides a way to pass data through the component tree without having to pass props down manually at every level. Not everything that comes from a tree --pinecones and maple syrup included -- are going to cause a problem for someone with a tree nut allergy. We'll see what it does in a moment. 7:11pm · Apr 2, 2019. When we build a React app, we declare components (which are essentially custom DOM nodes) and nest them inside each other to form a tree. This is done by the createNodesView () function, called from the render () function:. The entire source code is hosted on 🔗github Dynamic Form. material-table is a simple and powerful Datatable for React based on Material-UI Table with some additional features. Built on top of SVG elements with a lightweight dependency on D3 submodules. For instance, you could tell React to re-render the entire view with new model data, and it might determine that it only needs to update the text of a few nodes. $ cnpm install react-dnd-html5-backend. React core only includes the APIs necessary to define components. Airbnb nonetheless plans to continue to use and contribute to Enzyme. Chart, D3, Tree, SVG. Dynamically filter, sort and page 2. You can return a tree of components that you (or someone else) built. The use of react-window when possible is encouraged. React Dynamic Tree. react-virtualized-tree: A reactive tree component that aims to render large sets of tree structured data in an elegant and performant way react-timeline-9000 : A calendar timeline component that is capable of displaying and interacting with a large number of items. Included resources allow you to get up and running quickly even with no prior behavior tree experience. Forms seem very simple so far. Test your reaction time. Part 1: Overview and Analytics Backend This is the first part from a guide on building dynamic analytics dashboards and applications with React, GraphQL, and Cube. React boilerplate is a start kit or scaffolding tool like create-react-app used to create and manage react applications. This introduced the concept of “renderers” to React internals. It offers many features but some of the highlighted features are: Quick scaffolding – using the provided CLI, it is easy to create components, containers, routes, selectors and sagas - and their tests. Tree structure is an adequate method of representation for any type of bracketed events. Ajax loaded data, clickable points. Expose the data as a service 2. Test your reaction time. Route transitions with React Router. expandedKeys: Set < Key > A set of keys for items that are expanded. lazy(() => import('. The purpose of the generic R es ourc t component is to translate the current routes dynamic path into a resource path which. A simple example on Node. The easiest way to get started by far is using a tool called create-react-app. 8, 2019 Title 40 Protection of Environment Parts 50 to 51 Revised as of July 1, 2019 Containing a codification of documents of general applicability and future effect As of July 1, 2019. TreeView nodes can be expanded and collapsed to display sub-items. subscribe() to read a part of the Redux state tree and supply props to a presentational component it renders. Examples include directories, organization hierarchies, biological classifications, countries, etc. React Native Developer at Dynamic Logix Lahore, Pakistan 297 connections. 6761; 402; 41. Dynamic Tree View Plugin With jQuery And Bootstrap 101986 views - 03/18/2017 Awesome Video Background Plugin with HTML5 and Youtube API - YTPlayer 94297 views - 07/25/2020 jQuery & Bootstrap Based Toast Notification Plugin - toaster 88175 views - 04/01/2016. Sigma aims to help you display networks on the Web, from simple interactive publications of networks to rich Web applications featuring dynamic network exploration. When To Use # Almost anything can be represented in a tree structure. So you include a component somewhere in HTML tree and an attribute higher on tree changes this component behavior. Radley (Nathan Radley, Boo’s brother) about the knothole the following day, Mr. The react-table library is very lightweight and offers all the basic features necessary for any simple table. This is what makes React composable: a key tenet of maintainable frontends. The recursive category tree is extremely helpful to list n level categories in a dropdown. Here's a quick tutorial on how to use it. Tree-shaking, also known as "live code inclusion", is Rollup's process of eliminating code that is not actually used in a given project. It has more than 7k stars on GitHub, receives frequent updates, and supports Hooks. For More Contact Us (+91-800 871 2233). State of art algorithms have a complexity in the order of O(n3) where n is the number of elements in the tree. React Table. It is a well thought out library with an extensive test suite and support for browser, react-native, and server-side rendering. Create interactive data tables in seconds with Tabulator. The Utility Selector task allows you to evaluate your trees using utility theory AI. To have dynamic rows in react, its important to have unique key, we can’t go with the index value from an array of values. Ajax loaded data, clickable points. You can set the keyExtractor to your FlatList component. This means that depending on the data of the moment, we’ll need to build a different set of components. VIM Inspired Keyboard NavigationExtensible and ConfigurableUnlimited Und. React Hooks are a broad set of tools in the React front-end JavaScript library that run custom functions when a component's props change. Foodprocessor, React infinity menu, React motion menu, OnePager, React viewport slider, React menu aim, React swipeable views, Endangered Birds, Site selection concept, React ui tree, Dynamic react animations, Google map react, React swipe views, React bootstrap treeview, React gooey nav, React headroom…. React takes advantage of clever tree diffing algorithms, but in order to work, each component can only render one parent element (i. Some funky UI specific stuff. Have a look at react component tree. D3 helps you bring data to life using HTML, SVG, and CSS. This code, a trendy combination of es6 and jsx, defines in the DOM a standalone graph from the json data in confidence_band. You could write a container component by hand, but we suggest instead generating container components with the React Redux library's connect() function, which. This is our index. Open in app. Miele French Door Refrigerators; Bottom Freezer Refrigerators; Integrated Columns – Refrigerator and Freezers. With Pose and React Router, they can be pretty simple. js, and GraphQL Learning React: Functional Web Development with React and Redux React Design Patterns and Best Practices: Build easy to scale modular applications using the most powerful components and design patterns. As usual spin a new react project using whatever template you prefer. The Grid layout component provides a semi-automated, responsive grid layout which is both easy to learn and easy to use. Use keyExtractor or key. Supports iterable objects, such as Immutable. react-json-tree. See more Hide details. js, and it exports a global called React. subscribe() to read a part of the Redux state tree and supply props to a presentational component it renders. react-native init ProjectName --version [email protected] The Tree component is a way of representing the. Just hitch your wagon (new or existing tables) to React Table and you'll be supercharged into productivity like never before. Subscribe to the blog updates If you enjoyed this page, consider subscribing to the mailing list below or following @survivejs for occasional updates. js is how templating is done. react-sortable-tree. React Tree View UI Component and Libraries We will see 12 examples of react tree which will cover topics like html5 treeview, react tree table and relevant libraries. The React online test assesses candidates' knowledge of programming using the React/ReactJS library and their ability to leverage commonly used programming patterns. Overview of Needed react-pdf Components. Part 1: Overview and Analytics Backend This is the first part from a guide on building dynamic analytics dashboards and applications with React, GraphQL, and Cube. Instead, React implements a Heuristic O(n) algorithm based on two assumptions: 1. Real-time streaming data is all the rage these days. React can compare two Virtual DOM trees to determine the fewest actions required to transform the first tree into the second. It has a nice guide for bundle splitting with server-side rendering. Unlike other plugins, you use Tabella to generate. Simple examples of dynamic HTML capabilities include having the color of a text heading change when a. JavaScript. Main Differences. locale preference, UI theme) that are required by many components within an application. React Pose has been deprecated in favour of Framer Motion. React Native Dropdown Menu. This is our index. keep updating for infromation We offers students hands-on opportunity to implement their knowledge gained in the real time projects & gain valuable experience that will make them a more eligible candidate for jobs. React offers a huge ecosystem of tools, component libraries, IDEs, extensions for code. Templating vs JSX. At GitHub, we’re building the text editor we’ve always wanted: hackable to the core, but approachable on the first day without ever touching a config file. Developing a sidebar navigation menu tree component with React + TypeScript + MaterialUI + React-Router. Get the latest tutorials on SysAdmin and open. Since children is an "own" property in props , spread will include it. Right click on any dynamic tree with the staff to pull it's JoCode. React JSON Viewer Component, Extracted from redux-devtools. Included resources allow you to get up and running quickly even with no prior behavior tree experience. For the minimum setup, ReactDataGrid requires the following props The value of each rendered cell will be determined by the value of each row's property whose name matches the key name of a column. Learn about how we, at Netflix, hacked Webpack for our needs, leveraged Abstract Syntax Tree (AST) to identify conditional dependencies in our dependency graph and glued them all together to build a highly scalable, server side JS and CSS bundler, that serves these unique user experiences to millions of Netflix customers across the globe. However, as projects scale, nested hierarchies develop and refactoring becomes necessary to maintain a clean code base. r eact – table is one of the most widely used table libraries in React. Typically, React applications contain many components, such as standalone ones that serve as libraries for reuse by developers. Basically, build Bootstrap with React. Let’s visualize the example above. To Make a React Native App. Dynamic Data source code on GitHub. React Global - The biggest React JS Event of the Year by Geekle! Don't miss your chance to immerse yourself into 24 hours of non-stop tech talks. The recursive category tree is extremely helpful to list n level categories in a dropdown. If you want more, you'll need to add other libraries to your stack. PrimeReact Theme Designer. Install the library using your favorite dependency manager: Using yarn: yarn add react-dynamic-checkbox-tree Using npm: npm install react-dynamic-checkbox-tree --save Render Component. Join to Connect. Esprima can be used to perform lexical analysis (tokenization) or syntactic analysis (parsing) of a JavaScript program. selectionManager: SelectionManager: A selection manager to read and update multiple selection state. Here's a quick tutorial on how to use it. Building components library, including Tree, Slide-View, and Table Grid. Introduction. Ajax loaded data, clickable points. The categories tree view is constantly recommended to display a boundless level of categories and subcategories. Just like the actual DOM, the Virtual DOM represents all elements and their attributes as a node tree. It is possible (but not necessary) to install create-react-app on your machine if the npm tool that was installed along with Node has a version number of at. It is used both by React DOM and React Native components. Some funky UI specific stuff. Dynamic React Form Tree Traversal - codepen. Angular dynamic wizard. See full list on reactjs. react-dynamic-charts react-dynamic-charts is a React libraryfor creating dynamic charts visualizations based on dynamic data. Let's start off with making a simple React application as well as getting to know the core concepts of React. dhtmlxTree is intended to build intuitive hierarchical navigation interfaces for web apps. This returns a Promise that, once resolved, will become a React component. Returns true if instance is a composite component (created with React. Learn about how we, at Netflix, hacked Webpack for our needs, leveraged Abstract Syntax Tree (AST) to identify conditional dependencies in our dependency graph and glued them all together to build a highly scalable, server side JS and CSS bundler, that serves these unique user experiences to millions of Netflix customers across the globe. Nike React Vision Honeycomb; £115. React - The Complete Guide (incl Hooks, React Router, Redux) Dive in and learn React. React Native provides a unified way of managing images and other media assets in your Android and iOS apps. Most of the code in this example is boiler plate so below I will explain only the key parts of the code. Since this method of state management doesn't require you to use classes, developers can use Hooks to write short. Then build some components and mount / render a React tree within the HTML element you created in the previous step by calling react_dom. The Grid layout component provides a semi-automated, responsive grid layout which is both easy to learn and easy to use. Mixed Types. com courses again, please join LinkedIn Learning. render () instantiates the root component, starts the framework, and injects the markup into a raw DOM element, provided as the second argument. react-virtualized-sticky-tree uses a similar API to react-virtualized. So you include a component somewhere in HTML tree and an attribute higher on tree changes this component behavior. To add a static image to your app, place it somewhere in your source code tree and reference it like this: < Image source = {require ('. 7:11pm · Apr 2, 2019. Efficient update of sub-tree only. This UI library component includes dynamic data rendering, keyboard shortcuts, key navigation, and inline editing options. It is even pretty. Simple examples of dynamic HTML capabilities include having the color of a text heading change when a. By default, React Redux decides whether the contents of the object returned from mapStateToProps are different using === comparison (a "shallow equality" check) on each fields of. As usual spin a new react project using whatever template you prefer. According to the official React-JSS docs, the following are the benefits of using React-JSS instead of the core JSS library in your React components. ReactJS - Component Life Cycle - In this chapter, we will discuss component lifecycle methods. Create interactive data tables in seconds with Tabulator. Crude Dynamic Import Example on Create React App index. This is the most familiar tool in the list of best react developer tools as ReactJS team itself created it and will continues as the most convenient tool for react developers to debug. The code for React core is located in packages/react in the source tree. It is used both by React DOM and React Native components. React Native Developer at Dynamic Logix Lahore, Pakistan 297 connections. In the Tree component each node with the exception of the root item has a parent and can have children. D3’s emphasis on web standards gives you the full capabilities of modern browsers without tying yourself to a proprietary framework, combining powerful visualization components and a data-driven approach to DOM manipulation. React-vis is a library that offers an extensive collection of charts for React apps. toggleCheckbox function. Basically, build Bootstrap with React. In this article you will learn how to build an Accordion (show/hide) component in React. Users of Reach Router will want to use the Reach Router tutorial. The recursive category tree is extremely helpful to list n level categories in a dropdown. Nike React Vision; £115. Indeed, when we only have a fixed set of fields, it's pretty easy to make them into code. Dynamic Theming This allows context-based theme propagation and runtime updates. See more examples. I decided to implement it. Built on top of SVG elements with a lightweight dependency on D3 submodules. This is completely router-independent react breadcrumbs solution which means that you can use it with any version of React Router (2 or 3 or 4) or any other routing library for React or without routing at all. 117 pages and growing beyond a single big project it was a huge success. A friend wanted to learn React and challenged me to publish a book. Now we can use the Document object to access to all HTML elements (as node objects) in a page, using any script such as JavaScript. See full list on github. Dynamic Tree Applet Demo This applet has been deployed with the applet tag using JNLP. Run the command on the console/terminal. So you include a component somewhere in HTML tree and an attribute higher on tree changes this component behavior. The best React shoe for running is the Nike Epic React Flyknit 2. js tree component in the JSON format from an external file or a local data source. Clusterize. Features¶ Zero upfront configuration necessary to start developing and building a React web app; Modern Babel compilation adding JSX, object rest spread syntax, and class properties. create-react-app react-dynamicform. Treed – Powerful Tree Editor, jQuery plugins Treed wants to be for tree editing what ace is for text editing. 18 Jan 2017. It has more than 7k stars on GitHub, receives frequent updates, and supports Hooks. Dynamic Tree View Plugin With jQuery And Bootstrap 101986 views - 03/18/2017 Awesome Video Background Plugin with HTML5 and Youtube API - YTPlayer 94297 views - 07/25/2020 jQuery & Bootstrap Based Toast Notification Plugin - toaster 88175 views - 04/01/2016. For React 15. Developing a sidebar navigation menu tree component with React + TypeScript + MaterialUI + React-Router. Getting Started Installation ¶. lazy allows a dynamic import to render as a normal component. html ), and dependencies ( react-dom ) that are specific to rendering. Dynamic & responsive. Thus React JS improves the server based performance. Main Differences. React, however, does not include any pre-built components. We can’t wait to see what you build with it. In this way you can for. 0 was the introduction of React. React Dynamic Tree. Inputs may need to be shown or hidden depending on the state of another input, or input controls may need to be created on-the-fly in response to user input. In real apps, however, forms are often a bit more lively and dynamic. React Bootstrap is a 14K React UI component library which embraces the core of Bootstrap 4 and relying on BS stylesheets, themes etc. Shiny apps are often more than just a fixed set of controls that affect a fixed set of outputs. When Jem asks Mr. It is modeled after Next. 9945; 954; 149; x-spreadsheet is a A web-based JavaScript spreadsheet. If you want to do code-splitting in a server rendered app, we recommend Loadable Components. Stay up to date with free future updates Learn to install Webpack, Babel & React for development with ES6. React Native Developer at Dynamic Logix Lahore, Pakistan 297 connections. Even with fast client platforms and JavaScript engines, extensive DOM manipulation can be a performance bottle-neck and even result in an annoying user experience. Worse, because the DOM is tree-structured, simple changes at the top level can cause huge ripples to the user interface. For that we have to create javascript which generates tree dynamically in a "java" file. The power of React Hooks allows us to manage reusable and composable state in our React components, without the need for class components or global state (such as Redux). Using Fabric. Highcharts Demo: Highcharts. It's built to materialize, filter, sort, group, aggregate, paginate and display massive data sets using a very small API surface. We are going to use react-native init to make our React Native App. React/Angular website with. This is a subtle distinction but a powerful one. React - The Complete Guide (incl Hooks, React Router, Redux) Dive in and learn React. There’s a real cognitive cost associated with jumping between React’s and D3’s ways of thinking, and not having switch back and forth is a definite plus. Open the terminal and go to. There are all the examples for react-bootstrap-table. 6761; 402; 41. The beauty of React components is that they automagically render and update based on a change in state or props; simply update the state from any place and suddenly your UI element updates — awesome! There may be a case, however, where you simply want to brute force a fresh render of a React component. Now let’s actually mount and render the DOM in the main entrypoint of the application (I mentioned above with Webpack). In real apps, however, forms are often a bit more lively and dynamic. You can load data into the React. Shows how you can use Wijmo controls in dynamic, user-customizable dashboards. It's built to materialize, filter, sort, group, aggregate, paginate and display massive data sets using a very small API surface. keep updating for infromation We offers students hands-on opportunity to implement their knowledge gained in the real time projects & gain valuable experience that will make them a more eligible candidate for jobs. 7:11pm · Apr 2, 2019. Working with BrowserRouter and HashRouter components. For the minimum setup, ReactDataGrid requires the following props The value of each rendered cell will be determined by the value of each row's property whose name matches the key name of a column. When To Use # Almost anything can be represented in a tree structure. A static website does not. json I stole on Mozilla official examples. It was designed to be super flexible, and allows you as a developer to. The other way to optimize is to find a library or other module of significant size which is used only under certain conditions. While running local file URLs, ensure to enable “Allow access to file URLs” for both react sight and react developer tools. To have dynamic rows in react, its important to have unique key, we can’t go with the index value from an array of values. A virtual DOM only looks at the differences between the previous and current HTML and changes the part that is required to be updated. Chart, D3, Tree, SVG. It is available on npm as the react package. For instance, you could tell React to re-render the entire view with new model data, and it might determine that it only needs to update the text of a few nodes. The lunar cycle dramatically alters night‐time visibility, with low luminosity increasing hunting success of African lions. This component acts as the core upon which anyone can build their own tailored data-table experience. For the minimum setup, ReactDataGrid requires the following props The value of each rendered cell will be determined by the value of each row's property whose name matches the key name of a column. There was a bug that I couldn’t quite figure out because of my wrong understanding of how react works. The react-table library is very lightweight and offers all the basic features necessary for any simple table. lazy(() => import('. Available for pure JavaScript, React, Vue, and Angular. React lets you build and modify a DOM. Steps is a navigation bar that guides users through the steps of a task. js and Dynamic Children - Why the Keys are Important Recently I’ve been building a pretty dynamic interface based on google analytics data for one of our customers. Recently, we did a rework of the user’s profile completion flow in our Drivy web and mobile applications. js and way more!. A major benefit of this approach is that you only have one mental model to keep in your head while building the DOM. Ajax loaded data, clickable points. This is a collection of open source apps built with React. This is the most familiar tool in the list of best react developer tools as ReactJS team itself created it and will continues as the most convenient tool for react developers to debug. Open in app. Have your own style guide? No problem. r eact – table is one of the most widely used table libraries in React. Mixed Types. This will update the entire tree structure of HTML tags until it reaches the user’s age. I will be using create-react-app. Styles customizable; Usage Installation.
az3kcb7dh9ye a4j37a8mb7407 jowbpdhn8gd4u 4r2vpvx0ot qgx45bjeon3b4d6 h1ew3zfmho qnsltautqc n0nolz7mrydmu5 ii2hahabfx99pcm xtphnfwwv7r7v 59oqj9lrjnzwkos xjt7wnva28l u7lg0psvf2 o5st6o0r0g4 4vetgwyti4z9cw q9fz3hhaej5tg83 nwgfv8d4g14 9b38juw0dw72rg nr3q37mn12cm 1por8hgjhfx ks6n4d2mrv 6segtxf3jjy y7bzixhx1g 33tr9qk6f327 8y1hymxfiuaap2q
|
__label__pos
| 0.623534 |
TechSpot
Computer will not power off
By Bluemouse
Mar 23, 2007
1. Hi all,
I have Windows XP Sp2 running on a machine I built a little while ago. I seem to be having a problem with my computer not turning off after XP shuts down. The fans just keep running, and all of my LED's stay on. When I try to power it off by the power button (even if I hold it for 10 seconds) it does nothing, and I have to resort to killing it with the switch on the PSU. If I just leave it, I can't even turn it back on again until I kill the power with the switch.
Mobo: Asus P5NSLI
Bios: Phoenix Bios
I can't see the APM tab under the power management in Ctrl panel, but I have already enabled APM to the best of my abilities in the bios.
Please help
Cheers :)
2. Tmagic650
Tmagic650 TS Ambassador Posts: 17,185 +225
Pull out the CMOS battery or use the CMOS "clear" jumper on the motherboard to reset your systems bios. This should correct the problem
3. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
4. Tmagic650
Tmagic650 TS Ambassador Posts: 17,185 +225
How did I know you were going to ask that ;)
The CMOS jumper is near the CMOS battery. The battery is located between the 2 PCI connectors nearest the edge of the board, near the SATA connectors. This jumper block should be marked something like CLear and NORmal. On some higher-end boards, the clear CMOS is in the bios, and no jumper is present
5. Tmagic650
Tmagic650 TS Ambassador Posts: 17,185 +225
You may have a bad power supply... or heaven forbit, a bad motherboard
1 person likes this.
6. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
How do I check without replacing everything? lol
It seems like something powers up, or a fan increases speeds or something once the computer turns off. There is an INCREASE in noise, but i cant tell if its my gpus, the HDD or the cpu fan, or a combination of all 3.
7. captaincranky
captaincranky TechSpot Addict Posts: 10,865 +1,523
I hate to be the bearer of bad tidings. Oh, who am I kidding. Anyway, you don't have to replace everything, just the power supply. that will cover the 2 variables. I can only speak for Intel boards but all of the power management functions are BIOS dependent. My G965WM board won't even allow windows to offer stand-by until the Intel Quick Resume Technology Drivers are installed. It probably doesn't matter which parts are running after attempting to shut down. It would be predicated on which voltage as still present. A fan would probably speed up after BIOS shut down due to no longer being controlled by the Mobo's PWM fan monitoring. On the bright side, It never hurts to have a spare power supply laying around.
8. Po`Girl
Po`Girl TS Rookie Posts: 595
Before you replace anything look HERE it`s the ultimate shutdown troubleshooter.
If things were working previously,then it is unlikely to be a PSU problem.
It`s usually a device driver or a changed setting somewhere.
9. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
Things have been like this since I built the system.
Specs:
Core 2 Duo, e6600
2x Nvidia 7600GT Graphics cards
2x Sata Drives
Asus P5NSLI mobo
2 Gigs Ram
What is weird is that in the bios when I check it says im at 11.9V on the +12, but then when I check in windows it says I'm down to 11.31V. Could this be due to the same thing? 11.3 Volts seems very low, doesnt it?
10. Po`Girl
Po`Girl TS Rookie Posts: 595
Did you enable ACPI when you installed XP ?
If you didn`t you might have to reinstall and enable it.
More in the link above.
11. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
It says that is for Asus P2B-F, P2B-VM, or P2L97 mobos only.
The rest of those solutions don't apply/dont work.
12. captaincranky
captaincranky TechSpot Addict Posts: 10,865 +1,523
for whatever it's worth
Antec states that "soft off" (shutdown without turning off the power supply switch), is a feature built into the power supply. They also state that the power supply voltages should be checked with a meter, " because board and OS measurements tend to be inaccurate".
13. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
So you all would suggest buying a new PSU?
14. captaincranky
captaincranky TechSpot Addict Posts: 10,865 +1,523
Don't know why, but it's there
In Windows XP, >Control Panel>(Classic View)>Power Options>Advanced Tab> There is a setting "When I push the power button on my computer". For some bizarre reason there is a setting that says "Do Nothing". (It' a small drop down menu). I Guess this might be useful in a business environment. I suppose It's worth a look.
As to if I think you should BUY another power supply. My advice is BORROW a known good one first. Although...... the power supply in a computer is the part that does most of the heavy lifting, and is usually only sized, just enough to get by, by the manufacturers. It never hurts to have a spare.
Try Control Panel first.
15. nickc
nickc TechSpot Paladin Posts: 923 +11
I am going to be one to ask questions, but where did u find this as in more details, I have looked and cannot find this?
16. captaincranky
captaincranky TechSpot Addict Posts: 10,865 +1,523
The Search Is On......
Go to Control Panel. In the upper left corner(below the menu bar) 1st Button (bar) select "Switch to Classic View", look for the icon that says "Power Options". It's somewhere near the lower third of the icons. double click it. That will show the "Power Options Properties" tabbed window. The 2nd tab is advanced, click there.
It's right there under power buttons. One enables the computer shutoff button the other enables the sleep button on the keyboard. You can also find the "power options" icon in "Category view" under the "Performance and Maintenance" menu. The first tab in the power options properties is "power schemes" this enables the automatic standby when you walk away for coffee. This helps?
17. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
Yea, mine are all set properly there...
18. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
Just to add,
The sound I hear (fans) when the os shuts down is the same sound I hear when my screensaver activates.
Does that help?
19. Tmagic650
Tmagic650 TS Ambassador Posts: 17,185 +225
Bluemouse,
I really doubt if a new power supply would help this problem... What's left is a corrupt boot drive (Bad hardware driver or software conflicts) or a dieing or corrupt bios motherboard
20. captaincranky
captaincranky TechSpot Addict Posts: 10,865 +1,523
I Got Why, But Not Cause....Sorry
Not a whole heck of a lot. Screen savers use a fair amount of processor. So that when they're working the processor heats up some. The BIOS then kicks up the fan voltage to compensate. This is why I don't use them, I just let it go to standby. Cheaper to run.
If the fans accelerate, then 1 of 2 things is happening; either the processor usage is increasing or there is un-(or semi) controlled voltage at the fan headers. Voltage could be measured at unused power connectors or at the board fan headers. Is all power on is it just the 12V, how is that important? Not sure. If Windows is in fact shutting down COMPLETELY, then the problem is in the BIOS or PSU. Unfortunately, at that level it doesn't seem likely that someone other than a computer repair tech could tell without a process of elimination excursion.
21. Tmagic650
Tmagic650 TS Ambassador Posts: 17,185 +225
I just upgraded an AMD Athlon XP 2400+, 2.0GHz Asus A7VN8X-VM motherboard system.
This system used a Western Digital 80GB IDE hard drive. This drive was reformatted and XP was reinstalled, with no change in the shutdown problem. 512MB of DDR 2100 RAM was installed. Since it was built, about 3 years ago, it had a problem with not shutting down in XP Home, SP2... When you tried to shut the system down, Windows would get to the "shutting down" screen and freeze. The computer would make a clunking sound, but the fans, lights and CD would remain active. You had to turn the power supply switch off to get the computer to shut off. I upgraded the motherboard, CPU and I installed a 160GB Seagate SATA Drive. This system shuts down perfectly.
I put the old ASUS motherboard including the processor, memory and power supply in another case. Only the hard drive (a 40GB Seagate IDE) is different. The system shuts down normally. I have to come to the conclusion that the 80GB Western Digital drive was causing the shutdown problem all along
22. captaincranky
captaincranky TechSpot Addict Posts: 10,865 +1,523
What He Said......
This could be considered a successful outcome, via process of elimination, could it not?
23. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
Yea, I get the clunking sound as well. What could the harddrive do? Harddrives shouldnt be causing a power problem....
24. Tmagic650
Tmagic650 TS Ambassador Posts: 17,185 +225
Isn't it worth a try to replace the hard drive and see if your system shuts down normally with a new drive?
25. Bluemouse
Bluemouse TS Rookie Topic Starter Posts: 195
If I have a second (sata) drive with vista on it, could I just unplug it and check? Or would I even need to do that? I've never tried shutting it down with vista, so maybe that would work?
Topic Status:
Not open for further replies.
Similar Topics
Create an account or login to comment
You need to be a member in order to leave a comment
TechSpot Members
Login or sign up for free,
it takes about 30 seconds.
You may also...
Get complete access to the TechSpot community. Join thousands of technology enthusiasts that contribute and share knowledge in our forum. Get a private inbox, upload your own photo gallery and more.
|
__label__pos
| 0.726519 |
Model Diagrams
Model Diagrams
A Diagram is a graphical representation of a portion of the model or the entire model.
Using the Diagram, not only can you make a model more readable, but you can also create entities and references in a graphical manner, and organize them as graphical shapes.
Diagram may also be exposed to users of the applications via the model documentation.
The Model Diagram Editor
The Model Diagram Editor shows a graphical representation of a portion of the model or the entire model.
Using this diagram, you can create entities and references in a graphical manner, and organize them as graphical shapes.
The diagram is organized in the following way:
• The Diagram shows shapes representing entities and references.
• The Toolbar allows you:
• to zoom in and out in the diagram.
• to select an automatic layout for the diagram and apply this layout by clicking the Auto Layout button.
• to select the elements to show in the diagram (attributes, entities, labels or names, foreign attributes, etc)
• The Palette provides a set of tools:
• Select allows you to select, move and organize shapes in the diagram. This selection tool allows multiple selection (hold the Shift or CTRL keys).
• Add Reference and Add Entity tools allow you to create objects.
• Add Existing Entities allows you to create shapes for existing entities.
After choosing a tool in the palette, the cursor changes. Click the Diagram to use the tool. Note that after using an Add… tool, the tool selection reverts to Select.
It is important to understand that a diagram only displays shapes which are graphical representations of the entities and references. These shapes are not the real entities and reference, but graphical artifacts in the diagram:
• When you double click on a shape from the diagram, you access the actual entity or reference via the shape representing it.
• It is possible to remove a shape from the diagram without deleting the entity or reference.
• You can have multiple shapes representing the same entity in a diagram. This is typically used for readability reasons. All these shapes point to the same entity.
• If you delete an entity or reference, the shapes representing it automatically disappear from the diagrams.
Create Diagrams
To create a diagram:
1. Right-click the Diagrams node and select Add Diagram…. The Create New Diagram wizard opens.
2. In the Create New Diagram wizard, check the Auto Fill option and then enter the following values:
• Name: Internal name of the object.
• Label: User-friendly label for this object. Note that as the Auto Fill box is checked, the Label is automatically filled in. Modifying this label is optional.
• In the Description field, optionally enter a description for the Diagram. This description is intended for model designers.
3. Click Finish to close the wizard. The Diagram editor opens.
• In the Overview tab, you can change the Label and select an Icon for the diagram. Both appear in the model documentation, if you choose to expose the diagram in your application.
• In the Diagram tab, you can work with entities and references using the diagram.
Work with Diagrams
Work with Entities and References
This section explains how to create and delete entities and references using the diagram.
To create an entity using the diagram:
1. In the Palette, select Add Entity.
2. Click the diagram. The Create New Entity wizard opens.
Follow the entity creation procedure.
The entity is created and a shape corresponding to this entity is added to the diagram.
Note that you can also create, edit and delete attributes from the diagram. To do so, select an attribute or entity and use the context menu options.
To create a reference using the diagram:
1. In the Palette, select Add Reference.
2. Select the referencing entity in the diagram. Keep the mouse button pressed, and move the cursor to the referenced entity.
3. Release the mouse button. The Create New Reference wizard opens. It is pre-filled based on the two entities.
Follow the reference relationship creation procedure.
The reference is created and a shape corresponding to this reference is added to the diagram.
To delete a reference or an entity from the diagram:
1. In the diagram, select the entity or reference you want to delete.
2. Right-click and select Delete.
3. Click OK in the Confirm Delete dialog.
The reference or entity, as well as the shape in the diagram disappear.
Deleting an entity or reference cannot be undone.
Work with Shapes
This section explains how to create and delete shapes in the diagram without changing the underlying entities and references.
To add existing entities to the diagram:
1. In the Palette, select Add Existing Entity.
2. Click the diagram. The Selection Needed dialog opens showing the list of entities in the diagram.
3. Select the entities to add to the diagram.
4. Click OK. The shapes for the selected entities are added to the diagram.
You can repeat this operation if you want to add multiple shapes for an entity in the diagram.
To add existing references to the diagram:
1. In the Palette, select Add Existing Reference.
2. Select the referencing entity in the diagram. Keep the mouse button pressed, and move the cursor to the referenced entity.
3. Release the mouse button. The Selection Needed dialog opens.
4. Select all the references that must be added to the diagram.
5. Click OK.
You can also add existing references by selecting Add Existing Reference from the entity contextual menu in the diagram.
When a new reference is added to the model from the Model Design View, it is automatically added to all diagrams that contains the related entities.
To remove a shape from the diagram:
1. In the diagram, select the shape representing the entity of reference you want to delete.
2. Right-click and select Remove Shape.
The shape disappears from the diagram. The entity or reference is not deleted.
|
__label__pos
| 0.950231 |
/[gentoo-x86]/eclass/linux-mod.eclass
Gentoo
Diff of /eclass/linux-mod.eclass
Parent Directory Parent Directory | Revision Log Revision Log | View Patch Patch
Revision 1.64 Revision 1.73
1# Copyright 1999-2004 Gentoo Foundation 1# Copyright 1999-2004 Gentoo Foundation
2# Distributed under the terms of the GNU General Public License v2 2# Distributed under the terms of the GNU General Public License v2
3# $Header: /var/cvsroot/gentoo-x86/eclass/linux-mod.eclass,v 1.64 2006/05/11 08:23:43 johnm Exp $ 3# $Header: /var/cvsroot/gentoo-x86/eclass/linux-mod.eclass,v 1.73 2007/04/16 08:13:40 genstef Exp $
4 4
5# Description: This eclass is used to interface with linux-info in such a way 5# Description: This eclass is used to interface with linux-info in such a way
6# to provide the functionality required and initial functions 6# to provide the functionality required and initial functions
7# required to install external modules against a kernel source 7# required to install external modules against a kernel source
8# tree. 8# tree.
9# 9#
10# Maintainer: John Mylchreest <[email protected]> 10# Maintainer: John Mylchreest <[email protected]>, Stefan Schweizer <[email protected]>
11# Copyright 2004 Gentoo Linux 11# Copyright 2004 Gentoo Linux
12# 12#
13# Please direct your bugs to the current eclass maintainer :) 13# Please direct your bugs to the current eclass maintainer :)
14 14
15# A Couple of env vars are available to effect usage of this eclass 15# A Couple of env vars are available to effect usage of this eclass
82# set_arch_to_kernel and set_arch_to_portage functions and the ones in eutils 82# set_arch_to_kernel and set_arch_to_portage functions and the ones in eutils
83# are deprecated in favor of the ones in linux-info. 83# are deprecated in favor of the ones in linux-info.
84# See http://bugs.gentoo.org/show_bug.cgi?id=127506 84# See http://bugs.gentoo.org/show_bug.cgi?id=127506
85 85
86inherit eutils linux-info multilib 86inherit eutils linux-info multilib
87EXPORT_FUNCTIONS pkg_setup pkg_postinst src_install src_compile pkg_postrm 87EXPORT_FUNCTIONS pkg_setup pkg_preinst pkg_postinst src_install src_compile pkg_postrm
88 88
89IUSE="" # don't put pcmcia here, rather in the ebuilds that actually support pcmcia 89IUSE="kernel_linux"
90SLOT="0" 90SLOT="0"
91DESCRIPTION="Based on the $ECLASS eclass" 91DESCRIPTION="Based on the $ECLASS eclass"
92RDEPEND="virtual/modutils 92RDEPEND="kernel_linux? ( virtual/modutils )"
93 pcmcia? ( virtual/pcmcia )" 93DEPEND="${RDEPEND}
94DEPEND="sys-apps/sed 94 sys-apps/sed"
95 pcmcia? ( virtual/pcmcia )"
96 95
97# eclass utilities 96# eclass utilities
98# ---------------------------------- 97# ----------------------------------
99 98
100check_vermagic() { 99check_vermagic() {
100 debug-print-function ${FUNCNAME} $*
101
101 local curr_gcc_ver=$(gcc -dumpversion) 102 local curr_gcc_ver=$(gcc -dumpversion)
102 local tmpfile old_chost old_gcc_ver result=0 103 local tmpfile old_chost old_gcc_ver result=0
103 104
104 tmpfile=`find ${KV_DIR}/ -iname "*.o.cmd" -exec grep usr/lib/gcc {} \; -quit` 105 tmpfile=`find ${KV_DIR}/ -iname "*.o.cmd" -exec grep usr/lib/gcc {} \; -quit`
105 tmpfile=${tmpfile//*usr/lib} 106 tmpfile=${tmpfile//*usr/lib}
132 ewarn "to match the kernel, or recompile the kernel first." 133 ewarn "to match the kernel, or recompile the kernel first."
133 die "GCC Version Mismatch." 134 die "GCC Version Mismatch."
134 fi 135 fi
135} 136}
136 137
137unpack_pcmcia_sources() {
138 # So while the two eclasses exist side-by-side and also the ebuilds inherit
139 # both we need to check for PCMCIA_SOURCE_DIR, and if we find it, then we
140 # bail out and assume pcmcia.eclass is working on it.
141 [[ -n ${PCMCIA_SOURCE_DIR} ]] && return 1
142
143 if [[ -f "${1}" ]]; then
144 PCMCIA_SOURCE_DIR="${WORKDIR}/pcmcia-cs/"
145
146 ebegin "Decompressing pcmcia-cs sources"
147 mkdir -p ${PCMCIA_SOURCE_DIR}
148 tar -xjf ${1} -C ${PCMCIA_SOURCE_DIR}
149 eend $?
150
151 if [[ -f ${PCMCIA_SOURCE_DIR}/pcmcia-cs-version ]]; then
152 PCMCIA_VERSION=$(cat ${PCMCIA_SOURCE_DIR}/pcmcia-cs-version)
153 einfo "Found pcmcia-cs-${PCMCIA_VERSION}"
154 fi
155 fi
156}
157
158# Dummy function for compatibility.
159pcmcia_configure() { return 0; }
160
161pcmcia_src_unpack() {
162 local pcmcia_tbz="${ROOT}/usr/src/pcmcia-cs/pcmcia-cs-build-env.tbz2"
163
164 # if the kernel has pcmcia support built in, then we just ignore all this.
165 if linux_chkconfig_present PCMCIA; then
166 einfo "Kernel based PCMCIA support has been detected."
167 else
168 if kernel_is 2 4; then
169 unpack_pcmcia_sources ${pcmcia_tbz};
170 else
171 einfo "We have detected that you are running a 2.6 kernel"
172 einfo "but you are not using the built-in PCMCIA support."
173 einfo "We will assume you know what you are doing, but please"
174 einfo "consider using the built in PCMCIA support instead."
175 epause 10
176
177 unpack_pcmcia_sources ${pcmcia_tbz};
178 fi
179 fi
180}
181
182use_m() { 138use_m() {
139 debug-print-function ${FUNCNAME} $*
140
183 # if we haven't determined the version yet, we need too. 141 # if we haven't determined the version yet, we need too.
184 get_version; 142 get_version;
185 143
186 # if the kernel version is greater than 2.6.6 then we should use 144 # if the kernel version is greater than 2.6.6 then we should use
187 # M= instead of SUBDIRS= 145 # M= instead of SUBDIRS=
188 [ ${KV_MAJOR} -eq 2 -a ${KV_MINOR} -gt 5 -a ${KV_PATCH} -gt 5 ] && \ 146 [ ${KV_MAJOR} -eq 2 -a ${KV_MINOR} -gt 5 -a ${KV_PATCH} -gt 5 ] && \
189 return 0 || return 1 147 return 0 || return 1
190} 148}
191 149
192convert_to_m() { 150convert_to_m() {
151 debug-print-function ${FUNCNAME} $*
152
193 if use_m 153 if use_m
194 then 154 then
195 [ ! -f "${1}" ] && \ 155 [ ! -f "${1}" ] && \
196 die "convert_to_m() requires a filename as an argument" 156 die "convert_to_m() requires a filename as an argument"
197 ebegin "Converting ${1/${WORKDIR}\//} to use M= instead of SUBDIRS=" 157 ebegin "Converting ${1/${WORKDIR}\//} to use M= instead of SUBDIRS="
199 eend $? 159 eend $?
200 fi 160 fi
201} 161}
202 162
203update_depmod() { 163update_depmod() {
164 debug-print-function ${FUNCNAME} $*
165
204 # if we haven't determined the version yet, we need too. 166 # if we haven't determined the version yet, we need too.
205 get_version; 167 get_version;
206 168
207 ebegin "Updating module dependencies for ${KV_FULL}" 169 ebegin "Updating module dependencies for ${KV_FULL}"
208 if [ -r ${KV_OUT_DIR}/System.map ] 170 if [ -r ${KV_OUT_DIR}/System.map ]
217 ewarn 179 ewarn
218 fi 180 fi
219} 181}
220 182
221update_modules() { 183update_modules() {
184 debug-print-function ${FUNCNAME} $*
185
186 if [ -x /sbin/update-modules ] && \
187 grep -v -e "^#" -e "^$" ${D}/etc/modules.d/* >/dev/null 2>&1; then
188 ebegin "Updating modules.conf"
189 /sbin/update-modules
190 eend $?
222 if [ -x /sbin/modules-update ] && \ 191 elif [ -x /sbin/modules-update ] && \
223 grep -v -e "^#" -e "^$" ${D}/etc/modules.d/* >/dev/null 2>&1; then 192 grep -v -e "^#" -e "^$" ${D}/etc/modules.d/* >/dev/null 2>&1; then
224 ebegin "Updating modules.conf" 193 ebegin "Updating modules.conf"
225 /sbin/modules-update 194 /sbin/modules-update
226 eend $? 195 eend $?
227 fi 196 fi
228} 197}
229 198
230move_old_moduledb() { 199move_old_moduledb() {
200 debug-print-function ${FUNCNAME} $*
201
231 local OLDDIR=${ROOT}/usr/share/module-rebuild/ 202 local OLDDIR=${ROOT}/usr/share/module-rebuild/
232 local NEWDIR=${ROOT}/var/lib/module-rebuild/ 203 local NEWDIR=${ROOT}/var/lib/module-rebuild/
233 204
234 if [[ -f ${OLDDIR}/moduledb ]]; then 205 if [[ -f ${OLDDIR}/moduledb ]]; then
235 [[ ! -d ${NEWDIR} ]] && mkdir -p ${NEWDIR} 206 [[ ! -d ${NEWDIR} ]] && mkdir -p ${NEWDIR}
239 rmdir ${OLDDIR} 210 rmdir ${OLDDIR}
240 fi 211 fi
241} 212}
242 213
243update_moduledb() { 214update_moduledb() {
215 debug-print-function ${FUNCNAME} $*
216
244 local MODULEDB_DIR=${ROOT}/var/lib/module-rebuild/ 217 local MODULEDB_DIR=${ROOT}/var/lib/module-rebuild/
245 move_old_moduledb 218 move_old_moduledb
246 219
247 if [[ ! -f ${MODULEDB_DIR}/moduledb ]]; then 220 if [[ ! -f ${MODULEDB_DIR}/moduledb ]]; then
248 [[ ! -d ${MODULEDB_DIR} ]] && mkdir -p ${MODULEDB_DIR} 221 [[ ! -d ${MODULEDB_DIR} ]] && mkdir -p ${MODULEDB_DIR}
249 touch ${MODULEDB_DIR}/moduledb 222 touch ${MODULEDB_DIR}/moduledb
250 fi 223 fi
224
251 if [[ -z $(grep ${CATEGORY}/${PN}-${PVR} ${MODULEDB_DIR}/moduledb) ]]; then 225 if ! grep -qs ${CATEGORY}/${PN}-${PVR} ${MODULEDB_DIR}/moduledb ; then
252 einfo "Adding module to moduledb." 226 einfo "Adding module to moduledb."
253 echo "a:1:${CATEGORY}/${PN}-${PVR}" >> ${MODULEDB_DIR}/moduledb 227 echo "a:1:${CATEGORY}/${PN}-${PVR}" >> ${MODULEDB_DIR}/moduledb
254 fi 228 fi
255} 229}
256 230
257remove_moduledb() { 231remove_moduledb() {
232 debug-print-function ${FUNCNAME} $*
233
258 local MODULEDB_DIR=${ROOT}/var/lib/module-rebuild/ 234 local MODULEDB_DIR=${ROOT}/var/lib/module-rebuild/
259 move_old_moduledb 235 move_old_moduledb
260 236
261 if [[ -n $(grep ${CATEGORY}/${PN}-${PVR} ${MODULEDB_DIR}/moduledb) ]]; then 237 if grep -qs ${CATEGORY}/${PN}-${PVR} ${MODULEDB_DIR}/moduledb ; then
262 einfo "Removing ${CATEGORY}/${PN}-${PVR} from moduledb." 238 einfo "Removing ${CATEGORY}/${PN}-${PVR} from moduledb."
263 sed -ie "/.*${CATEGORY}\/${PN}-${PVR}.*/d" ${MODULEDB_DIR}/moduledb 239 sed -i -e "/.*${CATEGORY}\/${PN}-${PVR}.*/d" ${MODULEDB_DIR}/moduledb
264 fi 240 fi
265} 241}
266 242
267set_kvobj() { 243set_kvobj() {
244 debug-print-function ${FUNCNAME} $*
245
268 if kernel_is 2 6 246 if kernel_is 2 6
269 then 247 then
270 KV_OBJ="ko" 248 KV_OBJ="ko"
271 else 249 else
272 KV_OBJ="o" 250 KV_OBJ="o"
274 # Do we really need to know this? 252 # Do we really need to know this?
275 # Lets silence it. 253 # Lets silence it.
276 # einfo "Using KV_OBJ=${KV_OBJ}" 254 # einfo "Using KV_OBJ=${KV_OBJ}"
277} 255}
278 256
257get-KERNEL_CC() {
258 debug-print-function ${FUNCNAME} $*
259
260 local kernel_cc
261 if [ -n "${KERNEL_ABI}" ]; then
262 # In future, an arch might want to define CC_$ABI
263 #kernel_cc="$(get_abi_CC)"
264 #[ -z "${kernel_cc}" ] &&
265 kernel_cc="$(tc-getCC $(ABI=${KERNEL_ABI} get_abi_CHOST))"
266 else
267 kernel_cc=$(tc-getCC)
268 fi
269 echo "${kernel_cc}"
270}
271
279generate_modulesd() { 272generate_modulesd() {
273 debug-print-function ${FUNCNAME} $*
274
280 # This function will generate the neccessary modules.d file from the 275 # This function will generate the neccessary modules.d file from the
281 # information contained in the modules exported parms 276 # information contained in the modules exported parms
282 277
283 local currm_path currm currm_t t myIFS myVAR 278 local currm_path currm currm_t t myIFS myVAR
284 local module_docs module_enabled module_aliases \ 279 local module_docs module_enabled module_aliases \
417 eend 0 412 eend 0
418 return 0 413 return 0
419} 414}
420 415
421find_module_params() { 416find_module_params() {
417 debug-print-function ${FUNCNAME} $*
418
422 local matched_offset=0 matched_opts=0 test="${@}" temp_var result 419 local matched_offset=0 matched_opts=0 test="${@}" temp_var result
423 local i=0 y=0 z=0 420 local i=0 y=0 z=0
424 421
425 for((i=0; i<=${#test}; i++)) 422 for((i=0; i<=${#test}; i++))
426 do 423 do
460 457
461# default ebuild functions 458# default ebuild functions
462# -------------------------------- 459# --------------------------------
463 460
464linux-mod_pkg_setup() { 461linux-mod_pkg_setup() {
462 debug-print-function ${FUNCNAME} $*
463
465 linux-info_pkg_setup; 464 linux-info_pkg_setup;
466 check_kernel_built; 465 check_kernel_built;
467 strip_modulenames; 466 strip_modulenames;
468 [[ -n ${MODULE_NAMES} ]] && check_modules_supported 467 [[ -n ${MODULE_NAMES} ]] && check_modules_supported
469 set_kvobj; 468 set_kvobj;
472 # introduced - Jason Wever <[email protected]>, 23 Oct 2005 471 # introduced - Jason Wever <[email protected]>, 23 Oct 2005
473 #check_vermagic; 472 #check_vermagic;
474} 473}
475 474
476strip_modulenames() { 475strip_modulenames() {
476 debug-print-function ${FUNCNAME} $*
477
477 local i 478 local i
478 for i in ${MODULE_IGNORE}; do 479 for i in ${MODULE_IGNORE}; do
479 MODULE_NAMES=${MODULE_NAMES//${i}(*} 480 MODULE_NAMES=${MODULE_NAMES//${i}(*}
480 done 481 done
481} 482}
482 483
483linux-mod_src_compile() { 484linux-mod_src_compile() {
485 debug-print-function ${FUNCNAME} $*
486
484 local modulename libdir srcdir objdir i n myARCH="${ARCH}" myABI="${ABI}" 487 local modulename libdir srcdir objdir i n myARCH="${ARCH}" myABI="${ABI}"
485 ARCH="$(tc-arch-kernel)" 488 ARCH="$(tc-arch-kernel)"
486 ABI="${KERNEL_ABI}" 489 ABI="${KERNEL_ABI}"
487 CC_HOSTCC=$(tc-getBUILD_CC)
488 CC_CC=$(tc-getCC)
489 490
490 BUILD_TARGETS=${BUILD_TARGETS:-clean module} 491 BUILD_TARGETS=${BUILD_TARGETS:-clean module}
491 strip_modulenames; 492 strip_modulenames;
492 for i in ${MODULE_NAMES} 493 for i in ${MODULE_NAMES}
493 do 494 do
508 then 509 then
509 econf ${ECONF_PARAMS} || \ 510 econf ${ECONF_PARAMS} || \
510 die "Unable to run econf ${ECONF_PARAMS}" 511 die "Unable to run econf ${ECONF_PARAMS}"
511 fi 512 fi
512 513
513 emake HOSTCC=${CC_HOSTCC} CC=${CC_CC}\ 514 emake HOSTCC="$(tc-getBUILD_CC)" CC="$(get-KERNEL_CC)" LDFLAGS="$(get_abi_LDFLAGS)" \
514 ${BUILD_FIXES} ${BUILD_PARAMS} ${BUILD_TARGETS} \ 515 ${BUILD_FIXES} ${BUILD_PARAMS} ${BUILD_TARGETS} \
515 || die "Unable to make \
516 ${BUILD_FIXES} ${BUILD_PARAMS} ${BUILD_TARGETS}." 516 || die "Unable to make ${BUILD_FIXES} ${BUILD_PARAMS} ${BUILD_TARGETS}."
517 touch ${srcdir}/.built 517 touch ${srcdir}/.built
518 cd ${OLDPWD} 518 cd ${OLDPWD}
519 fi 519 fi
520 done 520 done
521 521
522 ARCH="${myARCH}" 522 ARCH="${myARCH}"
523 ABI="${myABI}" 523 ABI="${myABI}"
524} 524}
525 525
526linux-mod_src_install() { 526linux-mod_src_install() {
527 debug-print-function ${FUNCNAME} $*
528
527 local modulename libdir srcdir objdir i n 529 local modulename libdir srcdir objdir i n
528 530
529 strip_modulenames; 531 strip_modulenames;
530 for i in ${MODULE_NAMES} 532 for i in ${MODULE_NAMES}
531 do 533 do
546 548
547 generate_modulesd ${objdir}/${modulename} 549 generate_modulesd ${objdir}/${modulename}
548 done 550 done
549} 551}
550 552
553linux-mod_pkg_preinst() {
554 debug-print-function ${FUNCNAME} $*
555
556 [ -d ${IMAGE}/lib/modules ] && UPDATE_DEPMOD=true || UPDATE_DEPMOD=false
557 [ -d ${IMAGE}/etc/modules.d ] && UPDATE_MODULES=true || UPDATE_MODULES=false
558 [ -d ${IMAGE}/lib/modules ] && UPDATE_MODULEDB=true || UPDATE_MODULEDB=false
559}
560
551linux-mod_pkg_postinst() { 561linux-mod_pkg_postinst() {
552 update_depmod; 562 debug-print-function ${FUNCNAME} $*
553 update_modules; 563
554 update_moduledb; 564 ${UPDATE_DEPMOD} && update_depmod;
565 ${UPDATE_MODULES} && update_modules;
566 ${UPDATE_MODULEDB} && update_moduledb;
555} 567}
556 568
557linux-mod_pkg_postrm() { 569linux-mod_pkg_postrm() {
570 debug-print-function ${FUNCNAME} $*
558 remove_moduledb; 571 remove_moduledb;
559} 572}
Legend:
Removed from v.1.64
changed lines
Added in v.1.73
ViewVC Help
Powered by ViewVC 1.1.20
|
__label__pos
| 0.879491 |
EOS SD speed setting to microSD Class?
Discussion in 'Supercard' started by Blackout, Oct 15, 2010.
Oct 15, 2010
1. Blackout
OP
Member Blackout GBAtemp Regular
Joined:
Aug 16, 2007
Messages:
106
Location:
Canada
Country:
Canada
Since the DSONE EOS doesn't have the ability to auto-set the SD speed I was wondering if there's a recommended speed to set microSDs at by their Class?
For example my Kingston 2GB microSD is Class 6 that's set at 3x Speed right now. Should I set this higher to maybe 4x or even Fast?
2. Blackout
OP
Member Blackout GBAtemp Regular
Joined:
Aug 16, 2007
Messages:
106
Location:
Canada
Country:
Canada
No recommendations at all? I couldn't find anything on the user manual for EOS regarding this.
3. Blackout
OP
Member Blackout GBAtemp Regular
Joined:
Aug 16, 2007
Messages:
106
Location:
Canada
Country:
Canada
What's the significance of the speed multiplier in EOS anyways? Is it just how much faster it's read/written to?
4. 9th_Sage
Member 9th_Sage GBAtemp Maniac
Joined:
Apr 30, 2008
Messages:
1,481
Country:
United States
I'm not 100% sure myself. I think you're right in what you think it is, but I've never really quite understood it. It's probably best to set it as high as your card can work with it set.
5. YayMii
Member YayMii hi
Joined:
Jun 24, 2009
Messages:
4,856
Location:
that place
Country:
Canada
Wait, what SD card speed setting?
EDIT: Oh, you're using a DSone. I'm using a DStwo.
6. Blackout
OP
Member Blackout GBAtemp Regular
Joined:
Aug 16, 2007
Messages:
106
Location:
Canada
Country:
Canada
That's just the thing. It will work on Fast but I don't understand what the speed setting is for because I see no noticeable difference between 3x and Fast.
I know that higher speeds allow certain games to play better (i.e. Tony Hawk's Downhill Jam, Castlevania: POR) but that's just about read speed I assume?
I guess the other thing that concerns me is that would a higher speed wear out the microSD faster from too many read/writes?
Share This Page
|
__label__pos
| 0.999778 |
/* * Copyright (c) 1980, 1993 * The Regents of the University of California. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifndef lint #if 0 static char sccsid[] = "@(#)names.c 8.1 (Berkeley) 6/6/93"; #endif #endif /* not lint */ #include __FBSDID("$FreeBSD: stable/10/usr.bin/mail/names.c 216564 2010-12-19 16:25:23Z charnier $"); /* * Mail -- a mail program * * Handle name lists. */ #include "rcv.h" #include #include "extern.h" /* * Allocate a single element of a name list, * initialize its name field to the passed * name and return it. */ struct name * nalloc(char str[], int ntype) { struct name *np; np = (struct name *)salloc(sizeof(*np)); np->n_flink = NULL; np->n_blink = NULL; np->n_type = ntype; np->n_name = savestr(str); return (np); } /* * Find the tail of a list and return it. */ struct name * tailof(struct name *name) { struct name *np; np = name; if (np == NULL) return (NULL); while (np->n_flink != NULL) np = np->n_flink; return (np); } /* * Extract a list of names from a line, * and make a list of names from it. * Return the list or NULL if none found. */ struct name * extract(char line[], int ntype) { char *cp, *nbuf; struct name *top, *np, *t; if (line == NULL || *line == '\0') return (NULL); if ((nbuf = malloc(strlen(line) + 1)) == NULL) err(1, "Out of memory"); top = NULL; np = NULL; cp = line; while ((cp = yankword(cp, nbuf)) != NULL) { t = nalloc(nbuf, ntype); if (top == NULL) top = t; else np->n_flink = t; t->n_blink = np; np = t; } (void)free(nbuf); return (top); } /* * Turn a list of names into a string of the same names. */ char * detract(struct name *np, int ntype) { int s, comma; char *cp, *top; struct name *p; comma = ntype & GCOMMA; if (np == NULL) return (NULL); ntype &= ~GCOMMA; s = 0; if (debug && comma) fprintf(stderr, "detract asked to insert commas\n"); for (p = np; p != NULL; p = p->n_flink) { if (ntype && (p->n_type & GMASK) != ntype) continue; s += strlen(p->n_name) + 1; if (comma) s++; } if (s == 0) return (NULL); s += 2; top = salloc(s); cp = top; for (p = np; p != NULL; p = p->n_flink) { if (ntype && (p->n_type & GMASK) != ntype) continue; cp += strlcpy(cp, p->n_name, strlen(p->n_name) + 1); if (comma && p->n_flink != NULL) *cp++ = ','; *cp++ = ' '; } *--cp = '\0'; if (comma && *--cp == ',') *cp = '\0'; return (top); } /* * Grab a single word (liberal word) * Throw away things between ()'s, and take anything between <>. */ char * yankword(char *ap, char wbuf[]) { char *cp, *cp2; cp = ap; for (;;) { if (*cp == '\0') return (NULL); if (*cp == '(') { int nesting = 0; while (*cp != '\0') { switch (*cp++) { case '(': nesting++; break; case ')': --nesting; break; } if (nesting <= 0) break; } } else if (*cp == ' ' || *cp == '\t' || *cp == ',') cp++; else break; } if (*cp == '<') for (cp2 = wbuf; *cp && (*cp2++ = *cp++) != '>';) ; else for (cp2 = wbuf; *cp != '\0' && strchr(" \t,(", *cp) == NULL; *cp2++ = *cp++) ; *cp2 = '\0'; return (cp); } /* * Grab a single login name (liberal word) * Throw away things between ()'s, take anything between <>, * and look for words before metacharacters %, @, !. */ char * yanklogin(char *ap, char wbuf[]) { char *cp, *cp2, *cp_temp; int n; cp = ap; for (;;) { if (*cp == '\0') return (NULL); if (*cp == '(') { int nesting = 0; while (*cp != '\0') { switch (*cp++) { case '(': nesting++; break; case ')': --nesting; break; } if (nesting <= 0) break; } } else if (*cp == ' ' || *cp == '\t' || *cp == ',') cp++; else break; } /* * Now, let's go forward till we meet the needed character, * and step one word back. */ /* First, remember current point. */ cp_temp = cp; n = 0; /* * Note that we look ahead in a cycle. This is safe, since * non-end of string is checked first. */ while(*cp != '\0' && strchr("@%!", *(cp + 1)) == NULL) cp++; /* * Now, start stepping back to the first non-word character, * while counting the number of symbols in a word. */ while(cp != cp_temp && strchr(" \t,<>", *(cp - 1)) == NULL) { n++; cp--; } /* Finally, grab the word forward. */ cp2 = wbuf; while(n >= 0) { *cp2++=*cp++; n--; } *cp2 = '\0'; return (cp); } /* * For each recipient in the passed name list with a / * in the name, append the message to the end of the named file * and remove him from the recipient list. * * Recipients whose name begins with | are piped through the given * program and removed. */ struct name * outof(struct name *names, FILE *fo, struct header *hp) { int c, ispipe; struct name *np, *top; time_t now; char *date, *fname; FILE *fout, *fin; top = names; np = names; (void)time(&now); date = ctime(&now); while (np != NULL) { if (!isfileaddr(np->n_name) && np->n_name[0] != '|') { np = np->n_flink; continue; } ispipe = np->n_name[0] == '|'; if (ispipe) fname = np->n_name+1; else fname = expand(np->n_name); /* * See if we have copied the complete message out yet. * If not, do so. */ if (image < 0) { int fd; char tempname[PATHSIZE]; (void)snprintf(tempname, sizeof(tempname), "%s/mail.ReXXXXXXXXXX", tmpdir); if ((fd = mkstemp(tempname)) == -1 || (fout = Fdopen(fd, "a")) == NULL) { warn("%s", tempname); senderr++; goto cant; } image = open(tempname, O_RDWR); (void)rm(tempname); if (image < 0) { warn("%s", tempname); senderr++; (void)Fclose(fout); goto cant; } (void)fcntl(image, F_SETFD, 1); fprintf(fout, "From %s %s", myname, date); puthead(hp, fout, GTO|GSUBJECT|GCC|GREPLYTO|GINREPLYTO|GNL); while ((c = getc(fo)) != EOF) (void)putc(c, fout); rewind(fo); fprintf(fout, "\n"); (void)fflush(fout); if (ferror(fout)) { warn("%s", tempname); senderr++; (void)Fclose(fout); goto cant; } (void)Fclose(fout); } /* * Now either copy "image" to the desired file * or give it as the standard input to the desired * program as appropriate. */ if (ispipe) { int pid; char *sh; sigset_t nset; /* * XXX * We can't really reuse the same image file, * because multiple piped recipients will * share the same lseek location and trample * on one another. */ if ((sh = value("SHELL")) == NULL) sh = _PATH_CSHELL; (void)sigemptyset(&nset); (void)sigaddset(&nset, SIGHUP); (void)sigaddset(&nset, SIGINT); (void)sigaddset(&nset, SIGQUIT); pid = start_command(sh, &nset, image, -1, "-c", fname, NULL); if (pid < 0) { senderr++; goto cant; } free_child(pid); } else { int f; if ((fout = Fopen(fname, "a")) == NULL) { warn("%s", fname); senderr++; goto cant; } if ((f = dup(image)) < 0) { warn("dup"); fin = NULL; } else fin = Fdopen(f, "r"); if (fin == NULL) { fprintf(stderr, "Can't reopen image\n"); (void)Fclose(fout); senderr++; goto cant; } rewind(fin); while ((c = getc(fin)) != EOF) (void)putc(c, fout); if (ferror(fout)) { warnx("%s", fname); senderr++; (void)Fclose(fout); (void)Fclose(fin); goto cant; } (void)Fclose(fout); (void)Fclose(fin); } cant: /* * In days of old we removed the entry from the * the list; now for sake of header expansion * we leave it in and mark it as deleted. */ np->n_type |= GDEL; np = np->n_flink; } if (image >= 0) { (void)close(image); image = -1; } return (top); } /* * Determine if the passed address is a local "send to file" address. * If any of the network metacharacters precedes any slashes, it can't * be a filename. We cheat with .'s to allow path names like ./... */ int isfileaddr(char *name) { char *cp; if (*name == '+') return (1); for (cp = name; *cp != '\0'; cp++) { if (*cp == '!' || *cp == '%' || *cp == '@') return (0); if (*cp == '/') return (1); } return (0); } /* * Map all of the aliased users in the invoker's mailrc * file and insert them into the list. * Changed after all these months of service to recursively * expand names (2/14/80). */ struct name * usermap(struct name *names) { struct name *new, *np, *cp; struct grouphead *gh; int metoo; new = NULL; np = names; metoo = (value("metoo") != NULL); while (np != NULL) { if (np->n_name[0] == '\\') { cp = np->n_flink; new = put(new, np); np = cp; continue; } gh = findgroup(np->n_name); cp = np->n_flink; if (gh != NULL) new = gexpand(new, gh, metoo, np->n_type); else new = put(new, np); np = cp; } return (new); } /* * Recursively expand a group name. We limit the expansion to some * fixed level to keep things from going haywire. * Direct recursion is not expanded for convenience. */ struct name * gexpand(struct name *nlist, struct grouphead *gh, int metoo, int ntype) { struct group *gp; struct grouphead *ngh; struct name *np; static int depth; char *cp; if (depth > MAXEXP) { printf("Expanding alias to depth larger than %d\n", MAXEXP); return (nlist); } depth++; for (gp = gh->g_list; gp != NULL; gp = gp->ge_link) { cp = gp->ge_name; if (*cp == '\\') goto quote; if (strcmp(cp, gh->g_name) == 0) goto quote; if ((ngh = findgroup(cp)) != NULL) { nlist = gexpand(nlist, ngh, metoo, ntype); continue; } quote: np = nalloc(cp, ntype); /* * At this point should allow to expand * to self if only person in group */ if (gp == gh->g_list && gp->ge_link == NULL) goto skip; if (!metoo && strcmp(cp, myname) == 0) np->n_type |= GDEL; skip: nlist = put(nlist, np); } depth--; return (nlist); } /* * Concatenate the two passed name lists, return the result. */ struct name * cat(struct name *n1, struct name *n2) { struct name *tail; if (n1 == NULL) return (n2); if (n2 == NULL) return (n1); tail = tailof(n1); tail->n_flink = n2; n2->n_blink = tail; return (n1); } /* * Unpack the name list onto a vector of strings. * Return an error if the name list won't fit. */ char ** unpack(struct name *np) { char **ap, **top; struct name *n; int t, extra, metoo, verbose; n = np; if ((t = count(n)) == 0) errx(1, "No names to unpack"); /* * Compute the number of extra arguments we will need. * We need at least two extra -- one for "mail" and one for * the terminating 0 pointer. Additional spots may be needed * to pass along -f to the host mailer. */ extra = 2; extra++; metoo = value("metoo") != NULL; if (metoo) extra++; verbose = value("verbose") != NULL; if (verbose) extra++; top = (char **)salloc((t + extra) * sizeof(*top)); ap = top; *ap++ = "send-mail"; *ap++ = "-i"; if (metoo) *ap++ = "-m"; if (verbose) *ap++ = "-v"; for (; n != NULL; n = n->n_flink) if ((n->n_type & GDEL) == 0) *ap++ = n->n_name; *ap = NULL; return (top); } /* * Remove all of the duplicates from the passed name list by * insertion sorting them, then checking for dups. * Return the head of the new list. */ struct name * elide(struct name *names) { struct name *np, *t, *new; struct name *x; if (names == NULL) return (NULL); new = names; np = names; np = np->n_flink; if (np != NULL) np->n_blink = NULL; new->n_flink = NULL; while (np != NULL) { t = new; while (strcasecmp(t->n_name, np->n_name) < 0) { if (t->n_flink == NULL) break; t = t->n_flink; } /* * If we ran out of t's, put the new entry after * the current value of t. */ if (strcasecmp(t->n_name, np->n_name) < 0) { t->n_flink = np; np->n_blink = t; t = np; np = np->n_flink; t->n_flink = NULL; continue; } /* * Otherwise, put the new entry in front of the * current t. If at the front of the list, * the new guy becomes the new head of the list. */ if (t == new) { t = np; np = np->n_flink; t->n_flink = new; new->n_blink = t; t->n_blink = NULL; new = t; continue; } /* * The normal case -- we are inserting into the * middle of the list. */ x = np; np = np->n_flink; x->n_flink = t; x->n_blink = t->n_blink; t->n_blink->n_flink = x; t->n_blink = x; } /* * Now the list headed up by new is sorted. * Go through it and remove duplicates. */ np = new; while (np != NULL) { t = np; while (t->n_flink != NULL && strcasecmp(np->n_name, t->n_flink->n_name) == 0) t = t->n_flink; if (t == np || t == NULL) { np = np->n_flink; continue; } /* * Now t points to the last entry with the same name * as np. Make np point beyond t. */ np->n_flink = t->n_flink; if (t->n_flink != NULL) t->n_flink->n_blink = np; np = np->n_flink; } return (new); } /* * Put another node onto a list of names and return * the list. */ struct name * put(struct name *list, struct name *node) { node->n_flink = list; node->n_blink = NULL; if (list != NULL) list->n_blink = node; return (node); } /* * Determine the number of undeleted elements in * a name list and return it. */ int count(struct name *np) { int c; for (c = 0; np != NULL; np = np->n_flink) if ((np->n_type & GDEL) == 0) c++; return (c); } /* * Delete the given name from a namelist. */ struct name * delname(struct name *np, char name[]) { struct name *p; for (p = np; p != NULL; p = p->n_flink) if (strcasecmp(p->n_name, name) == 0) { if (p->n_blink == NULL) { if (p->n_flink != NULL) p->n_flink->n_blink = NULL; np = p->n_flink; continue; } if (p->n_flink == NULL) { if (p->n_blink != NULL) p->n_blink->n_flink = NULL; continue; } p->n_blink->n_flink = p->n_flink; p->n_flink->n_blink = p->n_blink; } return (np); } /* * Pretty print a name list * Uncomment it if you need it. */ /* void prettyprint(struct name *name) { struct name *np; np = name; while (np != NULL) { fprintf(stderr, "%s(%d) ", np->n_name, np->n_type); np = np->n_flink; } fprintf(stderr, "\n"); } */
|
__label__pos
| 0.999972 |
diff --git a/openmp/libomptarget/plugins/cuda/src/rtl.cpp b/openmp/libomptarget/plugins/cuda/src/rtl.cpp index 290fe7b02be6..3ec4bc3d5397 100644 --- a/openmp/libomptarget/plugins/cuda/src/rtl.cpp +++ b/openmp/libomptarget/plugins/cuda/src/rtl.cpp @@ -1,1144 +1,1172 @@ //===----RTLs/cuda/src/rtl.cpp - Target RTLs Implementation ------- C++ -*-===// // // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. // See https://llvm.org/LICENSE.txt for license information. // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception // //===----------------------------------------------------------------------===// // // RTL for CUDA machine // //===----------------------------------------------------------------------===// #include #include #include #include #include #include #include #include #include "omptargetplugin.h" #ifndef TARGET_NAME #define TARGET_NAME CUDA #endif #ifdef OMPTARGET_DEBUG static int DebugLevel = 0; #define GETNAME2(name) #name #define GETNAME(name) GETNAME2(name) #define DP(...) \ do { \ if (DebugLevel > 0) { \ DEBUGP("Target " GETNAME(TARGET_NAME) " RTL", __VA_ARGS__); \ } \ } while (false) // Utility for retrieving and printing CUDA error string. #define CUDA_ERR_STRING(err) \ do { \ if (DebugLevel > 0) { \ const char *errStr; \ cuGetErrorString(err, &errStr); \ DEBUGP("Target " GETNAME(TARGET_NAME) " RTL", "CUDA error is: %s\n", errStr); \ } \ } while (false) #else // OMPTARGET_DEBUG #define DP(...) {} #define CUDA_ERR_STRING(err) {} #endif // OMPTARGET_DEBUG #include "../../common/elf_common.c" /// Keep entries table per device. struct FuncOrGblEntryTy { __tgt_target_table Table; std::vector<__tgt_offload_entry> Entries; }; enum ExecutionModeType { SPMD, // constructors, destructors, // combined constructs (`teams distribute parallel for [simd]`) GENERIC, // everything else NONE }; /// Use a single entity to encode a kernel and a set of flags. struct KernelTy { CUfunction Func; // execution mode of kernel // 0 - SPMD mode (without master warp) // 1 - Generic mode (with master warp) int8_t ExecutionMode; KernelTy(CUfunction _Func, int8_t _ExecutionMode) : Func(_Func), ExecutionMode(_ExecutionMode) {} }; /// Device environment data /// Manually sync with the deviceRTL side for now, move to a dedicated header /// file later. struct omptarget_device_environmentTy { int32_t debug_level; }; /// List that contains all the kernels. /// FIXME: we may need this to be per device and per library. std::list KernelsList; namespace { bool checkResult(CUresult Err, const char *ErrMsg) { if (Err == CUDA_SUCCESS) return true; DP("%s", ErrMsg); CUDA_ERR_STRING(Err); return false; } int memcpyDtoD(const void *SrcPtr, void *DstPtr, int64_t Size, CUstream Stream) { CUresult Err = cuMemcpyDtoDAsync((CUdeviceptr)DstPtr, (CUdeviceptr)SrcPtr, Size, Stream); if (Err != CUDA_SUCCESS) { DP("Error when copying data from device to device. Pointers: src " "= " DPxMOD ", dst = " DPxMOD ", size = %" PRId64 "\n", DPxPTR(SrcPtr), DPxPTR(DstPtr), Size); CUDA_ERR_STRING(Err); return OFFLOAD_FAIL; } return OFFLOAD_SUCCESS; } // Structure contains per-device data struct DeviceDataTy { std::list FuncGblEntries; CUcontext Context = nullptr; // Device properties int ThreadsPerBlock = 0; int BlocksPerGrid = 0; int WarpSize = 0; // OpenMP properties int NumTeams = 0; int NumThreads = 0; }; class StreamManagerTy { int NumberOfDevices; // The initial size of stream pool int EnvNumInitialStreams; // Per-device stream mutex std::vector> StreamMtx; // Per-device stream Id indicates the next available stream in the pool std::vector NextStreamId; // Per-device stream pool std::vector> StreamPool; // Reference to per-device data std::vector &DeviceData; // If there is no CUstream left in the pool, we will resize the pool to // allocate more CUstream. This function should be called with device mutex, // and we do not resize to smaller one. void resizeStreamPool(const int DeviceId, const size_t NewSize) { std::vector &Pool = StreamPool[DeviceId]; const size_t CurrentSize = Pool.size(); assert(NewSize > CurrentSize && "new size is not larger than current size"); CUresult Err = cuCtxSetCurrent(DeviceData[DeviceId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) { // We will return if cannot switch to the right context in case of // creating bunch of streams that are not corresponding to the right // device. The offloading will fail later because selected CUstream is // nullptr. return; } Pool.resize(NewSize, nullptr); for (size_t I = CurrentSize; I < NewSize; ++I) { checkResult(cuStreamCreate(&Pool[I], CU_STREAM_NON_BLOCKING), "Error returned from cuStreamCreate\n"); } } public: StreamManagerTy(const int NumberOfDevices, std::vector &DeviceData) : NumberOfDevices(NumberOfDevices), EnvNumInitialStreams(32), DeviceData(DeviceData) { StreamPool.resize(NumberOfDevices); NextStreamId.resize(NumberOfDevices); StreamMtx.resize(NumberOfDevices); if (const char *EnvStr = getenv("LIBOMPTARGET_NUM_INITIAL_STREAMS")) EnvNumInitialStreams = std::stoi(EnvStr); // Initialize the next stream id std::fill(NextStreamId.begin(), NextStreamId.end(), 0); // Initialize stream mutex for (std::unique_ptr &Ptr : StreamMtx) Ptr = std::make_unique(); } ~StreamManagerTy() { // Destroy streams for (int I = 0; I < NumberOfDevices; ++I) { checkResult(cuCtxSetCurrent(DeviceData[I].Context), "Error returned from cuCtxSetCurrent\n"); for (CUstream &S : StreamPool[I]) { if (S) checkResult(cuStreamDestroy(S), "Error returned from cuStreamDestroy\n"); } } } // Get a CUstream from pool. Per-device next stream id always points to the // next available CUstream. That means, CUstreams [0, id-1] have been // assigned, and [id,] are still available. If there is no CUstream left, we // will ask more CUstreams from CUDA RT. Each time a CUstream is assigned, // the id will increase one. // xxxxxs+++++++++ // ^ // id // After assignment, the pool becomes the following and s is assigned. // xxxxxs+++++++++ // ^ // id CUstream getStream(const int DeviceId) { const std::lock_guard Lock(*StreamMtx[DeviceId]); int &Id = NextStreamId[DeviceId]; // No CUstream left in the pool, we need to request from CUDA RT if (Id == StreamPool[DeviceId].size()) { // By default we double the stream pool every time resizeStreamPool(DeviceId, Id * 2); } return StreamPool[DeviceId][Id++]; } // Return a CUstream back to pool. As mentioned above, per-device next // stream is always points to the next available CUstream, so when we return // a CUstream, we need to first decrease the id, and then copy the CUstream // back. // It is worth noting that, the order of streams return might be different // from that they're assigned, that saying, at some point, there might be // two identical CUstreams. // xxax+a+++++ // ^ // id // However, it doesn't matter, because they're always on the two sides of // id. The left one will in the end be overwritten by another CUstream. // Therefore, after several execution, the order of pool might be different // from its initial state. void returnStream(const int DeviceId, CUstream Stream) { const std::lock_guard Lock(*StreamMtx[DeviceId]); int &Id = NextStreamId[DeviceId]; assert(Id > 0 && "Wrong stream ID"); StreamPool[DeviceId][--Id] = Stream; } bool initializeDeviceStreamPool(const int DeviceId) { assert(StreamPool[DeviceId].empty() && "stream pool has been initialized"); resizeStreamPool(DeviceId, EnvNumInitialStreams); // Check the size of stream pool if (StreamPool[DeviceId].size() != EnvNumInitialStreams) return false; // Check whether each stream is valid for (CUstream &S : StreamPool[DeviceId]) if (!S) return false; return true; } }; class DeviceRTLTy { int NumberOfDevices; // OpenMP environment properties int EnvNumTeams; int EnvTeamLimit; // OpenMP requires flags int64_t RequiresFlags; static constexpr const int HardTeamLimit = 1U << 16U; // 64k static constexpr const int HardThreadLimit = 1024; static constexpr const int DefaultNumTeams = 128; static constexpr const int DefaultNumThreads = 128; std::unique_ptr StreamManager; std::vector DeviceData; std::vector Modules; // Record entry point associated with device void addOffloadEntry(const int DeviceId, const __tgt_offload_entry entry) { FuncOrGblEntryTy &E = DeviceData[DeviceId].FuncGblEntries.back(); E.Entries.push_back(entry); } // Return true if the entry is associated with device bool findOffloadEntry(const int DeviceId, const void *Addr) const { for (const __tgt_offload_entry &Itr : DeviceData[DeviceId].FuncGblEntries.back().Entries) if (Itr.addr == Addr) return true; return false; } // Return the pointer to the target entries table __tgt_target_table *getOffloadEntriesTable(const int DeviceId) { FuncOrGblEntryTy &E = DeviceData[DeviceId].FuncGblEntries.back(); if (E.Entries.empty()) return nullptr; // Update table info according to the entries and return the pointer E.Table.EntriesBegin = E.Entries.data(); E.Table.EntriesEnd = E.Entries.data() + E.Entries.size(); return &E.Table; } // Clear entries table for a device void clearOffloadEntriesTable(const int DeviceId) { DeviceData[DeviceId].FuncGblEntries.emplace_back(); FuncOrGblEntryTy &E = DeviceData[DeviceId].FuncGblEntries.back(); E.Entries.clear(); E.Table.EntriesBegin = E.Table.EntriesEnd = nullptr; } CUstream getStream(const int DeviceId, __tgt_async_info *AsyncInfoPtr) const { assert(AsyncInfoPtr && "AsyncInfoPtr is nullptr"); if (!AsyncInfoPtr->Queue) AsyncInfoPtr->Queue = StreamManager->getStream(DeviceId); return reinterpret_cast(AsyncInfoPtr->Queue); } public: // This class should not be copied DeviceRTLTy(const DeviceRTLTy &) = delete; DeviceRTLTy(DeviceRTLTy &&) = delete; DeviceRTLTy() : NumberOfDevices(0), EnvNumTeams(-1), EnvTeamLimit(-1), RequiresFlags(OMP_REQ_UNDEFINED) { #ifdef OMPTARGET_DEBUG if (const char *EnvStr = getenv("LIBOMPTARGET_DEBUG")) DebugLevel = std::stoi(EnvStr); #endif // OMPTARGET_DEBUG DP("Start initializing CUDA\n"); CUresult Err = cuInit(0); if (!checkResult(Err, "Error returned from cuInit\n")) { return; } Err = cuDeviceGetCount(&NumberOfDevices); if (!checkResult(Err, "Error returned from cuDeviceGetCount\n")) return; if (NumberOfDevices == 0) { DP("There are no devices supporting CUDA.\n"); return; } DeviceData.resize(NumberOfDevices); // Get environment variables regarding teams if (const char *EnvStr = getenv("OMP_TEAM_LIMIT")) { // OMP_TEAM_LIMIT has been set EnvTeamLimit = std::stoi(EnvStr); DP("Parsed OMP_TEAM_LIMIT=%d\n", EnvTeamLimit); } if (const char *EnvStr = getenv("OMP_NUM_TEAMS")) { // OMP_NUM_TEAMS has been set EnvNumTeams = std::stoi(EnvStr); DP("Parsed OMP_NUM_TEAMS=%d\n", EnvNumTeams); } StreamManager = std::make_unique(NumberOfDevices, DeviceData); } ~DeviceRTLTy() { // First destruct stream manager in case of Contexts is destructed before it StreamManager = nullptr; for (CUmodule &M : Modules) // Close module if (M) checkResult(cuModuleUnload(M), "Error returned from cuModuleUnload\n"); for (DeviceDataTy &D : DeviceData) { // Destroy context - if (D.Context) - checkResult(cuCtxDestroy(D.Context), - "Error returned from cuCtxDestroy\n"); + if (D.Context) { + checkResult(cuCtxSetCurrent(D.Context), + "Error returned from cuCtxSetCurrent\n"); + CUdevice Device; + checkResult(cuCtxGetDevice(&Device), + "Error returned from cuCtxGetDevice\n"); + checkResult(cuDevicePrimaryCtxRelease(Device), + "Error returned from cuDevicePrimaryCtxRelease\n"); + } } } // Check whether a given DeviceId is valid bool isValidDeviceId(const int DeviceId) const { return DeviceId >= 0 && DeviceId < NumberOfDevices; } int getNumOfDevices() const { return NumberOfDevices; } void setRequiresFlag(const int64_t Flags) { this->RequiresFlags = Flags; } int initDevice(const int DeviceId) { CUdevice Device; DP("Getting device %d\n", DeviceId); CUresult Err = cuDeviceGet(&Device, DeviceId); if (!checkResult(Err, "Error returned from cuDeviceGet\n")) return OFFLOAD_FAIL; - // Create the context and save it to use whenever this device is selected. - Err = cuCtxCreate(&DeviceData[DeviceId].Context, CU_CTX_SCHED_BLOCKING_SYNC, - Device); - if (!checkResult(Err, "Error returned from cuCtxCreate\n")) + // Query the current flags of the primary context and set its flags if + // it is inactive + unsigned int FormerPrimaryCtxFlags = 0; + int FormerPrimaryCtxIsActive = 0; + Err = cuDevicePrimaryCtxGetState(Device, &FormerPrimaryCtxFlags, + &FormerPrimaryCtxIsActive); + if (!checkResult(Err, "Error returned from cuDevicePrimaryCtxGetState\n")) + return OFFLOAD_FAIL; + + if (FormerPrimaryCtxIsActive) { + DP("The primary context is active, no change to its flags\n"); + if ((FormerPrimaryCtxFlags & CU_CTX_SCHED_MASK) != + CU_CTX_SCHED_BLOCKING_SYNC) + DP("Warning the current flags are not CU_CTX_SCHED_BLOCKING_SYNC\n"); + } else { + DP("The primary context is inactive, set its flags to " + "CU_CTX_SCHED_BLOCKING_SYNC\n"); + Err = cuDevicePrimaryCtxSetFlags(Device, CU_CTX_SCHED_BLOCKING_SYNC); + if (!checkResult(Err, "Error returned from cuDevicePrimaryCtxSetFlags\n")) + return OFFLOAD_FAIL; + } + + // Retain the per device primary context and save it to use whenever this + // device is selected. + Err = cuDevicePrimaryCtxRetain(&DeviceData[DeviceId].Context, Device); + if (!checkResult(Err, "Error returned from cuDevicePrimaryCtxRetain\n")) return OFFLOAD_FAIL; Err = cuCtxSetCurrent(DeviceData[DeviceId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) return OFFLOAD_FAIL; // Initialize stream pool if (!StreamManager->initializeDeviceStreamPool(DeviceId)) return OFFLOAD_FAIL; // Query attributes to determine number of threads/block and blocks/grid. int MaxGridDimX; Err = cuDeviceGetAttribute(&MaxGridDimX, CU_DEVICE_ATTRIBUTE_MAX_GRID_DIM_X, Device); if (Err != CUDA_SUCCESS) { DP("Error getting max grid dimension, use default value %d\n", DeviceRTLTy::DefaultNumTeams); DeviceData[DeviceId].BlocksPerGrid = DeviceRTLTy::DefaultNumTeams; } else if (MaxGridDimX <= DeviceRTLTy::HardTeamLimit) { DP("Using %d CUDA blocks per grid\n", MaxGridDimX); DeviceData[DeviceId].BlocksPerGrid = MaxGridDimX; } else { DP("Max CUDA blocks per grid %d exceeds the hard team limit %d, capping " "at the hard limit\n", MaxGridDimX, DeviceRTLTy::HardTeamLimit); DeviceData[DeviceId].BlocksPerGrid = DeviceRTLTy::HardTeamLimit; } // We are only exploiting threads along the x axis. int MaxBlockDimX; Err = cuDeviceGetAttribute(&MaxBlockDimX, CU_DEVICE_ATTRIBUTE_MAX_BLOCK_DIM_X, Device); if (Err != CUDA_SUCCESS) { DP("Error getting max block dimension, use default value %d\n", DeviceRTLTy::DefaultNumThreads); DeviceData[DeviceId].ThreadsPerBlock = DeviceRTLTy::DefaultNumThreads; } else if (MaxBlockDimX <= DeviceRTLTy::HardThreadLimit) { DP("Using %d CUDA threads per block\n", MaxBlockDimX); DeviceData[DeviceId].ThreadsPerBlock = MaxBlockDimX; } else { DP("Max CUDA threads per block %d exceeds the hard thread limit %d, " "capping at the hard limit\n", MaxBlockDimX, DeviceRTLTy::HardThreadLimit); DeviceData[DeviceId].ThreadsPerBlock = DeviceRTLTy::HardThreadLimit; } // Get and set warp size int WarpSize; Err = cuDeviceGetAttribute(&WarpSize, CU_DEVICE_ATTRIBUTE_WARP_SIZE, Device); if (Err != CUDA_SUCCESS) { DP("Error getting warp size, assume default value 32\n"); DeviceData[DeviceId].WarpSize = 32; } else { DP("Using warp size %d\n", WarpSize); DeviceData[DeviceId].WarpSize = WarpSize; } // Adjust teams to the env variables if (EnvTeamLimit > 0 && DeviceData[DeviceId].BlocksPerGrid > EnvTeamLimit) { DP("Capping max CUDA blocks per grid to OMP_TEAM_LIMIT=%d\n", EnvTeamLimit); DeviceData[DeviceId].BlocksPerGrid = EnvTeamLimit; } DP("Max number of CUDA blocks %d, threads %d & warp size %d\n", DeviceData[DeviceId].BlocksPerGrid, DeviceData[DeviceId].ThreadsPerBlock, DeviceData[DeviceId].WarpSize); // Set default number of teams if (EnvNumTeams > 0) { DP("Default number of teams set according to environment %d\n", EnvNumTeams); DeviceData[DeviceId].NumTeams = EnvNumTeams; } else { DeviceData[DeviceId].NumTeams = DeviceRTLTy::DefaultNumTeams; DP("Default number of teams set according to library's default %d\n", DeviceRTLTy::DefaultNumTeams); } if (DeviceData[DeviceId].NumTeams > DeviceData[DeviceId].BlocksPerGrid) { DP("Default number of teams exceeds device limit, capping at %d\n", DeviceData[DeviceId].BlocksPerGrid); DeviceData[DeviceId].NumTeams = DeviceData[DeviceId].BlocksPerGrid; } // Set default number of threads DeviceData[DeviceId].NumThreads = DeviceRTLTy::DefaultNumThreads; DP("Default number of threads set according to library's default %d\n", DeviceRTLTy::DefaultNumThreads); if (DeviceData[DeviceId].NumThreads > DeviceData[DeviceId].ThreadsPerBlock) { DP("Default number of threads exceeds device limit, capping at %d\n", DeviceData[DeviceId].ThreadsPerBlock); DeviceData[DeviceId].NumTeams = DeviceData[DeviceId].ThreadsPerBlock; } return OFFLOAD_SUCCESS; } __tgt_target_table *loadBinary(const int DeviceId, const __tgt_device_image *Image) { // Set the context we are using CUresult Err = cuCtxSetCurrent(DeviceData[DeviceId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) return nullptr; // Clear the offload table as we are going to create a new one. clearOffloadEntriesTable(DeviceId); // Create the module and extract the function pointers. CUmodule Module; DP("Load data from image " DPxMOD "\n", DPxPTR(Image->ImageStart)); Err = cuModuleLoadDataEx(&Module, Image->ImageStart, 0, nullptr, nullptr); if (!checkResult(Err, "Error returned from cuModuleLoadDataEx\n")) return nullptr; DP("CUDA module successfully loaded!\n"); Modules.push_back(Module); // Find the symbols in the module by name. const __tgt_offload_entry *HostBegin = Image->EntriesBegin; const __tgt_offload_entry *HostEnd = Image->EntriesEnd; for (const __tgt_offload_entry *E = HostBegin; E != HostEnd; ++E) { if (!E->addr) { // We return nullptr when something like this happens, the host should // have always something in the address to uniquely identify the target // region. DP("Invalid binary: host entry '' (size = %zd)...\n", E->size); return nullptr; } if (E->size) { __tgt_offload_entry Entry = *E; CUdeviceptr CUPtr; size_t CUSize; Err = cuModuleGetGlobal(&CUPtr, &CUSize, Module, E->name); // We keep this style here because we need the name if (Err != CUDA_SUCCESS) { DP("Loading global '%s' (Failed)\n", E->name); CUDA_ERR_STRING(Err); return nullptr; } if (CUSize != E->size) { DP("Loading global '%s' - size mismatch (%zd != %zd)\n", E->name, CUSize, E->size); return nullptr; } DP("Entry point " DPxMOD " maps to global %s (" DPxMOD ")\n", DPxPTR(E - HostBegin), E->name, DPxPTR(CUPtr)); Entry.addr = (void *)(CUPtr); // Note: In the current implementation declare target variables // can either be link or to. This means that once unified // memory is activated via the requires directive, the variable // can be used directly from the host in both cases. // TODO: when variables types other than to or link are added, // the below condition should be changed to explicitly // check for to and link variables types: // (RequiresFlags & OMP_REQ_UNIFIED_SHARED_MEMORY && (e->flags & // OMP_DECLARE_TARGET_LINK || e->flags == OMP_DECLARE_TARGET_TO)) if (RequiresFlags & OMP_REQ_UNIFIED_SHARED_MEMORY) { // If unified memory is present any target link or to variables // can access host addresses directly. There is no longer a // need for device copies. cuMemcpyHtoD(CUPtr, E->addr, sizeof(void *)); DP("Copy linked variable host address (" DPxMOD ") to device address (" DPxMOD ")\n", DPxPTR(*((void **)E->addr)), DPxPTR(CUPtr)); } addOffloadEntry(DeviceId, Entry); continue; } CUfunction Func; Err = cuModuleGetFunction(&Func, Module, E->name); // We keep this style here because we need the name if (Err != CUDA_SUCCESS) { DP("Loading '%s' (Failed)\n", E->name); CUDA_ERR_STRING(Err); return nullptr; } DP("Entry point " DPxMOD " maps to %s (" DPxMOD ")\n", DPxPTR(E - HostBegin), E->name, DPxPTR(Func)); // default value GENERIC (in case symbol is missing from cubin file) int8_t ExecModeVal = ExecutionModeType::GENERIC; std::string ExecModeNameStr(E->name); ExecModeNameStr += "_exec_mode"; const char *ExecModeName = ExecModeNameStr.c_str(); CUdeviceptr ExecModePtr; size_t CUSize; Err = cuModuleGetGlobal(&ExecModePtr, &CUSize, Module, ExecModeName); if (Err == CUDA_SUCCESS) { if (CUSize != sizeof(int8_t)) { DP("Loading global exec_mode '%s' - size mismatch (%zd != %zd)\n", ExecModeName, CUSize, sizeof(int8_t)); return nullptr; } Err = cuMemcpyDtoH(&ExecModeVal, ExecModePtr, CUSize); if (Err != CUDA_SUCCESS) { DP("Error when copying data from device to host. Pointers: " "host = " DPxMOD ", device = " DPxMOD ", size = %zd\n", DPxPTR(&ExecModeVal), DPxPTR(ExecModePtr), CUSize); CUDA_ERR_STRING(Err); return nullptr; } if (ExecModeVal < 0 || ExecModeVal > 1) { DP("Error wrong exec_mode value specified in cubin file: %d\n", ExecModeVal); return nullptr; } } else { DP("Loading global exec_mode '%s' - symbol missing, using default " "value GENERIC (1)\n", ExecModeName); CUDA_ERR_STRING(Err); } KernelsList.emplace_back(Func, ExecModeVal); __tgt_offload_entry Entry = *E; Entry.addr = &KernelsList.back(); addOffloadEntry(DeviceId, Entry); } // send device environment data to the device { omptarget_device_environmentTy DeviceEnv{0}; #ifdef OMPTARGET_DEBUG if (const char *EnvStr = getenv("LIBOMPTARGET_DEVICE_RTL_DEBUG")) DeviceEnv.debug_level = std::stoi(EnvStr); #endif const char *DeviceEnvName = "omptarget_device_environment"; CUdeviceptr DeviceEnvPtr; size_t CUSize; Err = cuModuleGetGlobal(&DeviceEnvPtr, &CUSize, Module, DeviceEnvName); if (Err == CUDA_SUCCESS) { if (CUSize != sizeof(DeviceEnv)) { DP("Global device_environment '%s' - size mismatch (%zu != %zu)\n", DeviceEnvName, CUSize, sizeof(int32_t)); CUDA_ERR_STRING(Err); return nullptr; } Err = cuMemcpyHtoD(DeviceEnvPtr, &DeviceEnv, CUSize); if (Err != CUDA_SUCCESS) { DP("Error when copying data from host to device. Pointers: " "host = " DPxMOD ", device = " DPxMOD ", size = %zu\n", DPxPTR(&DeviceEnv), DPxPTR(DeviceEnvPtr), CUSize); CUDA_ERR_STRING(Err); return nullptr; } DP("Sending global device environment data %zu bytes\n", CUSize); } else { DP("Finding global device environment '%s' - symbol missing.\n", DeviceEnvName); DP("Continue, considering this is a device RTL which does not accept " "environment setting.\n"); } } return getOffloadEntriesTable(DeviceId); } void *dataAlloc(const int DeviceId, const int64_t Size) const { if (Size == 0) return nullptr; CUresult Err = cuCtxSetCurrent(DeviceData[DeviceId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) return nullptr; CUdeviceptr DevicePtr; Err = cuMemAlloc(&DevicePtr, Size); if (!checkResult(Err, "Error returned from cuMemAlloc\n")) return nullptr; return (void *)DevicePtr; } int dataSubmit(const int DeviceId, const void *TgtPtr, const void *HstPtr, const int64_t Size, __tgt_async_info *AsyncInfoPtr) const { assert(AsyncInfoPtr && "AsyncInfoPtr is nullptr"); CUresult Err = cuCtxSetCurrent(DeviceData[DeviceId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) return OFFLOAD_FAIL; CUstream Stream = getStream(DeviceId, AsyncInfoPtr); Err = cuMemcpyHtoDAsync((CUdeviceptr)TgtPtr, HstPtr, Size, Stream); if (Err != CUDA_SUCCESS) { DP("Error when copying data from host to device. Pointers: host = " DPxMOD ", device = " DPxMOD ", size = %" PRId64 "\n", DPxPTR(HstPtr), DPxPTR(TgtPtr), Size); CUDA_ERR_STRING(Err); return OFFLOAD_FAIL; } return OFFLOAD_SUCCESS; } int dataRetrieve(const int DeviceId, void *HstPtr, const void *TgtPtr, const int64_t Size, __tgt_async_info *AsyncInfoPtr) const { assert(AsyncInfoPtr && "AsyncInfoPtr is nullptr"); CUresult Err = cuCtxSetCurrent(DeviceData[DeviceId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) return OFFLOAD_FAIL; CUstream Stream = getStream(DeviceId, AsyncInfoPtr); Err = cuMemcpyDtoHAsync(HstPtr, (CUdeviceptr)TgtPtr, Size, Stream); if (Err != CUDA_SUCCESS) { DP("Error when copying data from device to host. Pointers: host = " DPxMOD ", device = " DPxMOD ", size = %" PRId64 "\n", DPxPTR(HstPtr), DPxPTR(TgtPtr), Size); CUDA_ERR_STRING(Err); return OFFLOAD_FAIL; } return OFFLOAD_SUCCESS; } int dataExchange(int SrcDevId, const void *SrcPtr, int DstDevId, void *DstPtr, int64_t Size, __tgt_async_info *AsyncInfoPtr) const { assert(AsyncInfoPtr && "AsyncInfoPtr is nullptr"); CUresult Err = cuCtxSetCurrent(DeviceData[SrcDevId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) return OFFLOAD_FAIL; CUstream Stream = getStream(SrcDevId, AsyncInfoPtr); // If they are two devices, we try peer to peer copy first if (SrcDevId != DstDevId) { int CanAccessPeer = 0; Err = cuDeviceCanAccessPeer(&CanAccessPeer, SrcDevId, DstDevId); if (Err != CUDA_SUCCESS) { DP("Error returned from cuDeviceCanAccessPeer. src = %" PRId32 ", dst = %" PRId32 "\n", SrcDevId, DstDevId); CUDA_ERR_STRING(Err); return memcpyDtoD(SrcPtr, DstPtr, Size, Stream); } if (!CanAccessPeer) { DP("P2P memcpy not supported so fall back to D2D memcpy"); return memcpyDtoD(SrcPtr, DstPtr, Size, Stream); } Err = cuCtxEnablePeerAccess(DeviceData[DstDevId].Context, 0); if (Err != CUDA_SUCCESS) { DP("Error returned from cuCtxEnablePeerAccess. src = %" PRId32 ", dst = %" PRId32 "\n", SrcDevId, DstDevId); CUDA_ERR_STRING(Err); return memcpyDtoD(SrcPtr, DstPtr, Size, Stream); } Err = cuMemcpyPeerAsync((CUdeviceptr)DstPtr, DeviceData[DstDevId].Context, (CUdeviceptr)SrcPtr, DeviceData[SrcDevId].Context, Size, Stream); if (Err == CUDA_SUCCESS) return OFFLOAD_SUCCESS; DP("Error returned from cuMemcpyPeerAsync. src_ptr = " DPxMOD ", src_id =%" PRId32 ", dst_ptr = " DPxMOD ", dst_id =%" PRId32 "\n", DPxPTR(SrcPtr), SrcDevId, DPxPTR(DstPtr), DstDevId); CUDA_ERR_STRING(Err); } return memcpyDtoD(SrcPtr, DstPtr, Size, Stream); } int dataDelete(const int DeviceId, void *TgtPtr) const { CUresult Err = cuCtxSetCurrent(DeviceData[DeviceId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) return OFFLOAD_FAIL; Err = cuMemFree((CUdeviceptr)TgtPtr); if (!checkResult(Err, "Error returned from cuMemFree\n")) return OFFLOAD_FAIL; return OFFLOAD_SUCCESS; } int runTargetTeamRegion(const int DeviceId, const void *TgtEntryPtr, void **TgtArgs, ptrdiff_t *TgtOffsets, const int ArgNum, const int TeamNum, const int ThreadLimit, const unsigned int LoopTripCount, __tgt_async_info *AsyncInfo) const { CUresult Err = cuCtxSetCurrent(DeviceData[DeviceId].Context); if (!checkResult(Err, "Error returned from cuCtxSetCurrent\n")) return OFFLOAD_FAIL; // All args are references. std::vector Args(ArgNum); std::vector Ptrs(ArgNum); for (int I = 0; I < ArgNum; ++I) { Ptrs[I] = (void *)((intptr_t)TgtArgs[I] + TgtOffsets[I]); Args[I] = &Ptrs[I]; } const KernelTy *KernelInfo = reinterpret_cast(TgtEntryPtr); unsigned int CudaThreadsPerBlock; if (ThreadLimit > 0) { DP("Setting CUDA threads per block to requested %d\n", ThreadLimit); CudaThreadsPerBlock = ThreadLimit; // Add master warp if necessary if (KernelInfo->ExecutionMode == GENERIC) { DP("Adding master warp: +%d threads\n", DeviceData[DeviceId].WarpSize); CudaThreadsPerBlock += DeviceData[DeviceId].WarpSize; } } else { DP("Setting CUDA threads per block to default %d\n", DeviceData[DeviceId].NumThreads); CudaThreadsPerBlock = DeviceData[DeviceId].NumThreads; } if (CudaThreadsPerBlock > DeviceData[DeviceId].ThreadsPerBlock) { DP("Threads per block capped at device limit %d\n", DeviceData[DeviceId].ThreadsPerBlock); CudaThreadsPerBlock = DeviceData[DeviceId].ThreadsPerBlock; } int KernelLimit; Err = cuFuncGetAttribute(&KernelLimit, CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK, KernelInfo->Func); if (Err == CUDA_SUCCESS && KernelLimit < CudaThreadsPerBlock) { DP("Threads per block capped at kernel limit %d\n", KernelLimit); CudaThreadsPerBlock = KernelLimit; } unsigned int CudaBlocksPerGrid; if (TeamNum <= 0) { if (LoopTripCount > 0 && EnvNumTeams < 0) { if (KernelInfo->ExecutionMode == SPMD) { // We have a combined construct, i.e. `target teams distribute // parallel for [simd]`. We launch so many teams so that each thread // will execute one iteration of the loop. round up to the nearest // integer CudaBlocksPerGrid = ((LoopTripCount - 1) / CudaThreadsPerBlock) + 1; } else { // If we reach this point, then we have a non-combined construct, i.e. // `teams distribute` with a nested `parallel for` and each team is // assigned one iteration of the `distribute` loop. E.g.: // // #pragma omp target teams distribute // for(...loop_tripcount...) { // #pragma omp parallel for // for(...) {} // } // // Threads within a team will execute the iterations of the `parallel` // loop. CudaBlocksPerGrid = LoopTripCount; } DP("Using %d teams due to loop trip count %" PRIu32 " and number of threads per block %d\n", CudaBlocksPerGrid, LoopTripCount, CudaThreadsPerBlock); } else { DP("Using default number of teams %d\n", DeviceData[DeviceId].NumTeams); CudaBlocksPerGrid = DeviceData[DeviceId].NumTeams; } } else if (TeamNum > DeviceData[DeviceId].BlocksPerGrid) { DP("Capping number of teams to team limit %d\n", DeviceData[DeviceId].BlocksPerGrid); CudaBlocksPerGrid = DeviceData[DeviceId].BlocksPerGrid; } else { DP("Using requested number of teams %d\n", TeamNum); CudaBlocksPerGrid = TeamNum; } // Run on the device. DP("Launch kernel with %d blocks and %d threads\n", CudaBlocksPerGrid, CudaThreadsPerBlock); CUstream Stream = getStream(DeviceId, AsyncInfo); Err = cuLaunchKernel(KernelInfo->Func, CudaBlocksPerGrid, /* gridDimY */ 1, /* gridDimZ */ 1, CudaThreadsPerBlock, /* blockDimY */ 1, /* blockDimZ */ 1, /* sharedMemBytes */ 0, Stream, &Args[0], nullptr); if (!checkResult(Err, "Error returned from cuLaunchKernel\n")) return OFFLOAD_FAIL; DP("Launch of entry point at " DPxMOD " successful!\n", DPxPTR(TgtEntryPtr)); return OFFLOAD_SUCCESS; } int synchronize(const int DeviceId, __tgt_async_info *AsyncInfoPtr) const { CUstream Stream = reinterpret_cast(AsyncInfoPtr->Queue); CUresult Err = cuStreamSynchronize(Stream); if (Err != CUDA_SUCCESS) { DP("Error when synchronizing stream. stream = " DPxMOD ", async info ptr = " DPxMOD "\n", DPxPTR(Stream), DPxPTR(AsyncInfoPtr)); CUDA_ERR_STRING(Err); return OFFLOAD_FAIL; } // Once the stream is synchronized, return it to stream pool and reset // async_info. This is to make sure the synchronization only works for its // own tasks. StreamManager->returnStream( DeviceId, reinterpret_cast(AsyncInfoPtr->Queue)); AsyncInfoPtr->Queue = nullptr; return OFFLOAD_SUCCESS; } }; DeviceRTLTy DeviceRTL; } // namespace // Exposed library API function #ifdef __cplusplus extern "C" { #endif int32_t __tgt_rtl_is_valid_binary(__tgt_device_image *image) { return elf_check_machine(image, /* EM_CUDA */ 190); } int32_t __tgt_rtl_number_of_devices() { return DeviceRTL.getNumOfDevices(); } int64_t __tgt_rtl_init_requires(int64_t RequiresFlags) { DP("Init requires flags to %ld\n", RequiresFlags); DeviceRTL.setRequiresFlag(RequiresFlags); return RequiresFlags; } int32_t __tgt_rtl_is_data_exchangable(int32_t src_dev_id, int dst_dev_id) { if (DeviceRTL.isValidDeviceId(src_dev_id) && DeviceRTL.isValidDeviceId(dst_dev_id)) return 1; return 0; } int32_t __tgt_rtl_init_device(int32_t device_id) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); return DeviceRTL.initDevice(device_id); } __tgt_target_table *__tgt_rtl_load_binary(int32_t device_id, __tgt_device_image *image) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); return DeviceRTL.loadBinary(device_id, image); } void *__tgt_rtl_data_alloc(int32_t device_id, int64_t size, void *) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); return DeviceRTL.dataAlloc(device_id, size); } int32_t __tgt_rtl_data_submit(int32_t device_id, void *tgt_ptr, void *hst_ptr, int64_t size) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); __tgt_async_info async_info; const int32_t rc = __tgt_rtl_data_submit_async(device_id, tgt_ptr, hst_ptr, size, &async_info); if (rc != OFFLOAD_SUCCESS) return OFFLOAD_FAIL; return __tgt_rtl_synchronize(device_id, &async_info); } int32_t __tgt_rtl_data_submit_async(int32_t device_id, void *tgt_ptr, void *hst_ptr, int64_t size, __tgt_async_info *async_info_ptr) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); assert(async_info_ptr && "async_info_ptr is nullptr"); return DeviceRTL.dataSubmit(device_id, tgt_ptr, hst_ptr, size, async_info_ptr); } int32_t __tgt_rtl_data_retrieve(int32_t device_id, void *hst_ptr, void *tgt_ptr, int64_t size) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); __tgt_async_info async_info; const int32_t rc = __tgt_rtl_data_retrieve_async(device_id, hst_ptr, tgt_ptr, size, &async_info); if (rc != OFFLOAD_SUCCESS) return OFFLOAD_FAIL; return __tgt_rtl_synchronize(device_id, &async_info); } int32_t __tgt_rtl_data_retrieve_async(int32_t device_id, void *hst_ptr, void *tgt_ptr, int64_t size, __tgt_async_info *async_info_ptr) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); assert(async_info_ptr && "async_info_ptr is nullptr"); return DeviceRTL.dataRetrieve(device_id, hst_ptr, tgt_ptr, size, async_info_ptr); } int32_t __tgt_rtl_data_exchange_async(int32_t src_dev_id, void *src_ptr, int dst_dev_id, void *dst_ptr, int64_t size, __tgt_async_info *async_info_ptr) { assert(DeviceRTL.isValidDeviceId(src_dev_id) && "src_dev_id is invalid"); assert(DeviceRTL.isValidDeviceId(dst_dev_id) && "dst_dev_id is invalid"); assert(async_info_ptr && "async_info_ptr is nullptr"); return DeviceRTL.dataExchange(src_dev_id, src_ptr, dst_dev_id, dst_ptr, size, async_info_ptr); } int32_t __tgt_rtl_data_exchange(int32_t src_dev_id, void *src_ptr, int32_t dst_dev_id, void *dst_ptr, int64_t size) { assert(DeviceRTL.isValidDeviceId(src_dev_id) && "src_dev_id is invalid"); assert(DeviceRTL.isValidDeviceId(dst_dev_id) && "dst_dev_id is invalid"); __tgt_async_info async_info; const int32_t rc = __tgt_rtl_data_exchange_async( src_dev_id, src_ptr, dst_dev_id, dst_ptr, size, &async_info); if (rc != OFFLOAD_SUCCESS) return OFFLOAD_FAIL; return __tgt_rtl_synchronize(src_dev_id, &async_info); } int32_t __tgt_rtl_data_delete(int32_t device_id, void *tgt_ptr) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); return DeviceRTL.dataDelete(device_id, tgt_ptr); } int32_t __tgt_rtl_run_target_team_region(int32_t device_id, void *tgt_entry_ptr, void **tgt_args, ptrdiff_t *tgt_offsets, int32_t arg_num, int32_t team_num, int32_t thread_limit, uint64_t loop_tripcount) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); __tgt_async_info async_info; const int32_t rc = __tgt_rtl_run_target_team_region_async( device_id, tgt_entry_ptr, tgt_args, tgt_offsets, arg_num, team_num, thread_limit, loop_tripcount, &async_info); if (rc != OFFLOAD_SUCCESS) return OFFLOAD_FAIL; return __tgt_rtl_synchronize(device_id, &async_info); } int32_t __tgt_rtl_run_target_team_region_async( int32_t device_id, void *tgt_entry_ptr, void **tgt_args, ptrdiff_t *tgt_offsets, int32_t arg_num, int32_t team_num, int32_t thread_limit, uint64_t loop_tripcount, __tgt_async_info *async_info_ptr) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); return DeviceRTL.runTargetTeamRegion( device_id, tgt_entry_ptr, tgt_args, tgt_offsets, arg_num, team_num, thread_limit, loop_tripcount, async_info_ptr); } int32_t __tgt_rtl_run_target_region(int32_t device_id, void *tgt_entry_ptr, void **tgt_args, ptrdiff_t *tgt_offsets, int32_t arg_num) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); __tgt_async_info async_info; const int32_t rc = __tgt_rtl_run_target_region_async( device_id, tgt_entry_ptr, tgt_args, tgt_offsets, arg_num, &async_info); if (rc != OFFLOAD_SUCCESS) return OFFLOAD_FAIL; return __tgt_rtl_synchronize(device_id, &async_info); } int32_t __tgt_rtl_run_target_region_async(int32_t device_id, void *tgt_entry_ptr, void **tgt_args, ptrdiff_t *tgt_offsets, int32_t arg_num, __tgt_async_info *async_info_ptr) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); return __tgt_rtl_run_target_team_region_async( device_id, tgt_entry_ptr, tgt_args, tgt_offsets, arg_num, /* team num*/ 1, /* thread_limit */ 1, /* loop_tripcount */ 0, async_info_ptr); } int32_t __tgt_rtl_synchronize(int32_t device_id, __tgt_async_info *async_info_ptr) { assert(DeviceRTL.isValidDeviceId(device_id) && "device_id is invalid"); assert(async_info_ptr && "async_info_ptr is nullptr"); assert(async_info_ptr->Queue && "async_info_ptr->Queue is nullptr"); return DeviceRTL.synchronize(device_id, async_info_ptr); } #ifdef __cplusplus } #endif
|
__label__pos
| 0.988649 |
The internet and video streaming
Video Streaming
It has always been stated that when it comes to the internet, the ‘content is king’. But over the years, as the internet became more and more evolved, a new kind of content is becoming increasingly popular these days and it will continue to rise in popularity. Video is now the number one preferred content for most internet users. Why bother read something when it is so much easier watching it?
Video streaming websites have been especially successful these past few years. Think of YouTube and other similar websites. The power of such websites lays in the fact that end users don’t have to download a certain media file before they are able to view it. However, you need to understand the difference that is between video files and video streaming. Streaming video refers to the mechanism of the distribution of medium and not the media itself.
Not so long ago, video streaming was used almost exclusively by the big companies that were able to afford this technology. However, as time passed, more and more websites begun to implement streaming as costs were no longer an issue.
If you are th
Comments
comments
|
__label__pos
| 0.522153 |
Jump to content
some new sub structures in PST creature
Avenger
Recommended Posts
This is a snippet from PST creature structure as IESDP knows it.
0x0270 52 (bytes) Unknown
0x029c 4 (dword) XP (Secondary class)
0x02a0 4 (dword) XP (Tertiary class)
on 0x294 there is an offset
and on 0x298 a memory size (not header count).
The offset points to a new structure which is basically a list of overlay effects on the actor.
These overlays are applied by opcode #201 (whose param #2 is the overlay type, and resource field is the overlay bam).
The overlay type seems to be a varied and haphazard group of effects which are implemented with different opcodes in bg2. Such spells as Shield, Armour, Balance in all things (a kind of fireshield).
So, a PST creature has a secondary effect list (mixed visual/stat affecting) to make my life harder :p
[edit]
The structure of this new subheader is not determined yet.
The first 8 bytes contain a resref (the overlay bam)
The structure's length is 36 bytes (iirc) and also contains the effect type on a word.
The effect type could be 0-18 i think (with 5,9,10 unused or nonexistent).
Later i'll make a more specific list.
Link to comment
Yeah, i guess you expect the detailed list for the effects :)
I'll make it a bit later when i'm back into PST effect implementation.
Now, i'm sidetracked into implementing a correct script decompiler which i can use :D
Link to comment
Here is the list:
p2 resref spell name effect
0 - SPWI304 - Cloak of Warding - absorbs 3d4+level damage then removed, or level*5 seconds expired
1 - SPWI111 - Shield - AC = 3, +1 to saving throws, removed after level*25 seconds
2 - SPWI203 - Black Barbed Shield - +2 AC, attackers suffer 1d6 damage, removed after 10d3 seconds
3 - SPWI209 - Pain Mirror - hostile creatures nearby suffer the same damage as the caster, removed after level*5 seconds or after triggered once
4 - SPWI704 - Guardian Mantle - deflects all attacks if attacker doesn't make saving throw -4 vs. spells, removed after 50+level*5 seconds
5 - ?
6 - SPWI504 - Enoll Eva's Duplication - double projectiles?
7 - SPWI101 - Armor - AC = 6, removed after 8+level damage
8 - SPWI601 - Antimagic Shell - disables all projectiles? disables casting?
9 - ?
10- ?
11- SPPR201 - Flamewalk - 50% fire resistance, +2 saving throws vs. fire
12- SPPR106 - Protection from Evil - +2 AC vs. evil, +2 saving throws vs. evil
13- SPWI902 - Conflagration - 2d6 damage on target per 5 seconds, anyone comes close also suffers it, but entitled to save vs. spells
14- SPWI312 - Infernal Shield - 150% fire resistance (fire heals half damage).
15- SPWI119 - Submerge the Will - AC = 2, +1 to saving throws, removed after level*12 seconds
16- SPWI314 - Balance in All Things - hostile creatures nearby suffer the same damage as the caster, removed after level*5 seconds, can be triggered level/4 times
Link to comment
As you can see, the blackisle guys hardcoded quite diverse effects into this single opcode.
The only common thing is that all of these use some overlay on the actor (it would be a vvc which follows actor in bg2).
The overlays come along with diverse effects that might be triggered when the target is hit. (apply effect on condition in bg2)
The overlays may disappear after some amount of damage suffered (no bg2 equivalent?)
Some of the effects could be simulated with IDS targeting (protection from evil).
Link to comment
So, the struct is like this:
0x00 8 (char array) resref
0x08 2 (word) effect type (0-16)
0x0a 26 (bytes) Unknown
With the effect type being in the range you posted?
EDIT: Fixed bb code.
Link to comment
To further increase weirdness, some projectiles also give out these overlays (with the additional effects).
One such projectile is 229 (shroud of darkness).
I start to hate those hackers at blackisle.
[edit]
The shroud of darkness projectile starts the overlay with type 5.
So that's why i missed 5 (among some other numbers) from the list.
The unknown at 0ch seems to be a timing method style field.
If it is 0200 then the duration is relative.
If it is 0300 then the duration is absolute (like in this example).
Here is what i've learned so far:
00h BAM S048SPHR
08h UNKNOWN 00000040
0ch UNKNOWN 0300
0eh Type 0005
10h Duration ffb7a401
14h UNKNOWN 00000000
18h UNKNOWN 000000bc
20h UNKNOWN 00ffffff
24h UNKNOWN 00000000
Link to comment
Archived
This topic is now archived and is closed to further replies.
×
×
• Create New...
|
__label__pos
| 0.534812 |
What is a Corrupted File?
Every file on your machine is a piece of electronic data, and depending on the type of the file in question, will have a certain structure. If the file is written incorrectly, then the data can become scrambled, resulting in a corrupted file.
A corrupted file might display incorrectly, or it might not open or function at all. Files typically become corrupted during the saving process, for example if the power is cut while a write operation is in progress. Bad sectors can also result in a corrupted file, as can viruses or malware. Some operating systems include tools to recover or repair corrupted files, like the System File Checker tool in Windows or Disk Utility in macOS. Many pieces of software – notably Microsoft Office – have a built-in tool to repair corrupted files. You can also find third-party commercial file repair tools, like Stellar File Repair Toolkit, that may be able to help.
Often, even though 99% of the file is intact, the 1% that is corrupted can render it completely useless. As well as occurring during a loss of power, a corrupted file can be caused by a glitch in a program at the wrong time, causing the saving process to halt. So how can you protect yourself against corrupted files, and what should you do if a file you need becomes corrupted? Your first port of call should be consulting your last backup, but this can sometimes prevent problems. If your backup strategy includes automatic syncing – a common feature of many cloud-based backup packages – opening a backed-up version of the file will be fruitless, as it will be a copy of the corrupt file. If your files are synced automatically, make sure you have the option to recall a past version of a file in case the current one becomes corrupted. Corrupted system files might be fixable using System File Checker on Windows or Disk Utility on Mac. It may also be possible to restore from a system restore point before the file became corrupted. Some apps autosave multiple copies of a file, allowing you to roll back if a later version becomes corrupted.
Although OS file repair tools and third-party software can help with corrupted files, they will frequently be unable to help, and it might be best to delete the corrupted file and start again. Obviously, this depends on the nature of the file in question; a corrupt Word document could be anything from a job application to a thesis. A corrupt program file may be obtainable from the developer, and a corrupt photo might be backed up. It’s a good time to mention again the importance of having an up-to-date backup of your data for situations involving corrupted files. Run anti-virus scans on a regular basis, and using a surge protector can prevent problems during the saving process.
What is a Corrupted File?
|
__label__pos
| 0.659624 |
Take the tour ×
MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required.
Can anyone give me a hint for an algorithm to find a simple cycle of length 4 (4 edges and 4 vertices that is) in an undirected graph, given as an adjacency list? It needs to use $O(v^3)$ operations (v is the number of vertices) and I'm pretty sure that it can be done with some kind of BFS or DFS.
The algorithm only has to show that there is such a cycle, not where it is.
share|improve this question
add comment
3 Answers
up vote 1 down vote accepted
Oh, and there is another way, with the BFS you mentioned. Iteratively, do a BFS from each node. By slightly modifying the BFS algorithm, you can instead of computing the distances from your source vertex to any other, remember the number of shortest paths from your source vertex to any other.
If there is a vertex at distance two which has at least 2 shortest paths to the source vertex, you have found your $C_4$. That's $O(n^3)$.
share|improve this answer
I guess that's what I was looking for. Thank you! – user15816 Jun 16 '11 at 16:38
add comment
Let's assume your vertices are labeled from 1 to $n$ and your adjacency list has the form $(u_1,v_1), (u_2,v_2),..., (u_E,v_E)$, where $1 \le u_i < v_i \le n$ for $1 \le i \le E$. Note that $E$, the number of edges, is $O(n^2)$.
Start with a preprocessing step that converts the adjacency list to a list of neighbor sets $N_i$, one for each $i$ between 1 and $n$: For each $k$ from 1 to $E$, put $u_k$ in set $N_{v_k}$ and $v_k$ in set $N_{u_k}$. (Sorry, those sub-subscripts don't look right.) This takes $O(n^2)$ steps.
Now go through the list of pairs $i,j$ with $1 \le i < j \le n$. For each pair, find the intersection $N_i \cap N_j$, and count its size. If you find a pair $i,j$ for which $|N_i \cap N_j| > 1$, you've found your 4-cycle: vertices $i$ and $j$ are each joined to two other vertices. (Neither $i$ nor $j$ is in $N_i \cap N_j$, since $k \notin N_k$ for any $k$.) The computation for each pair can be done in $O(n)$ steps, and there are $O(n^2)$ pairs, so the total computation takes $O(n^3)$ steps.
(Let me elaborate on why the computation of $|N_i \cap N_j|$ is $O(n)$. At worst, you can convert each neighborhood set into a 0--1 vector of dimension $n$ and then take the dot product of the two vectors.)
It might be of interest to ask a follow-up: Given an adjacency list of $E$ edges for a graph on $n$ vertices, can you detect the presence of a 4-cycle in $O(nE)$ steps?
share|improve this answer
@Barry, welcome to MO! – Gerry Myerson Jun 17 '11 at 5:33
add comment
Build a graph $G'$ on $|V(G)|$ elements, and keep it warm.
Then, for any vertex $v$ of your graph $G$, add to $G'$ an edge for all of the $\binom {|N_G(v)|} {2}$ pairs of vertices at distance 1 from v. If at some point, you try to create an edge that had been created before, you have found a $C_4$
share|improve this answer
btw, it seems to run in $O(n^2)$, as you can create at most $\binom n 2$ edges in $G'$. – Nathann Cohen Jun 16 '11 at 16:20
+1 for "keep it warm" – Hans Stricker Jun 16 '11 at 17:25
add comment
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.898316 |
Home > Math Shortcuts > Divisibility of a number by 9 shortcut tricks
Divisibility of a number by 9 shortcut tricks
Divisibility of a number by 9 shortcut tricks
Shortcut Tricks are very important things in competitive exam. Time is the main factor in competitive exams. If you know how to manage time then you will surely do great in your exam. Most of us miss that part. Here in this page we give few examples on Divisibility of a number by 9 shortcut tricks. We try to provide all types of shortcut tricks on divisibility of a number by 9 here. We request all visitors to read all examples carefully. These examples will help you to understand shortcut tricks on Divisibility of a number by 9.
Before starting anything just do a math practice set. Write down twenty math problems related to this topic on a page. Solve first ten math problems according to basic math formula. You also need to keep track of Timing. After solving all ten math questions write down total time taken by you to solve those questions. Now practice our shortcut tricks on divisibility of a number by 9 and read examples carefully. After doing this go back to the remaining ten questions and solve those using shortcut methods. Again keep track of Timing. You will surely see the improvement in your timing this time. But this is not all you want. You need more practice to improve your timing more.
We all know that the most important thing in competitive exams is Mathematics. It doesn’t mean that other topics are not so important. But if you need a good score in exam then you have to score good in maths. A good score comes with practice and practice. The only thing you need to do is to do your math problems correctly and within time, and only shortcut tricks can give you that success. But it doesn’t mean that without using shortcut tricks you can’t do any math problems. You may have that potential that you may do maths within time without using any shortcut tricks. But so many other people may not do the same. So Divisibility of a number by 9 shortcut tricks here for those people. We always try to put all shortcut methods of the given topic. But we may miss few of them. If you know anything else rather than this please do share with us. Your little help will help others.
We are try to calculate divisibility of a number by 9 or any number using divisor 9, but if any large numbers is given we are not able to perform fast, and using some rule and shortcut tricks we are calculate the divisibility of a numbers by 9.
Divisibility of a number by 9
When the sum of all digits of a number is divisible by 9 then that number is also divisible by 9.
Example #1 – Divisibility of a number by 9
Is 89874 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (8 + 9 + 8 + 7 + 4) = 36.
2. 36 is divisible by 9.
So, the number 89874 is Divisible by 9.
Rough Workspace
Example #2 – Divisibility of a number by 9
Is 9981 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (9 + 9 + 8 + 1) = 27.
2. 27 is divisible by 9.
So, the number 9981 is Divisible by 9.
Rough Workspace
Example #3 – Divisibility of a number by 9
Is 499869 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (4 + 9 + 9 + 8 + 6 + 9) = 45.
2. 45 is divisible by 9.
So, the number 499869 is Divisible by 9.
Rough Workspace
Example #4 – Divisibility of a number by 9
Is 9522 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (9 + 5 + 2 + 2) = 18.
2. 18 is divisible by 9.
So, the number 9522 is Divisible by 9.
Rough Workspace
Example #5 – Divisibility of a number by 9
Is 778965939 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (7 + 7 + 8 + 9 + 6 + 5 + 9 + 3 + 9) = 63.
2. 63 is divisible by 9.
So, the number 778965939 is Divisible by 9.
Rough Workspace
Example #6 – Divisibility of a number by 9
Is 999988875 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (9 + 9 + 9 + 9 + 8 + 8 + 8 + 7 + 5) = 72.
2. 72 is divisible by 9.
So, the number 999988875 is Divisible by 9.
Rough Workspace
Example #7 – Divisibility of a number by 9
Is 99999888876 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (9 + 9 + 9 + 9 + 9 + 8 + 8 + 8 + 8 + 7 + 6) = 90.
2. 90 is divisible by 9.
So, the number 99999888876 is Divisible by 9.
Rough Workspace
Example #8 – Divisibility of a number by 9
Is 33778899999 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (3 + 3 + 7 + 7 + 8 + 8 + 9 + 9 + 9 + 9 + 9) = 81.
2. 81 is divisible by 9.
So, the number 33778899999 is Divisible by 9.
Rough Workspace
Example #9 – Divisibility of a number by 9
Is 73895896989 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (7 + 3 + 8 + 9 + 5 + 8 + 9 + 6 + 9 + 8 + 9) = 81.
2. 81 is divisible by 9.
So, the number 73895896989 is Divisible by 9.
Rough Workspace
Example #10 – Divisibility of a number by 9
Is 77896593 divisible by 9?
1. Yes
2. No
Show Answer Show How to Solve Open Rough Workspace
Answer: Yes
How to Solve
1. Add all digits of the given number,
i.e, (7 + 7 + 8 + 9 + 6 + 5 + 9 + 3) = 54.
2. 81 is divisible by 9.
So, the number 77896593 is Divisible by 9.
Rough Workspace
You may also like to know:
We provide few shortcut tricks on this topic. Please visit this page to get updates on more Math Shortcut Tricks. You can also like our facebook page to get updates.
If You Have any question regarding this topic then please do comment on below section. You can also send us message on facebook.
Leave a Reply
|
__label__pos
| 0.999775 |
The Surprising Amount of Data Collected on the Internet: How It Impacts Businesses and Individuals
Introduction: The Data Explosion in the Digital Age
In today’s digital age, data is being generated at an unprecedented rate. From social media interactions to online transactions, every click and swipe contributes to the ever-growing pool of information. This data explosion has revolutionized the way businesses operate and has opened up new opportunities for innovation and growth.
With the advent of advanced technologies and sophisticated analytics tools, businesses now have access to vast amounts of data that can be harnessed to gain valuable insights. Organizations are increasingly relying on data-driven decision-making processes to optimize their operations, improve customer experiences, and drive competitive advantage.
However, with great power comes great responsibility. As more personal information is collected and analyzed, concerns around privacy and security arise. It is crucial for organizations to prioritize robust data protection measures and adhere to ethical guidelines when handling sensitive information.
The Impact of Data Collection on Businesses
In today’s digital age, data collection has become an integral part of businesses across various industries. The ability to gather and analyze vast amounts of data has revolutionized the way companies operate, make decisions, and interact with their customers. With the power of data at their fingertips, businesses can gain valuable insights into consumer behavior, market trends, and overall performance.
Data collection allows businesses to understand their target audience on a deeper level. By collecting information such as demographics, preferences, and purchasing patterns, companies can tailor their marketing strategies to effectively reach and engage with their customers. This personalized approach not only enhances customer satisfaction but also increases the chances of conversion and brand loyalty.
However, it is important to note that with great power comes great responsibility. As businesses collect more data from consumers, ensuring privacy protection becomes paramount. Companies must adhere to strict ethical guidelines when collecting and storing customer information while providing transparency regarding how the collected data will be used.
In conclusion, the impact of data collection on businesses cannot be overstated. It empowers companies with valuable insights into consumer behavior while enabling them to optimize operations and make informed strategic decisions. Embracing responsible data collection practices will undoubtedly drive success in today’s digital landscape.
Data Collection Practices: Ethical Considerations and Regulations
In today’s digital age, data collection has become an integral part of various industries. However, with the increasing amount of personal information being collected, it is crucial to address the ethical considerations and regulations surrounding data collection practices.
Ethical considerations play a vital role in ensuring that individuals’ privacy and rights are respected when their data is collected. It is essential for organizations to prioritize transparency and consent when collecting personal information. This means providing clear explanations of how the data will be used and obtaining explicit permission from individuals before collecting their data.
Adhering to ethical considerations and regulations not only protects individuals’ privacy but also helps build trust between organizations and their customers. By implementing responsible data collection practices, companies can demonstrate their commitment to protecting personal information while still utilizing valuable insights to improve products or services.
Data Security: Protecting Sensitive Information in a Connected World
In today’s connected world, data security has become a paramount concern for individuals and organizations alike. With the rapid advancements in technology and the increasing reliance on digital platforms, protecting sensitive information has become more challenging than ever before.
The rise of cybercrime and data breaches has highlighted the need for robust measures to safeguard personal and confidential data. From financial records to medical information, sensitive data is constantly at risk of being compromised or exploited by malicious actors.
Furthermore, organizations must establish strict policies and procedures to ensure that employees are trained in best practices for data security. Regular audits and assessments should be conducted to identify vulnerabilities in existing systems and address them promptly.
In conclusion, as our world becomes increasingly interconnected, the importance of data security cannot be overstated. By implementing robust technological solutions, establishing stringent policies and procedures, and promoting individual responsibility, we can create a safer digital environment where sensitive information remains protected from unauthorized access or misuse.
Conclusion: Navigating the Data Landscape for a Better Future
In conclusion, navigating the data landscape is crucial for shaping a better future. As technology continues to advance and data becomes more abundant, harnessing its power effectively is essential for businesses, governments, and individuals alike.
By understanding the importance of data and implementing strategies to collect, analyze, and utilize it intelligently, organizations can gain valuable insights that drive informed decision-making. This can lead to improved operational efficiency, enhanced customer experiences, and increased competitiveness in today’s fast-paced digital landscape.
Furthermore, navigating the data landscape opens up opportunities for innovation and growth. By leveraging advanced analytics techniques such as machine learning and artificial intelligence, businesses can uncover hidden patterns and trends in their data that were previously inaccessible. This enables them to make predictions, optimize processes, and identify new market opportunities.
In conclusion, embracing the potential of data offers immense possibilities for a better future. By harnessing its power responsibly and strategically navigating the ever-evolving data landscape, organizations can unlock valuable insights that drive innovation, growth, and success in today’s digital age.
• Experience the Intense Flavor of Vampire Vape Heisenberg 100ml
Experience the Intense Flavor of Vampire Vape Heisenberg 100ml Experience the Intense Flavor of Vampire Vape Heisenberg 100ml The world of vaping is constantly evolving with new flavors and products being introduced every day. Among the many options available, Vampire Vape’s Heisenberg 100ml has gained a loyal following for its unique and intense flavor. This […]
• The Secret Ingredient: How [Primary Keyword] can be a Valuable Asset for Businesses Looking for Growth and Success
Introduction: Understanding the Power of In today’s competitive marketplace, business growth and success are crucial goals for any organization. To remain ahead of the curve, businesses must constantly seek innovative strategies and valuable assets that will propel them towards their objectives. One key asset that has proven to be a game-changer in achieving business success […]
• The Importance of Storing Customer Data and How it Enhances Business Success
In today’s data-driven world, customer data plays a crucial role in determining the success of businesses. The ability to efficiently store and analyze this valuable information is paramount for companies striving to stay ahead of the competition. By leveraging advanced data storage technologies, businesses can gain deep insights into their customers’ behaviors and preferences, enabling […]
• The Power of TAR (Tape Archive): A Reliable Container Format for Data Storage and Archiving
Introduction: Understanding TAR (Tape Archive) and Its Importance in Data Storage In today’s digital age, data archiving and storage have become critical for businesses and individuals alike. With the exponential growth of data, finding a reliable container format that can efficiently store and preserve information has become essential. This is where TAR format, also known […]
• The Power of Efficient File Formats and Containers: Revolutionizing Data Storage
Introduction: Understanding the Importance of Efficient File Formats and Containers In today’s fast-paced digital landscape, efficient file formats and data storage solutions are crucial for businesses and individuals alike. With the exponential growth of data, it is imperative to optimize file sizes and streamline data management processes. This is where file compression and container formats […]
• Embrace the Power of Admiration: How Filling Your Life with Inspiration Can Lead to Success
The power of admiration is truly remarkable. It has the ability to inspire and motivate individuals towards success, while filling their lives with a sense of purpose and fulfillment. When we find someone or something that we truly admire, it ignites a fire within us, driving us to push beyond our limits and achieve greatness.Admiration […]
• Streamlining Data Compression and Storage: A Comprehensive Guide on How to Make the Process Easier
Introduction: Understanding the Importance of Data Compression and Storage In today’s digital age, where data is constantly being generated and shared, efficient data management has become a critical aspect for individuals and businesses alike. One key aspect of data management is data compression, which involves reducing the file size of data without compromising its quality […]
• Assess Your Own Progress and Make Necessary Changes: A Guide to Personal Growth and Improvement
Introduction: The Importance of Assessing Your Progress In today’s fast-paced world, personal growth and self-improvement have become essential for achieving success and fulfillment. We all strive to become the best versions of ourselves, constantly seeking ways to progress and reach our goals. But how do we measure our progress? How do we ensure that we […]
• Discover the Beauty of Breathtaking Landscapes: Exploring Nature’s Masterpieces
Prepare to be mesmerized by the awe-inspiring and breathtaking landscapes that Mother Nature has bestowed upon us. Set out on a journey of exploration, where you will witness nature’s masterpieces unfold before your very eyes. Immerse yourself in the beauty that surrounds you, as you discover hidden gems nestled within majestic mountains, tranquil lakes reflecting […]
Leave a Reply
Your email address will not be published. Required fields are marked *
|
__label__pos
| 0.994103 |
/* * Copyright (c) 2005 Apple Computer, Inc. All rights reserved. * * @APPLE_LICENSE_HEADER_START@ * * This file contains Original Code and/or Modifications of Original Code * as defined in and that are subject to the Apple Public Source License * Version 2.0 (the 'License'). You may not use this file except in * compliance with the License. Please obtain a copy of the License at * http://www.opensource.apple.com/apsl/ and read it before using this * file. * * The Original Code and all software distributed under the License are * distributed on an 'AS IS' basis, WITHOUT WARRANTY OF ANY KIND, EITHER * EXPRESS OR IMPLIED, AND APPLE HEREBY DISCLAIMS ALL SUCH WARRANTIES, * INCLUDING WITHOUT LIMITATION, ANY WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE, QUIET ENJOYMENT OR NON-INFRINGEMENT. * Please see the License for the specific language governing rights and * limitations under the License. * * @APPLE_LICENSE_HEADER_END@ */ #include #include #include #include "test.h" // PASS(), FAIL() /// /// The point of this test is to load the same bundle file multiple times and /// verify each time it is linked is a new instantiations (new globals, etc) /// int main() { // NSCreateObjectFileImageFromMemory is only available on Mac OS X - not iPhone OS #if __MAC_OS_X_VERSION_MIN_REQUIRED NSObjectFileImage ofi; if ( NSCreateObjectFileImageFromFile("test.bundle", &ofi) != NSObjectFileImageSuccess ) { FAIL("NSCreateObjectFileImageFromFile failed"); return 0; } NSModule mod = NSLinkModule(ofi, "test.bundle", NSLINKMODULE_OPTION_NONE); if ( mod == NULL ) { FAIL("NSLinkModule failed"); return 0; } NSSymbol sym = NSLookupSymbolInModule(mod, "_foo"); if ( sym == NULL ) { FAIL("NSLookupSymbolInModule failed"); return 0; } void* func = NSAddressOfSymbol(sym); //fprintf(stderr, "1st address of foo() = %p in module %p in OFI %p\n", func, mod, ofi); NSObjectFileImage ofi2; if ( NSCreateObjectFileImageFromFile("test.bundle", &ofi2) != NSObjectFileImageSuccess ) { FAIL("2nd NSCreateObjectFileImageFromFile failed"); return 0; } NSModule mod2 = NSLinkModule(ofi2, "test2.bundle", NSLINKMODULE_OPTION_NONE); if ( mod2 == NULL ) { FAIL("2nd NSLookupSymbolInModule failed"); return 0; } if ( mod == mod2 ) { FAIL("2nd NSLinkModule return same function address as first\n"); return 0; } NSSymbol sym2 = NSLookupSymbolInModule(mod2, "_foo"); if ( sym2 == NULL ) { FAIL("2nd NSLookupSymbolInModule failed\n"); return 0; } void* func2 = NSAddressOfSymbol(sym2); //fprintf(stderr, "2nd address of foo() = %p in module %p in OFI %p\n", func2, mod2, ofi2); if ( func == func2 ) { FAIL("2nd NSAddressOfSymbol return same function address as 1st\n"); return 0; } NSObjectFileImage ofi3; if ( NSCreateObjectFileImageFromFile("test.bundle", &ofi3) != NSObjectFileImageSuccess ) { FAIL("3rd NSCreateObjectFileImageFromFile failed"); return 0; } NSModule mod3 = NSLinkModule(ofi3, "test3.bundle", NSLINKMODULE_OPTION_NONE); if ( mod3 == NULL ) { FAIL("3rd NSLinkModule failed\n"); return 0; } if ( mod3 == mod ) { FAIL("3rd NSLinkModule return same function address as 1st\n"); return 0; } if ( mod3 == mod2 ) { FAIL("3rd NSLinkModule return same function address as 2nd\n"); return 0; } NSSymbol sym3 = NSLookupSymbolInModule(mod3, "_foo"); if ( sym3 == NULL ) { FAIL("3rd NSLookupSymbolInModule failed\n"); return 0; } void* func3 = NSAddressOfSymbol(sym3); //fprintf(stderr, "3rd address of foo() = %p in module %p in OFI %p\n", func3, mod3, ofi3); if ( func3 == func ) { FAIL("3rd NSAddressOfSymbol return same function address as 1st\n"); return 0; } if ( func3 == func2 ) { FAIL("3rd NSAddressOfSymbol return same function address as 2nd\n"); return 0; } if ( !NSUnLinkModule(mod, NSUNLINKMODULE_OPTION_NONE) ) { FAIL("NSUnLinkModule failed"); return 0; } if ( !NSUnLinkModule(mod3, NSUNLINKMODULE_OPTION_NONE) ) { FAIL("3rd NSUnLinkModule failed"); return 0; } // note, we are calling NSDestroyObjectFileImage() before NSUnLinkModule() if ( !NSDestroyObjectFileImage(ofi2) ) { FAIL("2nd NSDestroyObjectFileImage failed"); return 0; } if ( !NSUnLinkModule(mod2, NSUNLINKMODULE_OPTION_NONE) ) { FAIL("2nd NSUnLinkModule failed"); return 0; } if ( !NSDestroyObjectFileImage(ofi) ) { FAIL("1st NSDestroyObjectFileImage failed"); return 0; } if ( !NSDestroyObjectFileImage(ofi3) ) { FAIL("3rd NSDestroyObjectFileImage failed"); return 0; } #endif PASS("bundle-multi-load"); return 0; }
|
__label__pos
| 0.992408 |
Topic: How to assign contributions to several sessions (Read 26656 times)
Is it possible to assign an accepted contribution to several sessions?
For our conference we have planned two poster sessions running at different times. All of the accepted posters will be presented in both sessions, and we want to represent this in the schedule created in ConfTool. While the assignment to the first session works well, we are not able to assign the posters to the second session. Can you please advise how to assign all posters to both sessions?
Currently, ConfTool Pro does not provide an option to assign contributions to several sessions, but there are two alternative approaches available.
1. You can use "Referencing Sessions" to refer from one session to another session (called "Parent Session"). The referencing session always shows the same presentations as the parent session, but you can set another time and another room for the referencing session.
To enable referencing sessions, please go to:
Overview => Scheduling => Main Settings for Conference Session Overview
First, enable the expert settings on the bottom of the page by clicking on "Expert Settings Disabled" or the cogwheel. Then, scroll down to section “Further Options” and activate the setting "Enable Extra Session Types" (Image 1).
If enabled, you can select a "Session Type" for each new session (see Image 2). Please use the Session Type "Referencing Session", select a "Parent Session" from the list of existing sessions and set a new session time and location. You also have the option to enter an alternative title for this session (if the title field is left empty, the title of the parent session will be shown).
2. Alternatively, you can use a more "manual" approach to create a copy of the presentations of a session. Please start by assigning the contributions to the first session. Please go to Overview => Scheduling => Create, Configure and Delete Sessions and create the second session. Use the normal session type and disable the option “Assign Contributions” for this second session (see upper green box Image 4). Now you can either:
• copy the session output of the first session into the abstract input field of the second session (see Image 3), or
• insert an HTML hyperlink into the input field “Further Information on the Session” (see lower green box of Image 4) with a related comment like “See session xxx for further details” (more Information on how to create HTML links can be found in the ConfTool documentation). Please note that you have to use the URL of the public session overview (instead of an internal page for chairs or administrators) as link reference.
|
__label__pos
| 0.531207 |
Maximum subsequence sum such that no three are consecutive
Difficulty Level Medium
Frequently asked in 24*7 Innovation Labs Accenture Amazon Delhivery PayPal PayU
Array Dynamic ProgrammingViews 1020
The problem “Maximum subsequence sum such that no three are consecutive ” states that you are given an array of integers. Now you need to find a subsequence that has the maximum sum given that you cannot consider three consecutive elements. To recall, a subsequence is nothing but an array that is left when some of the elements are removed from the original input array keeping the order same.
Example
Maximum subsequence sum such that no three are consecutive
a[] = {2, 5, 10}
50
Explanation
This was an easy choice to pick 5 and 10. Because any other way will not result in a larger sum.
a[] = {5, 10, 5, 10, 15}
40
Explanation
We don’t pick the 5 that is in the middle of the array. Because that will create a subsequence that does not satisfy the condition imposed in the question.
Approach
The problem has asked us to find the subsequence with a maximum sum such that no three consecutive elements are picked. Thus a naive approach could be the generation of the subsequences. As we have done in some of the previous questions. The naive approach is most of the time, to generate the subsequences then check whether the subsequence satisfies the conditions which are imposed in the question. But this approach is time-consuming and can not be used practically. Because using the approach for even moderate-sized inputs will exceed the time limits. Thus to solve the problem we need to use some other method.
We will use Dynamic Programming to solve the problem but before that, we need to perform some casework. This casework is done to reduce the initial problem into smaller subproblems. Because in Dynamic Programming, we reduce the problem into smaller subproblems. So, consider we skip the current element then our problem is reduced to solving the problem until the previous element. Consider, we do pick the current element. Then we have two choices for the previous element. Either we pick the previous element, if we do then we cannot choose the element the previous to previous element. But if we don’t, the problem is reduced to solving the problem until the previous to the previous element. It will be easier to understand using the code.
Code
C++ code to find maximum subsequence sum such that no three are consecutive
#include <bits/stdc++.h>
using namespace std;
int main()
{
int a[] = {1, 2, 3, 4, 5, 6};
int n = sizeof(a) / sizeof(a[0]);
int dp[n];
// base case
if(n>=0)dp[0] = a[0];
if(n>0)dp[1] = a[0] + a[1];
if(n>1)dp[2] = max({a[0] + a[1], a[2]+a[0], a[2]+a[1]});
// if you choose a[i], then choose a[i-1] that is dp[i] = a[i]+a[i-1]+dp[i-3]
// if you choose a[i], then you do not choose a[i-1] dp[i] = dp[i-2] + a[i]
// if you do not choose a[i], dp[i] = dp[i-1]
for (int i = 3; i < n; i++)
dp[i] = max({a[i]+a[i-1]+dp[i-3], dp[i-2]+a[i], dp[i-1]});
cout<<dp[n-1];
}
16
Java code to find maximum subsequence sum such that no three are consecutive
import java.util.*;
class Main{
public static void main(String[] args)
{
int a[] = {1, 2, 3, 4, 5, 6};
int n = a.length;
int dp[] = new int[n];
// base case
if(n>=0)dp[0] = a[0];
if(n>0)dp[1] = a[0] + a[1];
if(n>1)dp[2] = Math.max(Math.max(a[0] + a[1], a[2]+a[0]), a[2]+a[1]);
// if you choose a[i], then choose a[i-1] that is dp[i] = a[i]+a[i-1]+dp[i-3]
// if you choose a[i], then you do not choose a[i-1] dp[i] = dp[i-2] + a[i]
// if you do not choose a[i], dp[i] = dp[i-1]
for (int i = 3; i < n; i++)
dp[i] = Math.max(Math.max(a[i]+a[i-1]+dp[i-3], dp[i-2]+a[i]), dp[i-1]);
System.out.println(dp[n-1]);
}
}
16
Complexity Analysis
Time Complexity
O(N), because we had simply traversed the array and kept on filling our DP array. Thus the time complexity is linear.
Space Complexity
O(N), because we had to make a one dimensional DP array to store the values. The space complexity is also linear.
Translate »
|
__label__pos
| 0.986143 |
The goal of this assignment is to create a star schema for a data warehouse. The data warehouse... 1 answer below »
The goal of this assignment is to create a star schema for a data warehouse. The data warehouse is for the fictitious college used in many of the examples during this course. Analysts at the college are trying to detect trends in registrations in courses over time. A registration is one student registering for one course. They want to know if registrations for individual courses are going up, down, or staying the same over time and whether this is different at the different college campuses. They also want to know if registration trends are different for female students compared to male students. Here is the schemaView in a new windowfor the operational database.
1. 1) Determine the fact for the star schema using the information provided above. You can assume an ETL process exists to create the data for any fact you determine.
2. 2) Determine the dimension tables based on the information provided above. Identify the primary key by underling the appropriate column in the table. List all the columns in the table. List all the columns if the dimension table comes directly from the operational database.
3. 3) Create the fact table. Include all appropriate columns.
4. 4) Underline the primary key column(s) in each table. Italicize the foreign key column(s) in each table unless you use Microsoft Access. See the information below if you use Access.
5. 5) Create your schema using Microsoft Word, PowerPoint. Name your file star_schema. Check with your instructor for approval before using any other application for this assignment.
Solutions:
Tutor picture
Dr Vimal Bibhu answered
956 answers so far
3 Ratings, (9 Votes)
Related Questions
Recent Questions in Database Management System
Submit Your Questions Here !
Copy and paste your question here...
Attach Files
|
__label__pos
| 0.651824 |
Reading lines from a data file
Discussion in 'Python' started by shoemoodoshaloo, May 20, 2009.
1. shoemoodoshaloo
shoemoodoshaloo
Joined:
May 20, 2009
Messages:
1
Hey guys,
I'm new to the forum and hope to learn a great deal and someday even contribute.
I have the following basic assignment:
Take lines from some data file, call it input, and write them to an output. What I want to do is have the user specify the range (for example lines 10-20) that will be picked from the input file to the output file. I am trying to use readlines() and I am able to get the program to pick a certain number of lines, but it always begins as line 1. For example, if I specify lines (30-200) it will recognize that it must extract 170 lines from the input file; however, it starts at line 1 and runs to line 170 instead of starting at line 30. Here is a snipet of the code:
first = int(raw_input('Enter a starting value'))
last = int(raw_input('Enter a final value'))
def add_line_numbers(infile, outfile):
f = open(infile,'r')
o = open(outfile,'w')
i = 1
for i in range(first, last):
o.write(str(i)+'\t'+f.readline())
f.close()
o.close()
--- Parsing code follows
The code originally worked fine until I began editing it. The original code takes an entire input file and transfers it to an entire output file. So only my edits are in question.
The 'i' acts as a counter, and that part works fine. The counter will read 30-200 for example; however, the lines that are inputing are still 1-170 from the original data type.
As I said, I am new to python and very amenable to suggestions so if you have a smarter way to tackled this problem, I am all ears.
shoemoodoshaloo, May 20, 2009
#1
1. Advertising
Want to reply to this thread or ask your own question?
It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
Similar Threads
1. Jack
Replies:
9
Views:
2,636
2. Joe Wright
Replies:
0
Views:
496
Joe Wright
Jul 27, 2003
3. lovecreatesbeauty
How to know two lines are a pare parallel lines
lovecreatesbeauty, Apr 27, 2006, in forum: C Programming
Replies:
11
Views:
638
Old Wolf
Apr 28, 2006
4. Murali
Replies:
2
Views:
541
Jerry Coffin
Mar 9, 2006
5. [email protected]
Replies:
5
Views:
68
Chris Angelico
May 14, 2014
Loading...
Share This Page
|
__label__pos
| 0.610702 |
emathematics.net
Español
User: desconectar
Maths Exercices
- Multiply a decimal by a one-digit whole number
To multiply a decimal by a one-digit whole number, count the decimal places in the factors, multiply, and move the decimal point one place to the left for each decimal place you counted.
Multiply 8.3 x 9 =
Multiply as you would multiply whole numbers.
Count the number of decimal places in the factors. There are 1 decimal places in 8.3. Move the decimal point 1 places to the left in the answer.
747. 74.7
Write the answer:
Multiply 8.2 x 4 =
Solution:
|
__label__pos
| 0.996079 |
programming4us
programming4us
DESKTOP
Windows 7 : Troubleshooting Hardware Components (part 2) - Troubleshooting the Motherboard, Troubleshooting RAM, Troubleshooting Hard Disks
- Free product key for windows 10
- Free Product Key for Microsoft office 365
- Malwarebytes Premium 3.7.1 Serial Keys (LifeTime) 2019
4. Troubleshooting the Motherboard
The motherboard is the main component of the computer. It includes the CPU or CPUs, slots for memory modules; expansion slots for other devices; and (typically with modern motherboards) built-in components and related ports for Ethernet, sound, video, and USB.
Figure 1 shows a modern motherboard with built-in components for video, USB, Ethernet, and audio.
The following section provides a set of basic strategies for troubleshooting motherboard problems.
When you attempt to start the computer, you see no video and hear no beep codes.
1. Disconnect all external accessories, such as external drives and PC cards, and then attempt to restart the computer. If you can start the computer, attempt to isolate the problem device by attaching one more device and restarting and until the failure reappears. Once you determine the external device that is causing the problem, contact the device manufacturer for further troubleshooting instructions.
Modern motherboards usually include built-in components for video, USB, Ethernet, and audio.
Figure 1. Modern motherboards usually include built-in components for video, USB, Ethernet, and audio.
2. Verify that the monitor is in fact receiving power and is plugged into the computer.
3. Verify that the power supply fan is running. If it is not running, troubleshoot the power supply.
4. Verify that all required power connectors are plugged into the motherboard and into other computer devices. (Remember that most modern motherboards require two power connectors.)
5. Verify that any internal power switch is turned on.
6. If your power supply has a voltage switch, verify that the switch is set to the proper AC voltage for your country.
7. Verify that the motherboard is seated properly and that the CPU is fitted properly in its slot.
8. Verify that your RAM modules are seated properly and in the correct slots according to the motherboard manufacturer's specifications.
9. Run Windows Memory Diagnostic and replace any RAM modules if necessary.
10. Reset the BIOS to default settings. (To learn how to do this, consult the manual for the motherboard. Note that you can also reset the BIOS by removing the battery on the motherboard for 30 minutes.)
11. Use the manual for the motherboard to verify that any jumpers on the motherboard are properly set.
12. If your computer has no internal speaker (which would allow you to hear beep codes), replace the video card.
13. Replace the power supply unit.
14. Replace the motherboard.
When you turn on the computer, you hear beep codes, but the computer fails to start.
1. Disconnect all external accessories such as external drives and PC cards, and then attempt to restart the computer. If you can start the computer, attempt to isolate the problem device by attaching one more device and restarting until the failure reappears.
2. Consult the motherboard manual or manufacturer Web site to determine the meaning of the beep code you hear.
3. Try to fix the faulty component denoted by the beep code. This step might include attaching power connectors, reseating components such as RAM or the CPU, resetting the BIOS, or resetting motherboard jumpers.
4. If necessary, replace the faulty component denoted by the beep code.
The computer repeatedly loses power whenever it runs for a number of minutes.
1. Verify that the CPU fan on the motherboard is working. If not, replace the CPU fan.
2. Adjust the environment around the computer so that hot air cannot build up in its vicinity. (Laptops are especially sensitive to this.)
The computer shuts down randomly at unpredictable intervals.
1. Run Windows Memory Diagnostic to check your RAM for hardware faults.
2. Run motherboard diagnostic software to check the functionality of the motherboard. To obtain this software, consult the motherboard manufacturer.
3. Adjust the environment around the computer so that hot air cannot build up in its vicinity. (Laptops are especially sensitive to this.)
The operating system cannot use power management, virtualization, USB or network boot, hot swapping, or other features that are supported by your hardware.
Enable the desired feature in the BIOS Setup program.
5. Troubleshooting RAM
In the context of personal computers, the term RAM refers specifically to the volatile, dynamic random access memory supplied by modules such as dual inline memory modules (DIMMs). This type of memory is used to store relatively large amounts of data in a location that the processor can access quickly. An important limitation of computer RAM is that it can store data only when power is supplied to it.
The most typical symptom of a memory problem is a system crash or stop error in Windows. When these errors occur, you might see a message explicitly indicating a memory problem. However, memory problems can also prevent Windows from starting in the first place. If you see an error message directly related to memory, or if you need to rule out faulty memory as the cause of computer crashes or startup failures, perform the following steps:
1. Run Windows Memory Diagnostic software.
2. If no errors are found, or if some of the installed RAM is not recognized, do the following:
1. Verify that the memory modules are seated properly.
2. Verify that the memory modules are seated in the proper slots according to the motherboard manufacturer's specifications.
3. Verify that the memory used is the type required according to the motherboard manufacturer's specifications.
4. If the problem persists, remove all modules, clean the memory slots, insert one module in the first slot, and then restart the computer. Use this method to test all your memory modules.
6. Troubleshooting Hard Disks
Described technically, a hard disk drive represents a type of non-volatile memory storage device that encodes data on a spinning magnetic platter. Though the technology is decades old, it is still the most common type of computer storage today. However, hard disk drives are starting to be replaced by alternative forms of non-volatile storage, such as solid-state drives.
The following section provides a set of basic strategies for troubleshooting hard disk problems.
You hear a loud whirring, screeching, or clicking.
1. Back up your data. The hard drive could be about to fail.
2. Replace the drive.
The operating system fails to start, and you receive an error message similar to any of the following:
Hard disk error.
Invalid partition table.
A disk-read error occurred.
Couldn't find loader.
1. Verify that the BIOS Setup program is configured to boot from the hard drive.
2. Verify that the hard drive contains an operating system.
3. Run the Startup Repair tool.
4. Verify that the power connectors are attached to the hard drive.
5. Verify that any jumpers on your hard drives are configured properly according to manufacturer specifications.
6. Attempt to recover the disk by using the System Image Recovery option.
7. Replace the hard drive.
The operating system loads, but performance gradually decreases over time.
Run Disk Defragmenter.
The operating system loads, but you find evidence of data corruption.
OR
The system occasionally freezes and remains unresponsive.
1. Run Chkdsk.
2. Run software diagnostics from the hard disk drive manufacturer to test the physical functionality of the hard disk drive.
PRACTICE: Testing Specific Hardware Components
In this practice, you run diagnostics to test the integrity of your computer memory and hard disk.
EXERCISE 1 Testing your RAM with Windows Memory Diagnostic
In this exercise, you restart your computer, open the Windows Boot Manager menu, choose Windows Memory Diagnostic, and perform a memory test.
1. Remove all CD or DVD discs from the local drives on a computer that is running Windows 7.
2. Start or restart the computer.
3. As the computer is starting, press the spacebar repeatedly (once per second is sufficiently fast).
The Windows Boot Manager menu appears.
4. Press the Tab key to select Windows Memory Diagnostic on the Windows Boot Manager menu, and then press Enter.
The Windows Memory Diagnostic tool opens.
5. Review the contents of the screen, and then press F1 to open the Options screen.
6. In the Options screen, use the Tab key, arrow keys, and number keys to set the test mix to Basic and the pass count to 1.
7. Press F10 to apply the new settings.
8. A quick memory test begins. After the memory test is complete, Windows restarts automatically. Soon after you next log on, a notification bubble will appear indicating whether any errors were found.
EXERCISE 2 Testing Your Hard Disk with Chkdsk
In this exercise, you log on to Windows 7, open an elevated command prompt, and run the Chkdsk command from the command line.
1. Log on to Windows 7 and open an elevated command prompt. You can do this by selecting Start\All Programs\Accessories\, right-clicking Command Prompt, selecting Run As Administrator from the shortcut menu, and then clicking Yes on the User Account Control message prompt that appears.
2. At the command prompt, type chkdsk /?.
3. Read the output and review the options available with the Chkdsk command.
4. At the command prompt, type chkdsk c: /f /v /i /c.
(If your system drive is assigned a letter other than C:, then replace the c: in this command with the drive letter to which you have assigned the system drive. For example, if your system drive is assigned E:, then you should type chkdsk e: /f /v /i /c.)
This set of options automatically fixes errors (/f) that are found and displays cleanup messages (/v). However, Chkdsk performs a faster test that skips certain types of checks (/i and /c).
5. A message output appears, indicating that Chkdsk cannot run because it is in use by another process and asks if you would like to schedule the volume to be checked the next time the system restarts.
This message appears because the volume you have chosen to test is currently being used to run Windows. You can run Chkdsk only on a volume that is not otherwise in use.
6. Type Y, and then restart the system.
7. When Windows restarts, a message appears while Chkdsk is being run and indicates that because the /i and /c options were specified, the disk could still be corrupt even if no errors are found.
When Chkdsk finishes, Windows starts automatically.
Other
Top 10
Free Mobile And Desktop Apps For Accessing Restricted Websites
MASERATI QUATTROPORTE; DIESEL : Lure of Italian limos
TOYOTA CAMRY 2; 2.5 : Camry now more comely
KIA SORENTO 2.2CRDi : Fuel-sipping slugger
How To Setup, Password Protect & Encrypt Wireless Internet Connection
Emulate And Run iPad Apps On Windows, Mac OS X & Linux With iPadian
Backup & Restore Game Progress From Any Game With SaveGameProgress
Generate A Facebook Timeline Cover Using A Free App
New App for Women ‘Remix’ Offers Fashion Advice & Style Tips
SG50 Ferrari F12berlinetta : Prancing Horse for Lion City's 50th
- Messages forwarded by Outlook rule go nowhere
- Create and Deploy Windows 7 Image
- How do I check to see if my exchange 2003 is an open relay? (not using a open relay tester tool online, but on the console)
- Creating and using an unencrypted cookie in ASP.NET
- Directories
- Poor Performance on Sharepoint 2010 Server
- SBS 2008 ~ The e-mail alias already exists...
- Public to Private IP - DNS Changes
- Send Email from Winform application
- How to create a .mdb file from ms sql server database.......
programming4us programming4us
programming4us
programming4us
|
__label__pos
| 0.922381 |
Failover of a control plane node in single node/ HA Kubernetes Cluster created using kubeadm
Hi ,
I have the following doubts:
Scenario 1: kubeadm cluster version upgrade (single control plane node cluster), when the control plane is down during upgrade.
If one of the node/pod fails, and then the master node fails (due to upgrade/ irrespective of upgrade), would the cluster have to be manually restarted and how will etcd get the state of the pods post restart?
Scenario 2 : In case of HA control plane cluster, if there are 3 control plane running, and one of them goes down (due to upgrade / irrespective of upgrade-hardware failure) .
Would the control plane node be automatically created ? If Yes, what version (new or old)?
|
__label__pos
| 0.997607 |
Apparatus for efficient utilization of removable data recording media
- IBM
An apparatus for efficiently utilizing data recording media performs data compression beneath the level of the host processor is disclosed. To improve the ability of a recording media to be copied without increasing host processor overhead, the control unit which sees the compressed data is checked only upon recording a predetermined amount of uncompressed data. At such time, a compression ratio is calculated for the current data set and is used to monitor the recording of the remaining data of the current data set in compressed form. When a predetermined amount of compressed data is estimated to be recorded, the predetermined amount being the minimum storage capacity of a recording media, recording begins on a new recording media. Recording media spanning is reduced by checking counters in the storage device control unit only upon completion of recording an entire data set, and then using the uncompressed size of the next data set to be recorded to determine whether or not to continue recording on the same or a new cartridge.
Skip to: Description · Claims · References Cited · Patent History · Patent History
Description
BACKGROUND OF THE INVENTION
1. FIELD OF THE INVENTION
The present invention relates to an apparatus for efficiently utilizing data recording media in a data processing system. More particularly, the invention relates to improving the ability of a recording media to be copied and for reducing recording media spanning.
2. DESCRIPTION OF THE RELATED ART
Modern computers require a host processor including one or more central processing units and a memory facility. The processor manipulates data stored in the memory according to instructions provided to it. The memory must therefore be capable of storing data required by the processor and transferring that data to the processor at a rate capable of making the overall operation of the computer feasible. The cost and performance of computer memory is thus critical to the commercial success of the computer system.
Because today's computers require large quantities of data storage capacity, computer memory is available in many forms. A fast but expensive form of memory is main memory, typically comprised of microchips. Other available forms of memory are known as peripheral storage devices and include magnetic direct access storage devices (DASD), magnetic tape storage devices, optical recording devices, and magnetic or optical mass storage libraries. Each of these other types of memory has a greater storage density and thus lower cost than main memory. However, these other memory devices do not provide the performance provided by main memory. For example, the time required to mount a tape or disk in a tape drive, DASD, or optical disk drive and the time required to properly position the tape or disk beneath the read/write mechanism of the drive cannot compare with the rapid, purely electronic data transfer rate of main memory. It is inefficient to store all of the data in a computer system on but a single type of memory device. Storing all of the data in main memory is too costly and storing all of the data on one of the peripheral storage devices reduces performance.
A typical computer system includes both main memory and one or more types of peripheral storage devices arranged in a data storage hierarchy. The data storage hierarchy arrangement is tailored to the performance and cost requirements of the user. In such a hierarchy, main memory is often referred to as primary data storage, the next level of the hierarchy is often referred to as secondary data storage, and so on. Generally, the highest level of the hierarchy has the lowest storage density capability, highest performance and highest cost. As one proceeds down through the levels of the hierarchy, storage density generally increases, performance generally decreases, and cost generally decreases. By transferring data between different levels of the hierarchy as required, the cost of memory is minimized and performance is maximized. Data is thus stored in main memory only so long as it is expected to be required by the processor. The hierarchy may take many forms, include any number of data storage or memory levels, and may be able to transfer data directly between any two distinct memory levels. The transfer of data may employ I/O channels, controllers, or cache memories, as are well known in the art.
A variety of techniques are known for improving the efficiency of use of one or more components of a data storage hierarchy. One set of such techniques is known as data "compaction" and similar names. The term compaction has been used in many ways to refer to methods of storing and transmitting data efficiently. One type of compaction improves data transformation by using the minimum number of required bits to represent the most commonly coded characters. Less commonly coded characters may be represented by more than the minimum number of bits required. Overall, this compaction technique allows for a given amount of information to be coded using a minimum number of bits.
Another type of compaction which is frequently used is the coding of data in such a manner as to remove non-changing bits. Sometimes referred to as run length limited (RLL) coding, this type of compaction replaces strings of the same bit with a simple binary representation of the number of bits to be repeated. An example of such a technique is disclosed in U.S. Pat. No. 4,675,750. The patent discloses a video compression system including the removal of superfluous bits, as stored on magnetic tape.
Another technique for data compaction is the elimination of invalid data. Because recorded data may include invalid data subsequently corrected using error correction codes, more data storage space may be required to store the data than that required if no errors existed therein. In the IBM Technical Disclosure Bulletin Vol. 24, No. 9, February, 1982, page 4483, a technique is disclosed for eliminating invalid data from data sets. The technique includes copying only the valid data of a data set when the size of that data set reaches a certain threshold, ignoring the invalid data. The amount of storage space required to store such data is thus reduced.
Yet another compaction technique saves storage space by using fragmented storage space. Fragmentation refers to the unused portions of a recording media which result from frequent accesses to the data sets thereon. During the course of use, various areas of a recording media may be erased or otherwise eliminated from use. However, each contiguous unused recording space on the recording media may be so small as to make it difficult to record an entire data set therein. Compaction techniques are known for copying data sets from one recording media to another to permit the accumulation of several unused recording areas into a single large contiguous recording space. In addition, U.S. Pat. No. 3,787,827 discloses a data recording system in which a recording media is cyclically checked to locate unused spaces therein. Such checking ensures that unused areas in the recording media are eventually used.
Yet another compaction technique is blocking. Blocking is the combination of two or more logical records into a single transferable or recordable entity. The single entity is typically referred to as a block. Blocking reduces the number of inter-record or inter-block gaps which exist between records to permit them to be distinguished from one another. Blocking sacrifices the ability to access logical records individually to achieve a greater recording density. An example of such a blocking technique is shown in U.S. Pat. No. 3,821,703.
The aforementioned data compaction techniques are all directed toward reducing the amount of data storage space required to record a particular amount of information. In addition, the transfer of data in compacted form may improve data transfer rates. Because the term compaction is loosely used to represent any of the aforementioned techniques, the term "compression" will hereinafter be used to refer to any technique that saves data storage space by, for example, eliminating gaps, empty fields, redundancies, or unnecessary data to shorten the length of records or blocks. The penalty for using data compression is the overhead required to convert the data from uncompressed to compressed form and vice versa. The logic required to compress and decompress data may be provided in the host processor. Unfortunately, the compression and decompression of data at the level of a host processor detracts from the ability of the host processor to perform its normal responsibilities. Thus, the logic required to compress and decompress data is sometimes provided in the control units of peripheral storage devices, thereby offloading the responsibility for data compression and decompression from the host processor to the peripheral storage device. Data processing systems having the responsibility for data compression and decompression residing outside of the host processor are shown in IBM Technical Disclosure Bulletin Vol. 22, No. 9, February 1980, pp. 4191-4193 and IBM Technical Disclosure Bulletin Vol. 26, No. 3A, August 1983, page 1281.
Two problems arise when data compression is offloaded to the control unit of a peripheral storage device. The first problem is associated with the ability of a recording media to be copied onto another recording media. For example, consider the IBM 3480 magnetic tape drive, in which the listed storage capacity of a tape cartridge is 200 megabytes. Due to the nature of the tape cartridge production process, the exact length of tape wound in a tape cartridge can only be specified to within a particular tolerance. Thus, the actual storage capacity of a tape cartridge may be slightly greater than 200 megabytes. It is necessary to limit the total recorded data on a tape cartridge to that of the minimum amount of data capacity on the cartridge if the ability to copy the data from one cartridge to another single cartridge is to be guaranteed. If data were recorded until the actual capacity of the cartridge was exceeded (i.e., no tape remained) it would be possible to record more than 200 megabytes on a cartridge, and in turn it would be impossible to copy the entire contents of that tape cartridge to another tape cartridge having a capacity of merely 200 megabytes. Similar problems can occur with other types of data recording media.
Two techniques can be used to ensure that the amount of data recorded on a recording media does not exceed the minimum amount of data storage capacity guaranteed thereon. The first technique is to physically check how much of the recording media has been used throughout recording. Such a technique may come at the expense of heavy overhead or of imprecision. For example, in a tape drive it is known to use tachometers and the like to control tape motion and to track the length of tape on a particular tape reel. Examples of techniques for physically checking how much of a recording media has been used are disclosed in U.S. Pat. Nos. 4,125,881 and 4,811,132. Unfortunately, techniques for physically determining how much of a data recording media has been recorded are not accurate enough to be relied upon for all applications.
The other method for ensuring that no more data than the minimum capacity for a particular recording media is recorded includes monitoring the data as it is recorded. In data processing systems in which data is transferred or stored in uncompressed form, such techniques are reliable. As the data is written to the recording media, it is monitored to keep track of the total amount of data that has been recorded on each media. Because the data is not compressed, the amount of data recorded correlates to the amount of data seen by both the host processor and the storage device control unit. However, in data processing systems which compress data, it is necessary to know the amount of data recorded in compressed form. If the data is compressed within the host processor, there is no problem. Storage management software which runs in the host processor will have access to the data in compressed form and thus have the ability to monitor the amount of data stored in such compressed form. In many of today's data processing systems however, the overhead associated with compressing the data at the level of the host processor has proved too costly. As previously mentioned, the performance of the host processor has been upgraded by offloading the responsibility for compressing the data from the host processor to the peripheral storage device control units. Such offloading not only improves the performance of the host processor, but also permits data compression and decompression to be transparent to the host processor. Different compression algorithms may be used by each peripheral storage device connected to a single host processor so long as that device returns data to the host processor in uncompressed form.
In data processing systems in which compression is done in storage device control units it is impossible for the storage management software operating in the host processor to be aware of the amount of data stored on a recording media in the storage device in compressed form. Although the storage management software still "sees" the data in uncompressed form in the host processor, it is impossible for it to determine the exact amount of recording media space required to store the data when it is compressed. Merely recording until a particular amount of uncompressed data has been recorded could result in the minimum tape capacity being exceeded because the assumed amount of compression was not in fact accurate. Using counters in the storage device control unit, it is possible to monitor the amount of data that is recorded in compressed form. However, constant retrieval of such compressed data information from counters in the storage device control unit to the host processor for access by storage management software again results in costly overhead. There is thus a need for a method of accurately monitoring the amount of compressed data that is stored on a recording media with a minimum of host processor overhead.
The other problem associated with data compression is recording media spanning. It is generally desirable to avoid spanning a data set across multiple recording media because recall of that data set will require the mounting of more than one recording media, or if all required recording media are already mounted, more than one seed of data on those recording media. It is known to simply write data to the end of a recording media and span a data set across multiple recording media if so required when the end of a recording media is reached. However, as libraries of data recording media have grown in modern times, the need to avoid recording media spanning has become more important. Again, as it has become practice to compress data at the level of a storage device control unit it has become more difficult to predict the likelihood that a data set will be required to span across multiple recording media prior to its recording and with a minimum amount of host processor overhead.
SUMMARY OF THE INVENTION
The primary object of the present invention is improved utilization of removable data recording media in a data processing system.
Another object of the present invention is to improve the ability of a recording media in a data processing system to be copied with a minimum of host processor overhead and where data compression is performed beneath the level of the host processor.
Yet another object of the present invention is to reduce recording media spanning of data sets in a data processing system with a minimum of host processor overhead and where such system compresses data at a level beneath that of the host processor.
Yet another object of the present invention is a data processing system including improved methods for both increasing the ability of a recording media to be copied and reducing recording media spanning as previously described.
These and other objects of the present invention are achieved by monitoring methods performed by storage management software. To improve the ability of a recording media to be copied without increasing host processor overhead, the control unit which sees the compressed data is checked only upon recording a predetermined amount of uncompressed data. The amount of uncompressed data recorded can be monitored directly by the host processor. At such time as the predetermined amount of uncompressed data is recorded, the compression ratio for the data set is calculated and used to monitor the recording of the remaining data in compressed form. When a predetermined amount of compressed data is estimated to be recorded, the predetermined amount being the minimum storage capacity of a recording media, recording begins on a new recording media.
The method of reducing recording media spanning without increasing host processor overhead includes checking counters in the storage device control unit only upon completion of recording an entire data set, and then using the uncompressed size of the next data set to be recorded to determine whether or not to continue recording on the same or a new cartridge. If the total of the known compressed data recorded and the uncompressed data to be recorded exceeds the target capacity of the recording media, a new recording media is inserted and the data set is recorded on the new media. The aforementioned methods can also account for inaccuracies in the data provided by the control unit counters.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of the preferred embodiment of the invention, as illustrated in the accompanying drawing.
BRIEF DESCRIPTION OF THE DRAWING
FIG. 1 is a schematic diagram of a multi-host data processing system having a plurality of peripheral data storage devices which can be managed according to the invention.
FIG. 2 is a flow diagram illustrating the invention.
FIG. 3 is a flow diagram which connects with that of FIG. 2.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The invention will now be described as practiced in a multi-host processor data processing environment having a plurality of peripheral data storage devices of diverse types and capabilities. It should be understood that the invention may also be practiced in a single-host processor environment having a smaller number of peripheral data storage devices, or with a variety of different system structures.
Referring to FIG. 1, a data processing system in a multi-host environment will now be described. The system includes two or more host processors, a host processor 10 and a host processor 11 being shown in the figure, each of which includes the usual component portions of a host processor, such as the arithmetic logic unit, main memory, and input/output channels (not shown). Each host processor can be either a unit processor or a multi-processor. The host processors employ various operating systems not pertinent to an understanding of the present invention. Within each host processor is a computer program employing the invention, as will be detailed.
Host processors 10 and 11 are connected to a common DASD 12. Common DASD (direct access storage device) 12 consists of a high performance disk-type data storage device. Stored in common DASD 12 are those control data structures (not shown) desired for coordinating operations of host processors 10 and 11 in executing a data storage management program. A high performance DASD 14 labeled L0 DASD, stores those data sets directly accessed by host processors 10 and 11 and receives data sets for storage generated by host processors 10 and 11. A lower performance DASD 15, labeled L1 DASD, stores those data sets accessed by host processors 10 and 11 less frequently than those stored on high performance DASD 14. When the data sets stored in DASD 14 become aged through non-access by host processors 10 and 11 the data storage management program automatically moves data sets from DASD 14 to DASD 15 such that data set access by host processors 10 and 11 is enhanced by keeping only those data sets that are frequently accessed by the host processors in DASD 14. DASD's 14 and 15 represent the first two levels of a data storage hierarchy created by the data storage management program.
A still lower level in the data storage hierarchy is represented by a mass storage system (MSS) 16, labeled L2 MSS, and a tape drive 17, labeled L2 TAPE. MSS 16 and DASD's 12, 14 and 15 provide for automatic accessing of all data sets stored therein. MSS 16 includes one or more means for reading and writing to recording media and automated means for transferring such media between storage cells located in MSS 16 and the means for reading and writing. The recording media may be magnetic tape, magnetic disk, or optical disk and the means for reading and writing may be tape drives or magnetic or optical disk drives as the case may be. MSS 16 may also include means for inserting or removing recording media therein. Tape drive 17 is used for archival or other long term data storage, backup and the like and usually require operator intervention for mounting and demounting tape volumes. The system operator and system console is not shown in FIG. 1 for the purpose of simplification.
In the preferred embodiment, the storage management program including the invention is Hierarchical Storage Manager (HSM), a data facility in the Multiple Virtual Storage (MVS) operating system environment. A general description of HSM may be found in U.S. Pat. Nos. 4,771,375 and 4,638,424; IBM Manual SH35-0085-3, DATA FACILITY HIERARCHICAL STORAGE MANAGER VERSION 2 RELEASE 4.0, "System Programmer's Guide"; IBM Manual SH35-0083-3, DATA FACILITY HIERARCHICAL STORAGE MANAGER VERSION 2 RELEASE 4.0, "System Programmer's Command Reference"; and IBM Manual LY35-0098-1, DATA FACILITY HIERARCHICAL STORAGE MANAGER VERSION 2 RELEASE 4.0, "Diagnosis Guide", the disclosure of which are hereby incorporated by reference. HSM is a continuously running application program and includes instructions residing in host processors 10 and 11. HSM provides for data processing system space management by migrating data sets between the different levels of the data storage hierarchy according to predetermined specifications, and also provides availability management by backing up data sets and dumping volumes of data also according to predetermined or user driven specifications. The subject invention may improve the efficiency of any type of recording media used in a data processing system. As regards FIG. 1, the inventive method will be described with respect to a preferred embodiment when data is being recorded to tape drive 17. More specifically, tape drive 17 is an IBM 3480 magnetic tape drive and the recording media is a magnetic tape cartridge storing a data volume of up to 200 megabytes. The control unit 23 for the tape drive compresses data and maintains counters 21 including certain information about the data written to a tape cartridge since it was last mounted, as will be explained later.
As stated, the data to be recorded is compressed in the tape drive control unit, which acts as a buffer to the tape cartridges. Data compression is performed in accordance with U.S. Pat. Nos. 4,463,342 and 4,467,317, commonly assigned co-pending U.S. patent application Ser. No. 07/372,744, by Dunn, et al., and IBM Technical Disclosure Bulletin Vol. 27, No. 6, November 1984, pp. 3275-3278, the disclosure of which is hereby incorporated by reference. Data transferred to the control unit for recording is referred to as logical or uncompressed data. Data that has already been compressed in the tape drive control unit is referred to as compressed data. Compressed data that has been recorded on a tape cartridge is referred to as physical data. The distinction between logical and physical data is thus the number of bytes (i.e., the amount) of contiguous storage space on the tape cartridge that is required to store the data. Logical data to be written on a tape cartridge is transferred from a host processor to the tape drive control unit when a minimum block of 16K bytes of data in uncompressed form. It is compressed by the tape drive control unit and accumulated in compressed form. When a still larger threshold amount of data is accumulated in the control unit buffer 22 the data is physically recorded on a tape cartridge. The data set being written at any given time is referred to as the current data set.
The counters of the tape drive control unit maintain certain statistics used to monitor the amount of tape in a tape cartridge which has been recorded (i.e., the position of the tape). One counter tallies the amount of logical data which has actually been received by the tape drive control unit, another counter tallies the amount of physical data written on the tape cartridge, and yet another counter tallies the number of inter-block gaps in the physical data. As stated previously, the counters are reset each time a tape cartridge is mounted. Access to the information in the counters is achieved by issuance of a READ BUFFERED LOG command. The structure and operation of the counters are known to one of skill in the art.
As stated previously, common DASD 12 stores certain control data structures. DASD 12 includes a migration control data set (MCDS) for migration volumes and a backup control data set (BCDS) for backup volumes. The control data sets are accessed by specifying the record type and record key (VOLSER), the structure and operation of which are known to one of skill in the art. The control data sets maintain certain information on each tape cartridge, including the position of the tape at the end of output from its previous mount in the tape drive. The position thus indicates the total amount of physical data on the tape cartridge, at the end of the previous mount, including actual length of data and inter-record gaps. Also included in the control data set is the total number of logical data bytes requested to be written to the tape cartridge during the current mount. This number does not include inter-record gaps. Finally, the control data set includes the total number of physical data bytes on a tape cartridge, also not including interrecord gaps.
HSM maintains certain statistical information in the main memory of the active host processor. This information includes tallies of the amount of logical data and associated number of blocks which have been sent to the tape drive control unit. Also maintained in main memory is any other information required as will be described.
Referring to FIG. 2, the method begins at point 30 when a tape cartridge is mounted in the tape drive. At step 31 the host processor transferring the data to the tape cartridge begins to logically monitor the recording. As recording proceeds during step 31 the amount of uncompressed data that is sent to the control unit of the peripheral storage device is tracked in main memory. So long as a target amount of data is not reached during step 31 recording continues. The target is shown at step 32 and may be set to the minimum capacity of a tape cartridge to improve the ability of the cartridge to be copied onto another single cartridge, or may be set to any predetermined level desired by the storage administrator. So long as the target is not met, recording will continue until the end of the data set is reached at step 33. When the end of a data set is reached the branch step 33 directs the flow of operations to point 50 in FIG. 3.
When the end of a data set is reached, the method reaches step 51 wherein the actual position on the tape, or amount of physical data thereon, is calculated. The actual position is calculated by extracting the counts from the control unit for use by the recording host processor. The amount of tape storage space used during the current mount is calculated by summing together the amount of physical data written, the number of inter-block gaps in the physical data, and the amount of logical data which has been sent to the tape drive control unit, less the amount of logical data actually recorded on a tape cartridge (the last two amounts normally being equal). The amount of storage space used is then added to any previous tally of the position of the tape from any previous mounts of the tape cartridge. The position of the tape is then stored in the control data set for the particular tape cartridge in common DASD 12. Should the tape cartridge be removed from tape drive 17 and then later reinserted to add more data to the data volume, the tally stored in the controlled data set will enable the subject method to continue where it left off upon the last time the data cartridge was written to. At step 52, the estimated number of uncompressed data bytes in the next user data set to be written to the tape cartridge is added to the calculated actual position of step 51. The sum is an estimate of the position of the tape following the recording of the next user data set to be written.
At step 53, the sum determined in step 52 is reviewed to determine whether or not the next data set will produce a potential spanning problem. Two characteristics of the information received in step 52 are reviewed. First, the size in uncompressed bytes of the estimated next data set to be written is checked to determine whether or not it is smaller or larger than a size set by the user, which in the preferred embodiment is eight megabytes. If the estimated uncompressed size of the next data set to be written is greater than eight megabytes the method returns to point 30 in FIG. 2. This result is due to the fact that a large data set, if used to force the end of a volume and to record on the next tape cartridge, would waste a potentially large amount of space at the end of the current tape cartridge. If the estimated uncompressed size of the next data set is less than or equal to eight megabytes then the logical estimate of the position calculated in step 52 is compared to a predetermined target value. Note that this target may or may not be the same as the target used in step 32. If according to step 53 the size of the next data set to be written would not cause the target to be exceeded, recording is returned to step 30 in FIG. 2. Thus, if the estimate in uncompressed bytes of the data set size is over eight megabytes or if the estimated position of the tape would not exceed the target capacity should this data set be written to the tape, writing of the data set to the current tape cartridge continues at point 30 of FIG. 2. However, if the estimated output size is smaller or equal to eight megabytes and if the current estimated position of the tape cartridge if the data set were written would cause the target capacity to be exceeded, the end of the volume is forced (FEOV) at step 54, and the tape cartridge is demounted in favor of a new tape cartridge which is mounted before recording of the next data set continues. Such continued recording on the new tape cartridge would then return to point 30 in FIG. 2.
Assuming that the end of a data set was not reached at step 33, or that the end of a data set was reached in step 33 but that operations returned to step 31, writing to the current tape cartridge will continue until the target is met at step 32. Once the target is reached in step 32 operations continue to step 35 wherein the actual position of the tape cartridge, or the amount of physical bytes of data thereon, and certain statistics are calculated. The calculation of the actual position of the tape is the same as that already described in step 51. In addition, step 35 includes calculating statistics which will be needed for further monitoring of recording on the current tape cartridge. The statistics include calculation of the compression percentage for the current data set. The compression percentage is the ratio expressed in percentage form of the amount of compressed data bytes recorded for a data set to the amount of uncompressed data bytes for that recorded data. More particularly, the ratio is the number of physical data bytes and interblock gaps divided by the number of logical data bytes and blocks sent to the tape drive control unit, all of which numbers are accessible in main memory or the tape drive control unit. The compression percentage for the data set is used to predict the number of bytes required on the data cartridge to store the remaining unrecorded logical bytes in the current data set.
At step 36 recording continues and is physically monitored. By physical monitoring it is meant that the compression percentage is used by the recording host processor to estimate the number of physical data bytes required to record the uncompressed bytes it is sending to the control unit. At step 37, as recording continues the estimated position of recording in compressed data bytes is compared to a target value. Once again the target value at step 37 may be the same or different from the target used in step 32 or in previous step 53. So long as the target is not met, recording continues at step 38 which like step 33 detects the end of a data set which has been recorded. So long as the end of a data set is not detected recording continues at steps 36 and 37. However, when the end of a data set is detected operations are again transferred to point 50 in FIG. 3. From point 50, the method continues as previously described.
Assuming the end of a data set is not detected at step 38, physical monitoring continues at steps 36 and 37. When the target of step 37 is reached, indicating that further recording would likely cause the tape cartridge to be impossible to copy onto another single tape cartridge, the end of volume is forced at step 40 and operations are returned to step 30 with the insertion of a new tape cartridge. Note that the end of volume forced at step 40 does not necessarily account for tape spanning, the target could be met in the middle of a data set.
In performing the operations of FIGS. 2 and 3, the control unit information that is extracted from the tape drive control unit needs to be accurate. However, it is possible for the operating system to occasionally unload such information therein to error recording and reporting software. After the operating system has caused the information to be unloaded for error recording purposes, the counters are typically reset. Because the storage management program is not able to access the error recording program it is necessary for the method to include detecting when the information in the tape drive control unit is inaccurate. This check is not shown in any of the figures for convenience, but is performed anytime access to such information is required. Detection of the inaccuracy of the information is accomplished by maintaining in main memory the number of uncompressed data bytes written to the tape cartridge during a continuous mount of the tape. This number should equal the count of such maintained in the tape drive control unit. By comparing the two counters, it is possible to determine if a reset has occurred in the tape drive control unit.
Once the loss of the tape drive control unit information has been detected the loss must be accounted for. Two methods of adjustment are possible. In the first such method, when a difference in the counters is detected it is assumed that all data that has been recorded on the recording media is in fact uncompressed data. That is, the number of compressed bytes recorded on the recording media is set to equal the number of uncompressed bytes detected by the host processor as recording was occurring. The other method, which is the preferred embodiment, includes the same kind of assumption, but only for those bytes that are missing from the counter. Thus, if main memory indicates a particular byte count and the tape drive control unit was reset during that count, the tape drive control unit will indicate a smaller number or subset of the main memory count. At such time the physical or compressed data count present in the tape drive control unit will be assumed to be accurate to the extent that it applies only to the count that is shown in uncompressed form in the buffer. The remaining bytes, i.e., the difference in the uncompressed data byte count in main memory and in the tape drive control unit, can be accounted for by assuming that no compression took place. In such way the loss of data in the tape drive control unit can be accounted for.
While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. For example, the subject method can be used with various types of recording media other than magnetic tape, such as magnetic and optical disks. Accordingly, the methods should be limited only as specified in the following claims.
Claims
1. A peripheral data storage subsystem for using recording media efficiently, said peripheral data storage subsystem having a first recording medium mounted in a first peripheral data storage device and a second recording medium mounted in a second peripheral data storage device, the first recording medium having physical data stored therein and having a total physical data amount associated therewith, and wherein said peripheral data storage subsystem ensures the first recording medium's physical data is fully copyable onto the second recording medium, said peripheral data storage subsystem coupled to a host processor for receiving logical data of a current data set therefrom and converting the logical data into compressed data of the current data set, said peripheral data storage subsystem comprising:
a plurality of counters for tallying a logical data amount of the logical data of the current data set, and for tallying a compressed data amount of the compressed data of the current data set; and
a control unit including:
means coupled to said plurality of counters and said host processor for receiving the logical data of the current data set and compressing the logical data into the compressed data;
means for calculating a compression ratio from the compressed data amount and the logical data amount;
means for using the compression ratio for determining an estimated physical data amount, the estimated physical data amount being used to determine whether writing the current data set to the first recording medium would exceed a copy limit of the first recording medium such that the contents of the first recording medium would not be fully copy-able and forcing an end-of-volume if the copy limit is exceed; and
means for writing the compressed data to the first recording medium as current physical data if the copy limit is not exceeded and adding a current physical data amount of the current physical data to the total physical data amount associated with the first recording medium for determining an updated total physical data amount.
2. The peripheral data storage subsystem according to claim 1 wherein said control unit compares the logical data amount of the current data set to a target stored therein and determine the compression ratio only if the target is exceeded.
3. The peripheral data storage subsystem according to claim 2 wherein said control unit writes the compressed data to the second recording medium if an end-of-volume was forced.
4. The peripheral data storage subsystem according to claim 3 wherein said control unit includes means for resetting said plurality of counters upon removal of the first recording medium from said first peripheral data storage device.
5. The peripheral data storage subsystem according to claim 4 wherein said control unit includes means for causing the updated total physical data amount to be stored in a control data set in a storage device coupled to the host processor, and causing the updated total physical data amount to be recalled to said control unit upon re-mounting the first recording medium onto said first peripheral data storage device.
6. A tape drive subsystem for using a plurality of tape cartridges efficiently, said tape drive subsystem having a first tape cartridge mounted in a first tape device and a second tape cartridge mounted in a second tape device, the first tape cartridge having physical data stored therein and having a total physical data amount associated therewith, and wherein said tape drive subsystem ensures the first tape cartridge's physical data is fully copy-able onto any other of the plurality of tape cartridges, said tape drive subsystem coupled to a host processor for receiving logical data of a current data set therefrom and converting the logical data into compressed data of the current data set, said tape drive subsystem comprising:
a buffer;
a plurality of counters coupled to said buffer, including a first counter for tallying a logical data amount of the logical data of the current data set, and including a second counter for tallying a compressed data amount of the compressed data of the current data set; and
a control unit including:
means coupled to said plurality of counters and said host processor for receiving the logical data of the current data set and compressing the logical data into the compressed data;
means for calculating a compression ratio from the compressed data amount and the logical data amount;
means for using the compression ratio for determining an estimated physical data amount, the estimated physical data amount being used to determine whether writing the current data set to the first tape cartridge would exceed a copy limit of the first tape cartridge such that the physical data stored on the first tape cartridge would not be fully copy-able and forcing an end-of-volume if the copy limit is exceeded; and
means for writing the compressed data to the first tape cartridge from said buffer as current physical data if the copy limit is not exceeded and adding a current physical data amount of the current physical data to the total physical data amount of the first tape cartridge for determining an updated total physical data amount.
7. The tape drive subsystem according to claim 6 wherein said control unit compares the logical data amount of the current data set to a target stored therein and determines the compression ratio only if the target is exceeded.
8. The tape drive subsystem according to claim 7 wherein said control unit writes the compressed data from said buffer to the second tape cartridge if an end-of-volume was forced.
9. The tape drive subsystem according to claim 8 wherein said control unit includes means for resetting said plurality of counters upon dis-mounting the first tape cartridge from said first tape device.
10. The tape drive subsystem according to claim 9 wherein said control unit includes means for causing the updated total physical data amount to be stored in a control data set in a storage device coupled to the host processor, and causing the updated total physical data amount to be recalled back to said control unit upon re-mounting the first tape cartridge onto said first tape device.
Referenced Cited
U.S. Patent Documents
4574351 March 4, 1986 Dang et al.
4586027 April 29, 1986 Tsukiyama et al.
4638424 January 20, 1987 Beglin et al.
4771375 September 13, 1988 Beglin et al.
4811132 March 7, 1989 Hunter et al.
4849878 July 18, 1989 Roy
4891784 January 2, 1990 Kato et al.
4974189 November 27, 1990 Russon et al.
5167034 November 24, 1992 MacLean, Jr. et al.
Other references
• IBM Technical Disclosure Bulletin, vol. 24, No. 9, Feb. 1982, p. 4483.
Patent History
Patent number: 5235695
Type: Grant
Filed: May 8, 1992
Date of Patent: Aug 10, 1993
Assignee: International Business Machines Corporation (Armonk, NY)
Inventor: Jerry W. Pence (Tucson, AZ)
Primary Examiner: Michael R. Fleming
Assistant Examiner: Gopal C. Ray
Attorneys: M. W. Schecter, F. E. Anderson
Application Number: 7/880,416
Classifications
Current U.S. Class: 395/425; 360/961; Tape (360/134); 364/2383; 364/2481; 364/2362; 364/2482; 364/2363; 364/2366; 364/2606; 364/DIG1
International Classification: G06F 1574; G06F 500; G06F 1200;
|
__label__pos
| 0.610513 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.