Why Use Vuforia for ARKit Development?
As a developer, you know the importance of choosing the right tools and frameworks to create engaging and interactive experiences for your audience. When it comes to augmented reality (AR) development, one of the most popular platforms is ARKit from Apple. However, there are other AR development tools available that can help you take your ARKit projects to the next level. In this article, we will explore why you should consider using Vuforia for ARKit development.
Vuforia is a leading provider of augmented reality solutions for businesses and developers. With Vuforia, you can create high-quality AR experiences that are both engaging and interactive. Here are some reasons why you should consider using Vuforia for your ARKit projects:
1. Easy to Use
One of the biggest advantages of Vuforia is its ease of use. Its intuitive interface lets you build AR experiences quickly: you don’t need to be an expert in AR development to get started, and even with little or no experience you can still create engaging results.
2. Advanced Features
Vuforia offers a wide range of advanced features that can help you take your ARKit projects to the next level. For example, Vuforia’s image recognition capabilities allow you to track real-world objects and display relevant content on top of them. You can also use Vuforia’s 3D object tracking feature to create interactive experiences that respond to the movement of the device.
3. Cross-Platform Compatibility
Vuforia is compatible with a wide range of devices, including iOS, Android, and Windows. This means you can create AR experiences that work across multiple platforms, giving your users the best possible experience regardless of their device.
4. Scalability
Vuforia offers scalable solutions for businesses of all sizes. Whether you’re just starting out or you’re a large enterprise, Vuforia has the tools and resources to help you create engaging AR experiences that meet your needs.
5. Support and Community
One of the biggest advantages of using Vuforia is the support and community it provides. With Vuforia, you have access to a dedicated team of experts who can help you with any questions or issues you may have. You also have access to a large community of developers who are constantly sharing new ideas and best practices.
Case Study: Coca-Cola’s "Share a Coke" Campaign
One example of a successful ARKit project that used Vuforia is Coca-Cola’s "Share a Coke" campaign. In this campaign, Coca-Cola used Vuforia to create an interactive AR experience that allowed users to share personalized messages with their friends and family. The campaign was a huge success, with over 1 million people sharing messages through the AR app.
Personal Experience: Creating an AR Experience with Vuforia
As an ARKit developer, I have had the opportunity to use Vuforia on several projects. One of my favorite things about Vuforia is its ease of use. With Vuforia’s intuitive interface, I was able to create a high-quality AR experience in no time. I also appreciate the advanced features that Vuforia offers, such as image recognition and 3D object tracking. These features allowed me to create an interactive experience that responded to the movement of the device and displayed relevant content on top of real-world objects.
Conclusion
In conclusion, there are many reasons why you should consider using Vuforia for your ARKit projects. From its ease of use to its advanced features, Vuforia has everything you need to create engaging and interactive AR experiences that meet the needs of your users. Whether you’re just starting out or you’re a large enterprise, Vuforia has the tools and resources to help you take your ARKit projects to the next level. So why wait? Try Vuforia today and see for yourself how it can improve your ARKit projects.
Kristina Lerman
Why your friends have more to be thankful for
Analytics, Social Networks
As Thanksgiving approaches, it may feel like everyone else has so much more to be thankful for. Just check your Facebook, Twitter or Instagram: your friends seem to dine at finer restaurants, take more exotic vacations, and attend more exciting parties. Research suggests this is not simply a matter of perception, but a mathematical fact (for most of us anyway). This unsettling observation is rooted in the friendship paradox, which states that “on average, your friends are more popular than you are”. This means that if you ask a random person who her friends are, the average number of friends her friends have is likely to be larger than the number of friends she has. The friendship paradox holds online too: 98% of Twitter users follow others who have larger followings, on average. Unless you are Lady Gaga, most of your followers are also more popular than you are.
The friendship paradox is not merely a mathematical curiosity, but has useful applications in disease monitoring and trend prediction. Researchers have used it to spot flu outbreaks on a college campus in their early stages and to devise efficient strategies for predicting trending topics on Twitter weeks before they became popular. Similarly, if you arrive in an African village with only five Ebola vaccines, the best strategy is not to vaccinate five random people, but to ask those people who their friends are and vaccinate five of these friends. Due to the friendship paradox, the friends are likely to be more central, both in the Twitterverse and in the village, and thus more likely to get sickened early by the virus or to tweet about topics that later become popular.
Although it sounds strange, the friendship paradox has a simple mathematical explanation. People are diverse: most of us have a few dozen friends, and then there is Lady Gaga. This rare outlier skews the average friend count of many people, putting them in the paradox regime. Mathematicians advise using the median when dealing with distributions that include such extremely large values. The median is the half-way point: half of the numbers in the distribution lie below the median and half above. Unlike the average, it is not easily skewed by a few extremely large numbers. The median is used, for example, to report the income of US households, where extreme fortunes of the top 1% of the population skew the average household income.
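The pull of a single outlier on the mean, and the median’s resistance to it, can be seen in a few lines. The friend counts below are invented purely for illustration:

```python
# Ten ordinary users plus one Lady-Gaga-like outlier (made-up numbers).
friend_counts = [30, 40, 25, 35, 45, 20, 50, 30, 40, 35, 40_000_000]

mean = sum(friend_counts) / len(friend_counts)

sorted_counts = sorted(friend_counts)
median = sorted_counts[len(sorted_counts) // 2]  # middle element of an odd-length list

print(round(mean))  # dragged to roughly 3.6 million by the single outlier
print(median)       # 35, a typical user
```

Dropping the outlier barely moves the median but changes the mean by orders of magnitude, which is exactly why median household income is the statistic that gets reported.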
Remarkably, my group at University of Southern California has shown that friendship paradox still holds for the median. In other words, most of your friends have more friends than you do, not on average, but most! We showed that over 95% of Twitter users have fewer followers than most of the people they follow, or most of the people who follow them. Stranger still, the paradox holds not only for popularity, but for other personal attributes. As an example, consider how frequently a user posts status updates on Twitter. There is a paradox for that: most of the people you follow post more status updates than you do. Similarly, most of the people you follow receive more novel and diverse information than you do. Also, most of the people you follow receive information that ends up spreading much farther than what you see in your stream.
The friendship paradox helps explain why you are not as cool or interesting as your friends. Extraordinary people are likely to be better socially connected and have more friends than more ordinary people like you and me. Extraordinary people are also likely to be more active and to post more frequently about their extraordinary experiences. This is all it takes to skew our perceptions of the quality of our lives relative to those of our friends. So, if you feel that your friends have more to be grateful for, at least in this you are not alone.
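The paradox itself can be verified directly on a toy network. The hub-and-spokes friendship graph below is invented for illustration:

```python
# A toy network: one hub ("gaga") and six ordinary people, some of whom
# also know each other. Names and edges are hypothetical.
edges = [("gaga", "a"), ("gaga", "b"), ("gaga", "c"), ("gaga", "d"),
         ("gaga", "e"), ("gaga", "f"), ("a", "b"), ("c", "d")]

friends = {}
for u, v in edges:
    friends.setdefault(u, set()).add(v)
    friends.setdefault(v, set()).add(u)

in_paradox = 0
for person, fs in friends.items():
    # average friend count among this person's friends
    avg_friends_of_friends = sum(len(friends[f]) for f in fs) / len(fs)
    if avg_friends_of_friends > len(fs):
        in_paradox += 1

print(f"{in_paradox} of {len(friends)} people have fewer friends "
      f"than their friends do on average")
```

Only the hub escapes the paradox; everyone connected to it has fewer friends than their friends do on average, mirroring the Twitter statistics above.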
Blogger Profile:
Kristina Lerman is a Project Leader at the Information Sciences Institute and holds a joint appointment as a Research Associate Professor in the USC Viterbi School of Engineering’s Computer Science Department. Her research focuses on applying network- and machine learning-based methods to problems in social computing.
Copyright © 2014, Kristina Lerman. All rights reserved.
sbipc.dll
Process name: SbIpc
Application using this process: SafeBoot Security System
Recommended: Check your system for invalid registry entries.
What is sbipc.dll doing on my computer?
sbipc.dll is a SbIpc component belonging to SafeBoot Security System from Control Break International. Non-system processes like sbipc.dll originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries, which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues.
sbipc.dll
In order to ensure your files and data are not lost, be sure to back up your files online. Using a cloud backup service will allow you to safely secure all your digital files. This will also enable you to access any of your files, at any time, on any device.
Is sbipc.dll harmful?
sbipc.dll has not been assigned a security rating yet.
sbipc.dll is unrated
Can I stop or remove sbipc.dll?
Most non-system processes that are running can be stopped because they are not involved in running your operating system. Scan your system now to identify unused processes that are using up valuable resources. sbipc.dll is used by 'SafeBoot Security System'. This is an application created by 'Control Break International'. To stop sbipc.dll permanently, uninstall 'SafeBoot Security System' from your system. Uninstalling applications can leave behind invalid registry entries that accumulate over time.
Is sbipc.dll CPU intensive?
This process is not considered CPU intensive. However, running too many processes on your system may affect your PC’s performance. To reduce system overload, you can use the Microsoft System Configuration Utility to manually find and disable processes that launch upon start-up.
Why is sbipc.dll giving me errors?
Process related issues are usually related to problems encountered by the application that runs it. A safe way to stop these errors is to uninstall the application and run a system scan to automatically identify any PC issues.
Process Library is the unique and indispensable process listing database since 2004 Now counting 140,000 processes and 55,000 DLLs. Join and subscribe now!
Axiom:Axiom of Pairing
Axiom
Set Theory
For any two sets, there exists a set to which only those two sets are elements:
$\forall a: \forall b: \exists c: \forall z: \paren {z = a \lor z = b \iff z \in c}$
Class Theory
Let $a$ and $b$ be sets.
Then the class $\set {a, b}$ is likewise a set.
Also known as
The axiom of pairing is also known as the axiom of the unordered pair.
Some sources call it the pairing axiom.
Also see
• Results about the axiom of pairing can be found here.
In 1175 AD, one of the greatest European mathematicians was born. His birth name was Leonardo Pisano. Pisano is Italian for the city of Pisa, which is where Leonardo was born. Leonardo wanted to carry his family name, so he called himself Fibonacci, which is pronounced fib-on-arch-ee. Guglielmo Bonaccio was Leonardo’s father. Fibonacci is a nickname, which comes from filius Bonacci, meaning son of Bonacci. However, occasionally Leonardo would use Bigollo as his last name. Bigollo means traveler. I will call him Leonardo Fibonacci, but anyone who does research work on him may find the other names listed in older books.
Guglielmo Bonaccio, Leonardo’s father, was a customs officer in Bugia, which was a Mediterranean trading port in North Africa. He represented the merchants from Pisa that would trade their products in Bugia. Leonardo grew up in Bugia and was educated by the Moors of North Africa. As Leonardo became older, he traveled quite extensively with his father around the Mediterranean coast. They would meet with many merchants. While doing this, Leonardo learned many different systems of mathematics. Leonardo recognized the advantages of the different mathematical systems of the different countries they visited. But he realized that the “Hindu-Arabic” system of mathematics had many more advantages than all of the other systems combined. Leonardo stopped travelling with his father in the year 1200. He returned to Pisa and began writing.

Books by Fibonacci
Leonardo wrote numerous books regarding mathematics. The books include his own contributions, which have become very significant, along with ancient mathematical skills that needed to be revived. Only four of his books remain today. His books were all handwritten, so the only way for a person to obtain one in the year 1200 was to have another handwritten copy made. The four books that still exist are Liber abbaci, Practica geometriae, Flos, and Liber quadratorum. Leonardo had written several other books, which unfortunately were lost. These books included Di minor guisa and Elements. Di minor guisa contained information on commercial mathematics. His book Elements was a commentary to Euclid’s Book X. In Book X, Euclid had approached irrational numbers from a geometric perspective. In Elements, Leonardo utilized a numerical treatment for the irrational numbers. Practical applications such as this made Leonardo famous among his contemporaries. Leonardo’s book Liber abbaci was published in 1202. He dedicated this book to Michael Scotus. Scotus was the court astrologer to the Holy Roman Emperor Frederick II.
Leonardo based this book on the mathematics and algebra that he had learned through his travels.
The name of the book Liber abbaci means book of the abacus or book of calculating. This was the first book to introduce the Hindu-Arabic place value decimal system and the use of Arabic numerals in Europe. Liber abbaci is predominately about how to use the Arabic numeral system, but Leonardo also covered linear equations in this book. Many of the problems Leonardo used in Liber abbaci were similar to problems that appeared in Arab sources. Liber abbaci was divided into four sections. In the second section of this book, Leonardo focused on problems that were practical for merchants. The problems in this section relate to the price of goods, how to calculate profit on transactions, how to convert between the various currencies in Mediterranean countries, and other problems that had originated in China. In the third section of Liber abbaci, there are problems that involve perfect numbers, the Chinese remainder theorem, geometric series, and summing arithmetic series. But Leonardo is best remembered today for this one problem in the third section: “A certain man put a pair of rabbits in a place surrounded on all sides by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets a new pair which from the second month on becomes productive?” This problem led to the introduction of the Fibonacci numbers and the Fibonacci sequence, which will be discussed in further detail in section II.
Today, almost 800 years later, there is a journal called the “Fibonacci Quarterly” which is devoted to studying mathematics related to the Fibonacci sequence. In the fourth section of Liber abbaci Leonardo discusses square roots. He utilized rational approximations and geometric constructions. Leonardo produced a second edition of Liber abbaci in 1228, in which he added new information and removed unusable information. Leonardo wrote his second book, Practica geometriae, in 1220. He dedicated this book to Dominicus Hispanus, who was among the Holy Roman Emperor Frederick II’s court. Dominicus had suggested that Frederick meet Leonardo and challenge him to solve numerous mathematical problems. Leonardo accepted the challenge and solved the problems. He then listed the problems and solutions to the problems in his third book, Flos. Practica geometriae consists largely of geometry problems and theorems. The theorems in this book were based on the combination of Euclid’s Book X and Leonardo’s commentary, Elements, to Book X. Practica geometriae also included a wealth of information for surveyors, such as how to calculate the height of tall objects using similar triangles.
Leonardo called the last chapter of Practica geometriae geometrical subtleties; he described this chapter as follows: “Among those included is the calculation of the sides of the pentagon and the decagon from the diameter of circumscribed and inscribed circles; the inverse calculation is also given, as well as that of the sides from the surfaces…to complete the section on equilateral triangles, a rectangle and a square are inscribed in such a triangle and their sides are algebraically calculated…” In 1225 Leonardo completed his third book, Flos. In this book Leonardo included the challenge he had accepted from the Holy Roman Emperor Frederick II. He listed the problems involved in the challenge along with the solutions. After completing this book he mailed it to the Emperor. Also in 1225, Leonardo wrote his fourth book, titled Liber quadratorum. Many mathematicians believe that this book is Leonardo’s most impressive piece of work. Liber quadratorum means the book of squares.
In this book he utilizes different methods to find Pythagorean triples. He discovered that square numbers could be constructed as sums of odd numbers. An example of square numbers will be discussed in section II regarding root finding. In this book Leonardo writes: “I thought about the origin of all square numbers and discovered that they arose from the regular ascent of odd numbers. For unity is a square and from it is produced the first square, namely 1; adding 3 to this makes the second square, namely 4, whose root is 2; if to this sum is added a third odd number, namely 5, the third square will be produced, namely 9, whose root is 3; and so the sequence and series of square numbers always rise through the regular addition of odd numbers.” Leonardo died sometime during the 1240s, but his contributions to mathematics are still in use today. Now I would like to take a closer look at some of Leonardo’s contributions along with some examples.

II Fibonacci’s Contributions to Math
Decimal Number System vs. Roman Numeral System
Algorithm
Root Finding
Fibonacci Sequence

Decimal Number System vs. Roman Numeral System
As previously mentioned, Leonardo was the first person to introduce the decimal number system, also known as the Hindu-Arabic number system, into Europe. This is the same system that we use today; we call it the positional system and we use base ten. This simply means we use ten digits, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and a decimal point. In his book, Liber abbaci, Leonardo described and illustrated how to use this system. Following are some examples of the methods Leonardo used to illustrate how to use this new system:

174 ÷ 28 = 6 remainder 6
174 + 28 = 202
174 − 28 = 146
174 × 28 = 3480 + 1392 = 4872

It is important to remember that until Leonardo introduced this system, the Europeans were using the Roman Numeral system for mathematics, which was not easy to use. To understand the difficulty of the Roman Numeral System, I would like to take a closer look at it.
In Roman Numerals the following letters are equivalent to the corresponding numbers:

I = 1, V = 5, X = 10, L = 50, C = 100, D = 500, M = 1000

In using Roman Numerals the order of the letters was important. If a smaller value came before the next larger value it was subtracted; if it came after the larger value it was added. For example: XI = 11 but IX = 9. This system, as you can imagine, was quite cumbersome and could be confusing when attempting to do arithmetic. Here are some examples using Roman numerals in arithmetic:

CLXXIV + XXVIII = CCII (174 + 28 = 202)
CLXXIV – XXVIII = CXLVI (174 – 28 = 146)

The order of the numbers in the decimal system is very important, as in the Roman Numeral System. For example, 23 is very different from 32. One of the most important factors of the decimal system was the introduction of the digit zero. This is crucial to the decimal system because each digit holds a place value. The zero is necessary to get the digits into their correct places in numbers such as 2003, which has no tens and no hundreds. The Roman Numeral System had no need for zero. They would write 2003 as MMIII, omitting the values not used.

Algorithm
Leonardo’s Elements, commentary to Euclid’s Book X, is full of algorithms for geometry. The following information regarding algorithms was obtained from a report by Dr. Ron Knott titled “Fibonacci’s Mathematical Contributions”: An algorithm is defined as any precise set of instructions for performing a computation. An algorithm can be as simple as a cooking recipe, a knitting pattern, or travel instructions; on the other hand, an algorithm can be as complicated as a medical procedure or a calculation by computers. An algorithm can be represented mechanically by machines, such as placing chips and components at correct places on a circuit board. Algorithms can be represented automatically by electronic computers, which store the instructions as well as data to work on.
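A decimal-to-Roman-numeral converter is a small example of an algorithm in exactly this sense, and it also illustrates the subtractive notation described above. A sketch:

```python
# Decimal -> Roman numeral conversion using subtractive notation
# (IV = 4, IX = 9, XL = 40, and so on).
VALUES = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
          (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
          (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    out = []
    for value, symbol in VALUES:
        while n >= value:  # greedily take the largest value that fits
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(174))   # CLXXIV
print(to_roman(2003))  # MMIII
```

Note that the table needs no entry for zero: MMIII simply omits the hundreds and tens positions, which is exactly the gap the Hindu-Arabic digit 0 fills.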
(page 4) An example of utilizing algorithm principles would be to calculate the value of pi to 205 decimal places.
Root Finding
Leonardo amazingly calculated the answer to the following challenge posed by Holy Roman Emperor Frederick II: What causes this to be an amazing accomplishment is that Leonardo calculated the answer to this mathematical problem utilizing the Babylonian system of mathematics, which uses base 60. His answer to the problem above was: 1, 22, 7, 42, 33, 4, 40 is equivalent to: Three hundred years passed before anyone else was able to obtain the same accurate results.

Fibonacci Sequence
As discussed earlier, the Fibonacci sequence is what Leonardo is famous for today. In the Fibonacci sequence each number is equal to the sum of the two previous numbers. For example: (1, 1, 2, 3, 5, 8, 13, …), or 1+1=2, 1+2=3, 2+3=5, 3+5=8, 5+8=13. Leonardo used his sequence method to answer the previously mentioned rabbit problem. I will restate the rabbit problem: “A certain man put a pair of rabbits in a place surrounded on all sides by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets a new pair which from the second month on becomes productive?” I will now give the answer to the problem, which I discovered in the “Mathematics Encyclopedia”: “It is easy to see that 1 pair will be produced the first month, and 1 pair also in the second month (since the new pair produced in the first month is not yet mature), and in the third month 2 pairs will be produced, one by the original pair and one by the pair which was produced in the first month. In the fourth month 3 pairs will be produced, and in the fifth month 5 pairs. After this things expand rapidly, and we get the following sequence of numbers: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, … This is an example of a recursive sequence, obeying the simple rule that to calculate the next term one simply sums the preceding two.
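The recursive rule can be sketched in a few lines (the function name is my own):

```python
def fibonacci(n: int) -> list[int]:
    """First n Fibonacci numbers: each term is the sum of the previous two."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

print(fibonacci(13))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
```

The thirteenth term is 233; whether the twelve-month rabbit answer is quoted as 144 or 233 depends on how the first month is counted.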
Thus 1 and 1 are 2, 1 and 2 are 3, 2 and 3 are 5, and so on.” (page 1)

III Conclusion
Leonardo Fibonacci was a mathematical genius of his time. His findings have contributed to the methods of mathematics that are still in use today.
His mathematical influence continues to be evident through such mediums as the Fibonacci Quarterly and the numerous internet sites discussing his contributions. Many colleges offer classes that are devoted to the Fibonacci methods. Leonardo’s dedication to his love of mathematics rightfully earned him a respectable place in world history. A statue of him stands today in Pisa, Italy, near the famous Leaning Tower. It is a commemorative symbol that signifies the respect and gratitude that Italy holds for him. Many of Leonardo’s methods will continue to be taught for generations to come.
Works Cited
Dr. Ron Knott, “Fibonacci’s Mathematical Contributions,” March 6, 1998. www.ee.surrey.ac.uk/personal/R.Knott/Fibonacci/fibBio.html (accessed Feb. 10, 1999).
“Mathematics Encyclopedia.” www.mathacademy.com/platonic_realms/encyclop/articles/fibonac.html (accessed March 23, 1999).
Converting NFA to DFA: Compiler Design Source Code in C++
Include the necessary headers and declare the variables and arrays. The checke(char a), push(char a), pop(), and pushd(char *a) functions are used to perform the NFA to DFA conversion.
#include<iostream>
#include<cstring>
#include<cstdlib>
using namespace std;

char nfa[50][50],s[20],st[10][20],eclos[20],input[20];
int x,e,top=0,topd=0,ns,nos,in=0;

// Returns the index of state a in the current epsilon-closure, or -1 if absent.
int checke(char a)
{
    int i;
    for(i=0;i<e;i++)
    {
        if(eclos[i]==a)
            return i;
    }
    return -1;
}

// Returns the index of symbol a in the input alphabet, or -1 if absent.
int check(char a)
{
    int i;
    for(i=0;i<in;i++)
    {
        if(input[i]==a)
            return i;
    }
    return -1;
}

// Stack of single NFA states, used while computing epsilon-closures.
void push(char a)
{
    s[top]=a;
    top++;
}

char pop()
{
    top--;
    return s[top];
}

// Work list of DFA states, each stored as a string of NFA state digits.
void pushd(char *a)
{
    strcpy(st[topd],a);
    topd++;
}

char *popd()
{
    topd--;
    return st[topd];
}

int ctoi(char a)
{
    return a-'0';
}

char itoc(int a)
{
    return a+'0';
}

// Epsilon-closure of the set of states listed in string a.
char *eclosure(char *a)
{
    int i,j;
    char c;
    for(i=0;i<(int)strlen(a);i++)
        push(a[i]);
    e=strlen(a);
    strcpy(eclos,a);
    while(top!=0)
    {
        c=pop();
        for(j=0;j<ns;j++)
        {
            if(nfa[ctoi(c)][j]=='e')
            {
                if(checke(itoc(j))==-1)  // add state j only if not already in the closure
                {
                    eclos[e]=itoc(j);
                    push(eclos[e]);
                    e++;
                }
            }
        }
    }
    eclos[e]='\0';
    return eclos;
}

int main()
{
    int i,j,k,count;
    char ec[20],a[20],b[20],c[20],dstates[10][20];
    cout<<"Enter the number of states"<<endl;
    cin>>ns;
    // Read the transition table: '-' means no move, 'e' means an epsilon move.
    for(i=0;i<ns;i++)
    {
        for(j=0;j<ns;j++)
        {
            cout<<"Move["<<i<<"]["<<j<<"]";
            cin>>nfa[i][j];
            if(nfa[i][j]!='-'&&nfa[i][j]!='e')
            {
                if(check(nfa[i][j])==-1)
                    input[in++]=nfa[i][j];
            }
        }
    }
    topd=0;
    nos=0;
    // The start DFA state is the epsilon-closure of NFA state 0.
    c[0]=itoc(0);
    c[1]='\0';
    pushd(eclosure(c));
    strcpy(dstates[nos],eclosure(c));
    for(x=0;x<in;x++)
        cout<<"\t"<<input[x];
    cout<<"\n";
    while(topd>0)
    {
        strcpy(a,popd());
        cout<<a<<"\t";
        for(i=0;i<in;i++)
        {
            // Compute move(a, input[i]), then take its epsilon-closure.
            int len=0;
            for(j=0;j<(int)strlen(a);j++)
            {
                int x=ctoi(a[j]);
                for(k=0;k<ns;k++)
                {
                    if(nfa[x][k]==input[i])
                        ec[len++]=itoc(k);
                }
            }
            ec[len]='\0';
            strcpy(b,eclosure(ec));
            // Record b as a new DFA state if it has not been seen before.
            count=0;
            for(j=0;j<=nos;j++)
            {
                if(strcmp(dstates[j],b)==0)
                    count++;
            }
            if(count==0)
            {
                if(b[0]!='\0')
                {
                    nos++;
                    pushd(b);
                    strcpy(dstates[nos],b);
                }
            }
            cout<<b<<"\t";
        }
        cout<<endl;
    }
    return 0;
}
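For comparison, the same subset construction can be sketched compactly in modern Python. The dictionary below encodes the first sample run's NFA (epsilon from 0 to 1 and 3, a from 1 to 2, epsilon from 2 to 1 and 3, b from 3 to 4):

```python
from collections import deque

nfa = {  # state -> {symbol: set of next states}; "e" marks an epsilon move
    0: {"e": {1, 3}}, 1: {"a": {2}}, 2: {"e": {1, 3}}, 3: {"b": {4}}, 4: {},
}
symbols = {"a", "b"}

def eclosure(states):
    """All states reachable from `states` via epsilon moves alone."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in nfa[s].get("e", set()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

start = eclosure({0})
dfa, queue = {}, deque([start])
while queue:
    state = queue.popleft()
    if state in dfa:
        continue
    dfa[state] = {}
    for sym in sorted(symbols):
        # move on sym, then close under epsilon
        nxt = eclosure({t for s in state for t in nfa[s].get(sym, set())})
        dfa[state][sym] = nxt
        if nxt and nxt not in dfa:
            queue.append(nxt)

for state, moves in dfa.items():
    print(sorted(state), {s: sorted(t) for s, t in moves.items()})
```

Running it reproduces the first table above: {0,1,3} goes to {1,2,3} on a and to {4} on b, and {1,2,3} loops back to itself on a.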
OUTPUT NFA to DFA Example
Enter the number of states
5
Move[0][0]-
Move[0][1]e
Move[0][2]-
Move[0][3]e
Move[0][4]-
Move[1][0]-
Move[1][1]-
Move[1][2]a
Move[1][3]-
Move[1][4]-
Move[2][0]-
Move[2][1]e
Move[2][2]-
Move[2][3]e
Move[2][4]-
Move[3][0]-
Move[3][1]-
Move[3][2]-
Move[3][3]-
Move[3][4]b
Move[4][0]-
Move[4][1]-
Move[4][2]-
Move[4][3]-
Move[4][4]-
a b
013 213 4
4
213 213 4
OUTPUT NFA to DFA
Enter the number of states
6
Move[0][0]-
Move[0][1]a
Move[0][2]-
Move[0][3]-
Move[0][4]-
Move[0][5]-
Move[1][0]-
Move[1][1]-
Move[1][2]b
Move[1][3]-
Move[1][4]-
Move[1][5]-
Move[2][0]-
Move[2][1]-
Move[2][2]-
Move[2][3]a
Move[2][4]e
Move[2][5]-
Move[3][0]-
Move[3][1]-
Move[3][2]c
Move[3][3]-
Move[3][4]e
a b
0 1
1 24
24 3244 5
5
3244 3244 55
55
No comments:
Write comments
|
__label__pos
| 0.999098 |
Randomization
Randomization
1. The selection of a representative, random section of the population. Randomization is important in generating accurate statistics, which is vital in marketing.
2. See: Random Number Generation.
References in periodicals archive:
Data source: A Mendelian randomization study involving 77,679 adults from the Danish general population who were genotyped to identify carriers of three BMI-increasing polymorphisms and who were followed for up to 34 years for the development of symptomatic gallstone disease.
Increased performance when generating randomization schedules, importing strata and populating or importing inventory schedules.
Fourth, I come to the crucial condition of an experiment, the treatment randomization.
Metaphor use and health literacy: A pilot study of strategies to explain randomization in cancer clinical trials.
“We believe that, when people hear randomization described as a flip of a coin, they think of there being a winner and a loser,” Krieger asserts.
The relatively modern meaning of Mendelian randomization is based on Mendel's second law, the law of independent assortment, which assumes that the inheritance of one trait is independent of the inheritance of other traits.
Randomization and decoding can be initiated by simply connecting or disconnecting the USB drive.
The CytelRAND(R) randomization engine is part of Cytel's FlexRandomizer(R), a suite of tools and services for randomization during clinical trial design, simulation, monitoring, and analysis.
Unlike the Cochrane review, studies using block randomization of whole wards were not excluded.
Of the 3,182 who agreed to randomization, 1,613 were in the acupuncture group and 1,569 were in the control group.
But Zenk's randomization failed to produce two similar groups.
IEF was calculated by pooling two weekly patient diaries prior to randomization and three diaries after randomization representing baseline and endpoint, respectively.
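As a concrete illustration of the block randomization mentioned in the excerpts, here is a minimal sketch; the block size, seed, and participant count are all hypothetical:

```python
import random

def block_randomize(n_participants: int, block_size: int = 4, seed: int = 0):
    """Assign participants to two arms, balanced within every block."""
    rng = random.Random(seed)  # fixed seed so the schedule is reproducible
    assignments = []
    while len(assignments) < n_participants:
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)  # random order within the block
        assignments.extend(block)
    return assignments[:n_participants]

schedule = block_randomize(12)
print(schedule.count("treatment"), schedule.count("control"))  # 6 6
```

Balancing within each block keeps the two arms the same size however early enrollment stops, a property that simple coin-flip randomization lacks.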
Richard Bair committed e3a3977
Added implementation of MorphingPath. This is another approach to doing Path-based shape morphing. Not sure which we want to use (MorphingPath or MorphTransition) but I figured we'd add it and see how it goes. Guts of this class originally written by Jim Graham, I'm releasing these as sample code under BSD.
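The approach described in the commit message interpolates matched path coordinates between two geometries. A minimal sketch of that interpolation outside JavaFX, with made-up coordinate lists standing in for the two geometries:

```python
def interp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def morph(coords_from, coords_to, t):
    """Interpolate two equal-length flat coordinate lists pairwise."""
    return [interp(a, b, t) for a, b in zip(coords_from, coords_to)]

# Hypothetical x,y coordinate lists for two four-point outlines.
square_ish = [0.0, 0.0, 100.0, 0.0, 100.0, 100.0, 0.0, 100.0]
diamond_ish = [50.0, 0.0, 100.0, 50.0, 50.0, 100.0, 0.0, 50.0]

print(morph(square_ish, diamond_ish, 0.0))  # the "from" shape
print(morph(square_ish, diamond_ish, 0.5))  # halfway between the two
```

In MorphingPath itself, the same lerp is applied to each MoveTo and CubicCurveTo coordinate as the fraction property animates from 0 to 1.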
• Parent commits e55acb3
• Branches default
Files changed (1)
File defender/src/main/java/com/fxexperience/games/defender/games/simple/animation/MorphingPath.java
+package com.fxexperience.games.defender.games.simple.animation;
+
+import javafx.beans.property.DoubleProperty;
+import javafx.beans.property.SimpleDoubleProperty;
+import javafx.collections.FXCollections;
+import javafx.collections.ObservableList;
+import javafx.geometry.Point2D;
+import javafx.scene.shape.*;
+
+import java.util.LinkedList;
+import java.util.List;
+import java.util.ListIterator;
+import java.util.Vector;
+
+/**
+ * Morphs one path into another.
+ */
+public class MorphingPath extends Path {
+ private final ObservableList<PathElement> fromElements = FXCollections.observableArrayList();
+ public final ObservableList<PathElement> getFromElements() { return fromElements; }
+
+ private final ObservableList<PathElement> toElements = FXCollections.observableArrayList();
+ public final ObservableList<PathElement> getToElements() { return toElements; }
+
+ private final DoubleProperty fraction = new SimpleDoubleProperty(this, "fraction", 0) {
+ @Override
+ protected void invalidated() {
+ morph();
+ }
+ };
+ public final double getFraction() { return fraction.get(); }
+ public final void setFraction(double value) { fraction.set(value); }
+ public final DoubleProperty fractionProperty() { return fraction; }
+
+ private Geometry geom0;
+ private Geometry geom1;
+
+ private void morph() {
+ // TODO logic for taking short cuts in the case that path elements / paths
+ // fromElements and toElements have not changed
+// if (savedv0 != v0 || savedv1 != v1) {
+// if (savedv0 == v1 && savedv1 == v0) {
+// // Just swap the geometries
+// Geometry gtmp = geom0;
+// geom0 = geom1;
+// geom1 = gtmp;
+// } else {
+ geom0 = new Geometry(getFromElements());
+ geom1 = new Geometry(getToElements());
+ double tvals0[] = geom0.getTvals();
+ double tvals1[] = geom1.getTvals();
+ double masterTvals[] = mergeTvals(tvals0, tvals1);
+ geom0.setTvals(masterTvals);
+ geom1.setTvals(masterTvals);
+
+ // Now set up my path elements. Note that all the path elements
+ // are of type CubicCurveTo, except for the first MoveTo and last ClosePath.
+ List<PathElement> elements = new LinkedList<>();
+ elements.add(new MoveTo(geom0.getCoord(0), geom0.getCoord(1)));
+ int index = 2;
+ while (index < geom0.getNumCoords()) {
+ elements.add(new CubicCurveTo(
+ geom0.getCoord(index),
+ geom0.getCoord(index+1),
+ geom0.getCoord(index+2),
+ geom0.getCoord(index+3),
+ geom0.getCoord(index+4),
+ geom0.getCoord(index+5)
+ ));
+ index += 6;
+ }
+ elements.add(new ClosePath());
+ getElements().setAll(elements);
+
+// }
+
+ // Now morph! All of my PathElements now are setup, and geom1 number of
+ // elements is a perfect match, so I just have to interpolate.
+ elements = getElements();
+ final double t = getFraction();
+ MoveTo moveTo = (MoveTo) elements.get(0);
+ moveTo.setX(interp(moveTo.getX(), geom1.getCoord(0), t));
+ moveTo.setY(interp(moveTo.getY(), geom1.getCoord(1), t));
+ index = 2;
+ for (int i=1; i<elements.size()-1; i++) {
+ CubicCurveTo to = (CubicCurveTo) elements.get(i);
+ to.setControlX1(interp(to.getControlX1(), geom1.getCoord(index), t));
+ to.setControlY1(interp(to.getControlY1(), geom1.getCoord(index + 1), t));
+ to.setControlX2(interp(to.getControlX2(), geom1.getCoord(index + 2), t));
+ to.setControlY2(interp(to.getControlY2(), geom1.getCoord(index + 3), t));
+ to.setX(interp(to.getX(), geom1.getCoord(index + 4), t));
+ to.setY(interp(to.getY(), geom1.getCoord(index + 5), t));
+ index += 6;
+ }
+ }
+
+ private static class Geometry {
+ static final double THIRD = (1.0 / 3.0);
+ static final double MIN_LEN = 0.001;
+ double bezierCoords[];
+ int numCoords;
+ double myTvals[];
+
+ public Geometry(ObservableList<PathElement> elements) {
+ // Multiple of 6 plus 2 more for initial move to
+ bezierCoords = new double[20];
+            if (elements.isEmpty()) {
+                // We will have 1 segment and it will be all zeros
+                // It will have 8 coordinates (2 for move to, 6 for cubic)
+                numCoords = 8;
+                // bail out early; the iterator below has nothing to consume
+                return;
+            }
+
+ final ListIterator<PathElement> pi = elements.listIterator();
+ PathElement e = pi.next();
+ if (!(e instanceof MoveTo)) {
+ // TODO or assume an implicit MoveTo of some kind? What does Path do?
+ throw new IllegalStateException("missing initial MoveTo");
+ }
+ double currentX, currentY, moveX, moveY;
+ MoveTo m = (MoveTo) e;
+ bezierCoords[0] = currentX = moveX = m.getX();
+ bezierCoords[1] = currentY = moveY = m.getY();
+ double newX, newY;
+ Vector<Point2D> savedPathEndPoints = new Vector<>();
+ numCoords = 2;
+ while (pi.hasNext()) {
+ e = pi.next();
+ if (e instanceof MoveTo) {
+ if (currentX != moveX || currentY != moveY) {
+ appendLineTo(currentX, currentY, moveX, moveY);
+ currentX = moveX;
+ currentY = moveY;
+ }
+ m = (MoveTo) e;
+ newX = m.getX();
+ newY = m.getY();
+ if (currentX != newX || currentY != newY) {
+ savedPathEndPoints.add(new Point2D(moveX, moveY));
+ appendLineTo(currentX, currentY, newX, newY);
+ currentX = moveX = newX;
+ currentY = moveY = newY;
+ }
+ } else if (e instanceof ClosePath) {
+ if (currentX != moveX || currentY != moveY) {
+ appendLineTo(currentX, currentY, moveX, moveY);
+ currentX = moveX;
+ currentY = moveY;
+ }
+ } else if (e instanceof LineTo) {
+ LineTo to = (LineTo) e;
+ newX = to.getX();
+ newY = to.getY();
+ appendLineTo(currentX, currentY, newX, newY);
+ currentX = newX;
+ currentY = newY;
+ } else if (e instanceof QuadCurveTo) {
+ QuadCurveTo to = (QuadCurveTo) e;
+ double ctrlX = to.getControlX();
+ double ctrlY = to.getControlY();
+ newX = to.getX();
+ newY = to.getY();
+ appendQuadTo(currentX, currentY, ctrlX, ctrlY, newX, newY);
+ currentX = newX;
+ currentY = newY;
+ } else if (e instanceof CubicCurveTo) {
+ CubicCurveTo to = (CubicCurveTo) e;
+ appendCubicTo(to.getControlX1(), to.getControlY1(),
+ to.getControlX2(), to.getControlY2(),
+ to.getX(), to.getY());
+ }
+ }
+ // Add closing segment if either:
+ // - we only have initial moveto - expand it to an empty cubic
+ // - or we are not back to the starting point
+ if ((numCoords < 8) || currentX != moveX || currentY != moveY) {
+ appendLineTo(currentX, currentY, moveX, moveY);
+ currentX = moveX;
+ currentY = moveY;
+ }
+ // Now retrace our way back through all of the connecting
+ // inter-sub path segments
+ for (int i = savedPathEndPoints.size()-1; i >= 0; i--) {
+ Point2D p = savedPathEndPoints.get(i);
+ newX = p.getX();
+ newY = p.getY();
+ if (currentX != newX || currentY != newY) {
+ appendLineTo(currentX, currentY, newX, newY);
+ currentX = newX;
+ currentY = newY;
+ }
+ }
+ // Now find the segment endpoint with the smallest Y coordinate
+ int minPt = 0;
+ double minX = bezierCoords[0];
+ double minY = bezierCoords[1];
+ for (int ci = 6; ci < numCoords; ci += 6) {
+ double x = bezierCoords[ci];
+ double y = bezierCoords[ci + 1];
+ if (y < minY || (y == minY && x < minX)) {
+ minPt = ci;
+ minX = x;
+ minY = y;
+ }
+ }
+ // If the smallest Y coordinate is not the first coordinate,
+ // rotate the points so that it is...
+ if (minPt > 0) {
+ // Keep in mind that first 2 coords == last 2 coords
+ double newCoords[] = new double[numCoords];
+ // Copy all coordinates from minPt to the end of the
+ // array to the beginning of the new array
+ System.arraycopy(bezierCoords, minPt,
+ newCoords, 0,
+ numCoords - minPt);
+ // Now we do not want to copy 0,1 as they are duplicates
+ // of the last 2 coordinates which we just copied. So
+ // we start the fromElements copy at index 2, but we still
+ // copy a full minPt coordinates which copies the two
+ // coordinates that were at minPt to the last two elements
+ // of the array, thus ensuring that thew new array starts
+ // and ends with the same pair of coordinates...
+ System.arraycopy(bezierCoords, 2,
+ newCoords, numCoords - minPt,
+ minPt);
+ bezierCoords = newCoords;
+ }
+ /* Clockwise enforcement:
+ * - This technique is based on the formula for calculating
+ * the area of a Polygon. The standard formula is:
+ * Area(Poly) = 1/2 * sum(x[i]*y[i+1] - x[i+1]y[i])
+ * - The returned area is negative if the polygon is
+ * "mostly clockwise" and positive if the polygon is
+ * "mostly counter-clockwise".
+ * - One failure mode of the Area calculation is if the
+ * Polygon is self-intersecting. This is due to the
+ * fact that the areas on each side of the self-intersection
+ * are bounded by segments which have opposite winding
+ * direction. Thus, those areas will have opposite signs
+ * on the acccumulation of their area summations and end
+ * up canceling each other out partially.
+ * - This failure mode of the algorithm in determining the
+ * exact magnitude of the area is not actually a big problem
+ * for our needs here since we are only using the sign of
+ * the resulting area to figure out the overall winding
+ * direction of the path. If self-intersections cause
+ * different parts of the path to disagree as to the
+ * local winding direction, that is no matter as we just
+ * wait for the final answer to tell us which winding
+ * direction had greater representation. If the final
+ * result is zero then the path was equal parts clockwise
+ * and counter-clockwise and we do not care about which
+ * way we order it as either way will require half of the
+ * path to unwind and re-wind itself.
+ */
+ double area = 0;
+ // Note that first and last points are the same so we
+ // do not need to process coords[0,1] against coords[n-2,n-1]
+ currentX = bezierCoords[0];
+ currentY = bezierCoords[1];
+ for (int i = 2; i < numCoords; i += 2) {
+ newX = bezierCoords[i];
+ newY = bezierCoords[i + 1];
+ area += currentX * newY - newX * currentY;
+ currentX = newX;
+ currentY = newY;
+ }
+ if (area < 0) {
+ /* The area is negative so the shape was clockwise
+ * in a Euclidean sense. But, our screen coordinate
+ * systems have the origin in the upper left so they
+ * are flipped. Thus, this path "looks" ccw on the
+ * screen so we are flipping it to "look" clockwise.
+ * Note that the first and last points are the same
+ * so we do not need to swap them.
+ * (Not that it matters whether the paths end up cw
+ * or ccw in the end as long as all of them are the
+ * same, but above we called this section "Clockwise
+ * Enforcement", so we do not want to be liars. ;-)
+ */
+ // Note that [0,1] do not need to be swapped with [n-2,n-1]
+ // So first pair to swap is [2,3] and [n-4,n-3]
+ int i = 2;
+ int j = numCoords - 4;
+ while (i < j) {
+ currentX = bezierCoords[i];
+ currentY = bezierCoords[i + 1];
+ bezierCoords[i] = bezierCoords[j];
+ bezierCoords[i + 1] = bezierCoords[j + 1];
+ bezierCoords[j] = currentX;
+ bezierCoords[j + 1] = currentY;
+ i += 2;
+ j -= 2;
+ }
+ }
+ }
+
+ private void appendLineTo(double x0, double y0,
+ double x1, double y1)
+ {
+ appendCubicTo(// A third of the way from xy0 to xy1:
+ interp(x0, x1, THIRD),
+ interp(y0, y1, THIRD),
+ // A third of the way from xy1 back to xy0:
+ interp(x1, x0, THIRD),
+ interp(y1, y0, THIRD),
+ x1, y1);
+ }
+
+ private void appendQuadTo(double x0, double y0,
+ double ctrlx, double ctrly,
+ double x1, double y1)
+ {
+ appendCubicTo(// A third of the way from ctrlxy back to xy0:
+ interp(ctrlx, x0, THIRD),
+ interp(ctrly, y0, THIRD),
+ // A third of the way from ctrlxy to xy1:
+ interp(ctrlx, x1, THIRD),
+ interp(ctrly, y1, THIRD),
+ x1, y1);
+ }
+
+ private void appendCubicTo(double ctrlx1, double ctrly1,
+ double ctrlx2, double ctrly2,
+ double x1, double y1)
+ {
+ if (numCoords + 6 > bezierCoords.length) {
+ // Keep array size to a multiple of 6 plus 2
+ int newsize = (numCoords - 2) * 2 + 2;
+ double newCoords[] = new double[newsize];
+ System.arraycopy(bezierCoords, 0, newCoords, 0, numCoords);
+ bezierCoords = newCoords;
+ }
+ bezierCoords[numCoords++] = ctrlx1;
+ bezierCoords[numCoords++] = ctrly1;
+ bezierCoords[numCoords++] = ctrlx2;
+ bezierCoords[numCoords++] = ctrly2;
+ bezierCoords[numCoords++] = x1;
+ bezierCoords[numCoords++] = y1;
+ }
+
+ public int getNumCoords() {
+ return numCoords;
+ }
+
+ public double getCoord(int i) {
+ return bezierCoords[i];
+ }
+
+ public double[] getTvals() {
+ if (myTvals != null) {
+ return myTvals;
+ }
+
+ // assert(numCoords >= 8);
+ // assert(((numCoords - 2) % 6) == 0);
+ double tvals[] = new double[(numCoords - 2) / 6 + 1];
+
+ // First calculate total "length" of path
+ // Length of each segment is averaged between
+ // the length between the endpoints (a lower bound for a cubic)
+ // and the length of the control polygon (an upper bound)
+ double segx = bezierCoords[0];
+ double segy = bezierCoords[1];
+ double tlen = 0;
+ int ci = 2;
+ int ti = 0;
+ while (ci < numCoords) {
+ double prevx, prevy, newx, newy;
+ prevx = segx;
+ prevy = segy;
+ newx = bezierCoords[ci++];
+ newy = bezierCoords[ci++];
+ prevx -= newx;
+ prevy -= newy;
+ double len = Math.sqrt(prevx * prevx + prevy * prevy);
+ prevx = newx;
+ prevy = newy;
+ newx = bezierCoords[ci++];
+ newy = bezierCoords[ci++];
+ prevx -= newx;
+ prevy -= newy;
+ len += Math.sqrt(prevx * prevx + prevy * prevy);
+ prevx = newx;
+ prevy = newy;
+ newx = bezierCoords[ci++];
+ newy = bezierCoords[ci++];
+ prevx -= newx;
+ prevy -= newy;
+ len += Math.sqrt(prevx * prevx + prevy * prevy);
+ // len is now the total length of the control polygon
+ segx -= newx;
+ segy -= newy;
+ len += Math.sqrt(segx * segx + segy * segy);
+ // len is now sum of linear length and control polygon length
+ len /= 2;
+ // len is now average of the two lengths
+
+ /* If the result is zero length then we will have problems
+ * below trying to do the math and bookkeeping to split
+ * the segment or pair it against the segments in the
+ * other shape. Since these lengths are just estimates
+ * to map the segments of the two shapes onto corresponding
+ * segments of "approximately the same length", we will
+ * simply modify the length of this segment to be at least
+ * a minimum value and it will simply grow from zero or
+ * near zero length to a non-trivial size as it morphs.
+ */
+ if (len < MIN_LEN) {
+ len = MIN_LEN;
+ }
+ tlen += len;
+ tvals[ti++] = tlen;
+ segx = newx;
+ segy = newy;
+ }
+
+ // Now set tvals for each segment to its proportional
+ // part of the length
+ double prevt = tvals[0];
+ tvals[0] = 0;
+ for (ti = 1; ti < tvals.length - 1; ti++) {
+ double nextt = tvals[ti];
+ tvals[ti] = prevt / tlen;
+ prevt = nextt;
+ }
+ tvals[ti] = 1;
+ return (myTvals = tvals);
+ }
+
+ public void setTvals(double newTvals[]) {
+ double oldCoords[] = bezierCoords;
+ double newCoords[] = new double[2 + (newTvals.length - 1) * 6];
+ double oldTvals[] = getTvals();
+ int oldci = 0;
+ double x0, xc0, xc1, x1;
+ double y0, yc0, yc1, y1;
+ x0 = xc0 = xc1 = x1 = oldCoords[oldci++];
+ y0 = yc0 = yc1 = y1 = oldCoords[oldci++];
+ int newci = 0;
+ newCoords[newci++] = x0;
+ newCoords[newci++] = y0;
+ double t0 = 0;
+ double t1 = 0;
+ int oldti = 1;
+ int newti = 1;
+ while (newti < newTvals.length) {
+ if (t0 >= t1) {
+ x0 = x1;
+ y0 = y1;
+ xc0 = oldCoords[oldci++];
+ yc0 = oldCoords[oldci++];
+ xc1 = oldCoords[oldci++];
+ yc1 = oldCoords[oldci++];
+ x1 = oldCoords[oldci++];
+ y1 = oldCoords[oldci++];
+ t1 = oldTvals[oldti++];
+ }
+ double nt = newTvals[newti++];
+ // assert(nt > t0);
+ if (nt < t1) {
+ // Make nt proportional to [t0 => t1] range
+ double relt = (nt - t0) / (t1 - t0);
+ newCoords[newci++] = x0 = interp(x0, xc0, relt);
+ newCoords[newci++] = y0 = interp(y0, yc0, relt);
+ xc0 = interp(xc0, xc1, relt);
+ yc0 = interp(yc0, yc1, relt);
+ xc1 = interp(xc1, x1, relt);
+ yc1 = interp(yc1, y1, relt);
+ newCoords[newci++] = x0 = interp(x0, xc0, relt);
+ newCoords[newci++] = y0 = interp(y0, yc0, relt);
+ xc0 = interp(xc0, xc1, relt);
+ yc0 = interp(yc0, yc1, relt);
+ newCoords[newci++] = x0 = interp(x0, xc0, relt);
+ newCoords[newci++] = y0 = interp(y0, yc0, relt);
+ } else {
+ newCoords[newci++] = xc0;
+ newCoords[newci++] = yc0;
+ newCoords[newci++] = xc1;
+ newCoords[newci++] = yc1;
+ newCoords[newci++] = x1;
+ newCoords[newci++] = y1;
+ }
+ t0 = nt;
+ }
+ bezierCoords = newCoords;
+ numCoords = newCoords.length;
+ myTvals = newTvals;
+ }
+ }
+
+ private static double interp(double v0, double v1, double t) {
+ return (v0 + ((v1 - v0) * t));
+ }
+
+ private static double[] mergeTvals(double tvals0[], double tvals1[]) {
+ int count = sortTvals(tvals0, tvals1, null);
+ double newtvals[] = new double[count];
+ sortTvals(tvals0, tvals1, newtvals);
+ return newtvals;
+ }
+
+ private static int sortTvals(double tvals0[],
+ double tvals1[],
+ double newtvals[])
+ {
+ int i0 = 0;
+ int i1 = 0;
+ int numtvals = 0;
+ while (i0 < tvals0.length && i1 < tvals1.length) {
+ double t0 = tvals0[i0];
+ double t1 = tvals1[i1];
+ if (t0 <= t1) {
+ if (newtvals != null) newtvals[numtvals] = t0;
+ i0++;
+ }
+ if (t1 <= t0) {
+ if (newtvals != null) newtvals[numtvals] = t1;
+ i1++;
+ }
+ numtvals++;
+ }
+ return numtvals;
+ }
+}
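As a quick sanity check outside the patch itself, the tval-merging step can be reproduced standalone. The small class below is a re-implementation assumed to behave like `mergeTvals`/`sortTvals` above (equal values are consumed from both arrays but emitted only once), collapsed into a single pass:

```java
import java.util.Arrays;

public class TvalMergeDemo {
    // Merge two sorted tval arrays, keeping duplicates only once,
    // mirroring the two-pass mergeTvals/sortTvals logic above.
    static double[] merge(double[] a, double[] b) {
        double[] out = new double[a.length + b.length];
        int i = 0, j = 0, n = 0;
        while (i < a.length && j < b.length) {
            double t0 = a[i];
            double t1 = b[j];
            if (t0 <= t1) i++;
            if (t1 <= t0) j++;
            out[n++] = Math.min(t0, t1);
        }
        return Arrays.copyOf(out, n);
    }

    public static void main(String[] args) {
        // Both geometries share the endpoints 0 and 1, so the merged list does too.
        double[] merged = merge(new double[]{0, 0.5, 1},
                                new double[]{0, 0.25, 0.75, 1});
        System.out.println(Arrays.toString(merged)); // [0.0, 0.25, 0.5, 0.75, 1.0]
    }
}
```

Because every geometry's tvals start at 0 and end at 1, the two input arrays are always exhausted together, which is why the loop needs no tail handling.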
How to Connect Anker Keyboard to Laptop: Ultimate Guide
A solid connection between an Anker keyboard and a laptop is essential for efficient typing and computing. Making sure the two devices are properly connected is the key to avoiding frustration and wasted time. In this article, we will provide the steps needed to connect an Anker keyboard to a laptop successfully, along with troubleshooting tips in case of any difficulties during the connection process.
Benefits of Anker Keyboards
Anker keyboards are a great investment for those who want to upgrade their laptop experience. Anker Keyboards are designed with the user in mind, offering features such as an ergonomic design, adjustable backlighting and multimedia keys that make typing and gaming comfortable and enjoyable. These keyboards also come with a range of other benefits that make them an ideal choice for laptop users.
One of the biggest advantages of Anker Keyboards is their durability. Because they are made from high-quality materials, these keyboards can withstand day-to-day wear and tear far better than most other brands.
Anker’s keyboards also come equipped with anti-ghosting technology which helps to prevent key presses from being lost when typing quickly or using multiple keys simultaneously. This feature can be especially useful when playing games or working on high-speed projects where accuracy is essential.
Connecting Anker Keyboard to Laptop
Connecting this type of keyboard to your laptop is simple and takes no time at all.
1. First, check the compatibility of your Anker keyboard with your laptop’s operating system by consulting the manufacturer’s website or manual.
2. Once you have confirmed that they are compatible, plug the USB cable into one of your laptop’s USB ports and connect it with the Anker keyboard.
3. Your computer should automatically recognize it and configure it as a device; if not, visit the device manager in Windows and select ‘update driver’ from there.
Connecting Anker Wireless Keyboard to Laptop
A wireless Anker keyboard can be a great way to help enhance your laptop experience. It can provide convenience, comfort and increased productivity for laptop users. Connecting an Anker wireless keyboard to your laptop is easy and straightforward. All you need are the right hardware and software pieces in place to get it done quickly and efficiently.
1. To begin, make sure that the batteries in the Anker keyboard are installed correctly and that it’s powered on.
2. Next, connect the included USB adapter into one of your laptop’s USB ports.
3. Once connected, open up the settings menu on your laptop and proceed to select “Bluetooth & other devices.”
4. After selecting this option, click “Add Bluetooth or other device” at the top of this menu and then select Anker keyboard from that list.
The Anker wireless keyboard offers a comfortable typing experience thanks to its large keys and low profile design that helps reduce fatigue over long periods of use.
Troubleshooting Anker Keyboard Connection to a Laptop
Troubleshooting an Anker keyboard connection to a laptop can be a hassle. But with these easy steps, you’ll be able to get your device up and running in no time.
The first step is to check the USB port on your laptop. Make sure that it is firmly plugged into the correct port and the cable is securely connected. It could also help to try another cable if you have one available. If this doesn’t work, then move on to the next step.
Next, make sure that your Anker keyboard driver is installed correctly on your computer. You can check Windows Device Manager for any errors or conflicts related to the driver installation and update it if necessary. After updating, restart your laptop and see if this resolves the issue of connecting an Anker keyboard.
Troubleshooting Anker Wireless Keyboard Connection to a Laptop
Troubleshooting connection issues with an Anker Wireless keyboard to a laptop can be frustrating. But don’t despair, there are steps you can take to get your device up and running again in no time.
Start by powering off the wireless keyboard and then reboot your laptop. This may resolve any minor software glitches that could be causing the issue. If not, look for a reset button on the back of the keyboard, which should restore its factory settings and help establish a connection with your laptop.
If neither of these work, try checking if new software updates are available for either your laptop or wireless keyboard. Updating them can often solve connectivity problems due to outdated drivers or other technical issues.
You can also refer to Anker’s website or contact customer service if you need further assistance troubleshooting the problem.
FAQ
How do I connect my Anker A7726 keyboard to my laptop?
To connect your Anker A7726 keyboard to your laptop, first plug the USB receiver into an available USB port on your laptop. Next, turn on the keyboard by pressing the power button. Finally, wait for a few seconds until the connection is established and you should be ready to type.
How do you reset the Anker A7721 keyboard?
To reset your Anker A7721 keyboard, press and hold the Fn key and then press Z. This will enter pairing mode, allowing you to reconnect your keyboard to a device. Once connected, you should be all set!
Does the Anker keyboard require any special drivers or software to be installed?
Yes, the Anker keyboard does require special drivers or software to be installed. This can usually be done by downloading the appropriate software from the Anker website. Once installed, the keyboard should be ready to use and will work with most operating systems.
What types of Anker keyboards are compatible with my laptop?
It depends on your laptop’s make and model. Generally, Anker’s keyboards are compatible with Windows, Mac, Chrome OS, and Android devices. Check the product description of the Anker keyboard you’re interested in to see if it is compatible with your laptop.
How do I ensure that the Anker keyboard is correctly connected to my laptop?
To ensure that the Anker keyboard is connected correctly to your laptop, first make sure the USB cable is securely plugged into both the keyboard and laptop. Then, turn on the keyboard by pressing the power button. Finally, check that your laptop recognizes the keyboard by going to your computer’s settings and selecting “Devices”.
What are the best practices for maintaining the Anker keyboard once it is connected to my laptop?
To maintain your Anker keyboard, make sure to keep it clean and dust-free. Clean the keyboard with a soft, damp cloth every few weeks. Also, avoid spilling liquids on the keyboard and try not to use it in dusty or humid environments. Finally, regularly check for any loose connections between the keyboard and laptop to ensure everything is functioning properly.
Related Video: How to Connect Bluetooth Keyboard to Laptop
Final Thoughts
Connecting an Anker keyboard to a laptop is not as difficult as it may seem. With the right tools and following the correct steps, anyone can complete this task with ease. It is important to remember that some additional drivers or software might be required in order to ensure that the connection is successful. Furthermore, it can be beneficial to look up helpful tutorials and guides if further assistance is needed during setup.
blob: b88f04d40aad024ab9d8481785ecaf393e59e37e
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#
# Common build rules for efcode shared objects.
#
# For historical reasons, these shared objects aren't explicitly versioned, so
# turn off VERS and LIBLINKS (VERS must be cleared before the inclusion of
# Makefile.lib, and LIBLINKS must be cleared afterwards). Also, because of
# the weird alternate naming scheme, we must define our own symlink macros.
#
VERS =
include $(SRC)/lib/Makefile.lib
LIBS = $(DYNLIB)
LDLIBS += -lc
LIBLINKS =
MAPFILES =
CPPFLAGS += -DDEBUG -DFCODE_INTERNAL
CPPFLAGS += -I $(SRC)/lib/efcode/include -I $(ROOT)/usr/platform/sun4u/include
DYNFLAGS += -R\$$ORIGIN
CERRWARN += -_gcc=-Wno-unused-variable
CERRWARN += -_gcc=-Wno-unused-function
CERRWARN += -_gcc=-Wno-unused-value
CERRWARN += -_gcc=-Wno-parentheses
CERRWARN += -_gcc=-Wno-uninitialized
CERRWARN += -_gcc=-Wno-type-limits
EFCODE64DIR = /usr/lib/efcode/$(MACH64)
ROOTLIBDIR = $(ROOT)/usr/lib/efcode
ROOTLIBDIR64 = $(ROOT)/usr/lib/efcode/$(MACH64)
ROOTSYMLINKS64 = $(SYMLINKS:%=$(ROOTLIBDIR64)/%)
#
# Since a number of efcode shared objects depend on fcode.so, fcdriver.so, and
# fcpackage.so, provide macros that expand to their full paths.
#
FCODE64 = $(ROOTLIBDIR64)/fcode.so
FCDRIVER64 = $(ROOTLIBDIR64)/fcdriver.so
FCPACKAGE64 = $(ROOTLIBDIR64)/fcpackage.so
.KEEP_STATE:
all: $(LIBS)
lint: lintcheck
$(ROOTSYMLINKS64): $(ROOTLIBDIR64)/$(DYNLIB)
-$(RM) $@; $(SYMLINK) $(DYNLIB) $@
include $(SRC)/lib/Makefile.targ
Builder in practice
In this blog post I'm going to explain how to construct a builder for creating objects. To be clear from the beginning: what I'm going to describe is not the official builder pattern, but in practice it is often referred to as the builder pattern nevertheless.
A builder is used to construct complex objects with many constructor parameters, some of which are optional. One alternative for handling optional parameters is the telescoping constructor pattern: for each way an object can be constructed, a separate constructor is provided. This has the drawback that it blows up the code and is quite hard to read. It is still seen in practice from time to time and can be eliminated by bringing a builder into action. A second alternative is the JavaBeans pattern, in which a parameterless constructor is called, after which the properties are set via setter methods. However, this pattern has the disadvantages that the object may be in an inconsistent state during its construction and that it precludes making the class immutable. Builders are by far the best approach for dealing with optional parameters. Furthermore, I have also often used builders in tests to construct test objects. Builders are quite popular as they allow one to construct a fluent API.
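For illustration, here is what the telescoping constructor pattern looks like for a hypothetical class with one mandatory and two optional parameters; every combination of optionals needs its own overload (the class name and getters here are made up for this sketch):

```java
// Hypothetical example of the telescoping constructor pattern:
// one overload per combination of optional parameters.
public class TelescopingAirplane {
    private final int seats;   // mandatory
    private final int engine;  // optional
    private final int rescue;  // optional

    public TelescopingAirplane(int seats) {
        this(seats, 0);
    }

    public TelescopingAirplane(int seats, int engine) {
        this(seats, engine, 0);
    }

    public TelescopingAirplane(int seats, int engine, int rescue) {
        this.seats = seats;
        this.engine = engine;
        this.rescue = rescue;
    }

    public int getSeats()  { return seats; }
    public int getEngine() { return engine; }
    public int getRescue() { return rescue; }
}
```

Note that a caller who wants only seats and rescue is out of luck: the two-argument overload is already taken by engine. That is exactly why this approach does not scale beyond a handful of parameters.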
So let's create a builder for an Airplane object. The object has one mandatory parameter, seats. Mandatory parameters are usually set directly in the constructor. Furthermore, there are two optional parameters named engine and rescue, and an optional List of instruments. The builder is implemented as a static nested inner class, and the constructor of Airplane is private. Note that most IDEs can generate exactly this code for you!
import java.util.ArrayList;
import java.util.List;

public final class Airplane {
private final int seats;
private final int engine;
private final int rescue;
private final List<Instrument> instruments;
private Airplane(Builder builder){
this.seats = builder.seats;
this.engine = builder.engine;
this.rescue = builder.rescue;
this.instruments = builder.instruments;
}
public static class Builder {
private final int seats;
private int engine;
private int rescue;
private List<Instrument> instruments = new ArrayList<>();
public Builder(int seats){
this.seats = seats;
}
public Builder withEngine(int engine) {
this.engine = engine;
return this;
}
public Builder withRescue(int rescue) {
this.rescue = rescue;
return this;
}
public Builder withInstrumentList(List<Instrument> instruments) {
this.instruments = instruments;
return this;
}
public Airplane build(){
return new Airplane(this);
}
}
}
An airplane object is then constructed as follows:
List<Instrument> instruments = new ArrayList<>();
instruments.add(new Instrument("Altimeter"));
instruments.add(new Instrument("Velocity"));
Airplane airplane = new Airplane.Builder(4).withEngine(375).withRescue(2).withInstrumentList(instruments).build();
So far so good. But it would be nicer if we did not have to construct a list before the builder is called. It would be much more elegant if we could create the instruments list by means of the builder itself. In the following section we are going to dive into this problem. The goal is to construct an airplane object that encompasses a list of instruments, as follows:
Airplane airplane = new Airplane.Builder(4).withEngine(375).withRescue(2).addList().add().withName("Altimeter").toList().add().withName("Velocity").toList().done().build();
The concept behind this construct is to have parent and child builders: there is an Instrument builder that holds a ListBuilder, and the ListBuilder in turn holds an Airplane builder.
The Airplane class has changed only slightly: two methods called addList were added. The first gets the child builder and hands over the parent builder; the second is just a setter which is used by the child builder.
import java.util.List;

public final class Airplane {
private final int seats;
private final int engine;
private final int rescue;
private final List<Instrument> instruments;
private Airplane(Builder builder){
this.seats = builder.seats;
this.engine = builder.engine;
this.rescue = builder.rescue;
this.instruments = builder.instruments;
}
public static class Builder {
private final int seats;
private int engine;
private int rescue;
private List<Instrument> instruments;
public Builder(int seats){
this.seats = seats;
}
public Builder withEngine(int engine) {
this.engine = engine;
return this;
}
public Builder withRescue(int rescue) {
this.rescue = rescue;
return this;
}
// get child builder and hands over parent builder
public Instrument.ListBuilder addList(){
return new Instrument.ListBuilder().setAirplaneBuilder(this);
}
public void addList(List<Instrument> instruments){
this.instruments = instruments;
}
public Airplane build(){
return new Airplane(this);
}
}
}
The Instrument class now comprises two nested static classes, namely Builder and ListBuilder. Both classes are structured in the same way, and each holds a reference to its parent builder.
import java.util.ArrayList;
import java.util.List;

public class Instrument {
private String name;
private Instrument(Builder builder) {
this.name = builder.name;
}
public static class Builder {
private String name;
private ListBuilder listBuilder; // parent builder
public Builder withName(String name) {
this.name = name;
return this;
}
public Instrument build() {
return new Instrument(this);
}
// setter for parent builder
public Builder setListBuilder(ListBuilder builder) {
this.listBuilder = builder;
return this;
}
// build it and get parent builder again
public ListBuilder toList(){
this.listBuilder.add(this.build());
return this.listBuilder;
}
}
public static class ListBuilder {
private List<Instrument> instruments = new ArrayList<>();
private Airplane.Builder airplaneBuilder; // parent builder
public ListBuilder add(Instrument instrument){
this.instruments.add(instrument);
return this;
}
public List<Instrument> build(){
return this.instruments;
}
// get child builder and hands over parent builder
public Instrument.Builder add() {
return new Instrument.Builder().setListBuilder(this);
}
// setter for parent builder
public ListBuilder setAirplaneBuilder(Airplane.Builder builder){
this.airplaneBuilder = builder;
return this;
}
// build it and get parent builder again
public Airplane.Builder done() {
this.airplaneBuilder.addList(this.build());
return airplaneBuilder;
}
}
}
With the help of generics and reflection, these builders could be constructed more generically. Maybe I will explain this in more detail at some point later.
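To give an idea of where that could go, here is a minimal sketch of the generic variant, using generics and a callback instead of reflection: the child builder is parameterized over its parent type and hands the finished value back through a Consumer. All names in this sketch are made up for illustration:

```java
import java.util.function.Consumer;

// Sketch of a reusable child builder: P is the parent builder type.
// done() delivers the built value to the parent and returns to it,
// which is the toList()/done() hand-off from above in generic form.
public class NamedChildBuilder<P> {
    private final P parent;
    private final Consumer<String> onBuild;
    private String name;

    public NamedChildBuilder(P parent, Consumer<String> onBuild) {
        this.parent = parent;
        this.onBuild = onBuild;
    }

    public NamedChildBuilder<P> withName(String name) {
        this.name = name;
        return this;
    }

    public P done() {
        onBuild.accept(name);   // hand the result back to the parent
        return parent;          // continue the fluent chain on the parent
    }
}
```

With this shape, the parent builder no longer needs a dedicated setter per child type; it just passes itself and a callback when creating the child.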
SQL Injection
SQL injections are one of the greatest risks on the web today. It is not difficult to rebuild existing programs so that SQL injections are no longer possible. The main problem for most programmers is a lack of knowledge about this type of attack. Understanding it and identifying the risks in your applications is absolutely critical.
A SQL injection is nothing more than the manipulation of the SQL command on the victim’s side. It is the exploitation of a vulnerability in connection with SQL databases. The attacker attempts to infiltrate its own database commands on the application. Its aim is to spy or change data, to gain control of the server, or simply to inflict maximum damage .
Let’s look the following naive approach how a login is implemented. The email and password is replaced with the data entered in the SQL string. The so generated SQL statement is then sent to the database. If a value is returned, the user is considered as logged in.
String email = request.getParameter("email");
String password = request.getParameter("password");
String sql = "select * from users where (email ='" + email +"' and password ='" + password + "')";
Connection connection = pool.getConnection();
Statement statement = connection.createStatement();
ResultSet result = statement.executeQuery(sql);
if (result.next()) {
loggedIn = true;
// # Successfully logged in and redirect to user's profile page
} else {
// # Auth failure – Redirect to Login Page
}
A SQL injection can be performed quite easily. We only need to inject characters that the database interprets so that the SQL string is modified. If ' or 1=1)-- is entered as the password, the generated SQL looks as follows:
select * from users where (email ='' and password ='' or 1=1)--')
The SQL above is a completely valid statement. It will return all rows from the table users, since 1=1 is always true. The -- starts a comment, so everything after it is ignored. A second way to launch a SQL injection attack is to enter ' or ''=' as both email and password. The generated SQL then looks as follows:
select * from users where (email ='' or ''='' and password ='' or ''='');
This statement will also return all users, since ''='' is always true. In this manner one gets access to all user names and passwords in a database.
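To see the mechanics concretely, here is a small Python sketch (the query mirrors the Java example above; the helper name is made up for illustration):

```python
def build_login_sql(email, password):
    # Naive string concatenation, mirroring the vulnerable Java code
    return ("select * from users where (email ='" + email +
            "' and password ='" + password + "')")

# A benign login attempt
print(build_login_sql("bob@example.com", "secret"))

# The injected "password" turns the WHERE clause into a tautology
print(build_login_sql("", "' or 1=1)--"))
# select * from users where (email ='' and password ='' or 1=1)--')
```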
But how can SQL injections be avoided? An application can be protected from SQL injection attacks by using SQL parameters. For this purpose, so-called prepared statements are used in Java: the data is passed as a parameter to an already compiled statement. The data is therefore not interpreted as SQL, which prevents injection attacks. The changed code looks as follows:
String sql = "select * from users where (email =? and password =?)";
Connection connection = pool.getConnection();
PreparedStatement pstmt = connection.prepareStatement(sql);
pstmt.setString(1, email);
pstmt.setString(2, password);
ResultSet result = pstmt.executeQuery();
The ? is used as a placeholder. By using the PreparedStatement class, the program may even gain performance if the statement is executed multiple times.
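The effect of parameterization can be reproduced with Python's built-in sqlite3 module (a sketch for illustration; the Java PreparedStatement behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table users (email text, password text)")
conn.execute("insert into users values ('bob@example.com', 'secret')")

email, password = "", "' or ''='"

# Vulnerable: attacker input is spliced directly into the statement
sql = ("select * from users where (email ='" + email +
       "' and password ='" + password + "')")
assert len(conn.execute(sql).fetchall()) == 1  # login bypassed!

# Safe: placeholders pass the input as data, never as SQL
rows = conn.execute(
    "select * from users where (email = ? and password = ?)",
    (email, password)).fetchall()
assert rows == []  # the injection string matches nothing
```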
Exception Testing with JUnit
Testing exceptions can be done with the @Test annotation and its expected property. The message of the exception has to be asserted in a catch-block. This test seems a bit cumbersome.
@Test(expected = IllegalArgumentException.class)
public void exceptionTesting() {
try {
throw new IllegalArgumentException("id must not be null");
}
catch(IllegalArgumentException iae) {
assertEquals("id must not be null", iae.getMessage());
throw iae;
}
}
Since JUnit 4.7, it is possible to use the @Rule annotation to expect exceptions. This way, the test can be expressed much more elegantly.
@Rule
public ExpectedException thrown = ExpectedException.none();
@Test
public void shouldThrowExpectedException(){
thrown.expect(IllegalArgumentException.class);
thrown.expectMessage("id must not be null");
throw new IllegalArgumentException("id must not be null");
}
In JUnit 5, we can use Java 8 lambdas to express the same test.
@Test
void exceptionTesting() {
Throwable exception = expectThrows(IllegalArgumentException.class, () -> {
throw new IllegalArgumentException("id must not be null");
});
assertEquals("id must not be null", exception.getMessage());
}
Open/Closed Principle
The Open/Closed Principle (OCP) states that classes should be open for extension, but closed for modification. The goal is to allow classes to be easily extended to incorporate new behavior without modifying existing code. This means when extending your software you should not need to go and dig around in its internals just to change its behavior. You should be able to extend it by adding to it new classes without the need to change the existing code.
Open to Extension = New behavior can be added in the future
Closed for Modification = Changes to code are not required
But applying the OCP everywhere is wasteful and unnecessary. The OCP leads to more complex designs and to harder-to-understand code, especially for beginners. It is often said that the OCP should not be applied at first: if the class changes once, we accept it; if it changes a second time, we refactor it towards OCP.
Let’s look at an example. Assume we have a web shop where there is a function that calculates the total amount of all items in a shopping cart. As shown in the code below, there are different type of rules how the total amount is calculated depending on the item.
public double totalAmount(List<Item> items) {
    double total = 0.0;
    for (Item item : items) {
        if ("DISCOUNT".equals(item.getCategory())) {
            total += 0.95 * item.getPrice();
        }
        else if ("WEIGHT".equals(item.getCategory())) {
            total += item.getQuantity() * 5 / 1000;
        }
        else if ("SPECIAL".equals(item.getCategory())) {
            total += 0.8 * item.getPrice();
        }
        // more rules are coming!
    }
    return total;
}
Every time a new rule is added, or the way items are priced is modified, the class and its method must change. Each change can introduce bugs and requires re-testing. At this point, we know that more rules are coming. So we must think about how to refactor this code so that we don't have to edit this particular method every time. The way to introduce new behavior is through new classes. They are less likely to introduce new problems, since nothing depends on them yet.
There are typically two approaches in an object-oriented programming language to achieve OCP. The first is the template method pattern: an abstract base class provides a default behavior, and the items in our example inherit from this base class and override the default. The second is the strategy pattern, which allows changing a class's behavior or its algorithm at run time. This pattern may be a bit over-engineered for our simple example. With either pattern, however, the calculation shortens to the following:
public double totalAmount(List<Item> items) {
    double total = 0.0;
    for (Item item : items) {
        total += item.getPrice();
    }
    return total;
}
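To illustrate the idea in compact form, here is a Python sketch of the strategy approach (the class names are invented for this example; the Java version would define an Item hierarchy in the same way):

```python
class Item:
    """Default pricing rule: the plain price."""
    def __init__(self, price):
        self._price = price
    def get_price(self):
        return self._price

class DiscountItem(Item):
    def get_price(self):
        return 0.95 * self._price

class SpecialItem(Item):
    def get_price(self):
        return 0.8 * self._price

def total_amount(items):
    # Closed for modification: a new rule arrives as a new subclass,
    # and this function never changes.
    return sum(item.get_price() for item in items)

print(round(total_amount([Item(10.0), DiscountItem(10.0), SpecialItem(10.0)]), 2))
# 27.5
```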
Groove your Jenkins
Jenkins jobs are part of the infrastructure and should be treated as code, following the "Infrastructure as Code" paradigm. Jenkins allows creating build jobs with the Groovy Job DSL. Jobs are no longer created manually through the graphical console; instead, code is written and checked in. The configuration of the jobs, with its whole history, is visible in version control. In this blog post I want to briefly show how this works to ease getting started.
A so-called seed job has to be created. When this job is built, it generates a build job, which can then be run to build the project as usual. Unfortunately, the build job is not automatically updated if the seed job is changed and rerun.
First you need the Job DSL Plugin. The plugin allows describing build jobs in a Groovy script. Next, a freestyle project is created that will serve as the seed job. In the configuration you can leave everything empty. In the Build section, click Add build step and then select Process Job DSLs.
process_job_dsl
Then select Use the provided DSL script. The code for the job can be typed directly into the console, as shown in the picture below. When selecting Look on Filesystem instead, you have the opportunity to load Job DSL scripts from the workspace. This is the recommended option, because the scripts can be checked in and managed in version control.
dsl_config
Before the JDK and Maven version can be set in the DSL script, they must first be configured in the settings under Global Tool Configuration.
Funny Jenkins Plugins
There are some funny Jenkins plugins that can spice up the builds a little. They can even increase the motivation to keep builds green. High time to mention them briefly here.
The first plugin is probably the best known: the Chuck Norris plugin. When enabled, Build Chuck Norris appears on every build page with one of his sayings and keeps things more fun.
chuck_wisdom
chuck_norris
The second plugin is the Emotional Jenkins plugin. When a build fails, Jenkins gets visibly upset. Depending on the build state, one of the Jenkins faces below is displayed on the build page. If the build is successful, Jenkins is satisfied. If a test fails, Jenkins looks a bit sad. And if there is a compilation error, Jenkins is angry.
emotional_jenkins
The third plugin is the Continuous Integration Game. Points are gained for fixing broken builds, writing new unit tests, fixing static analysis violations, etc. On the other side, points are lost for breaking the build or producing new test errors. On the Jenkins home page, a leaderboard with the current ranking is displayed. The plugin is intended to stimulate a kind of competition among developers, and thus lead to good build quality.
leaderboard
build_points
The last plugin I want to mention here is the Claim plugin. If a build fails, someone can claim it. The bad build is then assigned to the appropriate person, and everyone is informed that someone is taking care of the build. Not just funny, but even useful.
claim_build
claim_report
Once the plugins are installed, they have to be enabled on a per-job basis. To enable the plugins for a specific job, go to the job configuration page, click Add post-build action and select the corresponding feature from the list.
activate_plugins
Recap Linux
Permissions
I work very irregularly with Linux. When I do, however, I often have to change the permissions of a file. And almost every time, I cannot remember the rough concepts and the commands. Therefore, I have decided to write them down here very briefly. Maybe it will even help someone else.
Show permissions of a file or folder:
ls -ld filename
What does all the following mean?
linux_permissions
Mode fields | Hard links | Owner | Group | File size | Date & Time | Filename
The first mode field is the "special file" designator; it marks the type of the file. Regular files display as - (none). Then the mode field has three triples of the format rwx. The first triple determines the permissions for the user, the second for the group, and the third for others: r ⇒ read access, w ⇒ write access, x ⇒ executable.
Give all permissions to everyone:
chmod 777 filename
chmod means change mode. 7 is 4+2+1, i.e. binary 111, which grants rwx.
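The arithmetic behind the digits can be sketched in a few lines of Python (a hypothetical helper, just for illustration):

```python
def mode_to_rwx(mode):
    # Each octal digit is a 3-bit triple: read=4, write=2, execute=1
    flags = ((4, "r"), (2, "w"), (1, "x"))
    return "".join(ch if int(d) & bit else "-"
                   for d in mode for bit, ch in flags)

print(mode_to_rwx("777"))  # rwxrwxrwx
print(mode_to_rwx("644"))  # rw-r--r--
```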
Installing and updating software
http://superuser.com/questions/125933/what-is-the-difference-between-yum-apt-get-rpm-configure-make-install
Command line aliases in Windows
Aliases are nicknames for command calls and thus save a lot of typing. On Linux there is the command alias; on Windows there is doskey. An alias can be defined as follows:
doskey ls=dir
Typed aliases are volatile, meaning they are no longer available after the command line console is reopened. To make them persistent, two steps are necessary. First, create a bat script containing all aliases and save it to an arbitrary location. Second, insert a corresponding string value in the Windows registry.
1. Open Registry by searching „regedit“
2. Open HKEY_CURRENT_USER → SOFTWARE → Microsoft → Command Processor
3. Add a new String Value called AutoRun with the path to the created bat script.
registry_value
Whenever a command line console is opened, the script is loaded and automatically executed in the current session.
Transpiling and bundling modules with webpack
Webpack is a newer module bundler that is continuously gaining popularity. It can basically be viewed as a replacement for grunt or gulp. Webpack has a broad feature set: it can bundle AMD, CommonJS and ES2015 modules. Further, it provides a feature known as code splitting that allows grouping the code into multiple bundles in order to optimize how it is downloaded. Moreover, webpack can bundle JavaScript, CSS, images and other assets. It also provides loaders that can preprocess files before bundling them. In this blog post, I'm going to scratch the surface of loaders: I'd like to demonstrate how to configure the babel-loader with webpack so that the files are transpiled whenever webpack is called.
First of all, we need to install webpack in the project as well as globally using npm.
npm install webpack --save-dev
npm install webpack -g
Next, we need to install the babel-loader as well as babel-core. These are 3rd-party components provided by Babel. If babel-cli and babel-preset-es2015 are not yet installed, install them as well.
npm install babel-loader babel-core --save-dev
npm install babel-cli babel-preset-es2015 --save-dev
Next, we have to configure the webpack.config.js which contains the configuration for webpack. It is basically a CommonJS module.
module.exports = {
entry: './js/app.js',
output: {
path: './build',
filename: 'bundle.js'
},
module: {
loaders: [
{
test: /\.js$/,
exclude: /node_modules/,
loader: 'babel-loader',
query: {
presets: ['es2015']
}
}
]
}
};
The input file is assumed to be app.js in the folder js. The transpiled and bundled file will be located in the folder build and is called bundle.js. Without going into the details, the loader will look for all files ending in .js, excluding the files in node_modules. It will then transform them from ES6 to ES5. To make all this happen, we only need to open a command line in the project and type webpack.
There are a lot more useful loaders that can be configured. For example, there is a css-loader which bundles all CSS files, a sass-loader that does the same for Sass files, and a url-loader that can be used to bundle images and fonts. Without further explanation, they are shown below.
{
test: /\.css$/,
exclude: /node_modules/,
loader: 'style-loader!css-loader'
},
{
test: /\.scss$/,
exclude: /node_modules/,
loader: 'style-loader!css-loader!sass-loader'
},
{
test: /\.(png|jpg|ttf|eot)/,
exclude: /node_modules/,
loader: 'url-loader?limit=10000'
}
Spring Boot devtools with IntelliJ
The overall goal of Spring Boot devtools is to improve development time. It has been available since Spring Boot version 1.3 and includes several features, among others property defaults, live reload and automatic restart.
Spring Boot devtools works by watching the classpath for any build changes and then automatically restarting the application. In Eclipse and most other IDEs, every save actually builds the code. In IntelliJ, however, the code is not always fully built when it is saved or auto-saved. In this short post, I'd like to demonstrate how to configure automatic restart with IntelliJ.
First of all, you need to add devtools to your dependencies. Note that the dependency is marked optional so that it is not transitively included in other projects.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<optional>true</optional>
</dependency>
Next, we record a macro by selecting Edit → Macros → Start Macro Recording. Thereafter the following two steps must be recorded:
1. File → Save All
2. Build → Make Project
Once this is done, you can stop the recording via Edit → Macros → Stop Macro Recording and give the recorded macro a name, e.g. "Spring Boot Automatic Restart". Next, go to the keymap in the settings (File → Settings). Copy the current keymap and rename it, e.g. to "Spring Boot Keymap". Scroll down to macros and select your recorded macro. Via the context menu (right-click), add a keyboard shortcut like CTRL + S.
Whenever your Spring Boot application is running and CTRL + S is pressed, an automatic restart is performed. Spring Boot devtools works with two classloaders: one that loads all classes at the beginning and another that only loads the changes. This yields a startup improvement. I observed on my machine that the startup time is more than halved by using devtools and its automatic restart.
Expected-Value Analysis of Skip Lists
5 minute read
Intro
The main reason for writing this post is that, surprisingly, I have never seen a quantitative analysis of skip list time performance anywhere on the Chinese web… and in English there are only academic papers, no simple version.
Having covered binary trees, let's look at another data structure with $O(\log N)$ search/insert/delete that is easier to implement: the skip list. "Easier to implement" refers, of course, to the randomized skip list. Its performance is usually somewhat worse than a binary tree's, but because searches, insertions and deletions only touch local list links, it is particularly well suited to highly concurrent scenarios.
Skip lists
In short, each level is a linked list, and each node is promoted to the next level with probability $p$. The bottom level holds all the data. A search starts at the top level and proceeds downward. A common choice is $p=1/2$.
In what follows, let $h_n$ denote the height of the $n$-th node, $h = \max{h_n}$ the total height, $l_i = \sum_n \mathbb I \big[h_n \ge i\big]$ the number of nodes at level $i$, and $N$ the total amount of data.
A minimal Python implementation is attached at the end.
Space performance
The space analysis is relatively easy. The expected number of nodes at each level is $p$ times that of the level below:
\[E[l_i] = E_{l_{i-1}}\big[E[l_i|l_{i-1}]\big] = E_{l_{i-1}}[pl_{i-1}] = pE[l_{i-1}]\]
The bottom level is a linked list of all the data, i.e. $l_0 = N = E[l_0]$. The expected total space usage is therefore
\[E\left[\sum_{i=0}^\infty l_i\right] = \sum_i E[l_i] = E[l_0]\sum_i p^i = \frac{N}{1-p} \sim O(N)\]
If we account for the overhead of the linked-list implementation (e.g. sentinel list heads), the cost also depends on the expected total height $h$, giving $O(N+h)$ space, where $h\sim\log(N)$ as proved below.
First, compute the distribution of $h$:
\[P(h > h') = 1 - \prod_i P(h_i \le h') = 1 - (1-p^{h'})^N\]
The expectation of $h$ is therefore
\[\begin{align*} E[h] &= \sum_{h'=0}^\infty h' P(h = h') \\ &= \sum_{h'} h'\big(P(h > h') - P(h > h'-1)\big) \\ &= \sum_{h'} P(h>h') = \sum_{h'} \left[1-(1-p^{h'})^N\right] \\ &= \sum_{n=0}^{N-1} \sum_{h'} (1-p^{h'})^n p^{h'} \\ &\sim \sum_n \int_0^1 y^n \frac{\mathrm d y}{-\ln p} \qquad (y = 1-p^{h'}) \\ &= \frac{1}{-\ln p}\left(1 + \sum_{n=1}^{N-1} \frac 1n\right) \\ &\sim \frac{\ln N}{-\ln p} = \log_{1/p}{N} \end{align*}\]
(This result is not surprising at all: growing from 1 to $N$ by a factor of $1/p$ per level gives exactly this. A fully rigorous derivation is a bit of work, though; it is easy to make the mistake of splitting the sum into two divergent sums.)
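As a quick numerical sanity check of $E[h] \approx \log_{1/p}{N}$, here is a small simulation sketch (heights are geometric: each node is promoted one level with probability $p$):

```python
import random

rng = random.Random(0)

def skiplist_height(n, p=0.5):
    # h = the maximum node height over n nodes
    h = 0
    for _ in range(n):
        level = 0
        while rng.random() < p:
            level += 1
        h = max(h, level)
    return h

trials, n = 200, 1024
avg = sum(skiplist_height(n) for _ in range(trials)) / trials
print(avg)  # close to log2(1024) = 10, plus a small constant
```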
Time performance
The time spent searching at level $i$ is at most the number of nodes between adjacent nodes of the level above, whose expected value is $(l_i - l_{i+1})/(l_{i+1}+1)$:
\[E[t_{i}|l_i, l_{i+1}] = \frac{l_i - l_{i+1}}{l_{i+1} + 1}\]
Given $l_{i}$, $l_{i+1}\sim B(l_i, p)$ is binomially distributed, so
\[\begin{align*} E[t_{i}|l_{i}] &= \sum_{l_{i+1}=0}^{l_i} \frac{l_i - l_{i+1}}{l_{i+1} + 1} {l_i \choose l_{i+1}} p^{l_{i+1}} q^{l_i-l_{i+1}} \qquad (q = 1-p)\\ &= \sum_{l_{i+1}} \left(\frac{l_i + 1}{l_{i+1} + 1} - 1\right) {l_i \choose l_{i+1}} p^{l_{i+1}} q^{l_i-l_{i+1}} \\ &= (l_i + 1)\left(\sum \frac{1}{l_{i+1}+1}{l_i \choose l_{i+1}} p^{l_{i+1}}q^{l_i - l_{i+1}}\right) - 1 \\ &= (l_i + 1)\frac{(p+q)^{l_i+1} - q^{l_i+1}}{(l_i+1)p} - 1 \\ &= \frac{1 - (1-p)^{l_i+1}}{p} - 1 \end{align*}\]
Hence the expectation for each level (noting $l_i \sim B(N, p^i)$) is
\[\begin{align*} E[t_i] &= \sum_{l_i=0}^N \left(\frac{1 - (1-p)^{l_i+1}}{p} - 1\right) {N\choose l_i} p^{il_i} (1-p^i)^{N-l_i} \\ &= \left(\frac 1p - 1\right) - \frac{1-p}{p} \sum_{l_i} (1-p)^{l_i} {N\choose l_i} p^{il_i} (1-p^i)^{N-l_i} \\ &= \frac 1p - 1 - \frac{1-p}{p} \big[(1-p)p^i + 1-p^i\big]^N \\ &= \frac{1-p}{p} \big[1 - (1-p^{i+1})^N\big] \end{align*}\]
Assuming the cost ratio of a downward jump to a rightward jump is $\xi$, the expected search time is
\[\begin{align*} E\left[\sum_{i=0} t_i + \xi\right] &= \xi E[h] + \sum_i E[t_i] \\ &= \xi E[h] + \left(\frac 1p - 1 \right) \sum_i \big[1 - (1-p^{i+1})^N\big] \\ &= \left(\frac 1p - 1 + \xi \right)E[h] \sim \left(\frac 1p - 1 + \xi \right)\log_{1/p}{N} \end{align*}\]
The last step substitutes the third line of the $E[h]$ computation. This result also agrees with the hand-waving estimate: in expectation, $l_{i+1} = p l_i$, so $t_i = 1/p-1$, and the total cost is $(1/p-1+\xi)h$. (Perhaps this agreement is a coincidence? Otherwise, all that computation would have been wasted effort…)
From this we conclude:
1. The total time complexity is $\mathcal O(\log N)$
2. The optimal probability $p^*$ depends on $\xi$: $p^*(\ln 1/p^* - 1) = \xi - 1$
• In particular, when $\xi = 1$, $p^* = 1/e$ is optimal;
• as $\xi\to 0$, the search cost decreases monotonically in $p$.
Python implementation
The implementation supports insertion and deletion by value, with the same interface as the balanced binary tree from the previous post. To avoid storing each node's left and upper predecessors, the internal search routine records the path instead.
from random import randrange as rand, random as drand
class QuadList:
def __init__(self, val=None, right=None, below=None):
self.val = val
self.right = right
self.below = below
class SkipList:
def __init__(self):
self.header = QuadList()
def _search(self, val):
prevs = []
c = self.header
while c.below:
while c.right and c.right.val < val:
c = c.right
prevs.append(c)
c = c.below
while c.right and c.right.val < val:
c = c.right
return c, prevs
def _insert(self, val, c, prevs):
c.right = QuadList(val, c.right)
c = c.right
# while rand(2):
while drand() > 0.632:
# 0.632 = 1-1/e
if prevs:
last = prevs.pop()
else:
self.header = QuadList(below=self.header)
last = self.header
last.right = QuadList(val, last.right, c)
def _delete(self, val, c, prevs):
assert c.right.val == val
while c.right and c.right.val == val:
c.right = c.right.right
if prevs:
c = prevs.pop()
else:
break
# clean empty layers
while self.header.right is None and self.header.below:
self.header = self.header.below
def search(self, val):
c, _ = self._search(val)
return c.val, c.right.val if c.right else None
def insert(self, val):
self._insert(val, *self._search(val))
def delete(self, val):
self._delete(val, *self._search(val))
Acknowledgements
Thanks to J.Y. Zhang and Yan for the discussions! I have realized that my ability to solve probability problems has really deteriorated… doing experiments lowers your IQ…
Leave a comment
Analyzing Meterpreter Communications
A few days ago, playing the Hecheng Cup CTF, I ran into a Meterpreter traffic-analysis challenge. I spent an afternoon trying to recover the plaintext communication without success, so I decided to dissect the communication flow directly from the source code.
Getting the source
The meterpreter source can be obtained directly from Metasploit's GitHub, in rapid7/metasploit-payloads.
How it works
Finding the entry point
Since I don't have much binary-analysis experience, I had to rely on what development experience I have. Guessing that this is DLL injection, I located the entry point DllMain in c/meterpreter/source/metsrv.c, which receives the parameter MetsrvConfig config via LPVOID lpReserved. The config contains the following:
// source/common/common_config.h
typedef struct _MetsrvSession
{
union
{
UINT_PTR handle;
BYTE padding[8];
} comms_handle; ///! Socket/handle for communications (if there is one).
DWORD exit_func; ///! Exit func identifier for when the session ends.
int expiry; ///! The total number of seconds to wait before killing off the session.
BYTE uuid[UUID_SIZE]; ///! UUID
BYTE session_guid[sizeof(GUID)]; ///! Current session GUID
} MetsrvSession;
typedef struct _MetsrvTransportCommon
{
CHARTYPE url[URL_SIZE]; ///! Transport url: scheme://host:port/URI
int comms_timeout; ///! Number of sessions to wait for a new packet.
int retry_total; ///! Total seconds to retry comms for.
int retry_wait; ///! Seconds to wait between reconnects.
} MetsrvTransportCommon;
typedef struct _MetsrvConfig
{
MetsrvSession session;
MetsrvTransportCommon transports[1]; ///! Placeholder for 0 or more transports
// Extensions will appear after this
// After extensions, we get a list of extension initialisers
// <name of extension>\x00<datasize><data>
// <name of extension>\x00<datasize><data>
// \x00
} MetsrvConfig;
This defines the session and communication parameters, including the session GUID, UUID, transport URL, retry parameters, exit method, and so on.
Initial setup
Next, Init() is called; Init() calls server_setup(), which starts the whole communication process. We won't concern ourselves with the implementation details here and go straight for the protocol handling.
remote_allocate() is called to allocate a remote session, whose structure is defined as follows:
typedef struct _Remote
{
PConfigCreate config_create; ///! Pointer to the function that will create a configuration block from the curren setup.
Transport* transport; ///! Pointer to the currently used transport mechanism in a circular list of transports
Transport* next_transport; ///! Set externally when transports are requested to be changed.
DWORD next_transport_wait; ///! Number of seconds to wait before going to the next transport (used for sleeping).
MetsrvConfig* orig_config; ///! Pointer to the original configuration.
LOCK* lock; ///! General transport usage lock (used by SSL, and desktop stuff too).
HANDLE server_thread; ///! Handle to the current server thread.
HANDLE server_token; ///! Handle to the current server security token.
HANDLE thread_token; ///! Handle to the current thread security token.
DWORD orig_sess_id; ///! ID of the original Meterpreter session.
DWORD curr_sess_id; ///! ID of the currently active session.
char* orig_station_name; ///! Original station name.
char* curr_station_name; ///! Name of the current station.
char* orig_desktop_name; ///! Original desktop name.
char* curr_desktop_name; ///! Name of the current desktop.
PTransportCreate trans_create; ///! Helper to create transports from configuration.
PTransportRemove trans_remove; ///! Helper to remove transports from the current session.
int sess_expiry_time; ///! Number of seconds that the session runs for.
int sess_expiry_end; ///! Unix timestamp for when the server should shut down.
int sess_start_time; ///! Unix timestamp representing the session startup time.
PivotTree* pivot_sessions; ///! Collection of active Meterpreter session pivots.
PivotTree* pivot_listeners; ///! Collection of active Meterpreter pivot listeners.
PacketEncryptionContext* enc_ctx; ///! Reference to the packet encryption context.
} Remote;
This contains essentially everything related to remote communication, including the packet encryption context.
Let's look at the data structure for the encryption context:
typedef struct _Aes256Key
{
BLOBHEADER header;
DWORD length;
BYTE key[256/8];
} Aes256Key;
typedef struct _PacketEncryptionContext
{
HCRYPTPROV provider;
HCRYPTKEY aes_key;
int provider_idx;
BOOL valid;
Aes256Key key_data;
BOOL enabled;
} PacketEncryptionContext;
Clearly AES-256, but the exact mode is still unknown, so we keep reading. Next, the expiry times are set and the corresponding transport is created. The handler for each transport type can be found in source/metsrv/server_transport_*.c; in the end, a struct full of function pointers is assigned to remote->transport. What follows is the gathering of some basic information and a series of initializations, up to the call of remote->transport->transport_init() and its error handling. So we can go straight to the corresponding server_transport_*.c.
Diving deeper, starting from the TCP transport handler
Upstream, transport_init() is called, which for TCP corresponds to configure_tcp_connection(). Ultimately, server_dispatch_tcp() is responsible for handling packets: it receives them with packet_receive() and processes the received commands with command_handle().
Let's look at packet_receive() first.
packet_receive()
First, let's list the data structures involved:
// guiddef.h
typedef struct _GUID {
unsigned long Data1;
unsigned short Data2;
unsigned short Data3;
unsigned char Data4[ 8 ];
} GUID;
// source/common/common_core.h
typedef struct
{
BYTE xor_key[4]; // 4 bytes
BYTE session_guid[sizeof(GUID)]; // 16 bytes
DWORD enc_flags; // 4 bytes
DWORD length; // 4 bytes: the length of everything after the header
DWORD type; // 4 bytes
} PacketHeader;
The flow is as follows:
1. Receive sizeof(PacketHeader) bytes (which works out to 32 bytes).
2. Check the packet header's XOR key (a valid XOR key contains no zero bytes).
3. Call xor_bytes() to decode the rest of the packet header.
4. Extract the payload length (header.length - sizeof(TlvHeader)).
5. Receive data in a loop, based on the payload length, until the complete payload has arrived.
6. Check whether the GUID in the payload is empty or matches the configured one; if so, call decrypt_packet() to decrypt, otherwise keep looking for the corresponding pivot.
7. XOR the payload; depending on enc_flags and the integrity of the corresponding context, decide whether to decrypt: enc_flags of 0x0 means no decryption, 0x1 means AES-256 (source/metsrv/packet_encryption.h). From the code discussed below, it also follows that meterpreter uses AES-256-CBC.
8. If all of the above succeeds, move on to the next step.
command_handle()
As before, the data structures we need:
typedef struct
{
DWORD length;
DWORD type;
} TlvHeader;
typedef struct
{
TlvHeader header;
PUCHAR buffer;
} Tlv;
Looking directly at the source of command_handle(), the flow is:
1. First, packet_get_tlv_value_uint() is called to obtain the COMMAND_ID. After a series of calls, this ends up in packet_find_tlv_buf(), which walks everything after the Packet Header looking for the requested type, using the Tlv structure shown above. While processing each header, it also checks whether the type carries the TLV_META_TYPE_COMPRESSED flag; if so, the payload is decompressed with zlib and the TLV_META_TYPE_COMPRESSED flag is removed, providing transparent access for the layer above.
2. Once the COMMAND_ID is found, the corresponding extension is looked up. If the handler is inline, command_process_inline() is called; otherwise, it runs in a new thread.
3. Looking at command_process_inline(): it directly invokes the handler obtained from command_locate_extension() in the previous step, and finally calls packet_call_completion_handlers().
Since we only care about the protocol here, the key function is packet_find_tlv_buf(): as we can see, it reads the TLV headers one by one and uses their length fields to traverse the whole buffer.
The definitions of the types and metas can be found in source/common/common_core.h.
Let's first write a quick program to verify this (using the initial public-key upload as an example):
Symmetric key negotiation and decryption
Knowing roughly how the protocol is parsed is still far from enough; we also need the symmetric-key negotiation that follows, so let's continue.
request_negotiate_aes_key() in source/metsrv/packet_encryption.c first generates the AES key and then encrypts it with the previously received RSA public key. A look at public_key_encrypt() shows standard RSA encryption; it can be decrypted directly with pycryptodome.
If enc_flags is 1, AES-256-CBC is used for decryption: the 16 bytes after the Packet Header are the IV, and the data after the IV is the ciphertext. After decrypting, the program places the plaintext right after the Packet Header, providing a transparent service to the layer above. Here, too, pycryptodome can do the decryption.
Putting the model together
From the analysis above, we get the following model.
All traffic has the form:
[4-bytes xor key][xor-ed payloads]
The 4-byte XOR key is itself part of the Packet Header; the data following it comprises the rest of the Packet Header plus everything after it, so when processing the traffic, the XOR key must be applied to all subsequent data.
After the XOR operation, a packet falls into one of two cases.
Without encryption (enc_flags = 0):
[32-bytes packet header][variable-length payload]
With encryption (enc_flags = 1):
[32-bytes packet header][16-bytes IV][variable-length payload]
From the source, we know there is currently only one cipher: AES-256-CBC.
The Packet Header:
[4-bytes xor key][16-bytes session GUID][4-bytes enc-flags][4-bytes payload-length(including 8-bytes tlv header length)][4-bytes packet-type]
The payload:
The payload may consist of multiple TLV units. During parsing, the whole payload is walked unit by unit, according to the length in the packet header, until the desired data is found. Each TLV unit is laid out as:
[4-bytes length][4-bytes type][variable-length payload]
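This framing is easy to exercise in Python. The sketch below builds a synthetic, unencrypted packet and parses it back (big-endian fields assumed; an illustration only, not the full meterpreter parser):

```python
import struct

def xor_bytes(key, data):
    # The same 4-byte key is cycled over the whole buffer
    return bytes(b ^ key[i % 4] for i, b in enumerate(data))

def parse_packet(raw):
    # [4-byte xor key][xor-ed: GUID(16), enc_flags(4), length(4), type(4), TLVs]
    key, body = raw[:4], xor_bytes(raw[:4], raw[4:])
    enc_flags, length, pkt_type = struct.unpack(">III", body[16:28])
    tlvs, payload = [], body[28:28 + length - 8]
    while payload:
        tlv_len, tlv_type = struct.unpack(">II", payload[:8])
        tlvs.append((tlv_type, payload[8:tlv_len]))
        payload = payload[tlv_len:]
    return enc_flags, pkt_type, tlvs

# Build a synthetic packet containing one TLV unit
value = b"hello"
tlv = struct.pack(">II", 8 + len(value), 1) + value
key = b"\x01\x02\x03\x04"
header_rest = b"\x00" * 16 + struct.pack(">III", 0, 8 + len(tlv), 0)
packet = key + xor_bytes(key, header_rest + tlv)

enc_flags, pkt_type, tlvs = parse_packet(packet)
print(enc_flags, tlvs)  # 0 [(1, b'hello')]
```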
The TLV_TYPE_SYM_KEY TLV unit:
[4-bytes length][4-bytes type][variable-length encrypted payload]
During the symmetric-key negotiation, this payload is encrypted with the preset RSA public key.
A complete exchange
After meterpreter finishes loading, the following happens:
1. LHOST generates an RSA key pair and sends core_negotiate_tlv_encryption together with the RSA public key to RHOST.
2. RHOST generates an AES key, encrypts it with the received RSA public key (this is TLV_TYPE_SYM_KEY), and sends it back to LHOST.
3. The two ends run a test exchange; if it succeeds, all subsequent communication uses AES-256-CBC: the enc_flags flag is set to 1, a 16-byte IV is inserted after the Packet Header, and the payload is encrypted.
4. This continues until the connection is closed.
Code
I wrote a small program to decrypt the communication; the result looks like this:
Currently implemented features:
• Correctly parse Meterpreter packets
• Identify Meterpreter packet types
• Decrypt Meterpreter packets
• Extract Meterpreter's symmetric key using the private key
• Identify the meta and type of each TLV unit
The code can be obtained from:
5 Examples of Fantastic Chatbot UI
Chatbots evolved from being purely text-based interfaces to little interactive assistants full of personality. It’s fascinating, but it also sets the bar really high.
Customers will likely abandon your chatbot if it can’t keep up with them or is too frustrating to use. Putting careful thought into your chatbot’s user interface is the first step to avoiding this.
Getting started can be the hardest part, so we’ll share some of our favorite chatbot UI examples and actionable steps you can take. But first, it’s important to know the definition, role and expectations of your chatbot user interface.
What is chatbot UI?
A chatbot user interface (UI) is a series of graphical and language elements that allow for human-computer interaction. There are different types of user interfaces , chatbots being a natural language user interface. This means users can communicate on their terms, not the computer’s.
However, a chatbot’s communication skills will vary depending on the interface you create. A chatbot UI that relies on predetermined answers, such as button options, limits what the user can ask and what the chatbot understands.
But contextual and many rule-based chatbots are often designed to understand and respond to a variety of text and voice inputs.
What’s the difference between UI and UX?
In simple terms, UI is the means by which a human and a computer interact. UX, or user experience, is the overall experience a user has from using a product like a chatbot or website.
Based on our research, customers are willing to speak to a chatbot first, but they still want the option to easily escalate an issue to a human rep. This is likely because people have talked to chatbots that were incapable of handling difficult issues. For the user experience to be positive, the user interface needs to exceed expectations.
Carefully considering every detail of your chatbot’s functionality will help create a better user experience. It may also help ease customer skepticism and improve their chatbot perception.
5 examples of great chatbot interfaces
A quality chatbot interface allows you to achieve several things:
• Create a personalized branded experience
• Communicate with your customers in a humanlike way
• Handle a variety of tasks ranging in difficulty
• Assist many of your customers at one time
Here are examples of companies who put careful thought into their chatbot UI:
1. Lark - healthcare chatbot
Lark is a contextual chatbot prescribed to help patients. It’s designed to have humanlike conversations with users via mobile app.
Lark has a friendly, kind and humorous persona that appeals to seniors, its largest clientele. Users can engage with the chatbot through chat, voice and button options.
Lark CEO Julia Hu reported that seniors use the chatbot as a sort of social outlet, which is a testimony to its UI. Research shows that seniors are more resistant to using new technology because they lack the confidence to do so. Lark created a chatbot user interface that gives seniors authority over their health and is simple to use without help.
image of older man chatting with Lark
Lark even offers behavioral health coaching to help people manage stress and anxiety during the COVID-19 pandemic.
The health chatbot’s primary color is green, which symbolizes rest, tranquility and good health. Lark’s messages are motivating and uplifting, which works well with its calming color scheme.
What you can do:
To mimic Lark’s UI approach, pick a color that best captures the instinct and emotion of your brand. Use images, graphs and praise to create a lively experience and inspire your users.
Lark also puts a lot of emphasis on tone in its script. Write with your audience in mind by using words, slang, jokes and phrases they use. A good place to observe this is in your live chat conversations with customers or on social media.
2. Chatfuel - Facebook chatbot provider
Chatfuel lets you create Facebook Messenger chatbots that are decision tree-based with some contextual capabilities. It’s used by major companies such as Lego, Netflix and Adidas.
When designing for Messenger, you’re far more limited in terms of unique design. Each chatbot generally looks the same — black text, white background, blue and gray speech bubbles — but there are elements you can use to personalize the interaction.
Chatfuel chatbots often use a mixture of images, button options and text to interact with users. Chatbots, like Hello Fresh’s Freddy , can detect and respond to a variety of food related keywords and phrases in messages. It even recommends playlists to listen to while cooking.
This chatbot approach could be considered rather minimalist in design, but it’s easy for your Messenger users to navigate. It resembles and functions similarly to the conversations they’re already having with their friends.
What you can do:
Creating a chatbot for Messenger gives you less freedom in UI customization, so make the experience unique by using GIFs, quizzes and images. You can also create an interactive conversation by offering a mix of button options and typed commands.
And don’t forget to give your chatbot a very distinct icon image so it’s noticeable in your customer’s friend list.
3. Replika - self-help chatbot
Replika is one of the most human-sounding chatbots on the market. It’s a contextual chatbot that learns from conversations with its users to the point where it even starts to mimic the user’s manner of speaking.
The intelligent chatbot was created for those in need of a companion. Replika, which can be named anything the user wants to make the friendship more personal, adjusts its mood and tone based on the user’s mood or the conversation topic.
To keep conversation moving, users can select from a variety of topics or issues that they’d like to discuss. You can now even write a song with your Replika. It then awards personality badges the more it learns about the user.
Replika is available via web and mobile, and has a customizable interface. Users can switch to night mode, customize the background and upload a photo that represents their Replika. The UI is focused on creating a personalized, cozy “environment” for conversations.
screenshot of Replika's customization settings
What you can do:
To capture some of Replika’s personalized touches in your own chatbot, let users change the background and color scheme of your user interface. Studies show that personalized content satisfies a person’s desire for control, reduces information overload and makes the experience more relevant and interesting.
For fluid conversation, write a long list of creative responses (I recommend the “yes, and” approach ) to keep conversation moving or for when your chatbot doesn’t understand a message.
4. Milo - website builder chatbot
Standing out from the norm, Milo greets you right at the top of An Artful Science’s homepage. The conversation appears like it’s floating and is well-integrated into the website’s quirky design.
screenshot of Milo's welcome message
Made with Landbot.io, Milo is a button-based chatbot that gives the user limited control over the conversation. However, for what it lacks in conversational abilities it makes up for with its entertaining script.
Milo is a lovable character that speaks and behaves like a longtime friend. The button responses you can choose to respond with are in step with the chatbot’s casual tone.
screenshot of conversation with Milo
A tasteful use of GIFs and images spice up the conversation. If you leave the page, Milo asks if you’d like to start again or continue from where you left off.
When you’re done speaking to Milo, you can just keep scrolling. There’s no lingering window in the corner or flashing notification beckoning you back into the conversation.
What you can do:
If you plan for your chatbot to welcome new visitors to your website, try integrating it into the landing page. It’s a rare approach, but that’s what makes it exciting.
Milo greets the user by stating its purpose and asking how the user is doing. Next, it offers a free gift visitors can claim if they continue chatting. Take a leaf out of Milo’s book and introduce your chatbot’s role in its welcome message. Keep your visitors hooked by giving them incentives to further engage with your bot.
5. Erica - banking chatbot
Erica is like Siri, but for banking. The Bank of America chatbot is voice- and chat-driven so customers can make text or voice commands to check all things bank account related.
The app-exclusive chatbot uses text, images and graphs to communicate a user’s spending habits, recurring charges, account balance, etc.
Its navy blue interface evokes trust and dependability, and Erica’s use of emoji and praise adds a human touch to conversations.
What you can do:
Create messages that share unique insights about your user’s habits. Use generated graphs, clear language and the rare emoji for a personalized yet professional feel.
Set expectations about what your chatbot can do by creating an About section similar to Erica’s. Include a few FAQ questions at the beginning of the conversation to help users quickly jump to the information they need.
Three things to consider when creating a chatbot UI
What is your chatbot’s purpose?
Many businesses don’t even need a chatbot , but their popularity and cool factor are just too alluring. If there’s no game plan or use for it, why waste the money and effort on building it?
Give your chatbot a purpose so it doesn’t become a talking website accessory. This can range from giving your brand a voice to helping customers with simple tasks. Comparing Milo and Erica, Milo lacks Erica’s functionality and intelligence, but does a great job at giving visitors a taste of the company’s brand and style.
If your chatbot will help with tasks, what does this include? Create a list of everything you want your chatbot to achieve and then break it down to what’s viable for your budget, time and customer base.
Also, determine where and how often you’ll use your chatbot. Will it live solely on your website, or do you want to offer it on a variety of channels?
How will your chatbot look, speak and behave?
It’s easier to write and plan a character when you have a profile to work with. We created a guide to help with this process. You’ll determine how your chatbot looks, behaves and speaks, as well as its strengths, tasks and weaknesses.
Once you know its characteristics, you may find it easier to create a catchy name for your chatbot . There doesn’t seem to be a trend or preference for which names work best (human names versus cutesy ones), so brainstorm options that fit your brand, are easy to pronounce and fit the look of your chatbot.
All of this preparation will help with script writing, the longest yet most important element in chatbot UI creation. You can use our post “6 steps for creating a smooth chatbot conversation flow” as a pain-free guide to help you get started.
No matter if you use a button-based, decision tree or contextual chatbot, its speaking mannerisms and tone are important for achieving that humanlike feel. Prepare for topic deviations, words or phrases that have double meanings and misunderstandings.
What platform and data will you use?
You have your choice of chatbot builder varieties, from advanced to no-code options. We created a list of the 10 must-try chatbot providers for every budget to help you decide.
To choose the right provider, consider how you want your users to interact with your chatbot — how can you simplify the interface so customers don’t have to press a lot of buttons and repeat themselves to get what they need?
Pumping your chatbot with data will also improve its ability to assist customers. Start with your company data (who you are, what you provide, what users can expect) and then group other interactions you’d expect your chatbot to have.
There are also pre-made data sets you can incorporate into your chatbot to improve its natural language processing, or if you need a place to start data-wise.
Test and monitor your chatbot
Ask your top clients if they would be interested in testing your chatbot before launch. It’s a great way to be transparent with your business and get honest feedback with a fresh perspective.
Include employees who weren’t involved in the creation process as well. They can assess if the chatbot “fits in” with your company culture and matches the right attitude. Puns are fun, but maybe not in life insurance.
Before launch, connect your chatbot to your live chat solution, such as Userlike. From the dashboard, your agents can monitor your chatbot and take handovers no matter where your chatbot is assisting customers — your website, social media and in messaging apps.
You can try Userlike’s free 14-day trial to get a feel for our platform. If it seems like a good fit, we’ll help you get started with a plan suitable for managing your chatbot.
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <libsmbclient.h>

static const char *test_str = "Hello world!\n";

int main(int argc, char **argv)
{
	SMBCCTX *context;
	SMBCFILE *file;
	smbc_open_fn smb_open;
	smbc_write_fn smb_write;
	smbc_lseek_fn smb_lseek;
	smbc_close_fn smb_close;
	off_t ret;
	int err;

	/* Create and initialize a libsmbclient context, then look up the
	 * function pointers for the operations we need. */
	context = smbc_new_context();
	smbc_init_context(context);

	smb_open = smbc_getFunctionOpen(context);
	smb_write = smbc_getFunctionWrite(context);
	smb_lseek = smbc_getFunctionLseek(context);
	smb_close = smbc_getFunctionClose(context);

	file = smb_open(context, "smb://localhost/test/test.txt",
			O_WRONLY | O_CREAT | O_TRUNC, 0);
	if (!file)
		return 1;

	/* Write the test string, then seek back to the start of the file. */
	smb_write(context, file, test_str, strlen(test_str));
	ret = smb_lseek(context, file, 0, SEEK_SET);
	err = errno;
	printf("Return value: %li\nError: %s\n", ret, strerror(err));

	smb_close(context, file);
	return 0;
}
Dolev–Yao model
From Wikipedia, the free encyclopedia
The Dolev–Yao model,[1] named after its authors Danny Dolev and Andrew Yao, is a formal model used to prove properties of interactive cryptographic protocols.[2][3]
The network
The network is represented by a set of abstract machines that can exchange messages. These messages consist of formal terms. These terms reveal some of the internal structure of the messages, but some parts will hopefully remain opaque to the adversary.
The adversary
The adversary in this model can overhear, intercept, and synthesize any message and is only limited by the constraints of the cryptographic methods used. In other words: "the attacker carries the message."
This omnipotence has been very difficult to model, and many threat models simplify it, as has been done for the attacker in ubiquitous computing.[4]
The algebraic model
Cryptographic primitives are modeled by abstract operators. For example, asymmetric encryption for a user x is represented by the encryption function E_x and the decryption function D_x. Their main properties are that their composition is the identity function (D_x(E_x(M)) = M) and that an encrypted message E_x(M) reveals nothing about M. Unlike in the real world, the adversary can neither manipulate the encryption's bit representation nor guess the key. The attacker may, however, re-use any messages that have been sent and therefore become known. The attacker can encrypt or decrypt these with any keys he knows, to forge subsequent messages.
A protocol is modeled as a set of sequential runs, alternating between queries (sending a message over the network) and responses (obtaining a message from the network).
Remark
The symbolic nature of the Dolev–Yao model makes it more manageable than computational models and accessible to algebraic methods but potentially less realistic. However, both kinds of models for cryptographic protocols have been related.[5] Also, symbolic models are very well suited to show that a protocol is broken, rather than secure, under the given assumptions about the attackers capabilities.
References
1. ^ Dolev, D.; Yao, A. C. (1983), "On the security of public key protocols" (PDF), IEEE Transactions on Information Theory, IT-29 (2): 198–208, doi:10.1109/tit.1983.1056650, S2CID 13643880
2. ^ Backes, Michael; Pfitzmann, Birgit; Waidner, Michael (2006), Soundness Limits of Dolev-Yao Models (PDF), Workshop on Formal and Computational Cryptography (FCC'06), affiliated with ICALP'06
3. ^ Chen, Quingfeng; Zhang, Chengqi; Zhang, Shichao (2008), Secure Transaction Protocol Analysis: Models and Applications, Lecture Notes in Computer Science / Programming and Software Engineering, ISBN 9783540850731
4. ^ Creese, Sadie; Goldsmith, Michael; Roscoe, Bill; Zakiuddin, Irfan (2003). The Attacker in Ubiquitous Computing Environments: Formalising the Threat Model (PDF). Proc. of the 1st Intl Workshop on Formal Aspects in Security and Trust (Technical report). pp. 83–97.
5. ^ Herzog, Jonathan (2005), A Computational Interpretation of Dolev-Yao Adversaries, p. 2005, CiteSeerX 10.1.1.94.2941
Solutions by everydaycalculation.com
Add 60/90 and 8/3
1st number: 60/90, 2nd number: 2 2/3
60/90 + 8/3 is 10/3.
Steps for adding fractions
1. Find the least common denominator or LCM of the two denominators:
LCM of 90 and 3 is 90
2. For the 1st fraction, since 90 × 1 = 90,
60/90 = (60 × 1)/(90 × 1) = 60/90
3. Likewise, for the 2nd fraction, since 3 × 30 = 90,
8/3 = (8 × 30)/(3 × 30) = 240/90
4. Add the two fractions:
60/90 + 240/90 = (60 + 240)/90 = 300/90
5. After reducing the fraction, the answer is 10/3
6. In mixed form: 3 1/3
© everydaycalculation.com
Linux/fs/anon_inodes.c
/*
 * fs/anon_inodes.c
 *
 * Copyright (C) 2007 Davide Libenzi <[email protected]>
 *
 * Thanks to Arnd Bergmann for code review and suggestions.
 * More changes for Thomas Gleixner suggestions.
 *
 */

#include <linux/cred.h>
#include <linux/file.h>
#include <linux/poll.h>
#include <linux/sched.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/mount.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/magic.h>
#include <linux/anon_inodes.h>

#include <asm/uaccess.h>

static struct vfsmount *anon_inode_mnt __read_mostly;
static struct inode *anon_inode_inode;

/*
 * anon_inodefs_dname() is called from d_path().
 */
static char *anon_inodefs_dname(struct dentry *dentry, char *buffer, int buflen)
{
	return dynamic_dname(dentry, buffer, buflen, "anon_inode:%s",
				dentry->d_name.name);
}

static const struct dentry_operations anon_inodefs_dentry_operations = {
	.d_dname	= anon_inodefs_dname,
};

static struct dentry *anon_inodefs_mount(struct file_system_type *fs_type,
				int flags, const char *dev_name, void *data)
{
	return mount_pseudo(fs_type, "anon_inode:", NULL,
			&anon_inodefs_dentry_operations, ANON_INODE_FS_MAGIC);
}

static struct file_system_type anon_inode_fs_type = {
	.name		= "anon_inodefs",
	.mount		= anon_inodefs_mount,
	.kill_sb	= kill_anon_super,
};

/**
 * anon_inode_getfile - creates a new file instance by hooking it up to an
 *                      anonymous inode, and a dentry that describe the "class"
 *                      of the file
 *
 * @name:    [in]    name of the "class" of the new file
 * @fops:    [in]    file operations for the new file
 * @priv:    [in]    private data for the new file (will be file's private_data)
 * @flags:   [in]    flags
 *
 * Creates a new file by hooking it on a single inode. This is useful for files
 * that do not need to have a full-fledged inode in order to operate correctly.
 * All the files created with anon_inode_getfile() will share a single inode,
 * hence saving memory and avoiding code duplication for the file/inode/dentry
 * setup. Returns the newly created file* or an error pointer.
 */
struct file *anon_inode_getfile(const char *name,
				const struct file_operations *fops,
				void *priv, int flags)
{
	struct qstr this;
	struct path path;
	struct file *file;

	if (IS_ERR(anon_inode_inode))
		return ERR_PTR(-ENODEV);

	if (fops->owner && !try_module_get(fops->owner))
		return ERR_PTR(-ENOENT);

	/*
	 * Link the inode to a directory entry by creating a unique name
	 * using the inode sequence number.
	 */
	file = ERR_PTR(-ENOMEM);
	this.name = name;
	this.len = strlen(name);
	this.hash = 0;
	path.dentry = d_alloc_pseudo(anon_inode_mnt->mnt_sb, &this);
	if (!path.dentry)
		goto err_module;

	path.mnt = mntget(anon_inode_mnt);
	/*
	 * We know the anon_inode inode count is always greater than zero,
	 * so ihold() is safe.
	 */
	ihold(anon_inode_inode);

	d_instantiate(path.dentry, anon_inode_inode);

	file = alloc_file(&path, OPEN_FMODE(flags), fops);
	if (IS_ERR(file))
		goto err_dput;
	file->f_mapping = anon_inode_inode->i_mapping;

	file->f_flags = flags & (O_ACCMODE | O_NONBLOCK);
	file->private_data = priv;

	return file;

err_dput:
	path_put(&path);
err_module:
	module_put(fops->owner);
	return file;
}
EXPORT_SYMBOL_GPL(anon_inode_getfile);

/**
 * anon_inode_getfd - creates a new file instance by hooking it up to an
 *                    anonymous inode, and a dentry that describe the "class"
 *                    of the file
 *
 * @name:    [in]    name of the "class" of the new file
 * @fops:    [in]    file operations for the new file
 * @priv:    [in]    private data for the new file (will be file's private_data)
 * @flags:   [in]    flags
 *
 * Creates a new file by hooking it on a single inode. This is useful for files
 * that do not need to have a full-fledged inode in order to operate correctly.
 * All the files created with anon_inode_getfd() will share a single inode,
 * hence saving memory and avoiding code duplication for the file/inode/dentry
 * setup. Returns new descriptor or an error code.
 */
int anon_inode_getfd(const char *name, const struct file_operations *fops,
		     void *priv, int flags)
{
	int error, fd;
	struct file *file;

	error = get_unused_fd_flags(flags);
	if (error < 0)
		return error;
	fd = error;

	file = anon_inode_getfile(name, fops, priv, flags);
	if (IS_ERR(file)) {
		error = PTR_ERR(file);
		goto err_put_unused_fd;
	}
	fd_install(fd, file);

	return fd;

err_put_unused_fd:
	put_unused_fd(fd);
	return error;
}
EXPORT_SYMBOL_GPL(anon_inode_getfd);

static int __init anon_inode_init(void)
{
	anon_inode_mnt = kern_mount(&anon_inode_fs_type);
	if (IS_ERR(anon_inode_mnt))
		panic("anon_inode_init() kernel mount failed (%ld)\n", PTR_ERR(anon_inode_mnt));

	anon_inode_inode = alloc_anon_inode(anon_inode_mnt->mnt_sb);
	if (IS_ERR(anon_inode_inode))
		panic("anon_inode_init() inode allocation failed (%ld)\n", PTR_ERR(anon_inode_inode));

	return 0;
}

fs_initcall(anon_inode_init);
This page was automatically generated by LXR 0.3.1 (source). • Linux is a registered trademark of Linus Torvalds • Contact us
joxeankoret
• Member for 10 years, 9 months
• Last seen more than a week ago
42 votes
Accepted
What are the targets of professional reverse software engineering?
36 votes
Where can I, as an individual, get malware samples to analyze?
30 votes
Decent GUI for GDB
17 votes
how can I diff two x86 binaries at assembly code level?
14 votes
What is an "opaque predicate"?
12 votes
How do AV vendors create signatures for polymorphic viruses?
12 votes
Accepted
Decoding the UPX ELF header file
11 votes
How to modify/replace a non exported function in a native code dll
10 votes
How do anti-virus programs catch a virus? How they detect it?
10 votes
Tool or data for analysis of binary code to detect CPU architecture
9 votes
Hooking functions in Linux and/or OSX?
9 votes
Are there any interactive decompilers besides HexRays?
8 votes
Accepted
Idapython: How to get the opcode bytes corresponding to an instruction?
8 votes
What is "overlapping instructions" obfuscation?
7 votes
How to check if an ELF file is UPX packed?
7 votes
Accepted
How to compile c, cpp and python code as "Released/Final" version?
7 votes
Accepted
IDAPython: How to get function argument values
6 votes
Do IDA Python plugins work with IDA free or only IDA pro?
6 votes
What is the difference between IDA and OllyDbg?
6 votes
Generating call graph for assembly instructions
6 votes
How can we determine that malware are related?
6 votes
Is there any simple open source Windows packer?
6 votes
What is your vulnerability discovery process?
5 votes
Accepted
How to design opaque predicates?
5 votes
Accepted
IDA Proximity viewer not finding obvious paths?
5 votes
Open-Source library for Complete Binary Disassembly
5 votes
Accepted
Wierd names in import table
5 votes
Accepted
Lifting up binaries of any arch into an intermediate language for static analysis
4 votes
Accepted
Is reverse engineering legal?
4 votes
reverse engineering methodology
Best Answer
The sum of the interior angles of a polygon with n sides is (n − 2) × 180°. A hexagon has six sides, so the sum of its angles is (6 − 2) × 180° = 4 × 180° = 720°.
Wiki User
2011-05-12 21:10:45
Q: What is the sum of the angles on a hexagon?
Splunk® Enterprise
Search Manual
Splunk Enterprise version 9.0 will no longer be supported as of June 14, 2024. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see How to upgrade Splunk Enterprise.
SPL and regular expressions
Regular expressions in the Splunk Search Processing Language (SPL) are Perl Compatible Regular Expressions (PCRE).
You can use regular expressions with the rex and regex commands. You can also use regular expressions with evaluation functions such as match and replace. See Evaluation functions in the Search Manual.
The following sections provide guidance on regular expressions in SPL searches.
Pipe characters
A pipe character ( | ) is used in regular expressions to specify an OR condition. For example, A | B means A or B.
Because pipe characters are used to separate commands in SPL, you must enclose a regular expression that uses the pipe character in quotation marks. The following search shows how to use quotation marks around a pipe character, which is interpreted by SPL as a search for the text "expression" OR "with pipe"..
...|regex "expression | with pipe"
Backslash characters in regular expressions
The backslash character ( \ ) is used in regular expressions to escape any special characters that have meaning in regular expressions, such as periods ( . ), double quotation marks ( " ), and backslashes themselves. For example, the period character is used in a regular expression to match any character, except a line break character. If you want to match a period character, you must escape the period character by specifying \. in your regular expression.
In searches that include a regular expression that contains a double backslash, like the file path c:\\temp, the search interprets the first backslash as a regular expression escape character. The file path is interpreted as c:\temp, because one of the backslashes is removed. You must escape both backslash characters in the file path by specifying 4 consecutive backslashes for the root portion of the file path, such as c:\\\\temp. For a longer file path, such as c:\\temp\example, you can specify c:\\\\temp\\example in your regular expression in the search string.
One reason you might need extra escaping backslashes in your searches is that the Splunk platform parses text twice; once for SPL and then again for regular expressions. Each parse applies its own use of backslashes in layers and treats each backslash as a special character that needs an additional backslash to make it literal. As a result, \\ in SPL becomes \ before it is parsed as a regular expression, and \\\\ in SPL becomes \\ before it is parsed as a regular expression.
See Backslashes in the Search Manual.
Avoid extra escaping backslash characters
To avoid using extra escaping backslashes in your searches, you can use the octal code \134 or the hexadecimal code \x5c in your regular expression. These codes are equivalent to the backslash character and get around the need to double-escape backslashes. For example, consider the following search, which extracts the characters ABC that follow 2 backslashes:
| makeresults | eval example="xyz\\ABC" | rex field=example max_match=3 ".*\\\(?<extract>.*)"
The search results look something like this:
time example extract
2023-09-20 17:20:59 xyz\ABC ABC
Instead of using 3 backslashes, you can get the same search results using \x5c in the regular expression, like this:
| makeresults | eval example="xyz\\ABC" | rex field=example max_match=3 ".*\x5c(?<extract>.*)"
More about regular expressions
For more information:
Last modified on 01 November, 2023
This documentation applies to the following versions of Splunk® Enterprise: 7.0.0, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.0.13, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10, 7.2.0, 7.2.1, 7.2.2, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 7.3.7, 7.3.8, 7.3.9, 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.6, 8.0.10, 8.1.0, 7.2.3, 8.0.8, 7.0.1, 8.0.7, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.1.9, 8.1.11, 8.1.12, 8.1.13, 8.1.14, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.2.6, 8.2.7, 8.2.8, 8.2.9, 8.2.10, 8.2.11, 8.2.12, 9.0.0, 9.0.1, 9.0.2, 9.0.3, 9.0.4, 9.0.5, 9.0.6, 9.0.7, 9.0.8, 9.0.9, 9.0.10, 9.1.0, 9.1.1, 9.1.2, 9.1.3, 9.1.4, 9.1.5, 9.2.0, 9.2.1, 9.2.2, 8.0.9, 8.1.1, 8.1.10
Community
Altera_Forum
Honored Contributor I
1,292 Views
Problem on Linux PCI Express driver for an Avalon-MM DMA reference design
Hi I'm Joel
I'm trying to use the PCI Express Avalon-MM DMA reference design (https://www.altera.com/products/reference-designs/all-reference-designs/interface/ref-pciexpress-ava...) but I'm having some troubles with the Linux driver.
I'm working on an Ubuntu 14.04 LTS, 64 bits machine. I've already my Stratix V FPGA plugged in the PCI Express port of my computer.
Using instructions in the literature (https://www.altera.com/content/dam/altera-www/global/en_us/pdfs/literature/an/an690.pdf), I used the command "sudo make" to compile the driver. When I run the "sudo ./install", I keep having the error displayed in "installation_error.png". Though I'm having that error, when typing "lsmod" I see that the module altera_dma is well loaded into the kernel.
At this point I'm not sure about how to fix that, I know the problem is that there is no file /dev/altera_dma and because, of that I can't run the user application as depicted by "run_error.png"
Please can you help? Do you have any idea of what might be wrong?
Best regards
6 Replies
Altera_Forum
Honored Contributor I
89 Views
To fix this I just modified the Device ID. It was initially 0xE003; I changed it to 0xE001 (Altera's default Device ID for the PCIe reference design). It can also be found in the SoC design when opening Qsys.
Altera_Forum
Honored Contributor I
89 Views
Hi Joel,
I am having the same problem with https://www.altera.com/en_us/pdfs/literature/an/an690.pdf, pretty much the same but I am using a Cyclone V.
I can see from the reference design specs the default device ID is the same as you mentioned above: 0xE003. Where exactly did you modify it? In any of the files for the driver/application on the host side? Did you modify it in the QSYS design?
Kind Regards,
Aidan
Altera_Forum
Honored Contributor I
89 Views
Hey Aidanob
If you go to "altera_dma_cmd.h", at line 6 you'll find
# define ALTERA_DMA_DID 0xE003.
The Device ID registered in your operating system and the one in the driver code must match. In my case, the device ID of my Stratix V FPGA board was 0xE001 from my operating system standpoint, so I had to modify the default value specified in the driver code. To check what device ID your operating system is seeing you can try doing:
- lspci
- cat /proc/bus/pci/devices |grep Altera
"lspci" should give something like: Altera Corporation Device e001 (rev01). That "e001" is your device ID
--- Quote Start ---
Hi Joel,
I am having the same problem with https://www.altera.com/en_us/pdfs/literature/an/an690.pdf, pretty much the same but I am using a Cyclone V.
I can see from the reference design specs the default device ID is the same as you mentioned above: 0xE003. Where exactly did you modify it? In any of the files for the driver/application on the host side? Did you modify it in the QSYS design?
Kind Regards,
Aidan
--- Quote End ---
Altera_Forum
Honored Contributor I
89 Views
Hi Joel,
Thanks for the detailed and clear response!
Kind Rgds,
Aidan
Altera_Forum
Honored Contributor I
89 Views
Hi Joel,
Thank you for sharing your experience! I am encountering the same problem. The only difference is that I am using an Arria 10 dev board.
I tried to follow your instruction but soon found that the output of lspci shows that I should use 0xE003 as DID.
# lspci | grep Altera 01:00.0 Non-VGA unclassified device: Altera Corporation Device e003 (rev 01)
May I ask if there's any other step you took between changing the DID and seeing /dev/altera_dma showing up?
Did you have to unplug/plug the FPGA, and/or reboot the host machine?
Any suggestion is much appreciated!
--- Quote Start ---
Hey Aidanob
If you go to "altera_dma_cmd.h", at line 6 you'll find
#define ALTERA_DMA_DID 0xE003
The Device ID registered in your operating system and the one in the driver code must match. In my case, the device ID of my Stratix V FPGA board was 0xE001 from my operating system standpoint, so I had to modify the default value specified in the driver code. To check what device ID your operating system is seeing you can try doing:
- lspci
- cat /proc/bus/pci/devices |grep Altera
"lspci" should give something like: Altera Corporation Device e001 (rev01). That "e001" is your device ID
--- Quote End ---
Altera_Forum
Honored Contributor I
89 Views
Hello,
I have the same issue. Is there any workaround btw?
Best Regards
Package: AnyServices$OCLAsTypeService
Coverage by method (M = missed, C = covered):
- AnyServices.OCLAsTypeService(Method, Object): instruction M: 5 C: 0 (0%), branch M: 0 C: 0 (100%), complexity M: 1 C: 0 (0%), line M: 2 C: 0 (0%), method M: 1 C: 0 (0%)
- getType(Call, ValidationServices, IValidationResult, IReadOnlyQueryEnvironment, List): instruction M: 140 C: 0 (0%), branch M: 16 C: 0 (0%), complexity M: 9 C: 0 (0%), line M: 28 C: 0 (0%), method M: 1 C: 0 (0%)
Coverage
1: /*******************************************************************************
2: * Copyright (c) 2015 Obeo.
3: * All rights reserved. This program and the accompanying materials
4: * are made available under the terms of the Eclipse Public License v1.0
5: * which accompanies this distribution, and is available at
6: * http://www.eclipse.org/legal/epl-v10.html
7: *
8: * Contributors:
9: * Obeo - initial API and implementation
10: *******************************************************************************/
11: package org.eclipse.acceleo.query.services;
12:
13: import java.lang.reflect.Method;
14: import java.util.ArrayList;
15: import java.util.Collection;
16: import java.util.Collections;
17: import java.util.Comparator;
18: import java.util.Iterator;
19: import java.util.LinkedHashSet;
20: import java.util.List;
21: import java.util.Set;
22:
23: import org.eclipse.acceleo.annotations.api.documentation.Documentation;
24: import org.eclipse.acceleo.annotations.api.documentation.Example;
25: import org.eclipse.acceleo.annotations.api.documentation.Other;
26: import org.eclipse.acceleo.annotations.api.documentation.Param;
27: import org.eclipse.acceleo.annotations.api.documentation.ServiceProvider;
28: import org.eclipse.acceleo.query.ast.Call;
29: import org.eclipse.acceleo.query.runtime.IReadOnlyQueryEnvironment;
30: import org.eclipse.acceleo.query.runtime.IService;
31: import org.eclipse.acceleo.query.runtime.IValidationResult;
32: import org.eclipse.acceleo.query.runtime.impl.AbstractServiceProvider;
33: import org.eclipse.acceleo.query.runtime.impl.JavaMethodService;
34: import org.eclipse.acceleo.query.runtime.impl.Nothing;
35: import org.eclipse.acceleo.query.runtime.impl.ValidationServices;
36: import org.eclipse.acceleo.query.validation.type.ClassType;
37: import org.eclipse.acceleo.query.validation.type.EClassifierType;
38: import org.eclipse.acceleo.query.validation.type.IType;
39: import org.eclipse.emf.common.util.Enumerator;
40: import org.eclipse.emf.ecore.EClass;
41: import org.eclipse.emf.ecore.EClassifier;
42: import org.eclipse.emf.ecore.EDataType;
43: import org.eclipse.emf.ecore.EEnum;
44: import org.eclipse.emf.ecore.EEnumLiteral;
45: import org.eclipse.emf.ecore.EObject;
46: import org.eclipse.emf.ecore.EPackage;
47:
48: //@formatter:off
49: @ServiceProvider(
50: value = "Services available for all types"
51: )
52: //@formatter:on
53: @SuppressWarnings({"checkstyle:javadocmethod", "checkstyle:javadoctype" })
54: public class AnyServices extends AbstractServiceProvider {
55:
56: /**
57: * Line separator constant.
58: */
59: private static final String LINE_SEP = System.getProperty("line.separator");
60:
61: /**
62: * The {@link IReadOnlyQueryEnvironment}.
63: */
64: private final IReadOnlyQueryEnvironment queryEnvironment;
65:
66: /**
67: * Constructor.
68: *
69: * @param queryEnvironment
70: * the {@link IReadOnlyQueryEnvironment}
71: */
72: public AnyServices(IReadOnlyQueryEnvironment queryEnvironment) {
73: this.queryEnvironment = queryEnvironment;
74: }
75:
76: @Override
77: protected IService getService(Method publicMethod) {
78: final IService result;
79:
80: if ("oclAsType".equals(publicMethod.getName())) {
81: result = new OCLAsTypeService(publicMethod, this);
82: } else {
83: result = new JavaMethodService(publicMethod, this);
84: }
85:
86: return result;
87: }
88:
89: // @formatter:off
90: @Documentation(
91: value = "Indicates whether the object \"o1\" is the same as the object \"o2\". For more " +
92: "information refer to the Object#equals(Object) method.",
93: params = {
94: @Param(name = "o1", value = "The object to compare for equality"),
95: @Param(name = "o2", value = "The reference object with which to compare")
96: },
97: result = "\"true\" if the object \"o1\" is the same as the object \"o2\", " +
98: "\"false\" otherwise",
99: examples = {
100: @Example(expression = "'Hello'.equals('World')", result = "false"),
101: @Example(expression = "'Hello'.equals('Hello')", result = "true")
102: }
103: )
104: // @formatter:on
105: public Boolean equals(Object o1, Object o2) {
106: final boolean result;
107:
108: if (o1 == null) {
109: result = o2 == null;
110: } else {
111: result = o1.equals(o2);
112: }
113:
114: return Boolean.valueOf(result);
115: }
116:
117: // @formatter:off
118: @Documentation(
119: value = "Indicates whether the object \"o1\" is a different object from the object \"o2\".",
120: params = {
121: @Param(name = "o1", value = "The object to compare"),
122: @Param(name = "o2", value = "The reference object with which to compare")
123: },
124: result = "\"true\" if the object \"o1\" is not the same as the object \"o2\", " +
125: "\"false\" otherwise.",
126: examples = {
127: @Example(expression = "'Hello'.differs('World')", result = "true"),
128: @Example(expression = "'Hello'.differs('Hello')", result = "false")
129: }
130: )
131: // @formatter:on
132: public Boolean differs(Object o1, Object o2) {
133: return Boolean.valueOf(!equals(o1, o2));
134: }
135:
136: // @formatter:off
137: @Documentation(
138: value = "Returns the concatenation of self (as a String) and the given string \"s\".",
139: params = {
140: @Param(name = "self", value = "The current object at the end of which to append \"s\"."),
141: @Param(name = "s", value = "The string we want to append at the end of the current object's string representation.")
142: },
143: result = "The string representation of self for which we added the string \"s\".",
144: examples = {
145: @Example(expression = "42.add(' times')", result = "'42 times'")
146: }
147: )
148: // @formatter:on
149: public String add(Object self, String s) {
150: final String result;
151:
152: if (s == null) {
153: result = toString(self);
154: } else {
155: result = toString(self) + s;
156: }
157:
158: return result;
159: }
160:
161: // @formatter:off
162: @Documentation(
163: value = "Returns the concatenation of the current string and the given object \"any\" (as a String).",
164: params = {
165: @Param(name = "self", value = "The current string."),
166: @Param(name = "any", value = "The object we want to append, as a string, at the end of the current string.")
167: },
168: result = "The current string with the object \"any\" appended (as a String).",
169: examples = {
170: @Example(expression = "'times '.add(42)", result = "'times 42'")
171: }
172: )
173: // @formatter:on
174: public String add(String self, Object any) {
175: final String result;
176:
177: if (self == null) {
178: result = toString(any);
179: } else {
180: result = self + toString(any);
181: }
182:
183: return result;
184: }
185:
186: // @formatter:off
187: @Documentation(
188: value = "Casts the current object to the given type.",
189: params = {
190: @Param(name = "object", value = "The object to cast"),
191: @Param(name = "type", value = "The type to cast the object to")
192: },
193: result = "The current object cast to a \"type\"",
194: examples = {
195: @Example(
196: expression = "anEPackage.oclAsType(ecore::EPackage)", result = "anEPackage",
197: others = {
198: @Other(
199: language = Other.ACCELEO_3, expression = "anEPackage.oclAsType(ecore::EPackage)", result = "anEPackage"
200: )
201: }
202: ),
203: @Example(
204: expression = "anEPackage.oclAsType(ecore::EClass)", result = "anEPackage",
205: others = {
206: @Other(
207: language = Other.ACCELEO_3, expression = "anEPackage.oclAsType(ecore::EClass)", result = "oclInvalid"
208: )
209: }
210: ),
211: },
212: comment = "Contrary to Acceleo 3, the type is ignored, the given object will be returned directly."
213: )
214: // @formatter:on
215: public Object oclAsType(Object object, Object type) {
216: if (oclIsKindOf(object, type)) {
217: return object;
218: }
219: throw new ClassCastException(object + " cannot be cast to " + type);
220: }
221:
222: // @formatter:off
223: @Documentation(
224: value = "Evaluates to \"true\" if the type of the object o1 conforms to the type \"classifier\". That is, " +
225: "o1 is of type \"classifier\" or a subtype of \"classifier\".",
226: params = {
227: @Param(name = "object", value = "The reference Object we seek to test."),
228: @Param(name = "type", value = "The expected supertype classifier.")
229: },
230: result = "\"true\" if the object o1 is a kind of the classifier, \"false\" otherwise.",
231: examples = {
232: @Example(expression = "anEPackage.oclIsKindOf(ecore::EPackage)", result = "true"),
233: @Example(expression = "anEPackage.oclIsKindOf(ecore::ENamedElement)", result = "true")
234: }
235: )
236: // @formatter:on
237: public Boolean oclIsKindOf(Object object, Object type) {
238: Boolean result;
239: if (object == null && type != null) {
240: // OCL considers "null" (OclVoid) to be compatible with everything.
241: // AQL considers it incompatible with anything.
242: result = false;
243: } else if (type instanceof EClass) {
244: EClass eClass = (EClass)type;
245: if (object instanceof EObject) {
246: result = eClass.isInstance(object);
247: } else {
248: result = false;
249: }
250: } else if (type instanceof EEnum) {
251: if (object instanceof EEnumLiteral) {
252: result = ((EEnumLiteral)object).getEEnum().equals(type);
253: } else if (object instanceof Enumerator) {
254: EEnumLiteral literal = ((EEnum)type).getEEnumLiteral(((Enumerator)object).getName());
255: result = literal.getEEnum().equals(type);
256: } else {
257: result = false;
258: }
259: } else if (type instanceof EDataType) {
260: result = ((EDataType)type).isInstance(object);
261: } else if (object != null && type instanceof Class<?>) {
262: result = ((Class<?>)type).isInstance(object);
263: } else {
264: result = false;
265: }
266: return result;
267: }
268:
269: // @formatter:off
270: @Documentation(
271: value = "Evaluates to \"true\" if the object o1 if of the type \"classifier\" but not a subtype of the " +
272: "\"classifier\".",
273: params = {
274: @Param(name = "object", value = "The reference Object we seek to test."),
275: @Param(name = "type", value = "The expected type classifier.")
276: },
277: result = "\"true\" if the object o1 is a type of the classifier, \"false\" otherwise.",
278: examples = {
279: @Example(expression = "anEPackage.oclIsKindOf(ecore::EPackage)", result = "true"),
280: @Example(expression = "anEPackage.oclIsKindOf(ecore::ENamedElement)", result = "false")
281: }
282: )
283: // @formatter:on
284: public Boolean oclIsTypeOf(Object object, Object type) {
285: Boolean result;
286: if (object == null && type != null) {
287: // OCL considers "null" (OclVoid) to be compatible with everything.
288: // AQL considers it incompatible with anything.
289: result = false;
290: } else if (type instanceof EClass) {
291: EClass eClass = (EClass)type;
292: if (object instanceof EObject) {
293: result = eClass == ((EObject)object).eClass();
294: } else {
295: result = false;
296: }
297: } else if (type instanceof EEnum) {
298: if (object instanceof EEnumLiteral) {
299: result = ((EEnumLiteral)object).getEEnum().equals(type);
300: } else if (object instanceof Enumerator) {
301: EEnumLiteral literal = ((EEnum)type).getEEnumLiteral(((Enumerator)object).getName());
302: result = literal.getEEnum().equals(type);
303: } else {
304: result = false;
305: }
306: } else if (type instanceof EDataType) {
307: result = ((EDataType)type).isInstance(object);
308: } else if (object != null && type instanceof Class<?>) {
309: result = ((Class<?>)type).equals(object.getClass());
310: } else {
311: result = false;
312: }
313: return result;
314: }
315:
316: // @formatter:off
317: @Documentation(
318: value = "Returns a string representation of the current object.",
319: params = {
320: @Param(name = "self", value = "The current object")
321: },
322: result = "a String representation of the given Object. For Collections, this will be the concatenation of " +
323: "all contained Objects' toString.",
324: examples = {
325: @Example(expression = "42.toString()", result = "'42'")
326: }
327: )
328: // @formatter:on
329: public String toString(Object object) {
330: final StringBuffer buffer = new StringBuffer();
331: if (object instanceof Collection<?>) {
332: final Iterator<?> childrenIterator = ((Collection<?>)object).iterator();
333: while (childrenIterator.hasNext()) {
334: buffer.append(toString(childrenIterator.next()));
335: }
336: } else if (object != null && !(object instanceof Nothing)) {
337: final String toString = object.toString();
338: if (toString != null) {
339: buffer.append(toString);
340: }
341: }
342: // else return empty String
343: return buffer.toString();
344: }
345:
346: // @formatter:off
347: @Documentation(
348: value = "Returns a string representation of the current environment.",
349: params = {
350: @Param(name = "self", value = "The current object")
351: },
352: result = "a string representation of the current environment.",
353: examples = {
354: @Example(expression = "42.trace()", result = "'Metamodels:\n\thttp://www.eclipse.org/emf/2002/Ecore\n" +
355: "Services:\n\torg.eclipse.acceleo.query.services.AnyServices\n\t\tpublic java.lang.String org." +
356: "eclipse.acceleo.query.services.AnyServices.add(java.lang.Object,java.lang.String)\n\t\t...\nreceiver: 42\n'")
357: }
358: )
359: // @formatter:on
360: public String trace(Object object) {
361: final StringBuilder result = new StringBuilder();
362:
363: result.append("Metamodels:" + LINE_SEP);
364: for (EPackage ePgk : queryEnvironment.getEPackageProvider().getRegisteredEPackages()) {
365: result.append("\t" + ePgk.getNsURI() + LINE_SEP);
366: }
367: result.append("Services:" + LINE_SEP);
368: final List<IService> services = new ArrayList<IService>(queryEnvironment.getLookupEngine()
369: .getRegisteredServices());
370: Collections.sort(services, new Comparator<IService>() {
371:
372: /**
373: * {@inheritDoc}
374: *
375: * @see java.util.Comparator#compare(java.lang.Object, java.lang.Object)
376: */
377: @Override
378: public int compare(IService service1, IService service2) {
379: final int result;
380:
381: if (service1.getPriority() < service2.getPriority()) {
382: result = -1;
383: } else if (service1.getPriority() > service2.getPriority()) {
384: result = 1;
385: } else {
386: result = service1.getName().compareTo(service2.getName());
387: }
388: return result;
389: }
390:
391: });
392: for (IService service : services) {
393: result.append("\t\t" + service.getLongSignature() + LINE_SEP);
394: }
395: result.append("receiver: ");
396: result.append(toString(object) + LINE_SEP);
397:
398: return result.toString();
399: }
400:
401: private static class OCLAsTypeService extends FilterService {
402: public OCLAsTypeService(Method publicMethod, Object serviceInstance) {
403: super(publicMethod, serviceInstance);
404: }
405:
406: @Override
407: public Set<IType> getType(Call call, ValidationServices services, IValidationResult validationResult,
408: IReadOnlyQueryEnvironment environment, List<IType> argTypes) {
409: final Set<IType> result = new LinkedHashSet<IType>();
410:
411: final IType receiverType = argTypes.get(0);
412: final IType filterType = argTypes.get(1);
413:• if (services.lower(receiverType, filterType) != null) {
414: Object resultType = filterType.getType();
415:• if (resultType instanceof EClassifier) {
416: result.add(new EClassifierType(environment, (EClassifier)resultType));
417:• } else if (resultType instanceof Class) {
418: result.add(new ClassType(environment, (Class<?>)resultType));
419:• } else if (resultType != null) {
420: result.add(services.nothing("Unknown type %s", resultType));
421: } else {
422: result.add(services.nothing("Unknown type %s", "null"));
423: }
424: } else {
425:• if (receiverType instanceof EClassifierType
426: && !environment.getEPackageProvider().isRegistered(
427:• ((EClassifierType)receiverType).getType())) {
428: result.add(services.nothing("%s is not registered within the current environment.",
429: receiverType));
430:• } else if (filterType instanceof EClassifierType
431: && !environment.getEPackageProvider().isRegistered(
432:• ((EClassifierType)filterType).getType())) {
433: result.add(services.nothing("%s is not registered within the current environment.",
434: filterType));
435: } else {
436: result.add(services
437: .nothing("%s is not compatible with type %s", receiverType, filterType));
438: }
439: }
440:
441: return result;
442: }
443: }
444: }
Util InventoryFilter - Find items easily
Discussion in 'Resources' started by FisheyLP, Jul 11, 2016.
Thread Status:
Not open for further replies.
1. Offline
FisheyLP
Hey guys,
I made this class to find items in an inventory easily by various filters :)
Class (open)
Code:
import java.util.List;
import java.util.ArrayList;
import java.util.Arrays;
import org.bukkit.inventory.Inventory;
import org.bukkit.inventory.ItemStack;
import org.bukkit.inventory.meta.ItemMeta;
import org.bukkit.Material;
/*
* Created by FisheyLP
*/
public class InventoryFilter {
private Material material;
private String displayName;
private List<String> lore;
private short minDurability = -1, maxDurability = -1;
private int minAmount = -1, maxAmount = -1;
private byte data = -1;
public InventoryFilter material(Material material) {
this.material = material;
return this;
}
public InventoryFilter amount(int amount) {
minAmount = maxAmount = amount;
return this;
}
public InventoryFilter minAmount(int amount) {
minAmount = amount;
return this;
}
public InventoryFilter maxAmount(int amount) {
maxAmount = amount;
return this;
}
public InventoryFilter displayName(String displayName) {
this.displayName = displayName;
return this;
}
public InventoryFilter lore(String... lore) {
if (lore == null) return lore((List<String>) null);
return lore(Arrays.asList(lore));
}
public InventoryFilter lore(List<String> lore) {
this.lore = lore;
return this;
}
public InventoryFilter durability(short durability) {
minDurability = maxDurability = durability;
return this;
}
public InventoryFilter minDurability(short durability) {
this.minDurability = durability;
return this;
}
public InventoryFilter maxDurability(short durability) {
this.maxDurability = durability;
return this;
}
public InventoryFilter data(byte data) {
this.data = data;
return this;
}
public List<ItemStack> apply(Inventory inv) {
List<ItemStack> items = new ArrayList<ItemStack>();
for (ItemStack item : inv.getContents())
if (appliesTo(item)) items.add(item);
return items;
}
public List<ItemStack> applyReverse(Inventory inv) {
List<ItemStack> items = new ArrayList<ItemStack>();
for (ItemStack item : inv.getContents())
if (!appliesTo(item)) items.add(item);
return items;
}
public boolean appliesTo(ItemStack item) {
if (item == null) return false;
if (material != null && material != item.getType()) return false;
if (minAmount != -1 && item.getAmount() < minAmount) return false;
if (maxAmount != -1 && item.getAmount() > maxAmount) return false;
if (minDurability != -1 && item.getDurability() < minDurability) return false;
if (maxDurability != -1 && item.getDurability() > maxDurability) return false;
if (data != -1 && data != item.getData().getData()) return false;
if (item.hasItemMeta()) {
ItemMeta meta = item.getItemMeta();
if (displayName != null && !displayName.equals(meta.getDisplayName())) return false;
if (lore != null && !lore.equals(meta.getLore())) return false;
}
return true;
}
}
Example usage:
Code:
Inventory inv = ...
InventoryFilter filter = new InventoryFilter().material(Material.DIAMOND_SWORD)
.displayName("§aExcalibur").maxDurability(100);
List<ItemStack> results = filter.apply(inv);
You can now do something with the results, for example change the displayname for all of them.
Code:
for (ItemStack item : results) {
ItemMeta meta = item.getItemMeta();
meta.setDisplayName("§eExcalibur");
item.setItemMeta(meta);
// update item in inventory
}
You can edit the filter easily:
Code:
filter.displayName("§eExcalibur").maxDurability(50);
And for checking single items:
Code:
ItemStack item = ...
if (filter.appliesTo(item)) {
// do stuff
}
Or get every item that doesn't match the filter:
Code:
List<ItemStack> result = filter.applyReverse(inv);
To remove single filters, either set it to -1 if it's parameter is a number, or null if not:
Code:
filter = filter.durability(-1).lore(null);
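The fluent, return-this builder style used above can be illustrated independently of Bukkit. Here is a minimal sketch of the same pattern; the Item class is a hypothetical stand-in for ItemStack, and -1/null mean "don't filter on this field", exactly as in the class above:

```java
import java.util.ArrayList;
import java.util.List;

public class FilterDemo {
    // Minimal stand-in for an inventory item.
    static class Item {
        final String name;
        final int amount;
        Item(String name, int amount) { this.name = name; this.amount = amount; }
    }

    // Builder-style filter: every setter returns `this` so calls can be chained.
    static class ItemFilter {
        private String name;
        private int minAmount = -1;

        ItemFilter name(String name) { this.name = name; return this; }
        ItemFilter minAmount(int amount) { this.minAmount = amount; return this; }

        boolean appliesTo(Item item) {
            if (item == null) return false;
            if (name != null && !name.equals(item.name)) return false;
            if (minAmount != -1 && item.amount < minAmount) return false;
            return true;
        }

        List<Item> apply(List<Item> items) {
            List<Item> result = new ArrayList<>();
            for (Item item : items)
                if (appliesTo(item)) result.add(item);
            return result;
        }
    }

    public static void main(String[] args) {
        List<Item> inv = new ArrayList<>();
        inv.add(new Item("sword", 1));
        inv.add(new Item("apple", 5));
        inv.add(new Item("apple", 2));

        ItemFilter filter = new ItemFilter().name("apple").minAmount(3);
        System.out.println(filter.apply(inv).size()); // prints 1
    }
}
```

The design choice here is that each setter mutates the filter and returns it, so a filter can be built, reused, and edited in one expression without temporary variables.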
Methods explained (open)
Code:
material(Material material)
item's material is the same as material
Code:
amount(int amount)
item's amount is the same as amount
Code:
minAmount(int amount)
item's amount is >= amount
Code:
maxAmount(int amount)
item's amount is <= amount
Code:
displayName(String displayName)
item's displayName equals displayName
Code:
lore(String... lore)
lore(List<String> lore)
item's lore equals lore
Code:
durability(short durability)
item's durability is the same as durability
Code:
minDurability(short durability)
item's durability is >= durability
Code:
maxDurability(short durability)
item's durability is <= durability
Code:
data(byte data)
item's data is the same as data
Code:
apply(Inventory inv)
returns a List<ItemStack> with the items that match the filter
Code:
applyReverse(Inventory inv)
returns a List<ItemStack> with the items that don't match the filter
Code:
appliesTo(ItemStack item)
returns true if the item matches the filter
Funfact (open)
I wrote all this in the bukkit forums new-thread-editor and didn't even test it :p But it should work ... (I hope it does :oops:)
Didn't know how to implement Enchantments (too complicated) so I left them out lel
Last edited: Jul 12, 2016
ChipDev likes this.
2. Offline
GLookLike
Code:
public InventoryFilter minDurability(short durability){
this.minDurability = minDurability;
return this;
}
There's an error in this.minDurability = minDurability;, it says the assignment to minDurability has no effects.
Code:
if(material != null && material != item.getMaterial())
return false;
There's an error in item.getMaterial, it says the method is undefined for the type ItemStack.
3. Offline
bwfcwalshy Retired Staff
He has fixed both these issues in the edited post.
4. Offline
hsndmrts98
oh nice class ;( i have to use it immediately.
Tribute Page position problem
Tell us what’s happening:
The div element with the id "tribute-link" won't position: I have set its position property to relative and given it offset values, but it still doesn't move.
Your code so far
'''
body {
  background-color: rgba(222, 197, 151, 0.3);
}
#main {
  background-color: grey;
  width: 90%;
  height: 1200px;
  margin-left: 50px;
}
#title {
  width: 100%;
  height: 10%;
  background-color: #9eb8b2;
}
#title h1 {
  position: relative;
  top: 40px;
  margin-left: 270px;
  font-family: 'Advent Pro', sans-serif;
}
#img-div {
  width: 50%;
  height: 50%;
  background-color: ;
  position: relative;
  left: 200px;
  box-shadow: 5px 5px 70px;
}
#image {
  width: 100%;
  height: 100%;
}
#tribute-info {
  background-color: #9eb8b2;
}
#tribute-info h2 {
  text-align: center;
#tribute-link {
  position: relative;
  background-color: black;
  left: 50px;
  top: 50px;
}
#tribute-link p a {
  target: _blank;
  border: 0.1px solid;
  border-color: white;
  padding: 5px;
  color: white;
}
tribute for abhinandhan
ABHINANDHAN VARTHAMAN
ABHINANDHAN VARTHAMAN
<div id="img-div">
<img id="image" src="C:\Users\veenu\Desktop\htm\abhinandhan.jpg">
</div>
<div id="tribute-info">
<h2>Tribute for Abhinandhan Varthaman </h2>
<h4> Who is Abhinandhan?</h4>
<p>Abhinandan Varthaman is a wing-commander in the Indian Air Force. In the 2019 India-Pakistan standoff, he was held captive for 60 hours in Pakistan after his aircraft was shot down in an aerial dogfight.</p>
</div>
<div id="tribute-link">
<p>To know more about Abhinandhan..<a href="https://en.wikipedia.org/wiki/Abhinandan_Varthaman">click here</a> </p>
</div>
'''
**Your browser information:** Chrome 75 on Windows 7. User Agent is: `Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36`.
Link to the challenge:
Please make a Codepen with your project and link to it. You can’t use the local file path you are using for the image, see if you can find a picture on the web and link to it directly.
You didn’t close the #tribute-info h2 style block.
#tribute-info h2 {
text-align:center;
#tribute-link {
position:relative;
background-color:black;
left:50px;
top:50px;
}
Should be:
#tribute-info h2 {
text-align: center;
}
#tribute-link {
position: relative;
background-color: black;
left: 50px;
top: 50px;
}
6.3 Structures And Processing
(require scribble/core) package: scribble-lib
A document is represented as a part, as described in Parts, Flows, Blocks, and Paragraphs. This representation is intended to be independent of its eventual rendering, and it is intended to be immutable; rendering extensions and specific data in a document can collude arbitrarily, however.
A document is processed in four passes:

- The traverse pass iterates through the document in document order, so that information from one part of a document can be communicated to later parts of the same document.
- The collect pass globally gathers information in the document, such as targets for hyperlinking.
- The resolve pass matches hyperlink references with targets and expands delayed blocks and delayed elements.
- The render pass generates the resulting document in a particular output format.
None of the passes mutate the document representation. Instead, the traverse pass, collect pass, and resolve pass accumulate information in a side hash table, collect-info table, and resolve-info table. The collect pass and resolve pass are effectively specialized version of traverse pass that work across separately built documents.
6.3.1 Parts, Flows, Blocks, and Paragraphs
This diagram shows the large-scale structure of the type hierarchy for Scribble documents. A box represents a struct or a built-in Racket type; for example part is a struct. The bottom portion of a box shows the fields; for example part has three fields, title, blocks, and subparts. The substruct relationship is shown vertically with navy blue lines connected by a triangle; for example, a compound-paragraph is a block. The types of values on fields are shown via dark red lines in the diagram. Doubled lines represent lists and tripled lines represent lists of lists; for example, the blocks field of compound-paragraph is a list of blocks. Dotted lists represent functions that compute elements of a given field; for example, the block field of a traverse-block struct is a function that computes a block.
The diagram is not completely accurate: a table may have 'cont in place of a block in its cells field, and the types of fields are only shown if they are other structs in the diagram. A prose description with more detail follows the diagram.
[Diagram: the Scribble document type hierarchy, as described above.]
A part is an instance of part; among other things, it has a title content, an initial flow, and a list of subsection parts. There is no difference between a part and a full document; a particular source module just as easily defines a subsection (incorporated via include-section) as a document.
A flow is a list of blocks.
A block is either a table, an itemization, a nested flow, a paragraph, a compound paragraph, a traverse block, or a delayed block.
Changed in version 1.23 of package scribble-lib: Changed the handling of convertible? values to recognize a 'text conversion and otherwise use write.
6.3.2 Tags
A tag is a list containing a symbol and either a string, a generated-tag instance, or an arbitrary list. The symbol effectively identifies the type of the tag, such as 'part for a tag that links to a part, or 'def for a Racket function definition. The symbol also effectively determines the interpretation of the second half of the tag.
A part can have a tag prefix, which is effectively added onto the second item within each tag whose first item is 'part, 'tech, or 'cite, or whose second item is a list that starts with 'prefixable:
The prefix is used for reference outside the part, including the use of tags in the part’s tags field. Typically, a document’s main part has a tag prefix that applies to the whole document; references to sections and defined terms within the document from other documents must include the prefix, while references within the same document omit the prefix. Part prefixes can be used within a document as well, to help disambiguate references within the document.
Some procedures accept a “tag” that is just the string part of the full tag, where the symbol part is supplied automatically. For example, section and secref both accept a string “tag”, where 'part is implicit.
The scribble/tag library provides functions for constructing tags.
6.3.3 Styles
A style combines a style name with a list of style properties in a style structure. A style name is either a string, symbol, or #f. A style property can be anything, including a symbol or a structure such as color-property.
A style has a single style name, because the name typically corresponds to a configurable instruction to a renderer. For example, with Latex output, a string style name corresponds to a Latex command or environment. For more information on how string style names interact with configuration of a renderer, see Extending and Configuring Scribble Output. Symbolic style names, meanwhile, provide a simple layer of abstraction between the renderer and documents for widely supported style; for example, the 'italic style name is supported by all renderers.
Style properties within a style compose with style names and other properties. Again, symbols are often used for properties that are directly supported by renderers. For example, 'unnumbered style property for a part renders the part without a section number. Many properties are renderer-specific, such as a hover-property structure that associates text with an element to be shown in an HTML display when the mouse hovers over the text.
6.3.4 Collected and Resolved Information
The collect pass, resolve pass, and render pass processing steps all produce information that is specific to a rendering mode. Concretely, the operations are all represented as methods on a render<%> object.
The result of the collect method is a collect-info instance. This result is provided back as an argument to the resolve method, which produces a resolve-info value that encapsulates the results from both iterations. The resolve-info value is provided back to the render method for final rendering.
Optionally, before the resolve method is called, serialized information from other documents can be folded into the collect-info instance via the deserialize-info method. Other methods provide serialized information out of the collected and resolved records.
During the collect pass, the procedure associated with a collect-element instance can register information with collect-put!.
During the resolve pass, collected information for a part can be extracted with part-collected-info, which includes a part’s number and its parent part (or #f). More generally, the resolve-get method looks up information previously collected. This resolve-time information is normally obtained by the procedure associated with a delayed block or delayed element.
The resolve-get method accepts both a part and a resolve-info argument. The part argument enables searching for information in each enclosing part before sibling parts.
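A delayed element typically performs such a resolve-time lookup; in the sketch below, the info key '(my-key "example") is hypothetical and stands for something registered with collect-put! during an earlier collect pass:

```racket
#lang racket/base
(require scribble/core)

;; The resolve procedure receives the renderer, the enclosing part,
;; and the resolve-info accumulated by earlier passes.
(define looked-up
  (delayed-element
   (lambda (renderer part ri)
     (let ([v (resolve-get part ri '(my-key "example"))])
       (if (string? v) v "???")))  ; fall back if nothing was collected
   (lambda () "???")               ; sizer substitute (for element-width)
   (lambda () "???")))             ; substitute before the collect pass
```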
6.3.5 Structure Reference
struct
(struct part (tag-prefix
tags
title-content
style
to-collect
blocks
parts)
#:extra-constructor-name make-part)
tag-prefix : (or/c #f string?)
tags : (listof tag?)
title-content : (or/c #f list?)
style : style?
to-collect : list?
blocks : (listof block?)
parts : (listof part?)
The tag-prefix field determines the optional tag prefix for the part.
The tags field lists tags that each link to the section. Normally, tags should be a non-empty list, so that hyperlinks can target the section.
The title-content field holds the part’s title, if any.
For the style field, the currently recognized symbolic style names are as follows:
The recognized style properties are as follows:
The to-collect field contains content that is inspected during the collect pass, but ignored in later passes (i.e., it doesn’t directly contribute to the output).
The blocks field contains the part’s initial flow (before sub-parts).
The parts field contains sub-parts.
Changed in version 1.25 of package scribble-lib: Added 'no-index support.
Changed in version 1.26: Added link-render-style support.
struct
(struct paragraph (style content)
#:extra-constructor-name make-paragraph)
style : style?
content : content?
A paragraph has a style and a content.
For the style field, a string style name corresponds to a CSS class for HTML output or a macro for Latex output (see Implementing Styles). The following symbolic style names are recognized:
When a paragraph’s style is #f, then it is boxable in the sense of box-mode for Latex output.
The currently recognized style properties are as follows:
struct
(struct table (style blockss)
#:extra-constructor-name make-table)
style : style?
blockss : (listof (listof (or/c block? 'cont)))
See also the tabular function.
A table has, roughly, a list of lists of blocks. A cell in the table can span multiple columns by using 'cont instead of a block in the following columns (i.e., for all but the first in a set of cells that contain a single block).
Within style, a string style name corresponds to a CSS class for HTML output or an environment for Latex output (see Implementing Styles). The following symbolic style names are also recognized:
The following style properties are currently recognized:
For Latex output, a paragraph as a cell value is not automatically line-wrapped, unless a vertical alignment is specified for the cell through a table-cells or table-columns style property. To get a line-wrapped paragraph, use a compound-paragraph or use an element with a string style and define a corresponding Latex macro in terms of \parbox. For Latex output of blocks in the flow that are nested-flows, itemizations, compound-paragraphs, or delayed-blocks, the block is wrapped with minipage using \linewidth divided by the column count as the width.
struct
(struct itemization (style blockss)
#:extra-constructor-name make-itemization)
style : style?
blockss : (listof (listof block?))
An itemization has a style and a list of flows.
In style, a string style name corresponds to a CSS class for HTML output or a macro for Latex output (see Implementing Styles). In addition, the following symbolic style names are recognized:
The following style properties are currently recognized:
struct
(struct nested-flow (style blocks)
#:extra-constructor-name make-nested-flow)
style : any/c
blocks : (listof block?)
A nested flow has a style and a flow.
In style, the style name is normally a string that corresponds to a CSS class for HTML <blockquote> output or a Latex environment (see Implementing Styles). The following symbolic style names are recognized:
The following style properties are currently recognized:
struct
(struct compound-paragraph (style blocks)
#:extra-constructor-name make-compound-paragraph)
style : style?
blocks : (listof block?)
A compound paragraph has a style and a list of blocks.
For HTML, a paragraph block in blocks is rendered without a <p> tag, unless the paragraph has a style with a non-#f style name. For Latex, each block in blocks is rendered with a preceding \noindent, unless the block has the 'never-indents property (checking recursively in a nested-flow or compound-paragraph if the nested-flow or compound-paragraph itself has no 'never-indents property).
The style field of a compound paragraph is normally a string that corresponds to a CSS class for HTML output or Latex environment for Latex output (see Implementing Styles). The following style properties are currently recognized:
struct
(struct traverse-block (traverse)
#:extra-constructor-name make-traverse-block)
traverse : block-traverse-procedure/c
Produces another block during the traverse pass, eventually.
The traverse procedure is called with get and set procedures to get and set symbol-keyed information; the traverse procedure should return either a block (which effectively takes the traverse-block’s place) or a procedure like traverse to be called in the next iteration of the traverse pass.
All traverse-elements and traverse-blocks that have not been replaced are forced in document order relative to each other during an iteration of the traverse pass.
The get procedure passed to traverse takes a symbol and any value to act as a default; it returns information registered for the symbol or the given default if no value has been registered. The set procedure passed to traverse takes a symbol and a value to registered for the symbol.
See also cond-block in scriblib/render-cond. The symbol 'scribble:current-render-mode is automatically registered to a list of symbols that describe the target of document rendering. The list contains 'html when rendering to HTML, 'latex when rendering via Latex, and 'text when rendering to text. The registration of 'scribble:current-render-mode cannot be changed via set.
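A minimal sketch of the get/set protocol (the 'my-count key is hypothetical, registered elsewhere by another traverse procedure):

```racket
#lang racket/base
(require scribble/core)

;; A traverse-block that reports a count registered during the
;; traverse pass; returning a block replaces the traverse-block.
(define count-report
  (traverse-block
   (lambda (get set)
     (paragraph plain
                (format "Total so far: ~a" (get 'my-count 0))))))
```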
struct
(struct delayed-block (resolve)
#:extra-constructor-name make-delayed-block)
resolve : (any/c part? resolve-info? . -> . block?)
The resolve procedure is called during the resolve pass to obtain a normal block. The first argument to resolve is the renderer.
struct
(struct element (style content)
#:extra-constructor-name make-element)
style : element-style?
content : content?
Styled content within an enclosing paragraph or other content.
The style field can be a style structure, but it can also be just a style name.
In style, a string style name corresponds to a CSS class for HTML output and a macro name for Latex output (see Implementing Styles). The following symbolic style names are recognized:
The following style properties are currently recognized:
Changed in version 1.6 of package scribble-lib: Changed 'exact-chars handling to take effect when the style name is #f.
struct
(struct image-element element (path suffixes scale)
#:extra-constructor-name make-image-element)
path :
(or/c path-string?
(cons/c 'collects (listof bytes?)))
suffixes : (listof #rx"^[.]")
scale : real?
Used as a style for an element to inline an image. The path field can be a result of path->main-collects-relative.
For each string in suffixes, if the renderer works with the corresponding suffix, the suffix is added to path and used if the resulting path refers to a file that exists. The order in suffixes determines the order in which suffixes are tried. The HTML renderer supports ".png", ".gif", and ".svg", while the Latex renderer supports ".png", ".pdf", and ".ps" (but rendering Latex output to PDF will not work with ".ps" files, while rendering to Latex DVI output works only with ".ps" files). If suffixes is empty or if none of the suffixes lead to files that exist, path is used as-is.
The scale field scales the image in its rendered form.
struct
(struct target-element element (tag)
#:extra-constructor-name make-target-element)
tag : tag?
Declares the content as a hyperlink target for tag.
struct
(struct toc-target-element target-element ()
#:extra-constructor-name make-toc-target-element)
Like target-element, but the content is also a kind of section label to be shown in the “on this page” table for HTML output.
struct
(struct toc-target2-element toc-target-element (toc-content)
#:extra-constructor-name make-toc-target2-element)
toc-content : content?
Extends toc-target-element with a separate field for the content to be shown in the “on this page” table for HTML output.
struct
(struct page-target-element target-element ()
#:extra-constructor-name make-page-target-element)
Like target-element, but a link to the element goes to the top of the containing page.
struct
(struct redirect-target-element target-element (alt-path
alt-anchor)
#:extra-constructor-name make-redirect-target-element)
alt-path : path-string?
alt-anchor : string?
Like target-element, but a link to the element is redirected to the given URL.
struct
(struct toc-element element (toc-content)
#:extra-constructor-name make-toc-element)
toc-content : content?
Similar to toc-target-element, but with specific content for the “on this page” table specified in the toc-content field.
struct
(struct link-element element (tag)
#:extra-constructor-name make-link-element)
tag : tag?
Represents a hyperlink to tag.
Normally, the content of the element is rendered as the hyperlink. When tag is a part tag and the content of the element is null, however, rendering is treated specially based on the mode value of a link-render-style style property:
If a link-render-style style property is not attached to a link-element that refers to a part, a link-render-style style property that is attached to an enclosing part is used, since attaching a link-render-style style property to a part causes current-link-render-style to be set while rendering the part. Otherwise, the render-time value of current-link-render-style determines a link-element's rendering.
The following style properties are recognized in addition to the style properties for all elements:
Changed in version 1.26 of package scribble-lib: Added link-render-style support.
struct
(struct index-element element (tag plain-seq entry-seq desc)
#:extra-constructor-name make-index-element)
tag : tag?
plain-seq : (and/c pair? (listof string?))
entry-seq : (listof content?)
desc : any/c
The plain-seq specifies the keys for sorting, where the first string is the main key, the second is a sub-key, etc. For example, a “night” portion of an index might have sub-entries for “night, things that go bump in” and “night, defender of the”. The former would be represented by plain-seq '("night" "things that go bump in"), and the latter by '("night" "defender of the"). Naturally, single-string plain-seq lists are the common case, and at least one word is required, but there is no limit to the word-list length. The strings in plain-seq must not contain a newline character.
The entry-seq list must have the same length as plain-seq. It provides the form of each key to render in the final document.
The desc field provides additional information about the index entry as supplied by the entry creator. For example, a reference to a procedure binding can be recognized when desc is an instance of procedure-index-desc. See scribble/manual-struct for other typical types of desc values.
See also index.
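The “night” example above might be constructed roughly as follows (the 'idx tag symbol is illustrative, and the #f desc marks an entry with no extra description):

```racket
#lang racket/base
(require scribble/core)

;; plain-seq supplies sort keys; entry-seq supplies the rendered forms.
(define night-entry
  (index-element
   plain null                        ; style and (empty) element content
   (list 'idx (make-generated-tag))  ; target tag for the index link
   '("night" "defender of the")      ; sort keys
   (list "night" "defender of the")  ; content rendered in the index
   #f))                              ; no extra description
```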
struct
(struct multiarg-element (style contents)
#:extra-constructor-name make-multiarg-element)
style : element-style?
contents : (listof content?)
Like element with a list for content, except that for Latex output, if the style name in style is a string, then it corresponds to a Latex command that accepts as many arguments (each in curly braces) as elements of contents.
struct
(struct traverse-element (traverse)
#:extra-constructor-name make-traverse-element)
traverse : element-traverse-procedure/c
See also cond-element in scriblib/render-cond. Like traverse-block, but the traverse procedure must eventually produce content, rather than a block.
struct
(struct delayed-element (resolve sizer plain)
#:extra-constructor-name make-delayed-element)
resolve : (any/c part? resolve-info? . -> . content?)
sizer : (-> any/c)
plain : (-> any/c)
The resolve procedure’s arguments are the same as for delayed-block, but the result is content. Unlike delayed-block, the result of the resolve procedure is remembered on the first call for re-use within a particular resolve pass.
The sizer field is a procedure that produces a substitute content for the delayed element for the purposes of determining the delayed element’s width (see element-width).
The plain field is a procedure that produces a substitute content when needed before the collect pass, such as when element->string is used before the collect pass.
struct
(struct part-relative-element (resolve sizer plain)
#:extra-constructor-name make-part-relative-element)
resolve : (collect-info? . -> . content?)
sizer : (-> any/c)
plain : (-> any/c)
Similar to delayed-element, but the replacement content is obtained in the collect pass by calling the function in the resolve field.
The resolve function can call collect-info-parents to obtain a list of parts that enclose the element, starting with the nearest enclosing section. Functions like part-collected-info and collected-info-number can extract information like the part number.
struct
(struct collect-element element (collect)
#:extra-constructor-name make-collect-element)
collect : (collect-info . -> . any)
Like element, but the collect procedure is called during the collect pass. The collect procedure normally calls collect-put!.
Unlike delayed-element or part-relative-element, the element remains intact (i.e., it is not replaced) by either the collect pass or resolve pass.
struct
(struct render-element element (render)
#:extra-constructor-name make-render-element)
render : (any/c part? resolve-info? . -> . any)
Like delayed-element, but the render procedure is called during the render pass.
If a render-element instance is serialized (such as when saving collected info), it is reduced to a element instance.
struct
(struct collected-info (number parent info)
#:extra-constructor-name make-collected-info)
number : (listof part-number-item?)
parent : (or/c #f part?)
info : any/c
Computed for each part by the collect pass.
The length of the number list indicates the section’s nesting depth. Elements of number correspond to the section’s number, its parent’s number, and so on (that is, the section numbers are in reverse order):
Changed in version 1.1 of package scribble-lib: Added (list/c string? string?) number items for numberer-generated section numbers.
struct
(struct target-url (addr)
#:extra-constructor-name make-target-url)
addr : path-string?
Used as a style property for an element. A path is allowed for addr, but a string is interpreted as a URL rather than a file path.
struct
(struct document-version (text)
#:extra-constructor-name make-document-version)
text : (or/c string? #f)
Used as a style property for a part to indicate a version number.
struct
(struct document-date (text)
#:extra-constructor-name make-document-date)
text : (or/c string? #f)
Used as a style property for a part to indicate a date (which is typically used for Latex output).
struct
(struct color-property (color)
#:extra-constructor-name make-color-property)
color : (or/c string? (list/c byte? byte? byte?))
Used as a style property for an element to set its color. Recognized string names for color depend on the renderer, but the recognized set includes at least "white", "black", "red", "green", "blue", "cyan", "magenta", and "yellow". When color is a list of bytes, the values are used as RGB levels.
When rendering to HTML, a color-property is also recognized for a block, a part (where it is used for the title), or a cell in a table.
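For example, either form below colors an element's text red:

```racket
#lang racket/base
(require scribble/core)

;; Equivalent red text via a color name or explicit RGB byte levels:
(element (style #f (list (color-property "red"))) "warning")
(element (style #f (list (color-property '(255 0 0)))) "warning")
```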
struct
(struct background-color-property (color)
#:extra-constructor-name make-background-color-property)
color : (or/c string? (list/c byte? byte? byte?))
Like color-property, but sets the background color.
struct
(struct table-cells (styless)
#:extra-constructor-name make-table-cells)
styless : (listof (listof style?))
Used as a style property for a table to set its cells’ styles.
If a cell style has a string name, it is used as an HTML class for the <td> tag or as a Latex command name.
The following are recognized as cell-style properties:
Changed in version 1.1 of package scribble-lib: Added color-property and background-color-property support.
Changed in version 1.4: Added 'border, 'left-border, 'right-border, 'top-border, and 'bottom-border support.
struct
(struct table-columns (styles)
#:extra-constructor-name make-table-columns)
styles : (listof style?)
Like table-cells, but with support for a column-attributes property in each style, and the styles list is otherwise duplicated for each row in the table. The non-column-attributes parts of a table-columns are used only when a table-cells property is not present along with the table-columns property.
For HTML table rendering, for each column that has a column-attributes property in the corresponding element of styles, the attributes are put into an HTML col tag within the table.
struct
(struct box-mode (top-name center-name bottom-name)
#:extra-constructor-name make-box-mode)
top-name : string?
center-name : string?
bottom-name : string?
procedure
(box-mode* name) box-mode?
name : string?
As a style property, indicates that a nested flow or paragraph is boxable when it is used in a boxing context for Latex output, but a nested flow is boxable only if its content is also boxable.
A boxing context starts with a table cell in a multi-column table, and the content of a block in a boxing context is also in a boxing context. If the cell’s content is boxable, then the content determines the width of the cell, otherwise a width is imposed. A paragraph with a #f style name is boxable as a single line; the 'wraps style name makes the paragraph non-boxable so that its width is imposed and its content can use multiple lines. A table is boxable when all of its cell content is boxable.
To generate output in box mode, the box-mode property supplies Latex macro names to apply to the nested flow or paragraph content. The top-name macro is used if the box’s top line is to be aligned with other boxes, center-name if the box’s center is to be aligned, and bottom-name if the box’s bottom line is to be aligned. The box-mode* function creates a box-mode structure with the same name for all three fields.
A box-mode style property overrides any automatic boxed rendering (e.g., for a paragraph with style name #f). If a block has both a box-mode style property and a 'multicommand style property, then the Latex macro top-name, center-name, or bottom-name is applied with a separate argument for each of its content.
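For instance (the Latex macro name MyBox is hypothetical and would need a definition in the Latex configuration):

```racket
#lang racket/base
(require scribble/core)

;; The same macro name for all three alignments:
(define bm (box-mode* "MyBox"))
;; ...which is equivalent to:
(define bm* (box-mode "MyBox" "MyBox" "MyBox"))
```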
procedure
(block? v) boolean?
v : any/c
Returns #t if v is a paragraph, table, itemization, nested-flow, traverse-block, or delayed-block, #f otherwise.
procedure
(content? v) boolean?
v : any/c
Returns #t if v is a string, symbol, element, multiarg-element, traverse-element, delayed-element, part-relative-element, a convertible value in the sense of convertible?, or list of content. Otherwise, it returns #f.
struct
(struct style (name properties)
#:extra-constructor-name make-style)
name : (or/c string? symbol? #f)
properties : list?
Represents a style.
value
plain : style?
A style (make-style #f null).
procedure
(element-style? v) boolean?
v : any/c
Returns #t if v is a string, symbol, #f, or style structure.
procedure
(tag? v) boolean?
v : any/c
Returns #t if v is acceptable as a link tag, which is a list containing a symbol and either a string, a generated-tag instance, or a non-empty list of serializable? values.
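A few illustrative checks:

```racket
#lang racket/base
(require scribble/core)

(tag? '(part "intro"))                   ; symbol plus string => #t
(tag? (list 'sec (make-generated-tag)))  ; symbol plus generated tag => #t
(tag? '("intro"))                        ; missing leading symbol => #f
```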
struct
(struct generated-tag ()
#:extra-constructor-name make-generated-tag)
A placeholder for a tag to be generated during the collect pass. Use tag-key to convert a tag containing a generated-tag instance to one containing a string.
procedure
(content->string content) string?
content : content?
(content->string content renderer p info) string?
content : content?
renderer : any/c
p : part?
info : resolve-info?
Converts content to a single string (essentially rendering the content as “plain text”).
If p and info arguments are not supplied, then a pre-“collect” substitute is obtained for delayed elements. Otherwise, the two arguments are used to force the delayed element (if it has not been forced already).
procedure
(element-width e) exact-nonnegative-integer?
e : content?
Returns the width in characters of the given content.
procedure
(block-width e) exact-nonnegative-integer?
e : block?
Returns the width in characters of the given block.
procedure
(part-number-item? v) boolean?
v : any/c
Returns #t if v is #f, an exact non-negative integer, a string, or a list containing two strings. See collected-info for information on how different representations are used for numbering.
Added in version 1.1 of package scribble-lib.
procedure
(numberer? v) boolean?
v : any/c
procedure
(make-numberer step initial-value) numberer?
step :
(any/c (listof part-number-item?)
. -> .
(values part-number-item? any/c))
initial-value : any/c
procedure
(numberer-step n
parent-number
ci
numberer-values)
part-number-item? hash?
n : numberer?
parent-number : (listof part-number-item?)
ci : collect-info?
numberer-values : hash?
A numberer implements a representation of a section number that increments separately from the default numbering style and that can be rendered differently than Arabic numerals.
The numberer? function returns #t if v is a numberer, or #f otherwise.
The make-numberer function creates a numberer. The step function computes both the current number’s representation and increments the number, where the “number” can be an arbitrary value; the initial-value argument determines the initial value of the “number”, and the step function receives the current value as its first argument and returns an incremented value as its second result. A numberer’s “number” value starts fresh at each new nesting level. In addition to the numberer’s current value, the step function receives the parent section’s numbering (so that its result can depend on the part’s nesting depth).
The numberer-step function is normally used by a renderer. It applies a numberer, given the parent section’s number, a collect-info value, and a hash table that accumulates numberer values at a given nesting layer. The collect-info argument is needed because a numberer’s identity is based on a generated-tag. The result of numberer-step is the rendered form of the current section number plus an updated hash table with an incremented value for the numberer.
Typically, the rendered form of a section number (produced by numberer-step) is a list containing two strings. The first string is the part’s immediate number, which can be combined with a prefix for enclosing parts’ numbers. The second string is a separator that is placed after the part’s number and before a subsection’s number for each subsection. If numberer-step produces a plain string for the rendered number, then it is not added as a prefix to subsection numbers. See also collected-info.
Added in version 1.1 of package scribble-lib.
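A sketch of a custom numberer that counts “A.”, “B.”, … at its nesting level; the rendered form is a (list number separator) pair, and the raw counter (an integer here) is threaded through by the step procedure:

```racket
#lang racket/base
(require scribble/core)

(define alpha-numberer
  (make-numberer
   (lambda (value parent-number)
     ;; Render the current value as a capital letter, then increment it:
     (values (list (string (integer->char (+ 64 value))) ".")
             (add1 value)))
   1))  ; the counter starts at 1 ("A") at each nesting level
```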
struct
(struct link-render-style (mode)
#:extra-constructor-name make-link-render-style)
mode : (or/c 'default 'number)
Used as a style property for a part or a specific link-element to control the way that a hyperlink is rendered for a part via secref or for a figure via figure-ref from scriblib/figure.
The 'default and 'number modes represent generic hyperlink-style configurations that could make sense for various kinds of references. The 'number style is intended to mean that a specific number is shown for the reference and that only the number is hyperlinked. The 'default style is more flexible, allowing a more appropriate choice for the rendering context, such as using the target section’s name for a hyperlink in HTML.
Added in version 1.26 of package scribble-lib.
The current-link-render-style parameter determines the default rendering style for a section link.
When a part has a link-render-style as one of its style properties, then the current-link-render-style parameter is set during the resolve pass and render pass for the part’s content.
Added in version 1.26 of package scribble-lib.
struct
(struct collect-info (fp
ht
ext-ht
ext-demand
parts
tags
gen-prefix
relatives
parents)
#:extra-constructor-name make-collect-info)
fp : any/c
ht : any/c
ext-ht : any/c
ext-demand : (tag? collect-info? . -> . any/c)
parts : any/c
tags : any/c
gen-prefix : any/c
relatives : any/c
parents : (listof part?)
Encapsulates information accumulated (or being accumulated) from the collect pass. The fields are exposed but, with the exception of collect-info-parents, not currently intended for external use.
struct
(struct resolve-info (ci delays undef searches)
#:extra-constructor-name make-resolve-info)
ci : any/c
delays : any/c
undef : any/c
searches : any/c
Encapsulates information accumulated (or being accumulated) from the resolve pass. The fields are exposed, but not currently intended for external use.
procedure
(info-key? v) boolean?
v : any/c
Returns #t if v is an info key: a list of at least two elements whose first element is a symbol. The result is #f otherwise.
For a list that is an info key, the interpretation of the second element of the list is effectively determined by the leading symbol, which classifies the key. However, a #f value as the second element has an extra meaning: collected information mapped by such info keys is not propagated out of the part where it is collected; that is, the information is available within the part and its sub-parts, but not in ancestor or sibling parts.
Note that every tag is an info key.
procedure
(collect-put! ci key val) void?
ci : collect-info?
key : info-key?
val : any/c
Registers information in ci. This procedure should be called only during the collect pass.
procedure
(resolve-get p ri key) any/c
p : (or/c part? #f)
ri : resolve-info?
key : info-key?
Extract information during the resolve pass or render pass for p from ri, where the information was previously registered during the collect pass. See also Collected and Resolved Information.
The result is #f if no value for the given key is found. Furthermore, the search failure is recorded for potential consistency reporting, such as when racket setup is used to build documentation.
procedure
(resolve-get/ext? p ri key)
any/c boolean?
p : (or/c part? #f)
ri : resolve-info?
key : info-key?
Like resolve-get, but returns a second value to indicate whether the resulting information originated from an external source (i.e., a different document).
procedure
(resolve-get/ext-id p ri key)
any/c (or/c boolean? string?)
p : (or/c part? #f)
ri : resolve-info?
key : info-key?
Like resolve-get/ext?, but the second result can be a string to indicate the source document’s identification as established via load-xref and a #:doc-id argument.
Added in version 1.1 of package scribble-lib.
procedure
(resolve-search dep-key p ri key) void?
dep-key : any/c
p : (or/c part? #f)
ri : resolve-info?
key : info-key?
Like resolve-get, but a shared dep-key groups multiple searches as a single request for the purposes of consistency reporting and dependency tracking. That is, a single success for the same dep-key means that all of the failed attempts for the same dep-key have been satisfied. However, for dependency checking, such as when using racket setup to re-build documentation, all attempts are recorded (in case external changes mean that an earlier attempt would succeed next time).
procedure
(resolve-get/tentative p ri key) any/c
p : (or/c part? #f)
ri : resolve-info?
key : info-key?
Like resolve-search, but without dependency tracking. For multi-document settings where dependencies are normally tracked, such as when using racket setup to build documentation, this function is suitable for use only for information within a single document.
procedure
(resolve-get-keys p ri pred) list?
p : (or/c part? #f)
ri : resolve-info?
pred : (info-key? . -> . any/c)
Applies pred to each key mapped for p in ri, returning a list of all keys for which pred returns a true value.
procedure
(part-collected-info p ri) collected-info?
p : part?
ri : resolve-info?
Returns the information collected for p as recorded within ri.
procedure
(tag-key t ri) tag?
t : tag?
ri : resolve-info?
Converts a generated-tag value within t to a string.
procedure
(traverse-block-block b i) block?
b : traverse-block?
i : (or/c resolve-info? collect-info?)
Produces the block that replaces b.
procedure
(traverse-element-content e i) content?
e : traverse-element?
i : (or/c resolve-info? collect-info?)
Produces the content that replaces e.
value
block-traverse-procedure/c : contract?
Defined as
(recursive-contract
((symbol? any/c . -> . any/c)
(symbol? any/c . -> . any)
. -> . (or/c block-traverse-procedure/c
block?)))
value
element-traverse-procedure/c : contract?
Defined as
(recursive-contract
((symbol? any/c . -> . any/c)
(symbol? any/c . -> . any)
. -> . (or/c element-traverse-procedure/c
content?)))
6.3.6 HTML Style Properties
The scribble/html-properties library provides datatypes used as style properties for HTML rendering.
struct
(struct attributes (assoc)
#:extra-constructor-name make-attributes)
assoc : (listof (cons/c symbol? string?))
Used as a style property to add arbitrary attributes to an HTML tag.
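For example (the data-kind attribute name is illustrative):

```racket
#lang racket/base
(require scribble/core scribble/html-properties)

;; An element rendered with an extra HTML attribute on its tag:
(element (style #f (list (attributes '((data-kind . "note")))))
         "hello")
```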
struct
(struct alt-tag (name)
#:extra-constructor-name make-alt-tag)
name : (and/c string? #rx"^[a-zA-Z0-9]+$")
Used as a style property for an element, paragraph, or compound-paragraph to substitute an alternate HTML tag (instead of <span>, <p>, <div>, etc.).
struct
(struct column-attributes (assoc)
#:extra-constructor-name make-column-attributes)
assoc : (listof (cons/c symbol? string?))
Used as a style property on a style with table-columns to add arbitrary attributes to an HTML col tag within the table.
struct
(struct url-anchor (name)
#:extra-constructor-name make-url-anchor)
name : string?
Used as a style property with element to insert an anchor before the element.
struct
(struct hover-property (text)
#:extra-constructor-name make-hover-property)
text : string?
Used as a style property with element to add text that is shown when the mouse hovers over the element.
struct
(struct script-property (type script)
#:extra-constructor-name make-script-property)
type : string?
script : (or/c path-string? (listof string?))
Used as a style property with element to supply a script alternative to the element content.
struct
(struct css-addition (path)
#:extra-constructor-name make-css-addition)
path :
(or/c path-string?
(cons/c 'collects (listof bytes?))
url?
bytes?)
Used as a style property to supply a CSS file (if path is a path, string, or list), URL (if path is a url) or content (if path is a byte string) to be referenced or included in the generated HTML. This property can be attached to any style, and all additions are collected to the top of the generated HTML page.
The path field can be a result of path->main-collects-relative.
struct
(struct css-style-addition (path)
#:extra-constructor-name make-css-style-addition)
path :
(or/c path-string?
(cons/c 'collects (listof bytes?))
url?
bytes?)
Like css-addition, but added after any style files that are specified by a document and before any style files that are provided externally.
struct
(struct js-addition (path)
#:extra-constructor-name make-js-addition)
path :
(or/c path-string?
(cons/c 'collects (listof bytes?))
url?
bytes?)
Like css-addition, but for a JavaScript file instead of a CSS file.
struct
(struct js-style-addition (path)
#:extra-constructor-name make-js-style-addition)
path :
(or/c path-string?
(cons/c 'collects (listof bytes?))
url?
bytes?)
Like css-style-addition, but for a JavaScript file instead of a CSS file.
struct
(struct body-id (value)
#:extra-constructor-name make-body-id)
value : string?
Used as a style property to associate an id attribute with an HTML tag.
struct
(struct document-source (module-path)
#:extra-constructor-name make-document-source)
module-path : module-path?
Used as a style property to associate a module path with a part. Clicking on a section title within the part may show module-path with the part’s tag string, so that authors of other documents can link to the section.
More specifically, the section title is given the HTML attributes x-source-module and x-part-tag, plus x-part-prefixes if the section or enclosing sections declare tag prefixes, and x-source-pkg if the source is found within a package at document-build time. The scribble/manual style recognizes those tags to make clicking a title show cross-reference information.
Added in version 1.2 of package scribble-lib.
Changed in version 1.7: Added x-part-prefixes.
Changed in version 1.9: Added x-source-pkg.
struct
(struct html-defaults (prefix style extra-files)
#:extra-constructor-name make-html-defaults)
prefix :
(or/c bytes? path-string?
(cons/c 'collects (listof bytes?)))
style :
(or/c bytes? path-string?
(cons/c 'collects (listof bytes?)))
extra-files :
(listof (or/c path-string?
(cons/c 'collects (listof bytes?))))
Like latex-defaults, but use for the scribble command-line tool’s --html and --htmls modes.
struct
(struct head-extra (xexpr)
#:extra-constructor-name make-head-extra)
xexpr : xexpr/c
For a part that corresponds to an HTML page, adds content to the <head> tag.
struct
(struct render-convertible-as (types)
#:extra-constructor-name make-render-convertible-as)
types : (listof (or/c 'png-bytes 'svg-bytes))
For a part that corresponds to an HTML page, controls how objects that subscribe to the file/convertible protocol are rendered.
The alternatives in the types field are tried in order and the first one that succeeds is used in the html output.
struct
(struct part-link-redirect (url)
#:extra-constructor-name make-part-link-redirect)
url : url?
As a style property on a part, causes hyperlinks to the part to be redirected to url instead of the rendered part.
struct
(struct link-resource (path)
#:extra-constructor-name make-link-resource)
path : path-string?
As a style property on an element, causes the elements to be rendered as a hyperlink to (a copy of) path.
The file indicated by path is referenced in place when render<%> is instantiated with refer-to-existing-files as true. Otherwise, it is copied to the destination directory and potentially renamed to avoid conflicts.
struct
(struct install-resource (path)
#:extra-constructor-name make-install-resource)
path : path-string?
Like link-resource, but makes path accessible in the destination without rendering a hyperlink.
This style property is useful only when render<%> is instantiated with refer-to-existing-files as #f, and only when path does not match the name of any other file that is copied by the renderer to the destination.
6.3.7 Latex Style Properties
The scribble/latex-properties library provides datatypes used as style properties for Latex rendering.
struct
(struct tex-addition (path)
#:extra-constructor-name make-tex-addition)
path :
(or/c path-string?
(cons/c 'collects (listof bytes?))
bytes?)
Used as a style property to supply a ".tex" file (if path is a path, string, or list) or content (if path is a byte string) to be included in the generated Latex. This property can be attached to any style, and all additions are collected to the top of the generated Latex file.
The path field can be a result of path->main-collects-relative.
struct
(struct latex-defaults (prefix style extra-files)
#:extra-constructor-name make-latex-defaults)
prefix :
(or/c bytes? path-string?
(cons/c 'collects (listof bytes?)))
style :
(or/c bytes? path-string?
(cons/c 'collects (listof bytes?)))
extra-files :
(listof (or/c path-string?
(cons/c 'collects (listof bytes?))))
Used as a style property on the main part of a document to set a default prefix file, style file, and extra files (see Configuring Output). The defaults are used by the scribble command-line tool for --latex or --pdf mode if none are supplied via --prefix and --style (where extra-files are used only when prefix is used). A byte-string value is used directly like file content, and a path can be a result of path->main-collects-relative.
Languages (used with #lang) like scribble/manual and scribble/sigplan add this property to a document to specify appropriate files for Latex rendering.
See also scribble/latex-prefix.
struct
(struct latex-defaults+replacements latex-defaults (replacements)
#:extra-constructor-name make-latex-defaults+replacements)
replacements :
(hash/c string? (or/c bytes? path-string?
(cons/c 'collects (listof bytes?))))
Like latex-defaults, but allows for more configuration. For example, if replacements maps "scribble-load-replace.tex" to "my-scribble.tex", then the "my-scribble.tex" file in the current directory will be used in place of the standard scribble package inclusion header.
struct
(struct command-extras (arguments)
#:extra-constructor-name make-command-extras)
arguments : (listof string?)
Used as a style property on an element to add extra arguments to the element’s command in Latex output.
struct
(struct command-optional (arguments)
#:extra-constructor-name make-command-optional)
arguments : (listof string?)
Used as a style property on an element to add optional arguments to the element’s command in Latex output.
Added in version 1.20 of package scribble-lib.
struct
(struct short-title (text)
#:extra-constructor-name make-short-title)
text : (or/c string? #f)
Used as a style property on a title-decl. Attaches a short title to the title for a part if the Latex class file uses a short title.
Added in version 1.20 of package scribble-lib.
[Solved] Getting server from Timing out
Discussion in 'Install/Configuration' started by Brent W Peterson, Apr 30, 2012.
1. Brent W Peterson
Brent W Peterson New Member
I have a client with LiteSpeed and I am trying to do a Magento upgrade. In the past I have not been successful using LiteSpeed.
Any thoughts on how I can stop it from timing out? Is there a setting, like in Apache, that will let the server process run as long as it needs?
2. webizen
webizen New Member
3. Brent W Peterson
Brent W Peterson New Member
One more question: Where is the conf file that I can edit if I don't have web access (only ssh)
4. webizen
webizen New Member
/path/to/lsws/conf/httpd_conf.xml
5. Brent W Peterson
Brent W Peterson New Member
I have it installed /usr/local/lsws but I don't have a conf dir? What is the name of the file?
6. Brent W Peterson
Brent W Peterson New Member
I found it httpd_config.xml
Sqlite database error attempt to write a readonly database design
The addRow method iterates through all the columns and copies the content of the 1st column to the new column. For information on the advantages of using client datasets to cache updates, see Using a client dataset to cache updates. Client datasets can apply edits directly to a database server when the dataset is read-only.
Dynamically Add/Remove rows in HTML table using JavaScript
In this case, marking rss entries as read, page by Page. So why do we need a DbContext. A connection component uses this list, for example, to close all of the datasets when it closes the database connection.
It only required midas. You might want to use a two-part dataset for the following reasons: Every time the database connection has to be closed once you are done with database access. RecordCount "Suppliers" ; Console.
Álvaro Ramírez
String, Date and Data properties can be optional. There are a number of other extension methods that can do the same. List instances can also be used to model collections of primitive values for example, an array of strings or integers.
This layout holds the design of single note item in the list. So why move away from the BDE.
Web SQL Database
Note that if the class is declared as objcMembers Swift 4 or laterthe individual properties can just be declared as dynamic var.
Add the below resources to colors. Configuring a Realm Configure a Realm before opening it by creating an instance of Realm. Configuration, you can create a Realm that runs entirely in memory without being persisted to disk.
There are three exceptions to this: Using a client dataset provides a standard way to make such data editable. You can have multiple DbContext objects, one for each main database with attached database s. In fact, it can represent multiple queries and stored procedures simultaneously, with separate properties for each.
SQLite is one way of storing app data. To create one, simply subclass Object or an existing Realm model class.
光 HikariCP・A solid, high-performance, JDBC connection pool at last. - brettwooldridge/HikariCP. Using my Django app, I'm able to read from the database just fine.
Android SQLite Database Tutorial
When the application didn't have permission to access the file, it gave me this error: attempt to write a readonly database Wh. #define SQLITE_SERIALIZE_NOCOPY 0x /* Do no memory allocations */ Zero or more of the following constants can be OR-ed together for the F argument to sqlite3_serialize(D,S,P,F).
C-language Interface Specification for SQLite
SQLITE_SERIALIZE_NOCOPY means that sqlite3_serialize() will return a pointer to contiguous in-memory database that it is currently using, without making a copy of the database. How can I change an SQLite database from read-only to read-write? When I executed the update statement, I always got: SQL error: attempt to write a readonly database.
The SQLite file is a writeable file on the filesystem.
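As several of the threads above note, this error usually means the process user lacks write permission on the database file or on its containing directory (SQLite also creates a journal file next to the database, so the directory must be writable too). A minimal sketch of checking and fixing the permissions, using a throwaway file under a hypothetical /tmp path:

```shell
# Simulate the problem on a scratch file, then apply the fix.
DB=/tmp/demo_readonly.db            # hypothetical path; substitute your real db file
touch "$DB"
chmod a-w "$DB"                     # writes now fail: "attempt to write a readonly database"
chmod u+w "$DB"                     # the fix: restore write access for the app user
chmod u+w "$(dirname "$DB")"        # the directory must be writable for the journal file
```

The same chmod commands, run as the user the application executes as, apply to a deployed database.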
Desktop application deploy — attempt to write a readonly database sqlite. C# Sqlite: Attempt to write a readonly database. 0 (SQLite) Error: attempt to write to a readonly database. If method is none, then that's all there is to it. If method is const or linear, the time-weighted series of values is taken into account instead. The weight is the timespan between two subsequent updates.
With the const method, the value is the value of the reading at the beginning of the timespan; with the linear method, the value is the arithmetic average.
What would be the output of this ? I see the output but not able to understand why that happens.
def multiple(x,y):
mul = x*y
return mul
x=int(raw_input("Enter value 1 ")),
y=int(raw_input("Enter value 2 "))
print multiple(x,y)
4 Answers
In your code, the , at the end of the first raw_input means x is actually a tuple containing the user input. When you call the function, what you are actually doing is multiplying the tuple by an integer, which just multiplies the tuple (x) y times.
For example:
>>> x = 2,
>>> x * 5
(2, 2, 2, 2, 2)
>>> x = 2
>>> x * 5
10
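To tie this back to the question's code, a short sketch (reusing the question's function, with the inputs hard-coded instead of read from raw_input) showing both behaviours side by side:

```python
def multiple(x, y):
    mul = x * y
    return mul

# x as a plain int: ordinary arithmetic multiplication
print(multiple(3, 4))       # 12

# x accidentally made a one-element tuple by the trailing comma:
# the tuple is repeated y times instead of being multiplied
print(multiple((3,), 4))    # (3, 3, 3, 3)
```

Deleting the comma after `int(raw_input("Enter value 1 "))` restores the first behaviour.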
The comma makes x equal to a tuple of size 1 (containing the int).
Simple test:
>>> a = 1,
>>> print a
(1,)
A larger problem with this code is that if x or y is not a number (for example, a string), the function breaks. This can be fixed by wrapping the multiplication in a try block: mul = float(x) * float(y). Then, to catch the case when x or y cannot be converted to a number, add an except clause (float raises ValueError for non-numeric strings and TypeError for incompatible types) that prints something like 'Please do not give a string...'. In this case you want to show that mul is not valid, so set mul = None, and return mul after the try and except statements.
This ensures that the inputs are treated as decimal-point numbers, not characters.
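A sketch of the guarded version this answer describes might look like the following (note that float('abc') raises ValueError, while values like None raise TypeError, so both are caught):

```python
def multiple(x, y):
    try:
        mul = float(x) * float(y)
    except (TypeError, ValueError):
        # x or y could not be converted to a number
        print('Please do not give a string...')
        mul = None
    return mul

print(multiple('2', '3.5'))   # 7.0  (numeric strings convert fine)
print(multiple('abc', 2))     # None, after printing the warning
```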
First of all, you define a function called multiple that multiplies its parameters x and y and returns the result. The script then takes input for two variables, x and y (not the same as the parameters above), multiplies them by calling multiple, and prints the result, which is what you see as output.
The trailing comma, however, simply makes x a tuple.
Model Trace Interface
This section describes the Model Trace Interface (MTI).
Fast Models supports the generation of traces that consistently track the execution and related activities in the model. In particular, tracking those activities that affect the state of the modeled IP. Generated virtual platforms provide trace support by using plug-ins in the form of DLLs and shared objects on Windows and Linux, respectively.
The following diagram represents the MTI architecture, where all the tracing information of the system is found.
Figure 1-2 MTI architecture
A trace source provides information about a specific event that occurs in the component. In a processing unit component, this event can be, for example, the execution of an instruction, the taking of a branch, or an MMU translation. It can also be an event that is specific to models, such as the SYNC event, which is called at every quantum boundary. Each trace source contains fields that give more information about the event, including a text description. When the field is of type MTI_ENUM, the values for this field are also listed. For example:
Source CACHE_READ_HIT (Read access cache hit.)
Field IS_SHARED type:MTI_ENUM size:1 (Is the access shared)
0x0 = NON_SHARED
0x1 = SHARED
Field IS_PRELOAD type:MTI_ENUM size:1 (Is the access a preload)
0x0 = NOT_PRELOAD
0x1 = PRELOAD
ARM produces several prebuilt plug-ins, which are documented in the chapter Plug-ins for Fast Models. Source code example plug-ins are provided at $PVLIB_HOME/examples/MTI. Plug-ins are loaded at simulation start-up. You can load multiple plug-ins at the same time.
When testing a new XAMPP install, I had trouble sending mail from localhost, even though I was using Gmail as the sender. The thing is, I couldn't find ;extension=php_openssl.dll in my php.ini configuration, so I just added it and then it worked:
1. Stop your Apache service
2. Find libeay32.dll and ssleay32.dll in the xampp\php folder, and copy them into the xampp\apache\bin folder. Overwrite the older files in there.
3. Edit the php.ini file in xampp\apache\bin, remove the semicolon in ";extension=php_openssl.dll" or add the line extension=php_openssl.dll
4. Start the Apache service
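For reference, the php.ini change in step 3 is just one uncommented line; which php.ini file Apache actually loads varies by setup, so check the "Loaded Configuration File" entry in phpinfo() if in doubt:

```ini
; php.ini — enable OpenSSL so PHP can open the TLS connection Gmail requires
extension=php_openssl.dll
```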
SOLVED
Activity not saving changes
Level 2
This is a weird one. I make changes to an existing activity, save the changes. Test the changes via the QA Link but my changes aren't present. If I edit the activity I can see my changes there in the scripts.
When I edit the activity on a different computer but with the same account, the changes I made earlier aren't there. However, saving it on this computer gives me the same results.
The only way I've found to effectively save changes, is by copying the activity and saving it.
Any advice?
EDIT: Tried the copy activity method, and it doesn't work either. Instead of showing the previous changes, it shows up as if it's a blank activity. I can still edit and see my changes. Also tried deleting the scripts and re-adding them.
1 Accepted Solution
Correct answer by
Employee
Hi @Mark_47 ,
Few things:
Test the changes via the QA Link but my changes aren't present. If I edit the activity I can see my changes there in the scripts.
This sounds like a content delivery issue. Try adding the param mboxDebug=1 to the QA URL and observe the logs in the Developer tools console: do you see any errors?
When I edit the activity on a different computer but with the same account, the changes I made earlier aren't there. However, saving it on this computer gives me the same results.
Interesting- this might need a screenshare to troubleshoot. Please email us at [email protected] and we can set up a call.
Thanks!
3 Replies
Employee
@Mark_47 Can you please confirm that you are first saving all the modifications applied to the activity, and then saving the activity itself.
For troubleshooting the content delivery issue, you could try using the mboxTrace https://docs.adobe.com/content/help/en/target/using/activities/troubleshoot-activities/content-troub...
Regards,
Karan Dhawan
Employee
@Mark_47
When I edit the activity on a different computer but with the same account, the changes I made earlier aren't there. However, saving it on this computer gives me the same results.
- It means the changes you made are not saved, so I would advise you to make sure that whatever changes you are making are getting saved.
- For that you can click on the 'Save' button available on the top-right when you're making changes in the activity.
- Once your changes are saved, check with the QA URLs whether Target content is getting delivered or not.
- If for some reason you are not able to save the changes, Try in incognito window or in another browser.
- If changes are still getting lost, then it requires a screenshare to troubleshoot; please email us at [email protected]
Regards
Skand
vendor/CMF/1.6.3/CMFCore
view PortalFolder.py @ 2:4c712d7bd1d7
Added tag 1.6.3 for changeset 1babb9d61518
author Georges Racinet on purity.racinet.fr <[email protected]>
date Fri, 09 Sep 2011 12:44:00 +0200
##############################################################################
#
# Copyright (c) 2001 Zope Corporation and Contributors. All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
""" PortalFolder: CMF-enabled Folder objects.

$Id$
"""

import base64
import marshal
import re
from warnings import warn

from AccessControl import ClassSecurityInfo
from AccessControl import getSecurityManager
from Acquisition import aq_parent, aq_inner, aq_base
from Globals import DTMLFile
from Globals import InitializeClass
from OFS.OrderSupport import OrderSupport
from OFS.Folder import Folder

from CMFCatalogAware import CMFCatalogAware
from DynamicType import DynamicType
from exceptions import AccessControl_Unauthorized
from exceptions import BadRequest
from exceptions import zExceptions_Unauthorized
from interfaces.Folderish import Folderish as IFolderish
from permissions import AddPortalContent
from permissions import AddPortalFolders
from permissions import ChangeLocalRoles
from permissions import DeleteObjects
from permissions import ListFolderContents
from permissions import ManagePortal
from permissions import ManageProperties
from permissions import View
from utils import _checkPermission
from utils import getToolByName


factory_type_information = (
  { 'id'             : 'Folder'
  , 'meta_type'      : 'Portal Folder'
  , 'description'    : """ Use folders to put content in categories."""
  , 'icon'           : 'folder_icon.gif'
  , 'product'        : 'CMFCore'
  , 'factory'        : 'manage_addPortalFolder'
  , 'filter_content_types' : 0
  , 'immediate_view' : 'folder_edit_form'
  , 'aliases'        : {'(Default)': 'index_html',
                        'view': 'index_html',
                        'index.html': 'index_html'}
  , 'actions'        : ( { 'id'          : 'view'
                         , 'name'        : 'View'
                         , 'action'      : 'string:${object_url}'
                         , 'permissions' : (View,)
                         }
                       , { 'id'          : 'edit'
                         , 'name'        : 'Edit'
                         , 'action'      : 'string:${object_url}/folder_edit_form'
                         , 'permissions' : (ManageProperties,)
                         }
                       , { 'id'          : 'localroles'
                         , 'name'        : 'Local Roles'
                         , 'action'      :
                             'string:${object_url}/folder_localrole_form'
                         , 'permissions' : (ChangeLocalRoles,)
                         }
                       , { 'id'          : 'folderContents'
                         , 'name'        : 'Folder contents'
                         , 'action'      : 'string:${object_url}/folder_contents'
                         , 'permissions' : (ListFolderContents,)
                         }
                       , { 'id'          : 'new'
                         , 'name'        : 'New...'
                         , 'action'      : 'string:${object_url}/folder_factories'
                         , 'permissions' : (AddPortalContent,)
                         , 'visible'     : 0
                         }
                       , { 'id'          : 'rename_items'
                         , 'name'        : 'Rename items'
                         , 'action'      : 'string:${object_url}/folder_rename_form'
                         , 'permissions' : (AddPortalContent,)
                         , 'visible'     : 0
                         }
                       )
  }
,
)


class PortalFolderBase(DynamicType, CMFCatalogAware, Folder):
    """Base class for portal folder
    """
    meta_type = 'Portal Folder Base'

    __implements__ = (IFolderish, DynamicType.__implements__,
                      Folder.__implements__)

    security = ClassSecurityInfo()

    description = ''

    manage_options = ( Folder.manage_options +
                       CMFCatalogAware.manage_options )

    def __init__( self, id, title='' ):
        self.id = id
        self.title = title

    #
    #   'MutableDublinCore' interface methods
    #
    security.declareProtected(ManageProperties, 'setTitle')
    def setTitle( self, title ):
        """ Set Dublin Core Title element - resource name.
        """
        self.title = title

    security.declareProtected(ManageProperties, 'setDescription')
    def setDescription( self, description ):
        """ Set Dublin Core Description element - resource summary.
        """
        self.description = description

    #
    #   other methods
    #
    security.declareProtected(ManageProperties, 'edit')
    def edit(self, title='', description=''):
        """
        Edit the folder title (and possibly other attributes later)
        """
        self.setTitle( title )
        self.setDescription( description )
        self.reindexObject()

    security.declarePublic('allowedContentTypes')
    def allowedContentTypes( self ):
        """
        List type info objects for types which can be added in
        this folder.
        """
        result = []
        portal_types = getToolByName(self, 'portal_types')
        myType = portal_types.getTypeInfo(self)

        if myType is not None:
            for contentType in portal_types.listTypeInfo(self):
                if myType.allowType( contentType.getId() ):
                    result.append( contentType )
        else:
            result = portal_types.listTypeInfo()

        return filter( lambda typ, container=self:
                           typ.isConstructionAllowed( container )
                     , result )

    def _morphSpec(self, spec):
        '''
        spec is a sequence of meta_types, a string containing one meta type,
        or None. If spec is empty or None, returns all contentish
        meta_types. Otherwise ensures all of the given meta types are
        contentish.
        '''
        warn('Using the \'spec\' argument is deprecated. In CMF 2.0 '
             'contentItems(), contentIds(), contentValues() and '
             'listFolderContents() will no longer support \'spec\'. Use the '
             '\'filter\' argument with \'portal_type\' instead.',
             DeprecationWarning)
        new_spec = []
        types_tool = getToolByName(self, 'portal_types')
        types = types_tool.listContentTypes( by_metatype=1 )
        if spec is not None:
            if type(spec) == type(''):
                spec = [spec]
            for meta_type in spec:
                if not meta_type in types:
                    raise ValueError('%s is not a content type' % meta_type)
                new_spec.append(meta_type)
        return new_spec or types

    def _filteredItems( self, ids, filt ):
        """
        Apply filter, a mapping, to child objects indicated by 'ids',
        returning a sequence of ( id, obj ) tuples.
        """
        # Restrict allowed content types
        if filt is None:
            filt = {}
        else:
            # We'll modify it, work on a copy.
            filt = filt.copy()
        pt = filt.get('portal_type', [])
        if type(pt) is type(''):
            pt = [pt]
        types_tool = getToolByName(self, 'portal_types')
        allowed_types = types_tool.listContentTypes()
        if not pt:
            pt = allowed_types
        else:
            pt = [t for t in pt if t in allowed_types]
        if not pt:
            # After filtering, no types remain, so nothing should be
            # returned.
            return []
        filt['portal_type'] = pt

        query = ContentFilter(**filt)
        result = []
        append = result.append
        get = self._getOb
        for id in ids:
            obj = get( id )
            if query(obj):
                append( (id, obj) )
        return result

    #
    #   'Folderish' interface methods
    #
    security.declarePublic('contentItems')
    def contentItems( self, spec=None, filter=None ):
        # List contentish and folderish sub-objects and their IDs.
        # (method is without docstring to disable publishing)
        #
        if spec is None:
            ids = self.objectIds()
        else:
            # spec is deprecated, use filter instead!
            spec = self._morphSpec(spec)
            ids = self.objectIds(spec)
        return self._filteredItems( ids, filter )

    security.declarePublic('contentIds')
    def contentIds( self, spec=None, filter=None):
        # List IDs of contentish and folderish sub-objects.
        # (method is without docstring to disable publishing)
        #
        if spec is None:
            ids = self.objectIds()
        else:
            # spec is deprecated, use filter instead!
            spec = self._morphSpec(spec)
            ids = self.objectIds(spec)
        return map( lambda item: item[0],
                    self._filteredItems( ids, filter ) )

    security.declarePublic('contentValues')
    def contentValues( self, spec=None, filter=None ):
        # List contentish and folderish sub-objects.
        # (method is without docstring to disable publishing)
        #
        if spec is None:
            ids = self.objectIds()
        else:
            # spec is deprecated, use filter instead!
            spec = self._morphSpec(spec)
            ids = self.objectIds(spec)
        return map( lambda item: item[1],
                    self._filteredItems( ids, filter ) )

    security.declareProtected(ListFolderContents, 'listFolderContents')
    def listFolderContents( self, spec=None, contentFilter=None ):
        """ List viewable contentish and folderish sub-objects.
        """
        items = self.contentItems(spec=spec, filter=contentFilter)
        l = []
        for id, obj in items:
            # validate() can either raise Unauthorized or return 0 to
            # mean unauthorized.
            try:
                if getSecurityManager().validate(self, self, id, obj):
                    l.append(obj)
            except zExceptions_Unauthorized:  # Catch *all* Unauths!
                pass
        return l

    #
    #   webdav Resource method
    #

    # protected by 'WebDAV access'
    def listDAVObjects(self):
        # List sub-objects for PROPFIND requests.
        # (method is without docstring to disable publishing)
        #
        if _checkPermission(ManagePortal, self):
            return self.objectValues()
        else:
            return self.listFolderContents()

    #
    #   'DublinCore' interface methods
    #
    security.declareProtected(View, 'Title')
    def Title( self ):
        """ Dublin Core Title element - resource name.
        """
        return self.title

    security.declareProtected(View, 'Description')
    def Description( self ):
        """ Dublin Core Description element - resource summary.
        """
        return self.description

    security.declareProtected(View, 'Type')
    def Type( self ):
        """ Dublin Core Type element - resource type.
        """
        if hasattr(aq_base(self), 'getTypeInfo'):
            ti = self.getTypeInfo()
            if ti is not None:
                return ti.Title()
        return self.meta_type

    #
    #   other methods
    #
    security.declarePublic('encodeFolderFilter')
    def encodeFolderFilter(self, REQUEST):
        """
        Parse cookie string for using variables in dtml.
        """
        filter = {}
        for key, value in REQUEST.items():
            if key[:10] == 'filter_by_':
                filter[key[10:]] = value
        encoded = base64.encodestring( marshal.dumps(filter) ).strip()
        encoded = ''.join( encoded.split('\n') )
        return encoded

    security.declarePublic('decodeFolderFilter')
    def decodeFolderFilter(self, encoded):
        """
        Parse cookie string for using variables in dtml.
        """
        filter = {}
        if encoded:
            filter.update(marshal.loads(base64.decodestring(encoded)))
        return filter

    def content_type( self ):
        """
        WebDAV needs this to do the Right Thing (TM).
        """
        return None

    # Ensure pure PortalFolders don't get cataloged.
    # XXX We may want to revisit this.

    def indexObject(self):
        pass

    def unindexObject(self):
        pass

    def reindexObject(self, idxs=[]):
        pass

    def reindexObjectSecurity(self):
        pass

    def PUT_factory( self, name, typ, body ):
        """ Factory for PUT requests to objects which do not yet exist.

        Used by NullResource.PUT.

        Returns -- Bare and empty object of the appropriate type (or None, if
        we don't know what to do)
        """
        registry = getToolByName(self, 'content_type_registry', None)
        if registry is None:
            return None

        typeObjectName = registry.findTypeName( name, typ, body )
        if typeObjectName is None:
            return None

        self.invokeFactory( typeObjectName, name )

        # invokeFactory does too much, so the object has to be removed again
        obj = aq_base( self._getOb( name ) )
        self._delObject( name )
        return obj

    security.declareProtected(AddPortalContent, 'invokeFactory')
    def invokeFactory(self, type_name, id, RESPONSE=None, *args, **kw):
        """ Invokes the portal_types tool.
        """
        pt = getToolByName(self, 'portal_types')
        myType = pt.getTypeInfo(self)

        if myType is not None:
            if not myType.allowType( type_name ):
                raise ValueError('Disallowed subobject type: %s' % type_name)

        return pt.constructContent(type_name, self, id, RESPONSE, *args, **kw)

    security.declareProtected(AddPortalContent, 'checkIdAvailable')
    def checkIdAvailable(self, id):
        try:
            self._checkId(id)
        except BadRequest:
            return False
        else:
            return True

    def MKCOL_handler(self, id, REQUEST=None, RESPONSE=None):
        """
        Handle WebDAV MKCOL.
        """
        self.manage_addFolder( id=id, title='' )

    def _checkId(self, id, allow_dup=0):
        PortalFolderBase.inheritedAttribute('_checkId')(self, id, allow_dup)

        if allow_dup:
            return

        # FIXME: needed to allow index_html for join code
        if id == 'index_html':
            return

        # Another exception: Must allow "syndication_information" to enable
        # Syndication...
        if id == 'syndication_information':
            return

        # This code prevents people other than the portal manager from
        # overriding skinned names and tools.
        if not getSecurityManager().checkPermission(ManagePortal, self):
            ob = self
            while ob is not None and not getattr(ob, '_isPortalRoot', False):
                ob = aq_parent( aq_inner(ob) )
            if ob is not None:
                # If the portal root has a non-contentish object by this name,
                # don't allow an override.
                if (hasattr(ob, id) and
                    id not in ob.contentIds() and
                    # Allow root doted prefixed object name overrides
                    not id.startswith('.')):
                    raise BadRequest('The id "%s" is reserved.' % id)
            # Don't allow ids used by Method Aliases.
            ti = self.getTypeInfo()
            if ti and ti.queryMethodID(id, context=self):
                raise BadRequest('The id "%s" is reserved.' % id)
        # Otherwise we're ok.

    def _verifyObjectPaste(self, object, validate_src=1):
        # This assists the version in OFS.CopySupport.
        # It enables the clipboard to function correctly
        # with objects created by a multi-factory.
        securityChecksDone = False
        sm = getSecurityManager()
        parent = aq_parent(aq_inner(object))
        object_id = object.getId()
        mt = getattr(object, '__factory_meta_type__', None)
        meta_types = getattr(self, 'all_meta_types', None)

        if mt is not None and meta_types is not None:
            method_name = None
            permission_name = None

            if callable(meta_types):
                meta_types = meta_types()

            for d in meta_types:

                if d['name'] == mt:
                    method_name = d['action']
                    permission_name = d.get('permission', None)
                    break

            if permission_name is not None:
487 if not sm.checkPermission(permission_name,self):
488 raise AccessControl_Unauthorized, method_name
490 if validate_src:
492 if not sm.validate(None, parent, None, object):
493 raise AccessControl_Unauthorized, object_id
495 if validate_src > 1:
496 if not sm.checkPermission(DeleteObjects, parent):
497 raise AccessControl_Unauthorized
499 # validation succeeded
500 securityChecksDone = 1
502 #
503 # Old validation for objects that may not have registered
504 # themselves in the proper fashion.
505 #
506 elif method_name is not None:
508 meth = self.unrestrictedTraverse(method_name)
510 factory = getattr(meth, 'im_self', None)
512 if factory is None:
513 factory = aq_parent(aq_inner(meth))
515 if not sm.validate(None, factory, None, meth):
516 raise AccessControl_Unauthorized, method_name
518 # Ensure the user is allowed to access the object on the
519 # clipboard.
520 if validate_src:
522 if not sm.validate(None, parent, None, object):
523 raise AccessControl_Unauthorized, object_id
525 if validate_src > 1: # moving
526 if not sm.checkPermission(DeleteObjects, parent):
527 raise AccessControl_Unauthorized
529 securityChecksDone = 1
531 # Call OFS' _verifyObjectPaste if necessary
532 if not securityChecksDone:
533 PortalFolderBase.inheritedAttribute(
534 '_verifyObjectPaste')(self, object, validate_src)
536 # Finally, check allowed content types
537 if hasattr(aq_base(object), 'getPortalTypeName'):
539 type_name = object.getPortalTypeName()
541 if type_name is not None:
543 pt = getToolByName(self, 'portal_types')
544 myType = pt.getTypeInfo(self)
546 if myType is not None and not myType.allowType(type_name):
547 raise ValueError('Disallowed subobject type: %s'
548 % type_name)
550 security.setPermissionDefault(AddPortalContent, ('Owner','Manager'))
552 security.declareProtected(AddPortalFolders, 'manage_addFolder')
553 def manage_addFolder( self
554 , id
555 , title=''
556 , REQUEST=None
557 ):
558 """ Add a new folder-like object with id *id*.
560 IF present, use the parent object's 'mkdir' alias; otherwise, just add
561 a PortalFolder.
562 """
563 ti = self.getTypeInfo()
564 method_id = ti and ti.queryMethodID('mkdir', context=self)
565 if method_id:
566 # call it
567 getattr(self, method_id)(id=id)
568 else:
569 self.invokeFactory( type_name='Folder', id=id )
571 ob = self._getOb( id )
572 ob.setTitle( title )
573 try:
574 ob.reindexObject()
575 except AttributeError:
576 pass
578 if REQUEST is not None:
579 return self.manage_main(self, REQUEST, update_menu=1)
581 InitializeClass(PortalFolderBase)
584 class PortalFolder(OrderSupport, PortalFolderBase):
585 """
586 Implements portal content management, but not UI details.
587 """
588 meta_type = 'Portal Folder'
589 portal_type = 'Folder'
591 __implements__ = (PortalFolderBase.__implements__,
592 OrderSupport.__implements__)
594 security = ClassSecurityInfo()
596 manage_options = ( OrderSupport.manage_options +
597 PortalFolderBase.manage_options[1:] )
599 security.declareProtected(AddPortalFolders, 'manage_addPortalFolder')
600 def manage_addPortalFolder(self, id, title='', REQUEST=None):
601 """Add a new PortalFolder object with id *id*.
602 """
603 ob = PortalFolder(id, title)
604 self._setObject(id, ob)
605 if REQUEST is not None:
606 return self.folder_contents( # XXX: ick!
607 self, REQUEST, portal_status_message="Folder added")
609 InitializeClass(PortalFolder)
612 class ContentFilter:
613 """
614 Represent a predicate against a content object's metadata.
615 """
616 MARKER = []
617 filterSubject = []
618 def __init__( self
619 , Title=MARKER
620 , Creator=MARKER
621 , Subject=MARKER
622 , Description=MARKER
623 , created=MARKER
624 , created_usage='range:min'
625 , modified=MARKER
626 , modified_usage='range:min'
627 , Type=MARKER
628 , portal_type=MARKER
629 , **Ignored
630 ):
632 self.predicates = []
633 self.description = []
635 if Title is not self.MARKER:
636 self.predicates.append( lambda x, pat=re.compile( Title ):
637 pat.search( x.Title() ) )
638 self.description.append( 'Title: %s' % Title )
640 if Creator and Creator is not self.MARKER:
641 self.predicates.append( lambda x, creator=Creator:
642 creator in x.listCreators() )
643 self.description.append( 'Creator: %s' % Creator )
645 if Subject and Subject is not self.MARKER:
646 self.filterSubject = Subject
647 self.predicates.append( self.hasSubject )
648 self.description.append( 'Subject: %s' % ', '.join(Subject) )
650 if Description is not self.MARKER:
651 self.predicates.append( lambda x, pat=re.compile( Description ):
652 pat.search( x.Description() ) )
653 self.description.append( 'Description: %s' % Description )
655 if created is not self.MARKER:
656 if created_usage == 'range:min':
657 self.predicates.append( lambda x, cd=created:
658 cd <= x.created() )
659 self.description.append( 'Created since: %s' % created )
660 if created_usage == 'range:max':
661 self.predicates.append( lambda x, cd=created:
662 cd >= x.created() )
663 self.description.append( 'Created before: %s' % created )
665 if modified is not self.MARKER:
666 if modified_usage == 'range:min':
667 self.predicates.append( lambda x, md=modified:
668 md <= x.modified() )
669 self.description.append( 'Modified since: %s' % modified )
670 if modified_usage == 'range:max':
671 self.predicates.append( lambda x, md=modified:
672 md >= x.modified() )
673 self.description.append( 'Modified before: %s' % modified )
675 if Type:
676 if type( Type ) == type( '' ):
677 Type = [ Type ]
678 self.predicates.append( lambda x, Type=Type:
679 x.Type() in Type )
680 self.description.append( 'Type: %s' % ', '.join(Type) )
682 if portal_type and portal_type is not self.MARKER:
683 if type(portal_type) is type(''):
684 portal_type = [portal_type]
685 self.predicates.append( lambda x, pt=portal_type:
686 hasattr(aq_base(x), 'getPortalTypeName')
687 and x.getPortalTypeName() in pt )
688 self.description.append( 'Portal Type: %s'
689 % ', '.join(portal_type) )
691 def hasSubject( self, obj ):
692 """
693 Converts Subject string into a List for content filter view.
694 """
695 for sub in obj.Subject():
696 if sub in self.filterSubject:
697 return 1
698 return 0
700 def __call__( self, content ):
702 for predicate in self.predicates:
704 try:
705 if not predicate( content ):
706 return 0
707 except (AttributeError, KeyError, IndexError, ValueError):
708 # predicates are *not* allowed to throw exceptions
709 return 0
711 return 1
713 def __str__( self ):
714 """
715 Return a stringified description of the filter.
716 """
717 return '; '.join(self.description)
719 manage_addPortalFolder = PortalFolder.manage_addPortalFolder.im_func
720 manage_addPortalFolderForm = DTMLFile( 'folderAdd', globals() )
3.6.6 Using Foreign Keys
In MySQL, InnoDB tables support checking of foreign key constraints. See Section 14.6, “The InnoDB Storage Engine”, and Section 1.8.2.4, “Foreign Key Differences”.
A foreign key constraint is not required merely to join two tables. For storage engines other than InnoDB, it is possible when defining a column to use a REFERENCES tbl_name(col_name) clause, which has no actual effect and serves only as a memo or comment to you that the column you are currently defining is intended to refer to a column in another table. It is important to realize that such a clause creates no constraint and is not checked by MySQL.
You can use a column so created as a join column, as shown here:
CREATE TABLE person (
id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
name CHAR(60) NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE shirt (
id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
style ENUM('t-shirt', 'polo', 'dress') NOT NULL,
color ENUM('red', 'blue', 'orange', 'white', 'black') NOT NULL,
owner SMALLINT UNSIGNED NOT NULL REFERENCES person(id),
PRIMARY KEY (id)
);
INSERT INTO person VALUES (NULL, 'Antonio Paz');
SELECT @last := LAST_INSERT_ID();
INSERT INTO shirt VALUES
(NULL, 'polo', 'blue', @last),
(NULL, 'dress', 'white', @last),
(NULL, 't-shirt', 'blue', @last);
INSERT INTO person VALUES (NULL, 'Lilliana Angelovska');
SELECT @last := LAST_INSERT_ID();
INSERT INTO shirt VALUES
(NULL, 'dress', 'orange', @last),
(NULL, 'polo', 'red', @last),
(NULL, 'dress', 'blue', @last),
(NULL, 't-shirt', 'white', @last);
SELECT * FROM person;
+----+---------------------+
| id | name |
+----+---------------------+
| 1 | Antonio Paz |
| 2 | Lilliana Angelovska |
+----+---------------------+
SELECT * FROM shirt;
+----+---------+--------+-------+
| id | style | color | owner |
+----+---------+--------+-------+
| 1 | polo | blue | 1 |
| 2 | dress | white | 1 |
| 3 | t-shirt | blue | 1 |
| 4 | dress | orange | 2 |
| 5 | polo | red | 2 |
| 6 | dress | blue | 2 |
| 7 | t-shirt | white | 2 |
+----+---------+--------+-------+
SELECT s.* FROM person p INNER JOIN shirt s
ON s.owner = p.id
WHERE p.name LIKE 'Lilliana%'
AND s.color <> 'white';
+----+-------+--------+-------+
| id | style | color | owner |
+----+-------+--------+-------+
| 4 | dress | orange | 2 |
| 5 | polo | red | 2 |
| 6 | dress | blue | 2 |
+----+-------+--------+-------+
When used in this fashion, the REFERENCES clause is not displayed in the output of SHOW CREATE TABLE or DESCRIBE:
SHOW CREATE TABLE shirt\G
*************************** 1. row ***************************
Table: shirt
Create Table: CREATE TABLE `shirt` (
`id` smallint(5) unsigned NOT NULL auto_increment,
`style` enum('t-shirt','polo','dress') NOT NULL,
`color` enum('red','blue','orange','white','black') NOT NULL,
`owner` smallint(5) unsigned NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
The use of REFERENCES in this way as a comment or reminder in a column definition works with MyISAM tables.
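Since the REFERENCES clause is only documentation here, the join works purely on matching values. The same behaviour can be sketched with Python's sqlite3 module, which likewise accepts a REFERENCES clause without enforcing it by default (table and column names mirror the example above; the ENUM columns are simplified to TEXT):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
# The REFERENCES clause below is accepted but, with foreign-key
# enforcement off (SQLite's default), it is effectively a comment,
# just as with MyISAM.
cur.execute("""CREATE TABLE shirt (
                   id INTEGER PRIMARY KEY,
                   style TEXT NOT NULL,
                   color TEXT NOT NULL,
                   owner INTEGER NOT NULL REFERENCES person(id))""")
cur.execute("INSERT INTO person (name) VALUES ('Lilliana Angelovska')")
owner = cur.lastrowid
cur.executemany("INSERT INTO shirt (style, color, owner) VALUES (?, ?, ?)",
                [("dress", "orange", owner), ("polo", "red", owner),
                 ("t-shirt", "white", owner)])
# The join matches rows on values alone; no constraint is involved.
rows = cur.execute("""SELECT s.style, s.color FROM person p
                      INNER JOIN shirt s ON s.owner = p.id
                      WHERE p.name LIKE 'Lilliana%'
                        AND s.color <> 'white'
                      ORDER BY s.id""").fetchall()
print(rows)  # [('dress', 'orange'), ('polo', 'red')]
```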
C Program to Convert Octal Number to Hexadecimal Number
Here is a C program to convert an octal number to the hexadecimal number system. The octal number system is a base-8 number system using digits 0 to 7, whereas the hexadecimal number system is a base-16 number system using digits 0 to 9 and A to F. Given an octal number as input from the user, convert it to a hexadecimal number.
For Example:
1652 in Octal is equivalent to 3AA in hexadecimal number system.
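As a quick cross-check of that example, Python's built-in base conversions agree (shown for illustration only; the program itself is the C version below):

```python
value = int("1652", 8)           # parse "1652" as octal -> 938 in decimal
hex_digits = format(value, "X")  # format in uppercase hexadecimal
print(hex_digits)                # 3AA
```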
C program to convert an octal number to a hexadecimal number
#include <stdio.h>

int main() {
    int octalDigitToBinary[8] = {0, 1, 10, 11, 100, 101, 110, 111};
    int hexDigitToBinary[16] = {0, 1, 10, 11, 100, 101, 110, 111, 1000,
                                1001, 1010, 1011, 1100, 1101, 1110, 1111};
    char hexDigits[16] = {'0', '1', '2', '3', '4', '5', '6', '7', '8',
                          '9', 'A', 'B', 'C', 'D', 'E', 'F'};
    char hexadecimalNumber[30];
    long long octalNumber, binaryNumber = 0, position;
    int digit, fourDigit, i;

    /* Take an octal number as input from user */
    printf("Enter an Octal Number\n");
    scanf("%lld", &octalNumber);   /* %lld matches long long */

    position = 1;
    /* Find the binary form of the octal number: each octal
       digit maps to three binary digits */
    while (octalNumber != 0) {
        digit = octalNumber % 10;
        binaryNumber = (octalDigitToBinary[digit] * position) + binaryNumber;
        octalNumber /= 10;
        position *= 1000;
    }

    /* Now convert the binary number to hexadecimal,
       four binary digits at a time */
    position = 0;
    while (binaryNumber != 0) {
        fourDigit = binaryNumber % 10000;
        for (i = 0; i < 16; i++) {
            if (hexDigitToBinary[i] == fourDigit) {
                hexadecimalNumber[position] = hexDigits[i];
                break;
            }
        }
        position++;
        binaryNumber /= 10000;
    }
    hexadecimalNumber[position] = '\0';

    /* Reverse the digits in place (strrev is non-standard) */
    for (i = 0; i < position / 2; i++) {
        char temp = hexadecimalNumber[i];
        hexadecimalNumber[i] = hexadecimalNumber[position - 1 - i];
        hexadecimalNumber[position - 1 - i] = temp;
    }

    printf("HexaDecimal Number = %s", hexadecimalNumber);
    return 0;
}
Output
Enter an Octal Number
1652
HexaDecimal Number = 3AA
Enter an Octal Number
1234
HexaDecimal Number = 29C
Asked by minerals
Python Question
What is Python's heapq module?
I tried "heapq" and arrived at the conclusion that my expectations differ from what I see on the screen. I need somebody to explain how it works and where it can be useful.
In the book Python Module of the Week, under paragraph 2.2 Sorting, it is written:
If you need to maintain a sorted list as you add and remove values,
check out heapq. By using the functions in heapq to add or remove
items from a list, you can maintain the sort order of the list with
low overhead.
Here is what I do and get.
import heapq
heap = []
for i in range(10):
heap.append(i)
heap
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
heapq.heapify(heap)
heapq.heappush(heap, 10)
heap
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
heapq.heappop(heap)
0
heap
[1, 3, 2, 7, 4, 5, 6, 10, 8, 9] <<< Why the list does not remain sorted?
heapq.heappushpop(heap, 11)
1
heap
[2, 3, 5, 7, 4, 11, 6, 10, 8, 9] <<< Why is 11 put between 4 and 6?
So, as you see the "heap" list is not sorted at all, in fact the more you add and remove the items the more cluttered it becomes. Pushed values take unexplainable positions.
What is going on?
Answer
The heapq module maintains the heap invariant, which is not the same thing as maintaining the actual list object in sorted order.
Quoting from the heapq documentation:
Heaps are binary trees for which every parent node has a value less than or equal to any of its children. This implementation uses arrays for which heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2] for all k, counting elements from zero. For the sake of comparison, non-existing elements are considered to be infinite. The interesting property of a heap is that its smallest element is always the root, heap[0].
This means that it is very efficient to find the smallest element (just take heap[0]), which is great for a priority queue. After that, the next 2 values will be larger (or equal) than the 1st, and the next 4 after that are going to be larger than their 'parent' node, then the next 8 are larger, etc.
You can read more about the theory behind the datastructure in the Theory section of the documentation. You can also watch this lecture from the MIT OpenCourseWare Introduction to Algorithms course, which explains the algorithm in general terms.
A heap can be turned back into a sorted list very efficiently:
def heapsort(heap):
return [heapq.heappop(heap) for _ in range(len(heap))]
by just popping the next element from the heap. Using sorted(heap) should be faster still, however, as the TimSort will take advantage of the partial ordering already present in a heap.
You'd use a heap if you are only interested in the smallest value, or the first n smallest values, especially if you are interested in those values on an ongoing basis; adding new items and removing the smallest is very efficient indeed, more so than resorting the list each time you added a value.
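To make the difference between heap order and sorted order concrete, here is a short self-contained example (illustrative only):

```python
import heapq

heap = [9, 4, 7, 1, 3, 8]
heapq.heapify(heap)

# The invariant only promises that heap[0] is the smallest element;
# the rest of the list is not fully sorted.
assert heap[0] == 1

# Popping repeatedly still yields the values in ascending order,
# because each heappop restores the invariant.
result = [heapq.heappop(heap) for _ in range(len(heap))]
print(result)  # [1, 3, 4, 7, 8, 9]
```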
Single Responsibility and Loose Coupling Principles
The applications we develop are made up of components that work together, each fulfilling a different responsibility. To minimize the difficulties encountered during development, maintenance, and update processes, the application architecture should be designed so that each component does a single job and components communicate with one another through loose relationships. Each component having a single job is called the Single Responsibility Principle; loose coupling between components is called the Loose Coupling Principle.
public class UserService
{
    public void Register(string email, string name)
    {
        try
        {
            // Save User
            // Send Mail
            _smtpClient.SendMail("success");
        }
        catch (Exception ex)
        {
            // Error Log
            File.Write(ex.Message);
        }
    }
}
In the example above, the UserService class has taken on the responsibilities of sending mail and logging errors in addition to saving the user. This violates the single responsibility principle: whenever any one of these responsibilities changes, the application has to be recompiled. What should happen instead is that the responsibilities are split into separate, reusable pieces.
public class FileLoger
{
    public static void WriteLog(string log) { }
}

public class MessageSender
{
    public static void Send(string message) { }
}

public class UserService
{
    public void Register(string email, string name)
    {
        try
        {
            // Save User
            MessageSender.Send("Success");
        }
        catch (Exception ex)
        {
            FileLoger.WriteLog(ex.Message);
        }
    }
}
In this way, single responsibility is achieved by delegating the responsibilities of the UserService class to other classes. However, the UserService class is now dependent on the MessageSender and FileLoger classes. This situation is called Tightly Coupled. We can solve this problem by using interfaces.
public interface ILoger
{
    void Write(string Exception);
}

public interface IMessageSender
{
    void Send(string Message);
}

public class FileLoger : ILoger
{
    public void Write(string Exception)
    {
        Console.WriteLine("File Log = " + Exception);
    }
}

public class DbLoger : ILoger
{
    public void Write(string Exception)
    {
        Console.WriteLine("Db Log = " + Exception);
    }
}

public class MailSender : IMessageSender
{
    public void Send(string Message)
    {
        Console.WriteLine("Email Message = " + Message);
    }
}

public class SmsSender : IMessageSender
{
    public void Send(string Message)
    {
        Console.WriteLine("Sms Message = " + Message);
    }
}

public class UserService
{
    private ILoger _loger;
    private IMessageSender _messageSender;

    public UserService(ILoger loger, IMessageSender messageSender)
    {
        _loger = loger;
        _messageSender = messageSender;
    }

    public void Register(string email, string name)
    {
        try
        {
            // Save User
            // Send Mail
            _messageSender.Send("success");
        }
        catch (Exception ex)
        {
            // Error Log
            _loger.Write(ex.Message);
        }
    }
}
static void Main(string[] args)
{
    var mailSender = new MailSender();
    var smsSender = new SmsSender();
    var fileLoger = new FileLoger();
    var dbLoger = new DbLoger();

    var userService = new UserService(fileLoger, mailSender);
    userService.Register("[email protected]", "password1");

    userService = new UserService(dbLoger, smsSender);
    userService.Register("[email protected]", "password2");
}
As can be seen, since the UserService class knows its dependencies only through interfaces, it no longer depends on the lower-level classes. However, when one of the components that make up the application changes, the application still has to be recompiled. We will examine the solution to this problem in the next article.
Wishing you healthy and peaceful days.
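The same constructor-injection idea can be sketched in Python. This is an illustrative outline only; the class and method names mirror the C# example, and Python's duck typing stands in for the explicit interfaces:

```python
class FileLogger:
    def write(self, message):
        print("File Log = " + message)

class SmsSender:
    def send(self, message):
        print("Sms Message = " + message)

class UserService:
    # Dependencies arrive through the constructor, so UserService relies
    # only on the write()/send() behaviour, not on concrete classes.
    def __init__(self, logger, sender):
        self._logger = logger
        self._sender = sender

    def register(self, email, name):
        try:
            # save user ...
            self._sender.send("success")
        except Exception as exc:
            self._logger.write(str(exc))

UserService(FileLogger(), SmsSender()).register("user@example.com", "user")
```

Swapping FileLogger for a database logger (or SmsSender for a mail sender) requires no change to UserService itself.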
Blue Diary
A lightweight & effective Todo app made with Flutter. Supports English and Korean.
Blue-Diary
Usage
You can build and run this app by yourself. You'll need Git, Flutter,and Android Studio installed first. After that, clone this project by running command:
$ git clone https://github.com/giantsol/Blue-Diary.git
Open cloned directory with Android Studio and it'll notify you to run Packages get to install dependencies. Do that.
Lastly, when you try to run the project by pressing Run button at the top, build will fail because this app uses Sendgrid to send emails in SettingsBloc file, and SENDGRID_AUTHORIZATION constant isn't git-controlled.
You can solve this in 2 ways:
1. You can follow Sendgrid guide and assign your own token to SENDGRID_AUTHORIZATION constant:
// Create lib/Secrets.dart file
const SENDGRID_AUTHORIZATION = 'Bearer <<YOUR API KEY>>';
2. Just replace SENDGRID_AUTHORIZATION with ''. In this case, email sending won't function, but other app functions will work just fine. In SettingsBloc file:
headers: {
HttpHeaders.authorizationHeader: SENDGRID_AUTHORIZATION,
HttpHeaders.contentTypeHeader: 'application/json',
},
Replace above code with below:
headers: {
HttpHeaders.authorizationHeader: '',
HttpHeaders.contentTypeHeader: 'application/json',
},
Press Run button again, and it should build fine.
If you still can't run it, please leave Feedback!
Architecture
This app is based on BLoC pattern, together with my own architectural practices.
Inside the lib folder, there are three main folders:
1. data: This folder contains Dart files that actually update/fetch data from Preferences, Databases, or Network (although we don't use Network here). Most of the files here are implementations of Repository interface declared in domain/repository folder.
2. domain: This folder contains the so called 'Business Logic' of this app. It is further divided into three main folders:
• entity: contains pure data classes such as ToDo and Category.
• repository: contains interfaces defining functions that update/fetch data. Actual implementations are located in data folder.
• usecase: contains per-screen business logics that utilizes several repositories to achieve each screen's needs. This is the layer that presentation has access to to utilize app data. For instance, WeekScreen uses (well, actually WeekBloc uses) WeekUsecases to interact with data underneath without directly touching repositories.
3. presentation: This folder contains Screens, Blocs and States that are used to display UI. It is divided into further directories that correspond to each screens in the app.
• **Screen: where Widget's build method is called to build the actual UI shown to the user. UI is determined by values inside State, and any interactions users make (e.g. clicking a button) are delegated to corresponding Blocs.
• **Bloc: what this basically does is "User does something (e.g. click a button)" -> "Set/Get data using corresponding usecase and update the values inside State obect" -> "Notify Screen that State has changed and you have to rebuild".
• **State: holds all the information Screen needs to draw UI. For instance, currentDate, todos, and isLocked kinds of things.
The three directories above are divided to follow Uncle Bob's Clean Architecture pattern as closely as possible. Any critiques are highly welcome.
Besides these directories are flat Dart files inside lib folder:
1. AppColors.dart: just simple color constants.
2. Delegators.dart: I used delegators when children needed to call parent's methods. However, as I've become more familiar with Flutter now, I guess ancestorStateOfType can just do that job... researching on it!
3. Dependencies.dart: contains singleton objects such as repositories and usecases. Basically, it enables a very simple injection pattern like dependencies.weekUsecases as in WeekBloc.dart.
4. Localization.dart: where localization texts are declared.
5. Main.dart: the main entry point of this app.
6. Utils.dart: Utils (duh).
If you have any questions on why the heck I've done something strange, or have any suggestions to make this app better, please do contact me as shown in Feedback. Thank you!
Download
Get it on Google Play
iOS version, not yet.
GitHub
https://github.com/giantsol/Blue-Diary
Q. 20
In a ΔABC, ∠C = 3∠B = 2(∠A + ∠B); then ∠B = ?
A. 20°
B. 40°
C. 60°
D. 80°
Answer :
Let us assume ∠A = x° and ∠B = y°.
Then ∠C = 3∠B = (3y)°.
We know that the sum of the angles of a triangle is 180°:
∠A + ∠B + ∠C = 180°
x + y + 3y = 180
x + 4y = 180 (i)
Also, we have ∠C = 2(∠A + ∠B):
3y = 2(x + y)
2x – y = 0 (ii)
Now, multiplying (ii) by 4, we get:
8x – 4y = 0 (iii)
And adding (i) and (iii), we get
9x = 180
x = 20
Putting the value of x in (i), we get
20 + 4y = 180
4y = 160
y = 40
∠B = y = 40°
Hence, option B is correct.
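A quick arithmetic check of the elimination above (plain Python, for illustration):

```python
# System: (i) x + 4y = 180 and (ii) 2x - y = 0.
# From (ii), y = 2x; substituting into (i) gives x + 8x = 180.
x = 180 / 9
y = 2 * x
assert (x, y) == (20, 40)       # so angle B = 40 degrees
assert x + y + 3 * y == 180     # A + B + C sums to 180
print("B =", y)
```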
Article
Posted 6 Apr 2005
Drawing UPC-A Barcodes with C#
Updated 13 Apr 2005
Demonstrates a method to draw UPC-A barcodes using C#.
Introduction
On almost every product sold, there is typically a UPC barcode of some type which is used to identify the product. The most common barcode used, in the United States and Canada, is the UPC-A barcode. In this article, we will look at the UPC-A specification and examine some code that can produce UPC-A barcodes.
UPC-A Background
The UPC-A barcode is composed of 12 digits which are made up of the following sections:
• the first digit is the product type,
• the next five digits are the manufacturer code,
• the next five digits are the product code,
• the last digit is the checksum digit.
Product Type
The product type is a one digit number which is used to describe the type of product.
Product Type Number | Description
0 | Regular UPC codes
1 | Reserved
2 | Weight items marked at the store.
3 | National Drug/Health-related code.
4 | No format restrictions, in-store use on non-food items.
5 | Coupons
6 | Reserved
7 | Regular UPC codes
8 | Reserved
9 | Reserved
Manufacturer code and product code
The manufacturer code is assigned by the Uniform Code Council, and is used to uniquely identify the product's manufacturer. The product code is used to identify the product.
Checksum digit
The checksum digit is calculated using the product type, manufacturer's code, and the product code. The odd numbers are multiplied by 3 and added to the sum, while the even numbers are simply added to the sum. The modulus of 10 is then taken of the summed total. This is subtracted from 10 and the modulus of 10 is taken again.
For example: UPC-A 01234567890
Product Type : 0
Manufacturer's Code : 12345
Product Code : 67890
The first digit, '0', is in an odd position, so multiply it by 3; the second digit, 1, is in an even position, so just add it, and so on...
(0 * 3) + 1 + (2 * 3) + 3 + (4 * 3) + 5 + (6 * 3) + 7 + (8 * 3) + 9 + (0 * 3) = 85
85 % 10 = 5
( ( 10 - 5 ) % 10 ) = 5
Symbol size
The specifications for the UPC-A barcode specify the nominal size of a UPC symbol as 1.469" wide and 1.02" high.
Digit patterns
Each digit in a UPC-A bar code is composed of a series of two spaces and two bars. Each digit is drawn within a space that is 7 modules wide. In addition to the 12 digits, which make up a UPC-A barcode, the barcode symbol also has two quiet zones, a lead block, a separator, and a trailing block. Each quiet zone is 9 modules wide, the lead and trailing blocks are a series of lines and spaces in the format of bar, space, bar. The separator is signified by the sequence space/bar/space/bar/space.
Special Symbol | Pattern
Quiet Zone | 000000000
Lead / Trailer | 101
Separator | 01010
In addition to the special symbol patterns listed above, the UPC-A barcode symbol uses two distinct digit patterns as well, the Left Digit pattern and the Right Digit pattern. The Left Digit pattern is used to draw the product type and the manufacturer code. The Right Digit pattern is used to draw the product code and the checksum digit. The Left Digit pattern starts with spaces and the Right Digit pattern starts with bars (see table below).
Number | Left Digits | Right Digits
0 | 0001101 | 1110010
1 | 0011001 | 1100110
2 | 0010011 | 1101100
3 | 0111101 | 1000010
4 | 0100011 | 1011100
5 | 0110001 | 1001110
6 | 0101111 | 1010000
7 | 0111011 | 1000100
8 | 0110111 | 1001000
9 | 0001011 | 1110100
Using the code
First, we will examine how to use the UpcA class, and then we'll examine how the UpcA class works.
Using the UpcA Class
The code excerpt below uses the UpcA class to draw a UPC-A barcode in a picture box control:
private void DrawUPC( )
{
System.Drawing.Graphics g = this.picBarcode.CreateGraphics( );
g.FillRectangle(new System.Drawing.SolidBrush(
System.Drawing.SystemColors.Control),
new Rectangle(0, 0, picBarcode.Width, picBarcode.Height));
// Create an instance of the UpcA Class.
upc = new UpcA( );
upc.ProductType = "0";
upc.ManufacturerCode = "21200";
upc.ProductCode = "10384";
upc.Scale =
(float)Convert.ToDecimal( cboScale.Items [cboScale.SelectedIndex] );
upc.DrawUpcaBarcode( g, new System.Drawing.Point( 0, 0 ) );
g.Dispose( );
}
The first step for the DrawUPC function is to create an instance of the UpcA class, and then set the product type, manufacturer code, the product code, and the scale factor properties (the check sum will be calculated by the UpcA class). Once these properties are set, a call to the DrawUpcaBarcode function is made, passing a Graphics object and a Point, which indicates the starting position to draw at, this will cause the barcode to be drawn in the picture box starting at point (0, 0).
The UpcA Class
The most significant variables are listed below:
// This is the nominal size recommended by the UCC.
private float _fWidth = 1.469f;
private float _fHeight = 1.02f;
private float _fFontSize = 8.0f;
private float _fScale = 1.0f;
// Left Hand Digits.
private string [] _aLeft = { "0001101", "0011001", "0010011", "0111101",
"0100011", "0110001", "0101111", "0111011",
"0110111", "0001011" };
// Right Hand Digits.
private string [] _aRight = { "1110010", "1100110", "1101100", "1000010",
"1011100", "1001110", "1010000", "1000100",
"1001000", "1110100" };
private string _sQuiteZone = "0000000000";
private string _sLeadTail = "101";
private string _sSeparator = "01010";
The _fWidth, _fHeight, and the _fScale variables are initialized with the nominal size recommended by the Uniform Code Council. When the barcode is rendered, its actual size will be determined by the nominal size, and the scale factor, as discussed in the Symbol Size section of this article. The variables _aLeft, _aRight, _sQuiteZone, _sLeadTail, and _sSeparator are all string representations of the bar/space graphics, which represent the various parts of a UPC-A barcode. Essentially, a '1' represents a bar and a '0' represents a space, so _sSeparator would cause a space-bar-space-bar-space to be rendered. An alternate method to using a string could be to use a binary representation, where a 0 bit would be space and a 1 bit is a bar.
There are three primary functions which provide the majority of the functionality for the UpcA class. The workhorse of these functions is DrawUpcaBarcode. DrawUpcaBarcode uses the other two functions as helper functions. The two helper functions are: CalculateChecksumDigit, ConvertToDigitPatterns and these will be discussed first. There is also a fourth function, CreateBitmap, which provides an easy means for creating a bitmap image.
DrawUpcaBarcode first calls the CalculateChecksumDigit helper function, which uses the product type, manufacturer code, and product code to calculate the barcode's check sum.
public void CalculateChecksumDigit( )
{
string sTemp = this.ProductType + this.ManufacturerCode + this.ProductCode;
int iSum = 0;
int iDigit = 0;
// Calculate the checksum digit here.
for( int i = 1; i <= sTemp.Length; i++ )
{
iDigit = Convert.ToInt32( sTemp.Substring( i - 1, 1 ) );
if( i % 2 == 0 )
{ // even
iSum += iDigit * 1;
}
else
{ // odd
iSum += iDigit * 3;
}
}
int iCheckSum = ( 10 - ( iSum % 10 ) ) % 10;
this.ChecksumDigit = iCheckSum.ToString( );
}
The CalculateChecksumDigit function calculates the check sum using the method discussed in the Checksum Digit section listed above.
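The same computation is easy to replicate outside of C#. The Python sketch below (an illustration, using the sample values from the DrawUPC excerpt above) mirrors CalculateChecksumDigit: odd positions are weighted by 3, even positions by 1, and the check digit is whatever brings the total up to a multiple of ten:

```python
def upca_checksum(product_type: str, manufacturer: str, product: str) -> str:
    digits = product_type + manufacturer + product  # the 11 payload digits
    total = 0
    for i, ch in enumerate(digits, start=1):
        # Odd positions (1st, 3rd, ...) are multiplied by 3, even positions by 1.
        total += int(ch) * (3 if i % 2 else 1)
    return str((10 - total % 10) % 10)

# Same values as the DrawUPC sample: type "0", manufacturer "21200", product "10384".
assert upca_checksum("0", "21200", "10384") == "1"
```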
The second helper function used is the ConvertToDigitPatterns function. This function takes the individual numbers of the manufacturer code, and the product number, and converts them to the string representation of the barcode graphics.
private string ConvertToDigitPatterns( string inputNumber, string [] patterns )
{
System.Text.StringBuilder sbTemp = new StringBuilder( );
int iIndex = 0;
for( int i = 0; i < inputNumber.Length; i++ )
{
iIndex = Convert.ToInt32( inputNumber.Substring( i, 1 ) );
sbTemp.Append( patterns[iIndex] );
}
return sbTemp.ToString( );
}
The ConvertToDigitPatterns function requires two parameters:
• inputNumber
• patterns
The inputNumber will be either the manufacturer number or the product number, and the patterns will either be the _aLeft or the _aRight array depending on whether the inputNumber is the manufacturer number or the product number.
Finally, the workhorse: DrawUpcaBarcode handles the rendering of the barcode graphics and requires two parameters:
• g
• pt
This function begins by determining the width and height for the barcode by scaling the nominal width and height by the scale factor. The lineWidth is based upon the total number of modules required to render a UPC-A barcode. The total number of modules, 113, can be worked out from an example, here the sample number used earlier:

UPC-A code - 021200103841

Barcode Section      Numeric Value       Graphic Representation               Number of Modules
Quiet Zone           N/A                 000000000                            9 modules
Lead                 N/A                 101                                  3 modules
Product Type         1 digit = "0"       0001101                              7 modules
Manufacturer Number  5 digits = "21200"  00100110011001001001100011010001101  5 digits * 7 modules = 35 modules
Separator            N/A                 01010                                5 modules
Product Number       5 digits = "10384"  11001101110010100001010010001011100  5 digits * 7 modules = 35 modules
Check Sum            1 digit = "1"       1100110                              7 modules
Trailer              N/A                 101                                  3 modules
Quiet Zone           N/A                 000000000                            9 modules
So, to determine the total module width, simply add the individual parts: 9 + 3 + 7 + 35 + 5 + 35 + 7 + 3 + 9 = 113.
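As a cross-check, the full module string for the sample values from the DrawUPC excerpt (type "0", manufacturer "21200", product "10384", check digit "1") can be assembled in a few lines of Python, mirroring the AppendFormat call in DrawUpcaBarcode; with nine-module quiet zones it comes out to exactly 113 modules:

```python
# Digit patterns as in the UpcA class ('0' = space, '1' = bar).
LEFT = ["0001101", "0011001", "0010011", "0111101", "0100011",
        "0110001", "0101111", "0111011", "0110111", "0001011"]
RIGHT = ["1110010", "1100110", "1101100", "1000010", "1011100",
         "1001110", "1010000", "1000100", "1001000", "1110100"]
QUIET, LEAD_TAIL, SEPARATOR = "0" * 9, "101", "01010"

def encode(number: str, patterns: list) -> str:
    # Look up the 7-module pattern for each digit and join them.
    return "".join(patterns[int(d)] for d in number)

modules = (QUIET + LEAD_TAIL
           + encode("0", LEFT)        # product type
           + encode("21200", LEFT)    # manufacturer number
           + SEPARATOR
           + encode("10384", RIGHT)   # product number
           + encode("1", RIGHT)       # check digit
           + LEAD_TAIL + QUIET)

assert len(modules) == 9 + 3 + 7 + 35 + 5 + 35 + 7 + 3 + 9 == 113
```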
public void DrawUpcaBarcode(System.Drawing.Graphics g,System.Drawing.Point pt)
{
float width = this.Width * this.Scale;
float height = this.Height * this.Scale;
// A upc-a excluding 2 or 5 digit supplement information
// should be a total of 113 modules wide.
// Supplement information is typically
// used for periodicals and books.
float lineWidth = width / 113f;
// Save the GraphicsState.
System.Drawing.Drawing2D.GraphicsState gs = g.Save( );
// Set the PageUnit to Inch because all of
// our measurements are in inches.
g.PageUnit = System.Drawing.GraphicsUnit.Inch;
// Set the PageScale to 1, so an inch will represent a true inch.
g.PageScale = 1;
System.Drawing.SolidBrush brush =
new System.Drawing.SolidBrush( System.Drawing.Color.Black );
float xPosition = 0;
System.Text.StringBuilder strbUPC = new System.Text.StringBuilder( );
float xStart = pt.X;
float yStart = pt.Y;
float xEnd = 0;
System.Drawing.Font font =
new System.Drawing.Font( "Arial", this._fFontSize * this.Scale );
// Calculate the Check Digit.
this.CalculateChecksumDigit( );
// Build the UPC Code.
strbUPC.AppendFormat( "{0}{1}{2}{3}{4}{5}{6}{1}{0}",
this._sQuiteZone, this._sLeadTail,
ConvertToDigitPatterns( this.ProductType, this._aLeft ),
ConvertToDigitPatterns( this.ManufacturerCode, this._aLeft ),
this._sSeparator,
ConvertToDigitPatterns( this.ProductCode, this._aRight ),
ConvertToDigitPatterns( this.ChecksumDigit, this._aRight ) );
string sTempUPC = strbUPC.ToString( );
float fTextHeight = g.MeasureString( sTempUPC, font ).Height;
// Draw the barcode lines.
for( int i = 0; i < strbUPC.Length; i++ )
{
if( sTempUPC.Substring( i, 1 ) == "1" )
{
if( xStart == pt.X )
xStart = xPosition;
// Save room for the UPC number below the bar code.
if( ( i > 19 && i < 56 ) || ( i > 59 && i < 95 ) )
// Draw space for the number
g.FillRectangle( brush, xPosition, yStart,
lineWidth, height - fTextHeight );
else
// Draw a full line.
g.FillRectangle( brush, xPosition, yStart, lineWidth, height );
}
xPosition += lineWidth;
xEnd = xPosition;
}
// Draw the upc numbers below the line.
xPosition = xStart - g.MeasureString( this.ProductType, font ).Width;
float yPosition = yStart + ( height - fTextHeight );
// Draw Product Type.
g.DrawString( this.ProductType, font, brush,
new System.Drawing.PointF( xPosition, yPosition ) );
// Each digit is 7 modules wide, therefore the MFG_Number
// is 5 digits wide so
// 5 * 7 = 35, then add 3 for the LeadTrailer
// Info and another 7 for good measure,
// that is where the 45 comes from.
xPosition +=
g.MeasureString( this.ProductType, font ).Width + 45 * lineWidth -
g.MeasureString( this.ManufacturerCode, font ).Width;
// Draw MFG Number.
g.DrawString( this.ManufacturerCode, font, brush,
new System.Drawing.PointF( xPosition, yPosition ) );
// Add the width of the MFG Number and 5 modules for the separator.
xPosition += g.MeasureString( this.ManufacturerCode, font ).Width +
5 * lineWidth;
// Draw Product ID.
g.DrawString( this.ProductCode, font, brush,
new System.Drawing.PointF( xPosition, yPosition ) );
// Each digit is 7 modules wide, therefore
// the Product Id is 5 digits wide so
// 5 * 7 = 35, then add 3 for the LeadTrailer
// Info, + 8 more just for spacing
// that is where the 46 comes from.
xPosition += 46 * lineWidth;
// Draw Check Digit.
g.DrawString( this.ChecksumDigit, font, brush,
new System.Drawing.PointF( xPosition, yPosition ) );
// Restore the GraphicsState.
g.Restore( gs );
}
The function uses CalculateChecksumDigit to calculate the correct check sum digit, and then uses ConvertToDigitPatterns to convert the various numeric parts of the UPC-A barcode number into a string representation. Once the number has been converted, the code walks that string to render the barcode: a '1' causes a rectangle (a bar) to be drawn, and a '0' causes the code to skip drawing. When drawing a rectangle, the code also considers whether the rectangle needs to be shortened to leave space for the manufacturer's number and the product number. Once the bars are rendered, the code determines the positions of, and draws, the product type number, the manufacturer's number, the product number, and the check sum digit.
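The bars-from-a-string idea is easy to visualize without GDI+ as well. This small Python sketch (purely illustrative, not part of the article's code) renders any module string as a line of block characters:

```python
def ascii_render(modules: str) -> str:
    # '1' becomes a full block (a bar), '0' becomes a space.
    return "".join("\u2588" if m == "1" else " " for m in modules)

# The lead/trailer guard pattern "101" renders as bar-space-bar.
assert ascii_render("101") == "\u2588 \u2588"
# The separator "01010" renders as space-bar-space-bar-space.
assert ascii_render("01010") == " \u2588 \u2588 "
```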
The CreateBitmap function simply creates a Bitmap object, and uses the DrawUpcaBarcode function to render the barcode to the Bitmap object, and then it returns the Bitmap.
public System.Drawing.Bitmap CreateBitmap( )
{
float tempWidth = ( this.Width * this.Scale ) * 100 ;
float tempHeight = ( this.Height * this.Scale ) * 100;
System.Drawing.Bitmap bmp =
new System.Drawing.Bitmap( (int)tempWidth, (int)tempHeight );
System.Drawing.Graphics g = System.Drawing.Graphics.FromImage( bmp );
this.DrawUpcaBarcode( g, new System.Drawing.Point( 0, 0 ) );
g.Dispose( );
return bmp;
}
Points of interest
The United States and Canada are the only two countries that use the UPC-A barcode system; the rest of the world uses the EAN barcode system. So, as of January 1, 2005, the Uniform Code Council (UCC) has mandated that all U.S. and Canadian point-of-sale companies must be able to scan and process EAN-8 and EAN-13 barcodes in addition to UPC-A barcodes (this is called the 2005 Sunrise). The UCC has also begun to push a new bar code system known as Global Trade Item Numbers (GTINs); basically, a GTIN is a 14-digit number which conforms to the UPC-A and EAN-13 symbol standards but uses additional digits to store country-of-origin information. If you want more information, go to UCC: 2005 Sunrise.
History
• Version 1.0 - Initial application.
License
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
About the Author
rainman_63, Web Developer, United States
Homework Help: Integrating xy
1. Nov 6, 2008 #1
1. The problem statement, all variables and given/known data
It is simple: find the antiderivative of 2xy.
2. Relevant equations
3. The attempt at a solution
I am inclined to say that it equals (xy)^2 + c, but can't help but feel I have left out something.
3. Nov 6, 2008 #2
rock.freak667
Homework Helper
What are you integrating with respect to? Are both x and y variables?
4. Nov 6, 2008 #3
Yes, they are both variables.
5. Nov 7, 2008 #4
HallsofIvy
Science Advisor
Then answer the question! You want to find the anti-derivative with respect to which variable?
[tex]\int 2xy dx= x^2y+ C[/tex]
[tex]\int 2xy dy= xy^2+ C[/tex]
Choose one!
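For anyone wanting to double-check an antiderivative like the ones above, differentiating the candidate numerically and comparing it with the integrand at a few sample points is a quick sanity test. A small Python sketch (my own illustration, not from the thread):

```python
def ddx(f, x, y, h=1e-6):
    # Central-difference approximation of the partial derivative with respect to x.
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

candidate = lambda x, y: x**2 * y    # proposed antiderivative of 2xy with respect to x
wrong     = lambda x, y: (x * y)**2  # the (xy)^2 guess from the first post
integrand = lambda x, y: 2 * x * y

for x, y in [(1.0, 2.0), (0.5, -3.0), (2.0, 0.25)]:
    # d/dx of x^2*y matches 2xy at every sample point...
    assert abs(ddx(candidate, x, y) - integrand(x, y)) < 1e-5

# ...while d/dx of (xy)^2 = 2xy^2 does not (here: 8 versus 4 at x=1, y=2).
assert abs(ddx(wrong, 1.0, 2.0) - integrand(1.0, 2.0)) > 1.0
```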
PerlMonks
Re: is it possible to use Perl to process Outlook emails?
by pileofrogs (Priest)
on Jul 26, 2011 at 20:10 UTC ( #916832=note: print w/replies, xml ) Need Help??
in reply to is it possible to use Perl to process Outlook emails?
I use Net::IMAP::Simple to do stuff with emails on our Exchange server.
When will this run? Is this something you want to run yourself, something you want to run periodically on its own (like once a day, even when you're out sick), or something that should process each email as it arrives? Is this something you're going to run on your desktop* or on a server somewhere?
*I'm defining "server" as a computer you don't turn off and "desktop" as one that you do turn off.
I usually have stuff like this run on a server with cron in unix or Scheduled Tasks in windows.
--Pileofrogs
For Latin Modern fonts, there are the packages lmodern and cfr-lm. Additionally, there are the font packages lm and lm-math. For a beginner, it is difficult to figure out the relations between these packages. The confusing thing is that lmodern documentation indirectly points to lm, but lm is "just" a font, not a thing directly being usable by \usepackage{lm}.
lm is a package which contains various things. Most importantly, it contains the Latin Modern fonts themselves, together with the files required to use them with TeX and friends. The fonts are provided in type1 format for use with TeX and pdfTeX, and in opentype format for use with XeTeX and LuaTeX, for example.
One element of the lm package is a set of support files for use of the type1 fonts with LaTeX or pdfLaTeX. This includes the lmodern.sty package which you use as \usepackage{lmodern}.
So far, so good.
Now, if you are using XeTeX or LuaTeX, then you may, if you wish, use lm-math which consists of an opentype maths font. unicode-math provides a means to use this. You don't have to do this - you can use the standard maths support - but you may.
If you are using TeX or pdfTeX, lm-math is irrelevant. You can't use it and you don't need it as lmodern already supports mathematics for these engines.
So far, that's all official support - or as official as it gets, anyway.
Now, if you are using TeX or pdfTeX with the type1 fonts, lmodern is somewhat limited. It supports only some of the features available in the fonts themselves. For example, it uses tabular, lining figures and, although you can access oldstyle numerals using special commands, these are still tabular and awkward to use. Moreover, there is no easy way to use italic small-caps, non-extended bold or upright italics, for example, as these are just not supported well by LaTeX by default. The variable width typewriter, the slashed zero and quotation sans are beyond reach and there is no easy, document-level command to access Latin Modern Dunhill.
For these engines, cfr-lm provides enhanced support. Insofar as possible, cfr-lm aims to provide access to everything in the fonts which might be useful through a fairly straightforward set of commands and options. cfr-lm is not just a package file, cfr-lm.sty. The bulk of cfr-lm consists of a set of TeX font files and LaTeX definition files. Essentially, these are files *.tfm, *.vf, *.fd and a new *.map file. This is all behind the scenes, though. All that matters to the end user is cfr-lm.sty and the documentation.
For example, you can pass options to the package saying whether you would prefer figures which are tabular or proportional, lining or oldstyle for each of typewriter, sans and serif. You can say which style of typewriter font you'd like. Moreover, you can switch between different styles within your document itself. For example, you can use oldstyle, proportional figures for text but switch to tabular, lining figures for a tabular.
If you don't want any of the features, use lmodern. Not only is that easier, it does not rely on virtual fonts which can be disadvantageous in some circumstances. (Don't ask me which circumstances - I haven't learnt this yet.)
If you want to use any of these features, however, cfr-lm will make life much easier. In some cases, it will make possible something which you could otherwise do only by creating the equivalent of cfr-lm yourself.
Note that cfr-lm uses just the same type1 fonts as lmodern. In addition, the support for maths is identical. cfr-lm just loads the maths support provided by lmodern. (The relevant parts of the package file are simply copied from lmodern.sty.
Note that cfr-lm will load fontenc with option T1. It will also load textcomp for access to the TS1 encoding.
Here's a sample:
cfr-lm sampler
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[rm={proportional,oldstyle},sf={proportional,oldstyle},tt={lining,tabular,monowidth}]{cfr-lm}
\begin{document}
1234567890\zeroslash (serif, oldstyle, proportional)
\textsf{1234567890\zeroslash} (sans, oldstyle, proportional)
\texttt{1234567890\zeroslash} (monowidth, tabular, lining typewriter)
\textpl{1234567890\zeroslash} (serif, proportional, lining)
\texttl{1234567890\zeroslash} (serif, tabular, lining)
\textsf{\tlstyle 1234567890\zeroslash} (sans, tabular, lining)
\textsl{This is oblique text.}
\textit{This is in regular italics.}
\textui{This text is in upright italics.}
\textsc{Here are some small-caps.}
\textsi{This is italic small-caps.}
Text weight and width (Medium)
\textsb{Text weight and width} (Bold)
\textbf{Text weight and width} (Bold Extended)
\textti{This is Latin Modern Dunhill.}
\texttt{Typewriter text.} (Monowidth)
\texttv{Typewriter text.} (Variable)
\end{document}
The documentation aims to be clear and comprehensive. If it is not, you could always try complaining to the package's maintainer.
texdoc cfr-lm
• Both answers here are good but I hope the OP accepts this one (can't do much better than the package author answering) :-) – Joseph Wright May 29 '15 at 5:57
• Docs for cfr-lm describe it as experimental. Is that comment now outdated? Also, if I'm fine with lmodern and use no new features, how sure is it that everything still works after switching to cfr-lm? – Blaisorblade Sep 24 '17 at 3:39
@Blaisorblade I had some reports a few years ago about inconveniences and updated the package. Nobody has contacted me since. So, either nobody now uses it or they haven't found the bugs yet. As far as I know, it works as advertised and I use it in almost every document I write. No guarantees. If you don't want the new features, there's no reason to use it. However, it is 99% sure it would still work if you switched, provided you set the relevant options if you want the output to look like lmodern. (cfr-lm has different defaults.) If you don't use the new features, it is 99.99%, – cfr Sep 24 '17 at 3:46
@Blaisorblade There are certain complications created by the limitations of NFSS, basically, which make it tricky to implement new font selection commands in ways which will always work as expected. cfr-lm uses nfssext-cfrwhich attempts to hide these complications from the user. Almost always, this works smoothly in practice, but there are some edge cases where it will do something strange. Basically, there are a couple of cases where it is hard to select a particular combination of font features because there are missing steps (i.e. no font exists) and accumulation fails. – cfr Sep 24 '17 at 3:51
@Blaisorblade But this is not something which I can fix, I don't think, or that anybody can fix - at least unless L3 comes up with some souped-up NFSS, which doesn't seem terribly likely. So suppose a font has a light condensed version, then nfssext-cfr provides macros to switch to light and to switch to condensed. This is fine if the font also provides light regular width and regular weight condensed - the commands can be combined in any order. If it has only one, the order of invocation is crucial. If it has neither, you have to resort to low-level font selection macros. – cfr Sep 24 '17 at 3:56
You shouldn't confuse a “CTAN package” with a “LaTeX package”. The former are sets of files that provide support for different kinds of TeX related objects, whereas a LaTeX package is a single file with extension .sty, possibly related to other support text files.
The lm and lm-math CTAN packages provide font files; the first one also contains lmodern.sty, a support LaTeX package for using those fonts within LaTeX. The second one just provides fonts in OpenType format, with no support LaTeX package (one can load unicode-math for using those fonts in XeLaTeX or LuaLaTeX).
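To make the lm-math route concrete, here is a minimal preamble sketch for XeLaTeX or LuaLaTeX (the font family names are the ones shipped with the lm and lm-math packages; adjust if your installation differs):

```latex
\documentclass{article}
\usepackage{unicode-math}          % loads fontspec as well
\setmainfont{Latin Modern Roman}   % text font from the lm package
\setmathfont{Latin Modern Math}    % maths font from the lm-math package
\begin{document}
The quadratic formula: \( x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \).
\end{document}
```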
Also cfr-lm is a CTAN package that provides several font files and a support LaTeX package called cfr-lm.sty.
You may have noticed that I use different markup for the two things: a CTAN package and a LaTeX package (a .sty file).
What's the difference between lmodern.sty and cfr-lm.sty? The latter comes with many font files in cfr-lm; they are virtual fonts that eventually map glyphs to fonts coming with lm, and it makes it easy to select among several different features not present in the basic lm distribution and lmodern.sty: old style and lining figures, proportional or tabular, oblique small caps, condensed sans serif, semi-bold weight. Read the documentation for more information.
• Great answer, I would have written the first part (but probably not as well) if I wasn’t so lazy. It has to be said that the word “package” is a little confusing here. – Arthur Reutenauer May 28 '15 at 22:42
@ArthurReutenauer Thanks. Yes, it's confusing; I blame David and the others of the LaTeX team for having chosen “package” for the .sty files. ;-) Don't forget the “TeX Live packages” which are yet another different beast. – egreg May 28 '15 at 22:47
• @egreg Could you comment on cfr-lm and LuaLaTex? Should that work? – koppor Mar 8 at 21:45
• @koppor cfr-lm is only for pdflatex. – egreg Mar 8 at 21:54
Pirandello
A better Stream abstraction for node.js. npm install pirandello
Use
There's a bunch of ways to create a Stream.
Creates a stream that sends a single value and ends.
Stream.of("hello world")
//> hello world
Just ends immediately.
Sends each item in the array, then ends.
Stream.fromArray(["hello ","world"])
//> hello world
Takes a node Readable stream and sends it one data chunk at a time.
Stream.fromReadable(fs.createReadStream('data.txt'))
To construct your own Streams, call the constructor (new is optional) with a function that takes two arguments. Each argument is a function: call the first one with an object to send it over the stream, call the second one to end the stream. From the implementation of fromArray:
Stream.fromArray = function(arr) {
return Stream(function(next,end) {
arr.forEach(next)
end()
})
}
Pirandello Streams are immutable; the methods below return a new stream. Streams are a fantasy-land MonadPlus.
Returns a new stream that sends the contents of the current stream followed by the contents of the other stream.
Stream.of("hello ").concat(Stream.of("world"))
//> hello world
Takes a function that operates on each chunk and should return a new Stream. Useful for concatenating lists.
var fs = require('fs');

function read(f) {
return Stream.fromReadable(fs.createReadStream(f));
}
Stream.fromArray(process.argv).chain(read).pipe(process.stdout)
Maps over the chunk.
Stream.fromArray(["hello ","world"]).map(function(s) {return s.toUpperCase()})
//> HELLO WORLD
Applies a stream of functions to a stream of inputs, returns a stream of outputs.
Stream.fromArray([
function(s) { return s.toUpperCase(); },
function(s) { return s.toLowerCase(); },
function(s) { return s.substr(0,5); }
]).ap(Stream.of("Hello World "))
//> HELLO WORLD hello world Hello
Converts a Stream of strings into a Stream of individual characters.
Given a number, returns a stream of the first n chunks. Useful with toCharstream.
Stream.of("hello world").toCharstream().take(5)
//> hello
Given a number, returns a stream without the first n chunks. Useful with toCharstream.
Stream.of("hello world").toCharstream().drop(6)
//> world
Mostly compatible with Readable::pipe, sends every chunk to the destination Writable.
When you really need low-level chunk functionality (maybe you're extending Pirandello? Good for you!), generator is what you want. It is, in fact, the function passed in when the Stream is instantiated; call it with two arguments: one function to deal with each chunk, and one which is called at the end. From the implementation of pipe:
Stream.prototype.pipe = function(dest) {
this.generator(
    function(chunk) { dest.write(chunk) },
    function() { dest.end() }
)
}
Why?
I tried to make Readables look nice, I really did. The API is ugly, and the abstraction is leaky. Here's to a fresh start.
Licence
MIT
Padrino
Application Helpers
Output Helpers
Output helpers are a collection of important methods for managing, capturing and displaying output in various ways and is used frequently to support higher-level helper functions. There are three output helpers worth mentioning: content_for, capture_html, and concat_content
The content_for functionality supports capturing content and then rendering this into a different place such as within a layout. One such popular example is including assets onto the layout from a template:
# app/views/site/index.erb
# ...
<% content_for :assets do %>
<%= stylesheet_link_tag 'index', 'custom' %>
<% end %>
# ...
Added to a template, this will capture the includes from the block and allow them to be yielded into the layout:
# app/views/layout.erb
<head>
<title>Example</title>
<%= stylesheet_link_tag 'style' %>
<%= yield_content :assets %>
</head>
This will automatically insert the contents of the block (in this case a stylesheet include) into the location the content is yielded within the layout.
You can also check if a content_for block exists for a given key using content_for?:
# app/views/layout.erb
<% if content_for?(:assets) %>
<div><%= yield_content :assets %></div>
<% end %>
The capture_html and the concat_content methods allow content to be manipulated and stored for use in building additional helpers accepting blocks or displaying information in a template. One example is the use of these in constructing a simplified form_tag helper which accepts a block.
# form_tag '/register' do ... end
def form_tag(url, options={}, &block)
# ... truncated ...
inner_form_html = capture_html(&block)
concat_content '<form>' + inner_form_html + '</form>'
end
This will capture the template body passed into the form_tag block and then append the content to the template through the use of concat_content. Note that these helpers have been built to work for both Haml and ERB templates using the same syntax.
List of Output Helpers
• content_for(key, &block)
• Capture a block of content to be rendered at a later time.
• Existence can be checked using the content_for?(key) method.
• content_for(:head) { ...content... }
• Also supports arguments passed to the content block
• content_for(:head) { |param1, param2| ...content... }
• yield_content(key, *args)
• Render the captured content blocks for a given key.
• yield_content :head
• Also supports arguments yielded to the content block
• yield_content :head, param1, param2
• capture_html(*args, &block)
• Captures the html from a block of template code for erb or haml
• capture_html(&block) => "...html..."
• concat_content(text="")
• Outputs the given text to the templates buffer directly in erb or haml
• concat_content("This will be output to the template buffer in erb or haml")
last updated: 2022-02-22
ctors/filesystem-cache-bundle
Adds a filesystem cache option
1.0.0 2013-10-05 17:55 UTC
README
This is a Symfony 2 Bundle that adds a filesystem cache.
Installing via Composer
{
"require": {
"ctors/filesystem-cache-bundle": "dev-master"
}
}
Using and Setting Up
AppKernel.php
public function registerBundles() {
$bundles = array(
new Ctors\FilesystemCacheBundle\CtorsFilesystemCacheBundle(),
);
}
Service usage
The cache service's name is ctors.cache. You can alias it in your project's config.yml:
service:
cache:
alias: ctors.cache
The cache implements the Doctrine CacheProvider interface. Simple usage example:
/** @var \Doctrine\Common\Cache\CacheProvider $cache */
$cache = $this->get('cache');
if ($cache->contains($searchKey)) {
$value = unserialize($cache->fetch($searchKey));
} else {
$value = new SomeValue();
$cache->save($searchKey, serialize($value));
}
Command usage
There is a command that prints the cache usage:
$ app/console ctors:filesystemcache:stats
Filesystem cache statistics (05/10/2013 15:07:09)
- number of objects 3
- disk usage 28K
You can also watch the cache, it will update any time a resource is added or removed:
$ app/console ctors:filesystemcache:stats -w
Filesystem cache statistics (05/10/2013 15:07:11)
- number of objects 3
- disk usage 28K
Filesystem cache statistics (05/10/2013 15:07:18)
- number of objects 4
- disk usage 44K
Filesystem cache statistics (05/10/2013 15:07:22)
- number of objects 5
- disk usage 48K
Notes
I added a CacheWarmer and a CacheClearer listener. At the moment they just make sure the cache directory exists (warmer) and is removed (clearer). You could extend these to, for example, mount a tmpfs on the cache directory. It's just an idea.
The Algorithms logo
The Algorithms
AboutDonate
Dimensionality Reduction
R
# Copyright (c) 2023 Diego Gasco ([email protected]), Diegomangasco on GitHub
"""
Requirements:
- numpy version 1.21
- scipy version 1.3.3
Notes:
- Each column of the features matrix corresponds to a class item
"""
import logging
import numpy as np
import pytest
from scipy.linalg import eigh
logging.basicConfig(level=logging.INFO, format="%(message)s")
def column_reshape(input_array: np.ndarray) -> np.ndarray:
"""Function to reshape a row Numpy array into a column Numpy array
>>> input_array = np.array([1, 2, 3])
>>> column_reshape(input_array)
array([[1],
[2],
[3]])
"""
return input_array.reshape((input_array.size, 1))
def covariance_within_classes(
features: np.ndarray, labels: np.ndarray, classes: int
) -> np.ndarray:
"""Function to compute the covariance matrix inside each class.
>>> features = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> labels = np.array([0, 1, 0])
>>> covariance_within_classes(features, labels, 2)
array([[0.66666667, 0.66666667, 0.66666667],
[0.66666667, 0.66666667, 0.66666667],
[0.66666667, 0.66666667, 0.66666667]])
"""
covariance_sum = np.nan
for i in range(classes):
data = features[:, labels == i]
data_mean = data.mean(1)
# Centralize the data of class i
centered_data = data - column_reshape(data_mean)
if i > 0:
# Accumulate onto the running sum (initialized on the first iteration)
covariance_sum += np.dot(centered_data, centered_data.T)
else:
# First iteration: replace the np.nan placeholder with the initial sum
covariance_sum = np.dot(centered_data, centered_data.T)
return covariance_sum / features.shape[1]
def covariance_between_classes(
features: np.ndarray, labels: np.ndarray, classes: int
) -> np.ndarray:
"""Function to compute the covariance matrix between multiple classes
>>> features = np.array([[9, 2, 3], [4, 3, 6], [1, 8, 9]])
>>> labels = np.array([0, 1, 0])
>>> covariance_between_classes(features, labels, 2)
array([[ 3.55555556, 1.77777778, -2.66666667],
[ 1.77777778, 0.88888889, -1.33333333],
[-2.66666667, -1.33333333, 2. ]])
"""
general_data_mean = features.mean(1)
covariance_sum = np.nan
for i in range(classes):
data = features[:, labels == i]
device_data = data.shape[1]
data_mean = data.mean(1)
if i > 0:
# Accumulate onto the running sum (initialized on the first iteration)
covariance_sum += device_data * np.dot(
column_reshape(data_mean) - column_reshape(general_data_mean),
(column_reshape(data_mean) - column_reshape(general_data_mean)).T,
)
else:
# First iteration: replace the np.nan placeholder with the initial sum
covariance_sum = device_data * np.dot(
column_reshape(data_mean) - column_reshape(general_data_mean),
(column_reshape(data_mean) - column_reshape(general_data_mean)).T,
)
return covariance_sum / features.shape[1]
def principal_component_analysis(features: np.ndarray, dimensions: int) -> np.ndarray:
"""
Principal Component Analysis.
For more details, see: https://en.wikipedia.org/wiki/Principal_component_analysis.
Parameters:
* features: the features extracted from the dataset
* dimensions: to filter the projected data for the desired dimension
>>> test_principal_component_analysis()
"""
# Check if the features have been loaded
if features.any():
data_mean = features.mean(1)
# Center the dataset
centered_data = features - np.reshape(data_mean, (data_mean.size, 1))
covariance_matrix = np.dot(centered_data, centered_data.T) / features.shape[1]
_, eigenvectors = np.linalg.eigh(covariance_matrix)
# Take the columns in reverse order (np.linalg.eigh returns eigenvalues in ascending order), then keep only the first `dimensions` columns
filtered_eigenvectors = eigenvectors[:, ::-1][:, 0:dimensions]
# Project the database on the new space
projected_data = np.dot(filtered_eigenvectors.T, features)
logging.info("Principal Component Analysis computed")
return projected_data
else:
logging.basicConfig(level=logging.ERROR, format="%(message)s", force=True)
logging.error("Dataset empty")
raise AssertionError
def linear_discriminant_analysis(
features: np.ndarray, labels: np.ndarray, classes: int, dimensions: int
) -> np.ndarray:
"""
Linear Discriminant Analysis.
For more details, see: https://en.wikipedia.org/wiki/Linear_discriminant_analysis.
Parameters:
* features: the features extracted from the dataset
* labels: the class labels of the features
* classes: the number of classes present in the dataset
* dimensions: to filter the projected data for the desired dimension
>>> test_linear_discriminant_analysis()
"""
# Check if the dimension desired is less than the number of classes
assert classes > dimensions
# Check if features have been already loaded
if features.any():
_, eigenvectors = eigh(
covariance_between_classes(features, labels, classes),
covariance_within_classes(features, labels, classes),
)
filtered_eigenvectors = eigenvectors[:, ::-1][:, :dimensions]
svd_matrix, _, _ = np.linalg.svd(filtered_eigenvectors)
filtered_svd_matrix = svd_matrix[:, 0:dimensions]
projected_data = np.dot(filtered_svd_matrix.T, features)
logging.info("Linear Discriminant Analysis computed")
return projected_data
else:
logging.basicConfig(level=logging.ERROR, format="%(message)s", force=True)
logging.error("Dataset empty")
raise AssertionError
def test_linear_discriminant_analysis() -> None:
# Create dummy dataset with 2 classes and 3 features
features = np.array([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]])
labels = np.array([0, 0, 0, 1, 1])
classes = 2
dimensions = 2
# Assert that the function raises an AssertionError if dimensions > classes
with pytest.raises(AssertionError) as error_info: # noqa: PT012
projected_data = linear_discriminant_analysis(
features, labels, classes, dimensions
)
if isinstance(projected_data, np.ndarray):
raise AssertionError(
"Did not raise AssertionError for dimensions > classes"
)
assert error_info.type is AssertionError
def test_principal_component_analysis() -> None:
features = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dimensions = 2
expected_output = np.array([[6.92820323, 8.66025404, 10.39230485], [3.0, 3.0, 3.0]])
with pytest.raises(AssertionError) as error_info: # noqa: PT012
output = principal_component_analysis(features, dimensions)
if not np.allclose(expected_output, output):
raise AssertionError
assert error_info.type is AssertionError
if __name__ == "__main__":
import doctest
doctest.testmod()
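As a hedged sanity check of the PCA recipe implemented above (center the data, form the covariance with columns as samples, eigendecompose, project onto the leading eigenvectors), the same steps can be reproduced with plain NumPy. The variable names below are illustrative only and not part of the module:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(5, 20))  # 5 features, 20 samples (one per column)
dimensions = 2

# Center each feature (row) across the samples (columns)
centered = features - features.mean(axis=1, keepdims=True)
# Covariance under the columns-as-samples convention
cov = centered @ centered.T / features.shape[1]
# eigh returns eigenvalues in ascending order, so reverse the eigenvectors
_, eigenvectors = np.linalg.eigh(cov)
top = eigenvectors[:, ::-1][:, :dimensions]
projected = top.T @ features

assert projected.shape == (dimensions, features.shape[1])
```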
toStringDetails method Null safety
String toStringDetails()
Provides a string describing the status of this object, but not including information about the object itself.
This function is used by Animation.toString so that Animation subclasses can provide additional details while ensuring all Animation subclasses have a consistent toString style.
The result of this function includes an icon describing the status of this Animation object:
Implementation
String toStringDetails() {
assert(status != null);
switch (status) {
case AnimationStatus.forward:
return '\u25B6'; // >
case AnimationStatus.reverse:
return '\u25C0'; // <
case AnimationStatus.completed:
return '\u23ED'; // >>|
case AnimationStatus.dismissed:
return '\u23EE'; // |<<
}
}
I want to finish one activity from another activity, like:
In Activity [A], on button click, I am calling Activity [B] without finishing Activity [A].
Now in Activity [B], there are two buttons, New and Modify. When the user clicks Modify, pop Activity [A] from the stack with all the options ticked.
But when the user clicks the New button in Activity [B], I have to finish Activity [A] on the stack and reload Activity [A] onto the stack again.
I am trying to do this, but I am not able to finish Activity [A] from the stack... How can I do it?
I am using the code as:
From Activity [A]:
Intent GotoB = new Intent(A.this,B.class);
startActivityForResult(GotoB,1);
Another method in same activity
public void onActivityResult(int requestCode, int resultCode, Intent intent) {
if (requestCode == 1)
{
if (resultCode == 1) {
Intent i = getIntent();
overridePendingTransition(0, 0);
i.addFlags(Intent.FLAG_ACTIVITY_NO_ANIMATION);
finish();
overridePendingTransition(0, 0);
startActivity(i);
}
}
}
And in Activity [B], on button click:
setResult(1);
finish();
hey, you wanted to finish one activity from another right... then I had posted the answer which is correct. But, why did you vote it down ? – Manjunath Apr 30 '12 at 6:26
There is a thread with a few nice answers here: stackoverflow.com/questions/14355731/… – a b May 8 '13 at 10:47
5 Answers
1. Make your activity A in manifest file: launchMode = "singleInstance"
2. When the user clicks new, do FirstActivity.fa.finish(); and call the new Intent.
3. When the user clicks modify, call the new Intent or simply finish activity B.
FIRST WAY
In your first activity, declare one Activity object like this,
public static Activity fa;
onCreate()
{
fa = this;
}
now use that object in another Activity to finish first-activity like this,
onCreate()
{
FirstActivity.fa.finish();
}
SECOND WAY
While calling your activity FirstActivity which you want to finish as soon as you move on, You can add flag while calling FirstActivity
intent.addFlags(Intent.FLAG_ACTIVITY_NO_HISTORY);
But using this flag, the activity will be finished even when you don't want it to be, and if you later want to show FirstActivity on back navigation, you will have to start it again with an intent.
Great answer..!! – Vaibhav Vajani Apr 30 '12 at 6:46
Thanks Vaibhav. – MKJParekh Apr 30 '12 at 6:47
You are holding a static reference to an activity. This will never go away unless you clear it manually from somewhere -> you are creating a memory leak! – DArkO Apr 30 '12 at 9:02
Plus for declaring an activity singleInstance is also not a good practice. This is recommended approach only for Launcher type apps. "singleTop" should be used instead. – DArkO Apr 30 '12 at 9:04
Well its ok, you have my +1 for out-of-the-box thinking :D, i would just change to singleTop. however it still won't be my first choice when doing something similar. by cleanup code i meant a way to get rid of the static reference so there is no memory leaking, but that depends on the app structure. – DArkO Apr 30 '12 at 9:30
There is one approach that you can use in your case.
Step1: Start Activity B from Activity A
startActivity(new Intent(A.this, B.class));
Step2: If the user clicks the Modify button, start Activity A using FLAG_ACTIVITY_CLEAR_TOP. Also pass the flag in an extra.
Intent i = new Intent(B.this, A.class);
i.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
i.putExtra("flag", "modify");
startActivity(i);
finish();
Step3: If the user clicks the New button, start Activity A using FLAG_ACTIVITY_CLEAR_TOP. Also pass the flag in an extra.
Intent i = new Intent(B.this, A.class);
i.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
i.putExtra("flag", "add");
startActivity(i);
finish();
Step4: Now onCreate() method of the Activity A, write below code.
String flag = getIntent().getStringExtra("flag");
if(flag.equals("add")) {
//Write a code for add
}else {
//Write a code for modify
}
nice answer.... – MKJParekh Apr 30 '12 at 6:58
Thanks @Frankenstein – Dharmendra Apr 30 '12 at 7:06
This is a fairly standard communication question. One approach would be to use a ResultReceiver in Activity A:
Intent GotoB=new Intent(A.this,B.class);
GotoB.putExtra("finisher", new ResultReceiver(null) {
@Override
protected void onReceiveResult(int resultCode, Bundle resultData) {
A.this.finish();
}
});
startActivityForResult(GotoB,1);
and then in Activity B you can just finish it on demand like so:
((ResultReceiver)getIntent().getExtra("finisher")).send(1, new Bundle());
Try something like that.
No It is not working – Kanika Apr 30 '12 at 6:22
+1 Great. This is something new to me. – Dharmendra Apr 30 '12 at 6:54
You can do that, but I think you should not break the normal activity flow. If you want to finish your activity, you can simply send a broadcast from Activity B to Activity A.
Create a broadcast receiver before starting your activity B:
BroadcastReceiver broadcast_reciever = new BroadcastReceiver() {
@Override
public void onReceive(Context arg0, Intent intent) {
String action = intent.getAction();
if (action.equals("finish_activity")) {
finish();
// DO WHATEVER YOU WANT.
}
}
};
registerReceiver(broadcast_reciever, new IntentFilter("finish_activity"));
Send broadcast from activity B to activity A when you want to finish activity A from B
Intent intent = new Intent("finish_activity");
sendBroadcast(intent);
I hope it will work for you...
It shows this error "Syntax error on tokens, AnnotationName expected instead" on "registerReceiver(broadcast_reciever, new IntentFilter("finish_activity"));". What's wrong? – Behzad Nov 13 '12 at 8:29
See my answer to Stack Overflow question Finish All previous activities.
What you need is to add the Intent.FLAG_CLEAR_TOP. This flag makes sure that all activities above the targeted activity in the stack are finished and that one is shown.
Another thing that you need is the SINGLE_TOP flag. With this one you prevent Android from creating a new activity if there is one already created in the stack.
Just be wary that if the activity was already created, the intent with these flags will be delivered in the method called onNewIntent(intent) (you need to overload it to handle it) in the target activity.
Then in onNewIntent you can have a restart() method (or similar) that calls finish() and launches a new intent toward the same activity, or a repopulate() method that sets the new data. I prefer the second approach: it is less expensive, and you can always extract the onCreate logic into a separate method that you can call to populate.
Great answer DArko... really nice solution.... :) – Chirag_CID Apr 30 '12 at 7:22
← RSS + HTTP / Webhook integrations
PUT Request with HTTP / Webhook APIon New Item in Feed from RSS API
Pipedream makes it easy to connect APIs for HTTP / Webhook, RSS and + other apps remarkably fast.
Trigger workflow on
New Item in Feed from the RSS API
Next, do this
PUT Request with the HTTP / Webhook API
No credit card required
Trusted by 200,000+ developers from startups to Fortune 500 companies
Developers Pipedream
Getting Started
This integration creates a workflow with a RSS trigger and HTTP / Webhook action. When you configure and deploy the workflow, it will run on Pipedream's servers 24x7 for free.
1. Select this integration
2. Configure the New Item in Feed trigger
1. Connect your RSS account
2. Configure Feed URL
3. Configure timer
3. Configure the PUT Request action
1. Connect your HTTP / Webhook account
2. Configure URL
3. Optional- Configure HTTP Body / Payload
4. Optional- Configure Query Parameters
5. Optional- Configure HTTP Headers
6. Optional- Configure Basic Auth
4. Deploy the workflow
5. Send a test event to validate your setup
6. Turn on the trigger
Details
This integration uses pre-built, open source components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.
To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with quickstarts for trigger and action development, and then review the component API reference.
Trigger
Description:Emit new items from an RSS feed.
Version:0.0.1
Key:rss-new-item-in-feed
Trigger Code
const rss = require('../../rss.app.js')
const fetch = require('node-fetch')
const FeedParser = require('feedparser')
const hash = require('object-hash')
module.exports = {
key: "rss-new-item-in-feed",
name: "New Item in Feed",
description: "Emit new items from an RSS feed.",
version: "0.0.1",
props: {
rss: {
type: 'app',
app: 'rss',
},
url:{
type: "string",
label: 'Feed URL',
description: "Enter the URL for any public RSS feed.",
},
timer: {
type: "$.interface.timer",
default: {
intervalSeconds: 60 * 15,
},
},
rss,
},
methods: {
// in theory, if guid and id are missing and the remaining fields aren't unique, this key won't dedupe reliably
itemKey(item) {
return item.guid || item.id || hash(item)
},
},
dedupe: "unique",
async run() {
const res = await fetch(this.url, {
headers: {
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36',
accept: 'text/html,application/xhtml+xml',
},
})
if (res.status !== 200) throw new Error('Bad status code')
const feedparser = new FeedParser({
addmeta: false,
})
const items = []
await new Promise((resolve, reject) => {
feedparser.on('error', reject)
feedparser.on('end', resolve)
feedparser.on('readable', function() {
let item
while (item = this.read()) {
for (const k in item) {
if (item[`rss:${k}`]) {
delete item[`rss:${k}`]
continue
}
const o = item[k]
if (o == null || (typeof o === 'object' && !Object.keys(o).length) || Array.isArray(o) && !o.length) {
delete item[k]
continue
}
}
items.push(item)
}
})
res.body.pipe(feedparser)
})
items.forEach(item=>{
this.$emit(item, {
id: this.itemKey(item),
summary: item.title,
ts: item.pubdate && +new Date(item.pubdate),
})
})
},
}
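The dedupe strategy above (prefer guid or id, fall back to a hash of the whole item) is language-agnostic. Here is an illustrative Python mirror of the same idea; the helper name `item_key` and the sample items are assumptions for demonstration, not part of the component:

```python
import hashlib
import json

def item_key(item: dict) -> str:
    # Prefer the feed-supplied identifiers, like the component does
    if item.get("guid"):
        return item["guid"]
    if item.get("id"):
        return item["id"]
    # Fall back to a stable hash of the whole item
    return hashlib.sha1(json.dumps(item, sort_keys=True).encode()).hexdigest()

items = [{"guid": "a"}, {"title": "x"}, {"guid": "a"}]  # third is a repeat
seen = set()
fresh = []
for it in items:
    k = item_key(it)
    if k not in seen:
        seen.add(k)
        fresh.append(it)

assert len(fresh) == 2  # the repeated guid "a" was deduped
```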
Trigger Configuration
This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
Label (prop, type): Description
RSS (rss, app): This component uses the RSS app.
Feed URL (url, string): Enter the URL for any public RSS feed.
timer (timer, $.interface.timer)
Trigger Authentication
The RSS API does not require authentication.
About RSS
Real Simple Syndication
Action
Description:Make an HTTP PUT request to any URL. Optionally configure query string parameters, headers and basic auth.
Version:0.1.1
Key:http-put-request
Action Code
import { axios } from "@pipedream/platform";
import http from "../../http.app.mjs";
export default {
key: "http-put-request",
name: "PUT Request",
description: "Make an HTTP PUT request to any URL. Optionally configure query string parameters, headers and basic auth.",
type: "action",
version: "0.1.1",
props: {
http,
url: {
propDefinition: [
http,
"url",
],
},
data: {
propDefinition: [
http,
"body",
],
},
params: {
propDefinition: [
http,
"params",
],
},
headers: {
propDefinition: [
http,
"headers",
],
},
auth: {
propDefinition: [
http,
"auth",
],
},
},
methods: {},
async run({ $ }) {
const {
data,
headers,
params,
url,
} = this;
const config = {
url,
method: "PUT",
data,
params,
headers,
};
if (this.auth) config.auth = this.http.parseAuth(this.auth);
return await axios($, config);
},
}
;
Action Configuration
This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.
Label (prop, type): Description
HTTP / Webhook (http, app): This component uses the HTTP / Webhook app.
URL (url, string): The URL you'd like to send the HTTP request to
HTTP Body / Payload (data, string): The body of the HTTP request. Enter a static value or reference prior step exports via the steps object (e.g., {{steps.foo.$return_value}}).
Query Parameters (params, object): Add individual query parameters as key-value pairs or disable structured mode to pass multiple key-value pairs as an object.
HTTP Headers (headers, object): Add individual HTTP headers as key-value pairs or disable structured mode to pass multiple key-value pairs as an object.
Basic Auth (auth, string): To use HTTP basic authentication, enter a username and password separated by | (e.g., myUsername|myPassword).
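The myUsername|myPassword convention is a plain delimited string; splitting on the first | recovers the two parts. This Python sketch is illustrative only (the component's own parseAuth helper presumably does the equivalent in JavaScript):

```python
auth = "myUsername|myPassword"
username, password = auth.split("|", 1)  # split on the first "|" only
assert (username, password) == ("myUsername", "myPassword")

# A password may itself contain "|"; maxsplit=1 keeps it intact:
u, p = "alice|p|ss".split("|", 1)
assert (u, p) == ("alice", "p|ss")
```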
Action Authentication
The HTTP / Webhook API does not require authentication.
About HTTP / Webhook
Get a unique URL where you can send HTTP or webhook requests
More Ways to Connect HTTP / Webhook + RSS
About Pipedream
Stop writing boilerplate code, struggling with authentication and managing infrastructure. Start connecting APIs with code-level control when you need it — and no code when you don't.
Intro to Pipedream
Watch us build a workflow
4 min
Watch now ➜
"The past few weeks, I truly feel like the clichéd 10x engineer."
@heyellieday
Powerful features that scale
Manage concurrency and execution rate
Queue up to 10,000 events per workfow and manage the concurrency and rate at which workflows are triggered.
Process large payloads up to 5 terabytes
Large file support enables you to trigger workflows with any data (e.g., large JSON files, images and videos) up to 5 terabytes.
Return custom responses to HTTP requests
Return any JSON-serializable response from an HTTP triggered workflow using $respond().
Use most npm packages
To use any npm package, just require() it -- there's no npm install or package.json required.
Maintain state between executions
Use $checkpoint to save state in one workflow invocation and read it the next time your workflow runs.
Pass data between steps
Return data from any step to inspect it in a human-friendly way and reference the data in future steps via the steps object.
Perl | Automatic String to Number Conversion or Casting
Perl deals with operators differently: the operator defines how its operands behave, whereas in many other programming languages the operands determine how an operator behaves. Casting refers to converting a variable from one data type to another. For example, converting the string "1234" to the int data type yields the integer 1234. There are several ways to convert a string to an integer in Perl. One is typecasting, while another uses the sprintf function. Sometimes people do not use the word "casting" in Perl at all, because the whole conversion is automatic.
Typecasting
Type conversion happens when we assign the value of one data type to another. If the data types are compatible, then Perl does Automatic Type Conversion. If not compatible, then they need to be converted explicitly which is known as Explicit Type conversion. There are two types of typecasting:
• Implicit Typecasting: Implicit type conversion is done by the compiler itself. There is no need for the user to mention a specific type conversion using any method. The compiler on its own determines the data type of the variable if required and fixes it. In Perl when we declare a new variable and assign a value to it, it automatically converts it into a required data type.
Example 1:
# Perl code to demonstrate implicit
# type casting
# variable x is of int type
$x = 57;
# variable y is of int type
$y = 13;
# z is an integer data type which
# will contain the sum of x and y
# implicit or automatic conversion
$z = $x + $y;
print "z is ${z}\n";
# type conversion of x and y integer to
# string due to concatenate function
$w = $x.$y;
# w is a string which has the value:
# concatenation of string x and string y
print "w is ${w}\n";
Output:
z is 70
w is 5713
• Explicit Typecasting: In this conversion the user can cast a variable to a particular data type according to requirement. Explicit type conversion is required if the programmer wants a particular variable to be of a particular data type. It is important for keeping the code consistent so that no variable causes an error due to type conversion.
Example: The following perform explicit typecasting where a string(or any data type) is converted to specified type(say int).
# Perl code to demonstrate Explicit
# type casting
# String type
$string1 = "27";
# conversion of string to int
# using typecasting int()
$num1 = int($string1);
$string2 = "13";
# conversion of string to int
# using typecasting int()
$num2 = int($string2);
print "Numbers are $num1 and $num2\n";
# applying arithmetic operators
# on int variables
$sum = $num1 + $num2;
print"Sum of the numbers = $sum\n";
Output:
Numbers are 27 and 13
Sum of the numbers = 40
sprintf function
This sprintf function returns a scalar value, a formatted text string, which gets typecasted according to the code. The command sprintf is a formatter and doesn’t print anything at all.
# Perl code to demonstrate the use
# of sprintf function
# string type
$string1 = "25";
# using sprintf to convert
# the string to integer
$num1 = sprintf("%d", $string1);
$string2 = "13";
# using sprintf to convert
# the string to integer
$num2 = sprintf("%d", $string2);
# applying arithmetic operators
# on int variables
print "Numbers are $num1 and $num2\n";
$sum = $num1 + $num2;
print"Sum of the numbers = $sum\n";
Output:
Numbers are 25 and 13
Sum of the numbers = 38
Let f(x) be a quartic polynomial with integer coefficients and four integer roots. Suppose the constant term of f(x) is 6 .
(a) Is it possible for x=3 to be a root of f(x)?
(b) Is it possible for x=3 to be a double root of f(x) ?
Prove your answers.
Apr 30, 2019
#1
All the roots are integers, so \(f(x) = (x-i_1)(x-i_2)(x-i_3)(x-i_4)\) and the constant term is \(c_0 = i_1 i_2 i_3 i_4 = 6\).
(a) Yes, 3 can be a root: for example \(3 \cdot 2 \cdot 1 \cdot 1 = 6\).
(b) No, 3 cannot be a double root: the remaining two integer roots would have to satisfy \(9\, i_3 i_4 = 6\), and no pair of integer factors multiplies 9 up to 6.
May 1, 2019
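The impossibility in part (b) can also be brute-force checked: if 3 were a double root, the two remaining integer roots would need to satisfy 9 * r3 * r4 = 6, which has no integer solution. An illustrative check:

```python
# If 3 is a double root, the other two integer roots r3, r4 must satisfy
# 3 * 3 * r3 * r4 == 6. Search a generous range for integer solutions.
solutions = [
    (r3, r4)
    for r3 in range(-100, 101)
    for r4 in range(-100, 101)
    if 9 * r3 * r4 == 6
]
assert solutions == []  # none exist: 9 does not divide 6

# A simple root 3 is fine, e.g. with roots 3, 2, 1, 1:
assert 3 * 2 * 1 * 1 == 6
```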
#2
Thank you! This response is short and to the point while still helping me understand the problem.
May 2, 2019
This is not a duplicate of this question
I watch videos on YouTube, Netflix, etc. Whenever I enter full-screen mode, I get the annoying "You've entered full screen mode. Press 'esc' to exit full screen mode".
Since I'm constantly switching between programs, I see this message a lot and am severely annoyed by it. Is there a way to disable these notifications?
[source]
For Windows, download and install this patch as per the instructions here
For Mac OS X (requires a windows PC):
1. Goto /Library/Internet Plug-Ins/
2. Open* Flash Player.plugin
3. In Flash Player.plugin goto Contents > PlugIns
4. Open* FlashPlayer-{Your Mac OS Version}.plugin (Example: My file was "FlashPlayer-10.4-10.5.plugin" for Mac OS 10.5.5)
5. In FlashPlayer-{Your Mac OS Version}.plugin goto Contents > MacOS
6. Copy FlashPlayer-{Your Mac OS Version} and patch it using the "Other" option in my program on a computer with windows.
7. Copy back the patched FlashPlayer-{Your Mac OS Version} into the folder.
I feel unconfortable, to say the least, applying a random patch to Flash. – That Brazilian Guy Aug 7 '13 at 21:02
Solved
Script to list Add/Remove Programs on workstation
Posted on 2009-05-04
Medium Priority
1,011 Views
Last Modified: 2012-05-06
Hello,
I'm looking for a script that lists all programs in the Add/Remove Programs list, plus their versions, and outputs the result to a .csv file.
The script must also include programs that are installed WITHOUT an MSI installer. The script should write the output file to a network share. I'm planning to attach this script to users' logon scripts in AD.
Help would be highly appreciated!
Question by:SMCWindows
2 Comments
Accepted Solution by yehudaha (earned 1000 total points)
ID: 24294058
try this
Change these file paths:
computer list:
Set objlist = objfso.OpenTextFile("c:\list.txt", ForReading)
log/result log file:
Set objlog = objfso.CreateTextFile("c:\log.csv", ForWriting)
Const HKLM = &H80000002 'HKEY_LOCAL_MACHINE
strComputer = "."
strKey = "SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\"
strEntry1a = "DisplayName"
strEntry1b = "QuietDisplayName" ' fallback value name used when DisplayName is absent
Const ForReading = 1
Const ForWriting = 2
Set objfso = CreateObject("Scripting.FileSystemObject")
Set objlist = objfso.OpenTextFile("c:\list.txt", ForReading)
Set objlog = objfso.CreateTextFile("c:\log.csv", ForWriting)
Do Until objlist.AtEndOfStream
strcomputer = objlist.ReadLine
If Reachable(strcomputer) Then
If per(strcomputer) Then
objlog.WriteLine strcomputer
objlog.WriteLine "**********"
Set objReg = GetObject("winmgmts://" & strComputer & _
"/root/default:StdRegProv")
objReg.EnumKey HKLM, strKey, arrSubkeys
For Each strSubkey In arrSubkeys
intRet1 = objReg.GetStringValue(HKLM, strKey & strSubkey, _
strEntry1a, strValue1)
If intRet1 <> 0 Then
objReg.GetStringValue HKLM, strKey & strSubkey, _
strEntry1b, strValue1
End If
If strValue1 <> "" Then
objlog.Write strValue1 & vbNewLine
End If
Next
Else
objlog.WriteLine "Error To Connect To WMI on " & strcomputer & vbnewline
End If
Else
objlog.WriteLine strcomputer & " Isn't Reachable" & vbNewLine
End If
Loop
Function Reachable(strComputer)
strCmd = "ping -n 1 " & strComputer
Set objShell = CreateObject("WScript.Shell")
Set objExec = objShell.Exec(strCmd)
strTemp = UCase(objExec.StdOut.ReadAll)
If InStr(strTemp, "REPLY FROM") Then
Reachable = True
Else
Reachable = False
End If
End Function
Function per(computer)
strcomputer = computer
On Error Resume Next
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
If err.number <> 0 Then
err.Clear
per = False
On Error goto 0
Else
per = True
On Error goto 0
End If
End Function
Assisted Solution by Mark Pavlak (earned 1000 total points)
ID: 24308618
Here is an HTA I wrote to document servers. Save the code as .html, .htm, or .hta. You should be able to extract the info you want if it needs to be standalone, etc.
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
<title>My HTML Application</title>
<script language="vbscript">
<!-- Insert code, subroutines, and functions here -->
'Globals
'=============================================================================================
Dim objExcel,objWorksheet
Dim strComputerName,strPath
Dim intLastRow,intInstalledSoftwareStart,intInstalledSoftwareStop,IntBackupSchedule,intRebootProcedures,intDependancies
'=============================================================================================
'Excel Constants
'=============================================================================================
Const xlLeft = -4131
Const xlBottom = -4107
Const xlContext = -5002
Const xlUnderlineStyleNone = -4142
Const xlAutomatic = -4105
Const xlDiagonalDown = 5
Const xlDiagonalUp = 6
Const xlNone = -4142
Const xlEdgeLeft = 7
Const xlContinuous = 1
Const xlThin = 2
Const xlEdgeTop = 8
Const xlEdgeBottom = 9
Const xlEdgeRight = 10
Const xlInsideVertical = 11
Const xlInsideHorizontal = 12
Const xlGeneral = 1
Const Xltop = -4160
Const xlDescending = 2
Const xlYes = 1
Const xlTopToBottom = 1
Const xlUp = -4162
Const xlCenter = -4108
Const XlRight = -4152
Const xlHairline = 1
Const xlPageBreakManual = -4135
Const xlToRight = -4161
'=============================================================================================
'Initialize Globals
'=============================================================================================
Set objExcel = CreateObject ("Excel.Application")
strPath = Left(document.location.pathname,InStrRev(document.location.pathname,"\"))
'=============================================================================================
Sub Main ()
'Get user input on server, i.e. Name, Criticality, Backup Schedule, Dependencies, Boot Procedures
'=============================================================================================
strComputerName = form1.txtComputerName.value
'=============================================================================================
InitializeExcel(strComputerName)
SectionOne (strComputerName)
CriticalityOfServer ()
PrimaryPurpose()
SystemConfiguration(strComputerName)
AdditionalSoftwareInstalled(strComputerName)
BackUpSchedule()
RebootProcedures()
Dependancies ()
FormatSheet ()
End Sub
Sub InitializeExcel(strCPUName)
With objExcel
.Visible = True
.Workbooks.add(strPath+"\ExcelTemplates\Server.xlt")
.Worksheets(1).name = strCPUName
.ActiveWindow.DisplayGridlines = False
.DisplayAlerts = False
strPath = Left(document.location.pathname,InStrRev(document.location.pathname,"\"))
End With
Set objWorksheet = objExcel.Worksheets(1)
With objWorksheet
.PageSetup.PrintArea = ""
.ResetAllPageBreaks
.PageSetup.Zoom = 100
End With
End Sub
Sub FormatSheet()
Dim objRange
'Format Section One
'=============================================================================================
'Set Font Options in Cell B1
'=============================================================================================
Set objRange = objWorksheet.range("B1")
objRange.Font.Bold = True
'=============================================================================================
'Format Section One
'=============================================================================================
'Set Borders for Section One
'=================================================================================
Set objRange = objWorksheet.Range("A1:B4")
With objRange
.HorizontalAlignment = xlLeft
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
End With
objRange.Borders(xlDiagonalDown).LineStyle = xlNone
objRange.Borders(xlDiagonalUp).LineStyle = xlNone
With objRange.Borders(xlEdgeLeft)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeTop)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeBottom)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeRight)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlInsideVertical)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlInsideHorizontal)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
'=================================================================================
'Set Font For Section One
'=================================================================================
Set objRange = objWorksheet.range("A1:B4")
With objRange.Font
.Name = "Tahoma"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
End With
'=================================================================================
'=============================================================================================
'=============================================================================================
'Format Criticiality
'=============================================================================================
'Format Criticality Header
'=================================================================================
Set objRange = objWorksheet.Range("A7:b8")
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
Set objRange = objWorksheet.range("A6:B6")
objRange.Borders(xlDiagonalDown).LineStyle = xlNone
objRange.Borders(xlDiagonalUp).LineStyle = xlNone
With objRange.Borders(xlEdgeLeft)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeTop)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeBottom)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeRight)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
objRange.Borders(xlInsideVertical).LineStyle = xlNone
'=================================================================================
'Format Check Box Space
'=================================================================================
Set objRange = objWorksheet.range("A7:B8")
objRange.Borders(xlDiagonalDown).LineStyle = xlNone
objRange.Borders(xlDiagonalUp).LineStyle = xlNone
With objRange.Borders(xlEdgeLeft)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeTop)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeBottom)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeRight)
.LineStyle = xlContinuous
.Weight = xlThin
.ColorIndex = xlAutomatic
End With
'=================================================================================
'=================================================================================
'Set Font For Criticality of Server
'=================================================================================
Set objRange = objWorksheet.range("A6:B8")
With objRange.Font
.Name = "Tahoma"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
End With
'=================================================================================
'Set font for High,Medium,Low
'=================================================================================
Set objRange = objWorksheet.range("A7")
objRange.Font.Bold = True
'=================================================================================
'=============================================================================================
'Format Primary Purpose
'=============================================================================================
Set objRange = objWorksheet.Range("A9:B9")
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
.Bold = True
End With
Set objRange = objExcel.Range("A10")
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
End With
objWorksheet.cells(9,1).interior.ColorIndex = 16
'=============================================================================================
'=============================================================================================
'Format System Configuration
'=============================================================================================
Set objRange = objWorksheet.Range("A14:B14")
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
.bold = True
End With
objWorksheet.cells(14,1).interior.ColorIndex = 16
'=============================================================================================
'Format Additional Software
'=============================================================================================
'Format Header
'=================================================================================
Set objRange = objWorksheet.range("A" &intInstalledSoftwareStart-1 & ":B" &intInstalledSoftwareStart-1)
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
.bold = True
End With
objWorksheet.cells(intInstalledSoftwareStart -1,1).interior.ColorIndex = 16
'=================================================================================
'Format Titles
'=================================================================================
Set objRange = objWorksheet.range("A" &intInstalledSoftwareStart & ":B" &intInstalledSoftwareStart)
With objRange
.HorizontalAlignment = xlCenter
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = False
End With
objRange.Font.Bold = True
objRange.Font.Italic = True
'=================================================================================
'Format Version cells to right
'=================================================================================
Set objRange = objWorksheet.Range("B" & intInstalledSoftwareStart + 1 & ":B" & intInstalledSoftwareStop -1)
With objRange
.HorizontalAlignment = XlRight
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = False
End With
'=================================================================================
'Format Text and Make Grid for Installed Software
'=================================================================================
Set objRange = objWorksheet.Range("A" & intInstalledSoftwareStart + 1 & ":B" & intInstalledSoftwareStop -1 )
objRange.Borders(xlDiagonalDown).LineStyle = xlNone
objRange.Borders(xlDiagonalUp).LineStyle = xlNone
With objRange.Borders(xlEdgeLeft)
.LineStyle = xlContinuous
.Weight = xlHairline
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeTop)
.LineStyle = xlContinuous
.Weight = xlHairline
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeBottom)
.LineStyle = xlContinuous
.Weight = xlHairline
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlEdgeRight)
.LineStyle = xlContinuous
.Weight = xlHairline
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlInsideVertical)
.LineStyle = xlContinuous
.Weight = xlHairline
.ColorIndex = xlAutomatic
End With
With objRange.Borders(xlInsideHorizontal)
.LineStyle = xlContinuous
.Weight = xlHairline
.ColorIndex = xlAutomatic
End With
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
End With
'=================================================================================
'Format Backup Schedule
'=============================================================================================
Set objRange = objWorksheet.Range("A"& IntBackupSchedule & ":B" &IntBackupSchedule)
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
.Bold = True
End With
Set objRange = objWorksheet.Range("A"& IntBackupSchedule + 1 & ":B" &IntBackupSchedule + 1)
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
End With
objWorksheet.cells(IntBackupSchedule,1).interior.ColorIndex = 16
'=============================================================================================
'Format Reboot Procedures
'=============================================================================================
Set objRange = objWorksheet.Range("A"& intRebootProcedures & ":B" &intRebootProcedures)
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
.Bold = True
End With
Set objRange = objWorksheet.Range("A"& intRebootProcedures + 1 & ":B" &intRebootProcedures + 1)
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
End With
objWorksheet.cells(intRebootProcedures,1).interior.ColorIndex = 16
'=============================================================================================
'Format Dependancies
'=============================================================================================
Set objRange = objWorksheet.Range("A"& intDependancies & ":B" &intDependancies)
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlBottom
.WrapText = False
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
.Bold = True
End With
Set objRange = objWorksheet.Range("A"& intDependancies + 1 & ":B" &intDependancies + 1)
With objRange.Font
.Name = "Tahoma"
.FontStyle = "Regular"
.Size = 10
.Strikethrough = False
.Superscript = False
.Subscript = False
.OutlineFont = False
.Shadow = False
.Underline = xlUnderlineStyleNone
.ColorIndex = xlAutomatic
End With
objWorksheet.cells(intDependancies,1).interior.ColorIndex = 16
'=============================================================================================
'=============================================================================================
'Autofit WorkSheet
'=============================================================================================
Set objRange = objWorksheet.UsedRange
objRange.EntireRow.Autofit()
objRange.EntireColumn.Autofit()
'=============================================================================================
Set objRange = objWorksheet.range("C:C")
objRange.pageBreak = xlPageBreakManual
End Sub
Sub SectionOne (strComputerAccount)
'On Error Resume Next
'Variables
'=============================================================================================
Dim objWMI,objPCAttribute,objPC
'=============================================================================================
'Initialize WMI
'=============================================================================================
Set objWMI = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputerAccount & "\root\cimv2")
Set objPC = objWMI.ExecQuery _
("Select * from Win32_OperatingSystem")
'=============================================================================================
'Get Attributes from Win32_OperatingSystem
'=============================================================================================
For Each objPCAttribute In objPC
With objWorksheet
.cells(1,1).value = "Server Name:"
.cells(1,2).value = UCase(strComputerAccount)
.cells(2,1).value = "Location:"
.cells(2,2).value = objPCAttribute.Description
.cells(3,1).value = "Operating System"
.cells(3,2).value = objPCAttribute.Caption
.cells(4,1).value = "Service Pack"
.cells(4,2).value = Replace(objPCAttribute.CSDVersion,"Service Pack","")
End With
Next
End Sub
Sub CriticalityOfServer ()
Dim objRange
objWorksheet.cells(6,1).value = "Criticality of Server"
'Check appropriate box
If form1.High.checked Then
objWorksheet.cells(7,1).value = "High"
End If
If form1.Medium.checked Then
objWorksheet.cells(7,1).value = "Medium"
End If
If form1.Low.checked Then
objWorksheet.cells(7,1).value = "Low"
End If
End Sub
Sub PrimaryPurpose ()
objWorksheet.cells(9,1).value = "Primary Purpose"
Set objRange = objWorksheet.range("A10:B13")
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlTop
.WrapText = True
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
objWorksheet.cells(10,1).value = form1.PrimaryPurpose.Value
End Sub
Sub SystemConfiguration (strComputerAccount)
'Variables
'=============================================================================================
Dim objWMI,objPCAttribute,objPC
Dim strClockSpeed,strProcessorNum,strIP
'=============================================================================================
'Set Title
'=============================================================================================
objWorksheet.cells(14,1).value = "System Configuration"
'=============================================================================================
'Set Title Column for Type,Serial#,Server Model,Manufacturer,RAM
'=============================================================================================
objWorksheet.cells(15,1).value = "Type"
objWorksheet.cells(16,1).value = "Serial Number"
objWorksheet.cells(17,1).value = "Server Model"
objWorksheet.cells(18,1).value = "Manufacturer"
objWorksheet.cells(19,1).value = "RAM"
objWorksheet.cells(20,1).value = "Number of Processors"
'=============================================================================================
'Initialize WMI for Type, Model, Manufacturer, RAM
'=============================================================================================
Set objWMI = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputerAccount & "\root\cimv2")
Set objPC = objWMI.ExecQuery _
("Select * from Win32_ComputerSystem")
'Get Attributes from Win32_ComputerSystem
'=================================================================================
For Each objPCAttribute In objPC
If (objPCAttribute.Manufacturer = "VMware, Inc.") Then
objWorksheet.cells(15,2).value = "Virtual"
Else
objWorksheet.cells(15,2).value = "Physical"
End If
objWorksheet.cells(17,2).value = objPCAttribute.model
objWorksheet.cells(18,2).value = objPCAttribute.Manufacturer
objWorksheet.cells(19,2).value = FormatNumber(objPCAttribute.TotalPhysicalMemory/1073741824,2)+"gigs"
strProcessorNum = objPCAttribute.NumberOfProcessors
Next
'=================================================================================
'=============================================================================================
'Initialize WMI For Serial Number
'=============================================================================================
Set objWMI = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputerAccount & "\root\cimv2")
Set objPC = objWMI.ExecQuery _
("Select * from Win32_SystemEnclosure")
'Get Attributes from Win32_SystemEnclosure
'=================================================================================
For Each objPCAttribute In objPC
objWorksheet.cells(16,2).value = objPCAttribute.SerialNumber
Next
'=================================================================================
'=============================================================================================
'Initialize WMI for Clock Speed
'=============================================================================================
Set objWMI = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputerAccount & "\root\cimv2")
Set objPC = objWMI.ExecQuery _
("Select * from Win32_Processor")
'Get Attributes from Win32_Processor
'=================================================================================
For Each objPCAttribute In objPC
strClockSpeed = FormatNumber(objPCAttribute.MaxClockSpeed*.001,2)+"ghz"
Next
objWorksheet.cells(20,2).value = strProcessorNum&" @ "&strClockSpeed
'=================================================================================
'=============================================================================================
intLastRow = 21
'Initialize WMI For Network Information
'=============================================================================================
Set objWMI = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputerAccount & "\root\cimv2")
Set objPC = objWMI.ExecQuery _
("Select * from Win32_NetworkAdapterConfiguration")
'=============================================================================================
'Get Attributes from Win32_NetworkAdapterConfiguration
'=================================================================================
For Each objPCAttribute In objPC
If isNull(objPCAttribute.IPAddress) Then
Else
intPlace = 0
'intCount = intCount + 1
objWorksheet.cells(intLastRow,1) = "MAC Address"
objWorksheet.cells(intLastRow,2) = "IP Addresses "
objWorksheet.cells(intLastRow+1,1) = objPCAttribute.MACAddress
' wscript.echo "IP Addresses: "&Join(objPCAttribute.IPAddress, " ")
intLastRow = intLastRow + 1
intArraySize = UBound (objPCAttribute.IPAddress)+ 1
Do While intArraySize <> 0
objWorksheet.cells(intLastRow+intPlace,2) = objPCAttribute.IPAddress(intPlace)
intPlace = intPlace + 1
intArraySize = intArraySize - 1
Loop
intLastRow = intLastRow + intPlace
End If
Next
'=================================================================================
'=============================================================================================
'Initialize WMI for HD information
'=============================================================================================
Set objWMI = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputerAccount & "\root\cimv2")
Set objPC = objWMI.ExecQuery _
("Select * from Win32_LogicalDisk")
'Get HD information
'=================================================================================
objWorksheet.cells(intLastRow,1) = "Drive Letter"
objWorksheet.cells(intLastRow,2) = "Drive Size"
intLastRow = intLastRow + 1
For Each objPCAttribute In objPC
If objPCAttribute.Description = "Local Fixed Disk" Then
intCount = intCount + 1
objWorksheet.cells(intLastRow,1) = objPCAttribute.DeviceID
objWorksheet.cells(intlastrow,2) = FormatNumber(objPCAttribute.Size/1073741824,2)+"gigs"
intCount = intCount + 1
x = x+1
intCount = 0
intLastRow = intLastRow + 1
End If
Next
'=================================================================================
'=============================================================================================
End Sub
Sub AdditionalSoftwareInstalled (strComputerAccount)
'Variables
'=============================================================================================
Const HKEY_LOCAL_MACHINE = &H80000002
Dim objReg,objKey,objSlaveWorksheet,objRange2,objWorkbook
Dim arrSubKeys,arrTemp
Dim strKeyPath,strDisplayName,strDisplayVersion,strTmp,strTmp1,strSubKey
Dim i
'=============================================================================================
'Initialize Variables
'=============================================================================================
strDisplayName = "DisplayName"
strDisplayVersion = "DisplayVersion"
Set objReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & _
strComputerAccount & "\root\default:StdRegProv")
strKeyPath = "SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
i = 1
Set objSlaveWorksheet = objExcel.Worksheets(2)
'=============================================================================================
'Write Add/Remove to Separate WorkSheet
'=============================================================================================
objReg.EnumKey HKEY_LOCAL_MACHINE, strKeyPath, arrSubKeys
For Each objKey In arrSubKeys
On Error Resume Next
objReg.GetStringValue HKEY_LOCAL_MACHINE,strKeyPath+"\"+objKey,strDisplayName,strTmp
objReg.GetStringValue HKEY_LOCAL_MACHINE,strKeyPath+"\"+objKey,strDisplayVersion,strTmp1
objSlaveWorksheet.cells(i,1).value = strTmp+"*"+strTmp1
i = i + 1
Next
'=============================================================================================
'Sort Add/Remove Sheet
'=============================================================================================
Set objRange = objSlaveWorksheet.range("A:A")
Set objRange2 = objSlaveWorksheet.Range("A1")
objRange.Sort objRange2, xlDescending, , , , , , xlYes
Set objRange2 = objSlaveWorksheet.range("1:1")
objRange2.delete xlUp
'=============================================================================================
'Rewrite To Section from Sheet2
'=============================================================================================
objWorksheet.cells(intLastRow,1).value = "Additional Software Installed"
intLastRow = intLastRow + 1
intInstalledSoftwareStart = intLastRow
objWorksheet.cells(intLastRow,1).value = "Software"
objWorksheet.cells(intLastRow,2).value = "Version"
intLastRow = intLastRow + 1
i = FindEndOfSheet(objSlaveWorksheet,1) - 1
Do Until i = 0
arrTemp = SplitVerision(objSlaveWorksheet.cells(i,1))
objWorksheet.cells(intLastRow,1).value = arrTemp(0)
objWorksheet.cells(intLastRow,2).value = arrTemp(1)
intLastRow = intLastRow + 1
i = i -1
Loop
intInstalledSoftwareStop = intLastRow
'=============================================================================================
Set objSlaveWorksheet = Nothing
Set objWorkbook = objExcel.Workbooks(1)
objWorkbook.Worksheets("Sheet2").delete
Set objWorkbook = Nothing
End Sub
Sub BackUpSchedule
objWorksheet.cells(intLastRow,1).value = "Backup Schedule"
IntBackupSchedule = intLastRow
intLastRow = intLastRow + 1
Set objRange = objWorksheet.range("A" & intLastRow & ":B" & intLastRow + 4)
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = xlTop
.WrapText = True
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
objWorksheet.cells(intLastRow,1).value = form1.BackupSchedule.Value
intLastRow = intLastRow + 5
End Sub
Sub RebootProcedures ()
objWorksheet.cells(intLastRow,1).value = "Reboot Procedures"
intRebootProcedures = intLastRow
intLastRow = intLastRow + 1
Set objRange = objWorksheet.range("A" & intLastRow & ":B" & intLastRow + 4)
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = Xltop
.WrapText = True
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
objWorksheet.cells(intLastRow,1).value = form1.RebootProcedures.Value
intLastRow = intLastRow + 5
End Sub
Sub Dependancies()
objWorksheet.cells(intLastRow,1).value = "Dependancies"
intDependancies = intLastRow
intLastRow = intLastRow + 1
Set objRange = objWorksheet.range("A" & intLastRow & ":B" & intLastRow + 4)
With objRange
.HorizontalAlignment = xlGeneral
.VerticalAlignment = Xltop
.WrapText = True
.Orientation = 0
.AddIndent = False
.IndentLevel = 0
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
objWorksheet.cells(intLastRow,1).value = form1.Dependancies.Value
End Sub
Sub CloseFile()
End sub
Function SplitVerision(strTemp)
Dim arrTmp
arrTmp = Split (strTemp,"*")
SplitVerision = arrTmp
End Function
Function FindEndOfSheet (objTempSheet,strColumn)
Dim strEOS
strEOS = 1
Do Until Len(objTempSheet.cells(strEOS,strColumn).value) = 0
strEOS = strEOS + 1
Loop
FindEndOfSheet = strEOS
End Function
</script>
<hta:application
applicationname="MyHTA"
border="dialog"
borderstyle="normal"
caption="Server Documentation"
contextmenu="no"
icon="myicon.ico"
maximizebutton="no"
minimizebutton="yes"
navigable="no"
scroll="no"
selection="no"
showintaskbar="yes"
singleinstance="yes"
sysmenu="yes"
version="1.0"
windowstate="normal"
>
</head>
<body>
<!-- HTML goes here -->
<form action="" method="get" name="form1">
Server Name: <INPUT TYPE="text" NAME="txtComputerName" SIZE=20 MAXLENGTH=20 VALUE="">
<br>
<br>
Criticality of Server:
<br>
<INPUT TYPE="checkbox" NAME="High"> High
<INPUT TYPE="checkbox" NAME="Medium"> Medium
<INPUT TYPE="checkbox" NAME="Low"> Low
<br>
<br>
Primary Purpose:
<br>
<textarea name="PrimaryPurpose" cols="40" rows="5"></textarea><br>
Back Up Schedule:
<br>
<textarea name="BackUpSchedule" cols="40" rows="5"></textarea><br>
Reboot Procedures :<br>
<textarea name="RebootProcedures" cols="40" rows="5"></textarea><br>
Dependancies :<br>
<textarea name="Dependancies" cols="40" rows="5"></textarea><br>
<INPUT TYPE="Button" NAME="CreateDocument" VALUE="Create Server Documentation" onclick= "Main">
</body>
</html>
viernes, septiembre 26, 2008
Cloth Physics
I have been working on something very interesting that our 3D engine (Unity) doesn't come with: cloth physics. Who would think that something as simple as cloth would be so difficult to simulate on a PC!
To start off on the right foot, I would like to point out that there are several methods for cloth simulation, but the one I selected for my simulation is the mass spring model system =).
In this model, we treat the cloth itself as a grid of nodes at which all the mass of the cloth is assumed to be concentrated, and then we let the nodes interact with each other.
This particular method used in this simulation connects the grid with a series of linear springs designed to resist forces pulling the fabric apart.
This mass spring model uses three types of springs in order to maintain the cloth's shape.
Structural Springs (Blue): These springs connect each node with its 4 adjacent non-diagonal neighbors, and serve to keep the cloth in a "sheet".
Shear Springs (Black): These springs connect each node with the four adjacent diagonal neighbors, and oppose shearing deformations.
Flexion Springs (Green): The flexion springs connect each node with the node two over horizontally and diagonally. These flexion springs have little impact unless the points are non-coplanar, in which case the flexion springs serve to restrict the bending of the sheet.
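The three spring families can be generated directly from the grid indices. Below is a minimal illustrative sketch (not the original Unity script; names are my own, and it takes the flexion springs two-over horizontally and vertically, a common variant of the scheme described above):

```python
def cloth_springs(w, h):
    """Index pairs for the three spring types on a w x h node grid.

    Nodes are numbered row-major: node (x, y) has index y * w + x.
    """
    idx = lambda x, y: y * w + x
    structural, shear, flexion = [], [], []
    for y in range(h):
        for x in range(w):
            # Structural: non-diagonal neighbors (right and down are enough,
            # since each pair is only stored once)
            if x + 1 < w: structural.append((idx(x, y), idx(x + 1, y)))
            if y + 1 < h: structural.append((idx(x, y), idx(x, y + 1)))
            # Shear: diagonal neighbors of each grid cell
            if x + 1 < w and y + 1 < h:
                shear.append((idx(x, y), idx(x + 1, y + 1)))
                shear.append((idx(x + 1, y), idx(x, y + 1)))
            # Flexion: the node two over (shown here horizontally and vertically)
            if x + 2 < w: flexion.append((idx(x, y), idx(x + 2, y)))
            if y + 2 < h: flexion.append((idx(x, y), idx(x, y + 2)))
    return structural, shear, flexion
```

For a 3×3 grid this yields 12 structural, 8 shear, and 6 flexion springs.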
These springs are used to set up a system of differential equations to solve the position of the nodes. At any time t, the forces applied to each node can be calculated from the spring forces and external forces like gravity, wind, balls hitting, etc.
The position can be derived through a simple Euler method integration:
\longrightarrow a(t + dt) = \frac{1}{m} F(t)
\longrightarrow v(t + dt) = v(t) + dt \cdot a(t + dt)
\longrightarrow p(t + dt) = p(t) + dt \cdot v(t + dt)
where:
* m: the mass of the particle.
* dt: the (discrete) time step (a time step that is too large will blow up the simulation).
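The update equations above can be sketched in a few lines of Python (writing m for the particle mass). This is an illustrative reimplementation, not the original Unity script, and the function and parameter names are my own:

```python
def step(pos, vel, mass, springs, rest, k, gravity, dt):
    """One semi-implicit Euler step for a mass-spring system.

    pos, vel: lists of 3D points/vectors; springs: list of (a, b) index pairs;
    rest: rest length per spring; k: spring stiffness; gravity: a 3D vector.
    """
    n = len(pos)
    force = [[mass * g for g in gravity] for _ in range(n)]  # external force (gravity)
    for (a, b), r in zip(springs, rest):
        d = [pos[b][c] - pos[a][c] for c in range(3)]
        length = sum(c * c for c in d) ** 0.5
        s = k * (length - r) / length     # Hooke's law along the spring direction
        for c in range(3):
            force[a][c] += s * d[c]
            force[b][c] -= s * d[c]
    new_pos, new_vel = [], []
    for i in range(n):
        a = [f / mass for f in force[i]]                  # a(t+dt) = F(t) / m
        v = [vel[i][c] + dt * a[c] for c in range(3)]     # v(t+dt) = v(t) + dt * a(t+dt)
        p = [pos[i][c] + dt * v[c] for c in range(3)]     # p(t+dt) = p(t) + dt * v(t+dt)
        new_vel.append(v)
        new_pos.append(p)
    return new_pos, new_vel
```

Called once per frame with a small dt; as noted above, too large a time step makes the integration explode.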
In Unity, I used a set of empty Game Objects to simulate the nodes with the masses, added the springs with a script (you can play with the spring force values), and made all the Game Objects rigid bodies so they interact with gravity.
Here's a demo of what I came up with (sadly the player only works on Mac OS or Windows, no Linux support T_T):
* press Esc to reset the scene.
* Click on the scene and move the mouse to move the cloth.
For cloth with no colliders on the nodes:
And for cloth with colliders on the nodes: (the rotations of the nodes are frozen, which is why this cloth looks different from the one above)
Solutions by everydaycalculation.com
6 percent of what number is 320?
320 is 6% of 5333.33
Steps to solve "320 is 6 percent of what number?"
1. We have, 6% × x = 320
2. or, 6/100 × x = 320
3. Multiplying both sides by 100 and dividing both sides by 6,
we have x = 320 × 100/6
4. x = 5333.33
If you are using a calculator, simply enter 320×100÷6, which will give you the answer.
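The same steps can be expressed as a one-line function (an illustrative sketch, not part of the original page):

```python
def percent_of_what(part, percent):
    """Solve 'percent% of x = part' for x, i.e. x = part * 100 / percent."""
    return part * 100 / percent

print(round(percent_of_what(320, 6), 2))  # → 5333.33
```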
© everydaycalculation.com
Avemey
Code examples for ZColorStringGrid and ZCLabel
ZColorStringGrid was tested on: Delphi 7, 2005-XE2; C++Builder 6.
ZCLabel was tested on:
• Lazarus 0.9.28.2 (FPC 2.2.4 + Debian 5.0 + KDE3.5), Lazarus 0.9.30.2 (FPC 2.4.4 + Debian 6.0.3 + GNOME / Windows)
• Delphi 7, 2005-XE2
• C++Builder 6
Examples of using ZColorStringGrid:
Cell style
ZColorStringGrid cell style
Code example for Delphi Code example for C++Builder
procedure TfrmMain.FormCreate(Sender: TObject);
var
i, j: integer;
begin
//fill the table
for i := 1 to ZColorStringGrid1.ColCount - 1 do
for j := 1 to ZColorStringGrid1.RowCount - 1 do
ZColorStringGrid1.Cells[i, j] := IntToStr(i * j);
//Background color
ZColorStringGrid1.CellStyle[1, 1].BGColor := clYellow;
ZColorStringGrid1.CellStyle[2, 1].BGColor := clGreen;
ZColorStringGrid1.CellStyle[3, 1].BGColor := clLime;
//Change the column style
ZColorStringGrid1.CellStyleCol[1, false] := ZColorStringGrid1.CellStyle[1, 1];
ZColorStringGrid1.CellStyleCol[2, true] := ZColorStringGrid1.CellStyle[2, 1];
//Change the row style
ZColorStringGrid1.CellStyleRow[3, false] := ZColorStringGrid1.CellStyle[3, 1];
//Font
ZColorStringGrid1.CellStyle[2, 3].Font.Size := 12;
ZColorStringGrid1.CellStyle[2, 3].Font.Name := 'Tahoma';
ZColorStringGrid1.CellStyle[2, 3].Font.Style := [fsBold, fsItalic];
ZColorStringGrid1.CellStyle[2, 2].Font.Color := clWhite;
//Cell border
ZColorStringGrid1.CellStyle[3, 3].BorderCellStyle := sgLowered;
ZColorStringGrid1.CellStyle[4, 3].BorderCellStyle := sgRaised;
end;
void __fastcall TfrmMain::FormCreate(TObject *Sender)
{
int i = 0;
int j = 0;
//fill the table
for (i = 0; i < ZColorStringGrid1->ColCount; i++)
for (j = 0; j < ZColorStringGrid1->RowCount; j++)
{
ZColorStringGrid1->Cells[i][j] = IntToStr(i * j);
}
//Background color
ZColorStringGrid1->CellStyle[1][1]->BGColor = clYellow;
ZColorStringGrid1->CellStyle[2][1]->BGColor = clGreen;
ZColorStringGrid1->CellStyle[3][1]->BGColor = clLime;
//Change the column style
ZColorStringGrid1->CellStyleCol[1][false] = ZColorStringGrid1->CellStyle[1][1];
ZColorStringGrid1->CellStyleCol[2][true] = ZColorStringGrid1->CellStyle[2][1];
//Change the row style
ZColorStringGrid1->CellStyleRow[3][false] = ZColorStringGrid1->CellStyle[3][1];
//Font
ZColorStringGrid1->CellStyle[2][3]->Font->Size = 12;
ZColorStringGrid1->CellStyle[2][3]->Font->Name = "Tahoma";
ZColorStringGrid1->CellStyle[2][3]->Font->Style =
TFontStyles() << fsBold << fsItalic;
ZColorStringGrid1->CellStyle[2][2]->Font->Color = clWhite;
//Cell border
ZColorStringGrid1->CellStyle[3][3]->BorderCellStyle = sgLowered;
ZColorStringGrid1->CellStyle[4][3]->BorderCellStyle = sgRaised;
}
Alignment and indentation
ZColorStringGrid indentation and alignment within a cell
Code example for Delphi Code example for C++Builder
procedure TfrmMain.FormCreate(Sender: TObject);
var
i, j, r: byte;
begin
//Alignment: horizontal - left, vertical - center
ZColorStringGrid1.Cells[1, 0] := 'H:L + V:C';
//Alignment: horizontal - right, vertical - center
ZColorStringGrid1.Cells[2, 0] := 'H:R + V:C';
//Alignment: horizontal - left, vertical - top
ZColorStringGrid1.Cells[3, 0] := 'H:L + V:T';
//Alignment: horizontal - center, vertical - bottom
ZColorStringGrid1.Cells[4, 0] := 'H:C + V:D';
for i := 0 to 5 do
begin
r := i + 1;
for j := 1 to 4 do
begin
ZColorStringGrid1.Cells[j, r] := IntToStr(i);
//Horizontal indent
ZColorStringGrid1.CellStyle[j, r].IndentH := i;
end;
//Right
ZColorStringGrid1.CellStyle[2, r].HorizontalAlignment := taRightJustify;
//Top
ZColorStringGrid1.CellStyle[3, r].VerticalAlignment := vaTop;
//Center
ZColorStringGrid1.CellStyle[4, r].HorizontalAlignment := taCenter;
//Bottom
ZColorStringGrid1.CellStyle[4, r].VerticalAlignment := vaBottom;
//Vertical indent
for j := 3 to 4 do
ZColorStringGrid1.CellStyle[j, r].IndentV := i;
end;
end;
void __fastcall TfrmMain::FormCreate(TObject *Sender)
{
//Alignment: horizontal - left, vertical - center
ZColorStringGrid1->Cells[1][0] = "H:L + V:C";
//Alignment: horizontal - right, vertical - center
ZColorStringGrid1->Cells[2][0] = "H:R + V:C";
//Alignment: horizontal - left, vertical - top
ZColorStringGrid1->Cells[3][0] = "H:L + V:T";
//Alignment: horizontal - center, vertical - bottom
ZColorStringGrid1->Cells[4][0] = "H:C + V:D";
int i = 0;
int j = 0;
int r = 0;
for (i = 0; i <= 5; i++)
{
r = i + 1;
for (j = 1; j < 5; j++)
{
ZColorStringGrid1->Cells[j][r] = IntToStr(i);
//Horizontal indent
ZColorStringGrid1->CellStyle[j][r]->IndentH = i;
}
//Right
ZColorStringGrid1->CellStyle[2][r]->HorizontalAlignment = taRightJustify;
//Top
ZColorStringGrid1->CellStyle[3][r]->VerticalAlignment = vaTop;
//Center
ZColorStringGrid1->CellStyle[4][r]->HorizontalAlignment = taCenter;
//Bottom
ZColorStringGrid1->CellStyle[4][r]->VerticalAlignment = vaBottom;
//Vertical indent
for (j = 3; j < 5; j++)
ZColorStringGrid1->CellStyle[j][r]->IndentV = i;
}
}
Merging cells
ZColorStringGrid cell merging
Code example for Delphi Code example for C++Builder
procedure TfrmMain.FormCreate(Sender: TObject);
var
Rct: TRect;
res: integer;
begin
//Merging fixed cells
res := ZColorStringGrid1.MergeCells.AddRectXY(1, 0, 3, 0);
if (res <> 0) then
begin
{If the cells could not be merged}
end;
ZColorStringGrid1.Cells[1, 0] := 'Merged cell';
Rct.Top := 1;
Rct.Left := 0;
Rct.Right := 0;
Rct.Bottom := 3;
ZColorStringGrid1.MergeCells.AddRect(Rct);
ZColorStringGrid1.Cells[0, 1] := 'Text';
//Attempting to merge fixed cells with non-fixed ones
res := ZColorStringGrid1.MergeCells.AddRectXY(0, 4, 3, 5);
if (res <> 0) then
ZColorStringGrid1.Cells[0, 4] := 'Failure'
else
ZColorStringGrid1.Cells[0, 4] := 'Success!';
//The merge check can be skipped
ZColorStringGrid1.MergeCells.AddRectXY(2, 1, 4, 4);
ZColorStringGrid1.Cells[2, 1] := ZColorStringGrid1.Cells[1, 0];
end;
void __fastcall TfrmMain::FormCreate(TObject *Sender)
{
int res = 0;
//Merging fixed cells
res = ZColorStringGrid1->MergeCells->AddRectXY(1, 0, 3, 0);
if (res != 0)
{
/* If the cells could not be merged */
}
ZColorStringGrid1->Cells[1][0] = "Merged cell";
TRect Rct;
Rct.Top = 1;
Rct.Left = 0;
Rct.Right = 0;
Rct.Bottom = 3;
ZColorStringGrid1->MergeCells->AddRect(Rct);
ZColorStringGrid1->Cells[0][1] = "Text";
//Attempting to merge fixed cells with non-fixed ones
res = ZColorStringGrid1->MergeCells->AddRectXY(0, 4, 3, 5);
if (res != 0)
{ZColorStringGrid1->Cells[0][4] = "Failure";}
else
{ZColorStringGrid1->Cells[0][4] = "Success!";};
//The merge check can be skipped
ZColorStringGrid1->MergeCells->AddRectXY(2, 1, 4, 4);
ZColorStringGrid1->Cells[2][1] = ZColorStringGrid1->Cells[1][0];
}
Text rotation in cells
NOTE: Text can be rotated only when using TrueType fonts!
ZColorStringGrid text rotation
Code example for Delphi Code example for C++Builder
procedure TfrmMain.FormCreate(Sender: TObject);
var
i, j: integer;
begin
//Set a TrueType font for all cells
for i := 0 to ZColorStringGrid1.ColCount - 1 do
for j := 0 to ZColorStringGrid1.RowCount - 1 do
begin
ZColorStringGrid1.CellStyle[i, j].Font.Name := 'Tahoma';
ZColorStringGrid1.CellStyle[i, j].Font.Size := 12;
end;
//Rotation in merged cells
ZColorStringGrid1.MergeCells.AddRectXY(0, 1, 0, 4);
ZColorStringGrid1.CellStyle[0, 1].Rotate := 90;
ZColorStringGrid1.Cells[0, 1] := 'Rotation' + sLineBreak +
'of text by' + sLineBreak + '90 degrees';
ZColorStringGrid1.MergeCells.AddRectXY(1, 0, 3, 0);
ZColorStringGrid1.CellStyle[1, 0].Rotate := 180;
ZColorStringGrid1.Cells[1, 0] := 'Rotation by 180 degrees';
ZColorStringGrid1.MergeCells.AddRectXY(2, 1, 4, 6);
ZColorStringGrid1.CellStyle[2, 1].Rotate := 60;
ZColorStringGrid1.CellStyle[2, 1].Font.Size := 16;
ZColorStringGrid1.CellStyle[2, 1].HorizontalAlignment := taCenter;
ZColorStringGrid1.Cells[2, 1] := 'Rotation' + sLineBreak +
'of multiline' + sLineBreak + 'text by' +
sLineBreak + '60 degrees';
//Rotation in regular cells
ZColorStringGrid1.CellStyle[1, 1].Rotate := 180;
ZColorStringGrid1.Cells[1, 1] := 'Text';
end;
void __fastcall TfrmMain::FormCreate(TObject *Sender)
{
int i = 0;
int j = 0;
//Set a TrueType font for all cells
for (i = 0; i < ZColorStringGrid1->ColCount; i++)
for (j = 0; j < ZColorStringGrid1->RowCount; j++)
{
ZColorStringGrid1->CellStyle[i][j]->Font->Name = "Tahoma";
ZColorStringGrid1->CellStyle[i][j]->Font->Size = 12;
}
//Rotation in merged cells
ZColorStringGrid1->MergeCells->AddRectXY(0, 1, 0, 4);
ZColorStringGrid1->CellStyle[0][1]->Rotate = 90;
ZColorStringGrid1->Cells[0][1] = String("Rotation") + sLineBreak +
String("of text by") + sLineBreak + String("90 degrees");
ZColorStringGrid1->MergeCells->AddRectXY(1, 0, 3, 0);
ZColorStringGrid1->CellStyle[1][0]->Rotate = 180;
ZColorStringGrid1->Cells[1][0] = "Rotation by 180 degrees";
ZColorStringGrid1->MergeCells->AddRectXY(2, 1, 4, 6);
ZColorStringGrid1->CellStyle[2][1]->Rotate = 60;
ZColorStringGrid1->CellStyle[2][1]->Font->Size = 16;
ZColorStringGrid1->CellStyle[2][1]->HorizontalAlignment = taCenter;
ZColorStringGrid1->Cells[2][1] = String("Rotation") + sLineBreak +
String("of multiline") + sLineBreak + String("text by") +
sLineBreak + String("60 degrees");
//Rotation in regular cells
ZColorStringGrid1->CellStyle[1][1]->Rotate = 180;
ZColorStringGrid1->Cells[1][1] = "Text";
}
Examples of using ZCLabel:
Text alignment and rotation
To run this successfully, place 15 ZCLabels on the form and set them a TrueType font and a width and height of 80 and 70.
ZCLabel text alignment and rotation
Code example for Lazarus/Delphi Code example for C++Builder
procedure TfrmMain.FormCreate(Sender: TObject);
var
i: integer;
num: integer;
ZCL: TZCLabel;
s: string;
ang: integer;
sEOL: string;
begin
num := 0;
ang := 0;
{$IFDEF FPC}
sEOL := LineEnding;
{$ELSE}
sEOL := sLineBreak;
{$ENDIF}
s := 'Sample text' + sEOL + 'of some kind';
//iterate over all components
for i := 0 to ComponentCount - 1 do
if (Components[i] is TZCLabel) then
begin
ZCL := Components[i] as TZCLabel;
ZCL.Font.Size := 10;
ZCL.Caption := s;
inc(num);
if (num mod 5 in [1, 2, 3]) then
begin
ZCL.Transparent := false;
ZCL.Color := clYellow;
end else
begin
ZCL.Font.Size := 14;
inc(ang, 50);
//rotation angle
ZCL.Rotate := ang;
end;
//Horizontal alignment
ZCL.AlignmentHorizontal := num mod 3;
//Vertical alignment
ZCL.AlignmentVertical := num mod 5;
end; //if
end;
void __fastcall TfrmMain::FormCreate(TObject *Sender)
{
int num = 0;
int ang = 0;
int t;
TZCLabel * ZCL;
String s = String("Sample text") + sLineBreak + String("of some kind");
//iterate over all components
int i = 0;
for (i = 0; i < ComponentCount; i++)
{
if (ZCL = dynamic_cast<TZCLabel *>(Components[i]))
{
ZCL->Font->Size = 10;
ZCL->Caption = s;
num++;
t = num % 5;
if ((t > 0) && (t < 4))
{
ZCL->Transparent = false;
ZCL->Color = clYellow;
} else
{
ZCL->Font->Size = 14;
ang += 50;
//rotation angle
ZCL->Rotate = ang;
}
//Horizontal alignment
ZCL->AlignmentHorizontal = num % 3;
//Vertical alignment
ZCL->AlignmentVertical = num % 5;
}
}
}
Drawing on an arbitrary canvas
ZCLabel drawing on an arbitrary canvas
Code example for Lazarus/Delphi Code example for C++Builder
procedure TfrmMain.FormCreate(Sender: TObject);
var
Rct: TRect;
i: integer;
t: byte;
Cl: TColor;
sEOL, s: string;
begin
{$IFDEF FPC}
sEOL := LineEnding;
{$ELSE}
sEOL := sLineBreak;
{$ENDIF}
ZCLabel1.Visible := false;
Image1.Canvas.Brush.Color := clWhite;
Image1.Canvas.Brush.Style := bsSolid;
Image1.Canvas.Rectangle(0, 0, Image1.Width, Image1.Height);
Rct.Left := 0;
Rct.Top := 0;
Rct.Right := 120;
Rct.Bottom := 50;
for i := 1 to 5 do
begin
ZCLabel1.Font.Size := 10 + i;
//Draws the text on the canvas in the font color
ZCLabel1.DrawTextOn(Image1.Canvas, 'Some text', Rct, false);
Rct.Top := Rct.Top + 10;
Rct.Bottom := Rct.Bottom + 10;
Rct.Left := Rct.Left + 1;
Rct.Right := Rct.Right + 1;
end;
Rct.Left := 0;
Rct.Right := Image1.Width;
Rct.Top := 0;
Rct.Bottom := Image1.Height;
i := 0;
ZCLabel1.AlignmentVertical := 2;
ZCLabel1.AlignmentHorizontal := 0;
t := 255;
while (i <= 90) do
begin
ZCLabel1.Rotate := i;
cl := t shl 8;
//Draws the text on the canvas in the given color
ZCLabel1.DrawTextOn(Image1.Canvas, 'Some text', Rct, cl, false);
inc(i, 5);
dec(t, 14);
end;
//Alignment
ZCLabel1.AlignmentVertical := 0;
ZCLabel1.AlignmentHorizontal := 2;
i := 0;
while (i <= 90) do
begin
ZCLabel1.Rotate := i;
ZCLabel1.DrawTextOn(Image1.Canvas, 'Some text', Rct, false);
inc(i, 10);
end;
ZCLabel1.Font.Size := 20;
ZCLabel1.Rotate := -30;
ZCLabel1.AlignmentVertical := 1;
ZCLabel1.AlignmentHorizontal := 1;
//Line spacing
ZCLabel1.LineSpacing := -10;
s := 'ZClabel' + sEOL + 'drawing example' + sEOL +
'on an arbitrary canvas';
ZCLabel1.DrawTextOn(Image1.Canvas, s, Rct, clGreen, false);
//Save the resulting image to a file
Image1.Picture.Bitmap.SaveToFile({some_path}'1.bmp');
end;
void __fastcall TfrmMain::FormCreate(TObject *Sender)
{
ZCLabel1->Visible = false;
TRect Rct;
Rct.Left = 0;
Rct.Top = 0;
Rct.Right = 120;
Rct.Bottom = 50;
Graphics::TCanvas *CNV;
CNV = Image1->Canvas;
CNV->Brush->Color = clWhite;
CNV->Brush->Style = bsSolid;
CNV->Rectangle(0, 0, Image1->Width, Image1->Height);
CNV->TextOut(10, 10, "dasd");
String z = "Some text";
int i = 0;
for (i = 1; i < 6; i++)
{
ZCLabel1->Font->Size = 10 + i;
//Draws the text on the canvas in the font color
ZCLabel1->DrawTextOn(CNV, z, Rct, false);
Rct.Top += 10;
Rct.Bottom += 10;
Rct.Left += 1;
Rct.Right += + 1;
}
Rct.Left = 0;
Rct.Right = Image1->Width;
Rct.Top = 0;
Rct.Bottom = Image1->Height;
i = 0;
ZCLabel1->AlignmentVertical = 2;
ZCLabel1->AlignmentHorizontal = 0;
int t = 255;
TColor cl = clBlack;
while (i <= 90)
{
ZCLabel1->Rotate = i;
cl = t << 8;
//Draws the text on the canvas in the given color
ZCLabel1->DrawTextOn(CNV, "Some text", Rct, cl, false);
i += 5;
t -= 14;
}
//Alignment
ZCLabel1->AlignmentVertical = 0;
ZCLabel1->AlignmentHorizontal = 2;
i = 0;
while (i <= 90)
{
ZCLabel1->Rotate = i;
ZCLabel1->DrawTextOn(CNV, "Some text", Rct, false);
i += 10;
}
ZCLabel1->Font->Size = 20;
ZCLabel1->Rotate = -30;
ZCLabel1->AlignmentVertical = 1;
ZCLabel1->AlignmentHorizontal = 1;
//Line spacing
ZCLabel1->LineSpacing = -10;
String s = String("ZClabel") + sLineBreak + String("drawing example") +
sLineBreak + String("on an arbitrary canvas");
ZCLabel1->DrawTextOn(CNV, s, Rct, clGreen, false);
//Save the resulting image to a file
Image1->Picture->Bitmap->SaveToFile(/* some_path */"1.bmp");
}
Copyright © 2006-2012 Небарак Руслан Уладзіміравіч
The process of encrypting individual files on a storage medium and permitting access to the encrypted data only after proper authentication is provided.
Technology that can survive a “Rubber-Hose attack”
In the documentary film Citizenfour, Edward Snowden says about documents: I'm comfortable in my technical ability to protect [documents]. I mean you could literally shoot me or torture me and ...
What security scheme is used by PDF password encryption, and why is it so weak?
Many PDFs are distributed as encrypted PDFs to lock out some of their functionality (eg printing, writing, copying). However, PDF cracking software is available online, which usually cracks the PDF ...
Why would an encrypted file be ~35% larger than an unencrypted one?
According to the ownCloud documentation, if you enable encryption, file sizes can be ~35% larger than their unencrypted forms. From my understanding of encryption, the file sizes should be more-or-...
How secure is a Windows password protected zip file?
I need to send some sensitive information to a client. I thought I would email a password protected zip file I created using Windows XP and then call them with the password. Presuming I pick a good ...
Do spaces in a passphrase really add any more security/entropy?
I often see passphrase suggestions written as a sentence, with spaces. In that format are they more susceptible to a dictionary attack because each word is on it's own as opposed to a large unbroken ...
Would it be plausible to write your own anti-crypto-ransomware tool? [closed]
Question After reading about how basic ransomware targets and encrypts your files. I was wondering if it would be plausible to write your own script to try and detect such activities? Initial ...
Brute Forcing Password to a Truecrypt-encrypted file with Partial Knowledge
A while back, I encrypted a few files with Truecrypt, and stored the password in my head. Now I need to access it again, the password isn't working. I'm sure most of it is right, but I'm off by one or ...
Does password protecting an archived file actually encrypt it?
For example if I use WinRAR to encrypt a file and put a password on the archive how secure is it? I keep a personal journal and am thinking of doing this, or is there a better way? It's just one huge ....
How secure is NTFS encryption?
How secure is the data in a encrypted NTFS folder on Windows (XP, 7)? (The encryption option under file|folder -> properties -> advanced -> encrypt.) If the user uses a decent password, can this ...
Encrypting files in a windows environment
We have certain client data which must be encrypted at all times. The part we have been struggling with is encrypting files on network shares. Currently we have network folders encrypted using PGP ...
VeraCrypt/TruCrypt - I can't understand why you'd want to create a “hidden” volume…?
I can't think of a reason as to why you'd want to create a hidden volume in VeraCrypt. It says because "you may be asked to hand the information," but why would I need to hand it over? Nobody has any ...
What are you doing when you move your mouse randomly during a truecrypt volume creation?
Is that called a 'round' every time you move your mouse when creating a new volume? I'm talking about the screen with the random numbers during the volume creation process. What is the purpose of ...
How can I decrypt data with Java, without hard-coding the key?
I hope this is not a chicken-egg problem or reinventing the wheel but here goes. I have a Java application that needs to access a password protected file (actually during the application startup). The ...
Security of using passwords or even passphrases to encrypt files
Is it ever appropriate to use real-world passwords to encrypt files to be sent via unsecure means. By real world, I mean a password that is memorable and memorisable by a mere person? I am implying ...
How to recover a lost zip file password
I have some files I was given by my teacher at University, I could chase him up, but I may as well try getting blood from a stone, his response rate isn't great and I completed my degree a year ago! ...
Encrypting data by parts vs encrypting the whole data
Suppose I have a file with records, and I have two options to encrypt it: encrypt the file as a whole encrypt each record separately and store them together. Which way is generally preferable and ...
Good file encryption tools [closed]
Could you please help me find a good file encryption tool? The target OS is Linux, but tools for Windows are welcome, too. FOSS tools are preferred. The tool must be reliable both for the recovery of ...
How to encrypt data on the server?
I have some data on the server (running Linux) which needs to be encrypted (company policy). This data is being served by an application running on this machine. Now I consider a few possibilities: 1)...
How to protect against adversaries snatching booted laptops to defeat full disk encryption?
I read an article describing how FBI agents snatched Ross Ulbricht's laptop while it was running to defeat full-disk encryption: Two plainclothes FBI agents, one male and one female, walked up ...
What's the standard way to encrypt a file with a public key in Java?
I'm writing a client app in Java that sends encrypted data back to the server. All clients have my RSA public key. Is there a standard way to encrypt a given file with it to send it back to the server?...
Is Changing a File Type and Name an Effective Security Solution?
Is renaming folders & files and changing file types an effective solution for file security of a PC? I am an application programmer and have an extensive background in it. I have written a ...
After How Much Data Encryption (AES-256) we should change key?
Considering AES-256 encryption, what is the maximum amount of data should be encrypted with one key? Does Block cipher modes/IV/counters also governs the limit? If say the maximum amount is 50GB, ...
time to crack file-encryption password - more than just iteration
I have often seen that takes x amount of time to crack a certain length password. But this just seems to be the amount of time it takes to iterate through all the possibilities. What about the time it ...
Simple way to encrypt files on Linux and decrypt on Windows?
We are in need of an encryption process for backups of very valuable data. This data will be stored on a distributed filesystem, so even with permissions set right, its not out of reach that this data ...
Linux Plausibly Deniable File System
Is there any way to encrypt a linux filesystem in such a way to maintain plausible deniability? E.g. "Hidden OS support," the way Truecrypt and Veracrypt work, they only support Windows OS due to low ...
How can I share files with other individuals using the cloud in a secure way?
I would like to use a dropbox because it is easy and convenient. But I want to encrypt the files with the public key of the intended recipient, so he will be the only one who can access the data in ...
Secure cleaning of deleted files
So I know how to secure delete files. But at our company, we have a laptop which had many important documents, which now have been deleted, but not in a secure way. We can't perform a full format of ...
How can I allow acces to encrypted data if only 2 out of 3 users provide a secret?
I want to encrypt data, but make sure that no single user can decrypt the data. Further, it is important not to require ALL users to be present to decrypt. This will allow for some flexibility and ...
Verify the password of an encrypted file without waiting for complete decryption
I am currently working on an application that encrypts files with AES 256. The problem is that I want the user to see whether the password is wrong before the file is decrypted. For example if you ...
How secure are the password files used by Password Safe and Password Gorilla?
Password Safe and Password Gorilla are both programs to manage passwords. Both store a list of user passwords in a file, which is encrypted using a master password. They use the same file format, so ...
Are there any “real world” implementations of secret-sharing encryption schemes?
Imagine something like TrueCrypt where user A can decrypt his files, or any 3 of the 10 directors in his organization can decrypt user A's files. As I understand it this is similar to the way the ...
when people say a file has a checked md5 hash, what exactly does that mean?
ok I was just reading this site: http://www.zdnet.com/blog/bott/stay-safe-online-5-secrets-every-pc-and-mac-owner-should-know/3542?pg=4&tag=mantle_skin;content and thought of something I always ...
Is ENCFS secure for encrypting Dropbox?
I've encrypted my dropbox-folder with encfs according to a few tutorials I found on the web, advocating this approach. But I've found the following critical statement concerning encfs'security in ...
Microsoft Office 2013 File Encryption
To what extent can the native file encryption provided by Microsoft Office 2013 (Word, Powerpoint, Excel, etc.) be relied upon to maintain the confidentiality of documents, especially within the ...
Encrypting individual files on OS X
I am about to go paperless, and I want to protect the documents that I scan. The use case is that I'll be scanning paper documents, encrypting each file, and storing them locally and on various backup ...
Applying file deltas to an encrypted file
I am developing software that will be used for data backup. The server will run on Linux. Security in transportation is not an issue (HTTPS or SSH), but the data must be stored encrypted on the ...
Encrypted password inside compressed archive
File compression utilities like Winrar or ZIP or 7zip encrypt the password and store it inside the archive. How safe is that? I mean you are giving away the archive with the password inside,it's not ...
What does Windows's built-in encryption do, if I can seemingly always read my encrypted files?
I'm playing around with certificates and encryption on a Windows system, but either it isn't working or I don't know how it should works. When I encrypt a file its name changes to green. But then, I ...
TrueCrypt -vs- MacOS encrypted sparse image - equally secure? Other alternatives?
I'm currently trying to find my personal best solution for storing passwords and other sensible data. Recently I've used 1Password on iPhone and Mac but I'm not really satisfied with its predefined ...
How do I secure patient data cheaply in a small doctor's office?
I am a single practitioner with 3 work stations and a server. What is the best cost effective way to secure my data and I suppose encrypt it. The gov't wants us to prevent any unsecure access to ...
Security of Microsoft OneDrive
A friend asked about putting some of his data on Microsoft's OneDrive. I did some research, and what I learned seems very surprising. It appears that all the user data on MS OneDrive is store ...
John McAfee's Explanation of iOS Hack
I'm referring to this YouTube video, and further comment from John: I hope everyone knows that the Apple explanation was VASTLY dumbed down for the press. I know the A7 system chip well, ...
Could a consumer-friendly PGP-like file encryption application theoretically be secure?
In a situation where there are desktop and mobile clients, and a central public key repository to and from which the desktop and mobile clients automatically upload and download public keys as needed, ...
Should an app remove or encrypt locally stored user data after a user logs out?
I’ve been poking around in the Application Support folder of a popular app, trying to extract some chat logs. (By “popular”, I mean that most people reading this will have heard of it. I’m not sure if ...
Decrypting arbitrary offset of encrypted file
I want to store encrypted files on some storage backend that allows me to fetch bytes X through Y of the encrypted file. I can obviously decrypt the entire file locally and send it back to the client. ...
Designing a cryptographic file-sharing protocol
As a learning project, I am trying to implement a secure way to share files with a friend over dropbox. (I am not looking for existing software, I am doing this in order to learn how to do this right.)...
5 votes · 1 answer · 5k views
Why is GPG file encryption so much slower than other AES implementations?
Correct me if I'm wrong: When encrypting a file, GPG creates a one-off AES encryption key and encrypts that key using RSA. This is supposedly to take advantage of AES's superior performance when ...
5 votes · 4 answers · 884 views
Is Encrypting Home Sufficient?
I currently have a fully encrypted Windows System, and I'm looking at switching to Ubuntu. Is encrypting my home directory (with ecryptfs) sufficient? On windows this would be problematic due to ...
5 votes · 3 answers · 2k views
Single File Encryption in a Windows Environment
Would it be correct to assume that encrypting a single file in a Windows environment, for example a simple text file containing login credentials for a variety of accounts, is inherently insecure? It ...
5 votes · 3 answers · 1k views
Will editing a Word file from a mounted Truecrypt volume leave any trace behind on the host computer?
I've heard that even if you have a Word document encrypted (just using the built-in Word encryption tools) and are editing it, it can still leave behind remnants of a file on the local computer in ...
Faces, Vertices and Edges in a Pentagonal Prism
The pentagonal prism is a prism that has two parallel pentagonal bases and five rectangular side faces. These prisms are also considered as heptahedra. These three-dimensional figures have a total of 7 faces, 10 vertices, and 15 edges. Each of the pentagonal faces has five edges and five vertices. On the other hand, each of the rectangular faces has four edges and four vertices.
Here, we will learn about the faces, vertices, and edges of pentagonal prisms in more detail. Also, we will use some diagrams to illustrate the concepts.
Faces of pentagonal prisms
The faces of the pentagonal prism are the flat surfaces of the prism. These prisms are composed of two pentagonal faces that are called the bases. The bases are parallel and congruent with each other.
These bases are joined with five rectangular side faces. If the pentagonal bases are regular, the five rectangular faces will be congruent. Therefore, a pentagonal prism has a total of 7 faces.
diagram of a pentagonal prism
The surface area of the prism is obtained by adding the areas of all the faces. The area of each regular pentagonal face is approximately 1.72 l², where l is the length of one of the sides of the pentagonal base.
Therefore, the area of both pentagonal faces is approximately 3.44 l². On the other hand, the area of each rectangular face is equal to lh, where h is the height of the prism. Therefore, the area of the five rectangular faces is equal to 5lh.
This means that the surface area of the pentagonal prism is approximately 3.44 l² + 5lh.
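The surface-area computation above can be sketched in a few lines of Python; the coefficient for one base is the exact regular-pentagon area constant, (1/4)·√(5(5 + 2√5)) ≈ 1.72.

```python
import math

def pentagonal_prism_surface_area(l, h):
    """Surface area of a right prism with regular pentagonal bases.

    l is the side length of the pentagonal base, h is the height of the prism.
    Exact area of a regular pentagon with side l:
    (1/4) * sqrt(5 * (5 + 2 * sqrt(5))) * l**2  (about 1.72 * l**2).
    """
    pentagon_area = 0.25 * math.sqrt(5 * (5 + 2 * math.sqrt(5))) * l ** 2
    # two pentagonal bases + five rectangular side faces
    return 2 * pentagon_area + 5 * l * h

print(round(pentagonal_prism_surface_area(1.0, 2.0), 2))  # 13.44
```

For l = 1 and h = 2 this gives roughly 3.44 + 10 = 13.44 square units, matching the formula in the text.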
Vertices of pentagonal prisms
The vertices of a pentagonal prism are the points where three edges meet. In general, vertices are defined as the points where two or more line segments meet. We can also define the vertices as the points where three faces of the prism meet, two rectangular faces and a pentagonal face. In total, a pentagonal prism has 10 vertices.
vertices of a pentagonal prism
Edges of pentagonal prisms
The edges of a pentagonal prism are the line segments that connect two vertices. The edges are at the limits of the prism. In general, edges are defined as the line segments that join two vertices of a three-dimensional figure.
We can also consider the edges as the segments where two faces of the polyhedron meet. In total, a pentagonal prism has 15 edges.
edges of a pentagonal prism
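As a quick consistency check, the counts above (7 faces, 10 vertices, 15 edges) satisfy Euler's formula for convex polyhedra, V - E + F = 2:

```python
faces, vertices, edges = 7, 10, 15  # pentagonal prism

# Euler's formula for convex polyhedra: V - E + F = 2
euler_characteristic = vertices - edges + faces
print(euler_characteristic)  # 2
```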
Game Development Stack Exchange is a question and answer site for professional and independent game developers. It's 100% free, no registration required.
This is what I do to blit the FBO onto the screen in the ViewController:
[m_Context presentRenderbuffer:GL_RENDERBUFFER];
But I need something like glSwapBuffers, so that I can call it from the engine code (which is C++) to refresh the screen in special cases. Is this available for iOS, and how can I implement it, if I can at all?
1 Answer
OK, I solved it with a small util function:
void SwapBuffers()
{
EAGLContext* context = [EAGLContext currentContext];
[context presentRenderbuffer:GL_RENDERBUFFER];
}
How is noise implemented in `default.mixed` device?
Currently, default.qubit is used for ideal simulations and default.mixed is used for noisy simulations.
What is the difference between the two that prevents the former device from implementing noise models?
When I use a noise gate like qml.DepolarizingChannel, what is being implemented in terms of gates on the circuit?
The documentation refers to Kraus operators relating to each Pauli operator. Can the noise gate be simulated by probabilistically applying the Pauli gates on the default.qubit device?
For example, is the below code correctly applying the noise channel?
import pennylane as qml
import pennylane.numpy as np

dev = qml.device('default.qubit', wires=1)  # assumed setup, not shown in the original post
d = 0.1  # example depolarizing probability, not specified in the original post

@qml.qnode(dev)
def circ():
qml.RY(.5, wires=0)
# Depolarising Noise
if np.random.choice([True, False], p=[d / 3, 1 - d / 3]):
qml.PauliX(0)
if np.random.choice([True, False], p=[d / 3, 1 - d / 3]):
qml.PauliY(0)
if np.random.choice([True, False], p=[d / 3, 1 - d / 3]):
qml.PauliZ(0)
return qml.expval(qml.PauliX(0))
Hi @ankit27kh,
With default.mixed the density matrix remains the same and the probabilistic nature of the DepolarizingChannel is taken into account within the device. This allows for a more complex study of error and error correction.
Your approach would be equivalent to using default.mixed if you don’t need any deep analysis of error, but note that the state of the circuit and the density matrix will be different every time you run your circuit since you are essentially creating a different deterministic circuit every time, instead of creating one probabilistic circuit.
As an example:
After your code run
print('circ v1',circ())
qml.draw_mpl(circ)()
print('circ v2',circ())
qml.draw_mpl(circ)()
You will notice that you essentially build a different circuit every time.
Instead, if you run the following code you will basically get the same circuit every time:
dev2 = qml.device('default.mixed', wires = 1)
@qml.qnode(dev2)
def circ2():
qml.RY(.5, wires=0)
qml.DepolarizingChannel(d, wires=0)
return qml.density_matrix([0])
print('circ2 v1',circ2())
qml.draw_mpl(circ2)()
print('circ2 v2',circ2())
qml.draw_mpl(circ2)()
Please let me know if this is clear or if you need any other help!
Hey @CatalinaAlbornoz, thanks for the response.
I have some further questions regarding using the default.qubit device for simulating noise.
1. How are the measurements returned when using the gates directly? For example, when calculating expectation values with a given number of shots, does it consider the probability I have supplied for each gate?
2. What happens when using None shots with the probabilistically applied gates?
3. At least when using None shots, shouldn’t the result from both devices match if they are doing the same computation?
Hey @ankit27kh, let me answer these questions. :slightly_smiling_face: When working with probabilities in this way inside the circuit, there is no state vector that correctly defines the state. You can see that if you calculate qml.state each time you will get a different result. However, default.mixed has only one way to represent that state: through the density matrix.
If you are going to return qml.probs or qml.sample you will see no real difference between using one device or the other, regardless of the number of shots you put in. In particular when you put shots = None , it is solved analytically (as if there were infinite shots) so you will see no difference.
In short, if you are not going to use qml.state() you should not notice any difference between the devices (although default.mixed has some noise channels already implemented that can save you work).
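To see concretely why the analytic expectation values should agree, here is a minimal dependency-free sketch (plain Python, no PennyLane) that applies the depolarizing channel in its standard Kraus form to the density matrix of RY(theta)|0⟩, using the same theta = 1 and d = 0.7 as earlier in the thread; the Bloch vector shrinks by a factor (1 - 4p/3), so ⟨X⟩ becomes (1 - 4p/3)·sin(theta).

```python
import math

def mul(a, b):
    """Multiply two 2x2 (possibly complex) matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta, p = 1.0, 0.7  # rotation angle and depolarizing probability

# Density matrix of |psi> = RY(theta)|0> = (cos(theta/2), sin(theta/2))
c, s = math.cos(theta / 2), math.sin(theta / 2)
rho = [[c * c, c * s], [s * c, s * s]]

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

# Depolarizing channel in Kraus form:
#   rho -> (1 - p) * rho + (p / 3) * (X rho X + Y rho Y + Z rho Z)
rho_out = [[(1 - p) * rho[i][j]
            + (p / 3) * (mul(X, mul(rho, X))[i][j]
                         + mul(Y, mul(rho, Y))[i][j]
                         + mul(Z, mul(rho, Z))[i][j])
            for j in range(2)] for i in range(2)]

# <X> = Tr(X rho_out) = rho_out[0][1] + rho_out[1][0]
expval_x = (rho_out[0][1] + rho_out[1][0]).real
print(round(expval_x, 4))                           # 0.0561
print(round((1 - 4 * p / 3) * math.sin(theta), 4))  # 0.0561
```

This is the deterministic quantity that default.mixed computes in one pass; sampling Pauli errors on default.qubit only reproduces it on average over many runs.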
Hey @Guillermo_Alonso, thank you for this! It covers all of my questions.
But I am not getting identical results from the two devices.
Consider the code below:
import pennylane as qml
import pennylane.numpy as np
shots = 100
dev1 = qml.device('default.qubit', wires=1, shots=shots)
dev2 = qml.device('default.mixed', wires=1, shots=shots)
d = .7
@qml.qnode(dev2)
def circ2():
qml.RY(1, wires=0)
qml.DepolarizingChannel(d, 0)
return qml.expval(qml.PauliX(wires=0))
@qml.qnode(dev1)
def circ1():
qml.RY(1, wires=0)
if np.random.choice([True, False], p=[d / 3, 1 - d / 3]):
qml.PauliX(0)
if np.random.choice([True, False], p=[d / 3, 1 - d / 3]):
qml.PauliY(0)
if np.random.choice([True, False], p=[d / 3, 1 - d / 3]):
qml.PauliZ(0)
return qml.expval(qml.PauliX(wires=0))
print("default.mixed")
for _ in range(3):
np.random.seed(42)
print(circ2())
print("default.qubit")
for _ in range(3):
np.random.seed(42)
print(circ1())
This results in the output:
default.mixed
0.14
0.14
0.14
default.qubit
0.86
0.86
0.86
As you can see, these don’t match. I am also resetting the seed every time. You can also use None shots. It does not match even then.
The reason for trying this instead of just using the default.mixed device is that this device does not support JAX. So if I can replicate the results with default.qubit device, I can then use JAX-JIT to reduce my computation time.
Theoretically it should work, but I don’t see the problem. :thinking: I have to check the depolarizing-channel structure to see what is going on. As you can see, with another operator it works.
import pennylane as qml
import pennylane.numpy as np
shots = 1000
dev1 = qml.device('default.qubit', wires=1)
dev2 = qml.device('default.mixed', wires=1)
d = 0.7
@qml.qnode(dev1)
def circ1():
qml.RY(2, wires=0)
if np.random.rand() < d:
qml.PauliX(0)
return qml.expval(qml.PauliX(wires=0))
print("default.qubit:", circ1())
@qml.qnode(dev2)
def circ2():
qml.RY(2, wires=0)
qml.BitFlip(d, 0)
return qml.expval(qml.PauliX(wires=0))
print("default.mixed", circ2())
If I find out anything I will write you here
Refreshing XML data when returning from (Adding Data) in Form2 back to Display Form1
mond007
Active member
Joined
Apr 26, 2010
Messages
37
Location
near Solihull, Birmingham UK
Programming Experience
10+
Hi
I have a simple database of 150 questions whereby Form1 is an Enquiry Screen where a QuestionNo is entered and the answer and its attributes are returned from an XML File for Display.
The XML File layout is as follows :
QuestionNo (e.g. 2.7)
Question Text (What should you do when someone faints).
AnswerSection (Health & Emergency)
AnswerSectionColor (Red) ('Colour Coded' as mentioned above)
AnswerImage (Image of a Man being tended to). JPG, BMP, etc.(importable).
AnswerHyperlink (www.n.h.s.recovery-situation-blah-blah.com)
A second form, Form2, is used to maintain the 150-question QuestionAnswerData.xml backend file.
The problem is that when I add or modify data in Form2 and "Save" the data in maintenance Form2, the changes are NOT displayed or reflected in Form1 the display.
I know the changes have been saved because if I completely exit the application and re-launch it, the newly added data is present. What I need is a mechanism or code that will re-read the XML file data when returning from Form2 to Form1.
Form1
Form1.jpg
Form2
Form2.jpg
VB.NET:
Imports System.Text.RegularExpressions
Public Class Form1
Dim QuestionAnswerData As New DataSet
Dim bsQuestions As New BindingSource
Public Property DetectUrls As Boolean
Public Property SelectionIndent As Integer
Public Property SelectionHangingIndent As Integer
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
'--------------------------------------------------------------------------
QuestionAnswerData = New DataSet
QuestionAnswerData.ReadXml(GlobalVariables.RootPath.ToString & GlobalVariables.RootXMLFileName.ToString, XmlReadMode.ReadSchema)
With QuestionAnswerData.Tables(0).Columns(0)
.AutoIncrement = True
.AutoIncrementStep = 1
If QuestionAnswerData.Tables(0).Rows.Count > 0 Then
.AutoIncrementSeed = QuestionAnswerData.Tables(0).Rows.Cast(Of DataRow).Max(Function(x) CInt(x(0))) + 1
Else
.AutoIncrementSeed = 1
End If
End With
With bsQuestions
.DataSource = QuestionAnswerData
.DataMember = "Questions"
End With
Me.txtb_search_question_no.Focus()
End Sub
Private Sub txtb_search_question_no_TextChanged(sender As Object, e As EventArgs) Handles txtb_search_question_no.TextChanged
Dim myColor As Color = Color.Green
Dim iColor As Integer = myColor.ToArgb()
Dim sColor As String = iColor.ToString
Dim DataViewRecord As DataView
'Search the Questions Table for the one that you want to find
Dim foundRow As DataRow = QuestionAnswerData.Tables("Questions").Select(String.Format("QuestionNo ='{0}'", txtb_search_question_no.Text)).FirstOrDefault
DataViewRecord = New DataView(QuestionAnswerData.Tables(0))
DataViewRecord.Sort = "QuestionNo"
Dim index As Integer = DataViewRecord.Find(Me.txtb_search_question_no.Text)
If index = -1 Then
'MsgBox("Question Not Found!")
Me.RichTextBox1.Text = " QUESTION NOT FOUND "
PictureBox2.Image = Nothing
Else
Me.RichTextBox1.Text = ""
Me.txtb_hyperlink.Text = ""
Me.txtb_section.Text = DataViewRecord(index)("AnswerSection").ToString()
'------------------------- Load Answer Image -------------------------
Dim fs As System.IO.FileStream
fs = New System.IO.FileStream((GlobalVariables.RootImagesPath & Replace(Me.txtb_search_question_no.Text.ToString(), ".", "_") & ".png"), IO.FileMode.Open, IO.FileAccess.Read)
PictureBox2.Image = System.Drawing.Image.FromStream(fs)
'------------------------- Load Answer RichTextFile -------------------------
If DataViewRecord(index)("AnswerHyperlink").ToString() <> "" Then
Me.txtb_hyperlink.Text = DataViewRecord(index)("AnswerHyperlink").ToString()
End If
'------------------------- Load Answer RichTextFile -------------------------
If My.Computer.FileSystem.FileExists(GlobalVariables.RootRtfPath & Replace(DataViewRecord(index)("QuestionNo").ToString(), ".", "_") & ".rtf") Then
Me.RichTextBox1.LoadFile(GlobalVariables.RootRtfPath & Replace(DataViewRecord(index)("QuestionNo").ToString(), ".", "_") & ".rtf", RichTextBoxStreamType.RichText)
End If
End If
End Sub
Private Sub BtnMaintainData_Click(sender As Object, e As EventArgs) Handles BtnMaintainData.Click
Form2.Show()
End Sub
End Class
Public Class GlobalVariables
Public Shared driver_installed As Boolean
Public Shared RootPath As String = "C:\QuestionAnswer\"
Public Shared RootImagesPath As String = "C:\QuestionAnswer\Images\"
Public Shared RootRtfPath As String = "C:\QuestionAnswer\RichTextFiles\"
Public Shared RootXMLFileName As String = "QuestionAnswerData.xml"
Public Shared GlobalQuestionNo As String
End Class
Form2 - The Maintenance of Data
VB.NET:
Imports System.Text.RegularExpressions
Imports System.Xml
Imports System.Data
Imports System.Runtime.InteropServices
Imports System.Windows.Forms
Imports System.ComponentModel
Imports System.IO
Public Class Form2
Dim QuestionAnswerData As New DataSet
Private Sub Form2_Load(sender As Object, e As EventArgs) Handles MyBase.Load
'--------------------------------------------------------------------------
Dim bsQuestions As New BindingSource
QuestionAnswerData.ReadXml(GlobalVariables.RootPath.ToString & GlobalVariables.RootXMLFileName.ToString, XmlReadMode.ReadSchema)
With QuestionAnswerData.Tables(0).Columns(0)
.AutoIncrement = True
.AutoIncrementStep = 1
If QuestionAnswerData.Tables(0).Rows.Count > 0 Then
.AutoIncrementSeed = QuestionAnswerData.Tables(0).Rows.Cast(Of DataRow).Max(Function(x) CInt(x(0))) + 1
Else
.AutoIncrementSeed = 1
End If
End With
With bsQuestions
.DataSource = QuestionAnswerData
.DataMember = "Questions"
End With
Me.BindingSource1.DataSource = QuestionAnswerData
Me.BindingSource1.DataMember = "Questions"
Me.DataGridView1.DataSource = Me.BindingSource1
With DataGridView1
.AllowUserToAddRows = False
.AllowUserToResizeColumns = False
.SelectionMode = DataGridViewSelectionMode.FullRowSelect
.MultiSelect = False
.Columns(0).Visible = False
.RowHeadersVisible = False
.Columns(1).HeaderText = "Question No"
.Columns(1).Width = 60
.Columns(2).HeaderText = "Description"
.Columns(2).Width = 210
.Columns(3).HeaderText = "Section"
.Columns(3).Width = 145
.Columns(4).HeaderText = "Section Colour"
.Columns(4).Width = 90
.Columns(5).HeaderText = "Image"
.Columns(5).Width = 90
.Columns(6).HeaderText = "Hyperlink"
.Columns(6).Width = 329
End With
DataGridView1.DefaultCellStyle.Font = New Font("Trebuchet MS", 8)
'------------------------------------------------------
Dim bc As New DataGridViewButtonColumn
bc.Tag = False
bc.Text = "Delete"
bc.Name = "Delete"
bc.Width = 19
DataGridView1.Columns.Add(bc)
ActiveControl = DataGridView1
DataGridView1.ReadOnly = False ' Disable entire DataGridView to Read Only then set all the columns in your code as readonly.
DataGridView1.Columns("QuestionNo").ReadOnly = True ' Disable changing of Main Question ID to prevent mismatch problems.
End Sub
Private Sub btnSaveData_Click(sender As Object, e As EventArgs) Handles btnSaveData.Click
QuestionAnswerData.WriteXml(GlobalVariables.RootPath.ToString & GlobalVariables.RootXMLFileName.ToString, XmlWriteMode.WriteSchema)
MsgBox("Question and Answers Information Saved", vbInformation)
End Sub
End Class
I should say this is an extremely cut-down skeleton version of a much bigger application which took several months, and I am very much hoping I will not have to abandon it at the 11th hour, so to speak.
I believe this issue is not insurmountable for someone who has superior knowledge of this type of application. It is only a question of resetting or reloading the data once back from the second Form2.
Any help would be greatly appreciated as I believe there are many experts out there with far more experience than a novice like myself.
Thanks in Advance.
PS: I have tried BindingSource1.ResetBindings(False) to no avail.
JohnH
VB.NET Forum Moderator
Staff member
Joined
Dec 17, 2005
Messages
15,557
Location
Norway
Programming Experience
10+
It is only a question of resetting or reloading the data once back from the second Form2.
That is correct; with the current code you have to load the data again from the XML file when that file is changed, because you're using different datasets in those forms. There are ways to achieve that, but since both forms are using the same data it would be better to pass the QuestionAnswerData dataset from Form1 to Form2 and update that; those changes would then be shown in Form1 also.
mond007
Active member
Joined
Apr 26, 2010
Messages
37
Location
near Solihull, Birmingham UK
Programming Experience
10+
Indeed, I am glad someone sees the simplicity of this, but my problem is that I am not sure how to pass the data from one form to the other. I am, after all, a novice. :-s
I will research how to achieve this, as I am confident it's not that hard.
If you could provide a pointer then this would be great but I will have a go. Thanks
JohnH
VB.NET Forum Moderator
Staff member
Joined
Dec 17, 2005
Messages
15,557
Location
Norway
Programming Experience
10+
Use a property (like Form2.TheProperty = value), or a method with parameter (like Form2.TheMethod(value)) to transfer data from one to the other.
mond007
Active member
Joined
Apr 26, 2010
Messages
37
Location
near Solihull, Birmingham UK
Programming Experience
10+
I managed to find the solution in the end.
VB.NET:
Private Sub BtnMaintainData_Click(sender As Object, e As EventArgs) Handles BtnMaintainData.Click
Form2.ShowDialog()
UpdateView()
End Sub
Private Sub UpdateView()
'code to update data here
End Sub
the long version is :
VB.NET:
Private Sub BtnMaintainData_Click(sender As Object, e As EventArgs) Handles BtnMaintainData.Click
Form2.ShowDialog()
UpdateView()
End Sub
Private Sub UpdateView()
'code to update data here
QuestionAnswerData = New DataSet
QuestionAnswerData.ReadXml(GlobalVariables.RootPath.ToString & GlobalVariables.RootXMLFileName.ToString, XmlReadMode.ReadSchema)
With QuestionAnswerData.Tables(0).Columns(0)
.AutoIncrement = True
.AutoIncrementStep = 1
If QuestionAnswerData.Tables(0).Rows.Count > 0 Then
.AutoIncrementSeed = QuestionAnswerData.Tables(0).Rows.Cast(Of DataRow).Max(Function(x) CInt(x(0))) + 1
Else
.AutoIncrementSeed = 1
End If
End With
With bsQuestions
.DataSource = QuestionAnswerData
.DataMember = "Questions"
End With
End Sub
Hope this helps anyone.
Thanks for all your help.
Java Program to Check Harshad number
Learn to write a simple Java program to verify whether a given number is a harshad number.
1. what is a harshad number
A harshad number (or Niven number) is an integer that is divisible by the sum of its digits; e.g., a two-digit number with digits M and N is a harshad number if it is divisible by (M + N).
For example, consider the number 40.
Given number is : 40
Sum of digits : 4 + 0 = 4
Is 40 divisible by 4? Yes. So 40 is a harshad number.
A number which is a harshad number in every number base is called an all-harshad number, or an all-Niven number. There are only four all-harshad numbers: 1, 2, 4, and 6.
2. Algorithm to determine harshad number
To find whether a given number is a harshad number –
1. Calculate the sum of all digits present in the number.
2. Divide the number by the sum of its digits. If it divides with remainder zero, the number is a harshad number; otherwise it is not.
3. Java Program to find harshad number
public class Main
{
public static void main(String[] args) {
System.out.println("20 is harshad number " + isHarshadNumber(20));
System.out.println("12 is harshad number " + isHarshadNumber(12));
System.out.println("42 is harshad number " + isHarshadNumber(42));
System.out.println("13 is harshad number " + isHarshadNumber(13));
System.out.println("19 is harshad number " + isHarshadNumber(19));
System.out.println("25 is harshad number " + isHarshadNumber(25));
}
static boolean isHarshadNumber(int numberToCheck)
{
int temp = numberToCheck;
int sumOfDigits = 0;
while (temp > 0) {
int rem = temp % 10;
sumOfDigits += rem;
temp = temp / 10;
}
return numberToCheck % sumOfDigits == 0;
}
}
Program output.
20 is harshad number true
12 is harshad number true
42 is harshad number true
13 is harshad number false
19 is harshad number false
25 is harshad number false
Happy Learning !!
Ref : Wikipedia
About Us
HowToDoInJava provides tutorials and how-to guides on Java and related technologies.
It also shares the best practices, algorithms & solutions and frequently asked interview questions.
Go by Example: Context
In the previous example we looked at setting up a simple HTTP server. HTTP servers are useful for demonstrating the usage of context.Context for controlling cancellation. A Context carries deadlines, cancellation signals, and other request-scoped values across API boundaries and goroutines.
package main
import (
"fmt"
"net/http"
"time"
)
func hello(w http.ResponseWriter, req *http.Request) {
A context.Context is created for each request by the net/http machinery, and is available with the Context() method.
ctx := req.Context()
fmt.Println("server: hello handler started")
defer fmt.Println("server: hello handler ended")
Wait for a few seconds before sending a reply to the client. This could simulate some work the server is doing. While working, keep an eye on the context’s Done() channel for a signal that we should cancel the work and return as soon as possible.
select {
case <-time.After(10 * time.Second):
fmt.Fprintf(w, "hello\n")
case <-ctx.Done():
The context’s Err() method returns an error that explains why the Done() channel was closed.
err := ctx.Err()
fmt.Println("server:", err)
internalError := http.StatusInternalServerError
http.Error(w, err.Error(), internalError)
}
}
func main() {
As before, we register our handler on the “/hello” route, and start serving.
http.HandleFunc("/hello", hello)
http.ListenAndServe(":8090", nil)
}
Run the server in the background.
$ go run context-in-http-servers.go &
Simulate a client request to /hello, hitting Ctrl+C shortly after starting to signal cancellation.
$ curl localhost:8090/hello
server: hello handler started
^C
server: context canceled
server: hello handler ended
Next example: Spawning Processes
|
__label__pos
| 0.977966 |
Month: June 2012
Character Classes or Character Sets
http://www.regular-expressions.info/charclass.html
Note that the only special characters or metacharacters inside a character class are the closing bracket (]), the backslash (\), the caret (^) and the hyphen (-). The usual metacharacters are normal characters inside a character class, and do not need to be escaped by a backslash. To search for a star or plus, use [+*]. Your regex will work fine if you escape the regular metacharacters inside a character class, but doing so significantly reduces readability.
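For instance, in Python's `re` module (the behavior is the same in other regex flavors with character classes):

```python
import re

text = "price = 3*x + 2"

# Inside a character class, + and * are ordinary characters:
unescaped = re.findall(r"[+*]", text)
# Escaping them also works, but is harder to read:
escaped = re.findall(r"[\+\*]", text)

print(unescaped)  # ['*', '+']
print(escaped)    # ['*', '+']
```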
get the result from commands executed in expect
http://www.perlmonks.org/?node_id=58634
If you have ever used Expect to execute commands on a remote system, I am sure you have run into the problem of parsing the command output from the output of the Expect methods exp_before(), clear_accum(), etc. If you send your command to this subroutine it will parse the output for you and return the command output as it would be returned from a backtick execution. Note: this subroutine uses a global Expect object variable which already has an established connection. The subroutine will not take commands that end in &; for those commands just use the $expect->print method.
# Executes a command via a global expect object and
# returns the output of the command.
#
# Arguments:
#   1 - Command <string variable containing the command string>
# Returns:
#   String variable containing the output of the specified command.
#
sub expect_execute($) {
    $expect->clear_accum();
    my $command = shift;
    my $x = '';
    my $temp = '';
    if ( $command =~ /;$/ ) {
        chop( $command );
    }
    print $expect "$command | sed 's/^/COMMAND_OUT: /g'; echo -n END_; echo EXPECT\n";
    $expect->expect( 300, -re => '^END_EXPECT' );
    my $result = $expect->exp_before();
    ( my @result ) = split( /\n/, $result );
    $result = '';
    foreach $x ( @result ) {
        $temp = $x;
        if ( chop( $temp ) eq "\r" ) {
            chop( $x );
        }
        if ( $x =~ m/^COMMAND_OUT: / ) {
            $temp = substr( $x, 13 );
            $result = $result . $temp . "\n";
        }
    }
    return $result;
}
What the hell is Perl?
Perl or Practical Extraction and Report Language is described by Larry Wall, Perl’s author, as follows: “Perl is an interpreted language optimized for scanning arbitrary text files, extracting information from those text files, and printing reports based on that information. It’s also a good language for any system management tasks. The language is intended to be practical (easy to use, efficient, complete) rather than beautiful (tiny, elegant, minimal).”
In Unix, how do I use the scp command to securely transfer files between two computers?
Unlike rcp or FTP, scp encrypts both the file and any passwords exchanged so that anyone snooping on the network can't view them.
Warning: Be careful when copying files with the same names between hosts; you may accidentally overwrite them.
The syntax for the scp command is:
scp [options] [[user@]host1:]filename1 … [[user@]host2:]filename2
[[user@]host1:]filename1 is the source file and path, and [[user@]host2:]filename2 is the destination.
For example, if user dvader is on a computer called empire.gov, and wants to copy a file called file1.txt to a directory called somedir in his account on a computer called deathstar.com, he would enter:
scp file1.txt [email protected]:somedir
Likewise, if he wanted to copy the entire contents of the somedir directory on deathstar.com back to his empire.gov account, he would enter:
scp -r [email protected]:somedir somedir
Similarly, if he is working on another computer, but wanted to copy a file called file1.txt from his home directory on empire.gov to a directory called somedir in his account on deathstar.com, he would enter:
scp [email protected]:file1.txt [email protected]:somedir

When using wildcards (e.g., * and ?) to copy multiple files from a remote system, be sure to enclose the filenames in quotes. This is because the Unix shell, not the scp command, expands unquoted wildcards.
executing external commands in Perl
There are many ways to execute external commands from Perl. The most commons are:
• system function
• exec function
• backticks (“) operator
• open function
All of these methods have different behaviour, so you should choose which one to use depending of your particular need. In brief, these are the recommendations:
method      use if ...
system()    you want to execute a command and don't want to capture its output
exec        you don't want to return to the calling perl script
backticks   you want to capture the output of the command
open        you want to pipe the command (as input or output) to your script
More detailed explanations of each method follows:
• Using system()
system() executes the command specified. It doesn’t capture the output of the command.
system() accepts as argument either a scalar or an array. If the argument is a scalar, system() uses a shell to execute the command (“/bin/sh -c command”); if the argument is an array it executes the command directly, considering the first element of the array as the command name and the remaining array elements as arguments to the command to be executed.
For that reason, it’s highly recommended for efficiency and safety reasons (specially if you’re running a cgi script) that you use an array to pass arguments to system()
Example:
#-- calling 'command' with arguments
system("command arg1 arg2 arg3");
#-- better way of calling the same command
system("command", "arg1", "arg2", "arg3");
The return value is set in $?; this value is the exit status of the command as returned by the ‘wait’ call; to get the real exit status of the command you have to shift right by 8 the value of $? ($? >> 8).
If the value of $? is -1, then the command failed to execute, in that case you may check the value of $! for the reason of the failure.
Example:
system("command", "arg1");
if ( $? == -1 )
{
print "command failed: $!\n";
}
else
{
printf "command exited with value %d", $? >> 8;
}
• Using exec()
The exec() function executes the command specified and never returns to the calling program, except in the case of failure because the specified command does not exist AND the exec argument is an array.
Like in system(), is recommended to pass the arguments of the functions as an array.
• Using backticks (“)
In this case the command to be executed is surrounded by backticks. The command is executed and the output of the command is returned to the calling script.
In scalar context it returns a single (possibly multiline) string, in list context it returns a list of lines or an empty list if the command failed.
The exit status of the executed command is stored in $? (see system() above for details).
Example:
#-- scalar context
$result = `command arg1 arg2`;
#-- the same command in list context
@result = `command arg1 arg2`;
Notice that the only output captured is STDOUT, to collect messages sent to STDERR you should redirect STDERR to STDOUT
Example:
#-- capture STDERR as well as STDOUT
$result = `command 2>&1`;
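For comparison, the same ideas — passing arguments as a list to avoid the shell, reading the exit status, and capturing STDOUT (optionally merged with STDERR) — can be sketched in Python's subprocess module (an illustrative aside, not part of the original Perl discussion):

```python
import subprocess

# Argument-list invocation, like system("command", "arg1", ...):
# no shell is involved, so metacharacters in arguments are safe.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(result.returncode)       # exit status, like ($? >> 8) in Perl
print(result.stdout.strip())   # captured output, like backticks

# Merging STDERR into STDOUT, like `command 2>&1`:
merged = subprocess.run(["sh", "-c", "echo out; echo err 1>&2"],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                        text=True)
print(merged.stdout)
```

As in Perl, the list form is preferable to a single command string whenever any argument comes from user input.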
• Using open()
http://www.perlhowto.com/executing_external_commands
Use open() when you want to:
– capture the data of a command (syntax: open(“command |”))
– feed an external command with data generated from the Perl script (syntax: open(“| command”))
Examples:
#-- list the processes running on your system
open(PS,"ps -e -o pid,stime,args |") || die "Failed: $!\n";
while ( <PS> )
{
#-- do something here
}
#-- send an email to user@localhost
open(MAIL, "| /bin/mailx -s test user\@localhost ") || die "mailx failed: $!\n";
print MAIL "This is a test message";
*****************************************************
There are several ways to invoke a program. One of the main differences is in the returned value.
• system("wc -l");
Will call the given command and return the return value of that command. This is usually not what you want, because most of the time calling wc -l means you want to get the number of lines back from that call, not whether the call was successful.
• $nol = `wc -l`
The backticks call the command and return its output into the variable (here $nol). In this case this will be what you want.
• Another way of doing this is to use
$nol = qx/wc -l/;
(mnemonic: qx = quote execute). I think it is just the same as the backticks (at least I don’t know of any difference).
• Of course there are other ways (exec, fork) that behave differently with respect to processes, but I don’t know much about them
Hope this helps…
Product Rule Explanation
It is not always necessary to compute derivatives directly from the definition. Several rules have been developed for finding the derivatives without having to use the definition directly. These rules simplify the process of differentiation. The Product Rule is a formula developed by Leibniz used to find the derivatives of products of functions.
The Product Rule is defined as the product of the first function and the derivative of the second function plus the product of the derivative of the first function and the second function:
The Formula for the Product Rule: (f(x)·g(x))′ = f(x)·g′(x) + f′(x)·g(x)
Product Rule Example
Find f'(x) of
We can see that there is a product, so we can apply the product rule. First, we take the product of the first term and the derivative of the second term.
Second, we take the product of the derivative of the first term and the second term.
Then, we add them together to get our derivative.
Notice that if we multiplied them together at the start, the product would be 21x^5. Taking the derivative after we multiplied it out would give us the same answer, 105x^4. The product rule helps take the derivative of harder products of functions that require you to use the rule instead of multiplying them together beforehand.
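The rule can also be sanity-checked numerically. The Python sketch below uses illustrative functions f(x) = 3x² and g(x) = 7x³ — one possible factor pair whose product is 21x⁵, not necessarily the pair from the original (missing) example — and compares the derivative of the product against the product-rule expansion:

```python
# Numerically verify the product rule (f*g)' = f'*g + f*g' using central
# differences. f and g are illustrative choices whose product is 21x^5.
def deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)

def f(x):
    return 3 * x**2

def g(x):
    return 7 * x**3

def prod(x):
    return f(x) * g(x)          # 21 x^5

x = 2.0
lhs = deriv(prod, x)                            # derivative of the product
rhs = deriv(f, x) * g(x) + f(x) * deriv(g, x)   # product-rule expansion
print(abs(lhs - rhs) < 1e-3)         # True: both sides agree
print(abs(lhs - 105 * x**4) < 1e-2)  # True: matches 105 x^4
```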
Let's look at a harder example:
Differentiate:
We can see that we cannot multiply first and then take the derivative. We must use the product rule.
Synced tables
Synced tables are an ideal solution for conducting data analysis within your Notion databases. Once your database is synchronized, you can generate pivot tables from your data and sync them as simple tables in Notion. This process provides a concise summary and enables the creation of dynamic dashboards that automatically stay updated.
To create a synced table, you must assign it a name and then copy and paste the link to the Notion page where you want the table to appear (ensure this is a regular page, not a full-page database). This feature isn’t limited to pivot tables; it also supports regular sheets, allowing you to utilize Google Sheets formulas. Furthermore, it seamlessly handles new and deleted rows, ensuring your data remains synchronized without manual intervention.
Passwords and the Active Directory module
The HelpMaster Active Directory module does not synchronize passwords. The password for a HelpMaster client and the password used for a Windows login (logging into your computer) are different things. Although a user can manually set a HelpMaster client password to match a Windows password, these passwords will never be synchronized via the Active Directory module.
How does the automatic HelpMaster logon work?
When the HelpMaster Active Directory module synchronizes Windows accounts with HelpMaster clients, it stores the unique Windows SID (Security Identifier) of each Active Directory account against the corresponding HelpMaster client record in the HelpMaster database. Whenever HelpMaster starts and Active Directory logon is enabled, it first checks whether the SID of the user currently logged on to Windows matches any SID stored in the HelpMaster database. If a match is found against a staff record, the logon screen is bypassed, and the HelpMaster client matching that SID is automatically logged into HelpMaster. The Windows password never plays a role in this operation; it is based purely on the Windows Security Identifier.
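The matching logic can be sketched as follows. This is a hypothetical Python illustration, not HelpMaster's actual code; the record fields are invented for the example:

```python
# Minimal sketch of SID-based auto-logon: match the current Windows
# user's SID against SIDs stored per client record. Note that the
# Windows password is never consulted anywhere in this flow.
def auto_logon(current_sid, client_records):
    """Return the matching staff client, or None to show the logon screen."""
    for client in client_records:
        if client.get("sid") == current_sid and client.get("is_staff"):
            return client
    return None

clients = [
    {"name": "alice", "sid": "S-1-5-21-1111", "is_staff": True},
    {"name": "bob",   "sid": "S-1-5-21-2222", "is_staff": False},
]
print(auto_logon("S-1-5-21-1111", clients)["name"])  # alice: logon bypassed
print(auto_logon("S-1-5-21-9999", clients))          # None: show logon screen
```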
What if I want to log onto HelpMaster as someone other than the person logged onto Windows?
As you start a HelpMaster module, hold down the Shift key. This will display the regular HelpMaster logon screen.
Things to remember…
• The HelpMaster username / password is independent from a Windows username and password. You can set them to be the same, but they are completely de-coupled from each other.
• The Active Directory manager will not synchronize your Windows password with your HelpMaster password
• When using the automatic logon (for Desktop or Web), you should never see the HelpMaster logon screen - you should automatically be logged on
• Active Directory automatic logon does not mean that you can use your Windows username / password combination in the logon screen for HelpMaster - it just means that HelpMaster recognizes that the person who is logged onto Windows has a corresponding account in HelpMaster.
See Also
Resetting a client’s HelpMaster password
Resetting a client’s Windows password
DBC Backup 2
DBC Backup 2 is a safe & simple way to schedule regular WordPress database backups using the wp-cron batch jobs.
1. Upload the folder dbc-backup-2 to the /wp-content/plugins/ directory
2. Activate the plugin through the 'Plugins' menu in WordPress
3. You can click on the Settings link from the Installed Plugins page or from the link 'DBC Backup' on the Tools Menu.
4. Configure the plugin settings and you are ready. You'll need to know the server path to the folder where you want the backup saved.
• If the plugin can't create the export directory, you will have to create it manually (the folder needs group read/write permissions)
Requires: 3.6 or higher
Compatible up to: 3.8
Last Updated: 2013-12-14
Downloads: 9,249
Ratings
4 out of 5 stars
Compatibility
1 person says it works.
0 people say it's broken.
Cody
Solution 510315
Submitted on 8 Oct 2014 by Christian
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
Test Suite
Test Status Code Input and Output
1 Pass
%% x = 4*pi; y_correct = 1; assert(isequal(your_fcn_name(x),y_correct))
2 Pass
%% x = 400*pi; y_correct = 10; assert(isequal(your_fcn_name(x),y_correct))
3 Pass
%% x = 40000*pi; y_correct = 100; assert(isequal(your_fcn_name(x),y_correct))
4 Pass
%% x = -4*pi; y_correct = 1i; assert(isequal(your_fcn_name(x),y_correct))
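The locked solution itself isn't visible, but the test cases are consistent with computing r = sqrt(x / (4π)) — the radius of a sphere from its surface area A = 4πr² (this is an inference from the inputs and outputs, not the actual submitted code). A Python sketch:

```python
import cmath

# Inferred from the test suite: 4*pi -> 1, 400*pi -> 10, 40000*pi -> 100,
# and -4*pi -> 1i, matching r = sqrt(x / (4*pi)) with a complex sqrt so
# that negative inputs yield imaginary results.
def radius_from_surface_area(x):
    return cmath.sqrt(x / (4 * cmath.pi))

print(radius_from_surface_area(4 * cmath.pi))    # (1+0j)
print(radius_from_surface_area(400 * cmath.pi))  # (10+0j)
print(radius_from_surface_area(-4 * cmath.pi))   # 1j
```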
GeetCode Hub
You are given an m x n matrix, mat, whose rows are sorted in non-decreasing order, and an integer k.
You are allowed to choose exactly one element from each row to form an array. Return the kth smallest array sum among all possible arrays.
Example 1:
Input: mat = [[1,3,11],[2,4,6]], k = 5
Output: 7
Explanation: Choosing one element from each row, the first k smallest sum are:
[1,2], [1,4], [3,2], [3,4], [1,6]. Where the 5th sum is 7.
Example 2:
Input: mat = [[1,3,11],[2,4,6]], k = 9
Output: 17
Example 3:
Input: mat = [[1,10,10],[1,4,5],[2,3,6]], k = 7
Output: 9
Explanation: Choosing one element from each row, the first k smallest sum are:
[1,1,2], [1,1,3], [1,4,2], [1,4,3], [1,1,6], [1,5,2], [1,5,3]. Where the 7th sum is 9.
Example 4:
Input: mat = [[1,1,10],[2,2,9]], k = 7
Output: 12
Constraints:
• m == mat.length
• n == mat[i].length
• 1 <= m, n <= 40
• 1 <= k <= min(200, n ^ m)
• 1 <= mat[i][j] <= 5000
• mat[i] is a non decreasing array.
public class Solution {
    public int KthSmallest(int[][] mat, int k) {
        // solution goes here
    }
}
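The starter stub above is C#. As an illustration of one workable approach (not necessarily the intended one), the rows can be folded together while keeping only the k smallest partial sums after each row — any larger partial sum can never contribute to the kth smallest final sum, since m, n ≤ 40 and k ≤ 200. A Python sketch:

```python
import heapq

# Fold rows together, pruning to the k smallest partial sums each time.
# After the last row, the (k-1)-indexed entry is the kth smallest sum.
def kth_smallest(mat, k):
    sums = [0]
    for row in mat:
        # All combinations of a kept partial sum with an element of this
        # row, reduced back to the k smallest.
        sums = heapq.nsmallest(k, (s + v for s in sums for v in row))
    return sums[k - 1]

print(kth_smallest([[1, 3, 11], [2, 4, 6]], 5))              # 7
print(kth_smallest([[1, 3, 11], [2, 4, 6]], 9))              # 17
print(kth_smallest([[1, 10, 10], [1, 4, 5], [2, 3, 6]], 7))  # 9
print(kth_smallest([[1, 1, 10], [2, 2, 9]], 7))              # 12
```

The pruning keeps each intermediate list at most k long, so the work per row is O(n·k log k) regardless of how many total combinations exist.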
# Copyright (c) 2005-2008 Open Source Applications Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from itertools import izip from chandlerdb.util.c import UUID, isuuid, Nil, Default from chandlerdb.item.c import ItemValue from chandlerdb.persistence.c import CView from chandlerdb.item.Monitors import Monitors from chandlerdb.item.Indexed import Indexed from chandlerdb.item.Collection import Collection from chandlerdb.item.RefCollections import RefList class AbstractSet(ItemValue, Indexed): def __init__(self, view, id): super(AbstractSet, self).__init__(view, None, None) self._init_indexed() self._otherName = None self.id = id def __contains__(self, item, excludeMutating=False, excludeIndexes=False): raise NotImplementedError, "%s.__contains__" %(type(self)) def sourceChanged(self, op, change, sourceOwner, sourceName, inner, other, dirties, source=None): raise NotImplementedError, "%s.sourceChanged" %(type(self)) def __repr__(self): return self._repr_() def __getitem__(self, uuid): return self.itsView[uuid] def __eq__(self, value): if self is value: return True return (type(value) is type(self) and list(value.iterSources()) == list(self.iterSources())) def __ne__(self, value): if self is value: return False return not (type(value) is type(self) and list(value.iterSources()) == list(self.iterSources())) def __nonzero__(self): index = self._anIndex() if index is not None: return len(index) > 0 for i in self.iterkeys(): return True return False def isEmpty(self): return 
not self def __iter__(self, excludeIndexes=False): if not excludeIndexes: index = self._anIndex() if index is not None: view = self.itsView return (view[key] for key in index) return self._itervalues(excludeIndexes) def itervalues(self, excludeIndexes=False): return self.__iter__(excludeIndexes) def _itervalues(self, excludeIndexes=False): raise NotImplementedError, "%s._itervalues" %(type(self)) def iterkeys(self, excludeIndexes=False): if not excludeIndexes: index = self._anIndex() if index is not None: return index.iterkeys() return self._iterkeys(excludeIndexes) # the slow way, via items, to be overridden by some implementations def _iterkeys(self, excludeIndexes=False): return (item.itsUUID for item in self.__iter__(excludeIndexes)) def iterItems(self): return self.itervalues() def iterKeys(self): return self.iterkeys() def __len__(self): index = self._anIndex() if index is not None: return len(index) return self.countKeys() def countKeys(self): count = 0 for key in self.iterkeys(True): count += 1 return count def findSource(self, id): raise NotImplementedError, "%s.findSource" %(type(self)) def _findSource(self, source, id): if isinstance(source, AbstractSet): if source.id == id: return source elif source[0] == id: return source return None def iterSources(self, recursive=False): raise NotImplementedError, "%s.iterSources" %(type(self)) def iterInnerSets(self): raise NotImplementedError, "%s.iterInnerSets" %(type(self)) def isSubset(self, superset, reasons=None): """ Tell if C{self} a subset of C{superset}. @param reasons: if specified, contains the C{(subset, superset)} pairs that caused the predicate to fail. @type reasons: a C{set} or C{None} @return: C{True} or C{False} """ raise NotImplementedError, "%s.isSubset" %(type(self)) def isSuperset(self, subset, reasons=None): """ Tell if C{self} a superset of C{subset}. @param reasons: if specified, contains the C{(subset, superset)} pairs that caused the predicate to fail. 
@type reasons: a C{set} or C{None} @return: C{True} or C{False} """ raise NotImplementedError, "%s.isSuperset" %(type(self)) def _isSourceSubset(self, source, superset, reasons): if isinstance(source, AbstractSet): return source.isSubset(superset, reasons) uItem, srcAttr = source return getattr(self.itsView[uItem], srcAttr).isSubset(superset, reasons) def _isSourceSuperset(self, source, subset, reasons): if isinstance(source, AbstractSet): return source.isSuperset(subset, reasons) uItem, srcAttr = source return getattr(self.itsView[uItem], srcAttr).isSuperset(subset, reasons) def _iterSourceItems(self): for item, attribute in self.iterSources(): yield item def _iterSources(self, source, recursive=False): if isinstance(source, AbstractSet): for source in source.iterSources(recursive): yield source else: uItem, srcAttr = source srcItem = self.itsView[uItem] yield srcItem, srcAttr if recursive: set = getattr(srcItem, srcAttr) if isinstance(set, AbstractSet): for source in set.iterSources(True): yield source def _inspect__(self, indent): return "\n%s<%s>" %(' ' * indent, type(self).__name__) def dir(self): """ Debugging: print all items referenced in this set """ for item in self: print item._repr_() def _setView(self, view): self._view = view def _prepareSource(self, source): if isinstance(source, AbstractSet): return source.itsView, source elif isinstance(source, Collection): return source.getSourceCollection() elif isuuid(source[0]): return None, source else: return source[0].itsView, (source[0].itsUUID, source[1]) def _sourceContains(self, item, source, excludeMutating=False, excludeIndexes=False): if item is None: return False if not isinstance(source, AbstractSet): source = getattr(self.itsView[source[0]], source[1]) return source.__contains__(item, excludeMutating, excludeIndexes) def _getSource(self, source): if isinstance(source, AbstractSet): return source return getattr(self.itsView[source[0]], source[1]) def _inspectSource(self, source, indent): if 
isinstance(source, AbstractSet): return source._inspect_(indent) return self.itsView[source[0]]._inspectCollection(source[1], indent) def _aSourceIndex(self, source): if isinstance(source, AbstractSet): return source._anIndex() return getattr(self.itsView[source[0]], source[1])._anIndex() def _iterSource(self, source, excludeIndexes=False): if isinstance(source, AbstractSet): for item in source.__iter__(excludeIndexes): yield item else: for item in getattr(self.itsView[source[0]], source[1]).__iter__(excludeIndexes): yield item def _iterSourceKeys(self, source, excludeIndexes=False): if isinstance(source, AbstractSet): return source.iterkeys(excludeIndexes) return getattr(self.itsView[source[0]], source[1]).iterkeys(excludeIndexes) def _sourceLen(self, source): if isinstance(source, AbstractSet): return len(source) return len(getattr(self.itsView[source[0]], source[1])) def _reprSource(self, source, replace): if isinstance(source, AbstractSet): return source._repr_(replace) if replace is not None: replaceItem = replace[source[0]] if replaceItem is not Nil: source = (replaceItem.itsUUID, source[1]) return "(UUID('%s'), '%s')" %(source[0].str64(), source[1]) def _reprSourceId(self, replace): id = self.id if id is not None: if replace is not None: replaceItem = replace[id] if replaceItem is not Nil: id = replaceItem.itsUUID return ", id=UUID('%s')" %(id.str64()) return '' def _setSourceItem(self, source, item, attribute, oldItem, oldAttribute): if isinstance(source, AbstractSet): source._setOwner(item, attribute) elif item is not oldItem: view = self.itsView if not view.isLoading(): if item is None: sourceItem = view.findUUID(source[0]) if sourceItem is not None: # was deleted oldItem._unwatchSet(sourceItem, source[1], oldAttribute) else: item._watchSet(view[source[0]], source[1], attribute) def _setSourceView(self, source, view): if isinstance(source, AbstractSet): source._setView(view) def _sourceChanged(self, source, op, change, sourceOwner, sourceName, other, 
dirties, actualSource): if isinstance(source, AbstractSet): if actualSource is not None: if source is not actualSource: op = None else: op = source.sourceChanged(op, change, sourceOwner, sourceName, True, other, dirties) elif (sourceName == source[1] and (isuuid(sourceOwner) and sourceOwner == source[0] or sourceOwner is self.itsView[source[0]])): pass else: op = None return op def _collectionChanged(self, op, change, other, dirties, local=False): item, attribute = self.itsOwner if item is not None: if change == 'collection': if op in ('add', 'remove'): otherItem = self.itsView.find(other) if op == 'add': if (otherItem is not None and otherItem.isDeferringOrDeleting()): return if not (local or self._otherName is None): if otherItem is not None: refs = otherItem.itsRefs if op == 'add': refs._addRef(self._otherName, item, attribute, True) else: refs._removeRef(self._otherName, item) elif op == 'add': raise AssertionError, ("op == 'add' but item not found", other) if self._indexes: dirty = False if op == 'add': for index in self._indexes.itervalues(): if other not in index: index.insertKey(other, Default, False, True) dirty = True else: for index in self._indexes.itervalues(): if index.removeKey(other): dirty = True if dirty: self._setDirty(True) elif op == 'refresh': pass else: raise ValueError, op item._collectionChanged(op, change, attribute, other, dirties) def removeByIndex(self, indexName, position): raise TypeError, "%s contents are computed" %(type(self)) def insertByIndex(self, indexName, position, item): raise TypeError, "%s contents are computed" %(type(self)) def replaceByIndex(self, indexName, position, withItem): raise TypeError, "%s contents are computed" %(type(self)) def _copy(self, item, attribute, copyPolicy, copyFn): # in the bi-ref case, set owner and value on item as needed # in non bi-ref case, Values sets owner and value on item otherName = self._otherName if otherName is not None: copy = item.itsRefs.get(attribute, Nil) if copy is not Nil: 
return copy policy = (copyPolicy or item.getAttributeAspect(attribute, 'copyPolicy', False, None, 'copy')) replace = {} for sourceItem in self._iterSourceItems(): if copyFn is not None: replace[sourceItem.itsUUID] = copyFn(item, sourceItem, policy) else: replace[sourceItem.itsUUID] = sourceItem copy = eval(self._repr_(replace)) copy._setView(item.itsView) if otherName is not None: item.itsRefs[attribute] = copy copy._setOwner(item, attribute) return copy def _clone(self, item, attribute): clone = eval(self._repr_()) clone._setView(item.itsView) return clone def copy(self, id=None): copy = eval(self._repr_()) copy._setView(self.itsView) copy.id = id or self.id return copy def _check(self, logger, item, attribute, repair): result = True try: sources = set() def checkSources(_self): for source in _self.iterSources(): srcItem, srcAttr = source value = getattr(srcItem, srcAttr) if not value._indexes: if source in sources: logger.error("Set '%s', value of attribute '%s' on %s has duplicated source (%s, %s)", self, attribute, item._repr_(), srcItem._repr_(), srcAttr) return False else: sources.add(source) if isinstance(value, AbstractSet): if not checkSources(value): return False return True result = checkSources(self) except: logger.exception("Set '%s', value of attribute '%s' on %s could not be checked for duplicates because of error", self, attribute, item._repr_()) result = False if result: result = (super(AbstractSet, self)._check(logger, item, attribute, repair) and self._checkIndexes(logger, item, attribute, repair)) return result def _setDirty(self, noFireChanges=False): self._dirty = True item = self._owner() if item is not None: try: view = item.itsView verify = view._status & CView.VERIFY if verify: view._status &= ~CView.VERIFY if self._otherName is None: item.setDirty(item.VDIRTY, self._attribute, item._values, noFireChanges) else: item.setDirty(item.RDIRTY, self._attribute, item.itsRefs, noFireChanges) finally: if verify: view._status |= CView.VERIFY 
@classmethod def makeValue(cls, string): return eval(string) @classmethod def makeString(cls, value): return value._repr_() def _setOwner(self, item, attribute): if item is None: self._removeIndexes() result = super(AbstractSet, self)._setOwner(item, attribute) if item is None: self._otherName = None else: self._otherName = item.itsKind.getOtherName(attribute, item, None) return result # refs part def _isRefs(self): return True def _isList(self): return False def _isSet(self): return True def _isDict(self): return False def _setRef(self, other, alias=None, dictKey=None, otherKey=None, ignore=False): self._owner().add(other) self.itsView._notifyChange(self._collectionChanged, 'add', 'collection', other.itsUUID, (), True) def _removeRef(self, other, dictKey=None): if other in self: self._owner().remove(other) self.itsView._notifyChange(self._collectionChanged, 'remove', 'collection', other.itsUUID, (), True) def _removeRefs(self): if self._otherName is not None: for item in self: item.itsRefs._removeRef(self._otherName, self._owner()) def _fillRefs(self): if self._otherName is not None: for item in self: item.itsRefs._setRef(self._otherName, self._owner(), self._attribute) def clear(self, ignore=None): self._removeRefs() class EmptySet(AbstractSet): def __init__(self, id=None): super(EmptySet, self).__init__(None, id) def __contains__(self, item, excludeMutating=False, excludeIndexes=False): return False def _itervalues(self, excludeIndexes=False): return iter(()) def _iterkeys(self, excludeIndexes=False): return iter(()) def countKeys(self): return 0 def _reprSourceId(self, replace): id = self.id if id is not None: if replace is not None: replaceItem = replace[id] if replaceItem is not Nil: id = replaceItem.itsUUID return "id=UUID('%s')" %(id.str64()) return '' def _repr_(self, replace=None): return "%s(%s)" %(type(self).__name__, self._reprSourceId(replace)) def sourceChanged(self, op, change, sourceOwner, sourceName, inner, other, dirties, source=None): return 
None def findSource(self, id): return None def iterSources(self, recursive=False): return iter(()) def iterInnerSets(self): return iter(()) def isSubset(self, superset, reasons=None): return True def isSuperset(self, subset, reasons): if isinstance(subset, EmptySet): return True if reasons is not None: reasons.add((subset, self)) return False def _inspect_(self, indent): return '%s' %(self._inspect__(indent)) class Set(AbstractSet): def __init__(self, source, id=None): view, self._source = self._prepareSource(source) super(Set, self).__init__(view, id) def __contains__(self, item, excludeMutating=False, excludeIndexes=False): if item is None: return False if not excludeIndexes: index = self._anIndex() if index is not None: return item.itsUUID in index return self._sourceContains(item, self._source, excludeMutating, excludeIndexes) def _itervalues(self, excludeIndexes=False): return self._iterSource(self._source, excludeIndexes) def _iterkeys(self, excludeIndexes=False): return self._iterSourceKeys(self._source, excludeIndexes) def countKeys(self): return self._sourceLen(self._source) def _repr_(self, replace=None): return "%s(%s%s)" %(type(self).__name__, self._reprSource(self._source, replace), self._reprSourceId(replace)) def _setOwner(self, item, attribute): oldItem, oldAttribute, x = super(Set, self)._setOwner(item, attribute) self._setSourceItem(self._source, item, attribute, oldItem, oldAttribute) return oldItem, oldAttribute, x def _setView(self, view): super(Set, self)._setView(view) self._setSourceView(self._source, view) def sourceChanged(self, op, change, sourceOwner, sourceName, inner, other, dirties, source=None): if change == 'collection': op = self._sourceChanged(self._source, op, change, sourceOwner, sourceName, other, dirties, source) elif change == 'notification': if other not in self: op = None if not (inner is True or op is None): self._collectionChanged(op, change, other, dirties) return op def findSource(self, id): return 
self._findSource(self._source, id) def iterSources(self, recursive=False): return self._iterSources(self._source, recursive) def iterInnerSets(self): if isinstance(self._source, AbstractSet): yield self._source def isSubset(self, superset, reasons=None): return self._isSourceSubset(self._source, superset, reasons) def isSuperset(self, subset, reasons=None): return self._isSourceSuperset(self._source, subset, reasons) def _inspect_(self, indent): return '%s%s' %(self._inspect__(indent), self._inspectSource(self._source, indent + 1)) class BiSet(AbstractSet): def __init__(self, left, right, id=None): view, self._left = self._prepareSource(left) view, self._right = self._prepareSource(right) super(BiSet, self).__init__(view, id) def _repr_(self, replace=None): return "%s(%s, %s%s)" %(type(self).__name__, self._reprSource(self._left, replace), self._reprSource(self._right, replace), self._reprSourceId(replace)) def _setOwner(self, item, attribute): oldItem, oldAttribute, x = super(BiSet, self)._setOwner(item, attribute) self._setSourceItem(self._left, item, attribute, oldItem, oldAttribute) self._setSourceItem(self._right, item, attribute, oldItem, oldAttribute) return oldItem, oldAttribute, x def _setView(self, view): super(BiSet, self)._setView(view) self._setSourceView(self._left, view) self._setSourceView(self._right, view) def _op(self, leftOp, rightOp, other): raise NotImplementedError, "%s._op" %(type(self)) def sourceChanged(self, op, change, sourceOwner, sourceName, inner, other, dirties, source=None): if change == 'collection': leftOp = self._sourceChanged(self._left, op, change, sourceOwner, sourceName, other, dirties, source) rightOp = self._sourceChanged(self._right, op, change, sourceOwner, sourceName, other, dirties, source) if op == 'refresh': op = self._op(leftOp, rightOp, other) or 'refresh' else: op = self._op(leftOp, rightOp, other) elif change == 'notification': if other not in self: op = None if not (inner is True or op is None): 
self._collectionChanged(op, change, other, dirties) return op def findSource(self, id): source = self._findSource(self._left, id) if source is not None: return source source = self._findSource(self._right, id) if source is not None: return source return None def iterSources(self, recursive=False): for source in self._iterSources(self._left, recursive): yield source for source in self._iterSources(self._right, recursive): yield source def iterInnerSets(self): if isinstance(self._left, AbstractSet): yield self._left if isinstance(self._right, AbstractSet): yield self._right def _inspect_(self, indent): return '%s%s%s' %(self._inspect__(indent), self._inspectSource(self._left, indent + 1), self._inspectSource(self._right, indent + 1)) class Union(BiSet): def __contains__(self, item, excludeMutating=False, excludeIndexes=False): if item is None: return False if not excludeIndexes: index = self._anIndex() if index is not None: return item.itsUUID in index return (self._sourceContains(item, self._left, excludeMutating, excludeIndexes) or self._sourceContains(item, self._right, excludeMutating, excludeIndexes)) def _itervalues(self, excludeIndexes=False): left = self._left for item in self._iterSource(left, excludeIndexes): yield item for item in self._iterSource(self._right, excludeIndexes): if not self._sourceContains(item, left, False, excludeIndexes): yield item def _iterkeys(self, excludeIndexes=False): if not excludeIndexes: leftIndex = self._aSourceIndex(self._left) if leftIndex is not None: for key in leftIndex: yield key for key in self._iterSourceKeys(self._right): if key not in leftIndex: yield key return for key in self._iterSourceKeys(self._left, excludeIndexes): yield key left = self._getSource(self._left) for key in self._iterSourceKeys(self._right, excludeIndexes): if not left.__contains__(key, False, excludeIndexes): yield key def _op(self, leftOp, rightOp, other): left = self._left right = self._right if (leftOp == 'add' and not 
self._sourceContains(other, right) or rightOp == 'add' and not self._sourceContains(other, left)): return 'add' elif (leftOp == 'remove' and not self._sourceContains(other, right) or rightOp == 'remove' and not self._sourceContains(other, left)): return 'remove' return None def isSubset(self, superset, reasons=None): return (self._isSourceSubset(self._left, superset, reasons) and self._isSourceSubset(self._right, superset, reasons)) def isSuperset(self, subset, reasons=None): if (self._isSourceSuperset(self._left, subset, reasons) or self._isSourceSuperset(self._right, subset, reasons)): if reasons: reasons.clear() return True if reasons is not None and not reasons: reasons.add((subset, self)) return False class Intersection(BiSet): def __contains__(self, item, excludeMutating=False, excludeIndexes=False): if item is None: return False if not excludeIndexes: index = self._anIndex() if index is not None: return item.itsUUID in index return (self._sourceContains(item, self._left, excludeMutating, excludeIndexes) and self._sourceContains(item, self._right, excludeMutating, excludeIndexes)) def _itervalues(self, excludeIndexes=False): left = self._left right = self._right for item in self._iterSource(left, excludeIndexes): if self._sourceContains(item, right, False, excludeIndexes): yield item def _iterkeys(self, excludeIndexes=False): if not excludeIndexes: rightIndex = self._aSourceIndex(self._right) if rightIndex is not None: for key in self._iterSourceKeys(self._left): if key in rightIndex: yield key return right = self._getSource(self._right) for key in self._iterSourceKeys(self._left, excludeIndexes): if right.__contains__(key, False, excludeIndexes): yield key def _op(self, leftOp, rightOp, other): left = self._left right = self._right inLeft = self._sourceContains(other, left) inRight = self._sourceContains(other, right) if (leftOp == 'add' and inRight or rightOp == 'add' and inLeft): return 'add' elif (leftOp == 'remove' and inRight or rightOp == 'remove' and 
inLeft): return 'remove' index = self._anIndex() if index is not None: if not (inRight or inLeft) and 'remove' in (leftOp, rightOp): if other in index: return 'remove' if (inRight and inLeft) and 'add' in (leftOp, rightOp): if other not in index: return 'add' return None def isSubset(self, superset, reasons=None): if (self._isSourceSubset(self._left, superset, reasons) or self._isSourceSubset(self._right, superset, reasons)): if reasons: reasons.clear() return True return False def isSuperset(self, subset, reasons=None): return (self._isSourceSuperset(self._left, subset, reasons) and self._isSourceSuperset(self._right, subset, reasons)) class Difference(BiSet): def __contains__(self, item, excludeMutating=False, excludeIndexes=False): if item is None: return False if not excludeIndexes: index = self._anIndex() if index is not None: return item.itsUUID in index return (self._sourceContains(item, self._left, excludeMutating, excludeIndexes) and not self._sourceContains(item, self._right, excludeMutating, excludeIndexes)) def _itervalues(self, excludeIndexes=False): left = self._left right = self._right for item in self._iterSource(left, excludeIndexes): if not self._sourceContains(item, right, False, excludeIndexes): yield item def _iterkeys(self, excludeIndexes=False): if not excludeIndexes: rightIndex = self._aSourceIndex(self._right) if rightIndex is not None: for key in self._iterSourceKeys(self._left): if key not in rightIndex: yield key return right = self._getSource(self._right) for key in self._iterSourceKeys(self._left, excludeIndexes): if not right.__contains__(key, False, excludeIndexes): yield key def _op(self, leftOp, rightOp, other): left = self._left right = self._right if (leftOp == 'add' and not self._sourceContains(other, right) or rightOp == 'remove' and self._sourceContains(other, left, True)): return 'add' elif (leftOp == 'remove' and not self._sourceContains(other, right) or rightOp == 'add' and self._sourceContains(other, left, True)): return 
'remove' return None def isSubset(self, superset, reasons=None): return self._isSourceSubset(self._left, superset, reasons) def isSuperset(self, subset, reasons=None): return self._isSourceSuperset(self._left, subset, reasons) class MultiSet(AbstractSet): def __init__(self, *sources, **kwds): self._sources = [] view = None for source in sources: view, source = self._prepareSource(source) self._sources.append(source) super(MultiSet, self).__init__(view, kwds.get('id', None)) def _repr_(self, replace=None): return "%s(%s%s)" %(type(self).__name__, ", ".join([self._reprSource(source, replace) for source in self._sources]), self._reprSourceId(replace)) def _setOwner(self, item, attribute): oldItem, oldAttribute, x = super(MultiSet, self)._setOwner(item, attribute) for source in self._sources: self._setSourceItem(source, item, attribute, oldItem, oldAttribute) return oldItem, oldAttribute, x def _setView(self, view): super(MultiSet, self)._setView(view) for source in self._sources: self._setSourceView(source, view) def _op(self, ops, other): raise NotImplementedError, "%s._op" %(type(self)) def sourceChanged(self, op, change, sourceOwner, sourceName, inner, other, dirties, source=None): if change == 'collection': ops = [self._sourceChanged(_source, op, change, sourceOwner, sourceName, other, dirties, source) for _source in self._sources] if op == 'refresh': op = self._op(ops, other) or 'refresh' else: op = self._op(ops, other) elif change == 'notification': if other not in self: op = None if not (inner is True or op is None): self._collectionChanged(op, change, other, dirties) return op def findSource(self, id): for source in self._sources: src = self._findSource(source, id) if src is not None: return src return None def iterSources(self, recursive=False): for source in self._sources: for src in self._iterSources(source, recursive): yield src def iterInnerSets(self): for source in self._sources: if isinstance(source, AbstractSet): yield source def _inspect_(self, 
indent): return '%s%s' %(self._inspect__(indent), ''.join([self._inspectSource(source, indent + 1) for source in self._sources])) class MultiUnion(MultiSet): def __contains__(self, item, excludeMutating=False, excludeIndexes=False): if item is None: return False if not excludeIndexes: index = self._anIndex() if index is not None: return item.itsUUID in index for source in self._sources: if self._sourceContains(item, source, excludeMutating, excludeIndexes): return True return False def _iterkeys(self, excludeIndexes=False): sources = self._sources for source in sources: for key in self._iterSourceKeys(source, excludeIndexes): unique = True for src in sources: if src is source: break if self._sourceContains(key, src, False, excludeIndexes): unique = False break if unique: yield key def _itervalues(self, excludeIndexes=False): sources = self._sources for source in sources: for item in self._iterSource(source, excludeIndexes): unique = True for src in sources: if src is source: break if self._sourceContains(item, src, False, excludeIndexes): unique = False break if unique: yield item def _op(self, ops, other): sources = self._sources for op, source in izip(ops, sources): if op is not None: unique = True for src in sources: if src is source: continue if self._sourceContains(other, src): unique = False break if unique: return op return None def isSubset(self, superset, reasons=None): for source in self._sources: if not self._isSourceSubset(source, superset, reasons): return False return True def isSuperset(self, subset, reasons=None): for source in self._sources: if self._isSourceSuperset(source, subset, reasons): if reasons: reasons.clear() return True if reasons is not None and not reasons: reasons.add((subset, self)) return False class MultiIntersection(MultiSet): def __contains__(self, item, excludeMutating=False, excludeIndexes=False): if item is None: return False if not excludeIndexes: index = self._anIndex() if index is not None: return item.itsUUID in index for 
source in self._sources: if not self._sourceContains(item, source, excludeMutating, excludeIndexes): return False return True def _iterkeys(self, excludeIndexes=False): sources = self._sources if len(sources) > 1: source = sources[0] for key in self._iterSourceKeys(source, excludeIndexes): everywhere = True for src in sources: if src is source: continue if not self._sourceContains(key, src, False, excludeIndexes): everywhere = False break if everywhere: yield key def _itervalues(self, excludeIndexes=False): sources = self._sources if len(sources) > 1: source = sources[0] for item in self._iterSource(source, excludeIndexes): everywhere = True for src in sources: if src is source: continue if not self._sourceContains(item, src, False, excludeIndexes): everywhere = False break if everywhere: yield item def _op(self, ops, other): sources = self._sources if len(sources) > 1: for op, source in izip(ops, sources): if op is not None: everywhere = True for src in sources: if src is source: continue if not self._sourceContains(other, src): everywhere = False break if everywhere: return op return None def isSubset(self, superset, reasons=None): for source in self._sources: if self._isSourceSubset(source, superset, reasons): if reasons: reasons.clear() return True if reasons is not None and not reasons: reasons.add((self, superset)) return False def isSuperset(self, subset, reasons=None): for source in self._sources: if not self._isSourceSuperset(source, subset, reasons): return False return True class KindSet(Set): def __init__(self, kind, recursive=False, id=None): # kind is either a Kind item or an Extent UUID if isinstance(kind, UUID): self._extent = kind else: kind = kind.extent self._extent = kind.itsUUID self._recursive = recursive super(KindSet, self).__init__((kind, 'extent'), id) def __contains__(self, item, excludeMutating=False, excludeIndexes=False): if item is None: return False kind = self.itsView[self._extent].kind if isuuid(item): instance = 
self.itsView.find(item, False) if instance is None: return kind.isKeyForInstance(item, self._recursive) else: item = instance if self._recursive: contains = item.isItemOf(kind) else: contains = item.itsKind is kind if contains: if (excludeMutating and item.isMutating() and (item._futureKind is None or not item._futureKind.isKindOf(kind))): return False return not item.isDeferred() return False def _sourceContains(self, item, source, excludeMutating=False, excludeIndexes=False): return self.__contains__(item, excludeMutating, excludeIndexes) def _itervalues(self, excludeIndexes=False): return self.itsView[self._extent].iterItems(self._recursive) def _iterkeys(self, excludeIndexes=False): return self.itsView[self._extent].iterKeys(self._recursive) def _repr_(self, replace=None): return "%s(UUID('%s'), %s%s)" %(type(self).__name__, self._extent.str64(), self._recursive, self._reprSourceId(replace)) def sourceChanged(self, op, change, sourceOwner, sourceName, inner, other, dirties, source=None): if change == 'collection': op = self._sourceChanged(self._source, op, change, sourceOwner, sourceName, other, dirties, source) elif change == 'notification': if other not in self: op = None if not (inner is True or op is None): self._collectionChanged(op, change, other, dirties) return op def countKeys(self): return AbstractSet.countKeys(self) def iterSources(self, recursive=False): return iter(()) def iterInnerSets(self): return iter(()) def isSubset(self, superset, reasons=None): if self is superset: return True if isinstance(superset, KindSet): superKind = self.itsView[superset._extent].kind if self.itsView[self._extent].kind.isKindOf(superKind): return True elif isinstance(superset, AbstractSet): return superset.isSuperset(superset, reasons) if reasons is not None: reasons.add((self, superset)) return False def isSuperset(self, subset, reasons=None): if self is subset: return True if isinstance(subset, KindSet): subKind = self.itsView[subset._extent].kind if 
subKind.isKindOf(self.itsView[self._extent].kind): return True elif isinstance(subset, AbstractSet): return subset.isSubset(self, reasons) elif isinstance(subset, RefList): item, attr = subset.itsOwner subKind = item.getAttributeAspect(attr, 'type') if (subKind is not None and subKind.isKindOf(self.itsView[self._extent].kind)): return True if reasons is not None: reasons.add((subset, self)) return False def _inspect_(self, indent): return "%s\n%skind: %s" %(self._inspect__(indent), ' ' * (indent + 1), self.itsView[self._extent].kind.itsPath) class FilteredSet(Set): def __init__(self, source, attrs=None, id=None): super(FilteredSet, self).__init__(source, id) self.attributes = attrs def __contains__(self, item, excludeMutating=False, excludeIndexes=False): if item is None: return False if not excludeIndexes: index = self._anIndex() if index is not None: return item.itsUUID in index if self._sourceContains(item, self._source, excludeMutating, excludeIndexes): return self.filter(item.itsUUID) return False def _iterkeys(self, excludeIndexes=False): for uuid in self._iterSourceKeys(self._source, excludeIndexes): if self.filter(uuid): yield uuid def _itervalues(self, excludeIndexes=False): for item in self._iterSource(self._source, excludeIndexes): if self.filter(item.itsUUID): yield item def countKeys(self): count = 0 for key in self._iterkeys(True): count += 1 return count def _setOwner(self, item, attribute): oldItem, oldAttribute, x = \ super(FilteredSet, self)._setOwner(item, attribute) if item is not oldItem: if not self.itsView.isLoading(): attrs = self.attributes if oldItem is not None: if attrs: def detach(op, name): Monitors.detachFilterMonitor(oldItem, op, name, oldAttribute) for name in attrs: detach('init', name) detach('set', name) detach('remove', name) if item is not None: if attrs: def attach(op, name): Monitors.attachFilterMonitor(item, op, name, attribute) for name in attrs: attach('init', name) attach('set', name) attach('remove', name) return 
oldItem, oldAttribute, x def sourceChanged(self, op, change, sourceOwner, sourceName, inner, other, dirties, source=None): if change == 'collection': op = self._sourceChanged(self._source, op, change, sourceOwner, sourceName, other, dirties, source) if op == 'add': index = self._anIndex() if index is not None and other in index: op = None elif not self.filter(other): op = None elif op == 'remove': index = self._anIndex() if index is not None: if other not in index: op = None elif not self.filter(other): otherItem = self.itsView.find(other) if not (otherItem is None or otherItem.isDeleting()): op = None elif change == 'notification': if other not in self: op = None if not (inner is True or op is None): self._collectionChanged(op, change, other, dirties) return op def itemChanged(self, other, attribute): if self._sourceContains(other, self._source): matched = self.filter(other) if self._indexes: contains = other in self._indexes.itervalues().next() else: contains = None if matched and not contains is True: item = self.itsView.find(other) if item is None or not item.isDeferring(): self._collectionChanged('add', 'collection', other, ()) elif not matched and not contains is False: self._collectionChanged('remove', 'collection', other, ()) class ExpressionFilteredSet(FilteredSet): def __init__(self, source, expr, attrs=None, id=None): super(ExpressionFilteredSet, self).__init__(source, attrs, id) self.filterExpression = expr self._filter = eval("lambda view, uuid: " + expr) def filter(self, uuid): try: return self._filter(self.itsView, uuid) except Exception, e: e.args = ("Error in filter", self.filterExpression) + e.args raise def _repr_(self, replace=None): return "%s(%s, \"\"\"%s\"\"\", %s%s)" %(type(self).__name__, self._reprSource(self._source, replace), self.filterExpression, self.attributes, self._reprSourceId(replace)) def _inspect_(self, indent): i = indent + 1 return "%s\n%sfilter: %s\n%s attrs: %s%s" %(self._inspect__(indent), ' ' * i, self.filterExpression, ' 
' * i, ', '.join(str(a) for a in self.attributes), self._inspectSource(self._source, i)) class MethodFilteredSet(FilteredSet): def __init__(self, source, filterMethod, attrs=None, id=None): super(MethodFilteredSet, self).__init__(source, attrs, id) item, methodName = filterMethod self.filterMethod = (item.itsUUID, methodName) def filter(self, uuid): view = self.itsView uItem, methodName = self.filterMethod return getattr(view[uItem], methodName)(view, uuid) def _repr_(self, replace=None): uItem, methodName = self.filterMethod return "%s(%s, (UUID('%s'), '%s'), %s%s)" %(type(self).__name__, self._reprSource(self._source, replace), uItem.str64(), methodName, self.attributes, self._reprSourceId(replace)) def _inspect_(self, indent): i = indent + 1 return "%s\n%sfilter: %s\n%s attrs: %s%s" %(self._inspect__(indent), ' ' * i, self.filterMethod, ' ' * i, ', '.join(str(a) for a in self.attributes), self._inspectSource(self._source, i))
Adding rows to an excel spreadsheet using apache poi
In a recent project for a client I was tasked with modifying an existing Excel spreadsheet to add data from a query. Being familiar with Java and ColdFusion, I assumed this would be a pretty trivial exercise: read the existing file, get the sheet, and then write the data. But I ran into an issue where adding rows using shiftRows didn't make them writable and/or visible to Apache POI. I realized I needed to literally add the rows and the columns to the spreadsheet, with createRow and createCell, to be able to change the values. Not a big deal code-wise and also really fast to complete, but frustrating to figure out.
currentCharterTemplate = 'existingWorkbook.xlsx';
currentFilePath = getDirectoryFromPath(getCurrentTemplatePath());
javaFile = createObject('java', 'java.io.File').init(currentFilePath & currentCharterTemplate);
excelFile = createObject('java', 'java.io.FileInputStream').init(javaFile);
xssfWorkbook = createObject('java', 'org.apache.poi.xssf.usermodel.XSSFWorkbook').init(excelFile);
summarySheet = xssfWorkbook.getSheetAt(0);
totalColumns = 12;
rowsToAdd = query.recordCount;
//add enough rows/columns to the spreadsheet to handle the record count of the query and the sort fields
for (rows = 1; rows <= rowsToAdd; rows++) {
summarySheet.createRow(rows);
theCurrentRow = summarySheet.getRow(rows);
for (columns = 0; columns <= totalColumns; columns++) {
theCurrentRow.createCell(columns);
theCurrentRow.getCell(columns);
}
}
hopefully this isn't a duplicate of another question (at least I didn't find one).
Here is a question I have about completeness and sufficiency:
Problem: Suppose $T(x)$ is complete sufficient for $\theta$ given data $x$. Show that if a minimal sufficient statistic $S(x)$ for $\theta$ exists, then $T(x)$ is also minimal sufficient.
My solution: Since $T(x)$ is complete we have that $T(X)$ is the unique MVUE for $\mathbb{E}[T(X)]=m(\theta)$ for a specific function $m$.
Consider now $$V(X)=\mathbb{E}[T(X)|S(X)].$$
By Rao-Blackwell we know that $Var(V(X))\leq Var(T(X))$. Hence, by uniqueness of MVUEs we must have that $V(X)=T(X)$, i.e. that $T(X)=g(S(X))$ from the definition of $V(X)$ (for some function $g$). However, as $T$ is a function of a minimal sufficient statistic, it is also minimal sufficient.
The problem with my solution is that I don't use the minimal sufficiency of $S$ until the very end, in comparison to the author's solution. Its idea is to say that $V(X)=h(S(X))$ by definition of the conditional expectation and then argue that $V(X)=f(T(X))$ as $S$ is minimal sufficient. The result then follows from the completeness of $T$.
I also seem to prove that every complete sufficient statistic for $\theta$ is a function of any other sufficient statistic for $\theta$. Is that true or have I made a mistake somewhere?
• How does $Var(V(X))\le Var(T(X))$ guarantee that $V(X)$ (a function of $S(X)$) is UMVUE? You are missing some details. The idea is correct, but I think it is a slightly convoluted way of showing $T(X)=V(X)$. – StubbornAtom Nov 14 '19 at 21:15
• @StubbornAtom, hey! Well both of them are unbiased for $m(\theta)$ and V has lower variance than T which is an MVUE (and is in particular, unique). Doesn't that suffice? – asdf Nov 15 '19 at 11:33
A complete sufficient statistic is a minimal sufficient statistic whenever a minimal sufficient statistic exists.
Suppose for a family of distributions parameterized by $\theta$, there exists a minimal sufficient statistic $S(X)$ and a complete sufficient statistic $T(X)$ based on the data $X$. We show that $T$ is also minimal sufficient.
As $S$ is minimal sufficient and $T$ is sufficient, by definition of minimal sufficiency there exists a measurable function $h$ such that $S=h(T)$.
Consider the function $g(T)=T-E_{\theta}[T\mid S]=T-E[T\mid S]$, so that $E_{\theta}[g(T)]=0$ for every $\theta$.
As $T$ is complete, this implies $g(T)=0$ almost everywhere. That is, $$T=E[T\mid S]\quad,\text{a.e.}$$
So $T$ is a function of $S$. And as $S$ is a function of any other sufficient statistic, so is $T$.
Therefore $T$ is minimal sufficient and equivalent to $S$.
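To make the completeness step concrete (this example is my own addition, not part of the original answer): let $X_1,\dots,X_n$ be i.i.d. $\text{Bernoulli}(\theta)$ with $0<\theta<1$, and let $T=\sum_i X_i$. Then
$$E_\theta[g(T)]=\sum_{t=0}^{n} g(t)\binom{n}{t}\theta^t(1-\theta)^{n-t}=(1-\theta)^n\sum_{t=0}^{n} g(t)\binom{n}{t}\Big(\frac{\theta}{1-\theta}\Big)^t.$$
If this is $0$ for every $\theta\in(0,1)$, the polynomial in $\rho=\theta/(1-\theta)$ must have all coefficients zero, so $g(t)=0$ for every $t$. This is exactly the kind of argument that licenses the step $T=E[T\mid S]$ a.e. above.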
How do I set the Origin Point dynamically?
• Hi!
I want to build a game with a drag-and-drop feature. It matters where I touch the piece, so the physics should do its job at this point.
For example: take a sheet of paper between two fingers; it will rotate around the "holding point". That is the origin point in Construct 2. But for that I need to set it to the position where the mouse/finger drags/holds the piece.
Is there a possibility to do that, or is it a good idea for a Construct 2 update?
Greetings
Kevin
• As far as I know this isn't possible without writing/rewriting plugins.
• Moving the origin will not help with your problem.
Mixing standard behaviors with physics is almost always wrong, and a bad path to follow.
The physics behavior attempts to simulate real-world physical interaction with it's environment, whereas the other behaviors do not. This means that when you move a physics object with, for instance, drag and drop, the physics object can end up 'teleporting' in order to update itself. This can cause seriously undesirable effects.
Physics-based objects should really be moved and controlled using the built-in physics commands.
From the manual:
"Using Physics in Construct 2
The Physics behavior simulates physics separately to the Construct 2 layout. Construct 2 will try to keep the Physics and Construct 2 "worlds" synchronised if one changes but not the other, but this can be unpredictable. For example, setting an object's position or angle will cause Construct 2 to teleport the corresponding object in the physics simulation to the object's new position, which does not always properly take in to account collisions. The same is true of using other Construct 2 behaviors at the same time as Physics.
Therefore it is highly recommended to control Physics objects entirely via the Physics behavior (by setting forces, impulses, torques etc.), rather than trying to manipulate objects by Set position, Set angle etc.
Another consequence is Physics won't respond to objects with the Solid or Jumpthru behaviors. These behaviors are totally redundant when using Physics and have no effect. Instead, use the Immovable property."
Amazon Interview Experience 2020 for SDE-1
Hi geeks, I appeared for Amazon's interview for SDE 1, and here is my experience.
Round 1: This round was an online assessment, and the questions asked were:
1. Stickler Thief
2. Binary Tree – Distance b/w two given nodes.
Round 2: I don’t know why my second round was an SDM round. It was concentrated more on my previous projects:
1. Give me the situations where you failed and pushed back.
2. Tell me the hardest task done by you till now and how you solved it.
3. You are assigned two interns, and there are two other interns in your team who are assigned to someone outside the team. Those two notice the first two asking you questions, so they start bringing their questions to you as well, which eats into your productive time. How will you react?
4. Tell me the thing that hurt you the most in your career and how did you react to it?
5. Some leadership-related questions were also there.
And some more questions on OS and CN.
Round 3: This was straight coding round.
1. The zigzag traversal of a Binary tree: Gave a solution with BFS using a Queue and Stack, but he wanted me to optimize it, so I used two stacks and that worked (no edge cases missed).
2. How to delete a node from a Singly linked list when you are given only the node (the head is not provided): simply copy the next node's data to the current one and delete the next. Coded as well (no edge cases missed).
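The two-stack zigzag from question 1 can be sketched like this (my own illustrative code, not the code written in the interview): one stack holds the current level, the other collects the next level, and the order in which children are pushed flips on every level:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def zigzag(root):
    """Level-order traversal that alternates direction on each level,
    using two stacks instead of the queue-based BFS."""
    if root is None:
        return []
    result = []
    current, nxt = [root], []   # stack for this level, stack for the next
    left_to_right = True
    while current:
        node = current.pop()
        result.append(node.val)
        # Push children so that the next level pops out in the
        # opposite direction to this one.
        children = (node.left, node.right) if left_to_right else (node.right, node.left)
        for child in children:
            if child is not None:
                nxt.append(child)
        if not current:             # finished this level: swap stacks
            current, nxt = nxt, []
            left_to_right = not left_to_right
    return result
```

For a complete tree with root 1, second level (2, 3), and third level (4, 5, 6, 7), this yields [1, 3, 2, 4, 5, 6, 7].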
Round 4: This round started with a discussion of my previous projects, some system design, and how I solved the problems that came up along the way in those projects.
Only one question was asked: create an HTML validator. I couldn’t find it on GfG, but I understood it is a variation of the balanced-parentheses problem, gave a solution with a stack, and coded it as well (here I missed 2 edge cases).
It’s been 1 week since I appeared for these interviews; no updates till now, and I think I am rejected :-(.
But if I get some more updates, I’ll update this article.
Tips:
• Keep asking for the clarifications and edge cases.
• Keep track of all edge cases and ask your interviewer about them.
• Don’t make your own assumptions, tell them that you are making these assumptions, and if they are good with that then only proceed with your solution.
I decided to revive my (many months) inactive site today using things I've learned about C++ CGI and the Document Object Model along with CSS. The site is still down, but I'm trying to get back into CGI after leaving it alone for a while (also months). I've spent the past three hours debugging this (90 minutes fixing late-night typos and a few missing semicolons, the rest spent trying to fix the problem described herein) but I can't get the function I'm basically writing it for to work. The (cgi::find) function.
This http://24.148.160.212/cgi-bin/test.cgi?cheese=cheese& is supposed to output:
cheese = cheese
I would like cheese today.
I also shouldn't need an & at the end of the query, but despite hours of fiddling with my code that sorts a query into variables and the contents of those variables, I still need the & at the end.
My server is running Apache 2.2.4 and on Windows XP and the code was compiled using Borland C++ 5.5.1 for Win32.
/*
A CGI tool allowing the use of form.find("input name") in scripts after running init()
example in main:
#include "cgi.cpp"
int main(int argc, char *argv[])
{
cgi form = init();
string cheese;
form.find( "cheese", cheese );
cout<<"<html>";
cout<<cheese;
cout<<"</html>";
}
would generate a page with the content of query or post input 'cheese' on it.
I don't know if my array use is secure or not, it supports a megabyte at the moment.
I'll begin optimization on return from vacation, any suggestions?
*/
//planned features:
//function 'text("filename")' outputs the file to a browser - can be used for blogs
//function 'cgi.record("name","filename")' puts the content of name at the end of file filename
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;
class cgi
{
friend cgi init();
private: string name; //post/query variable
string content; //contents of post/query variable
cgi *nextone; //nextone cgi. functions stop at NULL when searching for a certain name.
public: cgi(string n=" ", string c=" ") //initialize the cgi
{
name = n;
content = c;
nextone = NULL;
}
void reset(string n, string c) //reset the cgi
{
name = n;
content = c;
nextone = NULL;
}
void add(string n, string c) //add a variable to the end
{
if (nextone==NULL)
{
nextone = new cgi(n,c);
}
else
nextone->add(n,c);
}
bool find(string n, string &c) //find n, and if found, return true and copy n's content to c. if not, return false
{
if ( ! name.compare( n ) ) //if name is equal to n
{
c = content;
return true;
}
else if (nextone==NULL) //if this is the last element of the cgi
return false;
else //pass the find to the nextone
return nextone->find(n,c);
}
private: void convert()
{
int length = content.length();
int con; //content position
int nco=0; //new content position
string newcontent;
for (con = 0; con<length; con++)
{
if (content[con]=='%') //URL encoded, leave normally encoded things alone, convert what is safe to use in HTML
{
switch (content[con+1])
{
case '2':
switch (content[con+2])
{
case '0': newcontent[nco++]=' '; continue;
case '4': newcontent[nco++]='$'; continue;
case '6': newcontent[nco++]='&'; continue;
case 'b':
case 'B': newcontent[nco++]='+'; continue;
case 'c':
case 'C': newcontent[nco++]=','; continue;
case 'f':
case 'F': newcontent[nco++]='/'; continue;
default: newcontent[nco++] = '%';
newcontent[nco++] = content[con++];
newcontent[nco++] = content[con++];
continue;
}
con+=2;
nco++;
continue;
case '3':
switch (content[con+2])
{
case 'a':
case 'A': newcontent[nco++]=':'; continue;
case 'b':
case 'B': newcontent[nco++]=';'; continue;
case 'd':
case 'D': newcontent[nco++]='='; continue;
case 'f':
case 'F': newcontent[nco++]='?'; continue;
default: newcontent[nco++] = '%';
newcontent[nco++] = content[con++];
newcontent[nco++] = content[con++];
continue;
}
con+=2;
nco++;
continue;
case '4':
if (content[con+2]=='0')
newcontent[nco++]='@';
else
{
newcontent[nco++] = '%';
newcontent[nco++] = content[con++];
newcontent[nco++] = content[con++];
continue;
}
con+=2;
nco++;
continue;
default: newcontent[nco++] = '%';
newcontent[nco++] = content[con++];
newcontent[nco++] = content[con++];
continue;
}
}
else //if already plaintext
{
newcontent[nco] = content[con];
nco++;
}
}
newcontent[nco]='\0';
content = newcontent;
if (nextone!=NULL)
nextone->convert();
}
};
cgi init()
{
cgi raw; //before url unencoding
cgi plain; //after url unencoding
string query; //in case of query string
int length; //content length, if existent
int a; //integer variable
int b; //integer variable
string c; //string variable
string d; //string variable
char buffer[1024000]; //support a megabyte of form data
cout<<"Content-type: text/html\n\n"; //give the browser a heads up
if ( getenv( "QUERY_STRING" ) ) //if there is a query string
{
query = getenv( "QUERY_STRING" );
}
else if ( getenv( "CONTENT_LENGTH" ) )
{
length = atoi( getenv( "CONTENT_LENGTH") );
cin.read( buffer, length );
query = buffer;
}
else
{
cout<<"Content-type: text/html\n\n";
cout<<"No form input.";
}
//initialize the raw cgi
a = query.find_first_of('='); //find the end of the variable name
c = query.substr( 0 , a ); //assign c the variable name
b = query.find_first_of('&'); //find the end of the variable content
if (query.find_first_of('&') == -1 ) //if & not found
b = query.length();
d = query.substr( a+1 , b ); //assign d the variable content
query = query.substr( b+1, query.length() ); //shorten the query string accordingly
raw.reset( c, d );
cout<<c<<" = "<<d<<endl<<"<br>";
while (query.length()>1)
{
a = query.find_first_of('='); //find the end of the variable name
if (query.find_first_of('=') == -1) //there are no more variables
break;
c = query.substr( 0 , a ); //assign c the variable name
b = query.find_first_of('&'); //find the end of the variable content
if (query.find_first_of('&') == -1 ) //if & not found
b = query.length();
d = query.substr( a+1 , b ); //assign d the variable content
query = query.substr( b+1, query.length() ); //shorten the query string accordingly
raw.add( c, d ); //add to the cgi instead of resetting it
cout<<c<<" = "<<d<<"<br>";
}
//herein lies the conversion of all raw cgi content's to plaintext (not a grammatical error)
raw.convert();
plain = raw;
//herein lies the returning of the converted content goodness
return plain;
}
I had a blog, comment system, and forum working before, but that code was such a terrible mess (yes, worse than this) that I decided not to use it. Also, my previous code used RudeCGI which was difficult to cross-platform. I'm trying to write (at least) a platform-portable .cpp file allowing me to write programs that read form input from a browser and access the content of the form's inputs using form.find("inputname"), or better yet form["inputname"].
I'm finding it truer and truer that coding is small% writing and majority% debugging/maintenance.
Any suggestions, assistance, or tips on my rusty C++ coding style are appreciated.
That is quite messy.
Better move to perl/php
That is quite messy.
Better move to perl/php
Yes, it's quite messy because you used TABs for indentation instead of 3-4 SPACEs. Outside of that the formatting seems fine.
And moving to Perl or PHP is one of the most asinine suggestions I've ever heard. Why would someone want to develop a C++ program in Perl or PHP?
Well despite their C++ skills, there's already one glaring security hole just waiting to be exploited.
Writing internet facing applications needs a lot more security awareness than this. Using languages which protect you from dumb stuff like unguarded reads into finite length char arrays for example.
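The specific hole here is the `cin.read(buffer, length)` call in `init()`: `length` comes straight from the client-supplied `CONTENT_LENGTH` variable and is never checked against the 1,024,000-byte `buffer`, so a request can declare a larger length and overrun it. A guarded version of that read, sketched in Python for brevity (the names and the cap are illustrative, not from this thread):

```python
import os
import sys

MAX_BODY = 1024 * 1024  # cap mirroring the fixed-size buffer in the post

def safe_length(declared, cap=MAX_BODY):
    """Return the declared CONTENT_LENGTH only if it is a positive
    integer no larger than the cap; otherwise return 0."""
    try:
        n = int(declared)
    except (TypeError, ValueError):
        return 0
    return n if 0 < n <= cap else 0

def read_post_body():
    """Read a CGI POST body without trusting the client-supplied
    length to fit a fixed-size buffer."""
    n = safe_length(os.environ.get("CONTENT_LENGTH"))
    return sys.stdin.buffer.read(n) if n else b""
```

In the C++ original the same effect is achieved by clamping `length` against the buffer's size (or better, reading into a `std::string`) before calling `read`.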
And moving to Perl or PHP is one of the most asinine suggestions I've ever heard. Why would someone want to develop a C++ program in Perl or PHP?
CGI stuff can be coded more quickly in perl and php
CGI stuff can be coded more quickly in perl and php
Immaterial. You develop programs in the language
1) your skillset lies in
2) your boss wants it developed in
3) you decide is necessary to develop the code
I agree PHP and Perl may be better, but to suggest it to someone that is developing a system in another language is IMO a rude suggestion. It's the same as suggesting a student use vectors when they are learning (or barely understand) arrays.
Fixed Registration does not ask for a password even if it is left empty
planetzu
Member
Hello, if a user leaves the password fields empty while registering, the system does not warn him of this. The user gets registered instead and then has no way to log in, as he does not have a password. Is this a bug?
Is there a way to set the password fields as 'required'?
Jeremy
Well-known member
Are they logging in or registering via Facebook? Can you replicate this with add-ons disabled?
planetzu
Member
Regular registration. I tried doing the same here on the xenforo forum and was able to replicate similar behavior.
ENF
Well-known member
Confirmed, standard registration form doesn't require a password to be entered.
After registering, a confirmation email is sent and this error is also displayed:
"Please enter a whole number"
If the user comes back to try and login, XF tries to treat it as a new user. Thus, the user cannot login.
To fix this, a user can reset their password and all is good. (but the initial reg. form should probably be fixed)
What is the Difference Between a Rootkit and a Virus?
Rootkits and viruses both pose threats to computers, but they operate differently. A virus replicates itself and spreads, corrupting files along the way. A rootkit, however, burrows deep into the system to hide and grant unauthorized access. Understanding their distinct behaviors is crucial for effective cybersecurity.
G. Wiesen
While a rootkit and a virus are both types of malicious software, or malware, they are typically used to achieve different purposes in a computer attack. A rootkit typically is installed onto a computer system to either allow an unauthorized user to continue to gain access to that system or to hide the presence and activities of other types of malware. Viruses, on the other hand, are types of malware that typically are designed to attack a computer system in a very specific way and to achieve a particular goal.
Despite the fact that a rootkit and a virus are both forms of malware, they are utilized to achieve different tasks. A rootkit is a malicious program that can be installed onto a computer, at various levels within the operating system (OS), and then mask other activities. This type of program typically infects the “root” of the OS on a computer, hence the name, allowing for other activities to occur with that system that are then hidden by the rootkit. A rootkit is often used to create a backdoor entry point into a computer system for an unauthorized user to use to gain access to that system in the future or may be used to hide an infection by a virus or other type of malware.
A rootkit and a virus are both types of malicious software.
The major difference between a rootkit and a virus is that a virus usually does not work to hide the activities of other programs or to allow access to a system. A virus is typically developed to achieve a certain effect, often by launching an attack upon a particular computer system. Though a virus can lay fairly dormant on a computer system, and remain hidden, until a particular event activates the virus, it will usually be created to launch a very specific attack on the system it infects.
Not all computer rootkits are dangerous.
There are also some major differences in how a rootkit and a virus can be removed from a computer system or OS. Viruses can often be found and removed through the use of an antivirus program, though very new viruses may elude detection for some time. A rootkit, however, can be very difficult to find, usually involving very elaborate security procedures, and nearly impossible to remove. The hard drive on a computer may need to be completely erased and the OS reinstalled to eliminate a rootkit from a computer. Ultimately, however, both a rootkit and a virus can be very destructive to a computer and efforts should be made by every computer user to avoid any type of malware.
German Salutation - Velocity Script
Gerard_Donnell4
Level 10
German Salutation - Velocity Script
I am completely new to Velocity scripting and have been tasked with setting up a German salutation that auto-populates. Can you tell me if this would work, or what is wrong with it? My email will basically have a {{My.Salutation}} token at the top that will pipe in the salutation based on the script token below. Thanks in advance for any help.
#if (${lead.Salutation} == "")
Sehr geehrte/r Frau/Herr ${lead.last_Name}
#elseif (${lead.Salutation} == "Herr")
Sehr geehrter Herr ${lead.last_Name}
#elseif (${lead.Salutation} == "Frau")
Sehr geehrte Frau ${lead.last_Name}
#elseif (${lead.Salutation} == "Herr Dr.")
Sehr geehrter Herr Dr. ${lead.last_Name}
#elseif (${lead.Salutation} == "Frau Dr.")
Sehr geehrte Frau Dr. ${lead.last_Name}
#elseif (${lead.Salutation} == "Herr Prof.")
Sehr geehrter Herr Prof. ${lead.last_Name}
#elseif (${lead.Salutation} == "Frau Prof.")
Sehr geehrte Frau Prof. ${lead.last_Name}
#elseif (${lead.Salutation} == "Herr Prof. Dr.")
Sehr geehrter Herr Prof. Dr. ${lead.last_Name}
#elseif (${lead.Salutation} == "Frau Prof. Dr.")
Sehr geehrte Frau Prof. Dr. ${lead.last_Name}
#end
16 REPLIES
Gerard_Donnell4
Level 10
Re: German Salutation - Velocity Script
Sanford Whiteman, I'm hoping you can help me solve this.
Thanks,
Gerard
Casey_Grimes
Level 10
Re: German Salutation - Velocity Script
So, just taking a cursory look at this, this script could be incredibly simplified to:
#set ($greetCheck = ${lead.Salutation})
#if ($greetCheck.contains('Herr'))
Sehr geehrter $greetCheck ${lead.LastName}
#elseif ($greetCheck.contains('Frau'))
Sehr geehrte $greetCheck ${lead.LastName}
#else
Sehr geehrte/r Frau/Herr ${lead.LastName}
#end
which should work just fine for you.
SanfordWhiteman
Level 10 - Community Moderator
Re: German Salutation - Velocity Script
It's not really a contains, though. It's like a startsWith\b, which needs a regex. Plus unknown Salutations aren't necessarily supposed to be output.
SanfordWhiteman
Level 10 - Community Moderator
Re: German Salutation - Velocity Script
You want to use a dictionary object rather than a lot of conditions. Also, it looks like there's a lot of repetition here:
• the salutation is present in the condition and then also printed exactly as is in the output
• the difference is based on the first word being Herr vs Frau, not on the full salutation
So following the DRY (Don't Repeat Yourself) maxim, let's check only if the field starts with your interesting words, rather than a full match:
#set( $greetingsBySalutationStart = {
"Herr" : "Sehr geehrter",
"Frau" : "Sehr geehrte",
"$" : "Sehr geehrte/r Frau/Herr"
})
## ---- NO NEED TO TOUCH ANYTHING BELOW THIS LINE! ----
#set( $greeting = $greetingsBySalutationStart["$"] )
#foreach( $startPattern in $greetingsBySalutationStart.keySet() )
#if( $lead.Salutation.matches("^${startPattern}\b.*") )
#set( $greeting = "${greetingsBySalutationStart[$startPattern]} ${lead.Salutation}" )
#end
#end
${greeting} ${lead.LastName}
Notes:
• I used $lead.LastName because that's the Velocity name for Last Name in my instances. Does yours really use $lead.last_Name?
• the index ["$"] is the default (you probably figured that out)
Gerard_Donnell4
Level 10
Re: German Salutation - Velocity Script
Hi Sanford Whiteman
So does the start of this script check if the Salutation contains "Herr" or "Frau", or if it equals "Herr" or "Frau"? The reason I'm asking is that our German forms actually have all 8 of the salutations mentioned above.
Thanks,
Gerard
#set( $greetingsBySalutationStart = {
"Herr" : "Sehr geehrter",
"Frau" : "Sehr geehrte",
"$" : "Sehr geehrte/r Frau/Herr"
})
SanfordWhiteman
Level 10 - Community Moderator
Re: German Salutation - Velocity Script
The object at the top (the dictionary) is of *first words* in the salutation.
It's used in the code with a regex (pattern match) fixed to the start of the string. So "Herr" will match just "Herr," and also "Herr Prof," "Herr Dr." or "Herr Anything." But it won't (purposely) match "Herry" or "John Herr" -- those would use the default greeting.
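Sanford's prefix-plus-word-boundary logic can be demonstrated outside Velocity. A small Python sketch of the same idea (the dictionary and default mirror his script; `^Herr\b` is his `matches()` pattern with the trailing `.*` dropped, since `re.match` only anchors at the start of the string):

```python
import re

# Map of salutation first words to greetings, mirroring the Velocity dictionary.
greetings = {
    "Herr": "Sehr geehrter",
    "Frau": "Sehr geehrte",
}
DEFAULT = "Sehr geehrte/r Frau/Herr"

def greet(salutation: str, last_name: str) -> str:
    """Pick a greeting by matching the salutation's first word at the start."""
    for start, greeting in greetings.items():
        # ^Herr\b matches "Herr" and "Herr Dr.", but not "Herry" or "John Herr".
        if re.match(rf"^{start}\b", salutation):
            return f"{greeting} {salutation} {last_name}"
    return f"{DEFAULT} {last_name}"

print(greet("Herr Dr.", "Lastman"))  # Sehr geehrter Herr Dr. Lastman
print(greet("Herry", "Smith"))       # Sehr geehrte/r Frau/Herr Smith
```

The word boundary `\b` is what keeps "Herry" from matching while still accepting every "Herr …" variant without listing all eight salutations.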
Gerard_Donnell4
Level 10
Re: German Salutation - Velocity Script
Hi Sanford Whiteman,
For some reason I couldn't get that to work; it kept defaulting to the general salutation. I managed to get the version below to work, based on the documentation. It is certainly not nice looking and doesn't follow DRY principles, but it works for the time being.
##check if the Salutation is Herr
#if(${lead.Salutation} == "Herr")
##if the Salutation is Herr, use the salutation 'Sehr geehrter Herr'
#set($greeting = "Sehr geehrter Herr ${lead.LastName},")
##check if the Salutation is Frau
#elseif(${lead.Salutation} == "Frau")
##if female, use the salutation 'Sehr geehrte Frau'
#set($greeting = "Sehr geehrte Frau ${lead.LastName},")
##check if the Salutation is Herr Dr.
#elseif(${lead.Salutation} == "Herr Dr.")
##if the Salutation is Herr Dr., use the salutation 'Sehr geehrter Herr'
#set($greeting = "Sehr geehrter Herr Dr. ${lead.LastName},")
##check if the Salutation is Frau Dr.
#elseif(${lead.Salutation} == "Frau Dr.")
##if Frau Dr., use the salutation 'Sehr geehrte Frau Dr.'
#set($greeting = "Sehr geehrte Frau Dr. ${lead.LastName},")
##check if the Salutation is Herr Prof.
#elseif(${lead.Salutation} == "Herr Prof.")
##if Herr Prof., use the salutation 'Sehr geehrter Herr Prof.'
#set($greeting = "Sehr geehrter Herr Prof. ${lead.LastName},")
##check if the Salutation is Frau Prof.
#elseif(${lead.Salutation} == "Frau Prof.")
##if Frau Prof., use the salutation 'Sehr geehrte Frau Prof.'
#set($greeting = "Sehr geehrte Frau Prof. ${lead.LastName},")
##check if the Salutation is Herr Prof. Dr.
#elseif(${lead.Salutation} == "Herr Prof. Dr.")
##if Herr Prof. Dr., use the salutation 'Sehr geehrter Herr Prof. Dr.'
#set($greeting = "Sehr geehrter Herr Prof. Dr. ${lead.LastName},")
##check if the Salutation is Frau Prof. Dr.
#elseif(${lead.Salutation} == "Frau Prof. Dr.")
##if Frau Prof. Dr., use the salutation 'Sehr geehrte Frau Prof. Dr.'
#set($greeting = "Sehr geehrte Frau Prof. Dr. ${lead.LastName},")
#else
##otherwise, use this Salutation
#set($greeting = "Sehr geehrte/r Frau/Herr ${lead.FirstName},")
#end
##print the greeting and some content
${greeting}
SanfordWhiteman
Level 10 - Community Moderator
Re: German Salutation - Velocity Script
My code was tested thoroughly against representative test values... can't really comment on this monstrosity but I hope you will give another try for cleaner code later.
SanfordWhiteman
Level 10 - Community Moderator
Re: German Salutation - Velocity Script
Run this test:
#set( $testLead1 = {
"LastName" : "Namor",
"Salutation" : "Herr"
})
#set( $greetingsBySalutationStart = {
"Herr" : "Sehr geehrter",
"Frau" : "Sehr geehrte",
"$" : "Sehr geehrte/r Frau/Herr"
})
#set( $greeting = $greetingsBySalutationStart["$"] )
#foreach( $startPattern in $greetingsBySalutationStart.keySet() )
#if( $testLead1.Salutation.matches("^${startPattern}\b.*") )
#set( $greeting = "${greetingsBySalutationStart[$startPattern]} ${testLead1.Salutation}" )
#end
#end
${greeting} ${testLead1.LastName}
#set( $testLead2 = {
"LastName" : "Lastman",
"Salutation" : "Herr Dr."
})
#set( $greetingsBySalutationStart = {
"Herr" : "Sehr geehrter",
"Frau" : "Sehr geehrte",
"$" : "Sehr geehrte/r Frau/Herr"
})
#set( $greeting = $greetingsBySalutationStart["$"] )
#foreach( $startPattern in $greetingsBySalutationStart.keySet() )
#if( $testLead2.Salutation.matches("^${startPattern}\b.*") )
#set( $greeting = "${greetingsBySalutationStart[$startPattern]} ${testLead2.Salutation}" )
#end
#end
${greeting} ${testLead2.LastName}
#set( $testLead3 = {
"LastName" : "Whiteman",
"Salutation" : "Friar"
})
#set( $greetingsBySalutationStart = {
"Herr" : "Sehr geehrter",
"Frau" : "Sehr geehrte",
"$" : "Sehr geehrte/r Frau/Herr"
})
#set( $greeting = $greetingsBySalutationStart["$"] )
#foreach( $startPattern in $greetingsBySalutationStart.keySet() )
#if( $testLead3.Salutation.matches("^${startPattern}\b.*") )
#set( $greeting = "${greetingsBySalutationStart[$startPattern]} ${testLead3.Salutation}" )
#end
#end
${greeting} ${testLead3.LastName}
#set( $testLead4 = {
"LastName" : "Absent",
"Salutation" : ""
})
#set( $greetingsBySalutationStart = {
"Herr" : "Sehr geehrter",
"Frau" : "Sehr geehrte",
"$" : "Sehr geehrte/r Frau/Herr"
})
#set( $greeting = $greetingsBySalutationStart["$"] )
#foreach( $startPattern in $greetingsBySalutationStart.keySet() )
#if( $testLead4.Salutation.matches("^${startPattern}\b.*") )
#set( $greeting = "${greetingsBySalutationStart[$startPattern]} ${testLead4.Salutation}" )
#end
#end
${greeting} ${testLead4.LastName}
Expected (and observed on my primary instance) output:
Sehr geehrter Herr Namor
Sehr geehrter Herr Dr. Lastman
Sehr geehrte/r Frau/Herr Whiteman
Sehr geehrte/r Frau/Herr Absent
Boost C++ Libraries
...one of the most highly regarded and expertly designed C++ library projects in the world. Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
boost/stacktrace/detail/safe_dump_posix.ipp
// Copyright Antony Polukhin, 2016-2019.
//
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_STACKTRACE_DETAIL_SAFE_DUMP_POSIX_IPP
#define BOOST_STACKTRACE_DETAIL_SAFE_DUMP_POSIX_IPP
#include <boost/config.hpp>
#ifdef BOOST_HAS_PRAGMA_ONCE
# pragma once
#endif
#include <boost/stacktrace/safe_dump_to.hpp>
#include <unistd.h> // ::write
#include <fcntl.h> // ::open
#include <sys/stat.h> // S_IWUSR and friends
namespace boost { namespace stacktrace { namespace detail {
std::size_t dump(int fd, const native_frame_ptr_t* frames, std::size_t frames_count) BOOST_NOEXCEPT {
// We do not retry, because this function is typically called from a signal handler, so:
// * it's too scary to continue in case of EINTR
// * EAGAIN or EWOULDBLOCK may occur only if O_NONBLOCK is set for fd,
//   so it seems that the user does not want to block
if (::write(fd, frames, sizeof(native_frame_ptr_t) * frames_count) == -1) {
return 0;
}
return frames_count;
}
std::size_t dump(const char* file, const native_frame_ptr_t* frames, std::size_t frames_count) BOOST_NOEXCEPT {
const int fd = ::open(
file,
O_CREAT | O_WRONLY | O_TRUNC,
#if defined(S_IWUSR) && defined(S_IRUSR) // Workarounds for some Android OSes
S_IWUSR | S_IRUSR
#elif defined(S_IWRITE) && defined(S_IREAD)
S_IWRITE | S_IREAD
#else
0
#endif
);
if (fd == -1) {
return 0;
}
const std::size_t size = boost::stacktrace::detail::dump(fd, frames, frames_count);
::close(fd);
return size;
}
}}} // namespace boost::stacktrace::detail
#endif // BOOST_STACKTRACE_DETAIL_SAFE_DUMP_POSIX_IPP
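The open/write/close flow above maps directly onto os-level calls in other languages. A rough Python analogue of the file-based `dump` (illustrative only, not part of Boost; `0o600` stands in for `S_IWUSR | S_IRUSR`):

```python
import os
import tempfile

def dump_frames(path: str, frames: bytes) -> int:
    """Write raw frame bytes to a file, mirroring the Boost safe_dump flow:
    open with O_CREAT|O_WRONLY|O_TRUNC, write once, close, report bytes written."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o600)
    except OSError:
        return 0  # the C++ version also returns 0 when ::open fails
    try:
        return os.write(fd, frames)
    except OSError:
        return 0
    finally:
        os.close(fd)  # like ::close(fd) in the C++ code

demo = os.path.join(tempfile.mkdtemp(), "frames.bin")
print(dump_frames(demo, b"\xde\xad\xbe\xef"))  # 4
```

As in the original, there is deliberately no retry loop: a single write either succeeds or the function reports failure with 0.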
Setting up a mail server: Postfix + Dovecot + Roundcube + Active Directory
Monday, 23 October 2017, 11:43
Sooner or later, any self-respecting company will need its own mail service. You can of course use the services provided by your ISP, but that comes with a number of limitations, from the number of user mailboxes to the tiny size of those mailboxes, and all of it is reflected in the price the service owner has to pay. So it is better to run your own mail server. Since the country is in the middle of an active "import substitution" drive, using the most advanced (in my humble opinion) mail system, MS Exchange, is out of the question. This article covers installing and configuring the Postfix mail system, which ships with various operating systems approved for use in government institutions. Besides Postfix itself, we will set up all sorts of building blocks around it, to bring this mail system at least somewhat closer to the convenience MS Exchange offers. All work is done on ROSA Linux Cobalt (a.k.a. CentOS 7, a.k.a. RedHat 7).
A few words about what will be installed:
1. All mail services will be installed on a single server.
2. The server will serve the internal domain deployed in the previous articles.
3. Add-ons and extra services will be used: Dovecot 2 + MySQL + PostfixAdmin + Postgrey + Postscreen + ClamAV + DKIM + Sieve + RoundCube (+ RoundCube plugins) + Active Directory.
4. The mail system will be linked to the Active Directory user base.
5. Hostname of the mail server: mail
6. Domain served by this server: dest.loc
7. Mail subdomain: mail.dest.loc
8. Name and mail address of the master user: [email address hidden by spam protection]
9. Mailbox login for auto-BCC of unknown senders in local domains: [email address hidden by spam protection]
10. Domain IP address: 192.168.20.100
11. Mail server IP address: 192.168.20.104
12. GID, group for virtual mailbox users: 5000 vmail
13. UID, system user for virtual mailboxes: 5000 vmail
14. Base MailDir path: "/opt/mail/vmail/". Created directories follow the format "/opt/mail/vmail/dest.loc/_user_/"
15. Global ACL directory: "/etc/dovecot/acl/%d"
16. Directory that will hold a "sieve" subdirectory with global and per-user Sieve scripts: "/opt/dovecot"
17. Directory for the Dovecot "virtual" plugin's virtual folders: "/opt/dovecot/virtual"
18. OpenDKIM port: 8891
19. ClamAV-Milter port: 7357
20. Postgrey port: 10023
21. ManageSieve port: 4190
22. SQL query file for virtual mailboxes: "/etc/postfix/mysql_virtual_maps.cf"
23. SQL query file for domains: "/etc/postfix/mysql_virtual_domains.cf"
24. SQL query file for the mailbox that a requested address is an alias of: "/etc/postfix/mysql_virtual_alias_maps.cf"
25. SQL query file for the domain mailbox whose domain the requested mailbox's domain is an alias of: "/etc/postfix/mysql_virtual_alias_domain_maps.cf"
26. SQL query file for auto-BCC to mailboxes of registered senders: "/etc/postfix/mysql_bcc_mailbox_maps.cf"
27. SQL query file for the auto-BCC mailbox for unregistered senders of local domains: "/etc/postfix/mysql_bcc_domain_maps.cf"
28. Postfix SSL key: "/etc/postfix/ssl.key.pem"
29. Postfix SSL certificate: "/etc/postfix/ssl.cert.pem"
30. Dovecot SSL key: "/etc/dovecot/ssl.key.pem"
31. Dovecot SSL certificate: "/etc/dovecot/ssl.cert.pem"
32. Root directory of the web files for HTTPS Apache2: "/opt/www/html/"
33. PostfixAdmin MySQL database name: "POSTFIXADMIN_BASE_"
34. PostfixAdmin MySQL database user: "POSTADMIN_USER_"
35. PostfixAdmin MySQL database password: "_POSTFIXADMIN_SQL_PASSWORD_"
36. Roundcube MySQL database name: "roundcubemail"
37. Roundcube MySQL database user: "roundcube"
38. Roundcube MySQL database password: "_ROUNDCUBE_SQL_PASSWORD_"
Preliminary server setup.
1. Install the OS in minimal-install mode.
2. Disable SELinux.
3. Disable IPv6 if it is not used on the company network.
4. Add the server's IP address and name (including the FQDN) to /etc/hosts.
5. Configure time synchronization with the domain controller (or any other time source).
6. The DNS server must contain the appropriate A, MX and PTR records.
7. For DKIM and SPF to work, the corresponding TXT records must be added to DNS. For example:
For SPF:
dest.loc. IN TXT "v=spf1 a mx 192.168.20.104 ~all"
The dot after the domain name is mandatory.
For DKIM:
mail._domainkey IN TXT "v=DKIM1;k=rsa;p=_KEY_GENERATED_BY_THE_APPROPRIATE_SOFTWARE_"
To define the domain's DKIM usage policy, add one more TXT record to DNS:
_adsp._domainkey IN TXT "dkim=all"
Where:
unknown - sending unsigned messages is allowed (the default)
all - sending unsigned messages is not allowed
discardable - all unsigned messages should be discarded on the receiving side
Once all of this is done, you can use one of the many sites that check these settings for free, including by sending a test email.
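The SPF record is a plain space-separated list of mechanisms, so a basic well-formedness check is easy to script. A minimal Python sketch (not a full RFC 7208 evaluator; note that RFC 7208 expects literal addresses behind an `ip4:` prefix, which is why the example record below uses one):

```python
def check_spf(record: str) -> list[str]:
    """Very rough SPF TXT sanity check: verify the version tag and
    return the remaining mechanisms (a, mx, ip4:..., ~all, ...)."""
    parts = record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("SPF record must start with v=spf1")
    return parts[1:]

mechanisms = check_spf("v=spf1 a mx ip4:192.168.20.104 ~all")
print(mechanisms)  # ['a', 'mx', 'ip4:192.168.20.104', '~all']
```

A real deployment should rely on one of the online SPF/DKIM validators mentioned above (or a proper SPF library) rather than a string check like this.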
8. We assume MySQL is already installed and configured on the server. The only recommendation for this setup is to restrict the addresses allowed to connect to it, which is done by adding one line to the my.cnf configuration file:
......
bind-address = 127.0.0.1
......
Then restart the MySQL server.
Installing PostfixAdmin
It is better to install PostfixAdmin before Postfix, so that the correct names of PostfixAdmin's MySQL tables can be used in the Postfix configs right away.
We assume that "apache2", "php5" and "mysql-server" are already installed; their installation is not covered here.
A database and a user for PostfixAdmin need to be created in MySQL.
Install the required packages:
yum install php-imap php-mbstring
Fetch the source. At the time of writing, the latest version was 3.1:
wget https://sourceforge.net/projects/postfixadmin/files/postfixadmin/postfixadmin-3.1/postfixadmin-3.1.tar.gz
Unpack it and place it in "/opt/www/html/postfixadmin/".
Create the database and the database user for postfixadmin:
mysql -u root -p
create database postfixadmin_base default character set utf8 default collate utf8_general_ci;
grant all on postfixadmin_base.* to 'postadmin_user'@'localhost' identified by '_POSTFIXADMIN_SQL_PASSWORD_';
Open /opt/www/html/postfixadmin/conf.inc.php and make the following changes:
$CONF['configured'] = true;
$CONF['default_language'] = 'ru';
$CONF['database_user'] = 'postadmin_user';
$CONF['database_password'] = '_POSTFIXADMIN_SQL_PASSWORD_';
$CONF['database_name'] = 'postfixadmin_base';
Create an additional directory:
mkdir /opt/www/html/postfixadmin/templates_c
chown -R apache:apache /opt/www/html/postfixadmin/templates_c
Then run the installer in a browser: https://mail.dest.loc/postfixadmin/setup.php
The installer will check that the settings are correct and create the required database structure.
The page asks for a setup password. After you enter it, the page reloads and shows this message:
If you want to use the password you entered as setup password, edit config.inc.php or config.local.php and set
$CONF['setup_password'] = 'dsvhajh34525jbhwjt534tbsa887sfdbkj8sfljhlsfdg8shfl898wkj';
That is, you need to put this password hash into the corresponding entry in conf.inc.php.
Once the configuration file is updated, enter the credentials of the user [email address hidden by spam protection] and click "Add Admin".
After the administrator is added, you can open https://mail.dest.loc/postfixadmin/ and log in with the newly created administrator's credentials.
Create a postfixadmin-log directory and make the user the web server runs as its owner:
mkdir /opt/www/html/postfixadmin/postfixadmin-log
chown -R apache:apache /opt/www/html/postfixadmin/postfixadmin-log
Create a .htaccess file in that directory and configure directory listing sorted by date:
Options All Indexes
IndexOptions FancyIndexing FoldersFirst
#IndexOrderDefault Ascending Date
IndexOrderDefault Descending Date
Autoresponder (Vacation)
This feature is not implemented very well in PostfixAdmin. We will use the analogous Dovecot plugin instead (its configuration is described below).
Scripts
Not all of these scripts are needed, but having them ready and configured will make future changes quicker.
The first two scripts just write logs. The last one deletes the user from RoundCube and removes the user's MailDir folder.
Create the script addmail.sh, executed automatically after a new mailbox is added via the PostfixAdmin web interface:
touch /opt/www/html/postfixadmin/addmail.sh
chown apache:apache /opt/www/html/postfixadmin/addmail.sh
chmod 0700 /opt/www/html/postfixadmin/addmail.sh
Script contents:
#!/bin/bash
daten=`date -R`
printf "$daten \n CREATE mailbox: $1\n Domain: $2\n MailDir: $3\n Quota: $4 B\n\n" >> /opt/www/html/postfixadmin/postfixadmin-log/addmailbox.log
# if [[ -d /opt/mail/vmail/incron_mailuser_monitor/ ]]
# then
# userdomain=(${1//@/ })
# touch /opt/mail/vmail/incron_mailuser_monitor/${userdomain[1]}@${userdomain[0]}
# fi
The commented-out lines at the end will be uncommented later, when we set up automatic incron monitoring of each mailbox's "Sent" folder.
Create the script editmail.sh, executed after a mailbox is edited via the PostfixAdmin web interface:
touch /opt/www/html/postfixadmin/editmail.sh
chown apache:apache /opt/www/html/postfixadmin/editmail.sh
chmod 0700 /opt/www/html/postfixadmin/editmail.sh
Script contents:
#!/bin/bash
daten=`date -R`
printf "$daten \n EDIT mailbox: $1\n Domain: $2\n MailDir: $3\n Quota: $4 B\n\n" >> /opt/www/html/postfixadmin/postfixadmin-log/editmailbox.log
Create the script delmail.sh, executed after a mailbox is deleted via the PostfixAdmin web interface (it removes the user from RoundCube and the folders* under the domain's mail directory).
The mail folders themselves will be removed by incron (since their owner is "vmail", not "apache")!
touch /opt/www/html/postfixadmin/delmail.sh
chown apache:apache /opt/www/html/postfixadmin/delmail.sh
chmod 0700 /opt/www/html/postfixadmin/delmail.sh
Script contents:
#!/bin/bash
daten=`date -R`
#del RoundCube user:
host="127.0.0.1"
user="_ROUNDCUBE_SQL_USER_"
pass="_ROUNDCUBE_SQL_PASSWORD_"
db="_ROUNDCUBE_SQL_BASE_"
sql="SELECT user_id FROM users WHERE username = '$1'"
RES=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf "$daten \n DELETE mailbox: $1\n Domain: $2\n MailDir: $3\n Quota: $4 B\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
printf " >>> RoundCube SQL-query START \n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
for i in $RES; do
if [ "$i" != "user_id" ]; then
printf " ! FOUND RECORD: USER_ID = $i ! ( $1 )\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Find user_id:
printf " SQL: $sql \n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Delete user from Cache:
sql="DELETE FROM cache WHERE user_id = $i"
RESD=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf " SQL: $sql\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Delete user from ContactGroupMembers:
sql="DELETE FROM contactgroupmembers WHERE contactgroup_id IN (SELECT contactgroup_id FROM contactgroups WHERE user_id = $i)"
RESD=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf " SQL: $sql\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Delete user from ContactGroups:
sql="DELETE FROM contactgroups WHERE user_id = $i"
RESD=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf " SQL: $sql\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Delete user from Contacts:
sql="DELETE FROM contacts WHERE user_id = $i"
RESD=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf " SQL: $sql\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Delete user from Identities:
sql="DELETE FROM identities WHERE user_id = $i"
RESD=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf " SQL: $sql\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Delete user from Cache Messages
sql="DELETE FROM cache_messages WHERE user_id = $i"
RESD=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf " SQL: $sql\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Delete user from Cache Thread
sql="DELETE FROM cache_thread WHERE user_id = $i"
RESD=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf " SQL: $sql\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# Delete user from Users:
sql="DELETE FROM users WHERE user_id = $i"
RESD=`mysql --host=$host --port=3306 --user=$user --password=$pass --database=$db --execute="$sql"`
printf " SQL: $sql\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
fi
done
printf " >>> RoundCube SQL-query END \n\n" >> /opt/www/html/postfixadmin/postfixadmin-log/delmailbox.log
# if [[ -d /opt/mail/vmail/incron_mailuser_monitor/ ]]
# then
# userdomain=(${1//@/ })
# if [[ -f /opt/mail/vmail/incron_mailuser_monitor/${userdomain[1]}@${userdomain[0]} ]]
# then
# rm /opt/mail/vmail/incron_mailuser_monitor/${userdomain[1]}@${userdomain[0]}
# fi
# fi
The commented-out lines at the end will be uncommented later, when we set up automatic incron monitoring of each mailbox's "Sent" folder.
You may need to adjust the PostfixAdmin table names in the scripts.
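The per-table DELETE calls in delmail.sh repeat the same mysql invocation eight times. The same cleanup can be expressed as a loop over table names; a sketch in Python against an in-memory SQLite database (the table names mirror a subset of Roundcube's schema and are illustrative; the real script also filters contactgroupmembers through a subquery):

```python
import sqlite3

# Tables that reference a Roundcube user_id, as in delmail.sh above (subset).
USER_TABLES = ["cache", "contactgroups", "contacts", "identities", "users"]

def delete_roundcube_user(conn: sqlite3.Connection, user_id: int) -> None:
    """Remove all rows referencing user_id, one parameterized query per table."""
    cur = conn.cursor()
    for table in USER_TABLES:
        cur.execute(f"DELETE FROM {table} WHERE user_id = ?", (user_id,))
    conn.commit()

# Demo against an in-memory database with one row per table.
conn = sqlite3.connect(":memory:")
for t in USER_TABLES:
    conn.execute(f"CREATE TABLE {t} (user_id INTEGER)")
    conn.execute(f"INSERT INTO {t} VALUES (7)")
delete_roundcube_user(conn, 7)
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 0
```

Parameterized queries also avoid quoting problems that the shell script's string-interpolated SQL is exposed to.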
Postfixadmin invokes the scripts roughly like this:
/opt/www/html/postfixadmin/scripts/editmail.sh '[email address hidden by spam protection]' 'dest.loc' 'dest.loc/test/' '1234567890'
If you try the scripts from a console, remember to switch user (including for the log files), otherwise you will get "permission denied". If something goes wrong, check the Apache2 error.log.
Installing Postfix and Dovecot
We assume openssl and mysql are already installed on the server; their installation and configuration are not covered in this article.
Before installing Postfix, remove any previously installed mail-handling software (sendmail and the like) from the system.
Install the required packages:
yum install postfix dovecot dovecot-mysql dovecot-pigeonhole
Verify the installation:
postconf -a
cyrus
dovecot
Postfix configuration files
The postfix user and group were created during package installation; our mail server will run as this user.
We also need to create one more user and group (for the virtual mailboxes):
groupadd -r -g 5000 vmail
useradd -r -g vmail -u 5000 vmail -d /opt/mail/vmail -m
Don't forget to back up the main configuration files:
cp /etc/postfix/main.cf /etc/postfix/main.cf.orig
cp /etc/postfix/master.cf /etc/postfix/master.cf.orig
Now we can start configuring.
master.cf should look roughly like this:
#
# Postfix master process configuration file. For details on the format
# of the file, see the master(5) manual page (command: "man 5 master").
#
#
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
#smtp inet n - - - - smtpd
# +++ Postscreen +++
smtpd pass - - n - - smtpd
smtp inet n - n - 1 postscreen
# -o soft_bounce=yes
tlsproxy unix - - n - 0 tlsproxy
#dnsblog unix - - n - 0 dnsblog
#
# Remap 25 port to 587 port
submission inet n - - - - smtpd
-o smtpd_tls_security_level=encrypt
-o smtpd_sasl_auth_enable=yes
-o smtpd_sasl_type=dovecot
-o smtpd_sasl_path=private/auth
-o smtpd_sasl_security_options=noanonymous
-o smtpd_sender_login_maps=mysql:/etc/postfix/mysql_virtual_maps.cf
-o smtpd_client_restrictions=permit_sasl_authenticated,reject
-o smtpd_sender_restrictions=reject_sender_login_mismatch
-o smtpd_recipient_restrictions=reject_unverified_recipient,reject_unknown_recipient_domain,reject_non_fqdn_recipient,permit_sasl_authenticated,reject
#smtps inet n - - - - smtpd
#628 inet n - - - - qmqpd
pickup fifo n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr fifo n - n 300 1 qmgr
#qmgr fifo n - - 300 1 oqmgr
tlsmgr unix - - - 1000? 1 tlsmgr
rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush
proxymap unix - - n - - proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp
-o fallback_relay=
-o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq unix n - - - - showq
error unix - - - - - error
discard unix - - - - - discard
#local unix - n n - - local
#virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent. See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# See the Postfix UUCP_README file for configuration details.
#
uucp unix - n n - - pipe
flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}
# python-postfix-policyd-spf
policyd-spf unix - n n - 0 spawn
user=nobody argv=/usr/bin/python /usr/bin/policyd-spf
retry unix - - - - - error
Bring the main.cf file roughly to the following form:
# +++ Debug:
#debug_peer_level = 2
#debug_peer_list = 127.0.0.1
#debug_peer_list = 127.0.0.1, dest.loc
#syslog_facility = mail
smtpd_banner = $myhostname ESMTP
# +
# smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
#smtpd_banner = $myhostname ESMTP ($mail_version)
biff = no
# user@host => user@host.$mydomain (default=yes)
append_dot_mydomain = no
# + (help doc postfix)
#readme_directory = no
# + mydomain
mydomain = dest.loc
# + myorigin (add after '@' for output)
#myorigin = /etc/mailname
#myorigin = mail.dest.loc
myorigin = $mydomain
myhostname = mail.dest.loc
inet_protocols = ipv4
# !!! Do not use $mydomain for mydestination !!!
#mydestination = $myhostname, localhost.$mydomain
mydestination = $myhostname, localhost, localhost.$mydomain
#mynetworks_style = subnet
mynetworks_style = host
# + mynetworks
# Trusted networks
# If this parameter apply - "mynetworks_style" ignore
mynetworks = 127.0.0.1/32
#inet_interfaces = 1.2.3.4
# +
#inet_interfaces = all
inet_interfaces = 192.168.20.104, 127.0.0.1
# Replace address "user@host" => "user@$myorigin " (default: yes)
#append_at_myorigin = no
#masquerade_domains =
#masquerade_exceptions = root
#masquerade_classes = envelope_sender, header_sender, header_recipient
# +
# Authorisation - dovecot:
# smtpd_sasl_type = dovecot
#smtpd_sasl_path = private/auth
#smtpd_sasl_auth_enable = yes
# + relayhost (default - no relay host)
#relayhost =
# +
# BCC
# !!! Move to master.cf !!!
# recipient_bcc_maps = type:table
#recipient_bcc_maps = hash:/etc/postfix/recepient_bcc
#always_bcc = [email hidden]
#receive_override_options = no_address_mappings
#sender_bcc_maps = mysql:/etc/postfix/mysql_bcc_mailbox_maps.cf
sender_bcc_maps = mysql:/etc/postfix/mysql_bcc_mailbox_maps.cf, mysql:/etc/postfix/mysql_bcc_domain_maps.cf
# +
# parameter specifies the directory where UNIX-style mailboxes are kept.
# default # mailbox file is /var/spool/mail/user or /var/mail/user
# mail_spool_directory =
# +
# For a non virtual user setup ( as when Dovecot mail_location = maildir:~/.maildir ) :
mailbox_transport = lmtp:unix:private/dovecot-lmtp
#mailbox_transport = local
# address MAIL FROM for probe with "verify" (default=double-bounce@$myorigin)
address_verify_sender =
# This uses non-persistent storage only.
# empty = disable cache
# default = btree:$data_directory/verify_cache # ($data_directory = /var/lib/postfix)
# address_verify_map =
# +
#virtual_destination_recipient_limit = 1
# +
# Version Postfix > 2.9
default_destination_recipient_limit = 1
#dovecot_destination_recipient_limit = 1
# !!!
# not needed if the Dovecot LDA or LMTP is used
# (these options are only relevant for the Postfix LDA: "virtual"):
#virtual_mailbox_base = /var/vmail
# End symbol "/" - maildir format
#virtual_mailbox_base = /var/vmail/
#virtual_minimum_uid = 100
#virtual_uid_maps = static:5000
#virtual_gid_maps = static:5000
#virtual_mailbox_domains = dest.loc, dest1.loc
# virtual_mailbox_domains = $mydomain, mysql:/etc/postfix/mysql_virtual_domains.cf
# virtual_mailbox_domains = $mydomain
virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_domains.cf
# +
virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_maps.cf
#virtual_alias_maps = hash:/etc/postfix/virtual
# +
#virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_domain_maps.cf,mysql:/etc/postfix/mysql_virtual_alias_maps.cf
# +
#virtual_transport = dovecot
#virtual_transport = lmtp:unix:private/dovecot
virtual_transport = lmtp:unix:private/dovecot-lmtp
# Use "strict_rfc821_envelopes = no" to accept "RCPT TO:>".
# Postfix will ignore the "User Name" part and deliver to the address.
strict_rfc821_envelopes = yes
# Enabling disable_vrfy_command stops some methods used to harvest email addresses during the connection to the server.
# def: no
# disable_vrfy_command = yes
# RULES and POLICES
smtpd_helo_required = yes
# * Note
# If a remote SMTP client is authenticated, the permit_sasl_authenticated access restriction can be used to permit relay (for dovecot?) access.
# !!! permit_sasl_authenticated - MOVE TO master.cf for submission (587 port for MUA)
# reject - for all others, which do not permit
# permit - for all others, which do not reject
# note! Postfix does not check MX itself - a policy service is needed for that
smtpd_client_restrictions =
permit_mynetworks,
reject_unknown_client_hostname,
permit_sasl_authenticated,
# check spam (blacklist servers)
# reject_rhsbl_client blackhole.securitysage.com,
# reject_rhsbl_sender blackhole.securitysage.com,
# reject_rbl_client bl.spamcop.net,
# reject_rbl_client dnsbl.sorbs.net,
# reject_rbl_client zen.spamhaus.org,
# reject_rbl_client dnsbl-1.uceprotect.net
#
# reject_rbl_client zombie.dnsbl.sorbs.net,
# reject_rbl_client cbl.abuseat.org,
# reject_rbl_client multihop.dsbl.org,
# reject_rbl_client work.rsbs.express.ru,
permit
# reject_unknown_reverse_client_hostname
# check_client_access hash:/etc/postfix/client_access
smtpd_helo_restrictions =
permit_mynetworks,
reject_invalid_helo_hostname,
reject_non_fqdn_helo_hostname,
reject_unknown_helo_hostname,
check_helo_access hash:/etc/postfix/hello_access,
permit
smtpd_sender_restrictions =
reject_unknown_sender_domain,
reject_non_fqdn_sender,
# reject_unverified_sender, - rejected automailers? Maybe make trust list?
permit
smtpd_recipient_restrictions =
permit_mynetworks,
permit_sasl_authenticated,
reject_unknown_recipient_domain,
reject_non_fqdn_recipient,
reject_unverified_recipient,
reject_unauth_destination,
# postgrey:
check_policy_service inet:127.0.0.1:10023,
# policy-spf (see master.cf):
check_policy_service unix:private/policyd-spf,
permit
#127.0.0.1:10023_time_limit = 180 ##Only for line by "master.cf"
## check_policy_service unix:public/postgrey
# check_recipient_access hash:/etc/postfix/maps/access_recipient,
#smtpd_policy_service_max_idle = 300
#smtpd_policy_service_max_ttl = 1000
#smtpd_policy_service_timeout = 100
command_time_limit = 240
#policyd-spf_time_limit = 3600
policyd-spf_time_limit = 180
# Need: install dkim-milter and sid-milter
##smtpd_milters = unix:public/dkim-filter
##non_smtpd_milters = unix:public/dkim-filter
##milter_protocol = 6
# DKIM + ClamAV
# OpenDkim with Milter:
milter_default_action = accept
milter_protocol = 6
#smtpd_milters = inet:localhost:8891
#non_smtpd_milters = inet:localhost:8891
smtpd_milters = inet:127.0.0.1:8891, inet:127.0.0.1:7357
non_smtpd_milters = inet:127.0.0.1:8891, inet:127.0.0.1:7357
# +++ Postscreen +++
#postscreen_watchdog_timeout = 10 (default: 10s)
#postscreen_cache_cleanup_interval (default: 12h)
#postscreen_cache_retention_time (default: 7d)
postscreen_cache_retention_time = 90d
postscreen_access_list = permit_mynetworks,
cidr:/etc/postfix/postscreen_access.cidr
#postscreen_reject_footer = Postscreen Test
#postscreen_dnsbl_threshold = 2
#postscreen_dnsbl_sites = zen.spamhaus.org*2
# bl.spamcop.net*1 b.barracudacentral.org*1
#postscreen_dnsbl_reply_map = texthash:/etc/postfix/dnsbl_reply
postscreen_cache_map = btree:$data_directory/postscreen_cache
# the client must wait for the greeting banner before sending:
postscreen_greet_banner = Hello from $mydomain !
postscreen_greet_wait = 10s
# reject 550 and logging:
postscreen_greet_action = enforce
# the client must wait for the reply after each command:
postscreen_pipelining_enable = yes
# reject 550 and logging:
postscreen_pipelining_action = enforce
# non-SMTP command and header ("...: text")
postscreen_non_smtp_command_enable = yes
postscreen_forbidden_commands = CONNECT, GET, POST
# reject 550 and logging:
postscreen_non_smtp_command_action = enforce
# check line endings (bare newline test):
postscreen_bare_newline_enable = yes
# reject 550 and logging:
postscreen_bare_newline_action = enforce
#unknown_address_reject_code = 554
#unknown_hostname_reject_code = 554
#unknown_client_reject_code = 554
#unknown_local_recipient_reject_code = 550
#unverified_recipient_reject_code = 450
smtpd_tls_cert_file=/etc/postfix/ssl.cert.pem
smtpd_tls_key_file=/etc/postfix/ssl.key.pem
# (old name - "smtpd_use_tls"). Values: "no", "may" (client's choice), "encrypt" (TLS only)
# setting "smtpd_tls_security_level = encrypt" implies "smtpd_tls_auth_only = yes"
smtpd_tls_security_level = may
smtp_tls_security_level = may
smtpd_tls_ask_ccert = yes
smtpd_tls_loglevel = 1
smtp_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_database = btree:/var/lib/postfix/smtpd_scache
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_scache
#smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
#smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
# +
# data_directory > Postfix 2.5
# Not need - default!
#data_directory = /var/lib/postfix
# For local users:
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
#canonical_maps = hash:/etc/postfix/canonical
smtp_generic_maps = hash:/etc/postfix/aliases_smtp_output
lmtp_generic_maps = hash:/etc/postfix/aliases_lmtp
#lmtp_generic_maps = mysql:/etc/postfix/mysql_virtual_alias_domain_maps.cf,mysql:/etc/postfix/mysql_virtual_alias_maps.cf,hash:/etc/postfix/aliases_lmtp
# +
# extension after "user" and before "@" (user+extension@domain)
recipient_delimiter = +
#local_recipient_maps = unix:passwd.byname $alias_maps
#local_recipient_maps = proxy:unix:passwd.byname $alias_maps
#local_recipient_maps =
#mailbox_size_limit = 0
mailbox_size_limit = 1024000000
message_size_limit = 20480000
# Default:
#queue_run_delay = 300s
#minimal_backoff_time = 300s
#master_service_disable =
#content_filter = Email content filter
queue_directory = /var/spool/postfix
# Mail notice (default "postmaster") :
delay_notice_recipient = [email hidden]
bounce_notice_recipient = [email hidden]
2bounce_notice_recipient = [email hidden]
error_notice_recipient = [email hidden]
reject_rbl_client - worth enabling only during periods of spam attacks. Otherwise these checks can do real harm: the third-party services are, first, often wrong, and second, they slow mail down and create extra load.
reject_unverified_sender - likewise, because verifying every sender means that messages from mailing lists stop arriving: most of them do not accept return mail, and the check works by emulating delivery of a return message.
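Among the main.cf settings above, recipient_delimiter = + enables address extensions. A minimal sketch of the effect (a simplified toy model, not Postfix's actual resolver): the extension is stripped before the mailbox lookup, so mail for user+anything@domain lands in user@domain's box.

```python
# Simplified model of Postfix's recipient_delimiter handling:
# "user+extension@domain" is looked up as "user@domain".
def base_address(addr, delimiter="+"):
    local, _, domain = addr.partition("@")
    base, _, _extension = local.partition(delimiter)
    return "{}@{}".format(base, domain)

print(base_address("vasy+lists@dest.loc"))  # vasy@dest.loc
print(base_address("vasy@dest.loc"))        # unchanged: vasy@dest.loc
```

The extension stays visible in the headers, which makes per-purpose addresses (vasy+shop@, vasy+lists@) easy to filter in Dovecot.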
Additional data and parameter files.
The lookup tables are mostly in "hash" format ("cidr" and others are possible). After creating a "hash"-format file, its database must be built with the postmap command (see the example below). A peculiarity of some formats is that after changes Postfix must be reloaded ("postfix reload"), otherwise the changes do not take effect (in return, lookups are faster).
IMPORTANT! You must create the files and databases for every list referenced in the config above, even if they stay empty.
File for the rule "check_helo_access hash:/etc/postfix/hello_access"
mail.dest.loc REJECT Don't use my server name
dest.loc REJECT Don't use my server name
Then run the command:
postmap /etc/postfix/hello_access
Testing may look roughly like this (run from a console on an external server - only after the entire setup is complete!):
# telnet mail.domain.tld 25
Trying ip1.ip2.ip3.ip4...
Connected to mail.domain.tld.
Escape character is '^]'.
220 mail.domain.tld ESMTP
# helo mail.domain.tld
250 mail.domain.tld
# mail from:<...>
250 2.1.0 Ok
# rcpt to:<...>
554 5.7.1 <mail.domain.tld>: Helo command rejected: Don't use my server name
# quit
221 2.0.0 Bye
File for the rule "check_client_access hash:/etc/postfix/client_access"
ip.ip.ip.ip RULE
IMPORTANT! Replace "ip.ip.ip.ip" and "RULE" with the actual values (for example "1.2.3.4 REJECT").
Run:
postmap /etc/postfix/client_access
Similarly, if they are going to be used, create the files and databases (tables) for the rules:
"check_recipient_access hash:/etc/postfix/maps/access_recipient" - a list of recipients with rules for each (for example, it can block inbound mail to a distribution-only mailbox)
"canonical_maps = hash:/etc/postfix/canonical"
Before the cleanup daemon hands incoming mail over to the incoming queue, it uses the canonical mapping table to rewrite all addresses in the message envelope and in the message headers, local or remote. Canonical address mapping is convenient for rewriting login names to the "Firstname.Lastname" form, or for replacing invalid domains with valid ones. In addition to the canonical mapping applied to both the sender and the recipient address, separate canonical tables can be specified for the sender address only or for the recipient address only.
The sender_canonical_maps and recipient_canonical_maps rewrites are applied before the shared canonical_maps.
An example mapping table in the file /etc/postfix/canonical
vasy Vasiliy.Pupkin
IMPORTANT! Keep in mind that "sender" and "recipient" depend on the direction of the mail, inbound vs outbound. That is, the same address is the "sender" when mail is sent on its behalf, and sender_canonical_maps then applies to it; but when mail is received for that address it is the "recipient", and sender_canonical_maps no longer touches it.
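This ordering can be sketched in a toy model (the table contents below are invented for illustration, not taken from the setup above): sender_canonical_maps fires first and only for the sender role, then the shared canonical_maps is applied to both roles.

```python
# Toy model of cleanup(8) address rewriting order (illustrative tables):
# sender_canonical_maps runs before the shared canonical_maps,
# and only when the address acts as the sender.
sender_canonical = {"vasy@dest.loc": "vasiliy@dest.loc"}
canonical = {"vasiliy@dest.loc": "Vasiliy.Pupkin@dest.loc"}

def rewrite(addr, role):
    if role == "sender":
        addr = sender_canonical.get(addr, addr)  # sender-only pass first
    return canonical.get(addr, addr)             # then canonical_maps (both roles)

print(rewrite("vasy@dest.loc", "sender"))     # Vasiliy.Pupkin@dest.loc
print(rewrite("vasy@dest.loc", "recipient"))  # unchanged: vasy@dest.loc
```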
You can test it as follows:
Create two mailboxes, "[email hidden]" and "[email hidden]".
Make the first one an alias of the second using canonical_maps.
• When mail arrives from an external address to the aliased mailbox, the message is delivered to the target mailbox. In the message only "Delivered-To: <...>" changes, plus a mention in the topmost "Received: ..." line ("... for <...>; ...").
All the other "Received: ..." lines still show the original address after "for", just like "To: ... <...>"
• When mail is sent from the aliased address (including via sendmail) to an external address, inspecting the headers on the receiving side shows that every occurrence has been replaced with the target address. The only place where a mention of the original address may remain is "From: <...> (test)".
It is also important that the digital signature (DKIM) stays valid - apparently signing takes place after canonical_maps processing.
Canonical rewriting can be forcibly disabled, globally or selectively.
"smtp_generic_maps = hash:/etc/postfix/aliases_smtp_output" - rewriting local sender addresses in outbound mail.
The smtp_generic_maps parameter rewrites sender addresses only in mail that leaves the domain (through the SMTP client). You can list specific addresses or whole domains.
IMPORTANT! Problems can arise when smtp_generic_maps is used together with DKIM
* Since version 2.3, lmtp_generic_maps can be used the same way for LMTP.
For example, create the file /etc/postfix/aliases_smtp_output with the content:
root [email hidden]
Run the command:
postmap /etc/postfix/aliases_smtp_output
Send a message from the console to "[email hidden]" while logged in as "root"
echo "Test generic mapping" | sendmail [email hidden]
A few remarks about this example:
• Since only the SMTP client is processed, the address is rewritten only in outbound mail
• If you try to rewrite a recipient address, the following happens: the server tries to deliver the message with the new recipient to the original domain, so if that user does not exist there, delivery fails.
• Mind the ordering: this kind of rewriting happens after BCC expansion (and after canonical mapping, and so on). For example, with sender_bcc_maps the hidden copy for the local sender "root" is matched against the original (unmodified) sender, even though the message reaches the recipient outside the domain with the rewritten sender. Besides, the hidden copy itself may also be passed through address rewriting if it is sent outside.
• In the example above, the message arrives at the remote recipient almost without a trace of "root". The sender is replaced everywhere except one mention: From: [email hidden] (root).
This way you can, for example, rewrite the sender address of the "apache" user in mail that a website sends through PHP
"lmtp_generic_maps = hash:/etc/postfix/aliases_lmtp".
• Since only mail passing through LMTP is processed, the rewriting applies both to inbound (external) mail and to mail sent within the home domain (between local users), but does not touch mail going from the inside out.
• This parameter rewrites both the sender and the recipient address (bug or feature?) for mail going from the outside in and from the inside to the inside.
• It leaves almost no trace of the original sender (apart from the MTA host and possibly a mention in "X-Sender:").
• The recipient is trickier. It is changed in "Delivered-To:" and "To:" (and the mail is delivered to the rewritten address), but a trace of the original address remains: it appears after "for" in every "Received:" header
• Address processing happens after BCC creation, so if the hidden copy passes through LMTP (into the server), its address may be rewritten too.
IMPORTANT! For each of the files above, you must finish by building its database with the command: postmap ...
Files for the rules:
• "alias_maps = hash:/etc/aliases"
• "alias_database = hash:/etc/aliases"
The /etc/aliases file is usually already present in the system. It is used to deliver mail to local users. For example:
root: [email hidden]
Local mail addressed to the user "root" is redirected to [email hidden]
After changing this file, rebuild its database with the command
newaliases
File for the rule "postscreen_access_list = permit_mynetworks, cidr:/etc/postfix/postscreen_access.cidr" (if a dedicated access list for postscreen is used)
ip.ip.ip.ip RULE
IMPORTANT! Replace "ip.ip.ip.ip" and "RULE" with the actual values (for example "1.2.3.4/32 permit").
This format needs no further steps to build a database.
IMPORTANT! Some of the files may be empty, but they must exist!
After creating all the files, run:
postfix reload
Create the files holding the SQL queries, checking the table and field names against your PostfixAdmin schema
/etc/postfix/mysql_virtual_domains.cf
user = _POSTFIXADMIN_SQL_USER_
password = _POSTFIXADMIN_SQL_PASSWORD_
hosts = 127.0.0.1
dbname = _POSTFIXADMIN_SQL_BASE_
query = SELECT domain FROM domain WHERE domain = '%s' AND backupmx = '0' AND active = '1'
/etc/postfix/mysql_virtual_maps.cf
user = _POSTFIXADMIN_SQL_USER_
password = _POSTFIXADMIN_SQL_PASSWORD_
hosts = 127.0.0.1
dbname = _POSTFIXADMIN_SQL_BASE_
query = SELECT username FROM mailbox WHERE username='%s' AND active = '1'
Query for the auto-BCC mailbox of a registered user, /etc/postfix/mysql_bcc_mailbox_maps.cf
user = _POSTFIXADMIN_SQL_USER_
password = _POSTFIXADMIN_SQL_PASSWORD_
hosts = 127.0.0.1
dbname = _POSTFIXADMIN_SQL_BASE_
query = SELECT CONCAT('%u', '+bccflag', '@', '%d') FROM mailbox WHERE username='%s' AND active = '1'
Query for the mailbox used for auto-BCC of unregistered users of the home domain, /etc/postfix/mysql_bcc_domain_maps.cf
user = _POSTFIXADMIN_SQL_USER_
password = _POSTFIXADMIN_SQL_PASSWORD_
hosts = 127.0.0.1
dbname = _POSTFIXADMIN_SQL_BASE_
query = SELECT '[email hidden]' FROM domain WHERE domain='%d' AND active = '1'
Email aliases, /etc/postfix/mysql_virtual_alias_maps.cf
user = _POSTFIXADMIN_SQL_USER_
password = _POSTFIXADMIN_SQL_PASSWORD_
hosts = 127.0.0.1
dbname = _POSTFIXADMIN_SQL_BASE_
query = SELECT goto FROM alias WHERE address='%s' AND active = '1'
Domain aliases, /etc/postfix/mysql_virtual_alias_domain_maps.cf
user = _POSTFIXADMIN_SQL_USER_
password = _POSTFIXADMIN_SQL_PASSWORD_
hosts = 127.0.0.1
dbname = _POSTFIXADMIN_SQL_BASE_
query = SELECT CONCAT('%u', '@', target_domain) FROM alias_domain WHERE alias_domain = '%d' AND active = '1'
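In all these map files Postfix substitutes the lookup key into the query before sending it to MySQL: %s is the whole key, %u the local part, %d the domain. A small sketch of that expansion (simplified: the escaping and quoting that real Postfix applies to the substituted values is omitted):

```python
# Sketch of mysql: map query expansion in Postfix:
# %s = whole lookup key, %u = local part, %d = domain.
# (Simplified: escaping/quoting of substituted values is omitted.)
def expand_query(template, key):
    local, _, domain = key.partition("@")
    return (template.replace("%s", key)
                    .replace("%u", local)
                    .replace("%d", domain))

q = "SELECT username FROM mailbox WHERE username='%s' AND active = '1'"
print(expand_query(q, "vasy@dest.loc"))
```

This is also what `postmap -q address mysql:/etc/postfix/...cf` does under the hood before returning the query result (see the checks below).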
Set permissions on all the configs:
chgrp postfix /etc/postfix/*.cf
chmod u=rw,g=r,o= /etc/postfix/*.cf
IMPORTANT! The hosts parameter here must be the same as the one specified in MySQL's "my.cnf" (bind-address)!
Once everything is prepared, restart postfix:
systemctl restart postfix
Entering the first data into our mail server.
Now let us return to Postfixadmin and enter the first data through its web interface at: https://dest.loc/postfixadmin/
1. Add our new domain - dest.loc
2. Temporarily add one more, test domain - test.com (it will have to be deleted later!!!)
3. Add an alias FOR the domain dest.loc - all mail of the test.com domain is redirected to dest.loc
4. Create a mailbox - [email protected]
5. Turn the mailbox [email protected] into an alias for [email protected] (the target mailbox does not need to be created - it is enough to enter it as the target of the alias mailbox [email protected]).
6. Be sure (!) to create the mailbox - [email protected]
* It is important not to get confused here. When editing a domain, you create an alias FOR the domain being edited, whereas when editing a mailbox, it is the edited mailbox itself that becomes an alias for the listed targets.
** By the way, when creating a mailbox, be sure to send the welcome message, because Dovecot is responsible for creating the folder (Postfix has no access to these folders at all - any physical access goes THROUGH Dovecot).
*** Authentication works in a curious way. Postfix checks whether mailbox names exist by itself, in the PostfixAdmin tables. It never queries the passwords, though - those are verified by Dovecot, but... also against the PostfixAdmin tables. :)
Checks.
Check a domain:
postmap -q dest.loc mysql:/etc/postfix/mysql_virtual_domains.cf
should return dest.loc
User@domain found:
postmap -q [email hidden] mysql:/etc/postfix/mysql_virtual_maps.cf
should return [email hidden]
BCC - user@domain found:
postmap -q [email hidden] mysql:/etc/postfix/mysql_bcc_mailbox_maps.cf
should return [email hidden]
BCC - user@domain NOT found:
postmap -q [email hidden] mysql:/etc/postfix/mysql_bcc_mailbox_maps.cf
empty reply
BCC - domain found:
postmap -q [email hidden] mysql:/etc/postfix/mysql_bcc_domain_maps.cf
should return [email hidden]
BCC - domain NOT found:
postmap -q [email hidden] mysql:/etc/postfix/mysql_bcc_domain_maps.cf
empty reply
For an alias mailbox:
* returns the forwarding (distribution) list for the mailbox - the list may contain any valid names (mailbox domains are checked via DNS), e.g. ***@gmail.com!
postmap -q [email hidden] mysql:/etc/postfix/mysql_virtual_alias_maps.cf
should return [email hidden]
For an alias domain:
In this exotic case it returns the distribution list of the alias mailbox FOR which the queried mailbox is an alias.
* An alias domain gets the names of ALL the mailboxes of the main domain, but ONLY those that exist in the main domain.
postmap -q [email hidden] mysql:/etc/postfix/mysql_virtual_alias_domain_maps.cf
should return [email hidden]
Postgrey
This is a time-proven technique: "newcomers" (clients not seen before) are admitted only after their persistence has been tested, i.e. not on the first attempt. First comes a soft refusal (e.g. with code "450"), after which retries are expected. The bet is that spammers' servers save resources and skip the standard retry procedure that most well-behaved MTAs perform by default. From then on a whitelist takes over: entries are kept in it without refreshing for about a month (configurable) and no further refusals happen - mail that has passed the check once is accepted immediately afterwards.
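The core of greylisting fits in a few lines. A minimal sketch (a toy model, not Postgrey's actual code): a triplet of client IP, sender, and recipient is deferred with a 4xx code until a delay has passed since its first attempt, after which it is accepted and remembered.

```python
import time

# Minimal greylisting sketch: the triplet (client IP, sender, recipient)
# is answered with a 4xx "come back later" until `delay` seconds have
# passed since its first attempt; after that it is accepted.
class Greylist:
    def __init__(self, delay=300):
        self.delay = delay
        self.first_seen = {}  # triplet -> timestamp of first attempt

    def check(self, client_ip, sender, recipient, now=None):
        now = time.time() if now is None else now
        triplet = (client_ip, sender, recipient)
        first = self.first_seen.setdefault(triplet, now)
        if now - first < self.delay:
            return "450 4.2.0 greylisted, try again later"
        return "250 2.0.0 ok"

gl = Greylist(delay=300)
print(gl.check("1.2.3.4", "a@external.tld", "vasy@dest.loc", now=0))    # deferred
print(gl.check("1.2.3.4", "a@external.tld", "vasy@dest.loc", now=400))  # accepted
```

A legitimate MTA retries and gets through on the second attempt; a fire-and-forget spam bot never comes back.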
Rosa Cobalt has no such package in its repositories, while CentOS has it in the EPEL repositories, so the rest of this section describes the CentOS setup.
Installation is simple:
yum install postgrey
Postgrey stores triplet records of the form "client_IP"/"sender"/"recipient" in databases that live by default in /var/lib/postgrey
Client whitelist: /etc/postgrey/whitelist_clients
Recipient whitelist: /etc/postgrey/whitelist_recipients
Postgrey is hooked into Postfix in /etc/postfix/main.cf (on Rosa Cobalt these lines must be disabled):
...
smtpd_recipient_restrictions = ...,
...,
check_policy_service inet:127.0.0.1:10023
...
The check_policy_service entry is best placed after all the other "smtpd_recipient_restrictions". It is essential that it comes after reject_unauth_destination, otherwise the system turns into an open relay. It also makes sense to hook the Postgrey policy in before "policy-spf": there is no point in running these checks in the reverse order.
Postgrey's own settings are kept in /etc/default/postgrey. There you can, for example, raise the delay for "newcomers" to 30 minutes (default 300 s), shorten the grace period of already "admitted" clients to 10 days (default 35 days, with any activity restarting the clock), and have a client moved to the whitelist automatically after 20 accepted messages (default 5). We configure Postgrey in /etc/default/postgrey:
...
POSTGREY_OPTS="--inet=10023 --delay=200 --max-age=40 --auto-whitelist-clients=4"
...
refuse newcomers for 200 s (default 300 s),
keep whitelist entries for 40 days without refresh (default 35 days),
accept 4 messages with greylist checking, then add the client to the whitelist automatically.
Restart the service:
systemctl restart postgrey
To get a report from the log about the current "refusals":
cat /var/log/maillog | postgreyreport --nosingle_line --check_sender=mx,a --show_tries --separate_by_subnet=":-----------------------\n"
Postscreen
The zombie blocker postscreen is the first line of defense available in Postfix. It has been available since Postfix 2.8, and some features (such as "cache sharing") since 2.9.
IMPORTANT! Postscreen cannot share a port with MUAs!
Most mail is spam, and most spam is sent by zombie bots - infected computers whose owners may not even suspect that they are sources of spam. Postfix author Wietse Venema predicts that the zombie problem will keep growing and the situation will only get worse. The idea behind the defense is that such machines usually fail to observe the SMTP protocol in full and, in addition, tend to show unnatural persistence, reconnecting again and again after a refusal. Postscreen exists precisely so that smtpd is not tied up by them: it runs a series of tests on the SMTP client and, if it sees the signs of a "zombie", keeps the client on an emulated SMTP session, never letting it reach the real smtpd daemon.
Postscreen also keeps a temporary whitelist of clients to which these tests are not applied.
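Two of the tests enabled in the main.cf above can be sketched as a toy model (not postscreen's actual implementation): the "pregreet" test, where the client must stay silent until the banner arrives, and the forbidden-command test; with the *_action = enforce settings a failure maps to a 550 reply.

```python
# Toy model of two postscreen checks: the "pregreet" test (the client
# must stay silent for postscreen_greet_wait) and the
# postscreen_forbidden_commands test; "enforce" maps to a 550 reply.
FORBIDDEN = {"CONNECT", "GET", "POST"}  # postscreen_forbidden_commands

def postscreen_verdict(talked_before_banner, first_command):
    if talked_before_banner:
        return "550 pregreet: talked before my greeting"
    if first_command.split()[0].upper() in FORBIDDEN:
        return "550 non-SMTP command"
    return "pass to smtpd"

print(postscreen_verdict(False, "EHLO mail.outside.tld"))  # pass to smtpd
print(postscreen_verdict(True, "EHLO mail.outside.tld"))
print(postscreen_verdict(False, "GET / HTTP/1.1"))
```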
Its settings are in "/etc/postfix/master.cf" and "/etc/postfix/main.cf", described earlier.
The same sections describe "/etc/postfix/postscreen_access.cidr" for the postscreen_access_list parameter in "/etc/postfix/main.cf".
Policyd-SPF
SPF is a TXT record in DNS which, in the simplest case, lists the IP address of the server that is allowed to send mail for the domain.
What this should look like in DNS was described above.
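As a quick reminder, a minimal SPF record for our domain, allowing only the server's own address to send mail, might look like this (the IP is the illustrative value used throughout this article; real deployments use the public address):

```
dest.loc.   IN  TXT  "v=spf1 ip4:192.168.20.104 mx -all"
```

"-all" tells receivers to reject mail from any other source; the softer "~all" only marks it as suspicious.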
ОС Rosa Cobalt данный пакет не входит. В CentOS он присутствует в репозитариях EPEL.Установка:
yum install pypolicyd-spf
The configuration is in "/etc/postfix/master.cf" and "/etc/postfix/main.cf", described above.
Also check and adjust the file /etc/python-policyd-spf/policyd-spf.conf:
# For a fully commented sample config file see policyd-spf.conf.commented
debugLevel = 0
defaultSeedOnly = 0
#HELO_reject = SPF_Not_Pass
HELO_reject = False
#Mail_From_reject = Fail
Mail_From_reject = False
PermError_reject = False
TempError_Defer = False
#skip_addresses = 127.0.0.0/8,::ffff:127.0.0.0//104,::1//128
skip_addresses = 127.0.0.1/32
After that, make Postfix re-read the new configuration:
service postfix restart
Verification:
Send yourself a message from outside and inspect the headers. A message that passed the SPF check should contain something like:
...
Received-SPF: Pass (sender SPF authorized) identity=mailfrom; client-ip=1.2.3.4; helo=mail.outside.tld; envelope-from=<sender address>; receiver=<recipient address>
...
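This check can be scripted. A minimal sketch, assuming the message has been saved to a file (the function name and file path are mine, not part of the setup):

```shell
# Check whether a saved message passed the SPF test by looking for a
# "Received-SPF: Pass" header line. Returns 0 (success) if found.
spf_passed() {
  grep -qi '^Received-SPF: Pass' "$1"
}
```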
OpenDKIM
The package is absent from the Rosa Cobalt repositories. We configure it on CentOS.
Installation:
yum install opendkim
Setting up DKIM in DNS was described above.
Create the directories:
mkdir -pv /etc/opendkim/mail/
chown -Rv opendkim:opendkim /etc/opendkim
chmod go-rwx /etc/opendkim/*
cd /etc/opendkim/mail/
First, create the file listing the hosts whose mail does not need to be signed: /etc/opendkim/mail/opendkimhosts
127.0.0.1
localhost
domain
# Your IP addresses (one per line)
192.168.20.104
#Your hostnames (one per line)
dest.loc
Generate the keys:
opendkim-genkey -D /etc/opendkim/mail/ -b 1024 -d dest.loc -s mail
-r - means the key is restricted to e-mail use only
-s - the selector
Set permissions on the files:
# cd /etc/opendkim/mail/
# chown opendkim:opendkim *
# chmod u=rw,go-rwx *
Configure OpenDKIM in the file /etc/opendkim.conf. Comment out all existing settings and add:
...
Domain dest.loc
KeyFile /etc/opendkim/mail/mail.private
Selector mail
InternalHosts /etc/opendkim/mail/opendkimhosts
ExternalIgnoreList /etc/opendkim/mail/opendkimhosts
AutoRestart yes
Background yes
Canonicalization simple
DNSTimeout 5
Mode sv
SignatureAlgorithm rsa-sha256
SubDomains yes
#UseASPDiscard no
#Version rfc4871
Configure the port in /etc/default/opendkim. Likewise comment everything out and add the line:
...
SOCKET="inet:8891@localhost"
The DKIM settings for Postfix are in the file "/etc/postfix/main.cf". On Rosa Cobalt (where OpenDKIM is unavailable) leave them commented out:
...
milter_default_action = accept
milter_protocol = 2
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891
...
The first line tells Postfix to accept mail anyway, even when it is unsigned or the milter is unavailable.
The public key (generated earlier) should be here: /etc/opendkim/mail/mail.txt
Copy it into DNS (see above for how). In the same place, create a TXT record describing the DKIM usage policy for the domain.
IMPORTANT! The key in DNS must be entered as one long string. Only what is inside the quotation marks is added, together with the quotation marks.
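Joining the quoted chunks of the generated .txt file into one string can be done with a small helper. This is a sketch (the function name is mine); it assumes the opendkim-genkey output format, where the TXT value is split into several quoted pieces:

```shell
# Flatten the quoted pieces of an opendkim-genkey .txt file into one
# long unquoted string, suitable for pasting into a DNS TXT record.
flatten_dkim_key() {
  grep -o '"[^"]*"' "$1" | tr -d '"\n'
  echo
}
```

Usage: `flatten_dkim_key /etc/opendkim/mail/mail.txt`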
If everything is done, restart the services:
/etc/init.d/opendkim restart
service postfix restart
ClamAV-Milter
The ClamAV antivirus can be hooked into Postfix in several ways. On a system that is well protected and not too loaded (i.e. has enough spare resources), ClamAV can be connected before the queue and scan messages before the SMTP session completes. This makes it possible to reject clients immediately, without ever accepting the message into the queue.
If only mail that has already passed Postscreen and Postgrey reaches the antivirus, you can count on having enough spare resources for ClamAV to finish in time.
Here ClamAV will be connected via the Milter protocol.
yum install clamav clamav-scanner clamav-update clamav-milter libtommath
The databases live in the directory /var/lib/clamav/
After installing the packages, update the virus databases. Edit the file /etc/freshclam.conf: comment out the Example parameter and add (or uncomment) the following lines:
DatabaseDirectory /var/lib/clamav
UpdateLogFile /var/log/freshclam.log
LogFileMaxSize 20M
LogTime yes
LogRotate yes
PidFile /var/run/freshclam.pid
AllowSupplementaryGroups yes
DatabaseMirror database.clamav.net
NotifyClamd /etc/clam.d/scan.conf
Check 4
Leave the remaining lines unchanged.
Run the database update:
freshclam
Configure the clamd service in the file /etc/clam.d/scan.conf. Comment out the Example line. The remaining parameters:
LogFile /var/log/clamd.scan
LogFileMaxSize 20M
LogTime yes
LogSyslog no
LogFacility LOG_MAIL
LogRotate yes
ExtendedDetectionInfo yes
PidFile /var/run/clamd.scan/clamd.pid
TemporaryDirectory /var/tmp
DatabaseDirectory /var/lib/clamav
LocalSocket /var/run/clamd.scan/clamd.sock
LocalSocketMode 660
FixStaleSocket yes
TCPSocket 3310
TCPAddr 127.0.0.1
StreamMinPort 30000
StreamMaxPort 32000
MaxThreads 20
ReadTimeout 300
MaxQueue 200
IdleTimeout 60
User clamscan
AllowSupplementaryGroups yes
ScanMail yes
PhishingSignatures yes
PhishingScanURLs yes
MaxScanSize 150M
MaxFileSize 30M
Start the service:
/etc/init.d/clamd.scan start
Enable it at boot:
chkconfig --level 2345 clamd.scan on
Configure ClamAV-milter in the file /etc/mail/clamav-milter.conf. Comment out the Example line. The remaining settings:
#MilterSocket /var/run/clamav-milter/clamav-milter.socket
MilterSocket inet:[email protected]
MilterSocketMode 660
FixStaleSocket yes
User clamilt
AllowSupplementaryGroups yes
ReadTimeout 300
PidFile /var/run/clamav-milter/clamav-milter.pid
TemporaryDirectory /var/tmp
ClamdSocket unix:/var/run/clamd.scan/clamd.sock
OnClean Accept
OnInfected Reject
OnFail Reject
RejectMsg "Virus detected: %v. Mail delivery error (reject from virus scanner)."
AddHeader Add
LogFile /var/log/clamav-milter.log
LogFileUnlock yes
LogFileMaxSize 50M
LogTime yes
LogSyslog no
LogInfected Full
LogClean Basic
Start the service:
/etc/init.d/clamav-milter start
Enable it at boot:
chkconfig --level 2345 clamav-milter on
In the Postfix configuration file /etc/postfix/main.cf (see above), append the ClamAV values, separated by commas, to the smtpd_milters and non_smtpd_milters parameters:
...
smtpd_milters = inet:127.0.0.1:8891, inet:127.0.0.1:7357
non_smtpd_milters = inet:127.0.0.1:8891, inet:127.0.0.1:7357
...
A message sent from outside should contain header lines similar to these:
...
X-Virus-Status: Clean
X-Virus-Scanned: clamav-milter 0.97.8 at domain
...
To test the antivirus, you can send yourself a message that will be recognized as infected. The so-called "test virus" EICAR-Test-File exists for this purpose. Search the web for "EICAR-Test-File" and find the test string on one of the sites that publish it (for example, Wikipedia). Then copy the test string given there directly into the body of a message and send it to yourself from an external mail service (GMail will do). After a short time (most likely a few seconds) a non-delivery report should come back. It should contain the rejection message configured just above (Virus detected: "%v".) and the response code (550).
In general, the EICAR-Test-File is a convenient tool for debugging the antivirus system.
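To avoid publishing the test string here (and tripping scanners on this document itself), it can be assembled from two halves; the function name is mine, but the 68-byte EICAR string itself is the well-known standard one:

```shell
# Assemble the 68-byte EICAR test string from two halves so that this
# script itself does not trigger the virus scanner. The output can be
# pasted into a message body to test the ClamAV setup.
eicar_string() {
  printf '%s%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTI' 'VIRUS-TEST-FILE!$H+H*'
}
```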
Configuring Dovecot
We installed Dovecot earlier, together with Postfix. Now it is time to configure it. To view the current settings, enter the following command in the console:
doveconf
In this configuration Dovecot will, for now, listen only on port 143 (IMAP) and only on localhost (127.0.0.1).
The MUA, RoundCube, will connect to this port. Port 25 (Postfix) is used for the outside world, while RoundCube sends outgoing mail through a different port, 587 (Postfix).
Protocol configuration.
Remove the protocols we do not need. In the file /etc/dovecot/dovecot.conf write:
protocols = imap lmtp sieve
...
!include conf.d/*.conf
...
Below are all the files we need for this configuration.
/etc/dovecot/dovecot.conf
protocols = imap lmtp sieve
dict {
#quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
#expire = sqlite:/etc/dovecot/dovecot-dict-sql.conf.ext
}
!include conf.d/*.conf
#!include_try local.conf
/etc/dovecot/conf.d/10-auth.conf
# Connect only after start SSL/TLS
# If not local network only !
disable_plaintext_auth = yes
auth_cache_size = 1M
auth_cache_negative_ttl = 0
auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@
auth_master_user_separator = *
auth_mechanisms = plain
#!include auth-deny.conf.ext
#!include auth-master.conf.ext
#!include auth-system.conf.ext
!include auth-sql.conf.ext
#!include auth-ldap.conf.ext
#!include auth-passwdfile.conf.ext
#!include auth-checkpassword.conf.ext
#!include auth-vpopmail.conf.ext
#!include auth-static.conf.ext
/etc/dovecot/conf.d/10-director.conf
service director {
unix_listener login/director {
#mode = 0666
}
fifo_listener login/proxy-notify {
#mode = 0666
}
unix_listener director-userdb {
#mode = 0600
}
inet_listener {
#port =
}
}
# Enable director for the wanted login services by telling them to
# connect to director socket instead of the default login socket:
service imap-login {
#executable = imap-login director
}
#service pop3-login {
#executable = pop3-login director
#}
# Enable director for LMTP proxying:
protocol lmtp {
#auth_socket_path = director-userdb
}
/etc/dovecot/conf.d/10-logging.conf
# Log file to use for error messages. "syslog" logs to syslog,
# /dev/stderr logs to stderr.
log_path = /var/log/dovecot.log
info_log_path = /var/log/dovecot-info.log
debug_log_path = /var/log/dovecot-debug.log
auth_verbose = no
auth_verbose_passwords = no
auth_debug = no
auth_debug_passwords = no
mail_debug = no
verbose_ssl = no
#plugin {
#}
#log_timestamp = "%b %d %H:%M:%S "
#login_log_format_elements = user= method=%m rip=%r lip=%l mpid=%e %c
#login_log_format = %$: %s
#mail_log_prefix = "%s(%u): "
# Format to use for logging mail deliveries. You can use variables:
# %$ - Delivery status message (e.g. "saved to INBOX")
# %m - Message-ID
# %s - Subject
# %f - From address
# %p - Physical size
# %w - Virtual size
#deliver_log_format = msgid=%m: %$
You may need to change the permissions on /var/log/dovecot.log to 666. Information about errors and other events related to the mail system should also be looked for in the files:
/var/log/dovecot.log
/var/log/maillog
For configuring and monitoring the system it is useful to append the following lines to the end of /etc/rsyslog.conf:
...
# Add
# Postfix + Dovecot
local1.* -/var/log/dovecot.log
local1.info -/var/log/dovecot.info
local1.warn -/var/log/dovecot.warn
local1.err -/var/log/dovecot.err
:msg,contains,"stored mail into mailbox" -/var/log/dovecot.lmtp
Restart the service:
service rsyslog restart
To keep the server disk from filling up, the logs must be rotated periodically. Set up log rotation: create the file /etc/logrotate.d/dovecot with the following contents:
/var/log/dovecot.log
/var/log/dovecot.info
/var/log/dovecot.warn
/var/log/dovecot.err
/var/log/dovecot.lmtp
{
weekly
rotate 52
missingok
notifempty
compress
delaycompress
create 640 root root
sharedscripts
postrotate
/bin/kill -USR1 `cat /var/run/dovecot/master.pid 2>/dev/null` 2>/dev/null || true
endscript
}
The next config ("10-mail.conf") contains the namespace virt, whose virtual plugin must be enabled in four configs:
• 20-imap.conf
• 20-lmtp.conf
• 15-lda.conf
• 20-managesieve.conf
IMPORTANT! In the configs "15-lda.conf" and "20-managesieve.conf" the plugin must be enabled without duplicating the ones already enabled: mail_plugins = ... virtual, while in the others it is enabled with duplication: mail_plugins = $mail_plugins ... virtual
In the "10-mail.conf" config, the "location = " line of "namespace virt" configures the plugin so that the directory with the rules is shared by all users, while the index directories are per-user.
But first, create the necessary directories (in the console):
mkdir /opt/mail/dovecot/virtual
mkdir /opt/mail/dovecot/virtual/Folder1
chown -hR vmail:vmail /opt/mail/dovecot/virtual
chmod -R 700 /opt/mail/dovecot/virtual
IMPORTANT! Going forward, the owner and permissions will also have to be changed in the same way for any subfolders created inside /opt/mail/dovecot/virtual!
The "virtual_index" index folder for the virtual directories will be created in each mailbox automatically on any access.
A filter file for a virtual folder may look like this (for the virtual folder Folder1), /opt/mail/dovecot/virtual/Folder1/dovecot-virtual:
virtual/Folder1
inthread refs x-mailbox INBOX
Continuing with the configuration.
/etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:/opt/mail/vmail/%d/%n:INBOX=/opt/mail/vmail/%d/%n/Inbox
namespace virt {
# type = private
prefix = virtual/
separator = /
location = virtual:/opt/mail/dovecot/virtual:INDEX=/opt/mail/vmail/%d/%n/virtual_index:CONTROL=/opt/mail/vmail/%d/%n/virtual_index
inbox = no
hidden = yes
list = yes
subscriptions = yes
#mailbox Folder1 {
# auto=subscribe
#}
}
namespace allusers {
type = public
separator = /
prefix = "allmail/%d/"
location = maildir:/opt/mail/vmail/%d:LAYOUT=fs:INDEX=/opt/mail/vmail/%d/%n/allmail_index
inbox = no
hidden = yes
list = yes
subscriptions = no
}
namespace system_users {
type = private
separator = /
prefix = "system_users/"
location = mbox:/var/mail/:INDEX=/opt/mail/vmail/system_users_index
inbox = no
hidden = yes
list = yes
subscriptions = yes
}
namespace inbox {
type = private
separator = /
prefix =
inbox = yes
hidden = no
list = yes
subscriptions = yes
}
mail_uid = 5000
mail_gid = 5000
#mail_nfs_storage = no
#mail_nfs_index = no
#first_valid_uid = 500
#last_valid_uid = 0
#first_valid_gid = 1
#last_valid_gid = 0
#mail_attachment_dir =
#mail_attachment_min_size = 128k
#mail_attachment_fs = sis posix
# Hash format to use in attachment filenames. You can add any text and
# variables: %{md4}, %{md5}, %{sha1}, %{sha256}, %{sha512}, %{size}.
# Variables can be truncated, e.g. %{sha256:80} returns only first 80 bits
#mail_attachment_hash = %{sha1}
To read the mail of system users (namespace system_users), you must grant read and write access for everyone on the /var/mail/ folder and on each user's mail file.
/etc/dovecot/conf.d/10-master.conf
#default_process_limit = 100
#default_client_limit = 1000
#default_vsz_limit = 256M
#default_login_user = dovenull
#default_internal_user = dovecot
service imap-login {
inet_listener imap {
address = 127.0.0.1
port = 143
#ssl = yes
}
inet_listener imaps {
#port = 993
port = 0
#ssl = yes
}
#service_count = 1
#process_min_avail = 0
#vsz_limit = $default_vsz_limit
}
#service pop3-login {
#inet_listener pop3 {
#port = 110
#}
#inet_listener pop3s {
#port = 995
#ssl = yes
#}
#}
service lmtp {
unix_listener lmtp {
path = /var/spool/postfix/private/dovecot-lmtp
group = postfix
mode = 0660
user = postfix
##mode = 0666
}
#unix_listener /var/spool/postfix/private/dovecot-lmtp {
# group = postfix
# mode = 0660
# user = postfix
# }
# process_min_avail = 5
executable = lmtp -L
}
service imap {
#vsz_limit = $default_vsz_limit
# Max. number of IMAP processes (connections)
#process_limit = 1024
#executable = imap
}
#service pop3 {
# Max. number of POP3 processes (connections)
#process_limit = 1024
#}
service auth {
unix_listener auth {
path = /var/spool/postfix/private/auth
mode = 0660
user = postfix
group = postfix
}
user = $default_internal_user
}
service auth-worker {
user = $default_internal_user
}
# Detail Process title in ps
#verbose_proctitle = yes
#service dict {
#unix_listener dict {
#}
#}
/etc/dovecot/conf.d/10-ssl.conf
#ssl = yes
ssl = required
ssl_cert = </etc/dovecot/ssl.cert.pem
ssl_key = </etc/dovecot/ssl.key.pem
#ssl_key_password =
#ssl_ca =
#ssl_require_crl = yes
#ssl_verify_client_cert = no
#auth_ssl_username_from_cert=yes.
#ssl_cert_username_field = commonName
#ssl_parameters_regenerate = 168
ssl_parameters_regenerate = 0
ssl_protocols = !SSLv2
#ssl_cipher_list = ALL:!LOW:!SSLv2:!EXP:!aNULL
#ssl_crypto_device =
/etc/dovecot/conf.d/10-tcpwrapper.conf
# service name for hosts.{allow|deny} are those defined as
# inet_listener in master.conf
#
#login_access_sockets = tcpwrap
#
#service tcpwrap {
# unix_listener login/tcpwrap {
# group = $default_login_user
# mode = 0600
# user = $default_login_user
# }
#}
/etc/dovecot/conf.d/15-lda.conf
postmaster_address = postmaster@dest.loc
hostname = dest.loc
#quota_full_tempfail = no
#sendmail_path = /usr/sbin/sendmail
#submission_host =
#rejection_subject = Rejected: %s
# %n = CRLF, %r = reason, %s = original subject, %t = recipient
rejection_reason = Your message to <%t> was automatically rejected:%n%r
#recipient_delimiter = +
#lda_original_recipient_header =
#lda_mailbox_autocreate = no
#lda_mailbox_autosubscribe = no
protocol lda {
mail_plugins = sieve virtual
# log_path = /var/log/mail-dovecot-lda-errors.log
# info_log_path = /var/log/mail-dovecot-lda.log
# auth_socket_path = /var/run/dovecot/auth-master
# auth_socket_path = auth-userdb
# global_script_path = /var/lib/dovecot/sieve/global/globalsieverc
}
Disable 15-mailboxes.conf, since we are not using it yet:
mv /etc/dovecot/conf.d/15-mailboxes.conf /etc/dovecot/conf.d/15-mailboxes.conf_
/etc/dovecot/conf.d/20-imap.conf
protocol imap {
mail_plugins = $mail_plugins imap_acl imap_quota mail_log notify acl quota virtual
ssl_cert = </etc/dovecot/ssl.cert.pem
ssl_key = </etc/dovecot/ssl.key.pem
info_log_path = /var/log/dovecot-imap.log
#imap_max_line_length = 64k
#mail_max_userip_connections = 10
# IMAP logout format string:
# %i - total number of bytes read from client
# %o - total number of bytes sent to client
#imap_logout_format = bytes=%i/%o
#imap_capability =
#imap_idle_notify_interval = 2 mins
#imap_id_send =
#imap_id_log =
# Workarounds for various client bugs:
# delay-newmail:
# tb-extra-mailbox-sep:
# tb-lsub-flags:
# The list is space-separated.
#imap_client_workarounds =
}
/etc/dovecot/conf.d/20-lmtp.conf
#lmtp_proxy = no
#lmtp_save_to_detail_mailbox = no
protocol lmtp {
mail_plugins = $mail_plugins quota sieve virtual
postmaster_address = postmaster@dest.loc
#info_log_path = /var/log/dovecot-lmtp.log
}
/etc/dovecot/conf.d/20-managesieve.conf
service managesieve-login {
inet_listener sieve {
address = 127.0.0.1
port = 4190
}
service_count = 1
#process_min_avail = 0
vsz_limit = 64M
}
#service managesieve {
# Max. number of ManageSieve processes (connections)
#process_count = 1024
#}
mail_plugins = virtual
protocol sieve {
#managesieve_max_line_length = 65536
#mail_max_userip_connections = 10
#mail_plugins = virtual
# MANAGESIEVE logout format string:
# %i - total number of bytes read from client
# %o - total number of bytes sent to client
#managesieve_logout_format = bytes=%i/%o
#managesieve_implementation_string = Dovecot Pigeonhole
#managesieve_sieve_capability =
#managesieve_notify_capability =
#managesieve_max_compile_errors = 5
}
/etc/dovecot/conf.d/90-acl.conf
plugin {
#acl = vfile:/etc/dovecot/acl/%d:cache_secs=300
acl = vfile:/etc/dovecot/acl/%d
}
plugin {
#acl_shared_dict = file:/var/lib/dovecot/shared-mailboxes
}
Here "%d" is a folder whose name matches the domain name.
Just below are a couple of examples of ACL settings for all users.
The following setting denies the user everything except reading/listing/deleting, but allows saving by the LDA.
In the directory named after the domain, specify the ACL for the ".Sent" folder.
/etc/dovecot/acl/dest.loc/Sent
owner lrwstpe
The remaining folders can be configured in the same way.
Close access for everyone except the LDA to the ".dovecot.lda-dupes" file, which may appear in the MUA as a phantom "lda-dupes" folder:
mkdir /etc/dovecot/acl/dest.loc/dovecot/
IMPORTANT! The dot is interpreted as a subfolder marker.
/etc/dovecot/acl/dest.loc/dovecot/lda-dupes
anyone rp
/etc/dovecot/conf.d/90-plugin.conf
plugin {
# mail_plugins = $mail_plugins mail_log notify acl quota
# For Plugin mail_log:
mail_log_events = copy
mail_log_fields = uid box msgid size
}
/etc/dovecot/conf.d/90-quota.conf
plugin {
quota = dict:user::file:/opt/mail/vmail/%d/%n/dovecot-quota
quota_rule = *:storage=1GB
quota_rule2 = Trash:storage=+10%%
}
# Note that % needs to be escaped as %%, otherwise "% " expands to empty.
plugin {
#quota_warning = storage=95%% quota-warning 95 %u
#quota_warning2 = storage=80%% quota-warning 80 %u
}
#service quota-warning {
# executable = script /usr/local/bin/quota-warning.sh
# user = dovecot
# unix_listener quota-warning {
# user = vmail
# }
#}
plugin {
#quota = dirsize:User quota
#quota = maildir:User quota
#quota = dict:User quota::proxy::quota
#quota = fs:User quota
}
plugin {
#quota = dict:user::proxy::quota
#quota2 = dict:domain:%d:proxy::quota_domain
#quota_rule = *:storage=102400
#quota2_rule = *:storage=1048576
}
Here the user quota is set to 1 GB, with an extra 10 percent of the total quota for the Trash folder.
Sieve
A very useful plugin for sorting mail on the server side (before the user even opens a mail client).
All global scripts and configuration files will live in the corresponding folders under: /opt/mail/dovecot/sieve/global/
All personal settings and files will live either in the users' folders or in the corresponding folders under: /opt/mail/dovecot/sieve/private/
Create the necessary folders and set access permissions:
mkdir /opt/mail/dovecot/sieve/
mkdir /opt/mail/dovecot/sieve/global/
mkdir /opt/mail/dovecot/sieve/private/
chown -hR root:root /opt/mail/dovecot/sieve/
chown -hR root:root /opt/mail/dovecot/sieve/global/
chown -hR vmail:root /opt/mail/dovecot/sieve/private/
chmod -R 755 /opt/mail/dovecot/sieve/
chmod -R 755 /opt/mail/dovecot/sieve/global/
chmod -R 700 /opt/mail/dovecot/sieve/private/
After the remaining settings (in the config below), the folders inside "/opt/mail/dovecot/sieve/private/" (including the domain directory) will be created automatically the first time a user accesses a filter in the settings of the RoundCube MUA.
/etc/dovecot/conf.d/90-sieve.conf
plugin {
# sieve_user_log = /var/lib/dovecot/sieve/private/%d/%n/.main.personal.log
sieve = /opt/mail/dovecot/sieve/private/%d/%n/.main.personal.sieve
#sieve_default = /opt/mail/dovecot/sieve/default.sieve
sieve_dir = /opt/mail/dovecot/sieve/private/%d/%n
sieve_global_dir = /opt/mail/dovecot/sieve/global/
#sieve_before2 =
sieve_before = /opt/mail/dovecot/sieve/global/incoming_deduplicate.sieve
#sieve_after =
#sieve_after2 =
sieve_extensions = +editheader
sieve_global_extensions = +vnd.dovecot.duplicate
sieve_duplicate_period = 1d
#sieve_plugins =
recipient_delimiter = +
#sieve_max_script_size = 1M
#sieve_max_actions = 32
#sieve_max_redirects = 4
#sieve_quota_max_scripts = 0
#sieve_quota_max_storage = 0
}
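For reference, a user's personal script at ".main.personal.sieve" can start out as simple as the sketch below (the folder name "Lists" and the List-Id value are hypothetical examples, not part of this setup):

```sieve
require ["fileinto"];

# File mailing-list traffic into a separate folder (names are examples)
if header :contains "List-Id" "lists.example.org" {
    fileinto "Lists";
}
```

In practice, users will manage these scripts through the RoundCube filter interface over the ManageSieve port configured above.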
A bit more detail on the global sieve script "incoming_deduplicate.sieve" and on controlling all outgoing mail.
This configuration saves all outgoing mail, including mail sent from unregistered mailboxes or from an external MUA. The implementation consists of Sieve scripts in Dovecot plus some Postfix settings.
The essence of the method is that for any message sent through Postfix an extra copy is created and delivered to the sender (only if the sender is in the "home" domain). If the sender is unregistered (absent from the Postfixadmin database), the BCC copy is delivered to a single mailbox for unknown senders. This copy is marked right in Postfix: the "Delivered-To" delivery address in the message header is modified to carry a "+bccflag" marker, roughly "user+bccflag@dest.loc" (for unregistered senders the marked copy goes to the shared catch-all mailbox). When the message is delivered to the mailbox (through Dovecot), a Sieve script catches these marked messages and runs them through a temporary duplicate-tracking database using the vnd.dovecot.duplicate extension (duplicates are tracked by the message's unique ID). In addition, messages forcibly "re-sent" from the "Sent" folder are run through the same duplicate database. For this, messages sent by the "native" MUA are inspected (incron "catches" them when they appear in the "Sent" folder) and, if they carry no marker, they are marked (the header is modified by adding "X-Deduplicate: IMAP refiltering") and re-delivered through Dovecot-LDA into the Inbox, where they are recognized as subject to the duplicate check. Whichever marked message (via Postfix or incron) arrives first is the one that ultimately stays in "Sent"; subsequent duplicates are destroyed.
A side effect of the whole scheme is a problem with forcibly moving messages into "Sent" through the MUA (for example with the mouse in the web interface): some such messages simply end up in the Inbox. This problem can be partially solved through Dovecot ACLs.
The modules involved in this process:
• Postfix - the sender_bcc_maps parameter of the "main.cf" config
• Postfix - the files /etc/postfix/mysql_bcc_mailbox_maps.cf and /etc/postfix/mysql_bcc_domain_maps.cf, for the sender_bcc_maps parameter
• Postfix - the recipient_delimiter parameter (the "main.cf" config)
• Dovecot - the sieve_before, sieve_extensions, sieve_global_extensions and other parameters of the "90-sieve.conf" config
• Dovecot - the recipient_delimiter parameter (the "15-lda.conf" config)
• Dovecot - the recipient_delimiter parameter (the "90-sieve.conf" config)
• Dovecot - the Sieve script "incoming_deduplicate.sieve" that checks incoming mail for duplicates.
• Dovecot - the Sieve script "move_to_lda_refiltering.sieve" that re-sends a message from "Sent" to the Inbox through Dovecot-LDA.
• Roundcube - recognition of the marked "double" address, the parameter "$CONF['recipient_delimiter']" - described below
• Postfixadmin - the mailbox add/delete post-processing scripts, for the files in the service folder "incron_mailuser_monitor"
• incron - the job "/etc/incron.d/00-del_mailuser_dir_monitor" watching for file deletions in the service folder "/opt/mail/vmail/incron_mailuser_monitor".
• incron - the job "/etc/incron.d/00-add_mailuser_dir_monitor" watching for file additions in the service folder "/opt/mail/vmail/incron_mailuser_monitor".
• The script "/etc/dovecot/add_del_mailuser_monitor.sh", launched from the jobs "/etc/incron.d/00-del_mailuser_dir_monitor" and "/etc/incron.d/00-add_mailuser_dir_monitor".
• incron - the job files "/etc/incron.d/maildomain@mailuser" watching the users' "Sent" folders (one file per mailbox). These jobs are created/removed by the "add_del_mailuser_monitor.sh" script.
• The script "/etc/dovecot/sent_refilter.sh", launched from the jobs watching the users' "Sent" folders.
IMPORTANT! I recommend renaming the service folders for better security.
Add the global script that will run on the incoming-mail event. This is the Sieve script for the sieve_before parameter of the "90-sieve.conf" config
/opt/mail/dovecot/sieve/global/incoming_deduplicate.sieve
require ["vnd.dovecot.duplicate", "editheader", "fileinto", "envelope", "subaddress", "imap4flags"];
if allof (not exists "X-Deduplicate", anyof (envelope :detail "to" "bccflag", not exists "Delivered-To")) {
if duplicate {
discard;
stop;
}
else {
if not exists "Delivered-To" {
# addheader :last "X-Deduplicate" "IMAP refiltering";
addheader "X-Deduplicate" "IMAP refiltering";
setflag "\\flagged";
}
else {
# addheader :last "X-Deduplicate" "bccflag";
addheader "X-Deduplicate" "bccflag";
}
fileinto "Sent";
}
}
Create the Sieve script for processing the "Sent" mail folders; it will be run using the sieve-test tool
/opt/mail/dovecot/sieve/global/move_to_lda_refiltering.sieve
if not exists "X-Deduplicate" {
discard;
stop;
}
Compile the scripts, checking that there are no errors:
sievec /opt/mail/dovecot/sieve/global/incoming_deduplicate.sieve
sievec /opt/mail/dovecot/sieve/global/move_to_lda_refiltering.sieve
Install incron:
yum install incron
systemctl enable incrond
systemctl start incrond
Allow root in /etc/incron.allow
root
Create the service folder. PostfixAdmin will create and delete empty files with the appropriate names in this folder when mailboxes are created and deleted. After that we set up a watch on this folder (through incron).
mkdir /opt/mail/vmail/incron_mailuser_monitor
chmod 670 /opt/mail/vmail/incron_mailuser_monitor
chown root:apache /opt/mail/vmail/incron_mailuser_monitor
The file name format for each mailbox in this folder must be the following:
"maildomain@mailuser".
I.e. it is the e-mail address with the domain and user swapped (for example, for "user@dest.loc" the file name would be "dest.loc@user").
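The transformation can be sketched as a small bash helper, using the same `${1//@/ }` split trick as the hook scripts below (the function name is mine):

```shell
#!/bin/bash
# Convert "user@domain" into the "domain@user" monitor-file name:
# replace "@" with a space, word-split into a bash array, swap parts.
swap_mail_name() {
  local parts=(${1//@/ })
  printf '%s@%s' "${parts[1]}" "${parts[0]}"
}
```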
Uncomment the last lines in the PostfixAdmin post-processing scripts; they add/remove the dummy files in the service folder in step with mailbox creation/deletion
/opt/www/html/postfixadmin/scripts/addmail.sh
...
if [[ -d /opt/mail/vmail/incron_mailuser_monitor/ ]]
then
userdomain=(${1//@/ })
touch /opt/mail/vmail/incron_mailuser_monitor/${userdomain[1]}@${userdomain[0]}
fi
/opt/www/html/postfixadmin/scripts/delmail.sh
...
if [[ -d /opt/mail/vmail/incron_mailuser_monitor/ ]]
then
userdomain=(${1//@/ })
if [[ -f /opt/mail/vmail/incron_mailuser_monitor/${userdomain[1]}@${userdomain[0]} ]]
then
rm /opt/mail/vmail/incron_mailuser_monitor/${userdomain[1]}@${userdomain[0]}
fi
fi
When a file with a mailbox name (in the format described above) appears in or disappears from the service folder "incron_mailuser_monitor", the script "add_del_mailuser_monitor.sh" is launched with the file name among its arguments. The key point is that this script runs as "root"; that is the whole reason for the intermediate "service folder" idea.
Create the incron job watching for file deletions in the service folder
/etc/incron.d/00-del_mailuser_dir_monitor
/opt/mail/vmail/incron_mailuser_monitor/ IN_DELETE /etc/dovecot/add_del_mailuser_monitor.sh $# del
Create the incron job watching for file additions in the service folder
/etc/incron.d/00-add_mailuser_dir_monitor
/opt/mail/vmail/incron_mailuser_monitor/ IN_CREATE /etc/dovecot/add_del_mailuser_monitor.sh $# add
Restart incron:
systemctl restart incrond
IMPORTANT! The appearance of a file in the service folder triggers the creation of an incron job watching the "Sent" folder of the corresponding mailbox, while the deletion of a file in the service folder triggers not only the removal of that job but also the automatic deletion of the mailbox's Maildir folder with all its mail!
Now create the "add_del_mailuser_monitor.sh" script itself. When a new mailbox appears, this script adds an incron job that watches the mailbox's "Sent" folder (".../.Sent/cur/"). When the mailbox is deleted, it removes that job from incron.
/etc/dovecot/add_del_mailuser_monitor.sh
#!/bin/bash
if [[ "$2" == del ]]
then
userdomain=(${1//@/ })
if [[ -f /etc/incron.d/$1 ]]
then
rm /etc/incron.d/$1
/bin/systemctl restart incrond
fi
if [[ -d /opt/mail/vmail/${userdomain[0]}/ ]]
then
if [[ -d /opt/mail/vmail/${userdomain[0]}/${userdomain[1]}/ ]]
then
rm -rf /opt/mail/vmail/${userdomain[0]}/${userdomain[1]}
fi
fi
fi
if [[ "$2" == add ]]
then
userdomain=(${1//@/ })
# /usr/libexec/dovecot/dovecot-lda -d ${userdomain[1]}@${userdomain[0]} -p /etc/dovecot/mail_for_new
sleep 3s
printf "/opt/mail/vmail/${userdomain[0]}/${userdomain[1]}/.Sent/cur/ IN_MOVED_TO /etc/dovecot/sent_refilter.sh /opt/mail/vmail/${userdomain[0]}/${userdomain[1]}/ ${userdomain[1]}@${userdomain[0]} \$#" >> /etc/incron.d/$1
/bin/systemctl restart incrond
fi
"printf ..." - должен быть одной строкой без переносов!!!
sleep 3s - 3 секундная задержка нужна, чтобы Dovecot успел создать Maildir-папки.
Если Maildir-папки не успеют создаться, то новое задание наблюдения за папкой "Отправленные" будет выдавать ошибку о том. что incron не может найти наблюдаемую папку. В этом случае необходимо было бы каждый раз перезапускать incron.
/usr/libexec/dovecot/dovecot-lda ... - закоментированная строка отправляет приветственное письмо через Dovecot-LDA. Если это будет использовано, то такое письмо нужно будет создать (в файле "/etc/dovecot/mail_for_new").
ВАЖНО! Строки, отвечающие за удаление Maildir-каталога стоит использовать только если есть уверенность в полной безопасности (выделено жирным шрифтом).
Now create the script that will be invoked by events in the "Sent" folders.
/etc/dovecot/sent_refilter.sh
#!/bin/bash
# $1 = Maildir path, $2 = mailbox address, $3 = message file name
qresult="$(sieve-test -l $1 /opt/mail/dovecot/sieve/global/move_to_lda_refiltering.sieve $1/.Sent/cur/$3)"
if [[ "$qresult" == *discard* ]]
then
    # Re-deliver through Dovecot-LDA, then drop the original copy
    /usr/libexec/dovecot/dovecot-lda -d $2 -p $1/.Sent/cur/$3
    rm $1/.Sent/cur/$3
    doveadm index -u $2 mailbox Sent
fi
Set the permissions:
chmod 700 /etc/dovecot/add_del_mailuser_monitor.sh
chmod 700 /etc/dovecot/sent_refilter.sh
From now on, every outgoing message, no matter how or with what it was sent (including external MUAs), ends up in the "Sent" folder of the account it was sent from, while messages sent on behalf of unregistered users land in the special mailbox set aside for such mail. This gives you tight control over outgoing mail, including detecting mail tools hijacked by spammers.
To test, create a user mailbox and send someone a test message from a PHP script. The message should appear in the mailbox's "Sent" folder. Then delete the mailbox and send a message from the PHP script again. This time it should appear in the mailbox reserved for mail sent by unaccounted (mailbox-less) senders.
Next.
/etc/dovecot/conf.d/auth-sql.conf.ext
#passdb {
# driver = passwd-file
# args = username_format=%u /var/vmail/auth.d/%d/passwd
#}
# Master-user:
auth_master_user_separator = *
#auth_debug = yes
passdb {
driver = sql
args = /etc/dovecot/dovecot-sql-master.conf.ext
master = yes
pass = yes
}
passdb {
driver = sql
args = /etc/dovecot/dovecot-sql.conf.ext
# default_fields = userdb_gid=5000 userdb_uid=5000
}
userdb {
driver = prefetch
}
userdb {
driver = sql
args = /etc/dovecot/dovecot-sql.conf.ext
# default_fields = uid=5000 gid=5000
}
Create the SQL configs.
/etc/dovecot/dovecot-sql.conf.ext
driver = mysql
connect = host=127.0.0.1 dbname=_POSTFIXADMIN_SQL_BASE_ user=_POSTFIXADMIN_SQL_USER_ password=_POSTFIXADMIN_SQL_PASSWORD_
#default_pass_scheme = PLAIN-MD5
default_pass_scheme = MD5-CRYPT
# %u = entire user@domain
# %n = user part of user@domain
# %d = domain part of user@domain
password_query = SELECT username as user, password, '%u' AS userdb_master_user, CONCAT('/opt/mail/vmail/', maildir) AS userdb_home, 5000 AS userdb_uid, 5000 AS userdb_gid, CONCAT('*:storage=', quota, 'B') as userdb_quota_rule FROM mailbox WHERE username = '%u' AND active = '1'
user_query = SELECT CONCAT('/opt/mail/vmail/', maildir) AS home, 5000 AS uid, 5000 AS gid, CONCAT('*:storage=', quota, 'B') as quota_rule FROM mailbox WHERE username = '%u' AND active = '1'
IMPORTANT!
password_query = ... and user_query = ... must each be a SINGLE line (no line breaks)!
Also, use only "127.0.0.1" in this config: with "localhost" the connection goes to the unix file socket instead. The address must match the one specified in the MySQL configuration (e.g. /etc/my.cnf).
/etc/dovecot/dovecot-sql-master.conf.ext
driver = mysql
#default_pass_scheme = PLAIN-MD5
default_pass_scheme = MD5-CRYPT
connect = host=127.0.0.1 dbname=_POSTFIXADMIN_SQL_BASE_ user=_POSTFIXADMIN_SQL_USER_ password=_POSTFIXADMIN_SQL_PASSWORD_
password_query = SELECT username AS user, password FROM admin WHERE username = '%u' AND active = '1'
Disable the configs that are not used yet:
mv /etc/dovecot/conf.d/auth-checkpassword.conf.ext /etc/dovecot/conf.d/auth-checkpassword.conf.ext_
mv /etc/dovecot/conf.d/auth-deny.conf.ext /etc/dovecot/conf.d/auth-deny.conf.ext_
mv /etc/dovecot/conf.d/auth-master.conf.ext /etc/dovecot/conf.d/auth-master.conf.ext_
mv /etc/dovecot/conf.d/auth-passwdfile.conf.ext /etc/dovecot/conf.d/auth-passwdfile.conf.ext_
mv /etc/dovecot/conf.d/auth-static.conf.ext /etc/dovecot/conf.d/auth-static.conf.ext_
mv /etc/dovecot/conf.d/auth-system.conf.ext /etc/dovecot/conf.d/auth-system.conf.ext_
mv /etc/dovecot/conf.d/auth-vpopmail.conf.ext /etc/dovecot/conf.d/auth-vpopmail.conf.ext_
Set the owner and permissions on the configs:
chgrp vmail /etc/dovecot/*.conf
chmod g+r /etc/dovecot/*.conf
chgrp vmail /etc/dovecot/*.ext
chmod g+r /etc/dovecot/*.ext
chgrp vmail /etc/dovecot/conf.d/*.conf
chmod g+r /etc/dovecot/conf.d/*.conf
chgrp vmail /etc/dovecot/conf.d/*.ext
chmod g+r /etc/dovecot/conf.d/*.ext
Certificates
If you have valid certificates, or can obtain them, it is better to use those. Here we cover the option of self-signed certificates.
Create a certificate for Postfix:
openssl req -new -x509 -days 3650 -nodes -out /etc/postfix/ssl.cert.pem -keyout /etc/postfix/ssl.key.pem
chmod o= /etc/postfix/ssl.key.pem
Create one for Dovecot:
openssl req -new -x509 -days 3650 -nodes -out /etc/dovecot/ssl.cert.pem -keyout /etc/dovecot/ssl.key.pem
chmod 444 /etc/dovecot/ssl.cert.pem
chmod 400 /etc/dovecot/ssl.key.pem
chown root:root /etc/dovecot/ssl.cert.pem
chown root:root /etc/dovecot/ssl.key.pem
Testing SSL
openssl s_client -tls1 -crlf -connect localhost:25
openssl s_client -tls1 -crlf -connect mail.dest.loc:25
openssl s_client -starttls smtp -showcerts -connect localhost:25
openssl s_client -starttls smtp -showcerts -connect mail.dest.loc:25
openssl s_client -starttls smtp -crlf -connect mail.dest.loc:25
openssl s_client -starttls imap -crlf -connect localhost:143
The last command may fail if IMAP on port 143 for localhost is configured without TLS.
Restart dovecot:
systemctl restart dovecot
Check that dovecot works.
Authentication check, where _USER_ is the full mailbox name and _PASSWORD_ is the password:
doveadm auth -x service=imap -x rip=127.0.0.1 _USER_ _PASSWORD_
Configuration check:
doveconf -a
There should be no errors at the end of the output.
Installing the Roundcube web client
Install the additional packages needed:
yum install php-xml php-intl php-gd php-pear ImageMagick ImageMagick-devel java
Edit /etc/php.ini:
error_reporting = E_ALL & ~E_NOTICE & ~E_STRICT
memory_limit = 128M
post_max_size = 16M
file_uploads = On
upload_max_filesize = 10M
date.timezone = Europe/Moscow ; set your time zone
session.auto_start = 0
mbstring.func_overload = 0
Add the imagick module:
pecl install imagick
The module build will start. To the question "Please provide the prefix of Imagemagick installation" answer "all" and press Enter. The process ends with lines similar to:
Build process completed successfully
install ok: channel://pecl.php.net/imagick-3.0.1
You should add "extension=imagick.so" to php.ini
Add the module to the PHP configuration:
echo "extension=imagick.so" > /etc/php.d/imagick.ini
Restart the web server:
systemctl restart httpd
cd /opt/www/html
Download the latest stable release from the developer. At the time of writing it was 1.3.2:
https://github.com/roundcube/roundcubemail/releases/download/1.3.2/roundcubemail-1.3.2-complete.tar.gz
Unpack the files into /opt/www/html.
Set the owner:
chown -R apache:apache /opt/www/html/*
Install composer, following the instructions at https://getcomposer.org/download/
Then run:
cp /opt/www/html/composer.json-dist /opt/www/html/composer.json
In composer.json, move the line "kolab/Net_LDAP3": "dev-master" from the "suggest" section to the "require" section. We will need it later for the corporate address book.
Then run:
php composer.phar install --no-dev
Check that these directories exist, with the right permissions:
/opt/www/html/logs
/opt/www/html/temp
The web server must be able to write to both of them.
Next, create the database for Roundcube:
#mysql -p
> CREATE DATABASE roundcubemail DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
> GRANT ALL PRIVILEGES ON roundcubemail.* to roundcube@localhost IDENTIFIED BY '_ROUNDCUBE_SQL_PASSWORD_';
quit
Then create the tables in that database:
mysql -u roundcube -p roundcubemail < /opt/www/html/SQL/mysql.initial.sql
The preparatory steps are done, and we can move on to configuring Roundcube. Open the installer page in a browser: http://mail.dest.loc/installer/. On this diagnostics page every item except the not-yet-initialized databases should show "Ok". If not, see what it complains about and fix it. Once everything is fixed, press "NEXT" to go to the config-creation page and fill in the fields:
• product_name - the organization name, shown on the client page
• database password - the password of the roundcube user created above
• default_host - tls://127.0.0.1
• smtp_server - tls://127.0.0.1
• language - ru_RU
• Select the plugins you want to enable
Press "Create config" and the configuration file is written. Press "Continue" to go to the test page.
On that page check that everything shows "Ok" and test the SMTP and IMAP connections. If all is well, the basic setup is finished and the mail client can be opened on the site's main page.
All that remains is configuring the various plugins; they are outside the scope of this article.
To allow access from mail clients, enable IMAPS. Open /etc/dovecot/conf.d/10-master.conf and set:
service imap-login {
inet_listener imap {
address = 127.0.0.1
port = 143
}
inet_listener imaps {
address = 192.168.20.104
port = 993
ssl = yes
}
}
So that users can send mail through postfix with desktop mail clients, comment out the SMTP restriction line in /etc/postfix/main.cf (it can be tuned to your needs later):
# check_helo_access hash:/etc/postfix/hello_access,
Then restart Postfix.
Open the corresponding port in the firewall:
firewall-cmd --permanent --add-port=993/tcp
firewall-cmd --reload
We now have a complete mail server on which users are managed (added, removed, configured) through the Postfixadmin web interface. But what if the network already runs an Active Directory structure with many users, each of whom needs access to the mail service? Creating a mailbox for each of them by hand is not an option. The solution is described below.
Active Directory
Assumptions:
• For security reasons, services are reachable only over SSL;
• The Active Directory domain and forest functional level must be no higher than Windows 2008 R2;
• This version of the guide does not cover configurations for connecting Microsoft Exchange via MAPI or EWS;
• In this configuration, Microsoft Outlook 2003 and later can be used with mail over IMAP;
Important:
The LDAP server is accessed over plain ldap without encryption. For a Samba DC, disable mandatory ldaps in /etc/samba/smb.conf, section [global]:
ldap server require strong auth = no
Creating users in Active Directory.
Create a vmail user with a non-expiring account:
samba-tool user create -W Users vmail
samba-tool user setexpiry vmail --noexpiry
In the /etc/postfix directory, modify the files for the dest.loc domain:
main.cf
mailbox_command = /usr/libexec/dovecot/dovecot-lda -f "$SENDER" -a "$RECIPIENT"
virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_maps.cf,ldap:/etc/postfix/ad_local_recipients.cf
virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_domain_maps.cf,mysql:/etc/postfix/mysql_virtual_alias_maps.cf,ldap:/etc/postfix/ad_mail_groups.cf
local_transport = virtual
local_recipient_maps = $virtual_mailbox_maps
smtpd_use_tls = yes
#smtpd_tls_security_level = encrypt
smtpd_tls_security_level = may
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain = dest.loc
smtpd_sasl_path = private/auth
smtpd_sasl_type = dovecot
smtpd_sender_login_maps = ldap:/etc/postfix/ad_sender_login.cf
#smtpd_tls_auth_only = yes
#smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, permit_sasl_authenticated, reject
#smtpd_sender_restrictions = reject_authenticated_sender_login_mismatch
master.cf
dovecot unix - n n - - pipe
flags=DRhu user=mail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient}
smtps inet n - n - - smtpd
-o smtpd_tls_wrappermode=yes
-o smtpd_sasl_auth_enable=yes
-o smtpd_client_restrictions=permit_sasl_authenticated,reject
Create the file /etc/postfix/ad_local_recipients.cf:
version = 3
server_host = dest-dc-01.dest.loc:389
search_base = ou=Firma,dc=dest,dc=loc
scope = sub
query_filter = (&(|(mail=%s)(otherMailbox=%u@%d))(sAMAccountType=805306368))
result_filter = %s
result_attribute = mail
special_result_attribute = member
bind = yes
bind_dn = cn=vmail,cn=users,dc=dest,dc=loc
bind_pw = Pa$$word
Create the file /etc/postfix/ad_mail_groups.cf:
version = 3
server_host = dest-dc-01.dest.loc:389
search_base = ou=Firma,dc=dest,dc=loc
timeout = 3
scope = sub
#query_filter = (&(mail=%s)(sAMAccountType=268435456))
query_filter = (&(objectclass=group)(mail=%s))
result_filter = %s
#result_attribute = mail
leaf_result_attribute = mail
special_result_attribute = member
bind = yes
bind_dn = cn=vmail,cn=users,dc=dest,dc=loc
bind_pw = Pa$$word
Create the file /etc/postfix/ad_sender_login.cf:
version = 3
server_host = dest-dc-01.dest.loc:389
search_base = ou=Firma,dc=dest,dc=loc
scope = sub
query_filter = (&(objectClass=user)(|(sAMAccountName=%s)(mail=%s)))
result_attribute = mail
bind = yes
bind_dn = cn=vmail,cn=users,dc=dest,dc=loc
bind_pw = Pa$$word
In this setup the Active Directory user must have the "mail" and "otherMailbox" attributes filled in; they should be identical and contain the user name (preferably matching the login) at the local domain (e.g. test@dest.loc).
Check the Postfix configuration and restart it:
postconf >/dev/null
systemctl restart postfix
Check the LDAP queries.
Check that user test has a mailbox:
postmap -q test@dest.loc ldap:/etc/postfix/ad_local_recipients.cf
test@dest.loc
If everything is right, this prints the mail address of user test. Empty output means either the user attributes are filled in incorrectly or the query contains an error.
Check the sender login:
# postmap -q test@dest.loc ldap:/etc/postfix/ad_sender_login.cf
test@dest.loc
Create a distribution group in Active Directory and assign it a mail address in the local domain. Add users with filled-in mail attributes to it; the group then acts as a mailing list with that address. In our example the group contains the users test and test2. Check:
# postmap -q <group address> ldap:/etc/postfix/ad_mail_groups.cf
test@dest.loc,test2@dest.loc
Configure dovecot.
Create the file /etc/dovecot/dovecot-ldap.conf.ext:
hosts = dest-dc-01.dest.loc:3268
ldap_version = 3
auth_bind = yes
dn = cn=vmail,cn=Users,dc=dest,dc=loc
dnpass = Pa$$word
base = ou=Firma,dc=dest,dc=loc
scope = subtree
deref = never
user_filter = (&(objectClass=user)(|(mail=%Lu)(sAMAccountName=%Lu)))
user_attrs = =uid=5000,gid=5000,mail=user
pass_filter = (&(objectClass=user)(|(mail=%Lu)(sAMAccountName=%Lu)))
pass_attrs = mail=user
Important! The base value must not consist of dc components only, otherwise lookups will fail with 'Operation error'.
Make the following changes to the Dovecot configs.
10-auth.conf:
#auth_username_format = %Lu
#auth_gssapi_hostname = "$ALL"
#auth_krb5_keytab = /etc/dovecot/dovecot.keytab
#auth_use_winbind = no
#auth_winbind_helper_path = /usr/bin/ntlm_auth
#auth_failure_delay = 2 secs
auth_mechanisms = plain
!include auth-ldap.conf.ext
10-mail.conf
mail_location = maildir:/opt/mail/vmail/%d/%n:UTF-8:INBOX=/opt/mail/vmail/%d/%n/Inbox
mail_uid = 5000
mail_gid = 5000
first_valid_uid = 5
first_valid_gid = 5
10-master.conf
service imap-login {
inet_listener imap {
address = 127.0.0.1
port = 143
# port = 0
}
inet_listener imaps {
address = 192.168.20.104
port = 993
ssl = yes
}
}
service pop3-login {
inet_listener pop3 {
port = 0
}
inet_listener pop3s {
port = 0
}
}
service lmtp {
unix_listener lmtp {
path = /var/spool/postfix/private/dovecot-lmtp
group = postfix
mode = 0660
user = postfix
}
executable = lmtp -L
}
service imap {
}
service pop3 {
}
service auth {
unix_listener auth {
path = /var/spool/postfix/private/auth
mode = 0660
user = postfix
group = postfix
}
unix_listener auth-userdb {
}
# unix_listener /var/spool/postfix/private/auth {
# mode = 0600
# user = postfix
# group = postfix
# }
user = $default_internal_user
}
service auth-worker {
user = $default_internal_user
}
service dict {
unix_listener dict {
}
}
15-lda.conf
protocol lda {
hostname = dest.loc
postmaster_address = postmaster@dest.loc
}
15-mailboxes.conf
namespace inbox {
mailbox Drafts {
auto = subscribe
special_use = \Drafts
}
mailbox Junk {
auto = subscribe
special_use = \Junk
}
mailbox Trash {
auto = subscribe
special_use = \Trash
}
mailbox Sent {
auto = subscribe
special_use = \Sent
}
mailbox "Sent Messages" {
special_use = \Sent
}
}
Check the configuration and restart Dovecot:
doveconf >/dev/null
systemctl restart dovecot
Because these configuration files contain the LDAP user's password, they must be made unreadable to other users:
chown dovecot:root /etc/dovecot/dovecot-ldap.conf.ext
chmod 0640 /etc/dovecot/dovecot-ldap.conf.ext
chown root:postfix /etc/postfix/ad_local_recipients.cf /etc/postfix/ad_mail_groups.cf /etc/postfix/ad_sender_login.cf
chmod 0640 /etc/postfix/ad_local_recipients.cf /etc/postfix/ad_mail_groups.cf /etc/postfix/ad_sender_login.cf
Connecting an address book of local users. In a large organization, users usually need an address book that contains all of the organization's mail addresses. Let's set one up in the Roundcube client. Open /opt/www/html/config/config.inc.php and add the following lines:
// Active Directory Address Book
$config['ldap_public'] = array(
'MyAdLdap' =>
array (
'name' => 'Firma',
'hosts' =>
array (
0 => 'dest-dc-01.dest.loc, dest-dc-02.dest.loc',
),
'sizelimit' => 6000,
'port' => 389,
'use_tls' => false,
'user_specific' => false,
'base_dn' => 'ou=Firma,dc=dest,dc=loc',
'bind_dn' => 'vmail@dest.loc', // assumed bind identity for the vmail account; adjust to your environment
'bind_pass' => 'pa$$word',
'writable' => false,
'ldap_version' =>3,
'search_fields' =>
array (
0 => 'mail',
1 => 'cn',
),
'name_field' => 'cn',
'email_field' => 'mail',
'first_field' => 'givenName',
'sort' => 'sn',
'scope' => 'sub',
'filter' => '(&(mail=*)(|(&(objectClass=user)(!(objectClass=computer)))(objectClass=group)))',
'global_search' => true,
'fuzzy_search' => true,
),
);
Summary.
We now have a working mail server that uses the Active Directory database for local user addresses and MySQL for all other domains. Local users are managed through Active Directory, and all other users through PostfixAdmin.
Additional information
Read 25133 times. Last modified Friday, 10 November 2017 16:22
AVIO Consulting
Message Aggregation in Oracle SOA Suite 12c
Jan 6, 2015 | BPEL, Oracle, SOA
Within the Oracle SOA Suite, message aggregation is a concept that allows for multiple messages to be routed to the same BPEL process, based on a value(s) defined within the incoming payload. This is implemented within BPEL through the use of correlation sets.
To implement this, a correlation set is defined containing one or more properties. These properties have aliases to values within the input payload message (e.g. PurchaseOrderNumber), which are used to correlate messages.
property
Oracle SOA Suite 12c has provided a wizard that allows for easy definition of the correlation set, properties and property aliases which are all necessary for correlation to occur. Below are the steps required to configure your BPEL process to perform message aggregation via correlation set.
Configuration Steps
The first step is to create the Correlation Set. Right click on the initial receive of the process and select ‘Setup Correlation…’
setup correlation
For the initial receive in the process, select the ‘Create Correlation Set’ radio button. In this same screen, select to create the properties for the correlation set (plus icon, as shown below). This creates a row in which the name of the property and type can be changed from the default value to a name and type more appropriate for the solution. At this step, be sure to enter tab after changing the name, to ensure the name gets updated.
define correlation set
In the Initiate Settings, be sure to select ‘yes’ for the Initiate value. This designates that the activity will be the initiator of the correlation:
initiate settings
The properties defined in the first step will now be mapped to the fields within the BPEL process input message, which will be used for the correlation. Each property defined will need to be mapped to an alias (an element in the input payload). There are two ways to do this within the ‘Property Aliases’ step of the wizard. The first way is to select the pencil icon, which brings up the property alias editor. From there, select the field(s) to map to.
property aliases
The second method is by using the Drag and Drop editor (icon next to pencil):
correlation wizard
Once this editor is opened, select the field from the process variable into the property alias:
process variable
The next step in the wizard is to add activities that will be participating in the correlation. For message aggregation to work, a mid-process receive needs to be defined to receive the subsequent messages sent after the initial message.
If mid-process receive has been defined within the BPEL process, select the add icon, then choose this receive activity within the activity browser, then next, finish.
Otherwise, if it has not been defined yet, select next, and then finish. Once the mid-process receive has been defined, right click on it, select ‘Setup Correlation…’ (similar to the first step), then ‘Choose Existing Correlation Set’ to configure with the same correlation set as the initial receive:
existing correlation set
Once in the ‘Initiate Settings’ of the wizard, be sure the value for Initiate is set to ‘no’. Continue through the wizard, selecting ‘Next’, accepting the defaults, then ‘Finish’.
Now that the correlation is configured for the receive activities, the next step is to ensure the process waits for only a given period of time to receive subsequent messages. To do this, we will need to configure a timeout value within the mid-process receive. This will tell it how long to wait for any more messages before timing out. This can be done as shown, within the Timeout tab on the activity:
timeout
Also, since we are unsure of how many messages will be received, we will stick a while loop into the process. The catch block for the timeout fault (thrown when the receive activity timer is expired) will contain an assign activity (AssignEndOfProcess in example below) that will set the value that will cause the loop to be exited.
assign end of process
One last configuration that is required is to add a BPEL component property. This is an important step, which will enable aggregation to function correctly (further details on this in a bit).
To set the property, in the Composite editor, select the BPEL component. Make sure the Properties tab is visible in the bottom pane in JDeveloper:
visible
Within the Properties tab, select the add ‘plus’ icon. In the ‘Create Property’ dialog box, enter the following:
Name:
bpel.config.reenableAggregationOnComplete
Value:
true
Testing the Aggregation
When the composite is invoked multiple times, specifying the same value in each instance for the correlation value, the instances created in EM will look as shown below. Each time the composite was invoked (three times in this example), an instance is created. But notice, that only one shows as in a Running state, while the other two are already Completed:
testing the aggregation
What is happening is the instance in the Running state is receiving all the messages (aggregating them). The first message received initiates the instance, then the subsequent messages are received within the mid-process receive. Below is the trace for this test, showing exactly that:
fault messages
You’ll also notice the ‘Recovered’ State. This occurred because of the timeout exception that was thrown. Since it was caught, the state is showing as recovered.
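Stepping outside BPEL for a moment, the aggregation pattern described above (route messages that share a correlation key to one running instance until a timeout expires) can be sketched in a few lines. The Python sketch below is purely illustrative and not part of SOA Suite; all names are invented:

```python
import time

class Aggregator:
    """Collect messages sharing a correlation key until a timeout expires."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.instances = {}  # correlation key -> (messages, deadline)

    def receive(self, key, message, now=None):
        now = time.monotonic() if now is None else now
        if key not in self.instances:
            # First message initiates the "instance" and starts its timer
            self.instances[key] = ([message], now + self.timeout_s)
        else:
            msgs, deadline = self.instances[key]
            if now >= deadline:
                # Timer expired: the old instance is done; start a new one
                self.instances[key] = ([message], now + self.timeout_s)
            else:
                # Within the timeout window: aggregate into the running instance
                msgs.append(message)

    def collect(self, key):
        msgs, _ = self.instances.pop(key)
        return msgs

agg = Aggregator(timeout_s=30.0)
agg.receive("PO-123", "line 1", now=0.0)
agg.receive("PO-123", "line 2", now=5.0)   # same key, within timeout: aggregated
agg.receive("PO-999", "other", now=6.0)    # different key: separate instance
print(agg.collect("PO-123"))               # ['line 1', 'line 2']
```

This mirrors what EM shows above: the first message starts the one Running instance, and later messages with the same correlation value are absorbed by it rather than starting new work.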
I mentioned earlier that the component property was an important step in successfully implementing aggregation via correlation. If this property is not set, the correlation will work the first time, but after that, issues will arise. I will walk you through an example of when this happened.
When the initial test was run, the correlated values given (i.e. PONum = 123) caused the process to accept and aggregate the messages (like shown above) without issue. Once the process reached the timeout threshold and shut down, the subsequent invocations to the composite with the same correlation value (i.e. PONum = 123) resulted in the message being accepted, but not processed within the composite.
The below trace provides what this scenario looks like within the EM. The instance is showing as Completed, but did not traverse through the process:
trace messages
Within the soainfra.dlv_aggregation table it becomes evident why. The second record in the screenshot below is the instance from the earlier run (the initial test at 09:19:29), before the failed test above (at 09:21:33). The State value for this record is 0, when it should be 1 since the process completed:
state
When the composite completed, it did not change the state to 1 (due to the BPEL property not being set). When this happened, the next time the composite was invoked with the same correlation values, a new instance was not created.
By adding the component property, bpel.config.reenableAggregationOnComplete, this corrected the issue allowing subsequent instances to correlate and process correctly.
Below is the trace for the process run after the change was made. The State within the table is set to 1 (see first row in the table above, 09:56:35). The next run of the process with the same correlation values shows a different trace within EM, one that includes processing of the message:
trace
PerlMonks
Re: Golf: Embedded In Order
by japhy (Canon)
on Apr 27, 2001 at 18:01 UTC (#76095)
Option 1:
# 37 chars (between the {...})
sub seq{shift=~join'.*',map"\Q$_",split//,pop}
Option 2:
sub seq{($t=pop)=~s/./$&.*/sg;pop=~/$t/s}
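For readers who do not speak golfed Perl: both snippets test whether the characters of the second string appear, in order, anywhere inside the first. The first variant joins the escaped characters of the needle with `.*` and matches the result as a regex; the second builds the same pattern in place with a substitution. A rough Python equivalent of the first approach (function and variable names are mine, not from the thread):

```python
import re

def seq(haystack, needle):
    # Join the escaped characters of the needle with ".*":
    # "bed" becomes the pattern b.*e.*d
    pattern = ".*".join(re.escape(ch) for ch in needle)
    return re.search(pattern, haystack, re.S) is not None

print(seq("embedded", "bed"))   # True: 'b', 'e', 'd' appear in order
print(seq("embedded", "dab"))   # False: there is no 'a' at all
```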
Archive for September 2008
Programmer's Day
On September 12 the world's community of programmers, and with it all of progressive humanity, celebrates Programmer's Day, the unofficial holiday of those who breathe a "soul" into computer hardware by writing software.
Programmer's Day is celebrated on the 256th day of the year. The number 256 (two to the eighth power) was chosen because it is the number of values that can be expressed in one byte. In leap years the holiday falls on September 12, in other years on September 13.
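The 256th-day rule is easy to verify; for instance, with a small Python check (the helper name is mine):

```python
from datetime import date, timedelta

def programmers_day(year):
    # The 256th day of the year is January 1 plus 255 days
    return date(year, 1, 1) + timedelta(days=255)

print(programmers_day(2008))  # 2008-09-12 (leap year)
print(programmers_day(2007))  # 2007-09-13
```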
How do I update existing data in MySQL, and insert it if it is missing?
You often need to update a particular row in a MySQL table if it exists and insert it if it does not. The usual solution takes two steps: first check whether the data is there (a SELECT with the right conditions), then depending on the result either update it (UPDATE) or insert it (INSERT).
That construction is fairly laborious. In this case the MySQL REPLACE command is more efficient. Its syntax is equivalent to INSERT:
REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name [(col_name,...)]
VALUES (expression,...),(...),...
or REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name [(col_name,...)]
SELECT ...
or REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name
SET col_name=expression, col_name=expression,...
REPLACE works like this: if an existing row has the same value as the new row in a column with a PRIMARY KEY or UNIQUE index, the old row is deleted and the new one is inserted. If no such row exists, the new row is simply inserted.
There is also a variant using ON DUPLICATE KEY UPDATE:
INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
[INTO] tbl_name [(col_name,...)]
VALUES (expression,...),(...),...
[ ON DUPLICATE KEY UPDATE col_name=expression, ... ]
or INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
[INTO] tbl_name [(col_name,...)]
SELECT ...
or INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
[INTO] tbl_name
SET col_name=(expression | DEFAULT), ...
[ ON DUPLICATE KEY UPDATE col_name=expression, ... ]
If you specify ON DUPLICATE KEY UPDATE (new in MySQL 4.1.0) and inserting the row would cause a duplicate-key error on a PRIMARY or UNIQUE index, an UPDATE of the old row is performed instead. For example:
INSERT INTO table (a,b,c) VALUES (1,2,3) ON DUPLICATE KEY UPDATE c=c+1;
If column a is defined as UNIQUE and already contains 1, the command above is equivalent to:
UPDATE table SET c=c+1 WHERE a=1;
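The delete-then-insert semantics of REPLACE can be demonstrated without a MySQL server: SQLite's REPLACE INTO behaves the same way in this simple case. A sketch using Python's built-in sqlite3 module, with an illustrative table of my own:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER PRIMARY KEY, b TEXT)")

# The first REPLACE behaves like a plain INSERT: no row with a=1 exists yet.
conn.execute("REPLACE INTO t (a, b) VALUES (1, 'first')")
# The second REPLACE deletes the old a=1 row and inserts the new one.
conn.execute("REPLACE INTO t (a, b) VALUES (1, 'second')")

print(conn.execute("SELECT a, b FROM t").fetchall())  # [(1, 'second')]
```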
A calendar in PHP
The main difficulty in building a calendar is that a month usually starts in the middle of a week. The calendar itself is a table with as many rows as there are days in a week and as many columns as there are weeks in the month, so it is convenient to build the calendar directly in a two-dimensional array.
<?php
setlocale ( LC_ALL, '' );
// Название месяца
echo '<h3>'.strftime( '%B' ).'</h3>';
// Вычисляем число дней в текущем месяце
$dayofmonth = date('t');
// Счётчик для дней месяца
$day_count = 1;
// Первая неделя
$num = 0;
for( $i = 0; $i < 7; $i++ ) {
// Вычисляем номер дня недели для числа
$dayofweek = date('w', mktime(0, 0, 0, date('m'), $day_count, date('Y')));
// Приводим к числа к формату 1 - понедельник, ..., 6 - суббота
$dayofweek = $dayofweek - 1;
if($dayofweek == -1) $dayofweek = 6;
if($dayofweek == $i) {
// Если дни недели совпадают, заполняем массив $week числами месяца
$week[$num][$i] = $day_count;
$day_count++;
} else {
$week[$num][$i] = '';
}
}
// Последующие недели месяца
while( true ) {
$num++;
for( $i = 0; $i < 7; $i++ ) {
$week[$num][$i] = $day_count;
$day_count++;
// Stop once the end of the month is reached
if( $day_count > $dayofmonth ) break;
}
// Stop once the end of the month is reached
if( $day_count > $dayofmonth ) break;
}
// Today's day of the month
$today = date( 'j' );
// Output the contents of the $week array as a calendar
echo '<table border="1" style="border-collapse:collapse" cellpadding="5" cellspacing="0">';
for( $j = 0; $j < 7; $j++ ) {
echo '<tr align="right">';
for( $i = 0; $i < count($week); $i++ ) {
if( !empty( $week[$i][$j] ) ) {
if ( $week[$i][$j] == $today )
echo '<td bgcolor="#DDDDDD">';
else
echo '<td>';
// Highlight Saturday and Sunday
if( $j == 5 || $j == 6 )
echo '<span style="color:red">'.$week[$i][$j].'</span>';
else
echo $week[$i][$j];
echo '</td>';
} else {
echo '<td> </td>';
}
}
echo '</tr>';
}
echo '</table>';
?>
To switch the calendar output from a vertical to a horizontal week layout, only minor changes are needed in the fragment of code that renders the assembled array as a table:
<?php
setlocale ( LC_ALL, '' );
// Month name
echo '<h3>'.strftime( '%B' ).'</h3>';
// Determine the number of days in the current month
$dayofmonth = date('t');
// Counter for the days of the month
$day_count = 1;
// First week
$num = 0;
for( $i = 0; $i < 7; $i++ ) {
// Determine the day-of-week number for this date
$dayofweek = date('w', mktime(0, 0, 0, date('m'), $day_count, date('Y')));
// Convert to the format 0 - Monday, ..., 6 - Sunday
$dayofweek = $dayofweek - 1;
if($dayofweek == -1) $dayofweek = 6;
if($dayofweek == $i) {
// When the day of the week matches, fill the $week array with day numbers
$week[$num][$i] = $day_count;
$day_count++;
} else {
$week[$num][$i] = '';
}
}
// Remaining weeks of the month
while( true ) {
$num++;
for( $i = 0; $i < 7; $i++ ) {
$week[$num][$i] = $day_count;
$day_count++;
// Stop once the end of the month is reached
if( $day_count > $dayofmonth ) break;
}
// Stop once the end of the month is reached
if( $day_count > $dayofmonth ) break;
}
// Today's day of the month
$today = date( 'j' );
// Output the contents of the $week array as a calendar
echo '<table border="1" style="border-collapse:collapse" cellpadding="5" cellspacing="0">';
for( $i = 0; $i < count($week); $i++ ) {
echo '<tr align="right">';
for( $j = 0; $j < 7; $j++ ) {
if( !empty( $week[$i][$j] ) ) {
if ( $week[$i][$j] == $today )
echo '<td bgcolor="#DDDDDD">';
else
echo '<td>';
// Highlight Saturday and Sunday
if( $j == 5 || $j == 6 )
echo '<span style="color:red">'.$week[$i][$j].'</span>';
else
echo $week[$i][$j];
echo '</td>';
} else {
echo '<td> </td>';
}
}
echo '</tr>';
}
echo '</table>';
?>
Cold boot: power button > power off and then start normally?!
Hi,
I'm having sort of a 'problem' with my system. Not really a problem because it is working very well, but basically something that worries me.
When I do a cold boot (system powered off -and switched off- for over 12 hours) and push the power button, I get this:
1) System powers on, fans start spinning...for 1 second
2) Then system powers off...for about 3 seconds
3) System powers on again and works fine from then on
Why is my system doing this on cold boot? Is it something to worry about?
I'm running a 3930K CPU @ 4.4GHz.
greetings,
Dries
1. That is a basic thing that Asus motherboards (and some other brands of mobo) do. If it is not giving you any problems (it still POSTs), then don't worry about it; it's perfectly normal.
2. Yeah, mine actually does it too. It doesn't really affect anything.
3. Yeah, it's just a normal quirk of the mobo; weird, but it doesn't mean any damage or problems.
4. Phew, glad to hear it's no biggie!
Thanks for pointing out that it's an 'Asus thing'.
greetings,
Dries
5. No problem :na:
Don't forget to pick a "best answer". I was glad to help :D
This part of the reference documentation covers all the technologies that are absolutely integral to the Spring Framework.
Foremost amongst these is the Spring Framework’s Inversion of Control (IoC) container. A thorough treatment of the Spring Framework’s IoC container is closely followed by comprehensive coverage of Spring’s Aspect-Oriented Programming (AOP) technologies. The Spring Framework has its own AOP framework, which is conceptually easy to understand and which successfully addresses the 80% sweet spot of AOP requirements in Java enterprise programming.
Coverage of Spring’s integration with AspectJ (currently the richest — in terms of features — and certainly most mature AOP implementation in the Java enterprise space) is also provided.
1. The IoC Container
This chapter covers Spring’s Inversion of Control (IoC) container.
1.1. Introduction to the Spring IoC Container and Beans
This chapter covers the Spring Framework implementation of the Inversion of Control (IoC) principle. IoC is also known as dependency injection (DI). It is a process whereby objects define their dependencies (that is, the other objects they work with) only through constructor arguments, arguments to a factory method, or properties that are set on the object instance after it is constructed or returned from a factory method. The container then injects those dependencies when it creates the bean. This process is fundamentally the inverse (hence the name, Inversion of Control) of the bean itself controlling the instantiation or location of its dependencies by using direct construction of classes or a mechanism such as the Service Locator pattern.
The org.springframework.beans and org.springframework.context packages are the basis for Spring Framework’s IoC container. The BeanFactory interface provides an advanced configuration mechanism capable of managing any type of object. ApplicationContext is a sub-interface of BeanFactory. It adds:
• Easier integration with Spring’s AOP features
• Message resource handling (for use in internationalization)
• Event publication
• Application-layer specific contexts such as the WebApplicationContext for use in web applications.
In short, the BeanFactory provides the configuration framework and basic functionality, and the ApplicationContext adds more enterprise-specific functionality. The ApplicationContext is a complete superset of the BeanFactory and is used exclusively in this chapter in descriptions of Spring’s IoC container. For more information on using the BeanFactory instead of the ApplicationContext, see The BeanFactory.
In Spring, the objects that form the backbone of your application and that are managed by the Spring IoC container are called beans. A bean is an object that is instantiated, assembled, and otherwise managed by a Spring IoC container. Otherwise, a bean is simply one of many objects in your application. Beans, and the dependencies among them, are reflected in the configuration metadata used by a container.
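The "inverse" direction of control described above can be sketched in plain Java, with no Spring APIs at all. The class names (GreetingService, GreetingRepository) are hypothetical, and the main method stands in for the container's wiring role: the service declares what it needs through its constructor and never constructs or looks up its collaborator itself.

```java
// Hypothetical classes illustrating dependency injection without any framework.
interface GreetingRepository {
    String findGreeting();
}

class InMemoryGreetingRepository implements GreetingRepository {
    public String findGreeting() {
        return "Hello";
    }
}

class GreetingService {
    private final GreetingRepository repository;

    // The dependency arrives via the constructor; with Spring, the
    // container would supply it instead of hand-written wiring.
    GreetingService(GreetingRepository repository) {
        this.repository = repository;
    }

    String greet(String name) {
        return repository.findGreeting() + ", " + name;
    }
}

public class DiSketch {
    public static void main(String[] args) {
        // Here the "container" role is played by manual assembly.
        GreetingService service = new GreetingService(new InMemoryGreetingRepository());
        System.out.println(service.greet("Spring")); // Hello, Spring
    }
}
```

Contrast this with a Service Locator approach, where GreetingService itself would call some registry to fetch its repository and thereby couple itself to the lookup mechanism.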
1.2. Container Overview
The org.springframework.context.ApplicationContext interface represents the Spring IoC container and is responsible for instantiating, configuring, and assembling the beans. The container gets its instructions on what objects to instantiate, configure, and assemble by reading configuration metadata. The configuration metadata is represented in XML, Java annotations, or Java code. It lets you express the objects that compose your application and the rich interdependencies between those objects.
Several implementations of the ApplicationContext interface are supplied with Spring. In stand-alone applications, it is common to create an instance of ClassPathXmlApplicationContext or FileSystemXmlApplicationContext. While XML has been the traditional format for defining configuration metadata, you can instruct the container to use Java annotations or code as the metadata format by providing a small amount of XML configuration to declaratively enable support for these additional metadata formats.
In most application scenarios, explicit user code is not required to instantiate one or more instances of a Spring IoC container. For example, in a web application scenario, a simple eight (or so) lines of boilerplate web descriptor XML in the web.xml file of the application typically suffices (see Convenient ApplicationContext Instantiation for Web Applications). If you use the Spring Tools for Eclipse (an Eclipse-powered development environment), you can easily create this boilerplate configuration with a few mouse clicks or keystrokes.
The following diagram shows a high-level view of how Spring works. Your application classes are combined with configuration metadata so that, after the ApplicationContext is created and initialized, you have a fully configured and executable system or application.
Figure 1. The Spring IoC container
1.2.1. Configuration Metadata
As the preceding diagram shows, the Spring IoC container consumes a form of configuration metadata. This configuration metadata represents how you, as an application developer, tell the Spring container to instantiate, configure, and assemble the objects in your application.
Configuration metadata is traditionally supplied in a simple and intuitive XML format, which is what most of this chapter uses to convey key concepts and features of the Spring IoC container.
XML-based metadata is not the only allowed form of configuration metadata. The Spring IoC container itself is totally decoupled from the format in which this configuration metadata is actually written. These days, many developers choose Java-based configuration for their Spring applications.
For information about using other forms of metadata with the Spring container, see:
• Annotation-based configuration: Spring 2.5 introduced support for annotation-based configuration metadata.
• Java-based configuration: Starting with Spring 3.0, many features provided by the Spring JavaConfig project became part of the core Spring Framework. Thus, you can define beans external to your application classes by using Java rather than XML files. To use these new features, see the @Configuration, @Bean, @Import, and @DependsOn annotations.
Spring configuration consists of at least one and typically more than one bean definition that the container must manage. XML-based configuration metadata configures these beans as <bean/> elements inside a top-level <beans/> element. Java configuration typically uses @Bean-annotated methods within a @Configuration class.
These bean definitions correspond to the actual objects that make up your application. Typically, you define service layer objects, data access objects (DAOs), presentation objects such as Struts Action instances, infrastructure objects such as Hibernate SessionFactories, JMS Queues, and so forth. Typically, one does not configure fine-grained domain objects in the container, because it is usually the responsibility of DAOs and business logic to create and load domain objects. However, you can use Spring’s integration with AspectJ to configure objects that have been created outside the control of an IoC container. See Using AspectJ to dependency-inject domain objects with Spring.
The following example shows the basic structure of XML-based configuration metadata:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="..." class="..."> (1) (2)
<!-- collaborators and configuration for this bean go here -->
</bean>
<bean id="..." class="...">
<!-- collaborators and configuration for this bean go here -->
</bean>
<!-- more bean definitions go here -->
</beans>
1 The id attribute is a string that identifies the individual bean definition.
2 The class attribute defines the type of the bean and uses the fully qualified classname.
The value of the id attribute refers to collaborating objects. The XML for referring to collaborating objects is not shown in this example. See Dependencies for more information.
1.2.2. Instantiating a Container
The location path or paths supplied to an ApplicationContext constructor are resource strings that let the container load configuration metadata from a variety of external resources, such as the local file system, the Java CLASSPATH, and so on.
Java
ApplicationContext context = new ClassPathXmlApplicationContext("services.xml", "daos.xml");
Kotlin
val context = ClassPathXmlApplicationContext("services.xml", "daos.xml")
After you learn about Spring’s IoC container, you may want to know more about Spring’s Resource abstraction (as described in Resources), which provides a convenient mechanism for reading an InputStream from locations defined in a URI syntax. In particular, Resource paths are used to construct application contexts, as described in Application Contexts and Resource Paths.
The following example shows the service layer objects (services.xml) configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<!-- services -->
<bean id="petStore" class="org.springframework.samples.jpetstore.services.PetStoreServiceImpl">
<property name="accountDao" ref="accountDao"/>
<property name="itemDao" ref="itemDao"/>
<!-- additional collaborators and configuration for this bean go here -->
</bean>
<!-- more bean definitions for services go here -->
</beans>
The following example shows the data access objects daos.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="accountDao"
class="org.springframework.samples.jpetstore.dao.jpa.JpaAccountDao">
<!-- additional collaborators and configuration for this bean go here -->
</bean>
<bean id="itemDao" class="org.springframework.samples.jpetstore.dao.jpa.JpaItemDao">
<!-- additional collaborators and configuration for this bean go here -->
</bean>
<!-- more bean definitions for data access objects go here -->
</beans>
In the preceding example, the service layer consists of the PetStoreServiceImpl class and two data access objects of the types JpaAccountDao and JpaItemDao (based on the JPA Object-Relational Mapping standard). The property name element refers to the name of the JavaBean property, and the ref element refers to the name of another bean definition. This linkage between id and ref elements expresses the dependency between collaborating objects. For details of configuring an object’s dependencies, see Dependencies.
Composing XML-based Configuration Metadata
It can be useful to have bean definitions span multiple XML files. Often, each individual XML configuration file represents a logical layer or module in your architecture.
You can use the application context constructor to load bean definitions from all these XML fragments. This constructor takes multiple Resource locations, as was shown in the previous section. Alternatively, use one or more occurrences of the <import/> element to load bean definitions from another file or files. The following example shows how to do so:
<beans>
<import resource="services.xml"/>
<import resource="resources/messageSource.xml"/>
<import resource="/resources/themeSource.xml"/>
<bean id="bean1" class="..."/>
<bean id="bean2" class="..."/>
</beans>
In the preceding example, external bean definitions are loaded from three files: services.xml, messageSource.xml, and themeSource.xml. All location paths are relative to the definition file doing the importing, so services.xml must be in the same directory or classpath location as the file doing the importing, while messageSource.xml and themeSource.xml must be in a resources location below the location of the importing file. As you can see, a leading slash is ignored. However, given that these paths are relative, it is better form not to use the slash at all. The contents of the files being imported, including the top level <beans/> element, must be valid XML bean definitions, according to the Spring Schema.
It is possible, but not recommended, to reference files in parent directories using a relative "../" path. Doing so creates a dependency on a file that is outside the current application. In particular, this reference is not recommended for classpath: URLs (for example, classpath:../services.xml), where the runtime resolution process chooses the “nearest” classpath root and then looks into its parent directory. Classpath configuration changes may lead to the choice of a different, incorrect directory.
You can always use fully qualified resource locations instead of relative paths: for example, file:C:/config/services.xml or classpath:/config/services.xml. However, be aware that you are coupling your application’s configuration to specific absolute locations. It is generally preferable to keep an indirection for such absolute locations — for example, through "${…}" placeholders that are resolved against JVM system properties at runtime.
The namespace itself provides the import directive feature. Further configuration features beyond plain bean definitions are available in a selection of XML namespaces provided by Spring — for example, the context and util namespaces.
The Groovy Bean Definition DSL
As a further example of externalized configuration metadata, bean definitions can also be expressed in Spring’s Groovy Bean Definition DSL, as known from the Grails framework. Typically, such configuration lives in a ".groovy" file with the structure shown in the following example:
beans {
dataSource(BasicDataSource) {
driverClassName = "org.hsqldb.jdbcDriver"
url = "jdbc:hsqldb:mem:grailsDB"
username = "sa"
password = ""
settings = [mynew:"setting"]
}
sessionFactory(SessionFactory) {
dataSource = dataSource
}
myService(MyService) {
nestedBean = { AnotherBean bean ->
dataSource = dataSource
}
}
}
This configuration style is largely equivalent to XML bean definitions and even supports Spring’s XML configuration namespaces. It also allows for importing XML bean definition files through an importBeans directive.
1.2.3. Using the Container
The ApplicationContext is the interface for an advanced factory capable of maintaining a registry of different beans and their dependencies. By using the method T getBean(String name, Class<T> requiredType), you can retrieve instances of your beans.
The ApplicationContext lets you read bean definitions and access them, as the following example shows:
Java
// create and configure beans
ApplicationContext context = new ClassPathXmlApplicationContext("services.xml", "daos.xml");
// retrieve configured instance
PetStoreService service = context.getBean("petStore", PetStoreService.class);
// use configured instance
List<String> userList = service.getUsernameList();
Kotlin
import org.springframework.beans.factory.getBean
// create and configure beans
val context = ClassPathXmlApplicationContext("services.xml", "daos.xml")
// retrieve configured instance
val service = context.getBean<PetStoreService>("petStore")
// use configured instance
var userList = service.getUsernameList()
With Groovy configuration, bootstrapping looks very similar. It has a different context implementation class which is Groovy-aware (but also understands XML bean definitions). The following example shows Groovy configuration:
Java
ApplicationContext context = new GenericGroovyApplicationContext("services.groovy", "daos.groovy");
Kotlin
val context = GenericGroovyApplicationContext("services.groovy", "daos.groovy")
The most flexible variant is GenericApplicationContext in combination with reader delegates — for example, with XmlBeanDefinitionReader for XML files, as the following example shows:
Java
GenericApplicationContext context = new GenericApplicationContext();
new XmlBeanDefinitionReader(context).loadBeanDefinitions("services.xml", "daos.xml");
context.refresh();
Kotlin
val context = GenericApplicationContext()
XmlBeanDefinitionReader(context).loadBeanDefinitions("services.xml", "daos.xml")
context.refresh()
You can also use the GroovyBeanDefinitionReader for Groovy files, as the following example shows:
Java
GenericApplicationContext context = new GenericApplicationContext();
new GroovyBeanDefinitionReader(context).loadBeanDefinitions("services.groovy", "daos.groovy");
context.refresh();
Kotlin
val context = GenericApplicationContext()
GroovyBeanDefinitionReader(context).loadBeanDefinitions("services.groovy", "daos.groovy")
context.refresh()
You can mix and match such reader delegates on the same ApplicationContext, reading bean definitions from diverse configuration sources.
You can then use getBean to retrieve instances of your beans. The ApplicationContext interface has a few other methods for retrieving beans, but, ideally, your application code should never use them. Indeed, your application code should have no calls to the getBean() method at all and thus have no dependency on Spring APIs at all. For example, Spring’s integration with web frameworks provides dependency injection for various web framework components such as controllers and JSF-managed beans, letting you declare a dependency on a specific bean through metadata (such as an autowiring annotation).
1.3. Bean Overview
A Spring IoC container manages one or more beans. These beans are created with the configuration metadata that you supply to the container (for example, in the form of XML <bean/> definitions).
Within the container itself, these bean definitions are represented as BeanDefinition objects, which contain (among other information) the following metadata:
• A package-qualified class name: typically, the actual implementation class of the bean being defined.
• Bean behavioral configuration elements, which state how the bean should behave in the container (scope, lifecycle callbacks, and so forth).
• References to other beans that are needed for the bean to do its work. These references are also called collaborators or dependencies.
• Other configuration settings to set in the newly created object — for example, the size limit of the pool or the number of connections to use in a bean that manages a connection pool.
This metadata translates to a set of properties that make up each bean definition. The following table describes these properties:
Table 1. The bean definition
Property                  Explained in…
Class                     Instantiating Beans
Name                      Naming Beans
Scope                     Bean Scopes
Constructor arguments     Dependency Injection
Properties                Dependency Injection
Autowiring mode           Autowiring Collaborators
Lazy initialization mode  Lazy-initialized Beans
Initialization method     Initialization Callbacks
Destruction method        Destruction Callbacks
In addition to bean definitions that contain information on how to create a specific bean, the ApplicationContext implementations also permit the registration of existing objects that are created outside the container (by users). This is done by accessing the ApplicationContext’s BeanFactory through the getBeanFactory() method, which returns the DefaultListableBeanFactory implementation. DefaultListableBeanFactory supports this registration through the registerSingleton(..) and registerBeanDefinition(..) methods. However, typical applications work solely with beans defined through regular bean definition metadata.
Bean metadata and manually supplied singleton instances need to be registered as early as possible, in order for the container to properly reason about them during autowiring and other introspection steps. While overriding existing metadata and existing singleton instances is supported to some degree, the registration of new beans at runtime (concurrently with live access to the factory) is not officially supported and may lead to concurrent access exceptions, inconsistent state in the bean container, or both.
1.3.1. Naming Beans
Every bean has one or more identifiers. These identifiers must be unique within the container that hosts the bean. A bean usually has only one identifier. However, if it requires more than one, the extra ones can be considered aliases.
In XML-based configuration metadata, you use the id attribute, the name attribute, or both to specify the bean identifiers. The id attribute lets you specify exactly one id. Conventionally, these names are alphanumeric ('myBean', 'someService', etc.), but they can contain special characters as well. If you want to introduce other aliases for the bean, you can also specify them in the name attribute, separated by a comma (,), semicolon (;), or white space. As a historical note, in versions prior to Spring 3.1, the id attribute was defined as an xsd:ID type, which constrained possible characters. As of 3.1, it is defined as an xsd:string type. Note that bean id uniqueness is still enforced by the container, though no longer by XML parsers.
You are not required to supply a name or an id for a bean. If you do not supply a name or id explicitly, the container generates a unique name for that bean. However, if you want to refer to that bean by name, through the use of the ref element or a Service Locator style lookup, you must provide a name. Motivations for not supplying a name are related to using inner beans and autowiring collaborators.
Bean Naming Conventions
The convention is to use the standard Java convention for instance field names when naming beans. That is, bean names start with a lowercase letter and are camel-cased from there. Examples of such names include accountManager, accountService, userDao, loginController, and so forth.
Naming beans consistently makes your configuration easier to read and understand. Also, if you use Spring AOP, it helps a lot when applying advice to a set of beans related by name.
With component scanning in the classpath, Spring generates bean names for unnamed components, following the rules described earlier: essentially, taking the simple class name and turning its initial character to lower-case. However, in the (unusual) special case when there is more than one character and both the first and second characters are upper case, the original casing gets preserved. These are the same rules as defined by java.beans.Introspector.decapitalize (which Spring uses here).
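The two naming rules mentioned above can be observed directly with the JDK's java.beans.Introspector.decapitalize, which Spring delegates to. The class names below are illustrative only:

```java
import java.beans.Introspector;

public class BeanNameRules {
    public static void main(String[] args) {
        // Usual case: the initial character is lower-cased.
        System.out.println(Introspector.decapitalize("AccountService")); // accountService

        // Special case: the first two characters are both upper case,
        // so the original casing is preserved.
        System.out.println(Introspector.decapitalize("URLProcessor")); // URLProcessor
    }
}
```

So a scanned, unnamed @Component class called AccountService would receive the bean name accountService, while one called URLProcessor would keep its name unchanged.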
Aliasing a Bean outside the Bean Definition
In a bean definition itself, you can supply more than one name for the bean, by using a combination of up to one name specified by the id attribute and any number of other names in the name attribute. These names can be equivalent aliases to the same bean and are useful for some situations, such as letting each component in an application refer to a common dependency by using a bean name that is specific to that component itself.
Specifying all aliases where the bean is actually defined is not always adequate, however. It is sometimes desirable to introduce an alias for a bean that is defined elsewhere. This is commonly the case in large systems where configuration is split amongst each subsystem, with each subsystem having its own set of object definitions. In XML-based configuration metadata, you can use the <alias/> element to accomplish this. The following example shows how to do so:
<alias name="fromName" alias="toName"/>
In this case, a bean (in the same container) named fromName may also, after the use of this alias definition, be referred to as toName.
For example, the configuration metadata for subsystem A may refer to a DataSource by the name of subsystemA-dataSource. The configuration metadata for subsystem B may refer to a DataSource by the name of subsystemB-dataSource. When composing the main application that uses both these subsystems, the main application refers to the DataSource by the name of myApp-dataSource. To have all three names refer to the same object, you can add the following alias definitions to the configuration metadata:
<alias name="myApp-dataSource" alias="subsystemA-dataSource"/>
<alias name="myApp-dataSource" alias="subsystemB-dataSource"/>
Now each component and the main application can refer to the dataSource through a name that is unique and guaranteed not to clash with any other definition (effectively creating a namespace), yet they refer to the same bean.
Java-configuration
If you use Java configuration, the @Bean annotation can be used to provide aliases. See Using the @Bean Annotation for details.
1.3.2. Instantiating Beans
A bean definition is essentially a recipe for creating one or more objects. The container looks at the recipe for a named bean when asked and uses the configuration metadata encapsulated by that bean definition to create (or acquire) an actual object.
If you use XML-based configuration metadata, you specify the type (or class) of object that is to be instantiated in the class attribute of the <bean/> element. This class attribute (which, internally, is a Class property on a BeanDefinition instance) is usually mandatory. (For exceptions, see Instantiation by Using an Instance Factory Method and Bean Definition Inheritance.) You can use the Class property in one of two ways:
• Typically, to specify the bean class to be constructed in the case where the container itself directly creates the bean by calling its constructor reflectively, somewhat equivalent to Java code with the new operator.
• To specify the actual class containing the static factory method that is invoked to create the object, in the less common case where the container invokes a static factory method on a class to create the bean. The object type returned from the invocation of the static factory method may be the same class or another class entirely.
Inner class names
If you want to configure a bean definition for a static nested class, you have to use the binary name of the nested class.
For example, if you have a class called SomeThing in the com.example package, and this SomeThing class has a static nested class called OtherThing, the value of the class attribute on a bean definition would be com.example.SomeThing$OtherThing.
Notice the use of the $ character in the name to separate the nested class name from the outer class name.
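The binary name can be checked at runtime with Class.getName(). The following sketch mirrors the SomeThing/OtherThing example, but omits the com.example package so that the snippet is self-contained:

```java
public class SomeThing {
    public static class OtherThing {
    }

    public static void main(String[] args) {
        // getName() returns the binary name, which is exactly the value
        // the class attribute of a bean definition expects.
        System.out.println(OtherThing.class.getName()); // SomeThing$OtherThing
    }
}
```

With the class in the com.example package, the printed value would be com.example.SomeThing$OtherThing, matching the bean definition shown above.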
Instantiation with a Constructor
When you create a bean by the constructor approach, all normal classes are usable by and compatible with Spring. That is, the class being developed does not need to implement any specific interfaces or to be coded in a specific fashion. Simply specifying the bean class should suffice. However, depending on what type of IoC you use for that specific bean, you may need a default (empty) constructor.
The Spring IoC container can manage virtually any class you want it to manage. It is not limited to managing true JavaBeans. Most Spring users prefer actual JavaBeans with only a default (no-argument) constructor and appropriate setters and getters modeled after the properties in the container. You can also have more exotic non-bean-style classes in your container. If, for example, you need to use a legacy connection pool that absolutely does not adhere to the JavaBean specification, Spring can manage it as well.
With XML-based configuration metadata you can specify your bean class as follows:
<bean id="exampleBean" class="examples.ExampleBean"/>
<bean name="anotherExample" class="examples.ExampleBeanTwo"/>
For details about the mechanism for supplying arguments to the constructor (if required) and setting object instance properties after the object is constructed, see Injecting Dependencies.
Instantiation with a Static Factory Method
When defining a bean that you create with a static factory method, use the class attribute to specify the class that contains the static factory method and an attribute named factory-method to specify the name of the factory method itself. You should be able to call this method (with optional arguments, as described later) and return a live object, which subsequently is treated as if it had been created through a constructor. One use for such a bean definition is to call static factories in legacy code.
The following bean definition specifies that the bean be created by calling a factory method. The definition does not specify the type (class) of the returned object, only the class containing the factory method. In this example, the createInstance() method must be a static method. The following example shows how to specify a factory method:
<bean id="clientService"
class="examples.ClientService"
factory-method="createInstance"/>
The following example shows a class that would work with the preceding bean definition:
Java
public class ClientService {
private static ClientService clientService = new ClientService();
private ClientService() {}
public static ClientService createInstance() {
return clientService;
}
}
Kotlin
class ClientService private constructor() {
companion object {
private val clientService = ClientService()
fun createInstance() = clientService
}
}
For details about the mechanism for supplying (optional) arguments to the factory method and setting object instance properties after the object is returned from the factory, see Dependencies and Configuration in Detail.
Instantiation by Using an Instance Factory Method
Similar to instantiation through a static factory method, instantiation with an instance factory method invokes a non-static method of an existing bean from the container to create a new bean. To use this mechanism, leave the class attribute empty and, in the factory-bean attribute, specify the name of a bean in the current (or parent or ancestor) container that contains the instance method that is to be invoked to create the object. Set the name of the factory method itself with the factory-method attribute. The following example shows how to configure such a bean:
<!-- the factory bean, which contains a method called createInstance() -->
<bean id="serviceLocator" class="examples.DefaultServiceLocator">
<!-- inject any dependencies required by this locator bean -->
</bean>
<!-- the bean to be created via the factory bean -->
<bean id="clientService"
factory-bean="serviceLocator"
factory-method="createClientServiceInstance"/>
The following example shows the corresponding class:
Java
public class DefaultServiceLocator {
private static ClientService clientService = new ClientServiceImpl();
public ClientService createClientServiceInstance() {
return clientService;
}
}
Kotlin
class DefaultServiceLocator {
companion object {
private val clientService = ClientServiceImpl()
}
fun createClientServiceInstance(): ClientService {
return clientService
}
}
One factory class can also hold more than one factory method, as the following example shows:
<bean id="serviceLocator" class="examples.DefaultServiceLocator">
<!-- inject any dependencies required by this locator bean -->
</bean>
<bean id="clientService"
factory-bean="serviceLocator"
factory-method="createClientServiceInstance"/>
<bean id="accountService"
factory-bean="serviceLocator"
factory-method="createAccountServiceInstance"/>
The following example shows the corresponding class:
Java
public class DefaultServiceLocator {
private static ClientService clientService = new ClientServiceImpl();
private static AccountService accountService = new AccountServiceImpl();
public ClientService createClientServiceInstance() {
return clientService;
}
public AccountService createAccountServiceInstance() {
return accountService;
}
}
Kotlin
class DefaultServiceLocator {
companion object {
private val clientService = ClientServiceImpl()
private val accountService = AccountServiceImpl()
}
fun createClientServiceInstance(): ClientService {
return clientService
}
fun createAccountServiceInstance(): AccountService {
return accountService
}
}
This approach shows that the factory bean itself can be managed and configured through dependency injection (DI). See Dependencies and Configuration in Detail.
In Spring documentation, “factory bean” refers to a bean that is configured in the Spring container and that creates objects through an instance or static factory method. By contrast, FactoryBean (notice the capitalization) refers to a Spring-specific FactoryBean implementation class.
Determining a Bean’s Runtime Type
The runtime type of a specific bean is non-trivial to determine. The class specified in the bean metadata definition is just an initial class reference, potentially combined with a declared factory method, or it may be a FactoryBean class, either of which can lead to a different runtime type for the bean. The class may also not be set at all when an instance-level factory method is used (which is resolved through the specified factory-bean name instead). Additionally, AOP proxying may wrap a bean instance with an interface-based proxy that exposes only the target bean's implemented interfaces, limiting visibility of its actual type.
The recommended way to find out about the actual runtime type of a particular bean is a BeanFactory.getType call for the specified bean name. This takes all of the above cases into account and returns the type of object that a BeanFactory.getBean call is going to return for the same bean name.
1.4. Dependencies
A typical enterprise application does not consist of a single object (or bean in the Spring parlance). Even the simplest application has a few objects that work together to present what the end-user sees as a coherent application. This next section explains how you go from defining a number of bean definitions that stand alone to a fully realized application where objects collaborate to achieve a goal.
1.4.1. Dependency Injection
Dependency injection (DI) is a process whereby objects define their dependencies (that is, the other objects with which they work) only through constructor arguments, arguments to a factory method, or properties that are set on the object instance after it is constructed or returned from a factory method. The container then injects those dependencies when it creates the bean. This process is fundamentally the inverse (hence the name, Inversion of Control) of the bean itself controlling the instantiation or location of its dependencies on its own by using direct construction of classes or the Service Locator pattern.
Code is cleaner with the DI principle, and decoupling is more effective when objects are provided with their dependencies. The object does not look up its dependencies and does not know the location or class of the dependencies. As a result, your classes become easier to test, particularly when the dependencies are on interfaces or abstract base classes, which allow for stub or mock implementations to be used in unit tests.
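The testability claim can be illustrated without any container. In this sketch, a stub MovieFinder is injected directly; the moviesDirectedBy method and the "Director:Title" convention are hypothetical additions for the example, not part of the reference classes:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Minimal stand-ins for the MovieFinder example, sketching why constructor-provided
// dependencies are easy to test: a stub implementation can be injected directly.
interface MovieFinder {
    List<String> findAll();
}

class SimpleMovieLister {
    private final MovieFinder movieFinder;

    SimpleMovieLister(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }

    // hypothetical business method filtering on a "Director:Title" convention
    List<String> moviesDirectedBy(String director) {
        return movieFinder.findAll().stream()
                .filter(title -> title.startsWith(director + ":"))
                .collect(Collectors.toList());
    }
}

class ListerTest {
    public static void main(String[] args) {
        // No container, database, or real finder implementation is needed.
        MovieFinder stub = () -> Arrays.asList("Lynch:Dune", "Scott:Alien");
        SimpleMovieLister lister = new SimpleMovieLister(stub);
        System.out.println(lister.moviesDirectedBy("Lynch")); // [Lynch:Dune]
    }
}
```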
Constructor-based Dependency Injection
Constructor-based DI is accomplished by the container invoking a constructor with a number of arguments, each representing a dependency. Calling a static factory method with specific arguments to construct the bean is nearly equivalent, and this discussion treats arguments to a constructor and to a static factory method similarly. The following example shows a class that can only be dependency-injected with constructor injection:
Java
public class SimpleMovieLister {
// the SimpleMovieLister has a dependency on a MovieFinder
private MovieFinder movieFinder;
// a constructor so that the Spring container can inject a MovieFinder
public SimpleMovieLister(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// business logic that actually uses the injected MovieFinder is omitted...
}
Kotlin
// a constructor so that the Spring container can inject a MovieFinder
class SimpleMovieLister(private val movieFinder: MovieFinder) {
// business logic that actually uses the injected MovieFinder is omitted...
}
Notice that there is nothing special about this class. It is a POJO that has no dependencies on container-specific interfaces, base classes, or annotations.
Constructor Argument Resolution
Constructor argument resolution matching occurs by using the argument’s type. If no potential ambiguity exists in the constructor arguments of a bean definition, the order in which the constructor arguments are defined in a bean definition is the order in which those arguments are supplied to the appropriate constructor when the bean is being instantiated. Consider the following class:
Java
package x.y;
public class ThingOne {
public ThingOne(ThingTwo thingTwo, ThingThree thingThree) {
// ...
}
}
Kotlin
package x.y
class ThingOne(thingTwo: ThingTwo, thingThree: ThingThree)
Assuming that ThingTwo and ThingThree classes are not related by inheritance, no potential ambiguity exists. Thus, the following configuration works fine, and you do not need to specify the constructor argument indexes or types explicitly in the <constructor-arg/> element.
<beans>
<bean id="beanOne" class="x.y.ThingOne">
<constructor-arg ref="beanTwo"/>
<constructor-arg ref="beanThree"/>
</bean>
<bean id="beanTwo" class="x.y.ThingTwo"/>
<bean id="beanThree" class="x.y.ThingThree"/>
</beans>
When another bean is referenced, the type is known, and matching can occur (as was the case with the preceding example). When a simple type is used, such as <value>true</value>, Spring cannot determine the type of the value, and so cannot match by type without help. Consider the following class:
Java
package examples;
public class ExampleBean {
// Number of years to calculate the Ultimate Answer
private int years;
// The Answer to Life, the Universe, and Everything
private String ultimateAnswer;
public ExampleBean(int years, String ultimateAnswer) {
this.years = years;
this.ultimateAnswer = ultimateAnswer;
}
}
Kotlin
package examples
class ExampleBean(
private val years: Int, // Number of years to calculate the Ultimate Answer
private val ultimateAnswer: String // The Answer to Life, the Universe, and Everything
)
Constructor argument type matching
In the preceding scenario, the container can use type matching with simple types if you explicitly specify the type of the constructor argument by using the type attribute, as the following example shows:
<bean id="exampleBean" class="examples.ExampleBean">
<constructor-arg type="int" value="7500000"/>
<constructor-arg type="java.lang.String" value="42"/>
</bean>
Constructor argument index
You can use the index attribute to specify explicitly the index of constructor arguments, as the following example shows:
<bean id="exampleBean" class="examples.ExampleBean">
<constructor-arg index="0" value="7500000"/>
<constructor-arg index="1" value="42"/>
</bean>
In addition to resolving the ambiguity of multiple simple values, specifying an index resolves ambiguity where a constructor has two arguments of the same type.
The index is 0-based.
Constructor argument name
You can also use the constructor parameter name for value disambiguation, as the following example shows:
<bean id="exampleBean" class="examples.ExampleBean">
<constructor-arg name="years" value="7500000"/>
<constructor-arg name="ultimateAnswer" value="42"/>
</bean>
Keep in mind that, to make this work out of the box, your code must be compiled with the debug flag enabled so that Spring can look up the parameter name from the constructor. If you cannot or do not want to compile your code with the debug flag, you can use the @ConstructorProperties JDK annotation to explicitly name your constructor arguments. The sample class would then have to look as follows:
Java
package examples;
public class ExampleBean {
// Fields omitted
@ConstructorProperties({"years", "ultimateAnswer"})
public ExampleBean(int years, String ultimateAnswer) {
this.years = years;
this.ultimateAnswer = ultimateAnswer;
}
}
Kotlin
package examples
class ExampleBean
@ConstructorProperties("years", "ultimateAnswer")
constructor(val years: Int, val ultimateAnswer: String)
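The annotation route can be verified with plain reflection. The following sketch shows how the declared argument names remain discoverable at runtime via @ConstructorProperties, even when the code is compiled without the debug flag (the NameLookup class is a hypothetical harness, not part of the reference example):

```java
import java.beans.ConstructorProperties;
import java.lang.reflect.Constructor;

// Sketch: the argument names declared with @ConstructorProperties are readable
// at runtime regardless of whether parameter names were compiled in.
class ExampleBean {
    private final int years;
    private final String ultimateAnswer;

    @ConstructorProperties({"years", "ultimateAnswer"})
    ExampleBean(int years, String ultimateAnswer) {
        this.years = years;
        this.ultimateAnswer = ultimateAnswer;
    }
}

class NameLookup {
    public static void main(String[] args) {
        Constructor<?> ctor = ExampleBean.class.getDeclaredConstructors()[0];
        ConstructorProperties names = ctor.getAnnotation(ConstructorProperties.class);
        // A container can match <constructor-arg name="..."/> against these values.
        System.out.println(String.join(",", names.value())); // years,ultimateAnswer
    }
}
```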
Setter-based Dependency Injection
Setter-based DI is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or a no-argument static factory method to instantiate your bean.
The following example shows a class that can only be dependency-injected by using pure setter injection. This class is conventional Java. It is a POJO that has no dependencies on container-specific interfaces, base classes, or annotations.
Java
public class SimpleMovieLister {
// the SimpleMovieLister has a dependency on the MovieFinder
private MovieFinder movieFinder;
// a setter method so that the Spring container can inject a MovieFinder
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// business logic that actually uses the injected MovieFinder is omitted...
}
Kotlin
class SimpleMovieLister {
// a late-initialized property so that the Spring container can inject a MovieFinder
lateinit var movieFinder: MovieFinder
// business logic that actually uses the injected MovieFinder is omitted...
}
The ApplicationContext supports constructor-based and setter-based DI for the beans it manages. It also supports setter-based DI after some dependencies have already been injected through the constructor approach. You configure the dependencies in the form of a BeanDefinition, which you use in conjunction with PropertyEditor instances to convert properties from one format to another. However, most Spring users do not work with these classes directly (that is, programmatically) but rather with XML bean definitions, annotated components (that is, classes annotated with @Component, @Controller, and so forth), or @Bean methods in Java-based @Configuration classes. These sources are then converted internally into instances of BeanDefinition and used to load an entire Spring IoC container instance.
Constructor-based or setter-based DI?
Since you can mix constructor-based and setter-based DI, it is a good rule of thumb to use constructors for mandatory dependencies and setter methods or configuration methods for optional dependencies. Note that use of the @Required annotation on a setter method can be used to make the property be a required dependency; however, constructor injection with programmatic validation of arguments is preferable.
The Spring team generally advocates constructor injection, as it lets you implement application components as immutable objects and ensures that required dependencies are not null. Furthermore, constructor-injected components are always returned to the client (calling) code in a fully initialized state. As a side note, a large number of constructor arguments is a bad code smell, implying that the class likely has too many responsibilities and should be refactored to better address proper separation of concerns.
Setter injection should primarily only be used for optional dependencies that can be assigned reasonable default values within the class. Otherwise, not-null checks must be performed everywhere the code uses the dependency. One benefit of setter injection is that setter methods make objects of that class amenable to reconfiguration or re-injection later. Management through JMX MBeans is therefore a compelling use case for setter injection.
Use the DI style that makes the most sense for a particular class. Sometimes, when dealing with third-party classes for which you do not have the source, the choice is made for you. For example, if a third-party class does not expose any setter methods, then constructor injection may be the only available form of DI.
Dependency Resolution Process
The container performs bean dependency resolution as follows:
• The ApplicationContext is created and initialized with configuration metadata that describes all the beans. Configuration metadata can be specified by XML, Java code, or annotations.
• For each bean, its dependencies are expressed in the form of properties, constructor arguments, or arguments to the static-factory method (if you use that instead of a normal constructor). These dependencies are provided to the bean, when the bean is actually created.
• Each property or constructor argument is an actual definition of the value to set, or a reference to another bean in the container.
• Each property or constructor argument that is a value is converted from its specified format to the actual type of that property or constructor argument. By default, Spring can convert a value supplied in string format to all built-in types, such as int, long, String, boolean, and so forth.
The Spring container validates the configuration of each bean as the container is created. However, the bean properties themselves are not set until the bean is actually created. Beans that are singleton-scoped and set to be pre-instantiated (the default) are created when the container is created. Scopes are defined in Bean Scopes. Otherwise, the bean is created only when it is requested. Creation of a bean potentially causes a graph of beans to be created, as the bean’s dependencies and its dependencies' dependencies (and so on) are created and assigned. Note that resolution mismatches among those dependencies may show up late — that is, on first creation of the affected bean.
Circular dependencies
If you use predominantly constructor injection, it is possible to create an unresolvable circular dependency scenario.
For example: Class A requires an instance of class B through constructor injection, and class B requires an instance of class A through constructor injection. If you configure beans for classes A and B to be injected into each other, the Spring IoC container detects this circular reference at runtime, and throws a BeanCurrentlyInCreationException.
One possible solution is to edit the source code of some classes to be configured by setters rather than constructors. Alternatively, avoid constructor injection and use setter injection only. In other words, although it is not recommended, you can configure circular dependencies with setter injection.
Unlike the typical case (with no circular dependencies), a circular dependency between bean A and bean B forces one of the beans to be injected into the other prior to being fully initialized itself (a classic chicken-and-egg scenario).
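The setter-injection workaround can be sketched in plain Java. The classes A and B below are hypothetical; the wiring order mirrors what the container effectively does for two singleton beans that reference each other through setters:

```java
// Sketch of the setter-injection workaround for a circular dependency: each
// bean is instantiated first, then the half-initialized instances are wired
// into each other, which is impossible with pure constructor injection.
class A {
    private B b;
    void setB(B b) { this.b = b; }
    B getB() { return b; }
}

class B {
    private A a;
    void setA(A a) { this.a = a; }
    A getA() { return a; }
}

class Wiring {
    public static void main(String[] args) {
        A a = new A();  // neither constructor needs the other bean
        B b = new B();
        a.setB(b);      // b is injected before it is fully initialized
        b.setA(a);
        System.out.println(a.getB().getA() == a); // true: the cycle is closed
    }
}
```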
You can generally trust Spring to do the right thing. It detects configuration problems, such as references to non-existent beans and circular dependencies, at container load-time. Spring sets properties and resolves dependencies as late as possible, when the bean is actually created. This means that a Spring container that has loaded correctly can later generate an exception when you request an object if there is a problem creating that object or one of its dependencies — for example, the bean throws an exception as a result of a missing or invalid property. This potentially delayed visibility of some configuration issues is why ApplicationContext implementations by default pre-instantiate singleton beans. At the cost of some upfront time and memory to create these beans before they are actually needed, you discover configuration issues when the ApplicationContext is created, not later. You can still override this default behavior so that singleton beans initialize lazily, rather than being pre-instantiated.
If no circular dependencies exist, when one or more collaborating beans are being injected into a dependent bean, each collaborating bean is totally configured prior to being injected into the dependent bean. This means that, if bean A has a dependency on bean B, the Spring IoC container completely configures bean B prior to invoking the setter method on bean A. In other words, the bean is instantiated (if it is not a pre-instantiated singleton), its dependencies are set, and the relevant lifecycle methods (such as a configured init method or the InitializingBean callback method) are invoked.
Examples of Dependency Injection
The following example uses XML-based configuration metadata for setter-based DI. A small part of a Spring XML configuration file specifies some bean definitions as follows:
<bean id="exampleBean" class="examples.ExampleBean">
<!-- setter injection using the nested ref element -->
<property name="beanOne">
<ref bean="anotherExampleBean"/>
</property>
<!-- setter injection using the neater ref attribute -->
<property name="beanTwo" ref="yetAnotherBean"/>
<property name="integerProperty" value="1"/>
</bean>
<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
The following example shows the corresponding ExampleBean class:
Java
public class ExampleBean {
private AnotherBean beanOne;
private YetAnotherBean beanTwo;
private int i;
public void setBeanOne(AnotherBean beanOne) {
this.beanOne = beanOne;
}
public void setBeanTwo(YetAnotherBean beanTwo) {
this.beanTwo = beanTwo;
}
public void setIntegerProperty(int i) {
this.i = i;
}
}
Kotlin
class ExampleBean {
lateinit var beanOne: AnotherBean
lateinit var beanTwo: YetAnotherBean
var i: Int = 0
}
In the preceding example, setters are declared to match against the properties specified in the XML file. The following example uses constructor-based DI:
<bean id="exampleBean" class="examples.ExampleBean">
<!-- constructor injection using the nested ref element -->
<constructor-arg>
<ref bean="anotherExampleBean"/>
</constructor-arg>
<!-- constructor injection using the neater ref attribute -->
<constructor-arg ref="yetAnotherBean"/>
<constructor-arg type="int" value="1"/>
</bean>
<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
The following example shows the corresponding ExampleBean class:
Java
public class ExampleBean {
private AnotherBean beanOne;
private YetAnotherBean beanTwo;
private int i;
public ExampleBean(
AnotherBean anotherBean, YetAnotherBean yetAnotherBean, int i) {
this.beanOne = anotherBean;
this.beanTwo = yetAnotherBean;
this.i = i;
}
}
Kotlin
class ExampleBean(
private val beanOne: AnotherBean,
private val beanTwo: YetAnotherBean,
private val i: Int)
The constructor arguments specified in the bean definition are used as arguments to the constructor of the ExampleBean.
Now consider a variant of this example, where, instead of using a constructor, Spring is told to call a static factory method to return an instance of the object:
<bean id="exampleBean" class="examples.ExampleBean" factory-method="createInstance">
<constructor-arg ref="anotherExampleBean"/>
<constructor-arg ref="yetAnotherBean"/>
<constructor-arg value="1"/>
</bean>
<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
The following example shows the corresponding ExampleBean class:
Java
public class ExampleBean {
// a private constructor
private ExampleBean(...) {
...
}
// a static factory method; the arguments to this method can be
// considered the dependencies of the bean that is returned,
// regardless of how those arguments are actually used.
public static ExampleBean createInstance(
AnotherBean anotherBean, YetAnotherBean yetAnotherBean, int i) {
ExampleBean eb = new ExampleBean(...);
// some other operations...
return eb;
}
}
Kotlin
class ExampleBean private constructor() {
companion object {
// a static factory method; the arguments to this method can be
// considered the dependencies of the bean that is returned,
// regardless of how those arguments are actually used.
fun createInstance(anotherBean: AnotherBean, yetAnotherBean: YetAnotherBean, i: Int): ExampleBean {
val eb = ExampleBean(...)
// some other operations...
return eb
}
}
}
Arguments to the static factory method are supplied by <constructor-arg/> elements, exactly the same as if a constructor had actually been used. The type of the class being returned by the factory method does not have to be of the same type as the class that contains the static factory method (although, in this example, it is). An instance (non-static) factory method can be used in an essentially identical fashion (aside from the use of the factory-bean attribute instead of the class attribute), so we do not discuss those details here.
1.4.2. Dependencies and Configuration in Detail
As mentioned in the previous section, you can define bean properties and constructor arguments as references to other managed beans (collaborators) or as values defined inline. Spring’s XML-based configuration metadata supports sub-element types within its <property/> and <constructor-arg/> elements for this purpose.
Straight Values (Primitives, Strings, and so on)
The value attribute of the <property/> element specifies a property or constructor argument as a human-readable string representation. Spring’s conversion service is used to convert these values from a String to the actual type of the property or argument. The following example shows various values being set:
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<!-- results in a setDriverClassName(String) call -->
<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
<property name="username" value="root"/>
<property name="password" value="masterkaoli"/>
</bean>
The following example uses the p-namespace for even more succinct XML configuration:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close"
p:driverClassName="com.mysql.jdbc.Driver"
p:url="jdbc:mysql://localhost:3306/mydb"
p:username="root"
p:password="masterkaoli"/>
</beans>
The preceding XML is more succinct. However, typos are discovered at runtime rather than design time, unless you use an IDE (such as IntelliJ IDEA or the Spring Tools for Eclipse) that supports automatic property completion when you create bean definitions. Such IDE assistance is highly recommended.
You can also configure a java.util.Properties instance, as follows:
<bean id="mappings"
class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
<!-- typed as a java.util.Properties -->
<property name="properties">
<value>
jdbc.driver.className=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/mydb
</value>
</property>
</bean>
The Spring container converts the text inside the <value/> element into a java.util.Properties instance by using the JavaBeans PropertyEditor mechanism. This is a nice shortcut, and is one of a few places where the Spring team do favor the use of the nested <value/> element over the value attribute style.
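The conversion itself is easy to reproduce: java.util.Properties parses the same key=value lines directly. The following sketch (with the PropsDemo class name invented for the example) shows the equivalent of what the container does with the nested <value/> text:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Sketch of the String-to-Properties conversion applied to the <value/> text.
class PropsDemo {
    public static void main(String[] args) throws IOException {
        String text = "jdbc.driver.className=com.mysql.jdbc.Driver\n"
                + "jdbc.url=jdbc:mysql://localhost:3306/mydb\n";
        Properties props = new Properties();
        props.load(new StringReader(text));
        System.out.println(props.getProperty("jdbc.driver.className")); // com.mysql.jdbc.Driver
        System.out.println(props.getProperty("jdbc.url"));             // jdbc:mysql://localhost:3306/mydb
    }
}
```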
The idref element
The idref element is simply an error-proof way to pass the id (a string value - not a reference) of another bean in the container to a <constructor-arg/> or <property/> element. The following example shows how to use it:
<bean id="theTargetBean" class="..."/>
<bean id="theClientBean" class="...">
<property name="targetName">
<idref bean="theTargetBean"/>
</property>
</bean>
The preceding bean definition snippet is exactly equivalent (at runtime) to the following snippet:
<bean id="theTargetBean" class="..." />
<bean id="client" class="...">
<property name="targetName" value="theTargetBean"/>
</bean>
The first form is preferable to the second, because using the idref tag lets the container validate at deployment time that the referenced, named bean actually exists. In the second variation, no validation is performed on the value that is passed to the targetName property of the client bean. Typos are only discovered (with most likely fatal results) when the client bean is actually instantiated. If the client bean is a prototype bean, this typo and the resulting exception may only be discovered long after the container is deployed.
The local attribute on the idref element is no longer supported in the 4.0 beans XSD, since it does not provide value over a regular bean reference any more. Change your existing idref local references to idref bean when upgrading to the 4.0 schema.
A common place (at least in versions earlier than Spring 2.0) where the <idref/> element brings value is in the configuration of AOP interceptors in a ProxyFactoryBean bean definition. Using <idref/> elements when you specify the interceptor names prevents you from misspelling an interceptor ID.
References to Other Beans (Collaborators)
The ref element is the final element inside a <constructor-arg/> or <property/> definition element. Here, you set the value of the specified property of a bean to be a reference to another bean (a collaborator) managed by the container. The referenced bean is a dependency of the bean whose property is to be set, and it is initialized on demand as needed before the property is set. (If the collaborator is a singleton bean, it may already be initialized by the container.) All references are ultimately a reference to another object. Scoping and validation depend on whether you specify the ID or name of the other object through the bean or parent attribute.
Specifying the target bean through the bean attribute of the <ref/> tag is the most general form and allows creation of a reference to any bean in the same container or parent container, regardless of whether it is in the same XML file. The value of the bean attribute may be the same as the id attribute of the target bean or be the same as one of the values in the name attribute of the target bean. The following example shows how to use a ref element:
<ref bean="someBean"/>
Specifying the target bean through the parent attribute creates a reference to a bean that is in a parent container of the current container. The value of the parent attribute may be the same as either the id attribute of the target bean or one of the values in the name attribute of the target bean. The target bean must be in a parent container of the current one. You should use this bean reference variant mainly when you have a hierarchy of containers and you want to wrap an existing bean in a parent container with a proxy that has the same name as the parent bean. The following pair of listings shows how to use the parent attribute:
<!-- in the parent context -->
<bean id="accountService" class="com.something.SimpleAccountService">
<!-- insert dependencies as required as here -->
</bean>
<!-- in the child (descendant) context -->
<!-- bean name is the same as the parent bean -->
<bean id="accountService"
class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="target">
<ref parent="accountService"/> <!-- notice how we refer to the parent bean -->
</property>
<!-- insert other configuration and dependencies as required here -->
</bean>
The local attribute on the ref element is no longer supported in the 4.0 beans XSD, since it does not provide value over a regular bean reference any more. Change your existing ref local references to ref bean when upgrading to the 4.0 schema.
Inner Beans
A <bean/> element inside the <property/> or <constructor-arg/> elements defines an inner bean, as the following example shows:
<bean id="outer" class="...">
<!-- instead of using a reference to a target bean, simply define the target bean inline -->
<property name="target">
<bean class="com.example.Person"> <!-- this is the inner bean -->
<property name="name" value="Fiona Apple"/>
<property name="age" value="25"/>
</bean>
</property>
</bean>
An inner bean definition does not require a defined ID or name. If specified, the container does not use such a value as an identifier. The container also ignores the scope flag on creation, because inner beans are always anonymous and are always created with the outer bean. It is not possible to access inner beans independently or to inject them into collaborating beans other than into the enclosing bean.
As a corner case, it is possible to receive destruction callbacks from a custom scope — for example, for a request-scoped inner bean contained within a singleton bean. The creation of the inner bean instance is tied to its containing bean, but destruction callbacks let it participate in the request scope’s lifecycle. This is not a common scenario. Inner beans typically simply share their containing bean’s scope.
Collections
The <list/>, <set/>, <map/>, and <props/> elements set the properties and arguments of the Java Collection types List, Set, Map, and Properties, respectively. The following example shows how to use them:
<bean id="moreComplexObject" class="example.ComplexObject">
<!-- results in a setAdminEmails(java.util.Properties) call -->
<property name="adminEmails">
<props>
<prop key="administrator">[email protected]</prop>
<prop key="support">[email protected]</prop>
<prop key="development">[email protected]</prop>
</props>
</property>
<!-- results in a setSomeList(java.util.List) call -->
<property name="someList">
<list>
<value>a list element followed by a reference</value>
<ref bean="myDataSource" />
</list>
</property>
<!-- results in a setSomeMap(java.util.Map) call -->
<property name="someMap">
<map>
<entry key="an entry" value="just some string"/>
<entry key="a ref" value-ref="myDataSource"/>
</map>
</property>
<!-- results in a setSomeSet(java.util.Set) call -->
<property name="someSet">
<set>
<value>just some string</value>
<ref bean="myDataSource" />
</set>
</property>
</bean>
The value of a map key or value, or a set value, can also be any of the following elements:
bean | ref | idref | list | set | map | props | value | null
Collection Merging
The Spring container also supports merging collections. An application developer can define a parent <list/>, <map/>, <set/> or <props/> element and have child <list/>, <map/>, <set/> or <props/> elements inherit and override values from the parent collection. That is, the child collection’s values are the result of merging the elements of the parent and child collections, with the child’s collection elements overriding values specified in the parent collection.
This section on merging discusses the parent-child bean mechanism. Readers unfamiliar with parent and child bean definitions may wish to read the relevant section before continuing.
The following example demonstrates collection merging:
<beans>
<bean id="parent" abstract="true" class="example.ComplexObject">
<property name="adminEmails">
<props>
<prop key="administrator">[email protected]</prop>
<prop key="support">[email protected]</prop>
</props>
</property>
</bean>
<bean id="child" parent="parent">
<property name="adminEmails">
<!-- the merge is specified on the child collection definition -->
<props merge="true">
<prop key="sales">[email protected]</prop>
<prop key="support">[email protected]</prop>
</props>
</property>
</bean>
</beans>
Notice the use of the merge="true" attribute on the <props/> element of the adminEmails property of the child bean definition. When the child bean is resolved and instantiated by the container, the resulting instance has an adminEmails Properties collection that contains the result of merging the child’s adminEmails collection with the parent’s adminEmails collection. The following listing shows the result:
[email protected]
[email protected]
[email protected]
The child Properties collection’s value set inherits all property elements from the parent <props/>, and the child’s value for the support entry overrides the value in the parent collection.
This merging behavior applies similarly to the <list/>, <map/>, and <set/> collection types. In the specific case of the <list/> element, the semantics associated with the List collection type (that is, the notion of an ordered collection of values) is maintained. The parent’s values precede all of the child list’s values. In the case of the Map, Set, and Properties collection types, no ordering exists. Hence, no ordering semantics are in effect for the collection types that underlie the associated Map, Set, and Properties implementation types that the container uses internally.
Limitations of Collection Merging
You cannot merge different collection types (such as a Map and a List). If you do attempt to do so, an appropriate Exception is thrown. The merge attribute must be specified on the lower, inherited, child definition. Specifying the merge attribute on a parent collection definition is redundant and does not result in the desired merging.
Strongly-typed collection
With the introduction of generic types in Java 5, you can use strongly typed collections. That is, it is possible to declare a Collection type such that it can only contain (for example) String elements. If you use Spring to dependency-inject a strongly-typed Collection into a bean, you can take advantage of Spring’s type-conversion support such that the elements of your strongly-typed Collection instances are converted to the appropriate type prior to being added to the Collection. The following Java class and bean definition show how to do so:
Java
public class SomeClass {
private Map<String, Float> accounts;
public void setAccounts(Map<String, Float> accounts) {
this.accounts = accounts;
}
}
Kotlin
class SomeClass {
lateinit var accounts: Map<String, Float>
}
<beans>
<bean id="something" class="x.y.SomeClass">
<property name="accounts">
<map>
<entry key="one" value="9.99"/>
<entry key="two" value="2.75"/>
<entry key="six" value="3.99"/>
</map>
</property>
</bean>
</beans>
When the accounts property of the something bean is prepared for injection, the generics information about the element type of the strongly-typed Map<String, Float> is available by reflection. Thus, Spring’s type conversion infrastructure recognizes the various value elements as being of type Float, and the string values (9.99, 2.75, and 3.99) are converted into an actual Float type.
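The reflection step described above can be illustrated in plain Java. The following sketch (not Spring's actual conversion infrastructure; the valueTypeOf helper is hypothetical) shows that the Float value type of the Map<String, Float> setter survives type erasure and is readable from the method signature, which is what lets a container convert the string values:

```java
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.util.Map;

public class Main {

    // Mirrors the SomeClass bean from the example above
    public static class SomeClass {
        private Map<String, Float> accounts;
        public void setAccounts(Map<String, Float> accounts) {
            this.accounts = accounts;
        }
    }

    // Discover the Map value type from the setter's generic signature,
    // as a container could do before converting raw string values
    static Class<?> valueTypeOf(Class<?> beanClass, String setterName) {
        try {
            Method setter = beanClass.getMethod(setterName, Map.class);
            ParameterizedType mapType =
                    (ParameterizedType) setter.getGenericParameterTypes()[0];
            return (Class<?>) mapType.getActualTypeArguments()[1];
        } catch (ReflectiveOperationException ex) {
            throw new IllegalStateException(ex);
        }
    }

    public static void main(String[] args) {
        Class<?> valueType = valueTypeOf(SomeClass.class, "setAccounts");
        System.out.println(valueType.getSimpleName()); // prints Float
        // Knowing the target type, "9.99" can be converted accordingly
        if (valueType == Float.class) {
            System.out.println(Float.valueOf("9.99")); // prints 9.99
        }
    }
}
```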
Null and Empty String Values
Spring treats empty arguments for properties and the like as empty Strings. The following XML-based configuration metadata snippet sets the email property to the empty String value ("").
<bean class="ExampleBean">
<property name="email" value=""/>
</bean>
The preceding example is equivalent to the following Java code:
Java
exampleBean.setEmail("");
Kotlin
exampleBean.email = ""
The <null/> element handles null values. The following listing shows an example:
<bean class="ExampleBean">
<property name="email">
<null/>
</property>
</bean>
The preceding configuration is equivalent to the following Java code:
Java
exampleBean.setEmail(null);
Kotlin
exampleBean.email = null
XML Shortcut with the p-namespace
The p-namespace lets you use the bean element’s attributes (instead of nested <property/> elements) to describe your property values, collaborating beans, or both.
Spring supports extensible configuration formats with namespaces, which are based on an XML Schema definition. The beans configuration format discussed in this chapter is defined in an XML Schema document. However, the p-namespace is not defined in an XSD file and exists only in the core of Spring.
The following example shows two XML snippets (the first uses standard XML format and the second uses the p-namespace) that resolve to the same result:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<bean name="classic" class="com.example.ExampleBean">
<property name="email" value="[email protected]"/>
</bean>
<bean name="p-namespace" class="com.example.ExampleBean"
p:email="[email protected]"/>
</beans>
The example shows an attribute in the p-namespace called email in the bean definition. This tells Spring to include a property declaration. As previously mentioned, the p-namespace does not have a schema definition, so you can set the name of the attribute to the property name.
This next example includes two more bean definitions that both have a reference to another bean:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<bean name="john-classic" class="com.example.Person">
<property name="name" value="John Doe"/>
<property name="spouse" ref="jane"/>
</bean>
<bean name="john-modern"
class="com.example.Person"
p:name="John Doe"
p:spouse-ref="jane"/>
<bean name="jane" class="com.example.Person">
<property name="name" value="Jane Doe"/>
</bean>
</beans>
This example includes not only a property value using the p-namespace but also uses a special format to declare property references. Whereas the first bean definition uses <property name="spouse" ref="jane"/> to create a reference from bean john to bean jane, the second bean definition uses p:spouse-ref="jane" as an attribute to do the exact same thing. In this case, spouse is the property name, whereas the -ref part indicates that this is not a straight value but rather a reference to another bean.
The p-namespace is not as flexible as the standard XML format. For example, the format for declaring property references clashes with properties that end in Ref, whereas the standard XML format does not. We recommend that you choose your approach carefully and communicate this to your team members to avoid producing XML documents that use all three approaches at the same time.
XML Shortcut with the c-namespace
Similar to the XML Shortcut with the p-namespace, the c-namespace, introduced in Spring 3.1, allows inlined attributes for configuring the constructor arguments rather than nested <constructor-arg/> elements.
The following example uses the c: namespace to do the same thing as the example from Constructor-based Dependency Injection:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:c="http://www.springframework.org/schema/c"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="beanTwo" class="x.y.ThingTwo"/>
<bean id="beanThree" class="x.y.ThingThree"/>
<!-- traditional declaration with optional argument names -->
<bean id="beanOne" class="x.y.ThingOne">
<constructor-arg name="thingTwo" ref="beanTwo"/>
<constructor-arg name="thingThree" ref="beanThree"/>
<constructor-arg name="email" value="[email protected]"/>
</bean>
<!-- c-namespace declaration with argument names -->
<bean id="beanOne" class="x.y.ThingOne" c:thingTwo-ref="beanTwo"
c:thingThree-ref="beanThree" c:email="[email protected]"/>
</beans>
The c: namespace uses the same conventions as the p: one (a trailing -ref for bean references) for setting the constructor arguments by their names. Similarly, it needs to be declared in the XML file even though it is not defined in an XSD schema (it exists inside the Spring core).
For the rare cases where the constructor argument names are not available (usually if the bytecode was compiled without debugging information), you can fall back to the argument indexes, as follows:
<!-- c-namespace index declaration -->
<bean id="beanOne" class="x.y.ThingOne" c:_0-ref="beanTwo" c:_1-ref="beanThree"
c:_2="[email protected]"/>
Due to the XML grammar, the index notation requires the presence of the leading _, as XML attribute names cannot start with a number (even though some IDEs allow it). A corresponding index notation is also available for <constructor-arg> elements but not commonly used since the plain order of declaration is usually sufficient there.
In practice, the constructor resolution mechanism is quite efficient in matching arguments, so unless you really need to, we recommend using the name notation throughout your configuration.
Compound Property Names
You can use compound or nested property names when you set bean properties, as long as all components of the path except the final property name are not null. Consider the following bean definition:
<bean id="something" class="things.ThingOne">
<property name="fred.bob.sammy" value="123" />
</bean>
The something bean has a fred property, which has a bob property, which has a sammy property, and that final sammy property is being set to a value of 123. In order for this to work, the fred property of something and the bob property of fred must not be null after the bean is constructed. Otherwise, a NullPointerException is thrown.
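What the container does for the fred.bob.sammy path is equivalent to a chain of getter calls followed by one setter call. A plain-Java sketch (the class bodies below are assumptions; the source only names the properties) makes the non-null requirement concrete:

```java
public class Main {

    public static class Bob {
        private String sammy;
        public String getSammy() { return sammy; }
        public void setSammy(String sammy) { this.sammy = sammy; }
    }

    public static class Fred {
        // Must be non-null after construction, or the path fails
        private Bob bob = new Bob();
        public Bob getBob() { return bob; }
    }

    public static class ThingOne {
        // Likewise must be non-null after construction
        private Fred fred = new Fred();
        public Fred getFred() { return fred; }
    }

    public static void main(String[] args) {
        ThingOne something = new ThingOne();
        // What <property name="fred.bob.sammy" value="123"/> amounts to;
        // a null fred or bob here would raise a NullPointerException
        something.getFred().getBob().setSammy("123");
        System.out.println(something.getFred().getBob().getSammy()); // prints 123
    }
}
```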
1.4.3. Using depends-on
If a bean is a dependency of another bean, that usually means that one bean is set as a property of another. Typically you accomplish this with the <ref/> element in XML-based configuration metadata. However, sometimes dependencies between beans are less direct. An example is when a static initializer in a class needs to be triggered, such as for database driver registration. The depends-on attribute can explicitly force one or more beans to be initialized before the bean using this element is initialized. The following example uses the depends-on attribute to express a dependency on a single bean:
<bean id="beanOne" class="ExampleBean" depends-on="manager"/>
<bean id="manager" class="ManagerBean" />
To express a dependency on multiple beans, supply a list of bean names as the value of the depends-on attribute (commas, whitespace, and semicolons are valid delimiters):
<bean id="beanOne" class="ExampleBean" depends-on="manager,accountDao">
<property name="manager" ref="manager" />
</bean>
<bean id="manager" class="ManagerBean" />
<bean id="accountDao" class="x.y.jdbc.JdbcAccountDao" />
The depends-on attribute can specify both an initialization-time dependency and, in the case of singleton beans only, a corresponding destruction-time dependency. Dependent beans that define a depends-on relationship with a given bean are destroyed first, prior to the given bean itself being destroyed. Thus, depends-on can also control shutdown order.
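The database-driver case relies on class initialization side effects. The following plain-Java sketch (the Driver and Consumer classes are hypothetical stand-ins, not Spring code) shows why something must force initialization first, which is the role depends-on plays in the container:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {

    static final List<String> registry = new ArrayList<>();

    // Initialization has a side effect, like a JDBC driver
    // registering itself with DriverManager in a static initializer
    static class Driver {
        static { registry.add("driver"); }
    }

    static class Consumer {
        Consumer() {
            // Fails unless Driver was initialized first
            if (!registry.contains("driver")) {
                throw new IllegalStateException("driver not registered");
            }
        }
    }

    // What depends-on guarantees: the dependency is initialized
    // before the dependent bean is created
    static void initializeDriver() {
        try {
            Class.forName(Driver.class.getName(), true, Main.class.getClassLoader());
        } catch (ClassNotFoundException ex) {
            throw new IllegalStateException(ex);
        }
    }

    public static void main(String[] args) {
        initializeDriver();
        new Consumer();
        System.out.println("ok"); // prints ok
    }
}
```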
1.4.4. Lazy-initialized Beans
By default, ApplicationContext implementations eagerly create and configure all singleton beans as part of the initialization process. Generally, this pre-instantiation is desirable, because errors in the configuration or surrounding environment are discovered immediately, as opposed to hours or even days later. When this behavior is not desirable, you can prevent pre-instantiation of a singleton bean by marking the bean definition as being lazy-initialized. A lazy-initialized bean tells the IoC container to create a bean instance when it is first requested, rather than at startup.
In XML, this behavior is controlled by the lazy-init attribute on the <bean/> element, as the following example shows:
<bean id="lazy" class="com.something.ExpensiveToCreateBean" lazy-init="true"/>
<bean name="not.lazy" class="com.something.AnotherBean"/>
When the preceding configuration is consumed by an ApplicationContext, the lazy bean is not eagerly pre-instantiated when the ApplicationContext starts, whereas the not.lazy bean is eagerly pre-instantiated.
However, when a lazy-initialized bean is a dependency of a singleton bean that is not lazy-initialized, the ApplicationContext creates the lazy-initialized bean at startup, because it must satisfy the singleton’s dependencies. The lazy-initialized bean is injected into a singleton bean elsewhere that is not lazy-initialized.
You can also control lazy-initialization at the container level by using the default-lazy-init attribute on the <beans/> element, as the following example shows:
<beans default-lazy-init="true">
<!-- no beans will be pre-instantiated... -->
</beans>
1.4.5. Autowiring Collaborators
The Spring container can autowire relationships between collaborating beans. You can let Spring resolve collaborators (other beans) automatically for your bean by inspecting the contents of the ApplicationContext. Autowiring has the following advantages:
• Autowiring can significantly reduce the need to specify properties or constructor arguments. (Other mechanisms such as a bean template discussed elsewhere in this chapter are also valuable in this regard.)
• Autowiring can update a configuration as your objects evolve. For example, if you need to add a dependency to a class, that dependency can be satisfied automatically without you needing to modify the configuration. Thus autowiring can be especially useful during development, without negating the option of switching to explicit wiring when the code base becomes more stable.
When using XML-based configuration metadata (see Dependency Injection), you can specify the autowire mode for a bean definition with the autowire attribute of the <bean/> element. The autowiring functionality has four modes. You specify autowiring per bean and can thus choose which ones to autowire. The following table describes the four autowiring modes:
Table 2. Autowiring modes
Mode Explanation
no
(Default) No autowiring. Bean references must be defined by ref elements. Changing the default setting is not recommended for larger deployments, because specifying collaborators explicitly gives greater control and clarity. To some extent, it documents the structure of a system.
byName
Autowiring by property name. Spring looks for a bean with the same name as the property that needs to be autowired. For example, if a bean definition is set to autowire by name and it contains a master property (that is, it has a setMaster(..) method), Spring looks for a bean definition named master and uses it to set the property.
byType
Lets a property be autowired if exactly one bean of the property type exists in the container. If more than one exists, a fatal exception is thrown, which indicates that you may not use byType autowiring for that bean. If there are no matching beans, nothing happens (the property is not set).
constructor
Analogous to byType but applies to constructor arguments. If there is not exactly one bean of the constructor argument type in the container, a fatal error is raised.
With byType or constructor autowiring mode, you can wire arrays and typed collections. In such cases, all autowire candidates within the container that match the expected type are provided to satisfy the dependency. You can autowire strongly-typed Map instances if the expected key type is String. An autowired Map instance’s values consist of all bean instances that match the expected type, and the Map instance’s keys contain the corresponding bean names.
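What the container provides for such an autowired Map can be sketched in plain Java (the beansOfType helper and the DAO classes are hypothetical illustrations; Spring's actual resolution is richer):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {

    interface AccountDao {}
    static class JdbcAccountDao implements AccountDao {}
    static class JpaAccountDao implements AccountDao {}

    // All candidates that match the expected type, keyed by bean name:
    // roughly what an autowired Map<String, AccountDao> receives
    static <T> Map<String, T> beansOfType(Map<String, Object> container, Class<T> type) {
        Map<String, T> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : container.entrySet()) {
            if (type.isInstance(e.getValue())) {
                result.put(e.getKey(), type.cast(e.getValue()));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> container = new LinkedHashMap<>();
        container.put("jdbcDao", new JdbcAccountDao());
        container.put("jpaDao", new JpaAccountDao());
        container.put("other", "not a dao");

        Map<String, AccountDao> wired = beansOfType(container, AccountDao.class);
        System.out.println(wired.keySet()); // prints [jdbcDao, jpaDao]
    }
}
```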
Limitations and Disadvantages of Autowiring
Autowiring works best when it is used consistently across a project. If autowiring is not used in general, it might be confusing to developers to use it to wire only one or two bean definitions.
Consider the limitations and disadvantages of autowiring:
• Explicit dependencies in property and constructor-arg settings always override autowiring. You cannot autowire simple properties such as primitives, Strings, and Classes (and arrays of such simple properties). This limitation is by design.
• Autowiring is less exact than explicit wiring, although, as noted in the earlier table, Spring is careful to avoid guessing in cases of ambiguity that might have unexpected results. Also, the relationships between your Spring-managed objects are no longer documented explicitly.
• Wiring information may not be available to tools that may generate documentation from a Spring container.
• Multiple bean definitions within the container may match the type specified by the setter method or constructor argument to be autowired. For arrays, collections, or Map instances, this is not necessarily a problem. However, for dependencies that expect a single value, this ambiguity is not arbitrarily resolved. If no unique bean definition is available, an exception is thrown.
In the latter scenario, you have several options:
• Abandon autowiring in favor of explicit wiring.
• Avoid autowiring for a bean definition by setting its autowire-candidate attributes to false, as described in the next section.
• Designate a single bean definition as the primary candidate by setting the primary attribute of its <bean/> element to true.
• Implement the more fine-grained control available with annotation-based configuration, as described in Annotation-based Container Configuration.
Excluding a Bean from Autowiring
On a per-bean basis, you can exclude a bean from autowiring. In Spring’s XML format, set the autowire-candidate attribute of the <bean/> element to false. The container makes that specific bean definition unavailable to the autowiring infrastructure (including annotation style configurations such as @Autowired).
The autowire-candidate attribute is designed to only affect type-based autowiring. It does not affect explicit references by name, which get resolved even if the specified bean is not marked as an autowire candidate. As a consequence, autowiring by name nevertheless injects a bean if the name matches.
You can also limit autowire candidates based on pattern-matching against bean names. The top-level <beans/> element accepts one or more patterns within its default-autowire-candidates attribute. For example, to limit autowire candidate status to any bean whose name ends with Repository, provide a value of *Repository. To provide multiple patterns, define them in a comma-separated list. An explicit value of true or false for a bean definition’s autowire-candidate attribute always takes precedence. For such beans, the pattern matching rules do not apply.
These techniques are useful for beans that you never want to be injected into other beans by autowiring. It does not mean that an excluded bean cannot itself be configured by using autowiring. Rather, the bean itself is not a candidate for autowiring other beans.
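The simple bean-name patterns accepted by default-autowire-candidates (a leading or trailing *) can be approximated as follows (a sketch, not Spring's actual pattern-matching implementation):

```java
public class Main {

    // Supports "xxx*", "*xxx", "*xxx*", and exact bean-name matches
    static boolean simpleMatch(String pattern, String name) {
        if (pattern.equals(name)) {
            return true;
        }
        boolean leading = pattern.startsWith("*");
        boolean trailing = pattern.endsWith("*");
        String body = pattern.substring(leading ? 1 : 0,
                pattern.length() - (trailing ? 1 : 0));
        if (leading && trailing) return name.contains(body);
        if (leading) return name.endsWith(body);
        if (trailing) return name.startsWith(body);
        return false;
    }

    public static void main(String[] args) {
        // *Repository limits candidate status to beans whose names end that way
        System.out.println(simpleMatch("*Repository", "accountRepository")); // prints true
        System.out.println(simpleMatch("*Repository", "accountService"));    // prints false
    }
}
```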
1.4.6. Method Injection
In most application scenarios, most beans in the container are singletons. When a singleton bean needs to collaborate with another singleton bean or a non-singleton bean needs to collaborate with another non-singleton bean, you typically handle the dependency by defining one bean as a property of the other. A problem arises when the bean lifecycles are different. Suppose singleton bean A needs to use non-singleton (prototype) bean B, perhaps on each method invocation on A. The container creates the singleton bean A only once, and thus only gets one opportunity to set the properties. The container cannot provide bean A with a new instance of bean B every time one is needed.
A solution is to forego some inversion of control. You can make bean A aware of the container by implementing the ApplicationContextAware interface and making a getBean("B") call to the container to ask for (a typically new) bean B instance every time bean A needs it. The following example shows this approach:
Java
// a class that uses a stateful Command-style class to perform some processing
package fiona.apple;
// Spring-API imports
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
public class CommandManager implements ApplicationContextAware {
private ApplicationContext applicationContext;
public Object process(Map commandState) {
// grab a new instance of the appropriate Command
Command command = createCommand();
// set the state on the (hopefully brand new) Command instance
command.setState(commandState);
return command.execute();
}
protected Command createCommand() {
// notice the Spring API dependency!
return this.applicationContext.getBean("command", Command.class);
}
public void setApplicationContext(
ApplicationContext applicationContext) throws BeansException {
this.applicationContext = applicationContext;
}
}
Kotlin
// a class that uses a stateful Command-style class to perform some processing
package fiona.apple
// Spring-API imports
import org.springframework.context.ApplicationContext
import org.springframework.context.ApplicationContextAware
class CommandManager : ApplicationContextAware {
private lateinit var applicationContext: ApplicationContext
fun process(commandState: Map<*, *>): Any {
// grab a new instance of the appropriate Command
val command = createCommand()
// set the state on the (hopefully brand new) Command instance
command.state = commandState
return command.execute()
}
// notice the Spring API dependency!
protected fun createCommand() =
applicationContext.getBean("command", Command::class.java)
override fun setApplicationContext(applicationContext: ApplicationContext) {
this.applicationContext = applicationContext
}
}
The preceding is not desirable, because the business code is aware of and coupled to the Spring Framework. Method Injection, a somewhat advanced feature of the Spring IoC container, lets you handle this use case cleanly.
You can read more about the motivation for Method Injection in this blog entry.
Lookup Method Injection
Lookup method injection is the ability of the container to override methods on container-managed beans and return the lookup result for another named bean in the container. The lookup typically involves a prototype bean, as in the scenario described in the preceding section. The Spring Framework implements this method injection by using bytecode generation from the CGLIB library to dynamically generate a subclass that overrides the method.
• For this dynamic subclassing to work, the class that the Spring bean container subclasses cannot be final, and the method to be overridden cannot be final, either.
• Unit-testing a class that has an abstract method requires you to subclass the class yourself and to supply a stub implementation of the abstract method.
• Concrete methods are also necessary for component scanning, which requires concrete classes to pick up.
• A further key limitation is that lookup methods do not work with factory methods and in particular not with @Bean methods in configuration classes, since, in that case, the container is not in charge of creating the instance and therefore cannot create a runtime-generated subclass on the fly.
In the case of the CommandManager class in the previous code snippet, the Spring container dynamically overrides the implementation of the createCommand() method. The CommandManager class does not have any Spring dependencies, as the reworked example shows:
Java
package fiona.apple;
// no more Spring imports!
public abstract class CommandManager {
public Object process(Object commandState) {
// grab a new instance of the appropriate Command interface
Command command = createCommand();
// set the state on the (hopefully brand new) Command instance
command.setState(commandState);
return command.execute();
}
// okay... but where is the implementation of this method?
protected abstract Command createCommand();
}
Kotlin
package fiona.apple
// no more Spring imports!
abstract class CommandManager {
fun process(commandState: Any): Any {
// grab a new instance of the appropriate Command interface
val command = createCommand()
// set the state on the (hopefully brand new) Command instance
command.state = commandState
return command.execute()
}
// okay... but where is the implementation of this method?
protected abstract fun createCommand(): Command
}
In the client class that contains the method to be injected (the CommandManager in this case), the method to be injected requires a signature of the following form:
<public|protected> [abstract] <return-type> theMethodName(no-arguments);
If the method is abstract, the dynamically-generated subclass implements the method. Otherwise, the dynamically-generated subclass overrides the concrete method defined in the original class. Consider the following example:
<!-- a stateful bean deployed as a prototype (non-singleton) -->
<bean id="myCommand" class="fiona.apple.AsyncCommand" scope="prototype">
<!-- inject dependencies here as required -->
</bean>
<!-- commandManager uses myCommand -->
<bean id="commandManager" class="fiona.apple.CommandManager">
<lookup-method name="createCommand" bean="myCommand"/>
</bean>
The bean identified as commandManager calls its own createCommand() method whenever it needs a new instance of the myCommand bean. You must be careful to deploy the myCommand bean as a prototype if that is actually what is needed. If it is a singleton, the same instance of the myCommand bean is returned each time.
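The effect of the container-generated subclass can be imitated with a hand-written one. In this plain-Java sketch (the Command stand-in and the created counter are assumptions for illustration), each process(..) call yields a fresh Command, just as a prototype-scoped lookup target would:

```java
public class Main {

    // Stand-in for the Command type from the example
    static class Command {
        Object state;
        Object execute() { return state; }
    }

    static abstract class CommandManager {
        public Object process(Object commandState) {
            Command command = createCommand();
            command.state = commandState;
            return command.execute();
        }
        protected abstract Command createCommand();
    }

    // Hand-written equivalent of the subclass Spring generates with CGLIB
    static class PrototypeCommandManager extends CommandManager {
        int created;

        @Override
        protected Command createCommand() {
            created++;             // like getBean("myCommand") on a prototype bean
            return new Command();  // a fresh instance on every lookup
        }
    }

    public static void main(String[] args) {
        PrototypeCommandManager manager = new PrototypeCommandManager();
        manager.process("first");
        manager.process("second");
        System.out.println(manager.created); // prints 2
    }
}
```

Had the target been a singleton, createCommand() would instead hand back the same instance on every call, which is why the prototype scope matters here.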
Alternatively, within the annotation-based component model, you can declare a lookup method through the @Lookup annotation, as the following example shows:
Java
public abstract class CommandManager {
public Object process(Object commandState) {
Command command = createCommand();
command.setState(commandState);
return command.execute();
}
@Lookup("myCommand")
protected abstract Command createCommand();
}
Kotlin
abstract class CommandManager {
fun process(commandState: Any): Any {
val command = createCommand()
command.state = commandState
return command.execute()
}
@Lookup("myCommand")
protected abstract fun createCommand(): Command
}
Or, more idiomatically, you can rely on the target bean getting resolved against the declared return type of the lookup method:
Java
public abstract class CommandManager {
public Object process(Object commandState) {
MyCommand command = createCommand();
command.setState(commandState);
return command.execute();
}
@Lookup
protected abstract MyCommand createCommand();
}
Kotlin
abstract class CommandManager {
fun process(commandState: Any): Any {
val command = createCommand()
command.state = commandState
return command.execute()
}
@Lookup
protected abstract fun createCommand(): Command
}
Note that you should typically declare such annotated lookup methods with a concrete stub implementation, in order for them to be compatible with Spring’s component scanning rules where abstract classes get ignored by default. This limitation does not apply to explicitly registered or explicitly imported bean classes.
Another way of accessing differently scoped target beans is an ObjectFactory/Provider injection point. See Scoped Beans as Dependencies.
You may also find the ServiceLocatorFactoryBean (in the org.springframework.beans.factory.config package) to be useful.
Arbitrary Method Replacement
A less useful form of method injection than lookup method injection is the ability to replace arbitrary methods in a managed bean with another method implementation. You can safely skip the rest of this section until you actually need this functionality.
With XML-based configuration metadata, you can use the replaced-method element to replace an existing method implementation with another, for a deployed bean. Consider the following class, which has a method called computeValue that we want to override:
Java
public class MyValueCalculator {
public String computeValue(String input) {
// some real code...
}
// some other methods...
}
Kotlin
class MyValueCalculator {
fun computeValue(input: String): String {
// some real code...
}
// some other methods...
}
A class that implements the org.springframework.beans.factory.support.MethodReplacer interface provides the new method definition, as the following example shows:
Java
/**
* meant to be used to override the existing computeValue(String)
* implementation in MyValueCalculator
*/
public class ReplacementComputeValue implements MethodReplacer {
public Object reimplement(Object o, Method m, Object[] args) throws Throwable {
// get the input value, work with it, and return a computed result
String input = (String) args[0];
...
return ...;
}
}
Kotlin
/**
* meant to be used to override the existing computeValue(String)
* implementation in MyValueCalculator
*/
class ReplacementComputeValue : MethodReplacer {
override fun reimplement(obj: Any, method: Method, args: Array<out Any>): Any {
// get the input value, work with it, and return a computed result
val input = args[0] as String;
...
return ...;
}
}
The bean definition to deploy the original class and specify the method override would resemble the following example:
<bean id="myValueCalculator" class="x.y.z.MyValueCalculator">
<!-- arbitrary method replacement -->
<replaced-method name="computeValue" replacer="replacementComputeValue">
<arg-type>String</arg-type>
</replaced-method>
</bean>
<bean id="replacementComputeValue" class="a.b.c.ReplacementComputeValue"/>
You can use one or more <arg-type/> elements within the <replaced-method/> element to indicate the method signature of the method being overridden. The signature for the arguments is necessary only if the method is overloaded and multiple variants exist within the class. For convenience, the type string for an argument may be a substring of the fully qualified type name. For example, the following all match java.lang.String:
java.lang.String
String
Str
Because the number of arguments is often enough to distinguish between each possible choice, this shortcut can save a lot of typing, by letting you type only the shortest string that matches an argument type.
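As an illustration, the substring matching described above can be mimicked with plain reflection. The helper below is purely hypothetical (Spring's actual resolution happens inside the container), but it shows how a short hint such as Str is enough to disambiguate an overload:

```java
import java.lang.reflect.Method;

public class ArgTypeMatchDemo {

    // Hypothetical stand-in for the class whose method is being replaced.
    static class MyValueCalculator {
        public String computeValue(String input) { return input; }
        public String computeValue(int input) { return String.valueOf(input); }
    }

    // Finds a method whose parameter type names each contain the corresponding
    // <arg-type/> hint, mimicking the convenience matching described above.
    // Illustrative only, not Spring's implementation.
    static Method findMethod(Class<?> target, String name, String... hints) {
        for (Method m : target.getMethods()) {
            if (!m.getName().equals(name)) continue;
            Class<?>[] params = m.getParameterTypes();
            if (params.length != hints.length) continue;
            boolean allMatch = true;
            for (int i = 0; i < params.length; i++) {
                if (!params[i].getName().contains(hints[i])) {
                    allMatch = false;
                    break;
                }
            }
            if (allMatch) return m;
        }
        return null;
    }

    public static void main(String[] args) {
        // "Str" is enough to select the computeValue(String) overload,
        // because the int overload's parameter name does not contain it
        Method m = findMethod(MyValueCalculator.class, "computeValue", "Str");
        System.out.println(m.getParameterTypes()[0].getName());
    }
}
```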
1.5. Bean Scopes
When you create a bean definition, you create a recipe for creating actual instances of the class defined by that bean definition. The idea that a bean definition is a recipe is important, because it means that, as with a class, you can create many object instances from a single recipe.
You can control not only the various dependencies and configuration values that are to be plugged into an object that is created from a particular bean definition but also control the scope of the objects created from a particular bean definition. This approach is powerful and flexible, because you can choose the scope of the objects you create through configuration instead of having to bake in the scope of an object at the Java class level. Beans can be defined to be deployed in one of a number of scopes. The Spring Framework supports six scopes, four of which are available only if you use a web-aware ApplicationContext. You can also create a custom scope.
The following table describes the supported scopes:
Table 3. Bean scopes
Scope Description
singleton
(Default) Scopes a single bean definition to a single object instance for each Spring IoC container.
prototype
Scopes a single bean definition to any number of object instances.
request
Scopes a single bean definition to the lifecycle of a single HTTP request. That is, each HTTP request has its own instance of a bean created off the back of a single bean definition. Only valid in the context of a web-aware Spring ApplicationContext.
session
Scopes a single bean definition to the lifecycle of an HTTP Session. Only valid in the context of a web-aware Spring ApplicationContext.
application
Scopes a single bean definition to the lifecycle of a ServletContext. Only valid in the context of a web-aware Spring ApplicationContext.
websocket
Scopes a single bean definition to the lifecycle of a WebSocket. Only valid in the context of a web-aware Spring ApplicationContext.
As of Spring 3.0, a thread scope is available but is not registered by default. For more information, see the documentation for SimpleThreadScope. For instructions on how to register this or any other custom scope, see Using a Custom Scope.
1.5.1. The Singleton Scope
Only one shared instance of a singleton bean is managed, and all requests for beans with an ID or IDs that match that bean definition result in that one specific bean instance being returned by the Spring container.
To put it another way, when you define a bean definition and it is scoped as a singleton, the Spring IoC container creates exactly one instance of the object defined by that bean definition. This single instance is stored in a cache of such singleton beans, and all subsequent requests and references for that named bean return the cached object. The following image shows how the singleton scope works:
[Figure: the singleton scope]
Spring’s concept of a singleton bean differs from the singleton pattern as defined in the Gang of Four (GoF) patterns book. The GoF singleton hard-codes the scope of an object such that one and only one instance of a particular class is created per ClassLoader. The scope of the Spring singleton is best described as being per-container and per-bean. This means that, if you define one bean for a particular class in a single Spring container, the Spring container creates one and only one instance of the class defined by that bean definition. The singleton scope is the default scope in Spring. To define a bean as a singleton in XML, you can define a bean as shown in the following example:
<bean id="accountService" class="com.something.DefaultAccountService"/>
<!-- the following is equivalent, though redundant (singleton scope is the default) -->
<bean id="accountService" class="com.something.DefaultAccountService" scope="singleton"/>
1.5.2. The Prototype Scope
The non-singleton prototype scope of bean deployment results in the creation of a new bean instance every time a request for that specific bean is made (that is, every time the bean is injected into another bean or you request it through a getBean() method call on the container). As a rule, you should use the prototype scope for all stateful beans and the singleton scope for stateless beans.
The following diagram illustrates the Spring prototype scope:
[Figure: the prototype scope]
(A data access object (DAO) is not typically configured as a prototype, because a typical DAO does not hold any conversational state. It was easier for us to reuse the core of the singleton diagram.)
The following example defines a bean as a prototype in XML:
<bean id="accountService" class="com.something.DefaultAccountService" scope="prototype"/>
In contrast to the other scopes, Spring does not manage the complete lifecycle of a prototype bean. The container instantiates, configures, and otherwise assembles a prototype object and hands it to the client, with no further record of that prototype instance. Thus, although initialization lifecycle callback methods are called on all objects regardless of scope, in the case of prototypes, configured destruction lifecycle callbacks are not called. The client code must clean up prototype-scoped objects and release expensive resources that the prototype beans hold. To get the Spring container to release resources held by prototype-scoped beans, try using a custom bean post-processor, which holds a reference to beans that need to be cleaned up.
In some respects, the Spring container’s role in regard to a prototype-scoped bean is a replacement for the Java new operator. All lifecycle management past that point must be handled by the client. (For details on the lifecycle of a bean in the Spring container, see Lifecycle Callbacks.)
1.5.3. Singleton Beans with Prototype-bean Dependencies
When you use singleton-scoped beans with dependencies on prototype beans, be aware that dependencies are resolved at instantiation time. Thus, if you dependency-inject a prototype-scoped bean into a singleton-scoped bean, a new prototype bean is instantiated and then dependency-injected into the singleton bean. The prototype instance is the sole instance that is ever supplied to the singleton-scoped bean.
However, suppose you want the singleton-scoped bean to acquire a new instance of the prototype-scoped bean repeatedly at runtime. You cannot dependency-inject a prototype-scoped bean into your singleton bean, because that injection occurs only once, when the Spring container instantiates the singleton bean and resolves and injects its dependencies. If you need a new instance of a prototype bean at runtime more than once, see Method Injection.
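The idea behind method injection can be sketched without a container at all: the singleton holds a factory rather than an instance, so each use yields a fresh object. In the plain-Java sketch below, Supplier stands in for Spring's ObjectFactory, and the class names are illustrative:

```java
import java.util.function.Supplier;

public class FreshPrototypeDemo {

    // Illustrative prototype-style class with per-invocation state.
    static class Command {
        Object state;
    }

    // The "singleton": it holds a factory, not an instance, so every
    // process(..) call works on a fresh Command object.
    static class CommandManager {
        private final Supplier<Command> commandFactory; // stands in for ObjectFactory<Command>

        CommandManager(Supplier<Command> commandFactory) {
            this.commandFactory = commandFactory;
        }

        Command process(Object commandState) {
            Command command = commandFactory.get(); // new instance per call
            command.state = commandState;
            return command;
        }
    }

    public static void main(String[] args) {
        CommandManager manager = new CommandManager(Command::new);
        Command first = manager.process("first");
        Command second = manager.process("second");
        System.out.println(first != second); // each call received its own instance
    }
}
```

This is essentially what an injected ObjectFactory, ObjectProvider, or JSR-330 Provider gives you inside a Spring-managed singleton.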
1.5.4. Request, Session, Application, and WebSocket Scopes
The request, session, application, and websocket scopes are available only if you use a web-aware Spring ApplicationContext implementation (such as XmlWebApplicationContext). If you use these scopes with regular Spring IoC containers, such as the ClassPathXmlApplicationContext, an IllegalStateException that complains about an unknown bean scope is thrown.
Initial Web Configuration
To support the scoping of beans at the request, session, application, and websocket levels (web-scoped beans), some minor initial configuration is required before you define your beans. (This initial setup is not required for the standard scopes: singleton and prototype.)
How you accomplish this initial setup depends on your particular Servlet environment.
If you access scoped beans within Spring Web MVC, in effect, within a request that is processed by the Spring DispatcherServlet, no special setup is necessary. DispatcherServlet already exposes all relevant state.
If you use a Servlet 2.5 web container, with requests processed outside of Spring’s DispatcherServlet (for example, when using JSF or Struts), you need to register the org.springframework.web.context.request.RequestContextListener ServletRequestListener. For Servlet 3.0+, this can be done programmatically by using the WebApplicationInitializer interface. Alternatively, or for older containers, add the following declaration to your web application’s web.xml file:
<web-app>
...
<listener>
<listener-class>
org.springframework.web.context.request.RequestContextListener
</listener-class>
</listener>
...
</web-app>
Alternatively, if there are issues with your listener setup, consider using Spring’s RequestContextFilter. The filter mapping depends on the surrounding web application configuration, so you have to change it as appropriate. The following listing shows the filter part of a web application:
<web-app>
...
<filter>
<filter-name>requestContextFilter</filter-name>
<filter-class>org.springframework.web.filter.RequestContextFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>requestContextFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
...
</web-app>
DispatcherServlet, RequestContextListener, and RequestContextFilter all do exactly the same thing, namely bind the HTTP request object to the Thread that is servicing that request. This makes beans that are request- and session-scoped available further down the call chain.
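The thread-binding idea can be sketched in plain Java: a ThreadLocal carries the current request for the servicing thread, so code further down the call chain can reach it without the request being passed explicitly. This is roughly what those components do on request entry and exit; the names below are illustrative, not Spring's API:

```java
public class ThreadBindingDemo {

    // Holds the "current request" for whichever thread is servicing it.
    static final ThreadLocal<String> currentRequest = new ThreadLocal<>();

    static String deepInCallChain() {
        // no request parameter in sight, yet the bound request is reachable
        return "handling " + currentRequest.get();
    }

    public static void main(String[] args) {
        currentRequest.set("GET /login");  // what a listener/filter would do on entry
        try {
            System.out.println(deepInCallChain());
        } finally {
            currentRequest.remove();       // and the cleanup it would do on exit
        }
    }
}
```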
Request scope
Consider the following XML configuration for a bean definition:
<bean id="loginAction" class="com.something.LoginAction" scope="request"/>
The Spring container creates a new instance of the LoginAction bean by using the loginAction bean definition for each and every HTTP request. That is, the loginAction bean is scoped at the HTTP request level. You can change the internal state of the instance that is created as much as you want, because other instances created from the same loginAction bean definition do not see these changes in state. They are particular to an individual request. When the request completes processing, the bean that is scoped to the request is discarded.
When using annotation-driven components or Java configuration, the @RequestScope annotation can be used to assign a component to the request scope. The following example shows how to do so:
Java
@RequestScope
@Component
public class LoginAction {
// ...
}
Kotlin
@RequestScope
@Component
class LoginAction {
// ...
}
Session Scope
Consider the following XML configuration for a bean definition:
<bean id="userPreferences" class="com.something.UserPreferences" scope="session"/>
The Spring container creates a new instance of the UserPreferences bean by using the userPreferences bean definition for the lifetime of a single HTTP Session. In other words, the userPreferences bean is effectively scoped at the HTTP Session level. As with request-scoped beans, you can change the internal state of the instance that is created as much as you want, knowing that other HTTP Session instances that are also using instances created from the same userPreferences bean definition do not see these changes in state, because they are particular to an individual HTTP Session. When the HTTP Session is eventually discarded, the bean that is scoped to that particular HTTP Session is also discarded.
When using annotation-driven components or Java configuration, you can use the @SessionScope annotation to assign a component to the session scope.
Java
@SessionScope
@Component
public class UserPreferences {
// ...
}
Kotlin
@SessionScope
@Component
class UserPreferences {
// ...
}
Application Scope
Consider the following XML configuration for a bean definition:
<bean id="appPreferences" class="com.something.AppPreferences" scope="application"/>
The Spring container creates a new instance of the AppPreferences bean by using the appPreferences bean definition once for the entire web application. That is, the appPreferences bean is scoped at the ServletContext level and stored as a regular ServletContext attribute. This is somewhat similar to a Spring singleton bean but differs in two important ways: It is a singleton per ServletContext, not per Spring ApplicationContext (of which there may be several in any given web application), and it is actually exposed and therefore visible as a ServletContext attribute.
When using annotation-driven components or Java configuration, you can use the @ApplicationScope annotation to assign a component to the application scope. The following example shows how to do so:
Java
@ApplicationScope
@Component
public class AppPreferences {
// ...
}
Kotlin
@ApplicationScope
@Component
class AppPreferences {
// ...
}
Scoped Beans as Dependencies
The Spring IoC container manages not only the instantiation of your objects (beans), but also the wiring up of collaborators (or dependencies). If you want to inject (for example) an HTTP request-scoped bean into another bean of a longer-lived scope, you may choose to inject an AOP proxy in place of the scoped bean. That is, you need to inject a proxy object that exposes the same public interface as the scoped object but that can also retrieve the real target object from the relevant scope (such as an HTTP request) and delegate method calls onto the real object.
You may also use <aop:scoped-proxy/> between beans that are scoped as singleton, with the reference then going through an intermediate proxy that is serializable and therefore able to re-obtain the target singleton bean on deserialization.
When declaring <aop:scoped-proxy/> against a bean of scope prototype, every method call on the shared proxy leads to the creation of a new target instance to which the call is then being forwarded.
Also, scoped proxies are not the only way to access beans from shorter scopes in a lifecycle-safe fashion. You may also declare your injection point (that is, the constructor or setter argument or autowired field) as ObjectFactory<MyTargetBean>, allowing for a getObject() call to retrieve the current instance on demand every time it is needed — without holding on to the instance or storing it separately.
As an extended variant, you may declare ObjectProvider<MyTargetBean>, which delivers several additional access variants, including getIfAvailable and getIfUnique.
The JSR-330 variant of this is called Provider and is used with a Provider<MyTargetBean> declaration and a corresponding get() call for every retrieval attempt. See here for more details on JSR-330 overall.
The configuration in the following example is only one line, but it is important to understand the “why” as well as the “how” behind it:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/aop
https://www.springframework.org/schema/aop/spring-aop.xsd">
<!-- an HTTP Session-scoped bean exposed as a proxy -->
<bean id="userPreferences" class="com.something.UserPreferences" scope="session">
<!-- instructs the container to proxy the surrounding bean -->
<aop:scoped-proxy/> (1)
</bean>
<!-- a singleton-scoped bean injected with a proxy to the above bean -->
<bean id="userService" class="com.something.SimpleUserService">
<!-- a reference to the proxied userPreferences bean -->
<property name="userPreferences" ref="userPreferences"/>
</bean>
</beans>
1 The line that defines the proxy.
To create such a proxy, you insert a child <aop:scoped-proxy/> element into a scoped bean definition (see Choosing the Type of Proxy to Create and XML Schema-based configuration). Why do definitions of beans scoped at the request, session, and custom-scope levels require the <aop:scoped-proxy/> element? Consider the following singleton bean definition and contrast it with what you need to define for the aforementioned scopes (note that the following userPreferences bean definition as it stands is incomplete):
<bean id="userPreferences" class="com.something.UserPreferences" scope="session"/>
<bean id="userManager" class="com.something.UserManager">
<property name="userPreferences" ref="userPreferences"/>
</bean>
In the preceding example, the singleton bean (userManager) is injected with a reference to the HTTP Session-scoped bean (userPreferences). The salient point here is that the userManager bean is a singleton: it is instantiated exactly once per container, and its dependencies (in this case only one, the userPreferences bean) are also injected only once. This means that the userManager bean operates only on the exact same userPreferences object (that is, the one with which it was originally injected).
This is not the behavior you want when injecting a shorter-lived scoped bean into a longer-lived scoped bean (for example, injecting an HTTP Session-scoped collaborating bean as a dependency into a singleton bean). Rather, you need a single userManager object, and, for the lifetime of an HTTP Session, you need a userPreferences object that is specific to the HTTP Session. Thus, the container creates an object that exposes the exact same public interface as the UserPreferences class (ideally an object that is a UserPreferences instance), which can fetch the real UserPreferences object from the scoping mechanism (HTTP request, Session, and so forth). The container injects this proxy object into the userManager bean, which is unaware that this UserPreferences reference is a proxy. In this example, when a UserManager instance invokes a method on the dependency-injected UserPreferences object, it is actually invoking a method on the proxy. The proxy then fetches the real UserPreferences object from (in this case) the HTTP Session and delegates the method invocation onto the retrieved real UserPreferences object.
Thus, you need the following (correct and complete) configuration when injecting request- and session-scoped beans into collaborating objects, as the following example shows:
<bean id="userPreferences" class="com.something.UserPreferences" scope="session">
<aop:scoped-proxy/>
</bean>
<bean id="userManager" class="com.something.UserManager">
<property name="userPreferences" ref="userPreferences"/>
</bean>
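The per-call delegation that a scoped proxy performs can be sketched in plain Java with a JDK dynamic proxy: every invocation looks up the current target in the scope and forwards to it. The "session" map and names here are illustrative, not Spring's internals:

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class ScopedProxyDemo {

    interface UserPreferences { String theme(); }

    public static void main(String[] args) {
        // stands in for the HTTP Session holding the real scoped bean
        Map<String, UserPreferences> session = new HashMap<>();
        session.put("userPreferences", () -> "dark");

        UserPreferences proxy = (UserPreferences) Proxy.newProxyInstance(
                ScopedProxyDemo.class.getClassLoader(),
                new Class<?>[] { UserPreferences.class },
                (p, method, margs) -> {
                    // fetch the real object from the scope on every invocation
                    UserPreferences target = session.get("userPreferences");
                    return method.invoke(target, margs);
                });

        System.out.println(proxy.theme());
        session.put("userPreferences", () -> "light"); // a "new session" swaps the target
        System.out.println(proxy.theme());             // same proxy, new target
    }
}
```

A singleton holding this proxy keeps one reference forever, yet each call reaches whatever object currently lives in the scope.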
Choosing the Type of Proxy to Create
By default, when the Spring container creates a proxy for a bean that is marked up with the <aop:scoped-proxy/> element, a CGLIB-based class proxy is created.
CGLIB proxies intercept only public method calls! Do not call non-public methods on such a proxy. They are not delegated to the actual scoped target object.
Alternatively, you can configure the Spring container to create standard JDK interface-based proxies for such scoped beans, by specifying false for the value of the proxy-target-class attribute of the <aop:scoped-proxy/> element. Using JDK interface-based proxies means that you do not need additional libraries in your application classpath to affect such proxying. However, it also means that the class of the scoped bean must implement at least one interface and that all collaborators into which the scoped bean is injected must reference the bean through one of its interfaces. The following example shows a proxy based on an interface:
<!-- DefaultUserPreferences implements the UserPreferences interface -->
<bean id="userPreferences" class="com.stuff.DefaultUserPreferences" scope="session">
<aop:scoped-proxy proxy-target-class="false"/>
</bean>
<bean id="userManager" class="com.stuff.UserManager">
<property name="userPreferences" ref="userPreferences"/>
</bean>
For more detailed information about choosing class-based or interface-based proxying, see Proxying Mechanisms.
1.5.5. Custom Scopes
The bean scoping mechanism is extensible. You can define your own scopes or even redefine existing scopes, although the latter is considered bad practice and you cannot override the built-in singleton and prototype scopes.
Creating a Custom Scope
To integrate your custom scopes into the Spring container, you need to implement the org.springframework.beans.factory.config.Scope interface, which is described in this section. For an idea of how to implement your own scopes, see the Scope implementations that are supplied with the Spring Framework itself and the Scope javadoc, which explains the methods you need to implement in more detail.
The Scope interface has four methods to get objects from the scope, remove them from the scope, and let them be destroyed.
The session scope implementation, for example, returns the session-scoped bean (if it does not exist, the method returns a new instance of the bean, after having bound it to the session for future reference). The following method returns the object from the underlying scope:
Java
Object get(String name, ObjectFactory<?> objectFactory)
Kotlin
fun get(name: String, objectFactory: ObjectFactory<*>): Any
The session scope implementation, for example, removes the session-scoped bean from the underlying session. The object should be returned, but you can return null if the object with the specified name is not found. The following method removes the object from the underlying scope:
Java
Object remove(String name)
Kotlin
fun remove(name: String): Any
The following method registers the callbacks the scope should execute when it is destroyed or when the specified object in the scope is destroyed:
Java
void registerDestructionCallback(String name, Runnable destructionCallback)
Kotlin
fun registerDestructionCallback(name: String, destructionCallback: Runnable)
See the javadoc or a Spring scope implementation for more information on destruction callbacks.
The following method obtains the conversation identifier for the underlying scope:
Java
String getConversationId()
Kotlin
fun getConversationId(): String
This identifier is different for each scope. For a session-scoped implementation, this identifier can be the session identifier.
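Putting the four methods together, a minimal thread-scope sketch might look like the following. Supplier stands in for Spring's ObjectFactory, and the destruction callbacks are stored but not wired to any container event; this illustrates the contract rather than reproducing Spring's SimpleThreadScope:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class ThreadScopeSketch {

    private final ThreadLocal<Map<String, Object>> beans =
            ThreadLocal.withInitial(HashMap::new);
    private final ThreadLocal<Map<String, Runnable>> callbacks =
            ThreadLocal.withInitial(HashMap::new);

    public Object get(String name, Supplier<?> objectFactory) {
        // create on first access, then cache for this thread
        return beans.get().computeIfAbsent(name, n -> objectFactory.get());
    }

    public Object remove(String name) {
        callbacks.get().remove(name);
        return beans.get().remove(name);
    }

    public void registerDestructionCallback(String name, Runnable callback) {
        callbacks.get().put(name, callback);
    }

    public String getConversationId() {
        // one "conversation" per thread
        return Thread.currentThread().getName();
    }

    public static void main(String[] args) {
        ThreadScopeSketch scope = new ThreadScopeSketch();
        Object first = scope.get("thing", Object::new);
        Object second = scope.get("thing", Object::new);
        System.out.println(first == second); // cached within the same thread
    }
}
```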
Using a Custom Scope
After you write and test one or more custom Scope implementations, you need to make the Spring container aware of your new scopes. The following method is the central method to register a new Scope with the Spring container:
Java
void registerScope(String scopeName, Scope scope);
Kotlin
fun registerScope(scopeName: String, scope: Scope)
This method is declared on the ConfigurableBeanFactory interface, which is available through the BeanFactory property on most of the concrete ApplicationContext implementations that ship with Spring.
The first argument to the registerScope(..) method is the unique name associated with a scope. Examples of such names in the Spring container itself are singleton and prototype. The second argument to the registerScope(..) method is an actual instance of the custom Scope implementation that you wish to register and use.
Suppose that you write your custom Scope implementation, and then register it as shown in the next example.
The next example uses SimpleThreadScope, which is included with Spring but is not registered by default. The instructions would be the same for your own custom Scope implementations.
Java
Scope threadScope = new SimpleThreadScope();
beanFactory.registerScope("thread", threadScope);
Kotlin
val threadScope = SimpleThreadScope()
beanFactory.registerScope("thread", threadScope)
You can then create bean definitions that adhere to the scoping rules of your custom Scope, as follows:
<bean id="..." class="..." scope="thread"/>
With a custom Scope implementation, you are not limited to programmatic registration of the scope. You can also do the Scope registration declaratively, by using the CustomScopeConfigurer class, as the following example shows:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/aop
https://www.springframework.org/schema/aop/spring-aop.xsd">
<bean class="org.springframework.beans.factory.config.CustomScopeConfigurer">
<property name="scopes">
<map>
<entry key="thread">
<bean class="org.springframework.context.support.SimpleThreadScope"/>
</entry>
</map>
</property>
</bean>
<bean id="thing2" class="x.y.Thing2" scope="thread">
<property name="name" value="Rick"/>
<aop:scoped-proxy/>
</bean>
<bean id="thing1" class="x.y.Thing1">
<property name="thing2" ref="thing2"/>
</bean>
</beans>
When you place <aop:scoped-proxy/> in a FactoryBean implementation, it is the factory bean itself that is scoped, not the object returned from getObject().
1.6. Customizing the Nature of a Bean
The Spring Framework provides a number of interfaces you can use to customize the nature of a bean. This section groups them as follows:
1.6.1. Lifecycle Callbacks
To interact with the container’s management of the bean lifecycle, you can implement the Spring InitializingBean and DisposableBean interfaces. The container calls afterPropertiesSet() for the former and destroy() for the latter to let the bean perform certain actions upon initialization and destruction of your beans.
The JSR-250 @PostConstruct and @PreDestroy annotations are generally considered best practice for receiving lifecycle callbacks in a modern Spring application. Using these annotations means that your beans are not coupled to Spring-specific interfaces. For details, see Using @PostConstruct and @PreDestroy.
If you do not want to use the JSR-250 annotations but you still want to remove coupling, consider init-method and destroy-method bean definition metadata.
Internally, the Spring Framework uses BeanPostProcessor implementations to process any callback interfaces it can find and call the appropriate methods. If you need custom features or other lifecycle behavior Spring does not by default offer, you can implement a BeanPostProcessor yourself. For more information, see Container Extension Points.
In addition to the initialization and destruction callbacks, Spring-managed objects may also implement the Lifecycle interface so that those objects can participate in the startup and shutdown process, as driven by the container’s own lifecycle.
The lifecycle callback interfaces are described in this section.
Initialization Callbacks
The org.springframework.beans.factory.InitializingBean interface lets a bean perform initialization work after the container has set all necessary properties on the bean. The InitializingBean interface specifies a single method:
Java
void afterPropertiesSet() throws Exception;
Kotlin
fun afterPropertiesSet()
We recommend that you do not use the InitializingBean interface, because it unnecessarily couples the code to Spring. Alternatively, we suggest using the @PostConstruct annotation or specifying a POJO initialization method. In the case of XML-based configuration metadata, you can use the init-method attribute to specify the name of the method that has a void no-argument signature. With Java configuration, you can use the initMethod attribute of @Bean. See Receiving Lifecycle Callbacks. Consider the following example:
<bean id="exampleInitBean" class="examples.ExampleBean" init-method="init"/>
Java
public class ExampleBean {
public void init() {
// do some initialization work
}
}
Kotlin
class ExampleBean {
fun init() {
// do some initialization work
}
}
The preceding example has almost exactly the same effect as the following example (which consists of two listings):
<bean id="exampleInitBean" class="examples.AnotherExampleBean"/>
Java
public class AnotherExampleBean implements InitializingBean {
@Override
public void afterPropertiesSet() {
// do some initialization work
}
}
Kotlin
class AnotherExampleBean : InitializingBean {
override fun afterPropertiesSet() {
// do some initialization work
}
}
However, the first of the two preceding examples does not couple the code to Spring.
Destruction Callbacks
Implementing the org.springframework.beans.factory.DisposableBean interface lets a bean get a callback when the container that contains it is destroyed. The DisposableBean interface specifies a single method:
Java
void destroy() throws Exception;
Kotlin
fun destroy()
We recommend that you do not use the DisposableBean callback interface, because it unnecessarily couples the code to Spring. Alternatively, we suggest using the @PreDestroy annotation or specifying a generic method that is supported by bean definitions. With XML-based configuration metadata, you can use the destroy-method attribute on the <bean/>. With Java configuration, you can use the destroyMethod attribute of @Bean. See Receiving Lifecycle Callbacks. Consider the following definition:
<bean id="exampleInitBean" class="examples.ExampleBean" destroy-method="cleanup"/>
Java
public class ExampleBean {
public void cleanup() {
// do some destruction work (like releasing pooled connections)
}
}
Kotlin
class ExampleBean {
fun cleanup() {
// do some destruction work (like releasing pooled connections)
}
}
The preceding definition has almost exactly the same effect as the following definition:
<bean id="exampleInitBean" class="examples.AnotherExampleBean"/>
Java
public class AnotherExampleBean implements DisposableBean {
@Override
public void destroy() {
// do some destruction work (like releasing pooled connections)
}
}
Kotlin
class AnotherExampleBean : DisposableBean {
override fun destroy() {
// do some destruction work (like releasing pooled connections)
}
}
However, the first of the two preceding definitions does not couple the code to Spring.
You can assign the destroy-method attribute of a <bean> element a special (inferred) value, which instructs Spring to automatically detect a public close or shutdown method on the specific bean class. (Any class that implements java.lang.AutoCloseable or java.io.Closeable would therefore match.) You can also set this special (inferred) value on the default-destroy-method attribute of a <beans> element to apply this behavior to an entire set of beans (see Default Initialization and Destroy Methods). Note that this is the default behavior with Java configuration.
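A bean that benefits from the inferred value needs nothing beyond a conventionally named method. The plain-Java sketch below simulates what the container would do at shutdown; the class name is illustrative:

```java
public class InferredDestroyDemo {

    // Implements AutoCloseable, so its public close() method is exactly
    // what destroy-method="(inferred)" would detect.
    static class PooledConnectionHolder implements AutoCloseable {
        boolean closed;

        @Override
        public void close() {
            // e.g. release pooled connections here
            closed = true;
        }
    }

    public static void main(String[] args) {
        PooledConnectionHolder bean = new PooledConnectionHolder();
        bean.close(); // what the container would invoke at shutdown
        System.out.println(bean.closed);
    }
}
```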
Default Initialization and Destroy Methods
When you write initialization and destroy method callbacks that do not use the Spring-specific InitializingBean and DisposableBean callback interfaces, you typically write methods with names such as init(), initialize(), dispose(), and so on. Ideally, the names of such lifecycle callback methods are standardized across a project so that all developers use the same method names and ensure consistency.
You can configure the Spring container to “look” for named initialization and destroy callback method names on every bean. This means that you, as an application developer, can write your application classes and use an initialization callback called init(), without having to configure an init-method="init" attribute with each bean definition. The Spring IoC container calls that method when the bean is created (and in accordance with the standard lifecycle callback contract described previously). This feature also enforces a consistent naming convention for initialization and destroy method callbacks.
Suppose that your initialization callback methods are named init() and your destroy callback methods are named destroy(). Your class then resembles the class in the following example:
Java
public class DefaultBlogService implements BlogService {
private BlogDao blogDao;
public void setBlogDao(BlogDao blogDao) {
this.blogDao = blogDao;
}
// this is (unsurprisingly) the initialization callback method
public void init() {
if (this.blogDao == null) {
throw new IllegalStateException("The [blogDao] property must be set.");
}
}
}
Kotlin
class DefaultBlogService : BlogService {
private var blogDao: BlogDao? = null
// this is (unsurprisingly) the initialization callback method
fun init() {
if (blogDao == null) {
throw IllegalStateException("The [blogDao] property must be set.")
}
}
}
You could then use that class in a bean resembling the following:
<beans default-init-method="init">
<bean id="blogService" class="com.something.DefaultBlogService">
<property name="blogDao" ref="blogDao" />
</bean>
</beans>
The presence of the default-init-method attribute on the top-level <beans/> element attribute causes the Spring IoC container to recognize a method called init on the bean class as the initialization method callback. When a bean is created and assembled, if the bean class has such a method, it is invoked at the appropriate time.
You can configure destroy method callbacks similarly (in XML, that is) by using the default-destroy-method attribute on the top-level <beans/> element.
Where existing bean classes already have callback methods that are named at variance with the convention, you can override the default by specifying (in XML, that is) the method name by using the init-method and destroy-method attributes of the <bean/> itself.
The Spring container guarantees that a configured initialization callback is called immediately after a bean is supplied with all dependencies. Thus, the initialization callback is called on the raw bean reference, which means that AOP interceptors and so forth are not yet applied to the bean. A target bean is fully created first and then an AOP proxy (for example) with its interceptor chain is applied. If the target bean and the proxy are defined separately, your code can even interact with the raw target bean, bypassing the proxy. Hence, it would be inconsistent to apply the interceptors to the init method, because doing so would couple the lifecycle of the target bean to its proxy or interceptors and leave strange semantics when your code interacts directly with the raw target bean.
Combining Lifecycle Mechanisms
As of Spring 2.5, you have three options for controlling bean lifecycle behavior: the InitializingBean and DisposableBean callback interfaces; custom init() and destroy() methods; and the @PostConstruct and @PreDestroy annotations. You can combine these mechanisms to control a given bean.
If multiple lifecycle mechanisms are configured for a bean and each mechanism is configured with a different method name, then each configured method is executed in the order listed below. However, if the same method name is configured (for example, init() for an initialization method) for more than one of these lifecycle mechanisms, that method is executed once, as explained in the preceding section.
Multiple lifecycle mechanisms configured for the same bean, with different initialization methods, are called as follows:
1. Methods annotated with @PostConstruct
2. afterPropertiesSet() as defined by the InitializingBean callback interface
3. A custom configured init() method
Destroy methods are called in the same order:
1. Methods annotated with @PreDestroy
2. destroy() as defined by the DisposableBean callback interface
3. A custom configured destroy() method
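The initialization ordering above can be simulated in plain Java (no Spring on the classpath). The method names below are illustrative stand-ins: the real bean would annotate a method with @PostConstruct, implement InitializingBean.afterPropertiesSet(), and declare init-method="init", and the container would drive the calls itself.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a bean that participates in all three init mechanisms.
class AllMechanismsBean {
    final List<String> calls = new ArrayList<>();

    void postConstruct()      { calls.add("@PostConstruct"); }      // would carry @PostConstruct
    void afterPropertiesSet() { calls.add("afterPropertiesSet"); }  // InitializingBean callback
    void init()               { calls.add("custom init"); }         // init-method="init"
}

public class LifecycleOrderDemo {
    public static void main(String[] args) {
        AllMechanismsBean bean = new AllMechanismsBean();
        // The container invokes the callbacks in exactly this documented order:
        bean.postConstruct();
        bean.afterPropertiesSet();
        bean.init();
        System.out.println(bean.calls); // [@PostConstruct, afterPropertiesSet, custom init]
    }
}
```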
Startup and Shutdown Callbacks
The Lifecycle interface defines the essential methods for any object that has its own lifecycle requirements (such as starting and stopping some background process):
Java
public interface Lifecycle {
void start();
void stop();
boolean isRunning();
}
Kotlin
interface Lifecycle {
fun start()
fun stop()
val isRunning: Boolean
}
Any Spring-managed object may implement the Lifecycle interface. Then, when the ApplicationContext itself receives start and stop signals (for example, for a stop/restart scenario at runtime), it cascades those calls to all Lifecycle implementations defined within that context. It does this by delegating to a LifecycleProcessor, shown in the following listing:
Java
public interface LifecycleProcessor extends Lifecycle {
void onRefresh();
void onClose();
}
Kotlin
interface LifecycleProcessor : Lifecycle {
fun onRefresh()
fun onClose()
}
Notice that the LifecycleProcessor is itself an extension of the Lifecycle interface. It also adds two other methods for reacting to the context being refreshed and closed.
Note that the regular org.springframework.context.Lifecycle interface is a plain contract for explicit start and stop notifications and does not imply auto-startup at context refresh time. For fine-grained control over auto-startup of a specific bean (including startup phases), consider implementing org.springframework.context.SmartLifecycle instead.
Also, please note that stop notifications are not guaranteed to come before destruction. On regular shutdown, all Lifecycle beans first receive a stop notification before the general destruction callbacks are being propagated. However, on hot refresh during a context’s lifetime or on aborted refresh attempts, only destroy methods are called.
The order of startup and shutdown invocations can be important. If a “depends-on” relationship exists between any two objects, the dependent side starts after its dependency, and it stops before its dependency. However, at times, the direct dependencies are unknown. You may only know that objects of a certain type should start prior to objects of another type. In those cases, the SmartLifecycle interface defines another option, namely the getPhase() method as defined on its super-interface, Phased. The following listing shows the definition of the Phased interface:
Java
public interface Phased {
int getPhase();
}
Kotlin
interface Phased {
val phase: Int
}
The following listing shows the definition of the SmartLifecycle interface:
Java
public interface SmartLifecycle extends Lifecycle, Phased {
boolean isAutoStartup();
void stop(Runnable callback);
}
Kotlin
interface SmartLifecycle : Lifecycle, Phased {
val isAutoStartup: Boolean
fun stop(callback: Runnable)
}
When starting, the objects with the lowest phase start first. When stopping, the reverse order is followed. Therefore, an object that implements SmartLifecycle and whose getPhase() method returns Integer.MIN_VALUE would be among the first to start and the last to stop. At the other end of the spectrum, a phase value of Integer.MAX_VALUE would indicate that the object should be started last and stopped first (likely because it depends on other processes to be running). When considering the phase value, it is also important to know that the default phase for any “normal” Lifecycle object that does not implement SmartLifecycle is 0. Therefore, any negative phase value indicates that an object should start before those standard components (and stop after them). The reverse is true for any positive phase value.
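The phase ordering can be sketched without Spring: sort by phase ascending to start, and reverse that order to stop. The interface and bean names here are hypothetical stand-ins for Lifecycle/Phased implementations, used only to illustrate the ordering rule.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Minimal stand-in for a Phased Lifecycle bean (illustration only).
interface PhasedLifecycle {
    int getPhase();
    String name();
}

public class PhaseOrderDemo {

    static List<String> startOrder(List<PhasedLifecycle> beans) {
        List<PhasedLifecycle> sorted = new ArrayList<>(beans);
        sorted.sort(Comparator.comparingInt(PhasedLifecycle::getPhase)); // lowest phase starts first
        List<String> order = new ArrayList<>();
        for (PhasedLifecycle b : sorted) order.add(b.name());
        return order;
    }

    static List<String> stopOrder(List<PhasedLifecycle> beans) {
        List<String> order = startOrder(beans);
        Collections.reverse(order); // stopping follows the reverse order
        return order;
    }

    static PhasedLifecycle bean(String name, int phase) {
        return new PhasedLifecycle() {
            public int getPhase() { return phase; }
            public String name() { return name; }
        };
    }

    public static void main(String[] args) {
        List<PhasedLifecycle> beans = List.of(
                bean("webServer", Integer.MAX_VALUE), // started last, stopped first
                bean("messageBroker", -10),           // before "normal" (phase 0) beans
                bean("normalBean", 0));               // default phase for plain Lifecycle
        System.out.println(startOrder(beans)); // [messageBroker, normalBean, webServer]
        System.out.println(stopOrder(beans));  // [webServer, normalBean, messageBroker]
    }
}
```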
The stop method defined by SmartLifecycle accepts a callback. Any implementation must invoke that callback’s run() method after that implementation’s shutdown process is complete. That enables asynchronous shutdown where necessary, since the default implementation of the LifecycleProcessor interface, DefaultLifecycleProcessor, waits up to its timeout value for the group of objects within each phase to invoke that callback. The default per-phase timeout is 30 seconds. You can override the default lifecycle processor instance by defining a bean named lifecycleProcessor within the context. If you want only to modify the timeout, defining the following would suffice:
<bean id="lifecycleProcessor" class="org.springframework.context.support.DefaultLifecycleProcessor">
<!-- timeout value in milliseconds -->
<property name="timeoutPerShutdownPhase" value="10000"/>
</bean>
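The asynchronous stop contract described above can be sketched in plain Java: the implementation performs its shutdown work on another thread and invokes the callback's run() when done, while the caller waits up to a timeout, as DefaultLifecycleProcessor does per phase. This is a simplified illustration, not the processor's actual implementation.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AsyncStopDemo {

    // A SmartLifecycle-style stop(Runnable): shut down asynchronously,
    // then signal completion by running the callback.
    static void stop(Runnable callback) {
        new Thread(() -> {
            // ... release pooled connections, drain queues, etc. ...
            callback.run(); // signal that shutdown is complete
        }).start();
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        stop(latch::countDown);
        // Wait up to 30 seconds, mirroring the default per-phase timeout.
        boolean finished = latch.await(30, TimeUnit.SECONDS);
        System.out.println("shutdown complete: " + finished);
    }
}
```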
As mentioned earlier, the LifecycleProcessor interface defines callback methods for the refreshing and closing of the context as well. The latter drives the shutdown process as if stop() had been called explicitly, but it happens when the context is closing. The 'refresh' callback, on the other hand, enables another feature of SmartLifecycle beans. When the context is refreshed (after all objects have been instantiated and initialized), that callback is invoked. At that point, the default lifecycle processor checks the boolean value returned by each SmartLifecycle object’s isAutoStartup() method. If true, that object is started at that point rather than waiting for an explicit invocation of the context’s or its own start() method (unlike the context refresh, the context start does not happen automatically for a standard context implementation). The phase value and any “depends-on” relationships determine the startup order as described earlier.
Shutting Down the Spring IoC Container Gracefully in Non-Web Applications
This section applies only to non-web applications. Spring’s web-based ApplicationContext implementations already have code in place to gracefully shut down the Spring IoC container when the relevant web application is shut down.
If you use Spring’s IoC container in a non-web application environment (for example, in a rich client desktop environment), register a shutdown hook with the JVM. Doing so ensures a graceful shutdown and calls the relevant destroy methods on your singleton beans so that all resources are released. You must still configure and implement these destroy callbacks correctly.
To register a shutdown hook, call the registerShutdownHook() method that is declared on the ConfigurableApplicationContext interface, as the following example shows:
Java
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public final class Boot {
public static void main(final String[] args) throws Exception {
ConfigurableApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
// add a shutdown hook for the above context...
ctx.registerShutdownHook();
// app runs here...
// main method exits, hook is called prior to the app shutting down...
}
}
Kotlin
import org.springframework.context.support.ClassPathXmlApplicationContext
fun main() {
val ctx = ClassPathXmlApplicationContext("beans.xml")
// add a shutdown hook for the above context...
ctx.registerShutdownHook()
// app runs here...
// main method exits, hook is called prior to the app shutting down...
}
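Under the hood, registerShutdownHook() amounts to registering a JVM shutdown hook that closes the context. A plain-JDK sketch of the same idea follows; the doClose() method is a hypothetical stand-in for the context's destruction callbacks.

```java
public class ShutdownHookDemo {

    // Hypothetical stand-in for the context's close() / destroy callbacks.
    static void doClose() {
        System.out.println("destroying singletons, releasing resources...");
    }

    public static void main(String[] args) {
        Thread hook = new Thread(ShutdownHookDemo::doClose, "context-shutdown-hook");
        Runtime.getRuntime().addShutdownHook(hook);
        System.out.println("app runs here...");
        // When main exits, the JVM runs the hook before shutting down.
    }
}
```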
1.6.2. ApplicationContextAware and BeanNameAware
When an ApplicationContext creates an object instance that implements the org.springframework.context.ApplicationContextAware interface, the instance is provided with a reference to that ApplicationContext. The following listing shows the definition of the ApplicationContextAware interface:
Java
public interface ApplicationContextAware {
void setApplicationContext(ApplicationContext applicationContext) throws BeansException;
}
Kotlin
interface ApplicationContextAware {
@Throws(BeansException::class)
fun setApplicationContext(applicationContext: ApplicationContext)
}
Thus, beans can programmatically manipulate the ApplicationContext that created them, through the ApplicationContext interface or by casting the reference to a known subclass of this interface (such as ConfigurableApplicationContext, which exposes additional functionality). One use would be the programmatic retrieval of other beans. Sometimes this capability is useful. However, in general, you should avoid it, because it couples the code to Spring and does not follow the Inversion of Control style, where collaborators are provided to beans as properties. Other methods of the ApplicationContext provide access to file resources, publishing application events, and accessing a MessageSource. These additional features are described in Additional Capabilities of the ApplicationContext.
Autowiring is another alternative to obtain a reference to the ApplicationContext. The traditional constructor and byType autowiring modes (as described in Autowiring Collaborators) can provide a dependency of type ApplicationContext for a constructor argument or a setter method parameter, respectively. For more flexibility, including the ability to autowire fields and multiple parameter methods, use the annotation-based autowiring features. If you do, the ApplicationContext is autowired into a field, constructor argument, or method parameter that expects the ApplicationContext type if the field, constructor, or method in question carries the @Autowired annotation. For more information, see Using @Autowired.
When an ApplicationContext creates a class that implements the org.springframework.beans.factory.BeanNameAware interface, the class is provided with a reference to the name defined in its associated object definition. The following listing shows the definition of the BeanNameAware interface:
Java
public interface BeanNameAware {
void setBeanName(String name) throws BeansException;
}
Kotlin
interface BeanNameAware {
@Throws(BeansException::class)
fun setBeanName(name: String)
}
The callback is invoked after population of normal bean properties but before an initialization callback such as InitializingBean, afterPropertiesSet, or a custom init-method.
1.6.3. Other Aware Interfaces
Besides ApplicationContextAware and BeanNameAware (discussed earlier), Spring offers a wide range of Aware callback interfaces that let beans indicate to the container that they require a certain infrastructure dependency. As a general rule, the name indicates the dependency type. The following table summarizes the most important Aware interfaces:
Table 4. Aware interfaces
Name: Injected Dependency (Explained in…)
ApplicationContextAware: Declaring ApplicationContext. (ApplicationContextAware and BeanNameAware)
ApplicationEventPublisherAware: Event publisher of the enclosing ApplicationContext. (Additional Capabilities of the ApplicationContext)
BeanClassLoaderAware: Class loader used to load the bean classes. (Instantiating Beans)
BeanFactoryAware: Declaring BeanFactory. (ApplicationContextAware and BeanNameAware)
BeanNameAware: Name of the declaring bean. (ApplicationContextAware and BeanNameAware)
LoadTimeWeaverAware: Defined weaver for processing class definition at load time. (Load-time Weaving with AspectJ in the Spring Framework)
MessageSourceAware: Configured strategy for resolving messages, with support for parametrization and internationalization. (Additional Capabilities of the ApplicationContext)
NotificationPublisherAware: Spring JMX notification publisher. (Notifications)
ResourceLoaderAware: Configured loader for low-level access to resources. (Resources)
ServletConfigAware: Current ServletConfig the container runs in; valid only in a web-aware Spring ApplicationContext. (Spring MVC)
ServletContextAware: Current ServletContext the container runs in; valid only in a web-aware Spring ApplicationContext. (Spring MVC)
Note again that using these interfaces ties your code to the Spring API and does not follow the Inversion of Control style. As a result, we recommend them for infrastructure beans that require programmatic access to the container.
1.7. Bean Definition Inheritance
A bean definition can contain a lot of configuration information, including constructor arguments, property values, and container-specific information, such as the initialization method, a static factory method name, and so on. A child bean definition inherits configuration data from a parent definition. The child definition can override some values or add others as needed. Using parent and child bean definitions can save a lot of typing. Effectively, this is a form of templating.
If you work with an ApplicationContext interface programmatically, child bean definitions are represented by the ChildBeanDefinition class. Most users do not work with them on this level. Instead, they configure bean definitions declaratively in a class such as the ClassPathXmlApplicationContext. When you use XML-based configuration metadata, you can indicate a child bean definition by using the parent attribute, specifying the parent bean as the value of this attribute. The following example shows how to do so:
<bean id="inheritedTestBean" abstract="true"
class="org.springframework.beans.TestBean">
<property name="name" value="parent"/>
<property name="age" value="1"/>
</bean>
<bean id="inheritsWithDifferentClass"
class="org.springframework.beans.DerivedTestBean"
parent="inheritedTestBean" init-method="initialize"> (1)
<property name="name" value="override"/>
<!-- the age property value of 1 will be inherited from parent -->
</bean>
1 Note the parent attribute.
A child bean definition uses the bean class from the parent definition if none is specified but can also override it. In the latter case, the child bean class must be compatible with the parent (that is, it must accept the parent’s property values).
A child bean definition inherits scope, constructor argument values, property values, and method overrides from the parent, with the option to add new values. Any scope, initialization method, destroy method, or static factory method settings that you specify override the corresponding parent settings.
The remaining settings are always taken from the child definition: depends on, autowire mode, dependency check, singleton, and lazy init.
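The property-merging behavior described above can be sketched with plain maps: the child starts from a copy of the parent's property values, then overrides or adds entries. This illustrates only the merge semantics, not Spring's actual BeanDefinition API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DefinitionMergeDemo {

    // Child inherits the parent's property values; child values win on conflict.
    static Map<String, String> merge(Map<String, String> parent, Map<String, String> child) {
        Map<String, String> merged = new LinkedHashMap<>(parent);
        merged.putAll(child);
        return merged;
    }

    public static void main(String[] args) {
        // Mirrors the XML example: parent sets name=parent, age=1;
        // the child overrides name and inherits age.
        Map<String, String> parent = Map.of("name", "parent", "age", "1");
        Map<String, String> child = Map.of("name", "override");
        Map<String, String> merged = merge(parent, child);
        System.out.println(merged.get("name")); // override
        System.out.println(merged.get("age"));  // 1 (inherited)
    }
}
```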
The preceding example explicitly marks the parent bean definition as abstract by using the abstract attribute. If the parent definition does not specify a class, explicitly marking the parent bean definition as abstract is required, as the following example shows:
<bean id="inheritedTestBeanWithoutClass" abstract="true">
<property name="name" value="parent"/>
<property name="age" value="1"/>
</bean>
<bean id="inheritsWithClass" class="org.springframework.beans.DerivedTestBean"
parent="inheritedTestBeanWithoutClass" init-method="initialize">
<property name="name" value="override"/>
<!-- age will inherit the value of 1 from the parent bean definition-->
</bean>
The parent bean cannot be instantiated on its own because it is incomplete, and it is also explicitly marked as abstract. When a definition is abstract, it is usable only as a pure template bean definition that serves as a parent definition for child definitions. Trying to use such an abstract parent bean on its own, by referring to it as a ref property of another bean or doing an explicit getBean() call with the parent bean ID returns an error. Similarly, the container’s internal preInstantiateSingletons() method ignores bean definitions that are defined as abstract.
ApplicationContext pre-instantiates all singletons by default. Therefore, it is important (at least for singleton beans) that if you have a (parent) bean definition which you intend to use only as a template, and this definition specifies a class, you must make sure to set the abstract attribute to true, otherwise the application context will actually (attempt to) pre-instantiate the abstract bean.
1.8. Container Extension Points
Typically, an application developer does not need to subclass ApplicationContext implementation classes. Instead, the Spring IoC container can be extended by plugging in implementations of special integration interfaces. The next few sections describe these integration interfaces.
1.8.1. Customizing Beans by Using a BeanPostProcessor
The BeanPostProcessor interface defines callback methods that you can implement to provide your own (or override the container’s default) instantiation logic, dependency resolution logic, and so forth. If you want to implement some custom logic after the Spring container finishes instantiating, configuring, and initializing a bean, you can plug in one or more custom BeanPostProcessor implementations.
You can configure multiple BeanPostProcessor instances, and you can control the order in which these BeanPostProcessor instances execute by setting the order property. You can set this property only if the BeanPostProcessor implements the Ordered interface. If you write your own BeanPostProcessor, you should consider implementing the Ordered interface, too. For further details, see the javadoc of the BeanPostProcessor and Ordered interfaces. See also the note on programmatic registration of BeanPostProcessor instances.
BeanPostProcessor instances operate on bean (or object) instances. That is, the Spring IoC container instantiates a bean instance and then BeanPostProcessor instances do their work.
BeanPostProcessor instances are scoped per-container. This is relevant only if you use container hierarchies. If you define a BeanPostProcessor in one container, it post-processes only the beans in that container. In other words, beans that are defined in one container are not post-processed by a BeanPostProcessor defined in another container, even if both containers are part of the same hierarchy.
To change the actual bean definition (that is, the blueprint that defines the bean), you instead need to use a BeanFactoryPostProcessor, as described in Customizing Configuration Metadata with a BeanFactoryPostProcessor.
The org.springframework.beans.factory.config.BeanPostProcessor interface consists of exactly two callback methods. When such a class is registered as a post-processor with the container, for each bean instance that is created by the container, the post-processor gets a callback from the container both before container initialization methods (such as InitializingBean.afterPropertiesSet() or any declared init method) are called, and after any bean initialization callbacks. The post-processor can take any action with the bean instance, including ignoring the callback completely. A bean post-processor typically checks for callback interfaces, or it may wrap a bean with a proxy. Some Spring AOP infrastructure classes are implemented as bean post-processors in order to provide proxy-wrapping logic.
An ApplicationContext automatically detects any beans that are defined in the configuration metadata that implements the BeanPostProcessor interface. The ApplicationContext registers these beans as post-processors so that they can be called later, upon bean creation. Bean post-processors can be deployed in the container in the same fashion as any other beans.
Note that, when declaring a BeanPostProcessor by using an @Bean factory method on a configuration class, the return type of the factory method should be the implementation class itself or at least the org.springframework.beans.factory.config.BeanPostProcessor interface, clearly indicating the post-processor nature of that bean. Otherwise, the ApplicationContext cannot autodetect it by type before fully creating it. Since a BeanPostProcessor needs to be instantiated early in order to apply to the initialization of other beans in the context, this early type detection is critical.
Programmatically registering BeanPostProcessor instances
While the recommended approach for BeanPostProcessor registration is through ApplicationContext auto-detection (as described earlier), you can register them programmatically against a ConfigurableBeanFactory by using the addBeanPostProcessor method. This can be useful when you need to evaluate conditional logic before registration or even for copying bean post processors across contexts in a hierarchy. Note, however, that BeanPostProcessor instances added programmatically do not respect the Ordered interface. Here, it is the order of registration that dictates the order of execution. Note also that BeanPostProcessor instances registered programmatically are always processed before those registered through auto-detection, regardless of any explicit ordering.
BeanPostProcessor instances and AOP auto-proxying
Classes that implement the BeanPostProcessor interface are special and are treated differently by the container. All BeanPostProcessor instances and beans that they directly reference are instantiated on startup, as part of the special startup phase of the ApplicationContext. Next, all BeanPostProcessor instances are registered in a sorted fashion and applied to all further beans in the container. Because AOP auto-proxying is implemented as a BeanPostProcessor itself, neither BeanPostProcessor instances nor the beans they directly reference are eligible for auto-proxying and, thus, do not have aspects woven into them.
For any such bean, you should see an informational log message: Bean someBean is not eligible for getting processed by all BeanPostProcessor interfaces (for example: not eligible for auto-proxying).
If you have beans wired into your BeanPostProcessor by using autowiring or @Resource (which may fall back to autowiring), Spring might access unexpected beans when searching for type-matching dependency candidates and, therefore, make them ineligible for auto-proxying or other kinds of bean post-processing. For example, if you have a dependency annotated with @Resource where the field or setter name does not directly correspond to the declared name of a bean and no name attribute is used, Spring accesses other beans for matching them by type.
The following examples show how to write, register, and use BeanPostProcessor instances in an ApplicationContext.
Example: Hello World, BeanPostProcessor-style
This first example illustrates basic usage. The example shows a custom BeanPostProcessor implementation that invokes the toString() method of each bean as it is created by the container and prints the resulting string to the system console.
The following listing shows the custom BeanPostProcessor implementation class definition:
Java
package scripting;
import org.springframework.beans.factory.config.BeanPostProcessor;
public class InstantiationTracingBeanPostProcessor implements BeanPostProcessor {
// simply return the instantiated bean as-is
public Object postProcessBeforeInitialization(Object bean, String beanName) {
return bean; // we could potentially return any object reference here...
}
public Object postProcessAfterInitialization(Object bean, String beanName) {
System.out.println("Bean '" + beanName + "' created : " + bean.toString());
return bean;
}
}
Kotlin
import org.springframework.beans.factory.config.BeanPostProcessor
class InstantiationTracingBeanPostProcessor : BeanPostProcessor {
// simply return the instantiated bean as-is
override fun postProcessBeforeInitialization(bean: Any, beanName: String): Any? {
return bean // we could potentially return any object reference here...
}
override fun postProcessAfterInitialization(bean: Any, beanName: String): Any? {
println("Bean '$beanName' created : $bean")
return bean
}
}
The following beans element uses the InstantiationTracingBeanPostProcessor:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:lang="http://www.springframework.org/schema/lang"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/lang
https://www.springframework.org/schema/lang/spring-lang.xsd">
<lang:groovy id="messenger"
script-source="classpath:org/springframework/scripting/groovy/Messenger.groovy">
<lang:property name="message" value="Fiona Apple Is Just So Dreamy."/>
</lang:groovy>
<!--
when the above bean (messenger) is instantiated, this custom
BeanPostProcessor implementation will output the fact to the system console
-->
<bean class="scripting.InstantiationTracingBeanPostProcessor"/>
</beans>
Notice how the InstantiationTracingBeanPostProcessor is merely defined. It does not even have a name, and, because it is a bean, it can be dependency-injected as you would any other bean. (The preceding configuration also defines a bean that is backed by a Groovy script. The Spring dynamic language support is detailed in the chapter entitled Dynamic Language Support.)
The following Java application runs the preceding code and configuration:
Java
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.scripting.Messenger;
public final class Boot {
public static void main(final String[] args) throws Exception {
ApplicationContext ctx = new ClassPathXmlApplicationContext("scripting/beans.xml");
Messenger messenger = ctx.getBean("messenger", Messenger.class);
System.out.println(messenger);
}
}
Kotlin
import org.springframework.beans.factory.getBean
fun main() {
val ctx = ClassPathXmlApplicationContext("scripting/beans.xml")
val messenger = ctx.getBean<Messenger>("messenger")
println(messenger)
}
The output of the preceding application resembles the following:
Bean 'messenger' created : org.springframework.scripting.groovy.GroovyMessenger@272961
org.springframework.scripting.groovy.GroovyMessenger@272961
Example: The RequiredAnnotationBeanPostProcessor
Using callback interfaces or annotations in conjunction with a custom BeanPostProcessor implementation is a common means of extending the Spring IoC container. An example is Spring’s RequiredAnnotationBeanPostProcessor — a BeanPostProcessor implementation that ships with the Spring distribution and that ensures that JavaBean properties on beans that are marked with an (arbitrary) annotation are actually (configured to be) dependency-injected with a value.
1.8.2. Customizing Configuration Metadata with a BeanFactoryPostProcessor
The next extension point that we look at is the org.springframework.beans.factory.config.BeanFactoryPostProcessor. The semantics of this interface are similar to those of the BeanPostProcessor, with one major difference: BeanFactoryPostProcessor operates on the bean configuration metadata. That is, the Spring IoC container lets a BeanFactoryPostProcessor read the configuration metadata and potentially change it before the container instantiates any beans other than BeanFactoryPostProcessor instances.
You can configure multiple BeanFactoryPostProcessor instances, and you can control the order in which these BeanFactoryPostProcessor instances run by setting the order property. However, you can only set this property if the BeanFactoryPostProcessor implements the Ordered interface. If you write your own BeanFactoryPostProcessor, you should consider implementing the Ordered interface, too. See the javadoc of the BeanFactoryPostProcessor and Ordered interfaces for more details.
If you want to change the actual bean instances (that is, the objects that are created from the configuration metadata), then you instead need to use a BeanPostProcessor (described earlier in Customizing Beans by Using a BeanPostProcessor). While it is technically possible to work with bean instances within a BeanFactoryPostProcessor (for example, by using BeanFactory.getBean()), doing so causes premature bean instantiation, violating the standard container lifecycle. This may cause negative side effects, such as bypassing bean post processing.
Also, BeanFactoryPostProcessor instances are scoped per-container. This is only relevant if you use container hierarchies. If you define a BeanFactoryPostProcessor in one container, it is applied only to the bean definitions in that container. Bean definitions in one container are not post-processed by BeanFactoryPostProcessor instances in another container, even if both containers are part of the same hierarchy.
A bean factory post-processor is automatically executed when it is declared inside an ApplicationContext, in order to apply changes to the configuration metadata that define the container. Spring includes a number of predefined bean factory post-processors, such as PropertyOverrideConfigurer and PropertySourcesPlaceholderConfigurer. You can also use a custom BeanFactoryPostProcessor — for example, to register custom property editors.
An ApplicationContext automatically detects any beans that are deployed into it that implement the BeanFactoryPostProcessor interface. It uses these beans as bean factory post-processors, at the appropriate time. You can deploy these post-processor beans as you would any other bean.
As with BeanPostProcessors, you typically do not want to configure BeanFactoryPostProcessors for lazy initialization. If no other bean references a Bean(Factory)PostProcessor, that post-processor will not get instantiated at all. Thus, marking it for lazy initialization will be ignored, and the Bean(Factory)PostProcessor will be instantiated eagerly even if you set the default-lazy-init attribute to true on the declaration of your <beans /> element.
Example: The Class Name Substitution PropertySourcesPlaceholderConfigurer
You can use the PropertySourcesPlaceholderConfigurer to externalize property values from a bean definition in a separate file by using the standard Java Properties format. Doing so enables the person deploying an application to customize environment-specific properties, such as database URLs and passwords, without the complexity or risk of modifying the main XML definition file or files for the container.
Consider the following XML-based configuration metadata fragment, where a DataSource with placeholder values is defined:
<bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
<property name="locations" value="classpath:com/something/jdbc.properties"/>
</bean>
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="${jdbc.driverClassName}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
The example shows properties configured from an external Properties file. At runtime, a PropertySourcesPlaceholderConfigurer is applied to the metadata and replaces some properties of the DataSource. The values to replace are specified as placeholders of the form ${property-name}, which follows the Ant, log4j, and JSP EL style.
The actual values come from another file in the standard Java Properties format:
jdbc.driverClassName=org.hsqldb.jdbcDriver
jdbc.url=jdbc:hsqldb:hsql://production:9002
jdbc.username=sa
jdbc.password=root
Therefore, the ${jdbc.username} string is replaced at runtime with the value 'sa', and the same applies for other placeholder values that match keys in the properties file. The PropertySourcesPlaceholderConfigurer checks for placeholders in most properties and attributes of a bean definition. Furthermore, you can customize the placeholder prefix and suffix.
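The substitution mechanics can be sketched in plain Java. The following PlaceholderResolver class and its regex are illustrative assumptions made up for this sketch, not Spring's actual implementation, but they show how ${...} keys are looked up in a Properties instance and how an unresolvable placeholder fails fast:

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderResolver {

    // Default placeholder prefix/suffix, as in ${jdbc.username}
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    /** Replaces every ${key} in the input with the matching property value. */
    public static String resolve(String value, Properties props) {
        Matcher m = PLACEHOLDER.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String replacement = props.getProperty(m.group(1));
            if (replacement == null) {
                throw new IllegalStateException(
                        "Could not resolve placeholder '" + m.group(1) + "'");
            }
            m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

With the jdbc.properties values shown above, resolve("${jdbc.username}", props) yields "sa". The real configurer additionally falls back to Environment and system properties, which this sketch omits.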
With the context namespace introduced in Spring 2.5, you can configure property placeholders with a dedicated configuration element. You can provide one or more locations as a comma-separated list in the location attribute, as the following example shows:
<context:property-placeholder location="classpath:com/something/jdbc.properties"/>
The PropertySourcesPlaceholderConfigurer not only looks for properties in the Properties file you specify. By default, if it cannot find a property in the specified properties files, it checks against Spring Environment properties and regular Java System properties.
You can use the PropertySourcesPlaceholderConfigurer to substitute class names, which is sometimes useful when you have to pick a particular implementation class at runtime. The following example shows how to do so:
<bean class="org.springframework.beans.factory.config.PropertySourcesPlaceholderConfigurer">
<property name="locations">
<value>classpath:com/something/strategy.properties</value>
</property>
<property name="properties">
<value>custom.strategy.class=com.something.DefaultStrategy</value>
</property>
</bean>
<bean id="serviceStrategy" class="${custom.strategy.class}"/>
If the class cannot be resolved at runtime to a valid class, resolution of the bean fails when it is about to be created, which is during the preInstantiateSingletons() phase of an ApplicationContext for a non-lazy-init bean.
Example: The PropertyOverrideConfigurer
The PropertyOverrideConfigurer, another bean factory post-processor, resembles the PropertySourcesPlaceholderConfigurer, but unlike the latter, the original definitions can have default values or no values at all for bean properties. If an overriding Properties file does not have an entry for a certain bean property, the default context definition is used.
Note that the bean definition is not aware of being overridden, so it is not immediately obvious from the XML definition file that the override configurer is being used. In case of multiple PropertyOverrideConfigurer instances that define different values for the same bean property, the last one wins, due to the overriding mechanism.
Properties file configuration lines take the following format:
beanName.property=value
The following listing shows an example of the format:
dataSource.driverClassName=com.mysql.jdbc.Driver
dataSource.url=jdbc:mysql:mydb
This example file can be used with a container definition that contains a bean called dataSource that has driver and url properties.
Compound property names are also supported, as long as every component of the path except the final property being overridden is already non-null (presumably initialized by the constructors). In the following example, the sammy property of the bob property of the fred property of the tom bean is set to the scalar value 123:
tom.fred.bob.sammy=123
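As a sketch of how such override lines decompose, the following hypothetical OverrideLine parser (invented for illustration, not part of Spring) splits a line into the bean name, the possibly compound property path, and the override value:

```java
public class OverrideLine {

    public final String beanName;
    public final String propertyPath; // may be compound, e.g. "fred.bob.sammy"
    public final String value;

    public OverrideLine(String line) {
        int eq = line.indexOf('=');
        String key = line.substring(0, eq);
        // The first dot separates the bean name from the property path
        int dot = key.indexOf('.');
        this.beanName = key.substring(0, dot);
        this.propertyPath = key.substring(dot + 1);
        this.value = line.substring(eq + 1);
    }
}
```

For "tom.fred.bob.sammy=123", the bean name is "tom" and the property path is "fred.bob.sammy"; the real configurer then navigates that path on the live bean definition.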
Specified override values are always literal values. They are not translated into bean references. This convention also applies when the original value in the XML bean definition specifies a bean reference.
With the context namespace introduced in Spring 2.5, it is possible to configure property overriding with a dedicated configuration element, as the following example shows:
<context:property-override location="classpath:override.properties"/>
1.8.3. Customizing Instantiation Logic with a FactoryBean
You can implement the org.springframework.beans.factory.FactoryBean interface for objects that are themselves factories.
The FactoryBean interface is a point of pluggability into the Spring IoC container’s instantiation logic. If you have complex initialization code that is better expressed in Java as opposed to a (potentially) verbose amount of XML, you can create your own FactoryBean, write the complex initialization inside that class, and then plug your custom FactoryBean into the container.
The FactoryBean interface provides three methods:
• Object getObject(): Returns an instance of the object this factory creates. The instance can possibly be shared, depending on whether this factory returns singletons or prototypes.
• boolean isSingleton(): Returns true if this FactoryBean returns singletons or false otherwise.
• Class getObjectType(): Returns the object type returned by the getObject() method or null if the type is not known in advance.
The FactoryBean concept and interface are used in a number of places within the Spring Framework. More than 50 implementations of the FactoryBean interface ship with Spring itself.
When you need to ask a container for an actual FactoryBean instance itself instead of the bean it produces, preface the bean’s id with the ampersand symbol (&) when calling the getBean() method of the ApplicationContext. So, for a given FactoryBean with an id of myBean, invoking getBean("myBean") on the container returns the product of the FactoryBean, whereas invoking getBean("&myBean") returns the FactoryBean instance itself.
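The '&' dereference rule can be illustrated with a toy container. MiniBeanLookup and its nested SimpleFactoryBean interface are simplified stand-ins invented for this sketch; the real interface is org.springframework.beans.factory.FactoryBean, and the real container does considerably more:

```java
import java.util.HashMap;
import java.util.Map;

public class MiniBeanLookup {

    // Simplified stand-in for org.springframework.beans.factory.FactoryBean
    public interface SimpleFactoryBean<T> {
        T getObject();
        Class<?> getObjectType();
        default boolean isSingleton() { return true; }
    }

    private final Map<String, Object> beans = new HashMap<>();

    public void register(String name, Object bean) {
        beans.put(name, bean);
    }

    /** Plain names return the factory's product; '&'-prefixed names
        return the FactoryBean instance itself. */
    public Object getBean(String name) {
        if (name.startsWith("&")) {
            return beans.get(name.substring(1));
        }
        Object bean = beans.get(name);
        if (bean instanceof SimpleFactoryBean) {
            return ((SimpleFactoryBean<?>) bean).getObject();
        }
        return bean;
    }
}
```

So getBean("myBean") returns whatever the factory's getObject() produces, while getBean("&myBean") returns the factory object itself, mirroring the behavior described above.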
1.9. Annotation-based Container Configuration
Are annotations better than XML for configuring Spring?
The introduction of annotation-based configuration raised the question of whether this approach is “better” than XML. The short answer is “it depends.” The long answer is that each approach has its pros and cons, and, usually, it is up to the developer to decide which strategy suits them better. Due to the way they are defined, annotations provide a lot of context in their declaration, leading to shorter and more concise configuration. However, XML excels at wiring up components without touching their source code or recompiling them. Some developers prefer having the wiring close to the source while others argue that annotated classes are no longer POJOs and, furthermore, that the configuration becomes decentralized and harder to control.
No matter the choice, Spring can accommodate both styles and even mix them together. It is worth pointing out that, through its JavaConfig option, Spring lets annotations be used in a non-invasive way, without touching the target components' source code, and that, in terms of tooling, all configuration styles are supported by the Spring Tools for Eclipse.
An alternative to XML setup is provided by annotation-based configuration, which relies on the bytecode metadata for wiring up components instead of angle-bracket declarations. Instead of using XML to describe a bean wiring, the developer moves the configuration into the component class itself by using annotations on the relevant class, method, or field declaration. As mentioned in Example: The RequiredAnnotationBeanPostProcessor, using a BeanPostProcessor in conjunction with annotations is a common means of extending the Spring IoC container. For example, Spring 2.0 introduced the possibility of enforcing required properties with the @Required annotation. Spring 2.5 made it possible to follow that same general approach to drive Spring’s dependency injection. Essentially, the @Autowired annotation provides the same capabilities as described in Autowiring Collaborators but with more fine-grained control and wider applicability. Spring 2.5 also added support for JSR-250 annotations, such as @PostConstruct and @PreDestroy. Spring 3.0 added support for JSR-330 (Dependency Injection for Java) annotations contained in the javax.inject package such as @Inject and @Named. Details about those annotations can be found in the relevant section.
Annotation injection is performed before XML injection. Thus, the XML configuration overrides the annotations for properties wired through both approaches.
As always, you can register the annotation-handling post-processors as individual bean definitions, but they can also be implicitly registered by including the following tag in an XML-based Spring configuration (notice the inclusion of the context namespace):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
https://www.springframework.org/schema/context/spring-context.xsd">
<context:annotation-config/>
</beans>
<context:annotation-config/> only looks for annotations on beans in the same application context in which it is defined. This means that, if you put <context:annotation-config/> in a WebApplicationContext for a DispatcherServlet, it only checks for @Autowired beans in your controllers, and not your services. See The DispatcherServlet for more information.
1.9.1. @Required
The @Required annotation applies to bean property setter methods, as in the following example:
Java
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Required
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// ...
}
Kotlin
class SimpleMovieLister {
@Required
lateinit var movieFinder: MovieFinder
// ...
}
This annotation indicates that the affected bean property must be populated at configuration time, through an explicit property value in a bean definition or through autowiring. The container throws an exception if the affected bean property has not been populated. This allows for eager and explicit failure, avoiding NullPointerException instances or the like later on. We still recommend that you put assertions into the bean class itself (for example, into an init method). Doing so enforces those required references and values even when you use the class outside of a container.
The @Required annotation is formally deprecated as of Spring Framework 5.1, in favor of using constructor injection for required settings (or a custom implementation of InitializingBean.afterPropertiesSet() along with bean property setter methods).
1.9.2. Using @Autowired
JSR 330’s @Inject annotation can be used in place of Spring’s @Autowired annotation in the examples included in this section. See here for more details.
You can apply the @Autowired annotation to constructors, as the following example shows:
Java
public class MovieRecommender {
private final CustomerPreferenceDao customerPreferenceDao;
@Autowired
public MovieRecommender(CustomerPreferenceDao customerPreferenceDao) {
this.customerPreferenceDao = customerPreferenceDao;
}
// ...
}
Kotlin
class MovieRecommender @Autowired constructor(
private val customerPreferenceDao: CustomerPreferenceDao)
As of Spring Framework 4.3, an @Autowired annotation on such a constructor is no longer necessary if the target bean defines only one constructor to begin with. However, if several constructors are available and there is no primary/default constructor, at least one of the constructors must be annotated with @Autowired in order to instruct the container which one to use. See the discussion on constructor resolution for details.
You can also apply the @Autowired annotation to traditional setter methods, as the following example shows:
Java
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Autowired
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// ...
}
Kotlin
class SimpleMovieLister {
@Autowired
lateinit var movieFinder: MovieFinder
// ...
}
You can also apply the annotation to methods with arbitrary names and multiple arguments, as the following example shows:
Java
public class MovieRecommender {
private MovieCatalog movieCatalog;
private CustomerPreferenceDao customerPreferenceDao;
@Autowired
public void prepare(MovieCatalog movieCatalog,
CustomerPreferenceDao customerPreferenceDao) {
this.movieCatalog = movieCatalog;
this.customerPreferenceDao = customerPreferenceDao;
}
// ...
}
Kotlin
class MovieRecommender {
private lateinit var movieCatalog: MovieCatalog
private lateinit var customerPreferenceDao: CustomerPreferenceDao
@Autowired
fun prepare(movieCatalog: MovieCatalog,
customerPreferenceDao: CustomerPreferenceDao) {
this.movieCatalog = movieCatalog
this.customerPreferenceDao = customerPreferenceDao
}
// ...
}
You can apply @Autowired to fields as well and even mix it with constructors, as the following example shows:
Java
public class MovieRecommender {
private final CustomerPreferenceDao customerPreferenceDao;
@Autowired
private MovieCatalog movieCatalog;
@Autowired
public MovieRecommender(CustomerPreferenceDao customerPreferenceDao) {
this.customerPreferenceDao = customerPreferenceDao;
}
// ...
}
Kotlin
class MovieRecommender @Autowired constructor(
private val customerPreferenceDao: CustomerPreferenceDao) {
@Autowired
private lateinit var movieCatalog: MovieCatalog
// ...
}
Make sure that your target components (for example, MovieCatalog or CustomerPreferenceDao) are consistently declared by the type that you use for your @Autowired-annotated injection points. Otherwise, injection may fail due to a "no type match found" error at runtime.
For XML-defined beans or component classes found via classpath scanning, the container usually knows the concrete type up front. However, for @Bean factory methods, you need to make sure that the declared return type is sufficiently expressive. For components that implement several interfaces or for components potentially referred to by their implementation type, consider declaring the most specific return type on your factory method (at least as specific as required by the injection points referring to your bean).
You can also instruct Spring to provide all beans of a particular type from the ApplicationContext by adding the @Autowired annotation to a field or method that expects an array of that type, as the following example shows:
Java
public class MovieRecommender {
@Autowired
private MovieCatalog[] movieCatalogs;
// ...
}
Kotlin
class MovieRecommender {
@Autowired
private lateinit var movieCatalogs: Array<MovieCatalog>
// ...
}
The same applies for typed collections, as the following example shows:
Java
public class MovieRecommender {
private Set<MovieCatalog> movieCatalogs;
@Autowired
public void setMovieCatalogs(Set<MovieCatalog> movieCatalogs) {
this.movieCatalogs = movieCatalogs;
}
// ...
}
Kotlin
class MovieRecommender {
@Autowired
lateinit var movieCatalogs: Set<MovieCatalog>
// ...
}
Your target beans can implement the org.springframework.core.Ordered interface or use the @Order or standard @Priority annotation if you want items in the array or list to be sorted in a specific order. Otherwise, their order follows the registration order of the corresponding target bean definitions in the container.
You can declare the @Order annotation at the target class level and on @Bean methods, potentially for individual bean definitions (in case of multiple definitions that use the same bean class). @Order values may influence priorities at injection points, but be aware that they do not influence singleton startup order, which is an orthogonal concern determined by dependency relationships and @DependsOn declarations.
Note that the standard javax.annotation.Priority annotation is not available at the @Bean level, since it cannot be declared on methods. Its semantics can be modeled through @Order values in combination with @Primary on a single bean for each type.
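The effect of order values on collection injection can be sketched as a plain sort. The OrderedInjection class below is a hypothetical illustration invented for this sketch, not Spring's OrderComparator; as with Ordered and @Order, lower order values sort first:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrderedInjection {

    // Minimal stand-in for a bean carrying an order value
    public static class Bean {
        public final String name;
        public final int order; // lower values have higher priority
        public Bean(String name, int order) {
            this.name = name;
            this.order = order;
        }
    }

    /** Returns the candidates sorted the way an ordered
        collection injection point would receive them. */
    public static List<Bean> sortForInjection(List<Bean> candidates) {
        List<Bean> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingInt(b -> b.order));
        return sorted;
    }
}
```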
Even typed Map instances can be autowired as long as the expected key type is String. The map values contain all beans of the expected type, and the keys contain the corresponding bean names, as the following example shows:
Java
public class MovieRecommender {
private Map<String, MovieCatalog> movieCatalogs;
@Autowired
public void setMovieCatalogs(Map<String, MovieCatalog> movieCatalogs) {
this.movieCatalogs = movieCatalogs;
}
// ...
}
Kotlin
class MovieRecommender {
@Autowired
lateinit var movieCatalogs: Map<String, MovieCatalog>
// ...
}
By default, autowiring fails when no matching candidate beans are available for a given injection point. In the case of a declared array, collection, or map, at least one matching element is expected.
The default behavior is to treat annotated methods and fields as indicating required dependencies. You can change this behavior as demonstrated in the following example, enabling the framework to skip a non-satisfiable injection point through marking it as non-required (i.e., by setting the required attribute in @Autowired to false):
Java
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Autowired(required = false)
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// ...
}
Kotlin
class SimpleMovieLister {
@Autowired(required = false)
var movieFinder: MovieFinder? = null
// ...
}
A non-required method will not be called at all if its dependency (or one of its dependencies, in case of multiple arguments) is not available. A non-required field will not get populated at all in such cases, leaving its default value in place.
Injected constructor and factory method arguments are a special case since the required attribute in @Autowired has a somewhat different meaning due to Spring’s constructor resolution algorithm that may potentially deal with multiple constructors. Constructor and factory method arguments are effectively required by default but with a few special rules in a single-constructor scenario, such as multi-element injection points (arrays, collections, maps) resolving to empty instances if no matching beans are available. This allows for a common implementation pattern where all dependencies can be declared in a unique multi-argument constructor — for example, declared as a single public constructor without an @Autowired annotation.
Only one constructor of any given bean class may declare @Autowired with the required attribute set to true, indicating the constructor to autowire when used as a Spring bean. As a consequence, if the required attribute is left at its default value true, only a single constructor may be annotated with @Autowired. If multiple constructors declare the annotation, they will all have to declare required=false in order to be considered as candidates for autowiring (analogous to autowire=constructor in XML). The constructor with the greatest number of dependencies that can be satisfied by matching beans in the Spring container will be chosen. If none of the candidates can be satisfied, then a primary/default constructor (if present) will be used. Similarly, if a class declares multiple constructors but none of them is annotated with @Autowired, then a primary/default constructor (if present) will be used. If a class only declares a single constructor to begin with, it will always be used, even if not annotated. Note that an annotated constructor does not have to be public.
The required attribute of @Autowired is recommended over the deprecated @Required annotation on setter methods. Setting the required attribute to false indicates that the property is not required for autowiring purposes, and the property is ignored if it cannot be autowired. @Required, on the other hand, is stronger in that it enforces the property to be set by any means supported by the container, and if no value is defined, a corresponding exception is raised.
Alternatively, you can express the non-required nature of a particular dependency through Java 8’s java.util.Optional, as the following example shows:
public class SimpleMovieLister {
@Autowired
public void setMovieFinder(Optional<MovieFinder> movieFinder) {
...
}
}
As of Spring Framework 5.0, you can also use a @Nullable annotation (of any kind in any package — for example, javax.annotation.Nullable from JSR-305) or just leverage Kotlin builtin null-safety support:
Java
public class SimpleMovieLister {
@Autowired
public void setMovieFinder(@Nullable MovieFinder movieFinder) {
...
}
}
Kotlin
class SimpleMovieLister {
@Autowired
var movieFinder: MovieFinder? = null
// ...
}
You can also use @Autowired for interfaces that are well-known resolvable dependencies: BeanFactory, ApplicationContext, Environment, ResourceLoader, ApplicationEventPublisher, and MessageSource. These interfaces and their extended interfaces, such as ConfigurableApplicationContext or ResourcePatternResolver, are automatically resolved, with no special setup necessary. The following example autowires an ApplicationContext object:
Java
public class MovieRecommender {
@Autowired
private ApplicationContext context;
public MovieRecommender() {
}
// ...
}
Kotlin
class MovieRecommender {
@Autowired
lateinit var context: ApplicationContext
// ...
}
The @Autowired, @Inject, @Value, and @Resource annotations are handled by Spring BeanPostProcessor implementations. This means that you cannot apply these annotations within your own BeanPostProcessor or BeanFactoryPostProcessor types (if any). These types must be 'wired up' explicitly by using XML or a Spring @Bean method.
1.9.3. Fine-tuning Annotation-based Autowiring with @Primary
Because autowiring by type may lead to multiple candidates, it is often necessary to have more control over the selection process. One way to accomplish this is with Spring’s @Primary annotation. @Primary indicates that a particular bean should be given preference when multiple beans are candidates to be autowired to a single-valued dependency. If exactly one primary bean exists among the candidates, it becomes the autowired value.
Consider the following configuration that defines firstMovieCatalog as the primary MovieCatalog:
Java
@Configuration
public class MovieConfiguration {
@Bean
@Primary
public MovieCatalog firstMovieCatalog() { ... }
@Bean
public MovieCatalog secondMovieCatalog() { ... }
// ...
}
Kotlin
@Configuration
class MovieConfiguration {
@Bean
@Primary
fun firstMovieCatalog(): MovieCatalog { ... }
@Bean
fun secondMovieCatalog(): MovieCatalog { ... }
// ...
}
With the preceding configuration, the following MovieRecommender is autowired with the firstMovieCatalog:
Java
public class MovieRecommender {
@Autowired
private MovieCatalog movieCatalog;
// ...
}
Kotlin
class MovieRecommender {
@Autowired
private lateinit var movieCatalog: MovieCatalog
// ...
}
The corresponding bean definitions follow:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
https://www.springframework.org/schema/context/spring-context.xsd">
<context:annotation-config/>
<bean class="example.SimpleMovieCatalog" primary="true">
<!-- inject any dependencies required by this bean -->
</bean>
<bean class="example.SimpleMovieCatalog">
<!-- inject any dependencies required by this bean -->
</bean>
<bean id="movieRecommender" class="example.MovieRecommender"/>
</beans>
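The selection rule can be sketched as follows. PrimarySelector is a hypothetical helper invented for this sketch, not Spring's actual resolution code: given candidates of the matching type, it returns the single candidate if there is only one, otherwise the single bean flagged as primary, and fails if neither condition holds.

```java
import java.util.Map;

public class PrimarySelector {

    /** Picks the unique autowire candidate among same-typed beans,
        using the primary flag as the tie-breaker. */
    public static String select(Map<String, Boolean> candidatesToPrimaryFlag) {
        if (candidatesToPrimaryFlag.size() == 1) {
            return candidatesToPrimaryFlag.keySet().iterator().next();
        }
        String primary = null;
        for (Map.Entry<String, Boolean> e : candidatesToPrimaryFlag.entrySet()) {
            if (e.getValue()) {
                if (primary != null) {
                    throw new IllegalStateException("More than one primary candidate");
                }
                primary = e.getKey();
            }
        }
        if (primary == null) {
            throw new IllegalStateException(
                    "No unique candidate among " + candidatesToPrimaryFlag.keySet());
        }
        return primary;
    }
}
```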
1.9.4. Fine-tuning Annotation-based Autowiring with Qualifiers
@Primary is an effective way to use autowiring by type with several instances when one primary candidate can be determined. When you need more control over the selection process, you can use Spring’s @Qualifier annotation. You can associate qualifier values with specific arguments, narrowing the set of type matches so that a specific bean is chosen for each argument. In the simplest case, this can be a plain descriptive value, as shown in the following example:
Java
public class MovieRecommender {
@Autowired
@Qualifier("main")
private MovieCatalog movieCatalog;
// ...
}
Kotlin
class MovieRecommender {
@Autowired
@Qualifier("main")
private lateinit var movieCatalog: MovieCatalog
// ...
}
You can also specify the @Qualifier annotation on individual constructor arguments or method parameters, as shown in the following example:
Java
public class MovieRecommender {
private MovieCatalog movieCatalog;
private CustomerPreferenceDao customerPreferenceDao;
@Autowired
public void prepare(@Qualifier("main") MovieCatalog movieCatalog,
CustomerPreferenceDao customerPreferenceDao) {
this.movieCatalog = movieCatalog;
this.customerPreferenceDao = customerPreferenceDao;
}
// ...
}
Kotlin
class MovieRecommender {
private lateinit var movieCatalog: MovieCatalog
private lateinit var customerPreferenceDao: CustomerPreferenceDao
@Autowired
fun prepare(@Qualifier("main") movieCatalog: MovieCatalog,
customerPreferenceDao: CustomerPreferenceDao) {
this.movieCatalog = movieCatalog
this.customerPreferenceDao = customerPreferenceDao
}
// ...
}
The following example shows corresponding bean definitions.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
https://www.springframework.org/schema/context/spring-context.xsd">
<context:annotation-config/>
<bean class="example.SimpleMovieCatalog">
<qualifier value="main"/> (1)
<!-- inject any dependencies required by this bean -->
</bean>
<bean class="example.SimpleMovieCatalog">
<qualifier value="action"/> (2)
<!-- inject any dependencies required by this bean -->
</bean>
<bean id="movieRecommender" class="example.MovieRecommender"/>
</beans>
1 The bean with the main qualifier value is wired with the constructor argument that is qualified with the same value.
2 The bean with the action qualifier value is wired with the constructor argument that is qualified with the same value.
For a fallback match, the bean name is considered a default qualifier value. Thus, you can define the bean with an id of main instead of the nested qualifier element, leading to the same matching result. However, although you can use this convention to refer to specific beans by name, @Autowired is fundamentally about type-driven injection with optional semantic qualifiers. This means that qualifier values, even with the bean name fallback, always have narrowing semantics within the set of type matches. They do not semantically express a reference to a unique bean id. Good qualifier values are main or EMEA or persistent, expressing characteristics of a specific component that are independent from the bean id, which may be auto-generated in case of an anonymous bean definition such as the one in the preceding example.
Qualifiers also apply to typed collections, as discussed earlier — for example, to Set<MovieCatalog>. In this case, all matching beans, according to the declared qualifiers, are injected as a collection. This implies that qualifiers do not have to be unique. Rather, they constitute filtering criteria. For example, you can define multiple MovieCatalog beans with the same qualifier value “action”, all of which are injected into a Set<MovieCatalog> annotated with @Qualifier("action").
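The filtering semantics can be sketched in plain Java. QualifierFilter below is a hypothetical illustration invented for this sketch: it keeps every type-matched candidate whose qualifier equals the requested value, mirroring how all "action"-qualified catalogs end up in the injected set:

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class QualifierFilter {

    /** Keeps only the candidates (bean name -> qualifier value) whose
        qualifier equals the requested value; all of them would be
        injected into a collection injection point. */
    public static Set<String> filter(Map<String, String> beanToQualifier, String requested) {
        Set<String> matches = new LinkedHashSet<>();
        for (Map.Entry<String, String> e : beanToQualifier.entrySet()) {
            if (requested.equals(e.getValue())) {
                matches.add(e.getKey());
            }
        }
        return matches;
    }
}
```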
Letting qualifier values select against target bean names, within the type-matching candidates, does not require a @Qualifier annotation at the injection point. If there is no other resolution indicator (such as a qualifier or a primary marker), for a non-unique dependency situation, Spring matches the injection point name (that is, the field name or parameter name) against the target bean names and chooses the same-named candidate, if any.
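A sketch of this fallback, using a hypothetical NameFallback helper (invented for illustration, not Spring's actual algorithm): among same-typed candidates, the bean whose name equals the injection point name wins.

```java
import java.util.Set;

public class NameFallback {

    /** Among same-typed candidates, falls back to the one whose bean name
        equals the injection point (field or parameter) name. */
    public static String resolve(Set<String> candidateBeanNames, String injectionPointName) {
        if (candidateBeanNames.size() == 1) {
            return candidateBeanNames.iterator().next();
        }
        if (candidateBeanNames.contains(injectionPointName)) {
            return injectionPointName;
        }
        throw new IllegalStateException(
                "No unique bean among " + candidateBeanNames);
    }
}
```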
That said, if you intend to express annotation-driven injection by name, do not primarily use @Autowired, even if it is capable of selecting by bean name among type-matching candidates. Instead, use the JSR-250 @Resource annotation, which is semantically defined to identify a specific target component by its unique name, with the declared type being irrelevant for the matching process. @Autowired has rather different semantics: After selecting candidate beans by type, the specified String qualifier value is considered within those type-selected candidates only (for example, matching an account qualifier against beans marked with the same qualifier label).
For beans that are themselves defined as a collection, Map, or array type, @Resource is a fine solution, referring to the specific collection or array bean by unique name. That said, as of 4.3, you can match collection, Map, and array types through Spring's @Autowired type matching algorithm as well, as long as the element type information is preserved in @Bean return type signatures or collection inheritance hierarchies. In this case, you can use qualifier values to select among same-typed collections, as outlined in the previous paragraph.
As of 4.3, @Autowired also considers self references for injection (that is, references back to the bean that is currently injected). Note that self injection is a fallback. Regular dependencies on other components always have precedence. In that sense, self references do not participate in regular candidate selection and are therefore in particular never primary. On the contrary, they always end up as lowest precedence. In practice, you should use self references as a last resort only (for example, for calling other methods on the same instance through the bean’s transactional proxy). Consider factoring out the affected methods to a separate delegate bean in such a scenario. Alternatively, you can use @Resource, which may obtain a proxy back to the current bean by its unique name.
Trying to inject the results from @Bean methods on the same configuration class is effectively a self-reference scenario as well. Either lazily resolve such references in the method signature where it is actually needed (as opposed to an autowired field in the configuration class) or declare the affected @Bean methods as static, decoupling them from the containing configuration class instance and its lifecycle. Otherwise, such beans are only considered in the fallback phase, with matching beans on other configuration classes selected as primary candidates instead (if available).
@Autowired applies to fields, constructors, and multi-argument methods, allowing for narrowing through qualifier annotations at the parameter level. In contrast, @Resource is supported only for fields and bean property setter methods with a single argument. As a consequence, you should stick with qualifiers if your injection target is a constructor or a multi-argument method.
You can create your own custom qualifier annotations. To do so, define an annotation and provide the @Qualifier annotation within your definition, as the following example shows:
Java
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface Genre {
String value();
}
Kotlin
@Target(AnnotationTarget.FIELD, AnnotationTarget.VALUE_PARAMETER)
@Retention(AnnotationRetention.RUNTIME)
@Qualifier
annotation class Genre(val value: String)
Then you can provide the custom qualifier on autowired fields and parameters, as the following example shows:
Java
public class MovieRecommender {
@Autowired
@Genre("Action")
private MovieCatalog actionCatalog;
private MovieCatalog comedyCatalog;
@Autowired
public void setComedyCatalog(@Genre("Comedy") MovieCatalog comedyCatalog) {
this.comedyCatalog = comedyCatalog;
}
// ...
}
Kotlin
class MovieRecommender {
@Autowired
@Genre("Action")
private lateinit var actionCatalog: MovieCatalog
private lateinit var comedyCatalog: MovieCatalog
@Autowired
fun setComedyCatalog(@Genre("Comedy") comedyCatalog: MovieCatalog) {
this.comedyCatalog = comedyCatalog
}
// ...
}
Next, you can provide the information for the candidate bean definitions. You can add <qualifier/> tags as sub-elements of the <bean/> tag and then specify the type and value to match your custom qualifier annotations. The type is matched against the fully-qualified class name of the annotation. Alternately, as a convenience if no risk of conflicting names exists, you can use the short class name. The following example demonstrates both approaches:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
https://www.springframework.org/schema/context/spring-context.xsd">
<context:annotation-config/>
<bean class="example.SimpleMovieCatalog">
<qualifier type="Genre" value="Action"/>
<!-- inject any dependencies required by this bean -->
</bean>
<bean class="example.SimpleMovieCatalog">
<qualifier type="example.Genre" value="Comedy"/>
<!-- inject any dependencies required by this bean -->
</bean>
<bean id="movieRecommender" class="example.MovieRecommender"/>
</beans>
In Classpath Scanning and Managed Components, you can see an annotation-based alternative to providing the qualifier metadata in XML. Specifically, see Providing Qualifier Metadata with Annotations.
In some cases, using an annotation without a value may suffice. This can be useful when the annotation serves a more generic purpose and can be applied across several different types of dependencies. For example, you may provide an offline catalog that can be searched when no Internet connection is available. First, define the simple annotation, as the following example shows:
Java
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface Offline {
}
Kotlin
@Target(AnnotationTarget.FIELD, AnnotationTarget.VALUE_PARAMETER)
@Retention(AnnotationRetention.RUNTIME)
@Qualifier
annotation class Offline
Then add the annotation to the field or property to be autowired, as shown in the following example:
Java
public class MovieRecommender {
@Autowired
@Offline (1)
private MovieCatalog offlineCatalog;
// ...
}
1 This line adds the @Offline annotation.
Kotlin
class MovieRecommender {
@Autowired
@Offline (1)
private lateinit var offlineCatalog: MovieCatalog
// ...
}
1 This line adds the @Offline annotation.
Now the bean definition only needs a qualifier type, as shown in the following example:
<bean class="example.SimpleMovieCatalog">
<qualifier type="Offline"/> (1)
<!-- inject any dependencies required by this bean -->
</bean>
1 This element specifies the qualifier.
You can also define custom qualifier annotations that accept named attributes in addition to or instead of the simple value attribute. If multiple attribute values are then specified on a field or parameter to be autowired, a bean definition must match all such attribute values to be considered an autowire candidate. As an example, consider the following annotation definition:
Java
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface MovieQualifier {
String genre();
Format format();
}
Kotlin
@Target(AnnotationTarget.FIELD, AnnotationTarget.VALUE_PARAMETER)
@Retention(AnnotationRetention.RUNTIME)
@Qualifier
annotation class MovieQualifier(val genre: String, val format: Format)
In this case Format is an enum, defined as follows:
Java
public enum Format {
VHS, DVD, BLURAY
}
Kotlin
enum class Format {
VHS, DVD, BLURAY
}
The fields to be autowired are annotated with the custom qualifier and include values for both attributes: genre and format, as the following example shows:
Java
public class MovieRecommender {
@Autowired
@MovieQualifier(format=Format.VHS, genre="Action")
private MovieCatalog actionVhsCatalog;
@Autowired
@MovieQualifier(format=Format.VHS, genre="Comedy")
private MovieCatalog comedyVhsCatalog;
@Autowired
@MovieQualifier(format=Format.DVD, genre="Action")
private MovieCatalog actionDvdCatalog;
@Autowired
@MovieQualifier(format=Format.BLURAY, genre="Comedy")
private MovieCatalog comedyBluRayCatalog;
// ...
}
Kotlin
class MovieRecommender {
@Autowired
@MovieQualifier(format = Format.VHS, genre = "Action")
private lateinit var actionVhsCatalog: MovieCatalog
@Autowired
@MovieQualifier(format = Format.VHS, genre = "Comedy")
private lateinit var comedyVhsCatalog: MovieCatalog
@Autowired
@MovieQualifier(format = Format.DVD, genre = "Action")
private lateinit var actionDvdCatalog: MovieCatalog
@Autowired
@MovieQualifier(format = Format.BLURAY, genre = "Comedy")
private lateinit var comedyBluRayCatalog: MovieCatalog
// ...
}
Finally, the bean definitions should contain matching qualifier values. This example also demonstrates that you can use bean meta attributes instead of the <qualifier/> elements. If available, the <qualifier/> element and its attributes take precedence, but the autowiring mechanism falls back on the values provided within the <meta/> tags if no such qualifier is present, as in the last two bean definitions in the following example:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
https://www.springframework.org/schema/context/spring-context.xsd">
<context:annotation-config/>
<bean class="example.SimpleMovieCatalog">
<qualifier type="MovieQualifier">
<attribute key="format" value="VHS"/>
<attribute key="genre" value="Action"/>
</qualifier>
<!-- inject any dependencies required by this bean -->
</bean>
<bean class="example.SimpleMovieCatalog">
<qualifier type="MovieQualifier">
<attribute key="format" value="VHS"/>
<attribute key="genre" value="Comedy"/>
</qualifier>
<!-- inject any dependencies required by this bean -->
</bean>
<bean class="example.SimpleMovieCatalog">
<meta key="format" value="DVD"/>
<meta key="genre" value="Action"/>
<!-- inject any dependencies required by this bean -->
</bean>
<bean class="example.SimpleMovieCatalog">
<meta key="format" value="BLURAY"/>
<meta key="genre" value="Comedy"/>
<!-- inject any dependencies required by this bean -->
</bean>
</beans>
1.9.5. Using Generics as Autowiring Qualifiers
In addition to the @Qualifier annotation, you can use Java generic types as an implicit form of qualification. For example, suppose you have the following configuration:
Java
@Configuration
public class MyConfiguration {
@Bean
public StringStore stringStore() {
return new StringStore();
}
@Bean
public IntegerStore integerStore() {
return new IntegerStore();
}
}
Kotlin
@Configuration
class MyConfiguration {
@Bean
fun stringStore() = StringStore()
@Bean
fun integerStore() = IntegerStore()
}
Assuming that the preceding beans implement a generic interface (that is, Store&lt;String&gt; and Store&lt;Integer&gt;), you can @Autowire the Store interface and the generic is used as a qualifier, as the following example shows:
Java
@Autowired
private Store<String> s1; // <String> qualifier, injects the stringStore bean
@Autowired
private Store<Integer> s2; // <Integer> qualifier, injects the integerStore bean
Kotlin
@Autowired
private lateinit var s1: Store<String> // <String> qualifier, injects the stringStore bean
@Autowired
private lateinit var s2: Store<Integer> // <Integer> qualifier, injects the integerStore bean
Generic qualifiers also apply when autowiring lists, Map instances and arrays. The following example autowires a generic List:
Java
// Inject all Store beans as long as they have an <Integer> generic
// Store<String> beans will not appear in this list
@Autowired
private List<Store<Integer>> s;
Kotlin
// Inject all Store beans as long as they have an <Integer> generic
// Store<String> beans will not appear in this list
@Autowired
private lateinit var s: List<Store<Integer>>
1.9.6. Using CustomAutowireConfigurer
CustomAutowireConfigurer is a BeanFactoryPostProcessor that lets you register your own custom qualifier annotation types, even if they are not annotated with Spring’s @Qualifier annotation. The following example shows how to use CustomAutowireConfigurer:
<bean id="customAutowireConfigurer"
class="org.springframework.beans.factory.annotation.CustomAutowireConfigurer">
<property name="customQualifierTypes">
<set>
<value>example.CustomQualifier</value>
</set>
</property>
</bean>
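The example.CustomQualifier type referenced above is assumed to be a plain annotation that is deliberately not meta-annotated with Spring’s @Qualifier; it only acts as a qualifier because it is registered with the CustomAutowireConfigurer. A minimal sketch might look like this:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Note the absence of @Qualifier: registration with
// CustomAutowireConfigurer is what makes this a qualifier annotation.
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@interface CustomQualifier {
	String value() default "";
}
```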
The AutowireCandidateResolver determines autowire candidates by:
• The autowire-candidate value of each bean definition
• Any default-autowire-candidates patterns available on the <beans/> element
• The presence of @Qualifier annotations and any custom annotations registered with the CustomAutowireConfigurer
When multiple beans qualify as autowire candidates, the determination of a “primary” is as follows: If exactly one bean definition among the candidates has a primary attribute set to true, it is selected.
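For example, with two bean definitions of the same type, marking exactly one with primary="true" resolves the ambiguity (the class name is illustrative):

```xml
<bean class="example.SimpleMovieCatalog" primary="true">
	<!-- this definition is selected when multiple candidates match -->
</bean>

<bean class="example.SimpleMovieCatalog"/>
```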
1.9.7. Injection with @Resource
Spring also supports injection by using the JSR-250 @Resource annotation (javax.annotation.Resource) on fields or bean property setter methods. This is a common pattern in Java EE: for example, in JSF-managed beans and JAX-WS endpoints. Spring supports this pattern for Spring-managed objects as well.
@Resource takes a name attribute. By default, Spring interprets that value as the bean name to be injected. In other words, it follows by-name semantics, as demonstrated in the following example:
Java
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Resource(name="myMovieFinder") (1)
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
}
1 This line injects a @Resource.
Kotlin
class SimpleMovieLister {
@Resource(name="myMovieFinder") (1)
private lateinit var movieFinder: MovieFinder
}
1 This line injects a @Resource.
If no name is explicitly specified, the default name is derived from the field name or setter method. In case of a field, it takes the field name. In case of a setter method, it takes the bean property name. In the following example, the bean named movieFinder is injected into the setter method:
Java
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Resource
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
}
Kotlin
class SimpleMovieLister {
@Resource
private lateinit var movieFinder: MovieFinder
}
The name provided with the annotation is resolved as a bean name by the ApplicationContext of which the CommonAnnotationBeanPostProcessor is aware. The names can be resolved through JNDI if you configure Spring’s SimpleJndiBeanFactory explicitly. However, we recommend that you rely on the default behavior and use Spring’s JNDI lookup capabilities to preserve the level of indirection.
In the exclusive case of @Resource usage with no explicit name specified, and similar to @Autowired, @Resource finds a primary type match instead of a specific named bean and resolves well known resolvable dependencies: the BeanFactory, ApplicationContext, ResourceLoader, ApplicationEventPublisher, and MessageSource interfaces.
Thus, in the following example, the customerPreferenceDao field first looks for a bean named "customerPreferenceDao" and then falls back to a primary type match for the type CustomerPreferenceDao:
Java
public class MovieRecommender {
@Resource
private CustomerPreferenceDao customerPreferenceDao;
@Resource
private ApplicationContext context; (1)
public MovieRecommender() {
}
// ...
}
1 The context field is injected based on the known resolvable dependency type: ApplicationContext.
Kotlin
class MovieRecommender {
@Resource
private lateinit var customerPreferenceDao: CustomerPreferenceDao
@Resource
private lateinit var context: ApplicationContext (1)
// ...
}
1 The context field is injected based on the known resolvable dependency type: ApplicationContext.
1.9.8. Using @Value
@Value is typically used to inject externalized properties:
Java
@Component
public class MovieRecommender {
private final String catalog;
public MovieRecommender(@Value("${catalog.name}") String catalog) {
this.catalog = catalog;
}
}
Kotlin
@Component
class MovieRecommender(@Value("\${catalog.name}") private val catalog: String)
With the following configuration:
Java
@Configuration
@PropertySource("classpath:application.properties")
public class AppConfig { }
Kotlin
@Configuration
@PropertySource("classpath:application.properties")
class AppConfig
And the following application.properties file:
catalog.name=MovieCatalog
In that case, the catalog parameter and field will be equal to the MovieCatalog value.
A default lenient embedded value resolver is provided by Spring. It will try to resolve the property value and if it cannot be resolved, the property name (for example ${catalog.name}) will be injected as the value. If you want to maintain strict control over nonexistent values, you should declare a PropertySourcesPlaceholderConfigurer bean, as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean
public static PropertySourcesPlaceholderConfigurer propertyPlaceholderConfigurer() {
return new PropertySourcesPlaceholderConfigurer();
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun propertyPlaceholderConfigurer() = PropertySourcesPlaceholderConfigurer()
}
When configuring a PropertySourcesPlaceholderConfigurer using JavaConfig, the @Bean method must be static.
Using the above configuration ensures Spring initialization failure if any ${} placeholder could not be resolved. It is also possible to use methods like setPlaceholderPrefix, setPlaceholderSuffix, or setValueSeparator to customize placeholders.
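A brief sketch of those customization methods (the %{…} prefix and :- separator are illustrative choices, not defaults):

```java
@Configuration
public class AppConfig {

	@Bean
	public static PropertySourcesPlaceholderConfigurer propertyPlaceholderConfigurer() {
		PropertySourcesPlaceholderConfigurer configurer = new PropertySourcesPlaceholderConfigurer();
		// Resolve placeholders written as %{key:-default} instead of ${key:default}
		configurer.setPlaceholderPrefix("%{");
		configurer.setPlaceholderSuffix("}");
		configurer.setValueSeparator(":-");
		return configurer;
	}
}
```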
Spring Boot configures by default a PropertySourcesPlaceholderConfigurer bean that will get properties from application.properties and application.yml files.
Built-in converter support provided by Spring allows simple type conversion (to Integer or int for example) to be automatically handled. Multiple comma-separated values can be automatically converted to String array without extra effort.
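For instance, given a property such as catalog.genres=Action,Comedy,Thriller (an illustrative key), the comma-separated value can be injected directly as an array:

```java
@Component
public class MovieRecommender {

	private final String[] genres;

	// "Action,Comedy,Thriller" is converted to a three-element String array
	public MovieRecommender(@Value("${catalog.genres}") String[] genres) {
		this.genres = genres;
	}
}
```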
It is possible to provide a default value as follows:
Java
@Component
public class MovieRecommender {
private final String catalog;
public MovieRecommender(@Value("${catalog.name:defaultCatalog}") String catalog) {
this.catalog = catalog;
}
}
Kotlin
@Component
class MovieRecommender(@Value("\${catalog.name:defaultCatalog}") private val catalog: String)
A Spring BeanPostProcessor uses a ConversionService behind the scenes to handle the process of converting the String value in @Value to the target type. If you want to provide conversion support for your own custom type, you can provide your own ConversionService bean instance, as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean
public ConversionService conversionService() {
DefaultFormattingConversionService conversionService = new DefaultFormattingConversionService();
conversionService.addConverter(new MyCustomConverter());
return conversionService;
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun conversionService(): ConversionService {
return DefaultFormattingConversionService().apply {
addConverter(MyCustomConverter())
}
}
}
When @Value contains a SpEL expression the value will be dynamically computed at runtime as the following example shows:
Java
@Component
public class MovieRecommender {
private final String catalog;
public MovieRecommender(@Value("#{systemProperties['user.catalog'] + 'Catalog' }") String catalog) {
this.catalog = catalog;
}
}
Kotlin
@Component
class MovieRecommender(
@Value("#{systemProperties['user.catalog'] + 'Catalog' }") private val catalog: String)
SpEL also enables the use of more complex data structures:
Java
@Component
public class MovieRecommender {
private final Map<String, Integer> countOfMoviesPerCatalog;
public MovieRecommender(
@Value("#{{'Thriller': 100, 'Comedy': 300}}") Map<String, Integer> countOfMoviesPerCatalog) {
this.countOfMoviesPerCatalog = countOfMoviesPerCatalog;
}
}
Kotlin
@Component
class MovieRecommender(
@Value("#{{'Thriller': 100, 'Comedy': 300}}") private val countOfMoviesPerCatalog: Map<String, Int>)
1.9.9. Using @PostConstruct and @PreDestroy
The CommonAnnotationBeanPostProcessor not only recognizes the @Resource annotation but also the JSR-250 lifecycle annotations: javax.annotation.PostConstruct and javax.annotation.PreDestroy. Introduced in Spring 2.5, the support for these annotations offers an alternative to the lifecycle callback mechanism described in initialization callbacks and destruction callbacks. Provided that the CommonAnnotationBeanPostProcessor is registered within the Spring ApplicationContext, a method carrying one of these annotations is invoked at the same point in the lifecycle as the corresponding Spring lifecycle interface method or explicitly declared callback method. In the following example, the cache is pre-populated upon initialization and cleared upon destruction:
Java
public class CachingMovieLister {
@PostConstruct
public void populateMovieCache() {
// populates the movie cache upon initialization...
}
@PreDestroy
public void clearMovieCache() {
// clears the movie cache upon destruction...
}
}
Kotlin
class CachingMovieLister {
@PostConstruct
fun populateMovieCache() {
// populates the movie cache upon initialization...
}
@PreDestroy
fun clearMovieCache() {
// clears the movie cache upon destruction...
}
}
For details about the effects of combining various lifecycle mechanisms, see Combining Lifecycle Mechanisms.
Like @Resource, the @PostConstruct and @PreDestroy annotation types were a part of the standard Java libraries from JDK 6 to 8. However, the entire javax.annotation package got separated from the core Java modules in JDK 9 and eventually removed in JDK 11. If needed, the javax.annotation-api artifact needs to be obtained via Maven Central now, simply to be added to the application’s classpath like any other library.
1.10. Classpath Scanning and Managed Components
Most examples in this chapter use XML to specify the configuration metadata that produces each BeanDefinition within the Spring container. The previous section (Annotation-based Container Configuration) demonstrates how to provide a lot of the configuration metadata through source-level annotations. Even in those examples, however, the “base” bean definitions are explicitly defined in the XML file, while the annotations drive only the dependency injection. This section describes an option for implicitly detecting the candidate components by scanning the classpath. Candidate components are classes that match against a filter criteria and have a corresponding bean definition registered with the container. This removes the need to use XML to perform bean registration. Instead, you can use annotations (for example, @Component), AspectJ type expressions, or your own custom filter criteria to select which classes have bean definitions registered with the container.
Starting with Spring 3.0, many features provided by the Spring JavaConfig project are part of the core Spring Framework. This allows you to define beans using Java rather than using the traditional XML files. Take a look at the @Configuration, @Bean, @Import, and @DependsOn annotations for examples of how to use these new features.
1.10.1. @Component and Further Stereotype Annotations
The @Repository annotation is a marker for any class that fulfills the role or stereotype of a repository (also known as Data Access Object or DAO). Among the uses of this marker is the automatic translation of exceptions, as described in Exception Translation.
Spring provides further stereotype annotations: @Component, @Service, and @Controller. @Component is a generic stereotype for any Spring-managed component. @Repository, @Service, and @Controller are specializations of @Component for more specific use cases (in the persistence, service, and presentation layers, respectively). Therefore, you can annotate your component classes with @Component, but, by annotating them with @Repository, @Service, or @Controller instead, your classes are more properly suited for processing by tools or associating with aspects. For example, these stereotype annotations make ideal targets for pointcuts. @Repository, @Service, and @Controller can also carry additional semantics in future releases of the Spring Framework. Thus, if you are choosing between using @Component or @Service for your service layer, @Service is clearly the better choice. Similarly, as stated earlier, @Repository is already supported as a marker for automatic exception translation in your persistence layer.
1.10.2. Using Meta-annotations and Composed Annotations
Many of the annotations provided by Spring can be used as meta-annotations in your own code. A meta-annotation is an annotation that can be applied to another annotation. For example, the @Service annotation mentioned earlier is meta-annotated with @Component, as the following example shows:
Java
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Component (1)
public @interface Service {
// ...
}
1 The @Component meta-annotation causes @Service to be treated in the same way as @Component.
Kotlin
@Target(AnnotationTarget.TYPE)
@Retention(AnnotationRetention.RUNTIME)
@MustBeDocumented
@Component (1)
annotation class Service
1 The @Component meta-annotation causes @Service to be treated in the same way as @Component.
You can also combine meta-annotations to create “composed annotations”. For example, the @RestController annotation from Spring MVC is composed of @Controller and @ResponseBody.
In addition, composed annotations can optionally redeclare attributes from meta-annotations to allow customization. This can be particularly useful when you want to only expose a subset of the meta-annotation’s attributes. For example, Spring’s @SessionScope annotation hardcodes the scope name to session but still allows customization of the proxyMode. The following listing shows the definition of the SessionScope annotation:
Java
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Scope(WebApplicationContext.SCOPE_SESSION)
public @interface SessionScope {
/**
* Alias for {@link Scope#proxyMode}.
* <p>Defaults to {@link ScopedProxyMode#TARGET_CLASS}.
*/
@AliasFor(annotation = Scope.class)
ScopedProxyMode proxyMode() default ScopedProxyMode.TARGET_CLASS;
}
Kotlin
@Target(AnnotationTarget.TYPE, AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
@MustBeDocumented
@Scope(WebApplicationContext.SCOPE_SESSION)
annotation class SessionScope(
@get:AliasFor(annotation = Scope::class)
val proxyMode: ScopedProxyMode = ScopedProxyMode.TARGET_CLASS
)
You can then use @SessionScope without declaring the proxyMode as follows:
Java
@Service
@SessionScope
public class SessionScopedService {
// ...
}
Kotlin
@Service
@SessionScope
class SessionScopedService {
// ...
}
You can also override the value for the proxyMode, as the following example shows:
Java
@Service
@SessionScope(proxyMode = ScopedProxyMode.INTERFACES)
public class SessionScopedUserService implements UserService {
// ...
}
Kotlin
@Service
@SessionScope(proxyMode = ScopedProxyMode.INTERFACES)
class SessionScopedUserService : UserService {
// ...
}
For further details, see the Spring Annotation Programming Model wiki page.
1.10.3. Automatically Detecting Classes and Registering Bean Definitions
Spring can automatically detect stereotyped classes and register corresponding BeanDefinition instances with the ApplicationContext. For example, the following two classes are eligible for such autodetection:
Java
@Service
public class SimpleMovieLister {
private MovieFinder movieFinder;
public SimpleMovieLister(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
}
Kotlin
@Service
class SimpleMovieLister(private val movieFinder: MovieFinder)
Java
@Repository
public class JpaMovieFinder implements MovieFinder {
// implementation elided for clarity
}
Kotlin
@Repository
class JpaMovieFinder : MovieFinder {
// implementation elided for clarity
}
To autodetect these classes and register the corresponding beans, you need to add @ComponentScan to your @Configuration class, where the basePackages attribute is a common parent package for the two classes. (Alternatively, you can specify a comma- or semicolon- or space-separated list that includes the parent package of each class.)
Java
@Configuration
@ComponentScan(basePackages = "org.example")
public class AppConfig {
// ...
}
Kotlin
@Configuration
@ComponentScan(basePackages = ["org.example"])
class AppConfig {
// ...
}
For brevity, the preceding example could have used the value attribute of the annotation (that is, @ComponentScan("org.example")).
The following alternative uses XML:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
https://www.springframework.org/schema/context/spring-context.xsd">
<context:component-scan base-package="org.example"/>
</beans>
The use of <context:component-scan> implicitly enables the functionality of <context:annotation-config>. There is usually no need to include the <context:annotation-config> element when using <context:component-scan>.
The scanning of classpath packages requires the presence of corresponding directory entries in the classpath. When you build JARs with Ant, make sure that you do not activate the files-only switch of the JAR task. Also, classpath directories may not be exposed based on security policies in some environments — for example, standalone apps on JDK 1.7.0_45 and higher (which requires 'Trusted-Library' setup in your manifests — see https://stackoverflow.com/questions/19394570/java-jre-7u45-breaks-classloader-getresources).
On JDK 9’s module path (Jigsaw), Spring’s classpath scanning generally works as expected. However, make sure that your component classes are exported in your module-info descriptors. If you expect Spring to invoke non-public members of your classes, make sure that they are 'opened' (that is, that they use an opens declaration instead of an exports declaration in your module-info descriptor).
Furthermore, the AutowiredAnnotationBeanPostProcessor and CommonAnnotationBeanPostProcessor are both implicitly included when you use the component-scan element. That means that the two components are autodetected and wired together — all without any bean configuration metadata provided in XML.
You can disable the registration of AutowiredAnnotationBeanPostProcessor and CommonAnnotationBeanPostProcessor by including the annotation-config attribute with a value of false.
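For example:

```xml
<context:component-scan base-package="org.example" annotation-config="false"/>
```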
1.10.4. Using Filters to Customize Scanning
By default, classes annotated with @Component, @Repository, @Service, @Controller, @Configuration, or a custom annotation that itself is annotated with @Component are the only detected candidate components. However, you can modify and extend this behavior by applying custom filters. Add them as includeFilters or excludeFilters attributes of the @ComponentScan annotation (or as <context:include-filter /> or <context:exclude-filter /> child elements of the <context:component-scan> element in XML configuration). Each filter element requires the type and expression attributes. The following table describes the filtering options:
Table 5. Filter Types
Filter Type Example Expression Description
annotation (default)
org.example.SomeAnnotation
An annotation to be present or meta-present at the type level in target components.
assignable
org.example.SomeClass
A class (or interface) that the target components are assignable to (extend or implement).
aspectj
org.example..*Service+
An AspectJ type expression to be matched by the target components.
regex
org\.example\.Default.*
A regex expression to be matched by the target components' class names.
custom
org.example.MyTypeFilter
A custom implementation of the org.springframework.core.type.TypeFilter interface.
The following example shows the configuration ignoring all @Repository annotations and using “stub” repositories instead:
Java
@Configuration
@ComponentScan(basePackages = "org.example",
includeFilters = @Filter(type = FilterType.REGEX, pattern = ".*Stub.*Repository"),
excludeFilters = @Filter(Repository.class))
public class AppConfig {
...
}
Kotlin
@Configuration
@ComponentScan(basePackages = "org.example",
includeFilters = [Filter(type = FilterType.REGEX, pattern = [".*Stub.*Repository"])],
excludeFilters = [Filter(Repository::class)])
class AppConfig {
// ...
}
The following listing shows the equivalent XML:
<beans>
<context:component-scan base-package="org.example">
<context:include-filter type="regex"
expression=".*Stub.*Repository"/>
<context:exclude-filter type="annotation"
expression="org.springframework.stereotype.Repository"/>
</context:component-scan>
</beans>
You can also disable the default filters by setting useDefaultFilters=false on the annotation or by providing use-default-filters="false" as an attribute of the <component-scan/> element. This effectively disables automatic detection of classes annotated or meta-annotated with @Component, @Repository, @Service, @Controller, @RestController, or @Configuration.
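A sketch combining both settings (MyCustomAnnotation is a hypothetical annotation type): with the default filters disabled, only classes matching the explicit include filter are registered.

```java
@Configuration
@ComponentScan(basePackages = "org.example",
		useDefaultFilters = false,
		includeFilters = @Filter(type = FilterType.ANNOTATION, classes = MyCustomAnnotation.class))
public class AppConfig {
	// ...
}
```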
1.10.5. Defining Bean Metadata within Components
Spring components can also contribute bean definition metadata to the container. You can do this with the same @Bean annotation used to define bean metadata within @Configuration annotated classes. The following example shows how to do so:
Java
@Component
public class FactoryMethodComponent {
@Bean
@Qualifier("public")
public TestBean publicInstance() {
return new TestBean("publicInstance");
}
public void doWork() {
// Component method implementation omitted
}
}
Kotlin
@Component
class FactoryMethodComponent {
@Bean
@Qualifier("public")
fun publicInstance() = TestBean("publicInstance")
fun doWork() {
// Component method implementation omitted
}
}
The preceding class is a Spring component that has application-specific code in its doWork() method. However, it also contributes a bean definition that has a factory method referring to the method publicInstance(). The @Bean annotation identifies the factory method and other bean definition properties, such as a qualifier value through the @Qualifier annotation. Other method-level annotations that can be specified are @Scope, @Lazy, and custom qualifier annotations.
In addition to its role for component initialization, you can also place the @Lazy annotation on injection points marked with @Autowired or @Inject. In this context, it leads to the injection of a lazy-resolution proxy.
Autowired fields and methods are supported, as previously discussed, with additional support for autowiring of @Bean methods. The following example shows how to do so:
Java
@Component
public class FactoryMethodComponent {
private static int i;
@Bean
@Qualifier("public")
public TestBean publicInstance() {
return new TestBean("publicInstance");
}
// use of a custom qualifier and autowiring of method parameters
@Bean
protected TestBean protectedInstance(
@Qualifier("public") TestBean spouse,
@Value("#{privateInstance.age}") String country) {
TestBean tb = new TestBean("protectedInstance", 1);
tb.setSpouse(spouse);
tb.setCountry(country);
return tb;
}
@Bean
private TestBean privateInstance() {
return new TestBean("privateInstance", i++);
}
@Bean
@RequestScope
public TestBean requestScopedInstance() {
return new TestBean("requestScopedInstance", 3);
}
}
Kotlin
@Component
class FactoryMethodComponent {
companion object {
private var i: Int = 0
}
@Bean
@Qualifier("public")
fun publicInstance() = TestBean("publicInstance")
// use of a custom qualifier and autowiring of method parameters
@Bean
protected fun protectedInstance(
@Qualifier("public") spouse: TestBean,
@Value("#{privateInstance.age}") country: String) = TestBean("protectedInstance", 1).apply {
this.spouse = spouse
this.country = country
}
@Bean
private fun privateInstance() = TestBean("privateInstance", i++)
@Bean
@RequestScope
fun requestScopedInstance() = TestBean("requestScopedInstance", 3)
}
The example autowires the String method parameter country to the value of the age property on another bean named privateInstance. A Spring Expression Language element defines the value of the property through the notation #{ <expression> }. For @Value annotations, an expression resolver is preconfigured to look for bean names when resolving expression text.
As of Spring Framework 4.3, you may also declare a factory method parameter of type InjectionPoint (or its more specific subclass: DependencyDescriptor) to access the requesting injection point that triggers the creation of the current bean. Note that this applies only to the actual creation of bean instances, not to the injection of existing instances. As a consequence, this feature makes most sense for beans of prototype scope. For other scopes, the factory method only ever sees the injection point that triggered the creation of a new bean instance in the given scope (for example, the dependency that triggered the creation of a lazy singleton bean). You can use the provided injection point metadata with semantic care in such scenarios. The following example shows how to use InjectionPoint:
Java
@Component
public class FactoryMethodComponent {
@Bean @Scope("prototype")
public TestBean prototypeInstance(InjectionPoint injectionPoint) {
return new TestBean("prototypeInstance for " + injectionPoint.getMember());
}
}
Kotlin
@Component
class FactoryMethodComponent {
@Bean
@Scope("prototype")
fun prototypeInstance(injectionPoint: InjectionPoint) =
TestBean("prototypeInstance for ${injectionPoint.member}")
}
The @Bean methods in a regular Spring component are processed differently than their counterparts inside a Spring @Configuration class. The difference is that @Component classes are not enhanced with CGLIB to intercept the invocation of methods and fields. CGLIB proxying is the means by which invoking methods or fields within @Bean methods in @Configuration classes creates bean metadata references to collaborating objects. Such methods are not invoked with normal Java semantics but rather go through the container in order to provide the usual lifecycle management and proxying of Spring beans, even when referring to other beans through programmatic calls to @Bean methods. In contrast, invoking a method or field in a @Bean method within a plain @Component class has standard Java semantics, with no special CGLIB processing or other constraints applying.
You may declare @Bean methods as static, allowing for them to be called without creating their containing configuration class as an instance. This makes particular sense when defining post-processor beans (for example, of type BeanFactoryPostProcessor or BeanPostProcessor), since such beans get initialized early in the container lifecycle and should avoid triggering other parts of the configuration at that point.
Calls to static @Bean methods never get intercepted by the container, not even within @Configuration classes (as described earlier in this section), due to technical limitations: CGLIB subclassing can override only non-static methods. As a consequence, a direct call to another @Bean method has standard Java semantics, resulting in an independent instance being returned straight from the factory method itself.
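A common application of static @Bean methods is registering a PropertySourcesPlaceholderConfigurer, a BeanFactoryPostProcessor that must exist before the configuration class itself is fully processed. A minimal sketch:

```java
@Configuration
public class AppConfig {

    // static: the container can call this method without instantiating AppConfig,
    // so the post-processor is available early in the container lifecycle
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
```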
The Java language visibility of @Bean methods does not have an immediate impact on the resulting bean definition in Spring’s container. You can freely declare your factory methods as you see fit in non-@Configuration classes and also for static methods anywhere. However, regular @Bean methods in @Configuration classes need to be overridable — that is, they must not be declared as private or final.
@Bean methods are also discovered on base classes of a given component or configuration class, as well as on Java 8 default methods declared in interfaces implemented by the component or configuration class. This allows for a lot of flexibility in composing complex configuration arrangements, with even multiple inheritance being possible through Java 8 default methods as of Spring 4.2.
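As an illustration of the default-method case, a @Bean definition can be inherited from an implemented interface. A sketch, with TransferService and TransferServiceImpl as placeholder types:

```java
public interface TransferConfig {

    @Bean
    default TransferService transferService() {
        return new TransferServiceImpl();
    }
}

@Configuration
public class AppConfig implements TransferConfig {
    // inherits the transferService() bean definition from the interface
}
```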
Finally, a single class may hold multiple @Bean methods for the same bean, as an arrangement of multiple factory methods to use depending on available dependencies at runtime. This is the same algorithm as for choosing the “greediest” constructor or factory method in other configuration scenarios: The variant with the largest number of satisfiable dependencies is picked at construction time, analogous to how the container selects between multiple @Autowired constructors.
1.10.6. Naming Autodetected Components
When a component is autodetected as part of the scanning process, its bean name is generated by the BeanNameGenerator strategy known to that scanner. By default, any Spring stereotype annotation (@Component, @Repository, @Service, and @Controller) that contains a name value thereby provides that name to the corresponding bean definition.
If such an annotation contains no name value or for any other detected component (such as those discovered by custom filters), the default bean name generator returns the uncapitalized non-qualified class name. For example, if the following component classes were detected, the names would be myMovieLister and movieFinderImpl:
Java
@Service("myMovieLister")
public class SimpleMovieLister {
// ...
}
Kotlin
@Service("myMovieLister")
class SimpleMovieLister {
// ...
}
Java
@Repository
public class MovieFinderImpl implements MovieFinder {
// ...
}
Kotlin
@Repository
class MovieFinderImpl : MovieFinder {
// ...
}
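The decapitalization rule follows java.beans.Introspector.decapitalize, which can be checked with plain JDK code (DefaultNamingDemo is an illustrative helper, not a Spring API):

```java
import java.beans.Introspector;

public class DefaultNamingDemo {

    // Mirrors the default strategy: the uncapitalized, non-qualified class name
    static String defaultBeanName(Class<?> clazz) {
        return Introspector.decapitalize(clazz.getSimpleName());
    }

    public static void main(String[] args) {
        System.out.println(defaultBeanName(StringBuilder.class)); // stringBuilder
        // JavaBeans corner case: a leading run of capitals is kept as-is
        System.out.println(Introspector.decapitalize("URLParser")); // URLParser
    }
}
```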
If you do not want to rely on the default bean-naming strategy, you can provide a custom bean-naming strategy. First, implement the BeanNameGenerator interface, and be sure to include a default no-arg constructor. Then, provide the fully qualified class name when configuring the scanner, as the following example annotation and bean definition show.
If you run into naming conflicts due to multiple autodetected components having the same non-qualified class name (i.e., classes with identical names but residing in different packages), you may need to configure a BeanNameGenerator that defaults to the fully qualified class name for the generated bean name. As of Spring Framework 5.2.3, the FullyQualifiedAnnotationBeanNameGenerator located in package org.springframework.context.annotation can be used for such purposes.
Java
@Configuration
@ComponentScan(basePackages = "org.example", nameGenerator = MyNameGenerator.class)
public class AppConfig {
// ...
}
Kotlin
@Configuration
@ComponentScan(basePackages = ["org.example"], nameGenerator = MyNameGenerator::class)
class AppConfig {
// ...
}
<beans>
<context:component-scan base-package="org.example"
name-generator="org.example.MyNameGenerator" />
</beans>
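For example, such a MyNameGenerator could extend Spring's default AnnotationBeanNameGenerator. The prefix below is purely illustrative:

```java
public class MyNameGenerator extends AnnotationBeanNameGenerator {

    // A default no-arg constructor is required (implicit here)

    @Override
    public String generateBeanName(BeanDefinition definition, BeanDefinitionRegistry registry) {
        // Illustrative only: prefix every generated bean name
        return "my" + super.generateBeanName(definition, registry);
    }
}
```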
As a general rule, consider specifying the name with the annotation whenever other components may be making explicit references to it. On the other hand, the auto-generated names are adequate whenever the container is responsible for wiring.
1.10.7. Providing a Scope for Autodetected Components
As with Spring-managed components in general, the default and most common scope for autodetected components is singleton. However, sometimes you need a different scope that can be specified by the @Scope annotation. You can provide the name of the scope within the annotation, as the following example shows:
Java
@Scope("prototype")
@Repository
public class MovieFinderImpl implements MovieFinder {
// ...
}
Kotlin
@Scope("prototype")
@Repository
class MovieFinderImpl : MovieFinder {
// ...
}
@Scope annotations are only introspected on the concrete bean class (for annotated components) or the factory method (for @Bean methods). In contrast to XML bean definitions, there is no notion of bean definition inheritance, and inheritance hierarchies at the class level are irrelevant for metadata purposes.
For details on web-specific scopes such as “request” or “session” in a Spring context, see Request, Session, Application, and WebSocket Scopes. As with the pre-built annotations for those scopes, you may also compose your own scoping annotations by using Spring’s meta-annotation approach: for example, a custom annotation meta-annotated with @Scope("prototype"), possibly also declaring a custom scoped-proxy mode.
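Such a composed scope annotation might look as follows (the @PrototypeProxied name is hypothetical):

```java
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Scope(value = "prototype", proxyMode = ScopedProxyMode.TARGET_CLASS)
public @interface PrototypeProxied {
}
```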
To provide a custom strategy for scope resolution rather than relying on the annotation-based approach, you can implement the ScopeMetadataResolver interface. Be sure to include a default no-arg constructor. Then you can provide the fully qualified class name when configuring the scanner, as the following example of both an annotation and a bean definition shows:
Java
@Configuration
@ComponentScan(basePackages = "org.example", scopeResolver = MyScopeResolver.class)
public class AppConfig {
// ...
}
Kotlin
@Configuration
@ComponentScan(basePackages = ["org.example"], scopeResolver = MyScopeResolver::class)
class AppConfig {
// ...
}
<beans>
<context:component-scan base-package="org.example" scope-resolver="org.example.MyScopeResolver"/>
</beans>
When using certain non-singleton scopes, it may be necessary to generate proxies for the scoped objects. The reasoning is described in Scoped Beans as Dependencies. For this purpose, a scoped-proxy attribute is available on the component-scan element. The three possible values are: no, interfaces, and targetClass. For example, the following configuration results in standard JDK dynamic proxies:
Java
@Configuration
@ComponentScan(basePackages = "org.example", scopedProxy = ScopedProxyMode.INTERFACES)
public class AppConfig {
// ...
}
Kotlin
@Configuration
@ComponentScan(basePackages = ["org.example"], scopedProxy = ScopedProxyMode.INTERFACES)
class AppConfig {
// ...
}
<beans>
<context:component-scan base-package="org.example" scoped-proxy="interfaces"/>
</beans>
1.10.8. Providing Qualifier Metadata with Annotations
The @Qualifier annotation is discussed in Fine-tuning Annotation-based Autowiring with Qualifiers. The examples in that section demonstrate the use of the @Qualifier annotation and custom qualifier annotations to provide fine-grained control when you resolve autowire candidates. Because those examples were based on XML bean definitions, the qualifier metadata was provided on the candidate bean definitions by using the qualifier or meta child elements of the bean element in the XML. When relying upon classpath scanning for auto-detection of components, you can provide the qualifier metadata with type-level annotations on the candidate class. The following three examples demonstrate this technique:
Java
@Component
@Qualifier("Action")
public class ActionMovieCatalog implements MovieCatalog {
// ...
}
Kotlin
@Component
@Qualifier("Action")
class ActionMovieCatalog : MovieCatalog
Java
@Component
@Genre("Action")
public class ActionMovieCatalog implements MovieCatalog {
// ...
}
Kotlin
@Component
@Genre("Action")
class ActionMovieCatalog : MovieCatalog {
// ...
}
Java
@Component
@Offline
public class CachingMovieCatalog implements MovieCatalog {
// ...
}
Kotlin
@Component
@Offline
class CachingMovieCatalog : MovieCatalog {
// ...
}
As with most annotation-based alternatives, keep in mind that the annotation metadata is bound to the class definition itself, while the use of XML allows for multiple beans of the same type to provide variations in their qualifier metadata, because that metadata is provided per-instance rather than per-class.
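The custom @Genre qualifier used above is itself an annotation meta-annotated with @Qualifier, along these lines (a sketch consistent with the qualifiers section referenced earlier):

```java
@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface Genre {

    String value();
}
```

A marker qualifier such as @Offline is declared the same way, simply without the value() attribute.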
1.10.9. Generating an Index of Candidate Components
While classpath scanning is very fast, it is possible to improve the startup performance of large applications by creating a static list of candidates at compilation time. In this mode, all modules that are targets of component scanning must use this mechanism.
Your existing @ComponentScan or <context:component-scan/> directives must stay as-is to request the context to scan candidates in certain packages. When the ApplicationContext detects such an index, it automatically uses it rather than scanning the classpath.
To generate the index, add an additional dependency to each module that contains components that are targets for component scan directives. The following example shows how to do so with Maven:
<dependencies>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context-indexer</artifactId>
<version>5.3.0-M1</version>
<optional>true</optional>
</dependency>
</dependencies>
With Gradle 4.5 and earlier, the dependency should be declared in the compileOnly configuration, as shown in the following example:
dependencies {
compileOnly "org.springframework:spring-context-indexer:5.3.0-M1"
}
With Gradle 4.6 and later, the dependency should be declared in the annotationProcessor configuration, as shown in the following example:
dependencies {
annotationProcessor "org.springframework:spring-context-indexer:5.3.0-M1"
}
That process generates a META-INF/spring.components file that is included in the jar file.
When working with this mode in your IDE, the spring-context-indexer must be registered as an annotation processor to make sure the index is up-to-date when candidate components are updated.
The index is enabled automatically when a META-INF/spring.components file is found on the classpath. If an index is partially available for some libraries (or use cases) but could not be built for the whole application, you can fall back to a regular classpath arrangement (as though no index were present at all) by setting spring.index.ignore to true, either as a system property or in a spring.properties file at the root of the classpath.
1.11. Using JSR 330 Standard Annotations
Starting with Spring 3.0, Spring offers support for JSR-330 standard annotations (Dependency Injection). Those annotations are scanned in the same way as the Spring annotations. To use them, you need to have the relevant jars in your classpath.
If you use Maven, the javax.inject artifact is available in the standard Maven repository (https://repo1.maven.org/maven2/javax/inject/javax.inject/1/). You can add the following dependency to your pom.xml file:
<dependency>
<groupId>javax.inject</groupId>
<artifactId>javax.inject</artifactId>
<version>1</version>
</dependency>
1.11.1. Dependency Injection with @Inject and @Named
Instead of @Autowired, you can use @javax.inject.Inject as follows:
Java
import javax.inject.Inject;
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Inject
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
public void listMovies() {
this.movieFinder.findMovies(...);
// ...
}
}
Kotlin
import javax.inject.Inject
class SimpleMovieLister {
@Inject
lateinit var movieFinder: MovieFinder
fun listMovies() {
movieFinder.findMovies(...)
// ...
}
}
As with @Autowired, you can use @Inject at the field level, method level and constructor-argument level. Furthermore, you may declare your injection point as a Provider, allowing for on-demand access to beans of shorter scopes or lazy access to other beans through a Provider.get() call. The following example offers a variant of the preceding example:
Java
import javax.inject.Inject;
import javax.inject.Provider;
public class SimpleMovieLister {
private Provider<MovieFinder> movieFinder;
@Inject
public void setMovieFinder(Provider<MovieFinder> movieFinder) {
this.movieFinder = movieFinder;
}
public void listMovies() {
this.movieFinder.get().findMovies(...);
// ...
}
}
Kotlin
import javax.inject.Inject
import javax.inject.Provider
class SimpleMovieLister {
    @Inject
    lateinit var movieFinder: Provider<MovieFinder>
    fun listMovies() {
        movieFinder.get().findMovies(...)
        // ...
    }
}
If you would like to use a qualified name for the dependency that should be injected, you should use the @Named annotation, as the following example shows:
Java
import javax.inject.Inject;
import javax.inject.Named;
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Inject
public void setMovieFinder(@Named("main") MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// ...
}
Kotlin
import javax.inject.Inject
import javax.inject.Named
class SimpleMovieLister {
private lateinit var movieFinder: MovieFinder
@Inject
fun setMovieFinder(@Named("main") movieFinder: MovieFinder) {
this.movieFinder = movieFinder
}
// ...
}
As with @Autowired, @Inject can also be used with java.util.Optional or @Nullable. This is even more applicable here, since @Inject does not have a required attribute. The following examples show how to use @Inject with Optional and with @Nullable:
Java
public class SimpleMovieLister {
@Inject
public void setMovieFinder(Optional<MovieFinder> movieFinder) {
// ...
}
}
Java
public class SimpleMovieLister {
@Inject
public void setMovieFinder(@Nullable MovieFinder movieFinder) {
// ...
}
}
Kotlin
class SimpleMovieLister {
@Inject
var movieFinder: MovieFinder? = null
}
1.11.2. @Named and @ManagedBean: Standard Equivalents to the @Component Annotation
Instead of @Component, you can use @javax.inject.Named or javax.annotation.ManagedBean, as the following example shows:
Java
import javax.inject.Inject;
import javax.inject.Named;
@Named("movieListener") // @ManagedBean("movieListener") could be used as well
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Inject
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// ...
}
Kotlin
import javax.inject.Inject
import javax.inject.Named
@Named("movieListener") // @ManagedBean("movieListener") could be used as well
class SimpleMovieLister {
@Inject
lateinit var movieFinder: MovieFinder
// ...
}
It is very common to use @Component without specifying a name for the component. @Named can be used in a similar fashion, as the following example shows:
Java
import javax.inject.Inject;
import javax.inject.Named;
@Named
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Inject
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// ...
}
Kotlin
import javax.inject.Inject
import javax.inject.Named
@Named
class SimpleMovieLister {
@Inject
lateinit var movieFinder: MovieFinder
// ...
}
When you use @Named or @ManagedBean, you can use component scanning in the exact same way as when you use Spring annotations, as the following example shows:
Java
@Configuration
@ComponentScan(basePackages = "org.example")
public class AppConfig {
// ...
}
Kotlin
@Configuration
@ComponentScan(basePackages = ["org.example"])
class AppConfig {
// ...
}
In contrast to @Component, the JSR-330 @Named and the JSR-250 @ManagedBean annotations are not composable. You should use Spring’s stereotype model for building custom component annotations.
1.11.3. Limitations of JSR-330 Standard Annotations
When you work with standard annotations, you should know that some significant features are not available, as the following table shows:
Table 6. Spring component model elements versus JSR-330 variants
Spring javax.inject.* javax.inject restrictions / comments
@Autowired
@Inject
@Inject has no 'required' attribute. Can be used with Java 8’s Optional instead.
@Component
@Named / @ManagedBean
JSR-330 does not provide a composable model, only a way to identify named components.
@Scope("singleton")
@Singleton
The JSR-330 default scope is like Spring’s prototype. However, in order to keep it consistent with Spring’s general defaults, a JSR-330 bean declared in the Spring container is a singleton by default. In order to use a scope other than singleton, you should use Spring’s @Scope annotation. javax.inject also provides a @Scope annotation. Nevertheless, this one is only intended to be used for creating your own annotations.
@Qualifier
@Qualifier / @Named
javax.inject.Qualifier is just a meta-annotation for building custom qualifiers. Concrete String qualifiers (like Spring’s @Qualifier with a value) can be associated through javax.inject.Named.
@Value
-
no equivalent
@Required
-
no equivalent
@Lazy
-
no equivalent
ObjectFactory
Provider
javax.inject.Provider is a direct alternative to Spring’s ObjectFactory, only with a shorter get() method name. It can also be used in combination with Spring’s @Autowired or with non-annotated constructors and setter methods.
1.12. Java-based Container Configuration
This section covers how to use annotations in your Java code to configure the Spring container. It includes the following topics:
1.12.1. Basic Concepts: @Bean and @Configuration
The central artifacts in Spring’s new Java-configuration support are @Configuration-annotated classes and @Bean-annotated methods.
The @Bean annotation is used to indicate that a method instantiates, configures, and initializes a new object to be managed by the Spring IoC container. For those familiar with Spring’s <beans/> XML configuration, the @Bean annotation plays the same role as the <bean/> element. You can use @Bean-annotated methods with any Spring @Component. However, they are most often used with @Configuration beans.
Annotating a class with @Configuration indicates that its primary purpose is as a source of bean definitions. Furthermore, @Configuration classes let inter-bean dependencies be defined by calling other @Bean methods in the same class. The simplest possible @Configuration class reads as follows:
Java
@Configuration
public class AppConfig {
@Bean
public MyService myService() {
return new MyServiceImpl();
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun myService(): MyService {
return MyServiceImpl()
}
}
The preceding AppConfig class is equivalent to the following Spring <beans/> XML:
<beans>
<bean id="myService" class="com.acme.services.MyServiceImpl"/>
</beans>
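The inter-bean dependencies mentioned above are expressed by one @Bean method calling another; within a @Configuration class, that call is intercepted and routed through the container. A sketch with MyRepository and JdbcMyRepository as hypothetical collaborator types:

```java
@Configuration
public class AppConfig {

    @Bean
    public MyService myService() {
        // Intercepted by CGLIB: resolves to the container-managed myRepository bean
        return new MyServiceImpl(myRepository());
    }

    @Bean
    public MyRepository myRepository() {
        return new JdbcMyRepository();
    }
}
```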
Full @Configuration vs “lite” @Bean mode?
When @Bean methods are declared within classes that are not annotated with @Configuration, they are referred to as being processed in a “lite” mode. Bean methods declared in a @Component or even in a plain old class are considered to be “lite”, with a different primary purpose of the containing class and a @Bean method being a sort of bonus there. For example, service components may expose management views to the container through an additional @Bean method on each applicable component class. In such scenarios, @Bean methods are a general-purpose factory method mechanism.
Unlike full @Configuration, lite @Bean methods cannot declare inter-bean dependencies. Instead, they operate on their containing component’s internal state and, optionally, on arguments that they may declare. Such a @Bean method should therefore not invoke other @Bean methods. Each such method is literally only a factory method for a particular bean reference, without any special runtime semantics. The positive side-effect here is that no CGLIB subclassing has to be applied at runtime, so there are no limitations in terms of class design (that is, the containing class may be final and so forth).
In common scenarios, @Bean methods are to be declared within @Configuration classes, ensuring that “full” mode is always used and that cross-method references therefore get redirected to the container’s lifecycle management. This prevents the same @Bean method from accidentally being invoked through a regular Java call, which helps to reduce subtle bugs that can be hard to track down when operating in “lite” mode.
The @Bean and @Configuration annotations are discussed in depth in the following sections. First, however, we cover the various ways of creating a Spring container by using Java-based configuration.
1.12.2. Instantiating the Spring Container by Using AnnotationConfigApplicationContext
The following sections document Spring’s AnnotationConfigApplicationContext, introduced in Spring 3.0. This versatile ApplicationContext implementation is capable of accepting not only @Configuration classes as input but also plain @Component classes and classes annotated with JSR-330 metadata.
When @Configuration classes are provided as input, the @Configuration class itself is registered as a bean definition and all declared @Bean methods within the class are also registered as bean definitions.
When @Component and JSR-330 classes are provided, they are registered as bean definitions, and it is assumed that DI metadata such as @Autowired or @Inject are used within those classes where necessary.
Simple Construction
In much the same way that Spring XML files are used as input when instantiating a ClassPathXmlApplicationContext, you can use @Configuration classes as input when instantiating an AnnotationConfigApplicationContext. This allows for completely XML-free usage of the Spring container, as the following example shows:
Java
public static void main(String[] args) {
ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
MyService myService = ctx.getBean(MyService.class);
myService.doStuff();
}
Kotlin
import org.springframework.beans.factory.getBean
fun main() {
val ctx = AnnotationConfigApplicationContext(AppConfig::class.java)
val myService = ctx.getBean<MyService>()
myService.doStuff()
}
As mentioned earlier, AnnotationConfigApplicationContext is not limited to working only with @Configuration classes. Any @Component or JSR-330 annotated class may be supplied as input to the constructor, as the following example shows:
Java
public static void main(String[] args) {
ApplicationContext ctx = new AnnotationConfigApplicationContext(MyServiceImpl.class, Dependency1.class, Dependency2.class);
MyService myService = ctx.getBean(MyService.class);
myService.doStuff();
}
Kotlin
import org.springframework.beans.factory.getBean
fun main() {
val ctx = AnnotationConfigApplicationContext(MyServiceImpl::class.java, Dependency1::class.java, Dependency2::class.java)
val myService = ctx.getBean<MyService>()
myService.doStuff()
}
The preceding example assumes that MyServiceImpl, Dependency1, and Dependency2 use Spring dependency injection annotations such as @Autowired.
Building the Container Programmatically by Using register(Class<?>…)
You can instantiate an AnnotationConfigApplicationContext by using a no-arg constructor and then configure it by using the register() method. This approach is particularly useful when programmatically building an AnnotationConfigApplicationContext. The following example shows how to do so:
Java
public static void main(String[] args) {
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
ctx.register(AppConfig.class, OtherConfig.class);
ctx.register(AdditionalConfig.class);
ctx.refresh();
MyService myService = ctx.getBean(MyService.class);
myService.doStuff();
}
Kotlin
import org.springframework.beans.factory.getBean
fun main() {
val ctx = AnnotationConfigApplicationContext()
ctx.register(AppConfig::class.java, OtherConfig::class.java)
ctx.register(AdditionalConfig::class.java)
ctx.refresh()
val myService = ctx.getBean<MyService>()
myService.doStuff()
}
Enabling Component Scanning with scan(String…)
To enable component scanning, you can annotate your @Configuration class as follows:
Java
@Configuration
@ComponentScan(basePackages = "com.acme") (1)
public class AppConfig {
...
}
1 This annotation enables component scanning.
Kotlin
@Configuration
@ComponentScan(basePackages = ["com.acme"]) (1)
class AppConfig {
// ...
}
1 This annotation enables component scanning.
Experienced Spring users may be familiar with the XML declaration equivalent from Spring’s context: namespace, shown in the following example:
<beans>
<context:component-scan base-package="com.acme"/>
</beans>
In the preceding example, the com.acme package is scanned to look for any @Component-annotated classes, and those classes are registered as Spring bean definitions within the container. AnnotationConfigApplicationContext exposes the scan(String…) method to allow for the same component-scanning functionality, as the following example shows:
Java
public static void main(String[] args) {
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
ctx.scan("com.acme");
ctx.refresh();
MyService myService = ctx.getBean(MyService.class);
}
Kotlin
fun main() {
val ctx = AnnotationConfigApplicationContext()
ctx.scan("com.acme")
ctx.refresh()
val myService = ctx.getBean<MyService>()
}
Remember that @Configuration classes are meta-annotated with @Component, so they are candidates for component-scanning. In the preceding example, assuming that AppConfig is declared within the com.acme package (or any package underneath), it is picked up during the call to scan(). Upon refresh(), all its @Bean methods are processed and registered as bean definitions within the container.
Support for Web Applications with AnnotationConfigWebApplicationContext
A WebApplicationContext variant of AnnotationConfigApplicationContext is available with AnnotationConfigWebApplicationContext. You can use this implementation when configuring the Spring ContextLoaderListener servlet listener, Spring MVC DispatcherServlet, and so forth. The following web.xml snippet configures a typical Spring MVC web application (note the use of the contextClass context-param and init-param):
<web-app>
<!-- Configure ContextLoaderListener to use AnnotationConfigWebApplicationContext
instead of the default XmlWebApplicationContext -->
<context-param>
<param-name>contextClass</param-name>
<param-value>
org.springframework.web.context.support.AnnotationConfigWebApplicationContext
</param-value>
</context-param>
<!-- Configuration locations must consist of one or more comma- or space-delimited
fully-qualified @Configuration classes. Fully-qualified packages may also be
specified for component-scanning -->
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>com.acme.AppConfig</param-value>
</context-param>
<!-- Bootstrap the root application context as usual using ContextLoaderListener -->
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<!-- Declare a Spring MVC DispatcherServlet as usual -->
<servlet>
<servlet-name>dispatcher</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<!-- Configure DispatcherServlet to use AnnotationConfigWebApplicationContext
instead of the default XmlWebApplicationContext -->
<init-param>
<param-name>contextClass</param-name>
<param-value>
org.springframework.web.context.support.AnnotationConfigWebApplicationContext
</param-value>
</init-param>
<!-- Again, config locations must consist of one or more comma- or space-delimited
and fully-qualified @Configuration classes -->
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>com.acme.web.MvcConfig</param-value>
</init-param>
</servlet>
<!-- map all requests for /app/* to the dispatcher servlet -->
<servlet-mapping>
<servlet-name>dispatcher</servlet-name>
<url-pattern>/app/*</url-pattern>
</servlet-mapping>
</web-app>
1.12.3. Using the @Bean Annotation
@Bean is a method-level annotation and a direct analog of the XML <bean/> element. The annotation supports some of the attributes offered by <bean/>, such as:
* init-method
* destroy-method
* autowiring
* name
You can use the @Bean annotation in a @Configuration-annotated or in a @Component-annotated class.
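As a sketch of the latter case (the class, bean, and method names here are illustrative, not taken from the examples above), a @Bean method can also be declared on a plain @Component class, where it is processed in so-called "lite" mode:

```java
import java.time.Clock;

import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

// A plain @Component (not @Configuration) may also declare @Bean methods.
// In this "lite" mode, the method is treated as a plain factory method:
// calls between @Bean methods in the same class are not intercepted by
// the container, unlike in a @Configuration class.
@Component
public class ClockComponent {

    @Bean
    public Clock systemClock() {
        return Clock.systemUTC();
    }
}
```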
Declaring a Bean
To declare a bean, you can annotate a method with the @Bean annotation. You use this method to register a bean definition within an ApplicationContext of the type specified as the method’s return value. By default, the bean name is the same as the method name. The following example shows a @Bean method declaration:
Java
@Configuration
public class AppConfig {
@Bean
public TransferServiceImpl transferService() {
return new TransferServiceImpl();
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun transferService() = TransferServiceImpl()
}
The preceding configuration is exactly equivalent to the following Spring XML:
<beans>
<bean id="transferService" class="com.acme.TransferServiceImpl"/>
</beans>
Both declarations make a bean named transferService available in the ApplicationContext, bound to an object instance of type TransferServiceImpl, as the following text image shows:
transferService -> com.acme.TransferServiceImpl
You can also declare your @Bean method with an interface (or base class) return type, as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean
public TransferService transferService() {
return new TransferServiceImpl();
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun transferService(): TransferService {
return TransferServiceImpl()
}
}
However, this limits the visibility for advance type prediction to the specified interface type (TransferService), with the full type (TransferServiceImpl) becoming known to the container only once the affected singleton bean has been instantiated. Non-lazy singleton beans get instantiated according to their declaration order, so you may see different type matching results depending on when another component tries to match by a non-declared type (such as @Autowired TransferServiceImpl, which resolves only once the transferService bean has been instantiated).
If you consistently refer to your types by a declared service interface, your @Bean return types may safely join that design decision. However, for components that implement several interfaces or for components potentially referred to by their implementation type, it is safer to declare the most specific return type possible (at least as specific as required by the injection points that refer to your bean).
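For instance, a hypothetical service that implements two interfaces (the AuditCapable interface and the AuditableTransferService class below are invented for illustration) benefits from declaring the most specific return type, so that all of its types remain matchable before the bean is instantiated:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

interface TransferService { }

interface AuditCapable { }

// Implements several interfaces, so the most specific return type is safest.
class AuditableTransferService implements TransferService, AuditCapable { }

@Configuration
public class TransferConfig {

    // Declaring AuditableTransferService lets the container predict matches
    // for TransferService, AuditCapable, and the implementation type itself,
    // without having to instantiate the bean first.
    @Bean
    public AuditableTransferService transferService() {
        return new AuditableTransferService();
    }
}
```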
Bean Dependencies
A @Bean-annotated method can have an arbitrary number of parameters that describe the dependencies required to build that bean. For instance, if our TransferService requires an AccountRepository, we can materialize that dependency with a method parameter, as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean
public TransferService transferService(AccountRepository accountRepository) {
return new TransferServiceImpl(accountRepository);
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun transferService(accountRepository: AccountRepository): TransferService {
return TransferServiceImpl(accountRepository)
}
}
The resolution mechanism is pretty much identical to constructor-based dependency injection. See the relevant section for more details.
Receiving Lifecycle Callbacks
Any classes defined with the @Bean annotation support the regular lifecycle callbacks and can use the @PostConstruct and @PreDestroy annotations from JSR-250. See JSR-250 annotations for further details.
The regular Spring lifecycle callbacks are fully supported as well. If a bean implements InitializingBean, DisposableBean, or Lifecycle, their respective methods are called by the container.
The standard set of *Aware interfaces (such as BeanFactoryAware, BeanNameAware, MessageSourceAware, ApplicationContextAware, and so on) are also fully supported.
The @Bean annotation supports specifying arbitrary initialization and destruction callback methods, much like Spring XML’s init-method and destroy-method attributes on the bean element, as the following example shows:
Java
public class BeanOne {
public void init() {
// initialization logic
}
}
public class BeanTwo {
public void cleanup() {
// destruction logic
}
}
@Configuration
public class AppConfig {
@Bean(initMethod = "init")
public BeanOne beanOne() {
return new BeanOne();
}
@Bean(destroyMethod = "cleanup")
public BeanTwo beanTwo() {
return new BeanTwo();
}
}
Kotlin
class BeanOne {
fun init() {
// initialization logic
}
}
class BeanTwo {
fun cleanup() {
// destruction logic
}
}
@Configuration
class AppConfig {
@Bean(initMethod = "init")
fun beanOne() = BeanOne()
@Bean(destroyMethod = "cleanup")
fun beanTwo() = BeanTwo()
}
By default, beans defined with Java configuration that have a public close or shutdown method are automatically enlisted with a destruction callback. If you have a public close or shutdown method and you do not wish for it to be called when the container shuts down, you can add @Bean(destroyMethod="") to your bean definition to disable the default (inferred) mode.
You may want to do that by default for a resource that you acquire with JNDI, as its lifecycle is managed outside the application. In particular, make sure to always do it for a DataSource, as it is known to be problematic on Java EE application servers.
The following example shows how to prevent an automatic destruction callback for a DataSource:
Java
@Bean(destroyMethod="")
public DataSource dataSource() throws NamingException {
return (DataSource) jndiTemplate.lookup("MyDS");
}
Kotlin
@Bean(destroyMethod = "")
fun dataSource(): DataSource {
return jndiTemplate.lookup("MyDS") as DataSource
}
Also, with @Bean methods, you typically use programmatic JNDI lookups, either by using Spring’s JndiTemplate or JndiLocatorDelegate helpers or straight JNDI InitialContext usage but not the JndiObjectFactoryBean variant (which would force you to declare the return type as the FactoryBean type instead of the actual target type, making it harder to use for cross-reference calls in other @Bean methods that intend to refer to the provided resource here).
In the case of BeanOne from the earlier example, it would be equally valid to call the init() method directly during construction, as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean
public BeanOne beanOne() {
BeanOne beanOne = new BeanOne();
beanOne.init();
return beanOne;
}
// ...
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun beanOne() = BeanOne().apply {
init()
}
// ...
}
When you work directly in Java, you can do anything you like with your objects and do not always need to rely on the container lifecycle.
Specifying Bean Scope
Spring includes the @Scope annotation so that you can specify the scope of a bean.
Using the @Scope Annotation
You can specify that your beans defined with the @Bean annotation should have a specific scope. You can use any of the standard scopes specified in the Bean Scopes section.
The default scope is singleton, but you can override this with the @Scope annotation, as the following example shows:
Java
@Configuration
public class MyConfiguration {
@Bean
@Scope("prototype")
public Encryptor encryptor() {
// ...
}
}
Kotlin
@Configuration
class MyConfiguration {
@Bean
@Scope("prototype")
fun encryptor(): Encryptor {
// ...
}
}
@Scope and scoped-proxy
Spring offers a convenient way of working with scoped dependencies through scoped proxies. The easiest way to create such a proxy when using the XML configuration is the <aop:scoped-proxy/> element. Configuring your beans in Java with a @Scope annotation offers equivalent support with the proxyMode attribute. The default is no proxy (ScopedProxyMode.NO), but you can specify ScopedProxyMode.TARGET_CLASS or ScopedProxyMode.INTERFACES.
If you port the scoped proxy example from the XML reference documentation (see scoped proxies) to our @Bean using Java, it resembles the following:
Java
// an HTTP Session-scoped bean exposed as a proxy
@Bean
@SessionScope
public UserPreferences userPreferences() {
return new UserPreferences();
}
@Bean
public Service userService() {
UserService service = new SimpleUserService();
// a reference to the proxied userPreferences bean
service.setUserPreferences(userPreferences());
return service;
}
Kotlin
// an HTTP Session-scoped bean exposed as a proxy
@Bean
@SessionScope
fun userPreferences() = UserPreferences()
@Bean
fun userService(): Service {
return SimpleUserService().apply {
// a reference to the proxied userPreferences bean
setUserPreferences(userPreferences()
}
}
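The @SessionScope annotation used above is a composed annotation that defaults to a class-based proxy. An explicit @Scope declaration with the proxyMode attribute is equivalent; a minimal sketch (assuming the UserPreferences class from the preceding example, with an illustrative method name):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.web.context.WebApplicationContext;

// Equivalent to @SessionScope: a session-scoped bean exposed through a
// CGLIB class-based scoped proxy, so it can safely be injected into
// longer-lived singleton beans.
@Bean
@Scope(value = WebApplicationContext.SCOPE_SESSION,
        proxyMode = ScopedProxyMode.TARGET_CLASS)
public UserPreferences sessionScopedUserPreferences() {
    return new UserPreferences();
}
```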
Customizing Bean Naming
By default, configuration classes use a @Bean method’s name as the name of the resulting bean. This functionality can be overridden, however, with the name attribute, as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean(name = "myThing")
public Thing thing() {
return new Thing();
}
}
Kotlin
@Configuration
class AppConfig {
@Bean("myThing")
fun thing() = Thing()
}
Bean Aliasing
As discussed in Naming Beans, it is sometimes desirable to give a single bean multiple names, otherwise known as bean aliasing. The name attribute of the @Bean annotation accepts a String array for this purpose. The following example shows how to set a number of aliases for a bean:
Java
@Configuration
public class AppConfig {
@Bean({"dataSource", "subsystemA-dataSource", "subsystemB-dataSource"})
public DataSource dataSource() {
// instantiate, configure and return DataSource bean...
}
}
Kotlin
@Configuration
class AppConfig {
@Bean("dataSource", "subsystemA-dataSource", "subsystemB-dataSource")
fun dataSource(): DataSource {
// instantiate, configure and return DataSource bean...
}
}
Bean Description
Sometimes, it is helpful to provide a more detailed textual description of a bean. This can be particularly useful when beans are exposed (perhaps through JMX) for monitoring purposes.
To add a description to a @Bean, you can use the @Description annotation, as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean
@Description("Provides a basic example of a bean")
public Thing thing() {
return new Thing();
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
@Description("Provides a basic example of a bean")
fun thing() = Thing()
}
1.12.4. Using the @Configuration annotation
@Configuration is a class-level annotation indicating that an object is a source of bean definitions. @Configuration classes declare beans through public @Bean annotated methods. Calls to @Bean methods on @Configuration classes can also be used to define inter-bean dependencies. See Basic Concepts: @Bean and @Configuration for a general introduction.
Injecting Inter-bean Dependencies
When beans have dependencies on one another, expressing that dependency is as simple as having one bean method call another, as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean
public BeanOne beanOne() {
return new BeanOne(beanTwo());
}
@Bean
public BeanTwo beanTwo() {
return new BeanTwo();
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun beanOne() = BeanOne(beanTwo())
@Bean
fun beanTwo() = BeanTwo()
}
In the preceding example, beanOne receives a reference to beanTwo through constructor injection.
This method of declaring inter-bean dependencies works only when the @Bean method is declared within a @Configuration class. You cannot declare inter-bean dependencies by using plain @Component classes.
Lookup Method Injection
As noted earlier, lookup method injection is an advanced feature that you should use rarely. It is useful in cases where a singleton-scoped bean has a dependency on a prototype-scoped bean. Using Java for this type of configuration provides a natural means for implementing this pattern. The following example shows how to use lookup method injection:
Java
public abstract class CommandManager {
public Object process(Object commandState) {
// grab a new instance of the appropriate Command interface
Command command = createCommand();
// set the state on the (hopefully brand new) Command instance
command.setState(commandState);
return command.execute();
}
// okay... but where is the implementation of this method?
protected abstract Command createCommand();
}
Kotlin
abstract class CommandManager {
fun process(commandState: Any): Any {
// grab a new instance of the appropriate Command interface
val command = createCommand()
// set the state on the (hopefully brand new) Command instance
command.setState(commandState)
return command.execute()
}
// okay... but where is the implementation of this method?
protected abstract fun createCommand(): Command
}
By using Java configuration, you can create a subclass of CommandManager where the abstract createCommand() method is overridden in such a way that it looks up a new (prototype) command object. The following example shows how to do so:
Java
@Bean
@Scope("prototype")
public AsyncCommand asyncCommand() {
AsyncCommand command = new AsyncCommand();
// inject dependencies here as required
return command;
}
@Bean
public CommandManager commandManager() {
// return new anonymous implementation of CommandManager with createCommand()
// overridden to return a new prototype Command object
return new CommandManager() {
protected Command createCommand() {
return asyncCommand();
}
}
}
Kotlin
@Bean
@Scope("prototype")
fun asyncCommand(): AsyncCommand {
val command = AsyncCommand()
// inject dependencies here as required
return command
}
@Bean
fun commandManager(): CommandManager {
// return new anonymous implementation of CommandManager with createCommand()
// overridden to return a new prototype Command object
return object : CommandManager() {
override fun createCommand(): Command {
return asyncCommand()
}
}
}
Further Information About How Java-based Configuration Works Internally
Consider the following example, which shows a @Bean annotated method being called twice:
Java
@Configuration
public class AppConfig {
@Bean
public ClientService clientService1() {
ClientServiceImpl clientService = new ClientServiceImpl();
clientService.setClientDao(clientDao());
return clientService;
}
@Bean
public ClientService clientService2() {
ClientServiceImpl clientService = new ClientServiceImpl();
clientService.setClientDao(clientDao());
return clientService;
}
@Bean
public ClientDao clientDao() {
return new ClientDaoImpl();
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun clientService1(): ClientService {
return ClientServiceImpl().apply {
clientDao = clientDao()
}
}
@Bean
fun clientService2(): ClientService {
return ClientServiceImpl().apply {
clientDao = clientDao()
}
}
@Bean
fun clientDao(): ClientDao {
return ClientDaoImpl()
}
}
clientDao() has been called once in clientService1() and once in clientService2(). Since this method creates a new instance of ClientDaoImpl and returns it, you would normally expect to have two instances (one for each service). That definitely would be problematic: In Spring, instantiated beans have a singleton scope by default. This is where the magic comes in: All @Configuration classes are subclassed at startup-time with CGLIB. In the subclass, the child method checks the container first for any cached (scoped) beans before it calls the parent method and creates a new instance.
The behavior could be different according to the scope of your bean. We are talking about singletons here.
As of Spring 3.2, it is no longer necessary to add CGLIB to your classpath because CGLIB classes have been repackaged under org.springframework.cglib and included directly within the spring-core JAR.
There are a few restrictions due to the fact that CGLIB dynamically adds features at startup-time. In particular, configuration classes must not be final. However, as of 4.3, any constructors are allowed on configuration classes, including the use of @Autowired or a single non-default constructor declaration for default injection.
If you prefer to avoid any CGLIB-imposed limitations, consider declaring your @Bean methods on non-@Configuration classes (for example, on plain @Component classes instead). Cross-method calls between @Bean methods are not then intercepted, so you have to exclusively rely on dependency injection at the constructor or method level there.
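As a sketch of that lite-mode style (all names here are illustrative), dependencies between @Bean methods on a plain @Component class are expressed as method parameters rather than cross-method calls:

```java
import java.time.Clock;

import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

@Component
public class ReportingComponent {

    // Simple collaborator type, defined here so the sketch is self-contained.
    public static class ReportGenerator {
        private final Clock clock;
        public ReportGenerator(Clock clock) { this.clock = clock; }
    }

    @Bean
    public Clock reportingClock() {
        return Clock.systemUTC();
    }

    // Calling reportingClock() directly here would create a second Clock
    // instance that the container does not manage, because lite-mode @Bean
    // methods are not intercepted. Declaring the dependency as a parameter
    // lets the container inject the managed reportingClock bean instead.
    @Bean
    public ReportGenerator reportGenerator(Clock reportingClock) {
        return new ReportGenerator(reportingClock);
    }
}
```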
1.12.5. Composing Java-based Configurations
Spring’s Java-based configuration feature lets you compose annotations, which can reduce the complexity of your configuration.
Using the @Import Annotation
Much as the <import/> element is used within Spring XML files to aid in modularizing configurations, the @Import annotation allows for loading @Bean definitions from another configuration class, as the following example shows:
Java
@Configuration
public class ConfigA {
@Bean
public A a() {
return new A();
}
}
@Configuration
@Import(ConfigA.class)
public class ConfigB {
@Bean
public B b() {
return new B();
}
}
Kotlin
@Configuration
class ConfigA {
@Bean
fun a() = A()
}
@Configuration
@Import(ConfigA::class)
class ConfigB {
@Bean
fun b() = B()
}
Now, rather than needing to specify both ConfigA.class and ConfigB.class when instantiating the context, only ConfigB needs to be supplied explicitly, as the following example shows:
Java
public static void main(String[] args) {
ApplicationContext ctx = new AnnotationConfigApplicationContext(ConfigB.class);
// now both beans A and B will be available...
A a = ctx.getBean(A.class);
B b = ctx.getBean(B.class);
}
Kotlin
import org.springframework.beans.factory.getBean
fun main() {
val ctx = AnnotationConfigApplicationContext(ConfigB::class.java)
// now both beans A and B will be available...
val a = ctx.getBean<A>()
val b = ctx.getBean<B>()
}
This approach simplifies container instantiation, as only one class needs to be dealt with, rather than requiring you to remember a potentially large number of @Configuration classes during construction.
As of Spring Framework 4.2, @Import also supports references to regular component classes, analogous to the AnnotationConfigApplicationContext.register method. This is particularly useful if you want to avoid component scanning, by using a few configuration classes as entry points to explicitly define all your components.
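A minimal sketch of that style (class names are illustrative): a regular component class is imported directly, with no component scanning involved, and a single configuration class serves as the explicit entry point.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.stereotype.Component;

// A regular component class, registered without any component scanning.
@Component
class AuditingComponent {
}

// As of 4.2, @Import accepts plain component classes alongside
// @Configuration classes.
@Configuration
@Import(AuditingComponent.class)
class EntryPointConfig {
}
```

Supplying only EntryPointConfig to AnnotationConfigApplicationContext then makes the AuditingComponent bean available as well.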
Injecting Dependencies on Imported @Bean Definitions
The preceding example works but is simplistic. In most practical scenarios, beans have dependencies on one another across configuration classes. When using XML, this is not an issue, because no compiler is involved, and you can declare ref="someBean" and trust Spring to work it out during container initialization. When using @Configuration classes, the Java compiler places constraints on the configuration model, in that references to other beans must be valid Java syntax.
Fortunately, solving this problem is simple. As we already discussed, a @Bean method can have an arbitrary number of parameters that describe the bean dependencies. Consider the following more real-world scenario with several @Configuration classes, each depending on beans declared in the others:
Java
@Configuration
public class ServiceConfig {
@Bean
public TransferService transferService(AccountRepository accountRepository) {
return new TransferServiceImpl(accountRepository);
}
}
@Configuration
public class RepositoryConfig {
@Bean
public AccountRepository accountRepository(DataSource dataSource) {
return new JdbcAccountRepository(dataSource);
}
}
@Configuration
@Import({ServiceConfig.class, RepositoryConfig.class})
public class SystemTestConfig {
@Bean
public DataSource dataSource() {
// return new DataSource
}
}
public static void main(String[] args) {
ApplicationContext ctx = new AnnotationConfigApplicationContext(SystemTestConfig.class);
// everything wires up across configuration classes...
TransferService transferService = ctx.getBean(TransferService.class);
transferService.transfer(100.00, "A123", "C456");
}
Kotlin
import org.springframework.beans.factory.getBean
@Configuration
class ServiceConfig {
@Bean
fun transferService(accountRepository: AccountRepository): TransferService {
return TransferServiceImpl(accountRepository)
}
}
@Configuration
class RepositoryConfig {
@Bean
fun accountRepository(dataSource: DataSource): AccountRepository {
return JdbcAccountRepository(dataSource)
}
}
@Configuration
@Import(ServiceConfig::class, RepositoryConfig::class)
class SystemTestConfig {
@Bean
fun dataSource(): DataSource {
// return new DataSource
}
}
fun main() {
val ctx = AnnotationConfigApplicationContext(SystemTestConfig::class.java)
// everything wires up across configuration classes...
val transferService = ctx.getBean<TransferService>()
transferService.transfer(100.00, "A123", "C456")
}
There is another way to achieve the same result. Remember that @Configuration classes are ultimately only another bean in the container: This means that they can take advantage of @Autowired and @Value injection and other features the same as any other bean.
Make sure that the dependencies you inject that way are of the simplest kind only. @Configuration classes are processed quite early during the initialization of the context, and forcing a dependency to be injected this way may lead to unexpected early initialization. Whenever possible, resort to parameter-based injection, as in the preceding example.
Also, be particularly careful with BeanPostProcessor and BeanFactoryPostProcessor definitions through @Bean. Those should usually be declared as static @Bean methods, not triggering the instantiation of their containing configuration class. Otherwise, @Autowired and @Value may not work on the configuration class itself, since it is possible to create it as a bean instance earlier than AutowiredAnnotationBeanPostProcessor.
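A common sketch of that pattern (the configuration class name is illustrative) declares a PropertySourcesPlaceholderConfigurer, which is a BeanFactoryPostProcessor, as a static @Bean method:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

@Configuration
public class PlaceholderConfig {

    // Declared static so that the container can create this post-processor
    // very early, without instantiating (and prematurely initializing) the
    // enclosing PlaceholderConfig class itself.
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholder() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
```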
The following example shows how one bean can be autowired to another bean:
Java
@Configuration
public class ServiceConfig {
@Autowired
private AccountRepository accountRepository;
@Bean
public TransferService transferService() {
return new TransferServiceImpl(accountRepository);
}
}
@Configuration
public class RepositoryConfig {
private final DataSource dataSource;
public RepositoryConfig(DataSource dataSource) {
this.dataSource = dataSource;
}
@Bean
public AccountRepository accountRepository() {
return new JdbcAccountRepository(dataSource);
}
}
@Configuration
@Import({ServiceConfig.class, RepositoryConfig.class})
public class SystemTestConfig {
@Bean
public DataSource dataSource() {
// return new DataSource
}
}
public static void main(String[] args) {
ApplicationContext ctx = new AnnotationConfigApplicationContext(SystemTestConfig.class);
// everything wires up across configuration classes...
TransferService transferService = ctx.getBean(TransferService.class);
transferService.transfer(100.00, "A123", "C456");
}
Kotlin
import org.springframework.beans.factory.getBean
@Configuration
class ServiceConfig {
@Autowired
lateinit var accountRepository: AccountRepository
@Bean
fun transferService(): TransferService {
return TransferServiceImpl(accountRepository)
}
}
@Configuration
class RepositoryConfig(private val dataSource: DataSource) {
@Bean
fun accountRepository(): AccountRepository {
return JdbcAccountRepository(dataSource)
}
}
@Configuration
@Import(ServiceConfig::class, RepositoryConfig::class)
class SystemTestConfig {
@Bean
fun dataSource(): DataSource {
// return new DataSource
}
}
fun main() {
val ctx = AnnotationConfigApplicationContext(SystemTestConfig::class.java)
// everything wires up across configuration classes...
val transferService = ctx.getBean<TransferService>()
transferService.transfer(100.00, "A123", "C456")
}
Constructor injection in @Configuration classes is only supported as of Spring Framework 4.3. Note also that there is no need to specify @Autowired if the target bean defines only one constructor.
Fully-qualifying imported beans for ease of navigation
In the preceding scenario, using @Autowired works well and provides the desired modularity, but determining exactly where the autowired bean definitions are declared is still somewhat ambiguous. For example, as a developer looking at ServiceConfig, how do you know exactly where the @Autowired AccountRepository bean is declared? It is not explicit in the code, and this may be just fine. Remember that the Spring Tools for Eclipse provides tooling that can render graphs showing how everything is wired, which may be all you need. Also, your Java IDE can easily find all declarations and uses of the AccountRepository type and quickly show you the location of @Bean methods that return that type.
In cases where this ambiguity is not acceptable and you wish to have direct navigation from within your IDE from one @Configuration class to another, consider autowiring the configuration classes themselves. The following example shows how to do so:
Java
@Configuration
public class ServiceConfig {
@Autowired
private RepositoryConfig repositoryConfig;
@Bean
public TransferService transferService() {
// navigate 'through' the config class to the @Bean method!
return new TransferServiceImpl(repositoryConfig.accountRepository());
}
}
Kotlin
@Configuration
class ServiceConfig {
@Autowired
private lateinit var repositoryConfig: RepositoryConfig
@Bean
fun transferService(): TransferService {
// navigate 'through' the config class to the @Bean method!
return TransferServiceImpl(repositoryConfig.accountRepository())
}
}
In the preceding situation, where AccountRepository is defined is completely explicit. However, ServiceConfig is now tightly coupled to RepositoryConfig. That is the tradeoff. This tight coupling can be somewhat mitigated by using interface-based or abstract class-based @Configuration classes. Consider the following example:
Java
@Configuration
public class ServiceConfig {
@Autowired
private RepositoryConfig repositoryConfig;
@Bean
public TransferService transferService() {
return new TransferServiceImpl(repositoryConfig.accountRepository());
}
}
@Configuration
public interface RepositoryConfig {
@Bean
AccountRepository accountRepository();
}
@Configuration
public class DefaultRepositoryConfig implements RepositoryConfig {
@Bean
public AccountRepository accountRepository() {
return new JdbcAccountRepository(...);
}
}
@Configuration
@Import({ServiceConfig.class, DefaultRepositoryConfig.class}) // import the concrete config!
public class SystemTestConfig {
@Bean
public DataSource dataSource() {
// return DataSource
}
}
public static void main(String[] args) {
ApplicationContext ctx = new AnnotationConfigApplicationContext(SystemTestConfig.class);
TransferService transferService = ctx.getBean(TransferService.class);
transferService.transfer(100.00, "A123", "C456");
}
Kotlin
import org.springframework.beans.factory.getBean
@Configuration
class ServiceConfig {
@Autowired
private lateinit var repositoryConfig: RepositoryConfig
@Bean
fun transferService(): TransferService {
return TransferServiceImpl(repositoryConfig.accountRepository())
}
}
@Configuration
interface RepositoryConfig {
@Bean
fun accountRepository(): AccountRepository
}
@Configuration
class DefaultRepositoryConfig : RepositoryConfig {
@Bean
fun accountRepository(): AccountRepository {
return JdbcAccountRepository(...)
}
}
@Configuration
@Import(ServiceConfig::class, DefaultRepositoryConfig::class) // import the concrete config!
class SystemTestConfig {
@Bean
fun dataSource(): DataSource {
// return DataSource
}
}
fun main() {
val ctx = AnnotationConfigApplicationContext(SystemTestConfig::class.java)
val transferService = ctx.getBean<TransferService>()
transferService.transfer(100.00, "A123", "C456")
}
Now ServiceConfig is loosely coupled with respect to the concrete DefaultRepositoryConfig, and built-in IDE tooling is still useful: You can easily get a type hierarchy of RepositoryConfig implementations. In this way, navigating @Configuration classes and their dependencies becomes no different than the usual process of navigating interface-based code.
If you want to influence the startup creation order of certain beans, consider declaring some of them as @Lazy (for creation on first access instead of on startup) or as @DependsOn certain other beans (making sure that specific other beans are created before the current bean, beyond what the latter’s direct dependencies imply).
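The note above can be sketched as follows (bean names and types are illustrative): cacheWarmup is forced to be created before searchService, while reportingService is deferred until first access.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.context.annotation.Lazy;

@Configuration
public class StartupOrderConfig {

    @Bean
    public Runnable cacheWarmup() {
        return () -> { /* pre-populate caches */ };
    }

    // Created only after the cacheWarmup bean exists, even though there is
    // no direct dependency between the two.
    @Bean
    @DependsOn("cacheWarmup")
    public Object searchService() {
        return new Object();
    }

    // Not created at startup; instantiated on first access instead.
    @Bean
    @Lazy
    public Object reportingService() {
        return new Object();
    }
}
```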
Conditionally Include @Configuration Classes or @Bean Methods
It is often useful to conditionally enable or disable a complete @Configuration class or even individual @Bean methods, based on some arbitrary system state. One common example of this is to use the @Profile annotation to activate beans only when a specific profile has been enabled in the Spring Environment (see Bean Definition Profiles for details).
The @Profile annotation is actually implemented by using a much more flexible annotation called @Conditional. The @Conditional annotation indicates specific org.springframework.context.annotation.Condition implementations that should be consulted before a @Bean is registered.
Implementations of the Condition interface provide a matches(…) method that returns true or false. For example, the following listing shows the actual Condition implementation used for @Profile:
Java
@Override
public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
// Read the @Profile annotation attributes
MultiValueMap<String, Object> attrs = metadata.getAllAnnotationAttributes(Profile.class.getName());
if (attrs != null) {
for (Object value : attrs.get("value")) {
if (context.getEnvironment().acceptsProfiles(((String[]) value))) {
return true;
}
}
return false;
}
return true;
}
Kotlin
override fun matches(context: ConditionContext, metadata: AnnotatedTypeMetadata): Boolean {
// Read the @Profile annotation attributes
val attrs = metadata.getAllAnnotationAttributes(Profile::class.java.name)
if (attrs != null) {
for (value in attrs["value"]!!) {
if (context.environment.acceptsProfiles(Profiles.of(*value as Array<String>))) {
return true
}
}
return false
}
return true
}
See the @Conditional javadoc for more detail.
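A minimal custom condition might look as follows. This is a sketch, not part of the framework: the OnFeatureEnabledCondition class and the "app.feature.enabled" property name are invented for illustration.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.type.AnnotatedTypeMetadata;

// A hypothetical condition that matches only when the "app.feature.enabled"
// environment property is set to "true".
class OnFeatureEnabledCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return "true".equals(context.getEnvironment().getProperty("app.feature.enabled"));
    }
}

@Configuration
public class FeatureConfig {

    // Registered as a bean definition only when the condition matches.
    @Bean
    @Conditional(OnFeatureEnabledCondition.class)
    public Object featureService() {
        return new Object();
    }
}
```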
Combining Java and XML Configuration
Spring’s @Configuration class support does not aim to be a 100% complete replacement for Spring XML. Some facilities, such as Spring XML namespaces, remain an ideal way to configure the container. In cases where XML is convenient or necessary, you have a choice: either instantiate the container in an “XML-centric” way by using, for example, ClassPathXmlApplicationContext, or instantiate it in a “Java-centric” way by using AnnotationConfigApplicationContext and the @ImportResource annotation to import XML as needed.
XML-centric Use of @Configuration Classes
It may be preferable to bootstrap the Spring container from XML and include @Configuration classes in an ad-hoc fashion. For example, in a large existing codebase that uses Spring XML, it is easier to create @Configuration classes on an as-needed basis and include them from the existing XML files. Later in this section, we cover the options for using @Configuration classes in this kind of “XML-centric” situation.
Declaring @Configuration classes as plain Spring <bean/> elements
Remember that @Configuration classes are ultimately bean definitions in the container. In this series of examples, we create a @Configuration class named AppConfig and include it within system-test-config.xml as a <bean/> definition. Because <context:annotation-config/> is switched on, the container recognizes the @Configuration annotation and processes the @Bean methods declared in AppConfig properly.
The following example shows an ordinary configuration class in Java:
Java
@Configuration
public class AppConfig {
@Autowired
private DataSource dataSource;
@Bean
public AccountRepository accountRepository() {
return new JdbcAccountRepository(dataSource);
}
@Bean
public TransferService transferService() {
return new TransferService(accountRepository());
}
}
Kotlin
@Configuration
class AppConfig {
@Autowired
private lateinit var dataSource: DataSource
@Bean
fun accountRepository(): AccountRepository {
return JdbcAccountRepository(dataSource)
}
@Bean
fun transferService() = TransferService(accountRepository())
}
The following example shows part of a sample system-test-config.xml file:
<beans>
<!-- enable processing of annotations such as @Autowired and @Configuration -->
<context:annotation-config/>
<context:property-placeholder location="classpath:/com/acme/jdbc.properties"/>
<bean class="com.acme.AppConfig"/>
<bean class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
</beans>
The following example shows a possible jdbc.properties file:
jdbc.url=jdbc:hsqldb:hsql://localhost/xdb
jdbc.username=sa
jdbc.password=
Java
public static void main(String[] args) {
ApplicationContext ctx = new ClassPathXmlApplicationContext("classpath:/com/acme/system-test-config.xml");
TransferService transferService = ctx.getBean(TransferService.class);
// ...
}
Kotlin
fun main() {
val ctx = ClassPathXmlApplicationContext("classpath:/com/acme/system-test-config.xml")
val transferService = ctx.getBean<TransferService>()
// ...
}
In the system-test-config.xml file, the AppConfig <bean/> does not declare an id attribute. While it would be acceptable to do so, it is unnecessary, given that no other bean ever refers to it, and it is unlikely to be explicitly fetched from the container by name. Similarly, the DataSource bean is only ever autowired by type, so an explicit bean id is not strictly required.
Using <context:component-scan/> to pick up @Configuration classes
Because @Configuration is meta-annotated with @Component, @Configuration-annotated classes are automatically candidates for component scanning. Using the same scenario as described in the previous example, we can redefine system-test-config.xml to take advantage of component-scanning. Note that, in this case, we need not explicitly declare <context:annotation-config/>, because <context:component-scan/> enables the same functionality.
The following example shows the modified system-test-config.xml file:
<beans>
<!-- picks up and registers AppConfig as a bean definition -->
<context:component-scan base-package="com.acme"/>
<context:property-placeholder location="classpath:/com/acme/jdbc.properties"/>
<bean class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
</beans>
@Configuration Class-centric Use of XML with @ImportResource
In applications where @Configuration classes are the primary mechanism for configuring the container, it is still likely necessary to use at least some XML. In these scenarios, you can use @ImportResource and define only as much XML as you need. Doing so achieves a “Java-centric” approach to configuring the container and keeps XML to a bare minimum. The following example (which includes a configuration class, an XML file that defines a bean, a properties file, and the main class) shows how to use the @ImportResource annotation to achieve “Java-centric” configuration that uses XML as needed:
Java
@Configuration
@ImportResource("classpath:/com/acme/properties-config.xml")
public class AppConfig {
@Value("${jdbc.url}")
private String url;
@Value("${jdbc.username}")
private String username;
@Value("${jdbc.password}")
private String password;
@Bean
public DataSource dataSource() {
return new DriverManagerDataSource(url, username, password);
}
}
Kotlin
@Configuration
@ImportResource("classpath:/com/acme/properties-config.xml")
class AppConfig {
@Value("\${jdbc.url}")
private lateinit var url: String
@Value("\${jdbc.username}")
private lateinit var username: String
@Value("\${jdbc.password}")
private lateinit var password: String
@Bean
fun dataSource(): DataSource {
return DriverManagerDataSource(url, username, password)
}
}
properties-config.xml
<beans>
<context:property-placeholder location="classpath:/com/acme/jdbc.properties"/>
</beans>
jdbc.properties
jdbc.url=jdbc:hsqldb:hsql://localhost/xdb
jdbc.username=sa
jdbc.password=
Java
public static void main(String[] args) {
ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
TransferService transferService = ctx.getBean(TransferService.class);
// ...
}
Kotlin
import org.springframework.beans.factory.getBean
fun main() {
val ctx = AnnotationConfigApplicationContext(AppConfig::class.java)
val transferService = ctx.getBean<TransferService>()
// ...
}
1.13. Environment Abstraction
The Environment interface is an abstraction integrated in the container that models two key aspects of the application environment: profiles and properties.
A profile is a named, logical group of bean definitions to be registered with the container only if the given profile is active. Beans may be assigned to a profile whether defined in XML or with annotations. The role of the Environment object with relation to profiles is in determining which profiles (if any) are currently active, and which profiles (if any) should be active by default.
Properties play an important role in almost all applications and may originate from a variety of sources: properties files, JVM system properties, system environment variables, JNDI, servlet context parameters, ad-hoc Properties objects, Map objects, and so on. The role of the Environment object with relation to properties is to provide the user with a convenient service interface for configuring property sources and resolving properties from them.
1.13.1. Bean Definition Profiles
Bean definition profiles provide a mechanism in the core container that allows for registration of different beans in different environments. The word, “environment,” can mean different things to different users, and this feature can help with many use cases, including:
• Working against an in-memory datasource in development versus looking up that same datasource from JNDI when in QA or production.
• Registering monitoring infrastructure only when deploying an application into a performance environment.
• Registering customized implementations of beans for customer A versus customer B deployments.
Consider the first use case in a practical application that requires a DataSource. In a test environment, the configuration might resemble the following:
Java
@Bean
public DataSource dataSource() {
return new EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL)
.addScript("my-schema.sql")
.addScript("my-test-data.sql")
.build();
}
Kotlin
@Bean
fun dataSource(): DataSource {
return EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL)
.addScript("my-schema.sql")
.addScript("my-test-data.sql")
.build()
}
Now consider how this application can be deployed into a QA or production environment, assuming that the datasource for the application is registered with the production application server’s JNDI directory. Our dataSource bean now looks like the following listing:
Java
@Bean(destroyMethod="")
public DataSource dataSource() throws Exception {
Context ctx = new InitialContext();
return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource");
}
Kotlin
@Bean(destroyMethod = "")
fun dataSource(): DataSource {
val ctx = InitialContext()
return ctx.lookup("java:comp/env/jdbc/datasource") as DataSource
}
The problem is how to switch between using these two variations based on the current environment. Over time, Spring users have devised a number of ways to get this done, usually relying on a combination of system environment variables and XML <import/> statements containing ${placeholder} tokens that resolve to the correct configuration file path depending on the value of an environment variable. Bean definition profiles are the core container feature that provides a solution to this problem.
If we generalize the use case shown in the preceding example of environment-specific bean definitions, we end up with the need to register certain bean definitions in certain contexts but not in others. You could say that you want to register a certain profile of bean definitions in situation A and a different profile in situation B. We start by updating our configuration to reflect this need.
Using @Profile
The @Profile annotation lets you indicate that a component is eligible for registration when one or more specified profiles are active. Using our preceding example, we can rewrite the dataSource configuration as follows:
Java
@Configuration
@Profile("development")
public class StandaloneDataConfig {
@Bean
public DataSource dataSource() {
return new EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL)
.addScript("classpath:com/bank/config/sql/schema.sql")
.addScript("classpath:com/bank/config/sql/test-data.sql")
.build();
}
}
Kotlin
@Configuration
@Profile("development")
class StandaloneDataConfig {
@Bean
fun dataSource(): DataSource {
return EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL)
.addScript("classpath:com/bank/config/sql/schema.sql")
.addScript("classpath:com/bank/config/sql/test-data.sql")
.build()
}
}
Java
@Configuration
@Profile("production")
public class JndiDataConfig {
@Bean(destroyMethod="")
public DataSource dataSource() throws Exception {
Context ctx = new InitialContext();
return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource");
}
}
Kotlin
@Configuration
@Profile("production")
class JndiDataConfig {
@Bean(destroyMethod = "")
fun dataSource(): DataSource {
val ctx = InitialContext()
return ctx.lookup("java:comp/env/jdbc/datasource") as DataSource
}
}
As mentioned earlier, with @Bean methods, you typically choose to use programmatic JNDI lookups, by using either Spring’s JndiTemplate/JndiLocatorDelegate helpers or the straight JNDI InitialContext usage shown earlier but not the JndiObjectFactoryBean variant, which would force you to declare the return type as the FactoryBean type.
The profile string may contain a simple profile name (for example, production) or a profile expression. A profile expression allows for more complicated profile logic to be expressed (for example, production & us-east). The following operators are supported in profile expressions:
• !: A logical “not” of the profile
• &: A logical “and” of the profiles
• |: A logical “or” of the profiles
You cannot mix the & and | operators without using parentheses. For example, production & us-east | eu-central is not a valid expression. It must be expressed as production & (us-east | eu-central).
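The operator rules above can be modeled in plain Java with Predicate composition. This is only a conceptual illustration of how such an expression evaluates against a set of active profiles, not Spring's actual expression parser:

```java
import java.util.Set;
import java.util.function.Predicate;

public class ProfileExpressionDemo {

	// Each profile name becomes a predicate over the set of active profiles.
	// The ! operator maps to negate(), & to and(), and | to or().
	static Predicate<Set<String>> profile(String name) {
		return active -> active.contains(name);
	}

	public static void main(String[] args) {
		// Models: production & (us-east | eu-central)
		Predicate<Set<String>> expr =
				profile("production").and(profile("us-east").or(profile("eu-central")));

		System.out.println(expr.test(Set.of("production", "us-east")));   // true
		System.out.println(expr.test(Set.of("production")));              // false
		System.out.println(expr.test(Set.of("us-east")));                 // false
	}
}
```

The need for parentheses is visible here as well: the nesting of and() and or() calls fixes the evaluation order explicitly.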
You can use @Profile as a meta-annotation for the purpose of creating a custom composed annotation. The following example defines a custom @Production annotation that you can use as a drop-in replacement for @Profile("production"):
Java
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Profile("production")
public @interface Production {
}
Kotlin
@Target(AnnotationTarget.CLASS)
@Retention(AnnotationRetention.RUNTIME)
@Profile("production")
annotation class Production
If a @Configuration class is marked with @Profile, all of the @Bean methods and @Import annotations associated with that class are bypassed unless one or more of the specified profiles are active. If a @Component or @Configuration class is marked with @Profile({"p1", "p2"}), that class is not registered or processed unless profiles 'p1' or 'p2' have been activated. If a given profile is prefixed with the NOT operator (!), the annotated element is registered only if the profile is not active. For example, given @Profile({"p1", "!p2"}), registration will occur if profile 'p1' is active or if profile 'p2' is not active.
@Profile can also be declared at the method level to include only one particular bean of a configuration class (for example, for alternative variants of a particular bean), as the following example shows:
Java
@Configuration
public class AppConfig {
@Bean("dataSource")
@Profile("development") (1)
public DataSource standaloneDataSource() {
return new EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL)
.addScript("classpath:com/bank/config/sql/schema.sql")
.addScript("classpath:com/bank/config/sql/test-data.sql")
.build();
}
@Bean("dataSource")
@Profile("production") (2)
public DataSource jndiDataSource() throws Exception {
Context ctx = new InitialContext();
return (DataSource) ctx.lookup("java:comp/env/jdbc/datasource");
}
}
1 The standaloneDataSource method is available only in the development profile.
2 The jndiDataSource method is available only in the production profile.
Kotlin
@Configuration
class AppConfig {
@Bean("dataSource")
@Profile("development") (1)
fun standaloneDataSource(): DataSource {
return EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL)
.addScript("classpath:com/bank/config/sql/schema.sql")
.addScript("classpath:com/bank/config/sql/test-data.sql")
.build()
}
@Bean("dataSource")
@Profile("production") (2)
fun jndiDataSource() =
InitialContext().lookup("java:comp/env/jdbc/datasource") as DataSource
}
1 The standaloneDataSource method is available only in the development profile.
2 The jndiDataSource method is available only in the production profile.
With @Profile on @Bean methods, a special scenario may apply: In the case of overloaded @Bean methods of the same Java method name (analogous to constructor overloading), a @Profile condition needs to be consistently declared on all overloaded methods. If the conditions are inconsistent, only the condition on the first declaration among the overloaded methods matters. Therefore, @Profile cannot be used to select an overloaded method with a particular argument signature over another. Resolution between all factory methods for the same bean follows Spring’s constructor resolution algorithm at creation time.
If you want to define alternative beans with different profile conditions, use distinct Java method names that point to the same bean name by using the @Bean name attribute, as shown in the preceding example. If the argument signatures are all the same (for example, all of the variants have no-arg factory methods), this is the only way to represent such an arrangement in a valid Java class in the first place (since there can only be one method of a particular name and argument signature).
XML Bean Definition Profiles
The XML counterpart is the profile attribute of the <beans> element. Our preceding sample configuration can be rewritten in two XML files, as follows:
<beans profile="development"
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xsi:schemaLocation="...">
<jdbc:embedded-database id="dataSource">
<jdbc:script location="classpath:com/bank/config/sql/schema.sql"/>
<jdbc:script location="classpath:com/bank/config/sql/test-data.sql"/>
</jdbc:embedded-database>
</beans>
<beans profile="production"
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jee="http://www.springframework.org/schema/jee"
xsi:schemaLocation="...">
<jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/datasource"/>
</beans>
It is also possible to avoid that split and nest <beans/> elements within the same file, as the following example shows:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:jee="http://www.springframework.org/schema/jee"
xsi:schemaLocation="...">
<!-- other bean definitions -->
<beans profile="development">
<jdbc:embedded-database id="dataSource">
<jdbc:script location="classpath:com/bank/config/sql/schema.sql"/>
<jdbc:script location="classpath:com/bank/config/sql/test-data.sql"/>
</jdbc:embedded-database>
</beans>
<beans profile="production">
<jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/datasource"/>
</beans>
</beans>
The spring-beans.xsd has been constrained to allow such elements only as the last ones in the file. This should help provide flexibility without incurring clutter in the XML files.
The XML counterpart does not support the profile expressions described earlier. It is possible, however, to negate a profile by using the ! operator. It is also possible to apply a logical “and” by nesting the profiles, as the following example shows:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:jee="http://www.springframework.org/schema/jee"
xsi:schemaLocation="...">
<!-- other bean definitions -->
<beans profile="production">
<beans profile="us-east">
<jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/datasource"/>
</beans>
</beans>
</beans>
In the preceding example, the dataSource bean is exposed if both the production and us-east profiles are active.
Activating a Profile
Now that we have updated our configuration, we still need to instruct Spring which profile is active. If we started our sample application right now, we would see a NoSuchBeanDefinitionException thrown, because the container could not find the Spring bean named dataSource.
Activating a profile can be done in several ways, but the most straightforward is to do it programmatically against the Environment API, which is available through an ApplicationContext. The following example shows how to do so:
Java
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
ctx.getEnvironment().setActiveProfiles("development");
ctx.register(SomeConfig.class, StandaloneDataConfig.class, JndiDataConfig.class);
ctx.refresh();
Kotlin
val ctx = AnnotationConfigApplicationContext().apply {
environment.setActiveProfiles("development")
register(SomeConfig::class.java, StandaloneDataConfig::class.java, JndiDataConfig::class.java)
refresh()
}
In addition, you can also declaratively activate profiles through the spring.profiles.active property, which may be specified through system environment variables, JVM system properties, servlet context parameters in web.xml, or even as an entry in JNDI (see PropertySource Abstraction). In integration tests, active profiles can be declared by using the @ActiveProfiles annotation in the spring-test module (see context configuration with environment profiles).
Note that profiles are not an “either-or” proposition. You can activate multiple profiles at once. Programmatically, you can provide multiple profile names to the setActiveProfiles() method, which accepts String… varargs. The following example activates multiple profiles:
Java
ctx.getEnvironment().setActiveProfiles("profile1", "profile2");
Kotlin
ctx.getEnvironment().setActiveProfiles("profile1", "profile2")
Declaratively, spring.profiles.active may accept a comma-separated list of profile names, as the following example shows:
-Dspring.profiles.active="profile1,profile2"
Default Profile
The default profile represents the profile that is enabled by default. Consider the following example:
Java
@Configuration
@Profile("default")
public class DefaultDataConfig {
@Bean
public DataSource dataSource() {
return new EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL)
.addScript("classpath:com/bank/config/sql/schema.sql")
.build();
}
}
Kotlin
@Configuration
@Profile("default")
class DefaultDataConfig {
@Bean
fun dataSource(): DataSource {
return EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL)
.addScript("classpath:com/bank/config/sql/schema.sql")
.build()
}
}
If no profile is active, the dataSource is created. You can see this as a way to provide a default definition for one or more beans. If any profile is enabled, the default profile does not apply.
You can change the name of the default profile by using setDefaultProfiles() on the Environment or, declaratively, by using the spring.profiles.default property.
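For example, to make a profile named standalone serve as the fallback (the profile name here is an illustrative assumption), the programmatic form is a one-line sketch:

```java
// On a ConfigurableApplicationContext, before refresh():
ctx.getEnvironment().setDefaultProfiles("standalone");
```

The declarative equivalent is -Dspring.profiles.default=standalone.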
1.13.2. PropertySource Abstraction
Spring’s Environment abstraction provides search operations over a configurable hierarchy of property sources. Consider the following listing:
Java
ApplicationContext ctx = new GenericApplicationContext();
Environment env = ctx.getEnvironment();
boolean containsMyProperty = env.containsProperty("my-property");
System.out.println("Does my environment contain the 'my-property' property? " + containsMyProperty);
Kotlin
val ctx = GenericApplicationContext()
val env = ctx.environment
val containsMyProperty = env.containsProperty("my-property")
println("Does my environment contain the 'my-property' property? $containsMyProperty")
In the preceding snippet, we see a high-level way of asking Spring whether the my-property property is defined for the current environment. To answer this question, the Environment object performs a search over a set of PropertySource objects. A PropertySource is a simple abstraction over any source of key-value pairs, and Spring’s StandardEnvironment is configured with two PropertySource objects — one representing the set of JVM system properties (System.getProperties()) and one representing the set of system environment variables (System.getenv()).
These default property sources are present for StandardEnvironment, for use in standalone applications. StandardServletEnvironment is populated with additional default property sources including servlet config and servlet context parameters. It can optionally enable a JndiPropertySource. See the javadoc for details.
Concretely, when you use the StandardEnvironment, the call to env.containsProperty("my-property") returns true if a my-property system property or my-property environment variable is present at runtime.
The search performed is hierarchical. By default, system properties have precedence over environment variables. So, if the my-property property happens to be set in both places during a call to env.getProperty("my-property"), the system property value “wins” and is returned. Note that property values are not merged but rather completely overridden by a preceding entry.
For a common StandardServletEnvironment, the full hierarchy is as follows, with the highest-precedence entries at the top:
1. ServletConfig parameters (if applicable — for example, in case of a DispatcherServlet context)
2. ServletContext parameters (web.xml context-param entries)
3. JNDI environment variables (java:comp/env/ entries)
4. JVM system properties (-D command-line arguments)
5. JVM system environment (operating system environment variables)
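The first-match-wins behavior of this hierarchy can be sketched in plain Java. This is a conceptual model of the search only, not Spring's implementation; the maps stand in for the real PropertySource objects:

```java
import java.util.List;
import java.util.Map;

public class PropertySearchDemo {

	// Sources are consulted in precedence order (highest first);
	// the first source that contains the key wins, and lower-precedence
	// values for the same key are never consulted.
	static String getProperty(List<Map<String, String>> sources, String key) {
		for (Map<String, String> source : sources) {
			if (source.containsKey(key)) {
				return source.get(key);
			}
		}
		return null;
	}

	public static void main(String[] args) {
		Map<String, String> systemProps = Map.of("my-property", "from-system-props");
		Map<String, String> envVars = Map.of("my-property", "from-env", "other", "x");

		// System properties precede environment variables in StandardEnvironment.
		System.out.println(getProperty(List.of(systemProps, envVars), "my-property")); // from-system-props
		System.out.println(getProperty(List.of(systemProps, envVars), "other"));       // x
	}
}
```

Note that, as the text above states, values are overridden wholesale by the higher-precedence entry rather than merged.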
Most importantly, the entire mechanism is configurable. Perhaps you have a custom source of properties that you want to integrate into this search. To do so, implement and instantiate your own PropertySource and add it to the set of PropertySources for the current Environment. The following example shows how to do so:
Java
ConfigurableApplicationContext ctx = new GenericApplicationContext();
MutablePropertySources sources = ctx.getEnvironment().getPropertySources();
sources.addFirst(new MyPropertySource());
Kotlin
val ctx = GenericApplicationContext()
val sources = ctx.environment.propertySources
sources.addFirst(MyPropertySource())
In the preceding code, MyPropertySource has been added with highest precedence in the search. If it contains a my-property property, the property is detected and returned, in favor of any my-property property in any other PropertySource. The MutablePropertySources API exposes a number of methods that allow for precise manipulation of the set of property sources.
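The MyPropertySource class used above is hypothetical. A minimal sketch of such a source, assuming it is backed by an in-memory map (both the source name and its contents here are illustrative), might extend Spring's PropertySource base class like this:

```java
import java.util.Map;

import org.springframework.core.env.PropertySource;

// Hypothetical custom source backed by an in-memory map.
public class MyPropertySource extends PropertySource<Map<String, Object>> {

	public MyPropertySource() {
		super("myPropertySource", Map.of("my-property", "my-value"));
	}

	@Override
	public Object getProperty(String name) {
		// Returning null for an absent key lets the search fall through
		// to lower-precedence sources.
		return getSource().get(name);
	}
}
```

Since it was registered with addFirst(), its my-property value would shadow a system property or environment variable of the same name.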
1.13.3. Using @PropertySource
The @PropertySource annotation provides a convenient and declarative mechanism for adding a PropertySource to Spring’s Environment.
Given a file called app.properties that contains the key-value pair testbean.name=myTestBean, the following @Configuration class uses @PropertySource in such a way that a call to testBean.getName() returns myTestBean:
Java
@Configuration
@PropertySource("classpath:/com/myco/app.properties")
public class AppConfig {
@Autowired
Environment env;
@Bean
public TestBean testBean() {
TestBean testBean = new TestBean();
testBean.setName(env.getProperty("testbean.name"));
return testBean;
}
}
Kotlin
@Configuration
@PropertySource("classpath:/com/myco/app.properties")
class AppConfig {
@Autowired
private lateinit var env: Environment
@Bean
fun testBean() = TestBean().apply {
name = env.getProperty("testbean.name")!!
}
}
Any ${…} placeholders present in a @PropertySource resource location are resolved against the set of property sources already registered against the environment, as the following example shows:
Java
@Configuration
@PropertySource("classpath:/com/${my.placeholder:default/path}/app.properties")
public class AppConfig {
@Autowired
Environment env;
@Bean
public TestBean testBean() {
TestBean testBean = new TestBean();
testBean.setName(env.getProperty("testbean.name"));
return testBean;
}
}
Kotlin
@Configuration
@PropertySource("classpath:/com/\${my.placeholder:default/path}/app.properties")
class AppConfig {
@Autowired
private lateinit var env: Environment
@Bean
fun testBean() = TestBean().apply {
name = env.getProperty("testbean.name")!!
}
}
Assuming that my.placeholder is present in one of the property sources already registered (for example, system properties or environment variables), the placeholder is resolved to the corresponding value. If not, then default/path is used as a default. If no default is specified and a property cannot be resolved, an IllegalArgumentException is thrown.
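The ${key:default} resolution described above can be sketched in plain Java. This is a conceptual model only, not Spring's placeholder resolver, and the property names mirror the example above:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {

	// Matches ${key} or ${key:default}; the default may contain slashes.
	private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^:}]+)(?::([^}]*))?\\}");

	static String resolve(String template, Map<String, String> props) {
		Matcher m = PLACEHOLDER.matcher(template);
		StringBuilder out = new StringBuilder();
		while (m.find()) {
			String value = props.get(m.group(1));
			if (value == null) {
				if (m.group(2) == null) {
					// No default and no value: mirrors the IllegalArgumentException above.
					throw new IllegalArgumentException(
							"Could not resolve placeholder '" + m.group(1) + "'");
				}
				value = m.group(2);
			}
			m.appendReplacement(out, Matcher.quoteReplacement(value));
		}
		m.appendTail(out);
		return out.toString();
	}

	public static void main(String[] args) {
		Map<String, String> props = Map.of("my.placeholder", "myco");
		// Resolved from the registered property: classpath:/com/myco/app.properties
		System.out.println(resolve("classpath:/com/${my.placeholder:default/path}/app.properties", props));
		// Falls back to the default: classpath:/com/default/path/app.properties
		System.out.println(resolve("classpath:/com/${missing:default/path}/app.properties", props));
	}
}
```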
The @PropertySource annotation is repeatable, according to Java 8 conventions. However, all such @PropertySource annotations need to be declared at the same level, either directly on the configuration class or as meta-annotations within the same custom annotation. Mixing direct annotations and meta-annotations is not recommended, since direct annotations effectively override meta-annotations.
1.13.4. Placeholder Resolution in Statements
Historically, the value of placeholders in elements could be resolved only against JVM system properties or environment variables. This is no longer the case. Because the Environment abstraction is integrated throughout the container, it is easy to route resolution of placeholders through it. This means that you may configure the resolution process in any way you like. You can change the precedence of searching through system properties and environment variables or remove them entirely. You can also add your own property sources to the mix, as appropriate.
Concretely, the following statement works regardless of where the customer property is defined, as long as it is available in the Environment:
<beans>
<import resource="com/bank/service/${customer}-config.xml"/>
</beans>
1.14. Registering a LoadTimeWeaver
The LoadTimeWeaver is used by Spring to dynamically transform classes as they are loaded into the Java virtual machine (JVM).
To enable load-time weaving, you can add the @EnableLoadTimeWeaving to one of your @Configuration classes, as the following example shows:
Java
@Configuration
@EnableLoadTimeWeaving
public class AppConfig {
}
Kotlin
@Configuration
@EnableLoadTimeWeaving
class AppConfig
Alternatively, for XML configuration, you can use the context:load-time-weaver element:
<beans>
<context:load-time-weaver/>
</beans>
Once configured for the ApplicationContext, any bean within that ApplicationContext may implement LoadTimeWeaverAware, thereby receiving a reference to the load-time weaver instance. This is particularly useful in combination with Spring’s JPA support where load-time weaving may be necessary for JPA class transformation. Consult the LocalContainerEntityManagerFactoryBean javadoc for more detail. For more on AspectJ load-time weaving, see Load-time Weaving with AspectJ in the Spring Framework.
1.15. Additional Capabilities of the ApplicationContext
As discussed in the chapter introduction, the org.springframework.beans.factory package provides basic functionality for managing and manipulating beans, including in a programmatic way. The org.springframework.context package adds the ApplicationContext interface, which extends the BeanFactory interface, in addition to extending other interfaces to provide additional functionality in a more application framework-oriented style. Many people use the ApplicationContext in a completely declarative fashion, not even creating it programmatically, but instead relying on support classes such as ContextLoader to automatically instantiate an ApplicationContext as part of the normal startup process of a Java EE web application.
To enhance BeanFactory functionality in a more framework-oriented style, the context package also provides the following functionality:
• Access to messages in i18n-style, through the MessageSource interface.
• Access to resources, such as URLs and files, through the ResourceLoader interface.
• Event publication, namely to beans that implement the ApplicationListener interface, through the use of the ApplicationEventPublisher interface.
• Loading of multiple (hierarchical) contexts, letting each be focused on one particular layer, such as the web layer of an application, through the HierarchicalBeanFactory interface.
1.15.1. Internationalization using MessageSource
The ApplicationContext interface extends an interface called MessageSource and, therefore, provides internationalization (“i18n”) functionality. Spring also provides the HierarchicalMessageSource interface, which can resolve messages hierarchically. Together, these interfaces provide the foundation upon which Spring effects message resolution. The methods defined on these interfaces include:
• String getMessage(String code, Object[] args, String default, Locale loc): The basic method used to retrieve a message from the MessageSource. When no message is found for the specified locale, the default message is used. Any arguments passed in become replacement values, using the MessageFormat functionality provided by the standard library.
• String getMessage(String code, Object[] args, Locale loc): Essentially the same as the previous method but with one difference: No default message can be specified. If the message cannot be found, a NoSuchMessageException is thrown.
• String getMessage(MessageSourceResolvable resolvable, Locale locale): All properties used in the preceding methods are also wrapped in a class named MessageSourceResolvable, which you can use with this method.
When an ApplicationContext is loaded, it automatically searches for a MessageSource bean defined in the context. The bean must have the name messageSource. If such a bean is found, all calls to the preceding methods are delegated to the message source. If no message source is found, the ApplicationContext attempts to find a parent containing a bean with the same name. If it does, it uses that bean as the MessageSource. If the ApplicationContext cannot find any source for messages, an empty DelegatingMessageSource is instantiated in order to be able to accept calls to the methods defined above.
Spring provides two MessageSource implementations, ResourceBundleMessageSource and StaticMessageSource. Both implement HierarchicalMessageSource in order to do nested messaging. The StaticMessageSource is rarely used but provides programmatic ways to add messages to the source. The following example shows ResourceBundleMessageSource:
<beans>
<bean id="messageSource"
class="org.springframework.context.support.ResourceBundleMessageSource">
<property name="basenames">
<list>
<value>format</value>
<value>exceptions</value>
<value>windows</value>
</list>
</property>
</bean>
</beans>
The example assumes that you have three resource bundles called format, exceptions and windows defined in your classpath. Any request to resolve a message is handled in the JDK-standard way of resolving messages through ResourceBundle objects. For the purposes of the example, assume the contents of two of the above resource bundle files are as follows:
# in format.properties
message=Alligators rock!
# in exceptions.properties
argument.required=The {0} argument is required.
The next example shows a program to execute the MessageSource functionality. Remember that all ApplicationContext implementations are also MessageSource implementations and so can be cast to the MessageSource interface.
Java
public static void main(String[] args) {
MessageSource resources = new ClassPathXmlApplicationContext("beans.xml");
String message = resources.getMessage("message", null, "Default", Locale.ENGLISH);
System.out.println(message);
}
Kotlin
fun main() {
val resources = ClassPathXmlApplicationContext("beans.xml")
val message = resources.getMessage("message", null, "Default", Locale.ENGLISH)
println(message)
}
The resulting output from the above program is as follows:
Alligators rock!
To summarize, the MessageSource is defined in a file called beans.xml, which exists at the root of your classpath. The messageSource bean definition refers to a number of resource bundles through its basenames property. The three files that are passed in the list to the basenames property exist as files at the root of your classpath and are called format.properties, exceptions.properties, and windows.properties, respectively.
The next example shows arguments passed to the message lookup. These arguments are converted into String objects and inserted into placeholders in the lookup message.
<beans>
<!-- this MessageSource is being used in a web application -->
<bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
<property name="basename" value="exceptions"/>
</bean>
<!-- lets inject the above MessageSource into this POJO -->
<bean id="example" class="com.something.Example">
<property name="messages" ref="messageSource"/>
</bean>
</beans>
Java
public class Example {
private MessageSource messages;
public void setMessages(MessageSource messages) {
this.messages = messages;
}
public void execute() {
String message = this.messages.getMessage("argument.required",
new Object [] {"userDao"}, "Required", Locale.ENGLISH);
System.out.println(message);
}
}
Kotlin
class Example {
lateinit var messages: MessageSource
fun execute() {
val message = messages.getMessage("argument.required",
arrayOf("userDao"), "Required", Locale.ENGLISH)
println(message)
}
}
The resulting output from the invocation of the execute() method is as follows:
The userDao argument is required.
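The placeholder substitution shown above follows the rules of java.text.MessageFormat, which ResourceBundleMessageSource applies once arguments are supplied. A minimal, JDK-only sketch of that formatting step (the pattern string is copied from exceptions.properties):

```java
import java.text.MessageFormat;

public class MessageFormatDemo {

    // Applies the same pattern found in exceptions.properties.
    static String render(Object... args) {
        return MessageFormat.format("The {0} argument is required.", args);
    }

    public static void main(String[] args) {
        System.out.println(render("userDao")); // The userDao argument is required.
    }
}
```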
With regard to internationalization (“i18n”), Spring’s various MessageSource implementations follow the same locale resolution and fallback rules as the standard JDK ResourceBundle. In short, and continuing with the example messageSource defined previously, if you want to resolve messages against the British (en-GB) locale, you would create files called format_en_GB.properties, exceptions_en_GB.properties, and windows_en_GB.properties, respectively.
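These fallback rules can be observed with the JDK alone. The following self-contained sketch writes a base bundle and an en_GB variant to a temporary directory and resolves the same key for two locales (Locale.setDefault(Locale.ROOT) is called only to make the default-locale fallback step deterministic):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;
import java.util.ResourceBundle;

public class BundleFallbackDemo {

    // Resolves argument.required for the given locale against bundles
    // created on the fly, following the standard JDK fallback rules.
    static String lookup(Locale locale) {
        try {
            Locale.setDefault(Locale.ROOT); // deterministic default-locale fallback
            Path dir = Files.createTempDirectory("bundles");
            Files.writeString(dir.resolve("exceptions.properties"),
                    "argument.required=The {0} argument is required.\n");
            Files.writeString(dir.resolve("exceptions_en_GB.properties"),
                    "argument.required=Ebagum lad, the ''{0}'' argument is required, I say, required.\n");
            try (URLClassLoader cl = new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
                return ResourceBundle.getBundle("exceptions", locale, cl)
                        .getString("argument.required");
            }
        }
        catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }

    public static void main(String[] args) {
        System.out.println(lookup(Locale.UK));     // matches exceptions_en_GB.properties
        System.out.println(lookup(Locale.FRANCE)); // falls back to exceptions.properties
    }
}
```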
Typically, locale resolution is managed by the surrounding environment of the application. In the following example, the locale against which (British) messages are resolved is specified manually:
# in exceptions_en_GB.properties
argument.required=Ebagum lad, the ''{0}'' argument is required, I say, required.
Java
public static void main(final String[] args) {
MessageSource resources = new ClassPathXmlApplicationContext("beans.xml");
String message = resources.getMessage("argument.required",
new Object [] {"userDao"}, "Required", Locale.UK);
System.out.println(message);
}
Kotlin
fun main() {
val resources = ClassPathXmlApplicationContext("beans.xml")
val message = resources.getMessage("argument.required",
arrayOf("userDao"), "Required", Locale.UK)
println(message)
}
The resulting output from the running of the above program is as follows:
Ebagum lad, the 'userDao' argument is required, I say, required.
You can also use the MessageSourceAware interface to acquire a reference to any MessageSource that has been defined. Any bean that is defined in an ApplicationContext that implements the MessageSourceAware interface is injected with the application context’s MessageSource when the bean is created and configured.
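A bean can obtain the context's MessageSource through this callback rather than through explicit property injection. The following sketch shows the shape of such a bean (the AuditLogger class and the audit.rejected key are hypothetical):

```java
import java.util.Locale;

import org.springframework.context.MessageSource;
import org.springframework.context.MessageSourceAware;

// Hypothetical bean: receives the ApplicationContext's MessageSource
// automatically when it is created and configured.
public class AuditLogger implements MessageSourceAware {

    private MessageSource messageSource;

    @Override
    public void setMessageSource(MessageSource messageSource) {
        this.messageSource = messageSource;
    }

    public void logRejected(String user) {
        // "audit.rejected" is an assumed key in one of the configured bundles
        System.out.println(this.messageSource.getMessage(
                "audit.rejected", new Object[] {user}, "Rejected", Locale.ENGLISH));
    }
}
```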
As an alternative to ResourceBundleMessageSource, Spring provides a ReloadableResourceBundleMessageSource class. This variant supports the same bundle file format but is more flexible than the standard JDK based ResourceBundleMessageSource implementation. In particular, it allows for reading files from any Spring resource location (not only from the classpath) and supports hot reloading of bundle property files (while efficiently caching them in between). See the ReloadableResourceBundleMessageSource javadoc for details.
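A minimal definition might look as follows, assuming the bundle files live under WEB-INF (the file location and cache interval are illustrative):

```xml
<bean id="messageSource"
        class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
    <!-- any Spring resource location works, not only the classpath -->
    <property name="basename" value="/WEB-INF/messages/exceptions"/>
    <!-- re-check the underlying files at most once per second -->
    <property name="cacheSeconds" value="1"/>
</bean>
```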
1.15.2. Standard and Custom Events
Event handling in the ApplicationContext is provided through the ApplicationEvent class and the ApplicationListener interface. If a bean that implements the ApplicationListener interface is deployed into the context, every time an ApplicationEvent gets published to the ApplicationContext, that bean is notified. Essentially, this is the standard Observer design pattern.
As of Spring 4.2, the event infrastructure has been significantly improved and offers an annotation-based model as well as the ability to publish any arbitrary event (that is, an object that does not necessarily extend from ApplicationEvent). When such an object is published, we wrap it in an event for you.
The following table describes the standard events that Spring provides:
Table 7. Built-in Events
Event Explanation
ContextRefreshedEvent
Published when the ApplicationContext is initialized or refreshed (for example, by using the refresh() method on the ConfigurableApplicationContext interface). Here, “initialized” means that all beans are loaded, post-processor beans are detected and activated, singletons are pre-instantiated, and the ApplicationContext object is ready for use. As long as the context has not been closed, a refresh can be triggered multiple times, provided that the chosen ApplicationContext actually supports such “hot” refreshes. For example, XmlWebApplicationContext supports hot refreshes, but GenericApplicationContext does not.
ContextStartedEvent
Published when the ApplicationContext is started by using the start() method on the ConfigurableApplicationContext interface. Here, “started” means that all Lifecycle beans receive an explicit start signal. Typically, this signal is used to restart beans after an explicit stop, but it may also be used to start components that have not been configured for autostart (for example, components that have not already started on initialization).
ContextStoppedEvent
Published when the ApplicationContext is stopped by using the stop() method on the ConfigurableApplicationContext interface. Here, “stopped” means that all Lifecycle beans receive an explicit stop signal. A stopped context may be restarted through a start() call.
ContextClosedEvent
Published when the ApplicationContext is being closed by using the close() method on the ConfigurableApplicationContext interface or via a JVM shutdown hook. Here, "closed" means that all singleton beans will be destroyed. Once the context is closed, it reaches its end of life and cannot be refreshed or restarted.
RequestHandledEvent
A web-specific event telling all beans that an HTTP request has been serviced. This event is published after the request is complete. This event is only applicable to web applications that use Spring’s DispatcherServlet.
ServletRequestHandledEvent
A subclass of RequestHandledEvent that adds Servlet-specific context information.
You can also create and publish your own custom events. The following example shows a simple class that extends Spring’s ApplicationEvent base class:
Java
public class BlockedListEvent extends ApplicationEvent {
private final String address;
private final String content;
public BlockedListEvent(Object source, String address, String content) {
super(source);
this.address = address;
this.content = content;
}
// accessor and other methods...
}
Kotlin
class BlockedListEvent(source: Any,
val address: String,
val content: String) : ApplicationEvent(source)
To publish a custom ApplicationEvent, call the publishEvent() method on an ApplicationEventPublisher. Typically, this is done by creating a class that implements ApplicationEventPublisherAware and registering it as a Spring bean. The following example shows such a class:
Java
public class EmailService implements ApplicationEventPublisherAware {
private List<String> blockedList;
private ApplicationEventPublisher publisher;
public void setBlockedList(List<String> blockedList) {
this.blockedList = blockedList;
}
public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
this.publisher = publisher;
}
public void sendEmail(String address, String content) {
if (blockedList.contains(address)) {
publisher.publishEvent(new BlockedListEvent(this, address, content));
return;
}
// send email...
}
}
Kotlin
class EmailService : ApplicationEventPublisherAware {
private lateinit var blockedList: List<String>
private lateinit var publisher: ApplicationEventPublisher
fun setBlockedList(blockedList: List<String>) {
this.blockedList = blockedList
}
override fun setApplicationEventPublisher(publisher: ApplicationEventPublisher) {
this.publisher = publisher
}
fun sendEmail(address: String, content: String) {
if (blockedList.contains(address)) {
publisher.publishEvent(BlockedListEvent(this, address, content))
return
}
// send email...
}
}
At configuration time, the Spring container detects that EmailService implements ApplicationEventPublisherAware and automatically calls setApplicationEventPublisher(). In reality, the parameter passed in is the Spring container itself. You are interacting with the application context through its ApplicationEventPublisher interface.
To receive the custom ApplicationEvent, you can create a class that implements ApplicationListener and register it as a Spring bean. The following example shows such a class:
Java
public class BlockedListNotifier implements ApplicationListener<BlockedListEvent> {
private String notificationAddress;
public void setNotificationAddress(String notificationAddress) {
this.notificationAddress = notificationAddress;
}
public void onApplicationEvent(BlockedListEvent event) {
// notify appropriate parties via notificationAddress...
}
}
Kotlin
class BlockedListNotifier : ApplicationListener<BlockedListEvent> {
lateinit var notificationAddress: String
override fun onApplicationEvent(event: BlockedListEvent) {
// notify appropriate parties via notificationAddress...
}
}
Notice that ApplicationListener is generically parameterized with the type of your custom event (BlockedListEvent in the preceding example). This means that the onApplicationEvent() method can remain type-safe, avoiding any need for downcasting. You can register as many event listeners as you wish, but note that, by default, event listeners receive events synchronously. This means that the publishEvent() method blocks until all listeners have finished processing the event. One advantage of this synchronous and single-threaded approach is that, when a listener receives an event, it operates inside the transaction context of the publisher if a transaction context is available. If another strategy for event publication becomes necessary, see the javadoc for Spring’s ApplicationEventMulticaster interface and SimpleApplicationEventMulticaster implementation for configuration options.
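For example, asynchronous dispatch can be enabled by defining a bean named applicationEventMulticaster, which the context picks up by that conventional name (the executor choice here is illustrative):

```xml
<bean id="applicationEventMulticaster"
        class="org.springframework.context.event.SimpleApplicationEventMulticaster">
    <!-- dispatch each event to listeners on a separate thread -->
    <property name="taskExecutor">
        <bean class="org.springframework.core.task.SimpleAsyncTaskExecutor"/>
    </property>
</bean>
```

Note that, with such a setup, listeners no longer run in the publisher's thread, so the transaction-context guarantee described above is lost.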
The following example shows the bean definitions used to register and configure each of the classes above:
<bean id="emailService" class="example.EmailService">
<property name="blockedList">
<list>
<value>[email protected]</value>
<value>[email protected]</value>
<value>[email protected]</value>
</list>
</property>
</bean>
<bean id="blockedListNotifier" class="example.BlockedListNotifier">
<property name="notificationAddress" value="[email protected]"/>
</bean>
Putting it all together, when the sendEmail() method of the emailService bean is called, if there are any email messages that should be blocked, a custom event of type BlockedListEvent is published. The blockedListNotifier bean is registered as an ApplicationListener and receives the BlockedListEvent, at which point it can notify appropriate parties.
Spring’s eventing mechanism is designed for simple communication between Spring beans within the same application context. However, for more sophisticated enterprise integration needs, the separately maintained Spring Integration project provides complete support for building lightweight, pattern-oriented, event-driven architectures that build upon the well-known Spring programming model.
Annotation-based Event Listeners
As of Spring 4.2, you can register an event listener on any public method of a managed bean by using the @EventListener annotation. The BlockedListNotifier can be rewritten as follows:
Java
public class BlockedListNotifier {
private String notificationAddress;
public void setNotificationAddress(String notificationAddress) {
this.notificationAddress = notificationAddress;
}
@EventListener
public void processBlockedListEvent(BlockedListEvent event) {
// notify appropriate parties via notificationAddress...
}
}
Kotlin
class BlockedListNotifier {
lateinit var notificationAddress: String
@EventListener
fun processBlockedListEvent(event: BlockedListEvent) {
// notify appropriate parties via notificationAddress...
}
}
The method signature once again declares the event type to which it listens, but, this time, with a flexible name and without implementing a specific listener interface. The event type can also be narrowed through generics as long as the actual event type resolves your generic parameter in its implementation hierarchy.
If your method should listen to several events or if you want to define it with no parameter at all, the event types can also be specified on the annotation itself. The following example shows how to do so:
Java
@EventListener({ContextStartedEvent.class, ContextRefreshedEvent.class})
public void handleContextStart() {
// ...
}
Kotlin
@EventListener(ContextStartedEvent::class, ContextRefreshedEvent::class)
fun handleContextStart() {
// ...
}
It is also possible to add further runtime filtering by using the condition attribute of the annotation, which defines a SpEL expression that should match in order to actually invoke the method for a particular event.
The following example shows how our notifier can be rewritten to be invoked only if the content attribute of the event is equal to my-event:
Java
@EventListener(condition = "#blEvent.content == 'my-event'")
public void processBlockedListEvent(BlockedListEvent blockedListEvent) {
// notify appropriate parties via notificationAddress...
}
Kotlin
@EventListener(condition = "#blEvent.content == 'my-event'")
fun processBlockedListEvent(blockedListEvent: BlockedListEvent) {
// notify appropriate parties via notificationAddress...
}
Each SpEL expression evaluates against a dedicated context. The following table lists the items made available to the context so that you can use them for conditional event processing:
Table 8. Event SpEL available metadata
Name Location Description Example
Event
root object
The actual ApplicationEvent.
#root.event or event
Arguments array
root object
The arguments (as an object array) used to invoke the method.
#root.args or args; args[0] to access the first argument, etc.
Argument name
evaluation context
The name of any of the method arguments. If, for some reason, the names are not available (for example, because there is no debug information in the compiled byte code), individual arguments are also available using the #a<#arg> syntax where <#arg> stands for the argument index (starting from 0).
#blEvent or #a0 (you can also use #p0 or #p<#arg> parameter notation as an alias)
Note that #root.event gives you access to the underlying event, even if your method signature actually refers to an arbitrary object that was published.
If you need to publish an event as the result of processing another event, you can change the method signature to return the event that should be published, as the following example shows:
Java
@EventListener
public ListUpdateEvent handleBlockedListEvent(BlockedListEvent event) {
// notify appropriate parties via notificationAddress and
// then publish a ListUpdateEvent...
}
Kotlin
@EventListener
fun handleBlockedListEvent(event: BlockedListEvent): ListUpdateEvent {
// notify appropriate parties via notificationAddress and
// then publish a ListUpdateEvent...
}
This feature is not supported for asynchronous listeners.
This new method publishes a new ListUpdateEvent for every BlockedListEvent handled by the method above. If you need to publish several events, you can return a Collection of events instead.
Asynchronous Listeners
If you want a particular listener to process events asynchronously, you can reuse the regular @Async support. The following example shows how to do so:
Java
@EventListener
@Async
public void processBlockedListEvent(BlockedListEvent event) {
// BlockedListEvent is processed in a separate thread
}
Kotlin
@EventListener
@Async
fun processBlockedListEvent(event: BlockedListEvent) {
// BlockedListEvent is processed in a separate thread
}
Be aware of the following limitations when using asynchronous events:
• If an asynchronous event listener throws an Exception, it is not propagated to the caller. See AsyncUncaughtExceptionHandler for more details.
• Asynchronous event listener methods cannot publish a subsequent event by returning a value. If you need to publish another event as the result of the processing, inject an ApplicationEventPublisher to publish the event manually.
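A sketch of the second point, assuming constructor injection and reusing the event types from the earlier examples (ListUpdateEvent is assumed here to offer a single-argument source constructor):

```java
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.scheduling.annotation.Async;

public class AsyncBlockedListNotifier {

    private final ApplicationEventPublisher publisher;

    public AsyncBlockedListNotifier(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @Async
    @EventListener
    public void processBlockedListEvent(BlockedListEvent event) {
        // an asynchronous listener cannot publish by returning a value,
        // so the follow-up event is published explicitly
        this.publisher.publishEvent(new ListUpdateEvent(this));
    }
}
```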
Ordering Listeners
If you need one listener to be invoked before another one, you can add the @Order annotation to the method declaration, as the following example shows:
Java
@EventListener
@Order(42)
public void processBlockedListEvent(BlockedListEvent event) {
// notify appropriate parties via notificationAddress...
}
Kotlin
@EventListener
@Order(42)
fun processBlockedListEvent(event: BlockedListEvent) {
// notify appropriate parties via notificationAddress...
}
Generic Events
You can also use generics to further define the structure of your event. Consider using an EntityCreatedEvent<T> where T is the type of the actual entity that got created. For example, you can create the following listener definition to receive only EntityCreatedEvent for a Person:
Java
@EventListener
public void onPersonCreated(EntityCreatedEvent<Person> event) {
// ...
}
Kotlin
@EventListener
fun onPersonCreated(event: EntityCreatedEvent<Person>) {
// ...
}
Due to type erasure, this works only if the event that is fired resolves the generic parameters on which the event listener filters (that is, something like class PersonCreatedEvent extends EntityCreatedEvent<Person> { … }).
In certain circumstances, this may become quite tedious if all events follow the same structure (as should be the case for the event in the preceding example). In such a case, you can implement ResolvableTypeProvider to guide the framework beyond what the runtime environment provides. The following event shows how to do so:
Java
public class EntityCreatedEvent<T> extends ApplicationEvent implements ResolvableTypeProvider {
public EntityCreatedEvent(T entity) {
super(entity);
}
@Override
public ResolvableType getResolvableType() {
return ResolvableType.forClassWithGenerics(getClass(), ResolvableType.forInstance(getSource()));
}
}
Kotlin
class EntityCreatedEvent<T>(entity: T) : ApplicationEvent(entity), ResolvableTypeProvider {
override fun getResolvableType(): ResolvableType? {
return ResolvableType.forClassWithGenerics(javaClass, ResolvableType.forInstance(getSource()))
}
}
This works not only for ApplicationEvent but any arbitrary object that you send as an event.
1.15.3. Convenient Access to Low-level Resources
For optimal usage and understanding of application contexts, you should familiarize yourself with Spring’s Resource abstraction, as described in Resources.
An application context is a ResourceLoader, which can be used to load Resource objects. A Resource is essentially a more feature-rich version of the JDK java.net.URL class. In fact, implementations of Resource wrap an instance of java.net.URL, where appropriate. A Resource can obtain low-level resources from almost any location in a transparent fashion, including from the classpath, a filesystem location, anywhere describable with a standard URL, and some other variations. If the resource location string is a simple path without any special prefixes, where those resources come from is specific and appropriate to the actual application context type.
You can configure a bean deployed into the application context to implement the special callback interface, ResourceLoaderAware, to be automatically called back at initialization time with the application context itself passed in as the ResourceLoader. You can also expose properties of type Resource, to be used to access static resources; they are injected like any other properties. You can specify those Resource properties as simple String paths and rely on automatic conversion from those text strings to actual Resource objects when the bean is deployed.
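A sketch of the callback variant (the TemplateRenderer class and the template path are hypothetical):

```java
import org.springframework.context.ResourceLoaderAware;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;

// Hypothetical bean: receives the surrounding ApplicationContext
// as a ResourceLoader at initialization time.
public class TemplateRenderer implements ResourceLoaderAware {

    private ResourceLoader resourceLoader;

    @Override
    public void setResourceLoader(ResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
    }

    public Resource template() {
        // resolved according to the rules of the actual context type
        return this.resourceLoader.getResource("templates/welcome.txt");
    }
}
```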
The location path or paths supplied to an ApplicationContext constructor are actually resource strings and, in simple form, are treated appropriately according to the specific context implementation. For example ClassPathXmlApplicationContext treats a simple location path as a classpath location. You can also use location paths (resource strings) with special prefixes to force loading of definitions from the classpath or a URL, regardless of the actual context type.
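For instance, regardless of how the context itself was constructed, prefixed resource strings behave the same way (the paths below are illustrative):

```java
import org.springframework.context.ApplicationContext;
import org.springframework.core.io.Resource;

public class ResourcePrefixExample {

    public static void show(ApplicationContext ctx) {
        // always resolved against the classpath
        Resource cp = ctx.getResource("classpath:some/resource/path/myTemplate.txt");
        // always resolved as a filesystem path
        Resource fs = ctx.getResource("file:///some/resource/path/myTemplate.txt");
        // always resolved as a URL
        Resource url = ctx.getResource("https://myhost.com/resource/path/myTemplate.txt");
    }
}
```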
1.15.4. Convenient ApplicationContext Instantiation for Web Applications
You can create ApplicationContext instances declaratively by using, for example, a ContextLoader. Of course, you can also create ApplicationContext instances programmatically by using one of the ApplicationContext implementations.
You can register an ApplicationContext by using the ContextLoaderListener, as the following example shows:
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/daoContext.xml /WEB-INF/applicationContext.xml</param-value>
</context-param>
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
The listener inspects the contextConfigLocation parameter. If the parameter does not exist, the listener uses /WEB-INF/applicationContext.xml as a default. When the parameter does exist, the listener separates the String by using predefined delimiters (comma, semicolon, and whitespace) and uses the values as locations where application contexts are searched. Ant-style path patterns are supported as well. Examples are /WEB-INF/*Context.xml (for all files with names that end with Context.xml and that reside in the WEB-INF directory) and /WEB-INF/**/*Context.xml (for all such files in any subdirectory of WEB-INF).
1.15.5. Deploying a Spring ApplicationContext as a Java EE RAR File
It is possible to deploy a Spring ApplicationContext as a RAR file, encapsulating the context and all of its required bean classes and library JARs in a Java EE RAR deployment unit. This is the equivalent of bootstrapping a stand-alone ApplicationContext (only hosted in a Java EE environment) that is able to access the Java EE server’s facilities. RAR deployment is a more natural alternative to a scenario of deploying a headless WAR file — in effect, a WAR file without any HTTP entry points that is used only for bootstrapping a Spring ApplicationContext in a Java EE environment.
RAR deployment is ideal for application contexts that do not need HTTP entry points but rather consist only of message endpoints and scheduled jobs. Beans in such a context can use application server resources such as the JTA transaction manager and JNDI-bound JDBC DataSource instances and JMS ConnectionFactory instances and can also register with the platform’s JMX server — all through Spring’s standard transaction management and JNDI and JMX support facilities. Application components can also interact with the application server’s JCA WorkManager through Spring’s TaskExecutor abstraction.
See the javadoc of the SpringContextResourceAdapter class for the configuration details involved in RAR deployment.
For a simple deployment of a Spring ApplicationContext as a Java EE RAR file:
1. Package all application classes into a RAR file (which is a standard JAR file with a different file extension).
2. Add all required library JARs into the root of the RAR archive.
3. Add a META-INF/ra.xml deployment descriptor (as shown in the javadoc for SpringContextResourceAdapter) and the corresponding Spring XML bean definition file(s) (typically META-INF/applicationContext.xml).
4. Drop the resulting RAR file into your application server’s deployment directory.
Such RAR deployment units are usually self-contained. They do not expose components to the outside world, not even to other modules of the same application. Interaction with a RAR-based ApplicationContext usually occurs through JMS destinations that it shares with other modules. A RAR-based ApplicationContext may also, for example, schedule some jobs or react to new files in the file system (or the like). If it needs to allow synchronous access from the outside, it could (for example) export RMI endpoints, which may be used by other application modules on the same machine.
1.16. The BeanFactory
The BeanFactory API provides the underlying basis for Spring’s IoC functionality. Its specific contracts are mostly used in integration with other parts of Spring and related third-party frameworks, and its DefaultListableBeanFactory implementation is a key delegate within the higher-level GenericApplicationContext container.
BeanFactory and related interfaces (such as BeanFactoryAware, InitializingBean, DisposableBean) are important integration points for other framework components. By not requiring any annotations or even reflection, they allow for very efficient interaction between the container and its components. Application-level beans may use the same callback interfaces but typically prefer declarative dependency injection instead, either through annotations or through programmatic configuration.
Note that the core BeanFactory API level and its DefaultListableBeanFactory implementation do not make assumptions about the configuration format or any component annotations to be used. All of these flavors come in through extensions (such as XmlBeanDefinitionReader and AutowiredAnnotationBeanPostProcessor) and operate on shared BeanDefinition objects as a core metadata representation. This is the essence of what makes Spring’s container so flexible and extensible.
1.16.1. BeanFactory or ApplicationContext?
This section explains the differences between the BeanFactory and ApplicationContext container levels and the implications on bootstrapping.
You should use an ApplicationContext unless you have a good reason for not doing so, with GenericApplicationContext and its subclass AnnotationConfigApplicationContext as the common implementations for custom bootstrapping. These are the primary entry points to Spring’s core container for all common purposes: loading of configuration files, triggering a classpath scan, programmatically registering bean definitions and annotated classes, and (as of 5.0) registering functional bean definitions.
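A minimal sketch of functional registration on a GenericApplicationContext (MyService is a hypothetical class):

```java
import org.springframework.context.support.GenericApplicationContext;

public class FunctionalBootstrap {

    public static void main(String[] args) {
        GenericApplicationContext ctx = new GenericApplicationContext();
        // functional bean definition, available as of Spring 5.0
        ctx.registerBean(MyService.class, MyService::new);
        ctx.refresh();
        MyService service = ctx.getBean(MyService.class);
        ctx.close();
    }
}
```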
Because an ApplicationContext includes all the functionality of a BeanFactory, it is generally recommended over a plain BeanFactory, except for scenarios where full control over bean processing is needed. Within an ApplicationContext (such as the GenericApplicationContext implementation), several kinds of beans are detected by convention (that is, by bean name or by bean type — in particular, post-processors), while a plain DefaultListableBeanFactory is agnostic about any special beans.
For many extended container features, such as annotation processing and AOP proxying, the BeanPostProcessor extension point is essential. If you use only a plain DefaultListableBeanFactory, such post-processors do not get detected and activated by default. This situation could be confusing, because nothing is actually wrong with your bean configuration. Rather, in such a scenario, the container needs to be fully bootstrapped through additional setup.
The following table lists features provided by the BeanFactory and ApplicationContext interfaces and implementations.
Table 9. Feature Matrix
Feature BeanFactory ApplicationContext
Bean instantiation/wiring
Yes
Yes
Integrated lifecycle management
No
Yes
Automatic BeanPostProcessor registration
No
Yes
Automatic BeanFactoryPostProcessor registration
No
Yes
Convenient MessageSource access (for internationalization)
No
Yes
Built-in ApplicationEvent publication mechanism
No
Yes
To explicitly register a bean post-processor with a DefaultListableBeanFactory, you need to programmatically call addBeanPostProcessor, as the following example shows:
Java
DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
// populate the factory with bean definitions
// now register any needed BeanPostProcessor instances
factory.addBeanPostProcessor(new AutowiredAnnotationBeanPostProcessor());
factory.addBeanPostProcessor(new MyBeanPostProcessor());
// now start using the factory
Kotlin
val factory = DefaultListableBeanFactory()
// populate the factory with bean definitions
// now register any needed BeanPostProcessor instances
factory.addBeanPostProcessor(AutowiredAnnotationBeanPostProcessor())
factory.addBeanPostProcessor(MyBeanPostProcessor())
// now start using the factory
To apply a BeanFactoryPostProcessor to a plain DefaultListableBeanFactory, you need to call its postProcessBeanFactory method, as the following example shows:
Java
DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
XmlBeanDefinitionReader reader = new XmlBeanDefinitionReader(factory);
reader.loadBeanDefinitions(new FileSystemResource("beans.xml"));
// bring in some property values from a Properties file
PropertySourcesPlaceholderConfigurer cfg = new PropertySourcesPlaceholderConfigurer();
cfg.setLocation(new FileSystemResource("jdbc.properties"));
// now actually do the replacement
cfg.postProcessBeanFactory(factory);
Kotlin
val factory = DefaultListableBeanFactory()
val reader = XmlBeanDefinitionReader(factory)
reader.loadBeanDefinitions(FileSystemResource("beans.xml"))
// bring in some property values from a Properties file
val cfg = PropertySourcesPlaceholderConfigurer()
cfg.setLocation(FileSystemResource("jdbc.properties"))
// now actually do the replacement
cfg.postProcessBeanFactory(factory)
In both cases, the explicit registration steps are inconvenient, which is why the various ApplicationContext variants are preferred over a plain DefaultListableBeanFactory in Spring-backed applications, especially when relying on BeanFactoryPostProcessor and BeanPostProcessor instances for extended container functionality in a typical enterprise setup.
An AnnotationConfigApplicationContext has all common annotation post-processors registered and may bring in additional processors underneath the covers through configuration annotations, such as @EnableTransactionManagement. At the abstraction level of Spring’s annotation-based configuration model, the notion of bean post-processors becomes a mere internal container detail.
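As a sketch of the contrast with the manual registration shown above, the following example (using a hypothetical configuration class and base package) bootstraps an AnnotationConfigApplicationContext, which registers the annotation post-processors itself:

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

// AppConfig and the "com.example.app" package are hypothetical names for this sketch
@Configuration
@ComponentScan("com.example.app")
public class AppConfig {

    public static void main(String[] args) {
        // the context registers AutowiredAnnotationBeanPostProcessor and the
        // other common post-processors itself; no addBeanPostProcessor(...)
        // calls are needed, in contrast to the plain DefaultListableBeanFactory
        ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
        System.out.println(ctx.getBeanDefinitionCount());
    }
}
```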
2. Resources
This chapter covers how Spring handles resources and how you can work with resources in Spring. It includes the following topics:
2.1. Introduction
Java’s standard java.net.URL class and standard handlers for various URL prefixes, unfortunately, are not quite adequate enough for all access to low-level resources. For example, there is no standardized URL implementation that may be used to access a resource that needs to be obtained from the classpath or relative to a ServletContext. While it is possible to register new handlers for specialized URL prefixes (similar to existing handlers for prefixes such as http:), this is generally quite complicated, and the URL interface still lacks some desirable functionality, such as a method to check for the existence of the resource being pointed to.
2.2. The Resource Interface
Spring’s Resource interface is meant to be a more capable interface for abstracting access to low-level resources. The following listing shows the Resource interface definition:
Java
public interface Resource extends InputStreamSource {
boolean exists();
boolean isOpen();
URL getURL() throws IOException;
File getFile() throws IOException;
Resource createRelative(String relativePath) throws IOException;
String getFilename();
String getDescription();
}
Kotlin
interface Resource : InputStreamSource {
fun exists(): Boolean
val isOpen: Boolean
val url: URL
val file: File
@Throws(IOException::class)
fun createRelative(relativePath: String): Resource
val filename: String
val description: String
}
As the definition of the Resource interface shows, it extends the InputStreamSource interface. The following listing shows the definition of the InputStreamSource interface:
Java
public interface InputStreamSource {
InputStream getInputStream() throws IOException;
}
Kotlin
interface InputStreamSource {
val inputStream: InputStream
}
Some of the most important methods from the Resource interface are:
• getInputStream(): Locates and opens the resource, returning an InputStream for reading from the resource. It is expected that each invocation returns a fresh InputStream. It is the responsibility of the caller to close the stream.
• exists(): Returns a boolean indicating whether this resource actually exists in physical form.
• isOpen(): Returns a boolean indicating whether this resource represents a handle with an open stream. If true, the InputStream cannot be read multiple times and must be read once only and then closed to avoid resource leaks. Returns false for all usual resource implementations, with the exception of InputStreamResource.
• getDescription(): Returns a description for this resource, to be used for error output when working with the resource. This is often the fully qualified file name or the actual URL of the resource.
Other methods let you obtain an actual URL or File object representing the resource (if the underlying implementation is compatible and supports that functionality).
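A minimal sketch of the methods described above, assuming a classpath file at the hypothetical location config/app.properties:

```java
import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.Resource;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ResourceReadExample {

    public static void main(String[] args) throws IOException {
        // "config/app.properties" is a hypothetical classpath location
        Resource resource = new ClassPathResource("config/app.properties");
        if (resource.exists()) {
            // each getInputStream() invocation returns a fresh stream;
            // the caller is responsible for closing it
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(resource.getInputStream(), StandardCharsets.UTF_8))) {
                reader.lines().forEach(System.out::println);
            }
        } else {
            // getDescription() is intended for error output such as this
            System.out.println("Not found: " + resource.getDescription());
        }
    }
}
```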
Spring itself uses the Resource abstraction extensively, as an argument type in many method signatures when a resource is needed. Other methods in some Spring APIs (such as the constructors to various ApplicationContext implementations) take a String which in unadorned or simple form is used to create a Resource appropriate to that context implementation or, via special prefixes on the String path, let the caller specify that a specific Resource implementation must be created and used.
While the Resource interface is used a lot with Spring and by Spring, it is actually very useful to use as a general utility class by itself in your own code, for access to resources, even when your code does not know or care about any other parts of Spring. While this couples your code to Spring, it really only couples it to this small set of utility classes, which serve as a more capable replacement for URL and can be considered equivalent to any other library you would use for this purpose.
The Resource abstraction does not replace functionality. It wraps it where possible. For example, a UrlResource wraps a URL and uses the wrapped URL to do its work.
2.3. Built-in Resource Implementations
Spring includes the following Resource implementations:
2.3.1. UrlResource
UrlResource wraps a java.net.URL and can be used to access any object that is normally accessible with a URL, such as files, an HTTP target, an FTP target, and others. All URLs have a standardized String representation, with standardized prefixes used to distinguish one URL type from another. This includes file: for accessing filesystem paths, http: for accessing resources through the HTTP protocol, ftp: for accessing resources through FTP, and others.
A UrlResource is created by Java code by explicitly using the UrlResource constructor but is often created implicitly when you call an API method that takes a String argument meant to represent a path. For the latter case, a JavaBeans PropertyEditor ultimately decides which type of Resource to create. If the path string contains a well-known (to it, that is) prefix (such as classpath:), it creates an appropriate specialized Resource for that prefix. However, if it does not recognize the prefix, it assumes the string is a standard URL string and creates a UrlResource.

2.3.2. ClassPathResource
This class represents a resource that should be obtained from the classpath. It uses either the thread context class loader, a given class loader, or a given class for loading resources.
This Resource implementation supports resolution as java.io.File if the class path resource resides in the file system but not for classpath resources that reside in a jar and have not been expanded (by the servlet engine or whatever the environment is) to the filesystem. To address this, the various Resource implementations always support resolution as a java.net.URL.
A ClassPathResource is created by Java code by explicitly using the ClassPathResource constructor but is often created implicitly when you call an API method that takes a String argument meant to represent a path. For the latter case, a JavaBeans PropertyEditor recognizes the special prefix, classpath:, on the string path and creates a ClassPathResource in that case.
2.3.3. FileSystemResource
This is a Resource implementation for java.io.File and java.nio.file.Path handles. It supports resolution as a File and as a URL.
2.3.4. ServletContextResource
This is a Resource implementation for ServletContext resources that interprets relative paths within the relevant web application’s root directory.
It always supports stream access and URL access but allows java.io.File access only when the web application archive is expanded and the resource is physically on the filesystem. Whether or not it is expanded and on the filesystem or accessed directly from the JAR or somewhere else like a database (which is conceivable) is actually dependent on the Servlet container.
2.3.5. InputStreamResource
An InputStreamResource is a Resource implementation for a given InputStream. It should be used only if no specific Resource implementation is applicable. In particular, prefer ByteArrayResource or any of the file-based Resource implementations where possible.
In contrast to other Resource implementations, this is a descriptor for an already-opened resource. Therefore, it returns true from isOpen(). Do not use it if you need to keep the resource descriptor somewhere or if you need to read a stream multiple times.
2.3.6. ByteArrayResource
This is a Resource implementation for a given byte array. It creates a ByteArrayInputStream for the given byte array.
It is useful for loading content from any given byte array without having to resort to a single-use InputStreamResource.
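The following sketch illustrates the repeatable-read property that distinguishes ByteArrayResource from InputStreamResource:

```java
import org.springframework.core.io.ByteArrayResource;
import org.springframework.core.io.Resource;

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ByteArrayResourceExample {

    public static void main(String[] args) throws IOException {
        byte[] content = "in-memory content".getBytes(StandardCharsets.UTF_8);
        Resource resource = new ByteArrayResource(content, "example byte array");
        // unlike InputStreamResource, a ByteArrayResource can be read repeatedly:
        // each getInputStream() call creates a fresh ByteArrayInputStream
        for (int i = 0; i < 2; i++) {
            try (InputStream in = resource.getInputStream()) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
    }
}
```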
2.4. The ResourceLoader
The ResourceLoader interface is meant to be implemented by objects that can return (that is, load) Resource instances. The following listing shows the ResourceLoader interface definition:
Java
public interface ResourceLoader {
Resource getResource(String location);
}
Kotlin
interface ResourceLoader {
fun getResource(location: String): Resource
}
All application contexts implement the ResourceLoader interface. Therefore, all application contexts may be used to obtain Resource instances.
When you call getResource() on a specific application context, and the location path specified doesn’t have a specific prefix, you get back a Resource type that is appropriate to that particular application context. For example, assume the following snippet of code was executed against a ClassPathXmlApplicationContext instance:
Java
Resource template = ctx.getResource("some/resource/path/myTemplate.txt");
Kotlin
val template = ctx.getResource("some/resource/path/myTemplate.txt")
Against a ClassPathXmlApplicationContext, that code returns a ClassPathResource. If the same method were executed against a FileSystemXmlApplicationContext instance, it would return a FileSystemResource. For a WebApplicationContext, it would return a ServletContextResource. It would similarly return appropriate objects for each context.
As a result, you can load resources in a fashion appropriate to the particular application context.
On the other hand, you may also force ClassPathResource to be used, regardless of the application context type, by specifying the special classpath: prefix, as the following example shows:
Java
Resource template = ctx.getResource("classpath:some/resource/path/myTemplate.txt");
Kotlin
val template = ctx.getResource("classpath:some/resource/path/myTemplate.txt")
Similarly, you can force a UrlResource to be used by specifying any of the standard java.net.URL prefixes. The following pair of examples use the file and http prefixes:
Java
Resource template = ctx.getResource("file:///some/resource/path/myTemplate.txt");
Kotlin
val template = ctx.getResource("file:///some/resource/path/myTemplate.txt")
Java
Resource template = ctx.getResource("https://myhost.com/resource/path/myTemplate.txt");
Kotlin
val template = ctx.getResource("https://myhost.com/resource/path/myTemplate.txt")
The following table summarizes the strategy for converting String objects to Resource objects:
Table 10. Resource strings
Prefix Example Explanation
classpath:
classpath:com/myapp/config.xml
Loaded from the classpath.
file:
file:///data/config.xml
Loaded as a URL from the filesystem. See also FileSystemResource Caveats.
http:
https://myserver/logo.png
Loaded as a URL.
(none)
/data/config.xml
Depends on the underlying ApplicationContext.
2.5. The ResourceLoaderAware interface
The ResourceLoaderAware interface is a special callback interface which identifies components that expect to be provided with a ResourceLoader reference. The following listing shows the definition of the ResourceLoaderAware interface:
Java
public interface ResourceLoaderAware {
void setResourceLoader(ResourceLoader resourceLoader);
}
Kotlin
interface ResourceLoaderAware {
fun setResourceLoader(resourceLoader: ResourceLoader)
}
When a class implements ResourceLoaderAware and is deployed into an application context (as a Spring-managed bean), it is recognized as ResourceLoaderAware by the application context. The application context then invokes setResourceLoader(ResourceLoader), supplying itself as the argument (remember, all application contexts in Spring implement the ResourceLoader interface).
Since an ApplicationContext is a ResourceLoader, the bean could also implement the ApplicationContextAware interface and use the supplied application context directly to load resources. However, in general, it is better to use the specialized ResourceLoader interface if that is all you need. The code would be coupled only to the resource loading interface (which can be considered a utility interface) and not to the whole Spring ApplicationContext interface.
In application components, you may also rely upon autowiring of the ResourceLoader as an alternative to implementing the ResourceLoaderAware interface. The “traditional” constructor and byType autowiring modes (as described in Autowiring Collaborators) are capable of providing a ResourceLoader for either a constructor argument or a setter method parameter, respectively. For more flexibility (including the ability to autowire fields and multiple parameter methods), consider using the annotation-based autowiring features. In that case, the ResourceLoader is autowired into a field, constructor argument, or method parameter that expects the ResourceLoader type as long as the field, constructor, or method in question carries the @Autowired annotation. For more information, see Using @Autowired.
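As a sketch of the autowiring alternative (the component and method names are hypothetical), constructor injection supplies the ResourceLoader without the component implementing any Aware interface:

```java
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// a hypothetical component: the container injects itself as the ResourceLoader,
// so neither ResourceLoaderAware nor ApplicationContextAware is needed
@Component
public class TemplateLoader {

    private final ResourceLoader resourceLoader;

    public TemplateLoader(ResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
    }

    public String load(String location) throws IOException {
        // the location may carry a prefix (classpath:, file:, ...) or be
        // context-relative, exactly as described earlier
        Resource resource = resourceLoader.getResource(location);
        try (InputStream in = resource.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```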
2.6. Resources as Dependencies
If the bean itself is going to determine and supply the resource path through some sort of dynamic process, it probably makes sense for the bean to use the ResourceLoader interface to load resources. For example, consider the loading of a template of some sort, where the specific resource that is needed depends on the role of the user. If the resources are static, it makes sense to eliminate the use of the ResourceLoader interface completely, have the bean expose the Resource properties it needs, and expect them to be injected into it.
What makes it trivial to then inject these properties is that all application contexts register and use a special JavaBeans PropertyEditor, which can convert String paths to Resource objects. So, if myBean has a template property of type Resource, it can be configured with a simple string for that resource, as the following example shows:
<bean id="myBean" class="...">
<property name="template" value="some/resource/path/myTemplate.txt"/>
</bean>
Note that the resource path has no prefix. Consequently, because the application context itself is going to be used as the ResourceLoader, the resource itself is loaded through a ClassPathResource, a FileSystemResource, or a ServletContextResource, depending on the exact type of the context.
If you need to force a specific Resource type to be used, you can use a prefix. The following two examples show how to force a ClassPathResource and a UrlResource (the latter being used to access a filesystem file):
<property name="template" value="classpath:some/resource/path/myTemplate.txt"/>
<property name="template" value="file:///some/resource/path/myTemplate.txt"/>
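A hypothetical bean class matching the XML above might look as follows; the container converts the String path into a Resource through the registered PropertyEditor and injects it via the setter:

```java
import org.springframework.core.io.Resource;

// a hypothetical bean for the <property name="template" .../> examples above
public class MyBean {

    private Resource template;

    // the container calls this setter with the converted Resource
    public void setTemplate(Resource template) {
        this.template = template;
    }

    public Resource getTemplate() {
        return this.template;
    }
}
```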
2.7. Application Contexts and Resource Paths
This section covers how to create application contexts with resources, including shortcuts that work with XML, how to use wildcards, and other details.
2.7.1. Constructing Application Contexts
An application context constructor (for a specific application context type) generally takes a string or array of strings as the location paths of the resources, such as XML files that make up the definition of the context.
When such a location path does not have a prefix, the specific Resource type built from that path and used to load the bean definitions depends on and is appropriate to the specific application context. For example, consider the following example, which creates a ClassPathXmlApplicationContext:
Java
ApplicationContext ctx = new ClassPathXmlApplicationContext("conf/appContext.xml");
Kotlin
val ctx = ClassPathXmlApplicationContext("conf/appContext.xml")
The bean definitions are loaded from the classpath, because a ClassPathResource is used. However, consider the following example, which creates a FileSystemXmlApplicationContext:
Java
ApplicationContext ctx =
new FileSystemXmlApplicationContext("conf/appContext.xml");
Kotlin
val ctx = FileSystemXmlApplicationContext("conf/appContext.xml")
Now the bean definition is loaded from a filesystem location (in this case, relative to the current working directory).
Note that the use of the special classpath prefix or a standard URL prefix on the location path overrides the default type of Resource created to load the definition. Consider the following example:
Java
ApplicationContext ctx =
new FileSystemXmlApplicationContext("classpath:conf/appContext.xml");
Kotlin
val ctx = FileSystemXmlApplicationContext("classpath:conf/appContext.xml")
Using FileSystemXmlApplicationContext loads the bean definitions from the classpath. However, it is still a FileSystemXmlApplicationContext. If it is subsequently used as a ResourceLoader, any unprefixed paths are still treated as filesystem paths.
Constructing ClassPathXmlApplicationContext Instances — Shortcuts
The ClassPathXmlApplicationContext exposes a number of constructors to enable convenient instantiation. The basic idea is that you can supply merely a string array that contains only the filenames of the XML files themselves (without the leading path information) and also supply a Class. The ClassPathXmlApplicationContext then derives the path information from the supplied class.
Consider the following directory layout:
com/
foo/
services.xml
daos.xml
MessengerService.class
The following example shows how a ClassPathXmlApplicationContext instance composed of the beans defined in files named services.xml and daos.xml (which are on the classpath) can be instantiated:
Java
ApplicationContext ctx = new ClassPathXmlApplicationContext(
new String[] {"services.xml", "daos.xml"}, MessengerService.class);
Kotlin
val ctx = ClassPathXmlApplicationContext(arrayOf("services.xml", "daos.xml"), MessengerService::class.java)
See the ClassPathXmlApplicationContext javadoc for details on the various constructors.
2.7.2. Wildcards in Application Context Constructor Resource Paths
The resource paths in application context constructor values may be simple paths (as shown earlier), each of which has a one-to-one mapping to a target Resource or, alternately, may contain the special "classpath*:" prefix or internal Ant-style regular expressions (matched by using Spring’s PathMatcher utility). Both of the latter are effectively wildcards.
One use for this mechanism is when you need to do component-style application assembly. All components can 'publish' context definition fragments to a well-known location path, and, when the final application context is created using the same path prefixed with classpath*:, all component fragments are automatically picked up.
Note that this wildcarding is specific to the use of resource paths in application context constructors (or when you use the PathMatcher utility class hierarchy directly) and is resolved at construction time. It has nothing to do with the Resource type itself. You cannot use the classpath*: prefix to construct an actual Resource, as a resource points to just one resource at a time.
Ant-style Patterns
Path locations can contain Ant-style patterns, as the following example shows:
/WEB-INF/*-context.xml
com/mycompany/**/applicationContext.xml
file:C:/some/path/*-context.xml
classpath:com/mycompany/**/applicationContext.xml
When the path location contains an Ant-style pattern, the resolver follows a more complex procedure to try to resolve the wildcard. It produces a Resource for the path up to the last non-wildcard segment and obtains a URL from it. If this URL is not a jar: URL or container-specific variant (such as zip: in WebLogic, wsjar in WebSphere, and so on), a java.io.File is obtained from it and used to resolve the wildcard by traversing the filesystem. In the case of a jar URL, the resolver either gets a java.net.JarURLConnection from it or manually parses the jar URL and then traverses the contents of the jar file to resolve the wildcards.
Implications on Portability
If the specified path is already a file URL (either implicitly because the base ResourceLoader is a filesystem one or explicitly), wildcarding is guaranteed to work in a completely portable fashion.
If the specified path is a classpath location, the resolver must obtain the last non-wildcard path segment URL by making a Classloader.getResource() call. Since this is just a node of the path (not the file at the end), it is actually undefined (in the ClassLoader javadoc) exactly what sort of a URL is returned in this case. In practice, it is always a java.io.File representing the directory (where the classpath resource resolves to a filesystem location) or a jar URL of some sort (where the classpath resource resolves to a jar location). Still, there is a portability concern on this operation.
If a jar URL is obtained for the last non-wildcard segment, the resolver must be able to get a java.net.JarURLConnection from it or manually parse the jar URL, to be able to walk the contents of the jar and resolve the wildcard. This does work in most environments but fails in others, and we strongly recommend that the wildcard resolution of resources coming from jars be thoroughly tested in your specific environment before you rely on it.
The classpath*: Prefix
When constructing an XML-based application context, a location string may use the special classpath*: prefix, as the following example shows:
Java
ApplicationContext ctx =
new ClassPathXmlApplicationContext("classpath*:conf/appContext.xml");
Kotlin
val ctx = ClassPathXmlApplicationContext("classpath*:conf/appContext.xml")
This special prefix specifies that all classpath resources that match the given name must be obtained (internally, this essentially happens through a call to ClassLoader.getResources(…)) and then merged to form the final application context definition.
The wildcard classpath relies on the getResources() method of the underlying classloader. As most application servers nowadays supply their own classloader implementation, the behavior might differ, especially when dealing with jar files. A simple test to check if classpath* works is to use the classloader to load a file from within a jar on the classpath: getClass().getClassLoader().getResources("<someFileInsideTheJar>"). Try this test with files that have the same name but are placed inside two different locations. In case an inappropriate result is returned, check the application server documentation for settings that might affect the classloader behavior.
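The classloader test described above can be sketched with plain JDK code (substitute a file name you know resides inside one of your jars):

```java
import java.net.URL;
import java.util.Enumeration;

public class ClasspathProbe {

    public static void main(String[] args) throws Exception {
        // MANIFEST.MF files are present in most jars on the classpath;
        // replace with a file you expect inside a specific jar to test
        // your container's classloader behavior
        Enumeration<URL> urls = ClasspathProbe.class.getClassLoader()
                .getResources("META-INF/MANIFEST.MF");
        while (urls.hasMoreElements()) {
            System.out.println(urls.nextElement());
        }
    }
}
```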
You can also combine the classpath*: prefix with a PathMatcher pattern in the rest of the location path (for example, classpath*:META-INF/*-beans.xml). In this case, the resolution strategy is fairly simple: A ClassLoader.getResources() call is used on the last non-wildcard path segment to get all the matching resources in the class loader hierarchy and then, off each resource, the same PathMatcher resolution strategy described earlier is used for the wildcard subpath.
Other Notes Relating to Wildcards
Note that classpath*:, when combined with Ant-style patterns, only works reliably with at least one root directory before the pattern starts, unless the actual target files reside in the file system. This means that a pattern such as classpath*:*.xml might not retrieve files from the root of jar files but rather only from the root of expanded directories.
Spring’s ability to retrieve classpath entries originates from the JDK’s ClassLoader.getResources() method, which only returns file system locations for an empty string (indicating potential roots to search). Spring evaluates URLClassLoader runtime configuration and the java.class.path manifest in jar files as well, but this is not guaranteed to lead to portable behavior.
The scanning of classpath packages requires the presence of corresponding directory entries in the classpath. When you build JARs with Ant, do not activate the files-only switch of the JAR task. Also, classpath directories may not get exposed based on security policies in some environments — for example, stand-alone applications on JDK 1.7.0_45 and higher (which requires 'Trusted-Library' to be set up in your manifests. See https://stackoverflow.com/questions/19394570/java-jre-7u45-breaks-classloader-getresources).
On JDK 9’s module path (Jigsaw), Spring’s classpath scanning generally works as expected. Putting resources into a dedicated directory is highly recommendable here as well, avoiding the aforementioned portability problems with searching the jar file root level.
Ant-style patterns with classpath: resources are not guaranteed to find matching resources if the root package to search is available in multiple class path locations. Consider the following example of a resource location:
com/mycompany/package1/service-context.xml
Now consider an Ant-style path that someone might use to try to find that file:
classpath:com/mycompany/**/service-context.xml
Such a resource may be in only one location, but when a path such as the preceding example is used to try to resolve it, the resolver works off the (first) URL returned by getResource("com/mycompany");. If this base package node exists in multiple classloader locations, the actual end resource may not be there. Therefore, in such a case you should prefer using classpath*: with the same Ant-style pattern, which searches all class path locations that contain the root package.
2.7.3. FileSystemResource Caveats
A FileSystemResource that is not attached to a FileSystemApplicationContext (that is, when a FileSystemApplicationContext is not the actual ResourceLoader) treats absolute and relative paths as you would expect. Relative paths are relative to the current working directory, while absolute paths are relative to the root of the filesystem.
For backwards compatibility (historical) reasons however, this changes when the FileSystemApplicationContext is the ResourceLoader. The FileSystemApplicationContext forces all attached FileSystemResource instances to treat all location paths as relative, whether they start with a leading slash or not. In practice, this means the following examples are equivalent:
Java
ApplicationContext ctx =
new FileSystemXmlApplicationContext("conf/context.xml");
Kotlin
val ctx = FileSystemXmlApplicationContext("conf/context.xml")
Java
ApplicationContext ctx =
new FileSystemXmlApplicationContext("/conf/context.xml");
Kotlin
val ctx = FileSystemXmlApplicationContext("/conf/context.xml")
The following examples are also equivalent (even though it would make sense for them to be different, as one case is relative and the other absolute):
Java
FileSystemXmlApplicationContext ctx = ...;
ctx.getResource("some/resource/path/myTemplate.txt");
Kotlin
val ctx: FileSystemXmlApplicationContext = ...
ctx.getResource("some/resource/path/myTemplate.txt")
Java
FileSystemXmlApplicationContext ctx = ...;
ctx.getResource("/some/resource/path/myTemplate.txt");
Kotlin
val ctx: FileSystemXmlApplicationContext = ...
ctx.getResource("/some/resource/path/myTemplate.txt")
In practice, if you need true absolute filesystem paths, you should avoid using absolute paths with FileSystemResource or FileSystemXmlApplicationContext and force the use of a UrlResource by using the file: URL prefix. The following examples show how to do so:
Java
// actual context type doesn't matter, the Resource will always be UrlResource
ctx.getResource("file:///some/resource/path/myTemplate.txt");
Kotlin
// actual context type doesn't matter, the Resource will always be UrlResource
ctx.getResource("file:///some/resource/path/myTemplate.txt")
Java
// force this FileSystemXmlApplicationContext to load its definition via a UrlResource
ApplicationContext ctx =
new FileSystemXmlApplicationContext("file:///conf/context.xml");
Kotlin
// force this FileSystemXmlApplicationContext to load its definition via a UrlResource
val ctx = FileSystemXmlApplicationContext("file:///conf/context.xml")
3. Validation, Data Binding, and Type Conversion
There are pros and cons for considering validation as business logic, and Spring offers a design for validation (and data binding) that does not exclude either one of them. Specifically, validation should not be tied to the web tier and should be easy to localize, and it should be possible to plug in any available validator. Considering these concerns, Spring provides a Validator contract that is both basic and eminently usable in every layer of an application.
Data binding is useful for letting user input be dynamically bound to the domain model of an application (or whatever objects you use to process user input). Spring provides the aptly named DataBinder to do exactly that. The Validator and the DataBinder make up the validation package, which is primarily used in but not limited to the web layer.
The BeanWrapper is a fundamental concept in the Spring Framework and is used in a lot of places. However, you probably do not need to use the BeanWrapper directly. Because this is reference documentation, we felt that some explanation might be in order. We explain the BeanWrapper in this chapter, since, if you are going to use it at all, you are most likely to do so when trying to bind data to objects.
Spring’s DataBinder and the lower-level BeanWrapper both use PropertyEditorSupport implementations to parse and format property values. The PropertyEditor and PropertyEditorSupport types are part of the JavaBeans specification and are also explained in this chapter. Spring 3 introduced a core.convert package that provides a general type conversion facility, as well as a higher-level “format” package for formatting UI field values. You can use these packages as simpler alternatives to PropertyEditorSupport implementations. They are also discussed in this chapter.
Spring supports Java Bean Validation through setup infrastructure and an adaptor to Spring’s own Validator contract. Applications can enable Bean Validation once globally, as described in Java Bean Validation, and use it exclusively for all validation needs. In the web layer, applications can further register controller-local Spring Validator instances per DataBinder, as described in Configuring a DataBinder, which can be useful for plugging in custom validation logic.
3.1. Validation by Using Spring’s Validator Interface
Spring features a Validator interface that you can use to validate objects. The Validator interface works by using an Errors object so that, while validating, validators can report validation failures to the Errors object.
Consider the following example of a small data object:
Java
public class Person {
private String name;
private int age;
// the usual getters and setters...
}
Kotlin
class Person(val name: String, val age: Int)
The next example provides validation behavior for the Person class by implementing the following two methods of the org.springframework.validation.Validator interface:
• supports(Class): Can this Validator validate instances of the supplied Class?
• validate(Object, org.springframework.validation.Errors): Validates the given object and, in case of validation errors, registers those with the given Errors object.
Implementing a Validator is fairly straightforward, especially when you know of the ValidationUtils helper class that the Spring Framework also provides. The following example implements Validator for Person instances:
Java
public class PersonValidator implements Validator {
/**
* This Validator validates only Person instances
*/
public boolean supports(Class<?> clazz) {
return Person.class.equals(clazz);
}
public void validate(Object obj, Errors e) {
ValidationUtils.rejectIfEmpty(e, "name", "name.empty");
Person p = (Person) obj;
if (p.getAge() < 0) {
e.rejectValue("age", "negativevalue");
} else if (p.getAge() > 110) {
e.rejectValue("age", "too.darn.old");
}
}
}
Kotlin
class PersonValidator : Validator {
/*
* This Validator validates only Person instances
*/
override fun supports(clazz: Class<*>): Boolean {
return Person::class.java == clazz
}
override fun validate(obj: Any, e: Errors) {
ValidationUtils.rejectIfEmpty(e, "name", "name.empty")
val p = obj as Person
if (p.age < 0) {
e.rejectValue("age", "negativevalue")
} else if (p.age > 110) {
e.rejectValue("age", "too.darn.old")
}
}
}
The static rejectIfEmpty(..) method on the ValidationUtils class is used to reject the name property if it is null or the empty string. Have a look at the ValidationUtils javadoc to see what functionality it provides besides the example shown previously.
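The rejection logic above can be exercised without a Spring dependency by sketching a hypothetical minimal stand-in for the Errors contract. SimpleErrors below is not a Spring class; it merely collects the rejected field/code pairs so the validation rules can be checked in isolation:

```java
import java.util.ArrayList;
import java.util.List;

public class PersonValidatorDemo {

	// Minimal local stand-in for org.springframework.validation.Errors,
	// just enough to collect rejected codes
	public static class SimpleErrors {
		private final List<String> codes = new ArrayList<>();
		public void rejectValue(String field, String errorCode) {
			codes.add(field + ":" + errorCode);
		}
		public List<String> getCodes() { return codes; }
	}

	public static class Person {
		private final String name;
		private final int age;
		public Person(String name, int age) { this.name = name; this.age = age; }
		public String getName() { return name; }
		public int getAge() { return age; }
	}

	// Mirrors the validate(..) logic shown above, with the stand-in Errors
	public static List<String> validate(Person p) {
		SimpleErrors e = new SimpleErrors();
		if (p.getName() == null || p.getName().isEmpty()) { // rejectIfEmpty
			e.rejectValue("name", "name.empty");
		}
		if (p.getAge() < 0) {
			e.rejectValue("age", "negativevalue");
		} else if (p.getAge() > 110) {
			e.rejectValue("age", "too.darn.old");
		}
		return e.getCodes();
	}
}
```

A Person with an empty name and an age of 130 is rejected twice, while a valid Person produces no codes.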
While it is certainly possible to implement a single Validator class to validate each of the nested objects in a rich object, it may be better to encapsulate the validation logic for each nested class of object in its own Validator implementation. A simple example of a “rich” object would be a Customer that is composed of two String properties (a first and a second name) and a complex Address object. Address objects may be used independently of Customer objects, so a distinct AddressValidator has been implemented. If you want your CustomerValidator to reuse the logic contained within the AddressValidator class without resorting to copy-and-paste, you can dependency-inject or instantiate an AddressValidator within your CustomerValidator, as the following example shows:
Java
public class CustomerValidator implements Validator {
private final Validator addressValidator;
public CustomerValidator(Validator addressValidator) {
if (addressValidator == null) {
throw new IllegalArgumentException("The supplied [Validator] is " +
"required and must not be null.");
}
if (!addressValidator.supports(Address.class)) {
throw new IllegalArgumentException("The supplied [Validator] must " +
"support the validation of [Address] instances.");
}
this.addressValidator = addressValidator;
}
/**
* This Validator validates Customer instances, and any subclasses of Customer too
*/
public boolean supports(Class<?> clazz) {
return Customer.class.isAssignableFrom(clazz);
}
public void validate(Object target, Errors errors) {
ValidationUtils.rejectIfEmptyOrWhitespace(errors, "firstName", "field.required");
ValidationUtils.rejectIfEmptyOrWhitespace(errors, "surname", "field.required");
Customer customer = (Customer) target;
try {
errors.pushNestedPath("address");
ValidationUtils.invokeValidator(this.addressValidator, customer.getAddress(), errors);
} finally {
errors.popNestedPath();
}
}
}
Kotlin
class CustomerValidator(private val addressValidator: Validator) : Validator {
init {
// the null check from the Java variant is unnecessary here: the parameter type is non-nullable
if (!addressValidator.supports(Address::class.java)) {
throw IllegalArgumentException("The supplied [Validator] must " +
"support the validation of [Address] instances.")
}
}
/*
* This Validator validates Customer instances, and any subclasses of Customer too
*/
override fun supports(clazz: Class<*>): Boolean {
return Customer::class.java.isAssignableFrom(clazz)
}
override fun validate(target: Any, errors: Errors) {
ValidationUtils.rejectIfEmptyOrWhitespace(errors, "firstName", "field.required")
ValidationUtils.rejectIfEmptyOrWhitespace(errors, "surname", "field.required")
val customer = target as Customer
try {
errors.pushNestedPath("address")
ValidationUtils.invokeValidator(this.addressValidator, customer.address, errors)
} finally {
errors.popNestedPath()
}
}
}
Validation errors are reported to the Errors object passed to the validator. In the case of Spring Web MVC, you can use the <spring:bind/> tag to inspect the error messages, but you can also inspect the Errors object yourself. More information about the methods it offers can be found in the javadoc.
3.2. Resolving Codes to Error Messages
We covered data binding and validation. This section covers outputting messages that correspond to validation errors. In the example shown in the preceding section, we rejected the name and age fields. If we want to output the error messages by using a MessageSource, we can do so using the error code we provide when rejecting the field ('name' and 'age' in this case). When you call (either directly, or indirectly, by using, for example, the ValidationUtils class) rejectValue or one of the other reject methods from the Errors interface, the underlying implementation not only registers the code you passed in but also registers a number of additional error codes. The MessageCodesResolver determines which error codes the Errors interface registers. By default, the DefaultMessageCodesResolver is used, which (for example) not only registers a message with the code you gave but also registers messages that include the field name you passed to the reject method. So, if you reject a field by using rejectValue("age", "too.darn.old"), apart from the too.darn.old code, Spring also registers too.darn.old.age and too.darn.old.age.int (the first includes the field name and the second includes the type of the field). This is done as a convenience to aid developers when targeting error messages.
More information on the MessageCodesResolver and the default strategy can be found in the javadoc of MessageCodesResolver and DefaultMessageCodesResolver, respectively.
3.3. Bean Manipulation and the BeanWrapper
The org.springframework.beans package adheres to the JavaBeans standard. A JavaBean is a class with a default no-argument constructor and that follows a naming convention where (for example) a property named bingoMadness would have a setter method setBingoMadness(..) and a getter method getBingoMadness(). For more information about JavaBeans and the specification, see javabeans.
One quite important class in the beans package is the BeanWrapper interface and its corresponding implementation (BeanWrapperImpl). As quoted from the javadoc, the BeanWrapper offers functionality to set and get property values (individually or in bulk), get property descriptors, and query properties to determine if they are readable or writable. Also, the BeanWrapper offers support for nested properties, enabling the setting of properties on sub-properties to an unlimited depth. The BeanWrapper also supports the ability to add standard JavaBeans PropertyChangeListeners and VetoableChangeListeners, without the need for supporting code in the target class. Last but not least, the BeanWrapper provides support for setting indexed properties. The BeanWrapper usually is not used by application code directly but is used by the DataBinder and the BeanFactory.
The way the BeanWrapper works is partly indicated by its name: it wraps a bean to perform actions on that bean, such as setting and retrieving properties.
3.3.1. Setting and Getting Basic and Nested Properties
Setting and getting properties is done through the setPropertyValue and getPropertyValue overloaded method variants of BeanWrapper. See their javadoc for details. The following table shows some examples of these conventions:
Table 11. Examples of properties
Expression Explanation
name
Indicates the property name that corresponds to the getName() or isName() and setName(..) methods.
account.name
Indicates the nested property name of the property account that corresponds to (for example) the getAccount().setName() or getAccount().getName() methods.
account[2]
Indicates the third element of the indexed property account. Indexed properties can be of type array, list, or other naturally ordered collection.
account[COMPANYNAME]
Indicates the value of the map entry indexed by the COMPANYNAME key of the account Map property.
(This next section is not vitally important to you if you do not plan to work with the BeanWrapper directly. If you use only the DataBinder and the BeanFactory and their default implementations, you should skip ahead to the section on PropertyEditors.)
The following two example classes use the BeanWrapper to get and set properties:
Java
public class Company {
private String name;
private Employee managingDirector;
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public Employee getManagingDirector() {
return this.managingDirector;
}
public void setManagingDirector(Employee managingDirector) {
this.managingDirector = managingDirector;
}
}
Kotlin
class Company {
var name: String? = null
var managingDirector: Employee? = null
}
Java
public class Employee {
private String name;
private float salary;
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public float getSalary() {
return salary;
}
public void setSalary(float salary) {
this.salary = salary;
}
}
Kotlin
class Employee {
var name: String? = null
var salary: Float? = null
}
The following code snippets show some examples of how to retrieve and manipulate some of the properties of instantiated Companies and Employees:
Java
BeanWrapper company = new BeanWrapperImpl(new Company());
// setting the company name..
company.setPropertyValue("name", "Some Company Inc.");
// ... can also be done like this:
PropertyValue value = new PropertyValue("name", "Some Company Inc.");
company.setPropertyValue(value);
// ok, let's create the director and tie it to the company:
BeanWrapper jim = new BeanWrapperImpl(new Employee());
jim.setPropertyValue("name", "Jim Stravinsky");
company.setPropertyValue("managingDirector", jim.getWrappedInstance());
// retrieving the salary of the managingDirector through the company
Float salary = (Float) company.getPropertyValue("managingDirector.salary");
Kotlin
val company = BeanWrapperImpl(Company())
// setting the company name..
company.setPropertyValue("name", "Some Company Inc.")
// ... can also be done like this:
val value = PropertyValue("name", "Some Company Inc.")
company.setPropertyValue(value)
// ok, let's create the director and tie it to the company:
val jim = BeanWrapperImpl(Employee())
jim.setPropertyValue("name", "Jim Stravinsky")
company.setPropertyValue("managingDirector", jim.wrappedInstance)
// retrieving the salary of the managingDirector through the company
val salary = company.getPropertyValue("managingDirector.salary") as Float?
3.3.2. Built-in PropertyEditor Implementations
Spring uses the concept of a PropertyEditor to effect the conversion between an Object and a String. It can be handy to represent properties in a different way than the object itself. For example, a Date can be represented in a human readable way (as the String: '2007-09-14'), while we can still convert the human readable form back to the original date (or, even better, convert any date entered in a human readable form back to Date objects). This behavior can be achieved by registering custom editors of type java.beans.PropertyEditor. Registering custom editors on a BeanWrapper or, alternatively, in a specific IoC container (as mentioned in the previous chapter), gives it the knowledge of how to convert properties to the desired type. For more about PropertyEditor, see the javadoc of the java.beans package from Oracle.
A couple of examples where property editing is used in Spring:
• Setting properties on beans is done by using PropertyEditor implementations. When you use String as the value of a property of some bean that you declare in an XML file, Spring (if the setter of the corresponding property has a Class parameter) uses ClassEditor to try to resolve the parameter to a Class object.
• Parsing HTTP request parameters in Spring’s MVC framework is done by using all kinds of PropertyEditor implementations that you can manually bind in all subclasses of the CommandController.
Spring has a number of built-in PropertyEditor implementations to make life easy. They are all located in the org.springframework.beans.propertyeditors package. Most (but not all, as indicated in the following table) are registered by default by BeanWrapperImpl. Where the property editor is configurable in some fashion, you can still register your own variant to override the default one. The following table describes the various PropertyEditor implementations that Spring provides:
Table 12. Built-in PropertyEditor Implementations
Class Explanation
ByteArrayPropertyEditor
Editor for byte arrays. Converts strings to their corresponding byte representations. Registered by default by BeanWrapperImpl.
ClassEditor
Parses Strings that represent classes to actual classes and vice-versa. When a class is not found, an IllegalArgumentException is thrown. By default, registered by BeanWrapperImpl.
CustomBooleanEditor
Customizable property editor for Boolean properties. By default, registered by BeanWrapperImpl but can be overridden by registering a custom instance of it as a custom editor.
CustomCollectionEditor
Property editor for collections, converting any source Collection to a given target Collection type.
CustomDateEditor
Customizable property editor for java.util.Date, supporting a custom DateFormat. NOT registered by default. Must be user-registered with the appropriate format as needed.
CustomNumberEditor
Customizable property editor for any Number subclass, such as Integer, Long, Float, or Double. By default, registered by BeanWrapperImpl but can be overridden by registering a custom instance of it as a custom editor.
FileEditor
Resolves strings to java.io.File objects. By default, registered by BeanWrapperImpl.
InputStreamEditor
One-way property editor that can take a string and produce (through an intermediate ResourceEditor and Resource) an InputStream so that InputStream properties may be directly set as strings. Note that the default usage does not close the InputStream for you. By default, registered by BeanWrapperImpl.
LocaleEditor
Can resolve strings to Locale objects and vice-versa (the string format is [language]_[country]_[variant], same as the toString() method of Locale). By default, registered by BeanWrapperImpl.
PatternEditor
Can resolve strings to java.util.regex.Pattern objects and vice-versa.
PropertiesEditor
Can convert strings (formatted with the format defined in the javadoc of the java.util.Properties class) to Properties objects. By default, registered by BeanWrapperImpl.
StringTrimmerEditor
Property editor that trims strings. Optionally allows transforming an empty string into a null value. NOT registered by default — must be user-registered.
URLEditor
Can resolve a string representation of a URL to an actual URL object. By default, registered by BeanWrapperImpl.
Spring uses the java.beans.PropertyEditorManager to set the search path for property editors that might be needed. The search path also includes sun.bean.editors, which includes PropertyEditor implementations for types such as Font, Color, and most of the primitive types. Note also that the standard JavaBeans infrastructure automatically discovers PropertyEditor classes (without you having to register them explicitly) if they are in the same package as the class they handle and have the same name as that class, with Editor appended. For example, one could have the following class and package structure, which would be sufficient for the SomethingEditor class to be recognized and used as the PropertyEditor for Something-typed properties.
com
chank
pop
Something
SomethingEditor // the PropertyEditor for the Something class
Note that you can also use the standard BeanInfo JavaBeans mechanism here as well (described to some extent here). The following example uses the BeanInfo mechanism to explicitly register one or more PropertyEditor instances with the properties of an associated class:
com
chank
pop
Something
SomethingBeanInfo // the BeanInfo for the Something class
The following Java source code for the referenced SomethingBeanInfo class associates a CustomNumberEditor with the age property of the Something class:
Java
public class SomethingBeanInfo extends SimpleBeanInfo {
public PropertyDescriptor[] getPropertyDescriptors() {
try {
final PropertyEditor numberPE = new CustomNumberEditor(Integer.class, true);
PropertyDescriptor ageDescriptor = new PropertyDescriptor("age", Something.class) {
public PropertyEditor createPropertyEditor(Object bean) {
return numberPE;
}
};
return new PropertyDescriptor[] { ageDescriptor };
}
catch (IntrospectionException ex) {
throw new Error(ex.toString());
}
}
}
Kotlin
class SomethingBeanInfo : SimpleBeanInfo() {
override fun getPropertyDescriptors(): Array<PropertyDescriptor> {
try {
val numberPE = CustomNumberEditor(Int::class.java, true)
val ageDescriptor = object : PropertyDescriptor("age", Something::class.java) {
override fun createPropertyEditor(bean: Any): PropertyEditor {
return numberPE
}
}
return arrayOf(ageDescriptor)
} catch (ex: IntrospectionException) {
throw Error(ex.toString())
}
}
}
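The naming-and-placement discovery described above is plain java.beans behavior and can be exercised without Spring at all. The following sketch uses hypothetical CurrencyCode/CurrencyCodeEditor classes (stand-ins for Something/SomethingEditor) and registers the editor explicitly with PropertyEditorManager, which has the same effect as package-based discovery:

```java
import java.beans.PropertyEditor;
import java.beans.PropertyEditorManager;
import java.beans.PropertyEditorSupport;

public class EditorLookupDemo {

	// Hypothetical value type, standing in for the Something class from the text
	public static class CurrencyCode {
		private final String code;
		public CurrencyCode(String code) { this.code = code; }
		public String getCode() { return code; }
	}

	// Its editor; with naming/placement-based discovery this would be found automatically
	public static class CurrencyCodeEditor extends PropertyEditorSupport {
		@Override
		public void setAsText(String text) {
			setValue(new CurrencyCode(text.trim().toUpperCase()));
		}
		@Override
		public String getAsText() {
			return ((CurrencyCode) getValue()).getCode();
		}
	}

	public static String roundTrip(String text) {
		// Explicit registration; same effect as package-based auto-discovery
		PropertyEditorManager.registerEditor(CurrencyCode.class, CurrencyCodeEditor.class);
		PropertyEditor editor = PropertyEditorManager.findEditor(CurrencyCode.class);
		editor.setAsText(text);
		return editor.getAsText();
	}
}
```

Calling roundTrip(" eur ") exercises the full text-to-object-to-text cycle through the JavaBeans lookup machinery.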
Registering Additional Custom PropertyEditor Implementations
When setting bean properties as string values, a Spring IoC container ultimately uses standard JavaBeans PropertyEditor implementations to convert these strings to the complex type of the property. Spring pre-registers a number of custom PropertyEditor implementations (for example, to convert a class name expressed as a string into a Class object). Additionally, Java’s standard JavaBeans PropertyEditor lookup mechanism lets a PropertyEditor for a class be named appropriately and placed in the same package as the class for which it provides support, so that it can be found automatically.
If there is a need to register other custom PropertyEditors, several mechanisms are available. The most manual approach, which is not normally convenient or recommended, is to use the registerCustomEditor() method of the ConfigurableBeanFactory interface, assuming you have a BeanFactory reference. Another (slightly more convenient) mechanism is to use a special bean factory post-processor called CustomEditorConfigurer. Although you can use bean factory post-processors with BeanFactory implementations, the CustomEditorConfigurer has a nested property setup, so we strongly recommend that you use it with the ApplicationContext, where you can deploy it in similar fashion to any other bean and where it can be automatically detected and applied.
Note that all bean factories and application contexts automatically use a number of built-in property editors, through their use of a BeanWrapper to handle property conversions. The standard property editors that the BeanWrapper registers are listed in the previous section. Additionally, ApplicationContexts override or add editors to handle resource lookups in a manner appropriate to the specific application context type.
Standard JavaBeans PropertyEditor instances are used to convert property values expressed as strings to the actual complex type of the property. You can use CustomEditorConfigurer, a bean factory post-processor, to conveniently add support for additional PropertyEditor instances to an ApplicationContext.
Consider the following example, which defines a user class called ExoticType and another class called DependsOnExoticType, which needs ExoticType set as a property:
Java
package example;
public class ExoticType {
private String name;
public ExoticType(String name) {
this.name = name;
}
}
public class DependsOnExoticType {
private ExoticType type;
public void setType(ExoticType type) {
this.type = type;
}
}
Kotlin
package example
class ExoticType(val name: String)
class DependsOnExoticType {
var type: ExoticType? = null
}
When things are properly set up, we want to be able to assign the type property as a string, which a PropertyEditor converts into an actual ExoticType instance. The following bean definition shows how to set up this relationship:
<bean id="sample" class="example.DependsOnExoticType">
<property name="type" value="aNameForExoticType"/>
</bean>
The PropertyEditor implementation could look similar to the following:
Java
// converts string representation to ExoticType object
package example;
public class ExoticTypeEditor extends PropertyEditorSupport {
public void setAsText(String text) {
setValue(new ExoticType(text.toUpperCase()));
}
}
Kotlin
// converts string representation to ExoticType object
package example
import java.beans.PropertyEditorSupport
class ExoticTypeEditor : PropertyEditorSupport() {
override fun setAsText(text: String) {
value = ExoticType(text.toUpperCase())
}
}
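Before wiring the editor into the container, it can be exercised directly through the java.beans API. The following self-contained sketch re-declares ExoticType locally (nested, purely so the example compiles on its own) and performs the same conversion the container would apply to the string from the bean definition:

```java
import java.beans.PropertyEditorSupport;

public class ExoticTypeEditorDemo {

	// Local re-declaration of ExoticType so the sketch compiles standalone
	public static class ExoticType {
		private final String name;
		public ExoticType(String name) { this.name = name; }
		public String getName() { return name; }
	}

	public static class ExoticTypeEditor extends PropertyEditorSupport {
		@Override
		public void setAsText(String text) {
			setValue(new ExoticType(text.toUpperCase()));
		}
	}

	public static String convert(String text) {
		ExoticTypeEditor editor = new ExoticTypeEditor();
		editor.setAsText(text); // what the container does with the string value
		return ((ExoticType) editor.getValue()).getName();
	}
}
```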
Finally, the following example shows how to use CustomEditorConfigurer to register the new PropertyEditor with the ApplicationContext, which will then be able to use it as needed:
<bean class="org.springframework.beans.factory.config.CustomEditorConfigurer">
<property name="customEditors">
<map>
<entry key="example.ExoticType" value="example.ExoticTypeEditor"/>
</map>
</property>
</bean>
Using PropertyEditorRegistrar
Another mechanism for registering property editors with the Spring container is to create and use a PropertyEditorRegistrar. This interface is particularly useful when you need to use the same set of property editors in several different situations. You can write a corresponding registrar and reuse it in each case. PropertyEditorRegistrar instances work in conjunction with an interface called PropertyEditorRegistry, an interface that is implemented by the Spring BeanWrapper (and DataBinder). PropertyEditorRegistrar instances are particularly convenient when used in conjunction with CustomEditorConfigurer (described here), which exposes a property called setPropertyEditorRegistrars(..). PropertyEditorRegistrar instances added to a CustomEditorConfigurer in this fashion can easily be shared with DataBinder and Spring MVC controllers. Furthermore, it avoids the need for synchronization on custom editors: A PropertyEditorRegistrar is expected to create fresh PropertyEditor instances for each bean creation attempt.
The following example shows how to create your own PropertyEditorRegistrar implementation:
Java
package com.foo.editors.spring;
public final class CustomPropertyEditorRegistrar implements PropertyEditorRegistrar {
public void registerCustomEditors(PropertyEditorRegistry registry) {
// it is expected that new PropertyEditor instances are created
registry.registerCustomEditor(ExoticType.class, new ExoticTypeEditor());
// you could register as many custom property editors as are required here...
}
}
Kotlin
package com.foo.editors.spring
import org.springframework.beans.PropertyEditorRegistrar
import org.springframework.beans.PropertyEditorRegistry
class CustomPropertyEditorRegistrar : PropertyEditorRegistrar {
override fun registerCustomEditors(registry: PropertyEditorRegistry) {
// it is expected that new PropertyEditor instances are created
registry.registerCustomEditor(ExoticType::class.java, ExoticTypeEditor())
// you could register as many custom property editors as are required here...
}
}
See also the org.springframework.beans.support.ResourceEditorRegistrar for an example PropertyEditorRegistrar implementation. Notice how in its implementation of the registerCustomEditors(..) method, it creates new instances of each property editor.
The next example shows how to configure a CustomEditorConfigurer and inject an instance of our CustomPropertyEditorRegistrar into it:
<bean class="org.springframework.beans.factory.config.CustomEditorConfigurer">
<property name="propertyEditorRegistrars">
<list>
<ref bean="customPropertyEditorRegistrar"/>
</list>
</property>
</bean>
<bean id="customPropertyEditorRegistrar"
class="com.foo.editors.spring.CustomPropertyEditorRegistrar"/>
Finally (and in a bit of a departure from the focus of this chapter for those of you using Spring’s MVC web framework), using PropertyEditorRegistrars in conjunction with data-binding Controllers (such as SimpleFormController) can be very convenient. The following example uses a PropertyEditorRegistrar in the implementation of an initBinder(..) method:
Java
public final class RegisterUserController extends SimpleFormController {
private final PropertyEditorRegistrar customPropertyEditorRegistrar;
public RegisterUserController(PropertyEditorRegistrar propertyEditorRegistrar) {
this.customPropertyEditorRegistrar = propertyEditorRegistrar;
}
protected void initBinder(HttpServletRequest request,
ServletRequestDataBinder binder) throws Exception {
this.customPropertyEditorRegistrar.registerCustomEditors(binder);
}
// other methods to do with registering a User
}
Kotlin
class RegisterUserController(
private val customPropertyEditorRegistrar: PropertyEditorRegistrar) : SimpleFormController() {
protected fun initBinder(request: HttpServletRequest,
binder: ServletRequestDataBinder) {
this.customPropertyEditorRegistrar.registerCustomEditors(binder)
}
// other methods to do with registering a User
}
This style of PropertyEditor registration can lead to concise code (the implementation of initBinder(..) is only one line long) and lets common PropertyEditor registration code be encapsulated in a class and then shared amongst as many Controllers as needed.
3.4. Spring Type Conversion
Spring 3 introduced a core.convert package that provides a general type conversion system. The system defines an SPI to implement type conversion logic and an API to perform type conversions at runtime. Within a Spring container, you can use this system as an alternative to PropertyEditor implementations to convert externalized bean property value strings to the required property types. You can also use the public API anywhere in your application where type conversion is needed.
3.4.1. Converter SPI
The SPI to implement type conversion logic is simple and strongly typed, as the following interface definition shows:
Java
package org.springframework.core.convert.converter;
public interface Converter<S, T> {
T convert(S source);
}
Kotlin
package org.springframework.core.convert.converter
interface Converter<S, T> {
fun convert(source: S): T
}
To create your own converter, implement the Converter interface and parameterize S as the type you are converting from and T as the type you are converting to. You can also transparently apply such a converter if a collection or array of S needs to be converted to an array or collection of T, provided that a delegating array or collection converter has been registered as well (which DefaultConversionService does by default).
For each call to convert(S), the source argument is guaranteed to not be null. Your Converter may throw any unchecked exception if conversion fails. Specifically, it should throw an IllegalArgumentException to report an invalid source value. Take care to ensure that your Converter implementation is thread-safe.
Several converter implementations are provided in the core.convert.support package as a convenience. These include converters from strings to numbers and other common types. The following listing shows the StringToInteger class, which is a typical Converter implementation:
Java
package org.springframework.core.convert.support;
final class StringToInteger implements Converter<String, Integer> {
public Integer convert(String source) {
return Integer.valueOf(source);
}
}
Kotlin
package org.springframework.core.convert.support
import org.springframework.core.convert.converter.Converter
internal class StringToInteger : Converter<String, Int> {
override fun convert(source: String): Int? {
return Integer.valueOf(source)
}
}
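Because the contract is a single method, the Converter pattern can be sketched without Spring on the classpath. In the following sketch, the Converter interface is re-declared locally (a stand-in for org.springframework.core.convert.converter.Converter, purely so the example compiles standalone), and a hypothetical StringToTrimmedInteger shows a custom implementation in the same style as StringToInteger:

```java
public class ConverterSketch {

	// Local stand-in for Spring's Converter SPI, declared here only
	// so this sketch runs without Spring on the classpath
	public interface Converter<S, T> {
		T convert(S source);
	}

	// A custom converter in the style of StringToInteger: trim, then parse
	public static final class StringToTrimmedInteger implements Converter<String, Integer> {
		public Integer convert(String source) {
			return Integer.valueOf(source.trim());
		}
	}

	public static int parse(String s) {
		Converter<String, Integer> c = new StringToTrimmedInteger();
		return c.convert(s);
	}
}
```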
3.4.2. Using ConverterFactory
When you need to centralize the conversion logic for an entire class hierarchy (for example, when converting from String to Enum objects), you can implement ConverterFactory, as the following example shows:
Java
package org.springframework.core.convert.converter;
public interface ConverterFactory<S, R> {
<T extends R> Converter<S, T> getConverter(Class<T> targetType);
}
Kotlin
package org.springframework.core.convert.converter
interface ConverterFactory<S, R> {
fun <T : R> getConverter(targetType: Class<T>): Converter<S, T>
}
Parameterize S to be the type you are converting from and R to be the base type defining the range of classes you can convert to. Then implement getConverter(Class<T>), where T is a subclass of R.
Consider the StringToEnumConverterFactory as an example:
Java
package org.springframework.core.convert.support;
final class StringToEnumConverterFactory implements ConverterFactory<String, Enum> {
public <T extends Enum> Converter<String, T> getConverter(Class<T> targetType) {
return new StringToEnumConverter(targetType);
}
private final class StringToEnumConverter<T extends Enum> implements Converter<String, T> {
private Class<T> enumType;
public StringToEnumConverter(Class<T> enumType) {
this.enumType = enumType;
}
public T convert(String source) {
return (T) Enum.valueOf(this.enumType, source.trim());
}
}
}
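The heart of this factory is the Enum.valueOf call. The following plain-JDK sketch (with a hypothetical Color enum) mirrors that conversion step, including the trim of the source string:

```java
public class EnumConversionDemo {

	public enum Color { RED, GREEN, BLUE }

	// Mirrors StringToEnumConverter.convert(..): trim, then resolve by constant name
	public static <T extends Enum<T>> T toEnum(Class<T> enumType, String source) {
		return Enum.valueOf(enumType, source.trim());
	}
}
```

Note that Enum.valueOf throws IllegalArgumentException for an unknown constant name, which matches the Converter contract's error-reporting guideline.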
3.4.3. Using GenericConverter
When you require a sophisticated Converter implementation, consider using the GenericConverter interface. With a more flexible but less strongly typed signature than Converter, a GenericConverter supports converting between multiple source and target types. In addition, a GenericConverter makes available source and target field context that you can use when you implement your conversion logic. Such context lets a type conversion be driven by a field annotation or by generic information declared on a field signature. The following listing shows the interface definition of GenericConverter:
Java
package org.springframework.core.convert.converter;
public interface GenericConverter {
public Set<ConvertiblePair> getConvertibleTypes();
Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType);
}
Kotlin
package org.springframework.core.convert.converter
interface GenericConverter {
fun getConvertibleTypes(): Set<ConvertiblePair>?
fun convert(@Nullable source: Any?, sourceType: TypeDescriptor, targetType: TypeDescriptor): Any?
}
To implement a GenericConverter, have getConvertibleTypes() return the supported source→target type pairs. Then implement convert(Object, TypeDescriptor, TypeDescriptor) to contain your conversion logic. The source TypeDescriptor provides access to the source field that holds the value being converted. The target TypeDescriptor provides access to the target field where the converted value is to be set.
A good example of a GenericConverter is a converter that converts between a Java array and a collection. Such an ArrayToCollectionConverter introspects the field that declares the target collection type to resolve the collection’s element type. This lets each element in the source array be converted to the collection element type before the collection is set on the target field.
Because GenericConverter is a more complex SPI interface, you should use it only when you need it. Favor Converter or ConverterFactory for basic type conversion needs.
Using ConditionalGenericConverter
Sometimes, you want a Converter to run only if a specific condition holds true. For example, you might want to run a Converter only if a specific annotation is present on the target field, or you might want to run a Converter only if a specific method (such as a static valueOf method) is defined on the target class. ConditionalGenericConverter is the union of the GenericConverter and ConditionalConverter interfaces that lets you define such custom matching criteria:
Java
public interface ConditionalConverter {
boolean matches(TypeDescriptor sourceType, TypeDescriptor targetType);
}
public interface ConditionalGenericConverter extends GenericConverter, ConditionalConverter {
}
Kotlin
interface ConditionalConverter {
fun matches(sourceType: TypeDescriptor, targetType: TypeDescriptor): Boolean
}
interface ConditionalGenericConverter : GenericConverter, ConditionalConverter
A good example of a ConditionalGenericConverter is an EntityConverter that converts between a persistent entity identifier and an entity reference. Such an EntityConverter might match only if the target entity type declares a static finder method (for example, findAccount(Long)). You might perform such a finder method check in the implementation of matches(TypeDescriptor, TypeDescriptor).
3.4.4. The ConversionService API
ConversionService defines a unified API for executing type conversion logic at runtime. Converters are often executed behind the following facade interface:
Java
package org.springframework.core.convert;
public interface ConversionService {
boolean canConvert(Class<?> sourceType, Class<?> targetType);
<T> T convert(Object source, Class<T> targetType);
boolean canConvert(TypeDescriptor sourceType, TypeDescriptor targetType);
Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType);
}
Kotlin
package org.springframework.core.convert
interface ConversionService {
fun canConvert(sourceType: Class<*>, targetType: Class<*>): Boolean
fun <T> convert(source: Any, targetType: Class<T>): T
fun canConvert(sourceType: TypeDescriptor, targetType: TypeDescriptor): Boolean
fun convert(source: Any, sourceType: TypeDescriptor, targetType: TypeDescriptor): Any
}
Most ConversionService implementations also implement ConverterRegistry, which provides an SPI for registering converters. Internally, a ConversionService implementation delegates to its registered converters to carry out type conversion logic.
A robust ConversionService implementation is provided in the core.convert.support package. GenericConversionService is the general-purpose implementation suitable for use in most environments. ConversionServiceFactory provides a convenient factory for creating common ConversionService configurations.
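As a rough illustration of the facade-over-registry idea (not Spring's actual GenericConversionService), a minimal conversion service that only delegates to registered converters might look like this:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal sketch of the facade-over-registry pattern behind ConversionService.
// This is an illustration only; Spring's GenericConversionService additionally
// handles type hierarchies, generics, and conditional converters.
public class TinyConversionService {

    private final Map<String, Function<?, ?>> converters = new HashMap<>();

    <S, T> void addConverter(Class<S> sourceType, Class<T> targetType, Function<S, T> converter) {
        converters.put(key(sourceType, targetType), converter);
    }

    boolean canConvert(Class<?> sourceType, Class<?> targetType) {
        return converters.containsKey(key(sourceType, targetType));
    }

    @SuppressWarnings("unchecked")
    <T> T convert(Object source, Class<T> targetType) {
        Function<Object, Object> fn =
                (Function<Object, Object>) converters.get(key(source.getClass(), targetType));
        if (fn == null) {
            throw new IllegalArgumentException("no converter registered");
        }
        return (T) fn.apply(source);
    }

    private static String key(Class<?> s, Class<?> t) {
        return s.getName() + "->" + t.getName();
    }

    public static void main(String[] args) {
        TinyConversionService cs = new TinyConversionService();
        cs.addConverter(String.class, Integer.class, Integer::valueOf);
        System.out.println(cs.canConvert(String.class, Integer.class)); // true
        Integer answer = cs.convert("42", Integer.class);
        System.out.println(answer); // 42
    }
}
```

The point of the facade is that callers depend only on canConvert/convert; registration (the ConverterRegistry side) stays a separate concern.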
3.4.5. Configuring a ConversionService
A ConversionService is a stateless object designed to be instantiated at application startup and then shared between multiple threads. In a Spring application, you typically configure a ConversionService instance for each Spring container (or ApplicationContext). Spring picks up that ConversionService and uses it whenever a type conversion needs to be performed by the framework. You can also inject this ConversionService into any of your beans and invoke it directly.
If no ConversionService is registered with Spring, the original PropertyEditor-based system is used.
To register a default ConversionService with Spring, add the following bean definition with an id of conversionService:
<bean id="conversionService"
class="org.springframework.context.support.ConversionServiceFactoryBean"/>
A default ConversionService can convert between strings, numbers, enums, collections, maps, and other common types. To supplement or override the default converters with your own custom converters, set the converters property. Property values can implement any of the Converter, ConverterFactory, or GenericConverter interfaces.
<bean id="conversionService"
class="org.springframework.context.support.ConversionServiceFactoryBean">
<property name="converters">
<set>
<bean class="example.MyCustomConverter"/>
</set>
</property>
</bean>
It is also common to use a ConversionService within a Spring MVC application. See Conversion and Formatting in the Spring MVC chapter.
In certain situations, you may wish to apply formatting during conversion. See The FormatterRegistry SPI for details on using FormattingConversionServiceFactoryBean.
3.4.6. Using a ConversionService Programmatically
To work with a ConversionService instance programmatically, you can inject a reference to it like you would for any other bean. The following example shows how to do so:
Java
@Service
public class MyService {
private final ConversionService conversionService;
public MyService(ConversionService conversionService) {
this.conversionService = conversionService;
}
public void doIt() {
this.conversionService.convert(...)
}
}
Kotlin
@Service
class MyService(private val conversionService: ConversionService) {
fun doIt() {
conversionService.convert(...)
}
}
For most use cases, you can use the convert method that specifies the targetType, but it does not work with more complex types, such as a collection with a parameterized element type. For example, if you want to convert a List of Integer to a List of String programmatically, you need to provide a formal definition of the source and target types.
Fortunately, TypeDescriptor provides various options to make doing so straightforward, as the following example shows:
Java
DefaultConversionService cs = new DefaultConversionService();
List<Integer> input = ...
cs.convert(input,
TypeDescriptor.forObject(input), // List<Integer> type descriptor
TypeDescriptor.collection(List.class, TypeDescriptor.valueOf(String.class)));
Kotlin
val cs = DefaultConversionService()
val input: List<Integer> = ...
cs.convert(input,
TypeDescriptor.forObject(input), // List<Integer> type descriptor
TypeDescriptor.collection(List::class.java, TypeDescriptor.valueOf(String::class.java)))
Note that DefaultConversionService automatically registers converters that are appropriate for most environments. This includes collection converters, scalar converters, and basic Object-to-String converters. You can register the same converters with any ConverterRegistry by using the static addDefaultConverters method on the DefaultConversionService class.
Converters for value types are reused for arrays and collections, so there is no need to create a specific converter to convert from a Collection of S to a Collection of T, assuming that standard collection handling is appropriate.
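The elementwise reuse described above can be sketched in plain Java: one scalar converter is applied to each element by generic collection handling, so no dedicated Collection-to-Collection converter is needed. This illustrates the principle only; it is not Spring's internal implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustration of scalar-converter reuse for collections: generic collection
// handling lifts one element converter over the whole list.
public class ElementwiseSketch {

    static <S, T> List<T> convertList(List<S> source, Function<S, T> elementConverter) {
        return source.stream().map(elementConverter).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // One scalar Integer -> String converter...
        Function<Integer, String> asText = String::valueOf;
        // ...reused for a whole collection without a List-specific converter
        List<String> out = convertList(Arrays.asList(1, 2, 3), asText);
        System.out.println(out); // [1, 2, 3]
    }
}
```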
3.5. Spring Field Formatting
As discussed in the previous section, core.convert is a general-purpose type conversion system. It provides a unified ConversionService API as well as a strongly typed Converter SPI for implementing conversion logic from one type to another. A Spring container uses this system to bind bean property values. In addition, both the Spring Expression Language (SpEL) and DataBinder use this system to bind field values. For example, when SpEL needs to coerce a Short to a Long to complete an expression.setValue(Object bean, Object value) attempt, the core.convert system performs the coercion.
Now consider the type conversion requirements of a typical client environment, such as a web or desktop application. In such environments, you typically convert from String to support the client postback process, as well as back to String to support the view rendering process. In addition, you often need to localize String values. The more general core.convert Converter SPI does not address such formatting requirements directly. To directly address them, Spring 3 introduced a convenient Formatter SPI that provides a simple and robust alternative to PropertyEditor implementations for client environments.
In general, you can use the Converter SPI when you need to implement general-purpose type conversion logic — for example, for converting between a java.util.Date and a Long. You can use the Formatter SPI when you work in a client environment (such as a web application) and need to parse and print localized field values. The ConversionService provides a unified type conversion API for both SPIs.
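The division of labor can be shown with plain JDK types: converting between Date and Long is a lossless, locale-free conversion (Converter territory), while producing and parsing a localized String is formatting (Formatter territory). This sketch uses only java.text and assumes nothing about Spring's APIs:

```java
import java.text.DateFormat;
import java.text.ParseException;
import java.util.Date;
import java.util.Locale;

// Converter vs Formatter territory, in plain JDK terms.
public class ConverterVsFormatter {

    // Converter territory: a lossless, locale-free type conversion
    static boolean roundTripsThroughMillis(Date date) {
        long millis = date.getTime();
        return new Date(millis).equals(date);
    }

    // Formatter territory: a locale-sensitive String representation;
    // printing and re-parsing the printed text must be stable
    static boolean printParseIsStable(Date date, Locale locale) {
        DateFormat fmt = DateFormat.getDateInstance(DateFormat.MEDIUM, locale);
        try {
            String printed = fmt.format(date);
            return fmt.format(fmt.parse(printed)).equals(printed);
        } catch (ParseException ex) {
            return false;
        }
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L);
        System.out.println(roundTripsThroughMillis(epoch));            // true
        System.out.println(printParseIsStable(epoch, Locale.GERMANY)); // true
    }
}
```

The locale parameter is exactly what the Printer and Parser signatures in the next section make explicit.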
3.5.1. The Formatter SPI
The Formatter SPI to implement field formatting logic is simple and strongly typed. The following listing shows the Formatter interface definition:
Java
package org.springframework.format;
public interface Formatter<T> extends Printer<T>, Parser<T> {
}
Formatter extends from the Printer and Parser building-block interfaces. The following listing shows the definitions of those two interfaces:
Java
public interface Printer<T> {
String print(T fieldValue, Locale locale);
}
Kotlin
interface Printer<T> {
fun print(fieldValue: T, locale: Locale): String
}
Java
import java.text.ParseException;
public interface Parser<T> {
T parse(String clientValue, Locale locale) throws ParseException;
}
Kotlin
interface Parser<T> {
@Throws(ParseException::class)
fun parse(clientValue: String, locale: Locale): T
}
To create your own Formatter, implement the Formatter interface shown earlier. Parameterize T to be the type of object you wish to format — for example, java.util.Date. Implement the print() operation to print an instance of T for display in the client locale. Implement the parse() operation to parse an instance of T from the formatted representation returned from the client locale. Your Formatter should throw a ParseException or an IllegalArgumentException if a parse attempt fails. Take care to ensure that your Formatter implementation is thread-safe.
The format subpackages provide several Formatter implementations as a convenience. The number package provides NumberStyleFormatter, CurrencyStyleFormatter, and PercentStyleFormatter to format Number objects that use a java.text.NumberFormat. The datetime package provides a DateFormatter to format java.util.Date objects with a java.text.DateFormat. The datetime.joda package provides comprehensive datetime formatting support based on the Joda-Time library.
The following DateFormatter is an example Formatter implementation:
Java
package org.springframework.format.datetime;
public final class DateFormatter implements Formatter<Date> {
private String pattern;
public DateFormatter(String pattern) {
this.pattern = pattern;
}
public String print(Date date, Locale locale) {
if (date == null) {
return "";
}
return getDateFormat(locale).format(date);
}
public Date parse(String formatted, Locale locale) throws ParseException {
if (formatted.length() == 0) {
return null;
}
return getDateFormat(locale).parse(formatted);
}
protected DateFormat getDateFormat(Locale locale) {
DateFormat dateFormat = new SimpleDateFormat(this.pattern, locale);
dateFormat.setLenient(false);
return dateFormat;
}
}
Kotlin
class DateFormatter(private val pattern: String) : Formatter<Date> {
override fun print(date: Date, locale: Locale)
= getDateFormat(locale).format(date)
@Throws(ParseException::class)
override fun parse(formatted: String, locale: Locale)
= getDateFormat(locale).parse(formatted)
protected fun getDateFormat(locale: Locale): DateFormat {
val dateFormat = SimpleDateFormat(this.pattern, locale)
dateFormat.isLenient = false
return dateFormat
}
}
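The setLenient(false) call in the example above matters more than it may appear: a lenient SimpleDateFormat silently rolls an invalid date such as February 30 forward, while a strict one rejects it with a ParseException, which is usually what you want for client input. A small stand-alone demonstration (plain JDK, no Spring types):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;

// Demonstrates why DateFormatter calls setLenient(false): strict parsing
// rejects impossible dates instead of silently adjusting them.
public class LenientParsingDemo {

    static boolean strictParseFails(String text) {
        SimpleDateFormat strict = new SimpleDateFormat("yyyy-MM-dd", Locale.US);
        strict.setLenient(false);
        try {
            strict.parse(text);
            return false;
        } catch (ParseException ex) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(strictParseFails("2024-02-30")); // true: rejected
        System.out.println(strictParseFails("2024-02-29")); // false: 2024 is a leap year
    }
}
```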
The Spring team welcomes community-driven Formatter contributions. See GitHub Issues to contribute.
3.5.2. Annotation-driven Formatting
Field formatting can be configured by field type or annotation. To bind an annotation to a Formatter, implement AnnotationFormatterFactory. The following listing shows the definition of the AnnotationFormatterFactory interface:
Java
package org.springframework.format;
public interface AnnotationFormatterFactory<A extends Annotation> {
Set<Class<?>> getFieldTypes();
Printer<?> getPrinter(A annotation, Class<?> fieldType);
Parser<?> getParser(A annotation, Class<?> fieldType);
}
Kotlin
package org.springframework.format
interface AnnotationFormatterFactory<A : Annotation> {
val fieldTypes: Set<Class<*>>
fun getPrinter(annotation: A, fieldType: Class<*>): Printer<*>
fun getParser(annotation: A, fieldType: Class<*>): Parser<*>
}
To create an implementation:
1. Parameterize A to be the field annotation type with which you wish to associate formatting logic (for example, org.springframework.format.annotation.DateTimeFormat).
2. Have getFieldTypes() return the types of fields on which the annotation can be used.
3. Have getPrinter() return a Printer to print the value of an annotated field.
4. Have getParser() return a Parser to parse a clientValue for an annotated field.
The following example AnnotationFormatterFactory implementation binds the @NumberFormat annotation to a formatter to let a number style or pattern be specified:
Java
public final class NumberFormatAnnotationFormatterFactory
implements AnnotationFormatterFactory<NumberFormat> {
public Set<Class<?>> getFieldTypes() {
return new HashSet<Class<?>>(asList(new Class<?>[] {
Short.class, Integer.class, Long.class, Float.class,
Double.class, BigDecimal.class, BigInteger.class }));
}
public Printer<Number> getPrinter(NumberFormat annotation, Class<?> fieldType) {
return configureFormatterFrom(annotation, fieldType);
}
public Parser<Number> getParser(NumberFormat annotation, Class<?> fieldType) {
return configureFormatterFrom(annotation, fieldType);
}
private Formatter<Number> configureFormatterFrom(NumberFormat annotation, Class<?> fieldType) {
if (!annotation.pattern().isEmpty()) {
return new NumberStyleFormatter(annotation.pattern());
} else {
Style style = annotation.style();
if (style == Style.PERCENT) {
return new PercentStyleFormatter();
} else if (style == Style.CURRENCY) {
return new CurrencyStyleFormatter();
} else {
return new NumberStyleFormatter();
}
}
}
}
Kotlin
class NumberFormatAnnotationFormatterFactory : AnnotationFormatterFactory<NumberFormat> {
override fun getFieldTypes(): Set<Class<*>> {
return setOf(Short::class.java, Int::class.java, Long::class.java, Float::class.java, Double::class.java, BigDecimal::class.java, BigInteger::class.java)
}
override fun getPrinter(annotation: NumberFormat, fieldType: Class<*>): Printer<Number> {
return configureFormatterFrom(annotation, fieldType)
}
override fun getParser(annotation: NumberFormat, fieldType: Class<*>): Parser<Number> {
return configureFormatterFrom(annotation, fieldType)
}
private fun configureFormatterFrom(annotation: NumberFormat, fieldType: Class<*>): Formatter<Number> {
return if (annotation.pattern.isNotEmpty()) {
NumberStyleFormatter(annotation.pattern)
} else {
val style = annotation.style
when {
style === NumberFormat.Style.PERCENT -> PercentStyleFormatter()
style === NumberFormat.Style.CURRENCY -> CurrencyStyleFormatter()
else -> NumberStyleFormatter()
}
}
}
}
To trigger formatting, you can annotate fields with @NumberFormat, as the following example shows:
Java
public class MyModel {
@NumberFormat(style=Style.CURRENCY)
private BigDecimal decimal;
}
Kotlin
class MyModel(
@field:NumberFormat(style = Style.CURRENCY) private val decimal: BigDecimal
)
Format Annotation API
A portable format annotation API exists in the org.springframework.format.annotation package. You can use @NumberFormat to format Number fields such as Double and Long, and @DateTimeFormat to format java.util.Date, java.util.Calendar, Long (for millisecond timestamps) as well as JSR-310 java.time and Joda-Time value types.
The following example uses @DateTimeFormat to format a java.util.Date as an ISO Date (yyyy-MM-dd):
Java
public class MyModel {
@DateTimeFormat(iso=ISO.DATE)
private Date date;
}
Kotlin
class MyModel(
@DateTimeFormat(iso = ISO.DATE) private val date: Date
)
3.5.3. The FormatterRegistry SPI
The FormatterRegistry is an SPI for registering formatters and converters. FormattingConversionService is an implementation of FormatterRegistry suitable for most environments. You can programmatically or declaratively configure this variant as a Spring bean, e.g. by using FormattingConversionServiceFactoryBean. Because this implementation also implements ConversionService, you can directly configure it for use with Spring’s DataBinder and the Spring Expression Language (SpEL).
The following listing shows the FormatterRegistry SPI:
Java
package org.springframework.format;
public interface FormatterRegistry extends ConverterRegistry {
void addFormatterForFieldType(Class<?> fieldType, Printer<?> printer, Parser<?> parser);
void addFormatterForFieldType(Class<?> fieldType, Formatter<?> formatter);
void addFormatterForFieldType(Formatter<?> formatter);
void addFormatterForAnnotation(AnnotationFormatterFactory<?> factory);
}
Kotlin
package org.springframework.format
interface FormatterRegistry : ConverterRegistry {
fun addFormatterForFieldType(fieldType: Class<*>, printer: Printer<*>, parser: Parser<*>)
fun addFormatterForFieldType(fieldType: Class<*>, formatter: Formatter<*>)
fun addFormatterForFieldType(formatter: Formatter<*>)
fun addFormatterForAnnotation(factory: AnnotationFormatterFactory<*>)
}
As shown in the preceding listing, you can register formatters by field type or by annotation.
The FormatterRegistry SPI lets you configure formatting rules centrally, instead of duplicating such configuration across your controllers. For example, you might want to enforce that all date fields are formatted a certain way or that fields with a specific annotation are formatted in a certain way. With a shared FormatterRegistry, you define these rules once, and they are applied whenever formatting is needed.
3.5.4. The FormatterRegistrar SPI
FormatterRegistrar is an SPI for registering formatters and converters through the FormatterRegistry. The following listing shows its interface definition:
Java
package org.springframework.format;
public interface FormatterRegistrar {
void registerFormatters(FormatterRegistry registry);
}
Kotlin
package org.springframework.format
interface FormatterRegistrar {
fun registerFormatters(registry: FormatterRegistry)
}
A FormatterRegistrar is useful when registering multiple related converters and formatters for a given formatting category, such as date formatting. It can also be useful where declarative registration is insufficient — for example, when a formatter needs to be indexed under a specific field type different from its own <T> or when registering a Printer/Parser pair. The next section provides more information on converter and formatter registration.
3.5.5. Configuring Formatting in Spring MVC
See Conversion and Formatting in the Spring MVC chapter.
3.6. Configuring a Global Date and Time Format
By default, date and time fields not annotated with @DateTimeFormat are converted from strings by using the DateFormat.SHORT style. If you prefer, you can change this by defining your own global format.
To do that, ensure that Spring does not register default formatters. Instead, register formatters manually with the help of:
• org.springframework.format.datetime.standard.DateTimeFormatterRegistrar
• org.springframework.format.datetime.DateFormatterRegistrar, or org.springframework.format.datetime.joda.JodaTimeFormatterRegistrar for Joda-Time.
For example, the following Java configuration registers a global yyyyMMdd format:
Java
@Configuration
public class AppConfig {
@Bean
public FormattingConversionService conversionService() {
// Use the DefaultFormattingConversionService but do not register defaults
DefaultFormattingConversionService conversionService = new DefaultFormattingConversionService(false);
// Ensure @NumberFormat is still supported
conversionService.addFormatterForFieldAnnotation(new NumberFormatAnnotationFormatterFactory());
// Register JSR-310 date conversion with a specific global format
DateTimeFormatterRegistrar registrar = new DateTimeFormatterRegistrar();
registrar.setDateFormatter(DateTimeFormatter.ofPattern("yyyyMMdd"));
registrar.registerFormatters(conversionService);
// Register date conversion with a specific global format
DateFormatterRegistrar dateRegistrar = new DateFormatterRegistrar();
dateRegistrar.setFormatter(new DateFormatter("yyyyMMdd"));
dateRegistrar.registerFormatters(conversionService);
return conversionService;
}
}
Kotlin
@Configuration
class AppConfig {
@Bean
fun conversionService(): FormattingConversionService {
// Use the DefaultFormattingConversionService but do not register defaults
return DefaultFormattingConversionService(false).apply {
// Ensure @NumberFormat is still supported
addFormatterForFieldAnnotation(NumberFormatAnnotationFormatterFactory())
// Register JSR-310 date conversion with a specific global format
val registrar = DateTimeFormatterRegistrar()
registrar.setDateFormatter(DateTimeFormatter.ofPattern("yyyyMMdd"))
registrar.registerFormatters(this)
// Register date conversion with a specific global format
val dateRegistrar = DateFormatterRegistrar()
dateRegistrar.setFormatter(DateFormatter("yyyyMMdd"))
dateRegistrar.registerFormatters(this)
}
}
}
If you prefer XML-based configuration, you can use a FormattingConversionServiceFactoryBean. The following example shows how to do so (this time using Joda-Time):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="conversionService" class="org.springframework.format.support.FormattingConversionServiceFactoryBean">
<property name="registerDefaultFormatters" value="false" />
<property name="formatters">
<set>
<bean class="org.springframework.format.number.NumberFormatAnnotationFormatterFactory" />
</set>
</property>
<property name="formatterRegistrars">
<set>
<bean class="org.springframework.format.datetime.joda.JodaTimeFormatterRegistrar">
<property name="dateFormatter">
<bean class="org.springframework.format.datetime.joda.DateTimeFormatterFactoryBean">
<property name="pattern" value="yyyyMMdd"/>
</bean>
</property>
</bean>
</set>
</property>
</bean>
</beans>
Note there are extra considerations when configuring date and time formats in web applications. Please see WebMVC Conversion and Formatting or WebFlux Conversion and Formatting.
3.7. Java Bean Validation
The Spring Framework provides support for the Java Bean Validation API.
3.7.1. Overview of Bean Validation
Bean Validation provides a common way of validation through constraint declaration and metadata for Java applications. To use it, you annotate domain model properties with declarative validation constraints which are then enforced by the runtime. There are built-in constraints, and you can also define your own custom constraints.
Consider the following example, which shows a simple PersonForm model with two properties:
Java
public class PersonForm {
private String name;
private int age;
}
Kotlin
class PersonForm(
private val name: String,
private val age: Int
)
Bean Validation lets you declare constraints as the following example shows:
Java
public class PersonForm {
@NotNull
@Size(max=64)
private String name;
@Min(0)
private int age;
}
Kotlin
class PersonForm(
@get:NotNull @get:Size(max=64)
private val name: String,
@get:Min(0)
private val age: Int
)
A Bean Validation validator then validates instances of this class based on the declared constraints. See Bean Validation for general information about the API. See the Hibernate Validator documentation for specific constraints. To learn how to set up a bean validation provider as a Spring bean, keep reading.
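To make the constraint semantics concrete, here is what a validator enforces for the PersonForm above, written as plain checks. The method name and error messages are illustrative; a real Bean Validation provider derives this logic from the annotations and reports ConstraintViolation objects rather than strings:

```java
import java.util.ArrayList;
import java.util.List;

// What the @NotNull, @Size(max=64), and @Min(0) declarations above enforce,
// expressed as plain checks. Illustrative only; a provider such as Hibernate
// Validator derives exactly this logic from the annotations.
public class PersonFormChecks {

    static List<String> validate(String name, int age) {
        List<String> errors = new ArrayList<>();
        if (name == null) {
            errors.add("name: must not be null");        // @NotNull
        } else if (name.length() > 64) {
            errors.add("name: size must be at most 64"); // @Size(max=64)
        }
        if (age < 0) {
            errors.add("age: must be at least 0");       // @Min(0)
        }
        return errors;
    }

    public static void main(String[] args) {
        System.out.println(validate("Ada", 36)); // []
        System.out.println(validate(null, -1));  // two violations
    }
}
```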
3.7.2. Configuring a Bean Validation Provider
Spring provides full support for the Bean Validation API including the bootstrapping of a Bean Validation provider as a Spring bean. This lets you inject a javax.validation.ValidatorFactory or javax.validation.Validator wherever validation is needed in your application.
You can use the LocalValidatorFactoryBean to configure a default Validator as a Spring bean, as the following example shows:
Java
import org.springframework.validation.beanvalidation.LocalValidatorFactoryBean;
@Configuration
public class AppConfig {
@Bean
public LocalValidatorFactoryBean validator() {
return new LocalValidatorFactoryBean();
}
}
XML
<bean id="validator"
class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean"/>
The basic configuration in the preceding example triggers bean validation to initialize by using its default bootstrap mechanism. A Bean Validation provider, such as the Hibernate Validator, is expected to be present in the classpath and is automatically detected.
Injecting a Validator
LocalValidatorFactoryBean implements both javax.validation.ValidatorFactory and javax.validation.Validator, as well as Spring’s org.springframework.validation.Validator. You can inject a reference to either of these interfaces into beans that need to invoke validation logic.
You can inject a reference to javax.validation.Validator if you prefer to work with the Bean Validation API directly, as the following example shows:
Java
import javax.validation.Validator;
@Service
public class MyService {
@Autowired
private Validator validator;
}
Kotlin
import javax.validation.Validator
@Service
class MyService(@Autowired private val validator: Validator)
You can inject a reference to org.springframework.validation.Validator if your bean requires the Spring Validation API, as the following example shows:
Java
import org.springframework.validation.Validator;
@Service
public class MyService {
@Autowired
private Validator validator;
}
Kotlin
import org.springframework.validation.Validator
@Service
class MyService(@Autowired private val validator: Validator)
Configuring Custom Constraints
Each bean validation constraint consists of two parts:
• A @Constraint annotation that declares the constraint and its configurable properties.
• An implementation of the javax.validation.ConstraintValidator interface that implements the constraint’s behavior.
To associate a declaration with an implementation, each @Constraint annotation references a corresponding ConstraintValidator implementation class. At runtime, a ConstraintValidatorFactory instantiates the referenced implementation when the constraint annotation is encountered in your domain model.
By default, the LocalValidatorFactoryBean configures a SpringConstraintValidatorFactory that uses Spring to create ConstraintValidator instances. This lets your custom ConstraintValidators benefit from dependency injection like any other Spring bean.
The following example shows a custom @Constraint declaration followed by an associated ConstraintValidator implementation that uses Spring for dependency injection:
Java
@Target({ElementType.METHOD, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy=MyConstraintValidator.class)
public @interface MyConstraint {
}
Kotlin
@Target(AnnotationTarget.FUNCTION, AnnotationTarget.FIELD)
@Retention(AnnotationRetention.RUNTIME)
@Constraint(validatedBy = MyConstraintValidator::class)
annotation class MyConstraint
Java
import javax.validation.ConstraintValidator;
public class MyConstraintValidator implements ConstraintValidator {
@Autowired
private Foo aDependency;
// ...
}
Kotlin
import javax.validation.ConstraintValidator
class MyConstraintValidator(private val aDependency: Foo) : ConstraintValidator {
// ...
}
As the preceding example shows, a ConstraintValidator implementation can have its dependencies @Autowired as any other Spring bean.
Spring-driven Method Validation
You can integrate the method validation feature supported by Bean Validation 1.1 (and, as a custom extension, also by Hibernate Validator 4.3) into a Spring context through a MethodValidationPostProcessor bean definition:
Java
import org.springframework.validation.beanvalidation.MethodValidationPostProcessor;
@Configuration
public class AppConfig {
@Bean
public MethodValidationPostProcessor validationPostProcessor() {
return new MethodValidationPostProcessor();
}
}
XML
<bean class="org.springframework.validation.beanvalidation.MethodValidationPostProcessor"/>
To be eligible for Spring-driven method validation, all target classes need to be annotated with Spring’s @Validated annotation, which can optionally also declare the validation groups to use. See MethodValidationPostProcessor for setup details with the Hibernate Validator and Bean Validation 1.1 providers.
Method validation relies on AOP proxies around the target classes: either JDK dynamic proxies for methods on interfaces or CGLIB proxies. There are certain limitations with the use of proxies, some of which are described in Understanding AOP Proxies. In addition, remember to always use methods and accessors on proxied classes; direct field access does not work.
Additional Configuration Options
The default LocalValidatorFactoryBean configuration suffices for most cases. There are a number of configuration options for various Bean Validation constructs, from message interpolation to traversal resolution. See the LocalValidatorFactoryBean javadoc for more information on these options.
3.7.3. Configuring a DataBinder
Since Spring 3, you can configure a DataBinder instance with a Validator. Once configured, you can invoke the Validator by calling binder.validate(). Any validation Errors are automatically added to the binder’s BindingResult.
The following example shows how to use a DataBinder programmatically to invoke validation logic after binding to a target object:
Java
Foo target = new Foo();
DataBinder binder = new DataBinder(target);
binder.setValidator(new FooValidator());
// bind to the target object
binder.bind(propertyValues);
// validate the target object
binder.validate();
// get BindingResult that includes any validation errors
BindingResult results = binder.getBindingResult();
Kotlin
val target = Foo()
val binder = DataBinder(target)
binder.validator = FooValidator()
// bind to the target object
binder.bind(propertyValues)
// validate the target object
binder.validate()
// get BindingResult that includes any validation errors
val results = binder.bindingResult
You can also configure a DataBinder with multiple Validator instances through dataBinder.addValidators and dataBinder.replaceValidators. This is useful when combining globally configured bean validation with a Spring Validator configured locally on a DataBinder instance. See Spring MVC Validation Configuration.
3.7.4. Spring MVC 3 Validation
See Validation in the Spring MVC chapter.
4. Spring Expression Language (SpEL)
The Spring Expression Language (“SpEL” for short) is a powerful expression language that supports querying and manipulating an object graph at runtime. The language syntax is similar to Unified EL but offers additional features, most notably method invocation and basic string templating functionality.
While there are several other Java expression languages available — OGNL, MVEL, and JBoss EL, to name a few — the Spring Expression Language was created to provide the Spring community with a single well supported expression language that can be used across all the products in the Spring portfolio. Its language features are driven by the requirements of the projects in the Spring portfolio, including tooling requirements for code completion support within the Spring Tools for Eclipse. That said, SpEL is based on a technology-agnostic API that lets other expression language implementations be integrated, should the need arise.
While SpEL serves as the foundation for expression evaluation within the Spring portfolio, it is not directly tied to Spring and can be used independently. To be self contained, many of the examples in this chapter use SpEL as if it were an independent expression language. This requires creating a few bootstrapping infrastructure classes, such as the parser. Most Spring users need not deal with this infrastructure and can, instead, author only expression strings for evaluation. An example of this typical use is the integration of SpEL into creating XML or annotation-based bean definitions, as shown in Expression support for defining bean definitions.
This chapter covers the features of the expression language, its API, and its language syntax. In several places, Inventor and Society classes are used as the target objects for expression evaluation. These class declarations and the data used to populate them are listed at the end of the chapter.
The expression language supports the following functionality:
• Literal expressions
• Boolean and relational operators
• Regular expressions
• Class expressions
• Accessing properties, arrays, lists, and maps
• Method invocation
• Relational operators
• Assignment
• Calling constructors
• Bean references
• Array construction
• Inline lists
• Inline maps
• Ternary operator
• Variables
• User-defined functions
• Collection projection
• Collection selection
• Templated expressions
4.1. Evaluation
This section introduces the simple use of SpEL interfaces and its expression language. The complete language reference can be found in Language Reference.
The following code introduces the SpEL API to evaluate the literal string expression, Hello World.
Java
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("'Hello World'"); (1)
String message = (String) exp.getValue();
1 The value of the message variable is 'Hello World'.
Kotlin
val parser = SpelExpressionParser()
val exp = parser.parseExpression("'Hello World'") (1)
val message = exp.value as String
1 The value of the message variable is 'Hello World'.
The SpEL classes and interfaces you are most likely to use are located in the org.springframework.expression package and its sub-packages, such as spel.support.
The ExpressionParser interface is responsible for parsing an expression string. In the preceding example, the expression string is a string literal denoted by the surrounding single quotation marks. The Expression interface is responsible for evaluating the previously defined expression string. Two exceptions, ParseException and EvaluationException, can be thrown when calling parser.parseExpression and exp.getValue, respectively.
SpEL supports a wide range of features, such as calling methods, accessing properties, and calling constructors.
In the following example of method invocation, we call the concat method on the string literal:
Java
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("'Hello World'.concat('!')"); (1)
String message = (String) exp.getValue();
1 The value of message is now 'Hello World!'.
Kotlin
val parser = SpelExpressionParser()
val exp = parser.parseExpression("'Hello World'.concat('!')") (1)
val message = exp.value as String
1 The value of message is now 'Hello World!'.
The following example of calling a JavaBean property calls the String property Bytes:
Java
ExpressionParser parser = new SpelExpressionParser();
// invokes 'getBytes()'
Expression exp = parser.parseExpression("'Hello World'.bytes"); (1)
byte[] bytes = (byte[]) exp.getValue();
1 This line converts the literal to a byte array.
Kotlin
val parser = SpelExpressionParser()
// invokes 'getBytes()'
val exp = parser.parseExpression("'Hello World'.bytes") (1)
val bytes = exp.value as ByteArray
1 This line converts the literal to a byte array.
SpEL also supports nested properties by using the standard dot notation (such as prop1.prop2.prop3) and also the corresponding setting of property values. Public fields may also be accessed.
The following example shows how to use dot notation to get the length of a literal:
Java
ExpressionParser parser = new SpelExpressionParser();
// invokes 'getBytes().length'
Expression exp = parser.parseExpression("'Hello World'.bytes.length"); (1)
int length = (Integer) exp.getValue();
1 'Hello World'.bytes.length gives the length of the literal.
Kotlin
val parser = SpelExpressionParser()
// invokes 'getBytes().length'
val exp = parser.parseExpression("'Hello World'.bytes.length") (1)
val length = exp.value as Int
1 'Hello World'.bytes.length gives the length of the literal.
The String’s constructor can be called instead of using a string literal, as the following example shows:
Java
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("new String('hello world').toUpperCase()"); (1)
String message = exp.getValue(String.class);
1 Construct a new String from the literal and convert it to upper case.
Kotlin
val parser = SpelExpressionParser()
val exp = parser.parseExpression("new String('hello world').toUpperCase()") (1)
val message = exp.getValue(String::class.java)
1 Construct a new String from the literal and convert it to upper case.
Note the use of the generic method: public <T> T getValue(Class<T> desiredResultType). Using this method removes the need to cast the value of the expression to the desired result type. An EvaluationException is thrown if the value cannot be cast to the type T or converted by using the registered type converter.
The more common usage of SpEL is to provide an expression string that is evaluated against a specific object instance (called the root object). The following example shows how to retrieve the name property from an instance of the Inventor class or create a boolean condition:
Java
// Create and set a calendar
GregorianCalendar c = new GregorianCalendar();
c.set(1856, 7, 9);
// The constructor arguments are name, birthday, and nationality.
Inventor tesla = new Inventor("Nikola Tesla", c.getTime(), "Serbian");
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("name"); // Parse name as an expression
String name = (String) exp.getValue(tesla);
// name == "Nikola Tesla"
exp = parser.parseExpression("name == 'Nikola Tesla'");
boolean result = exp.getValue(tesla, Boolean.class);
// result == true
Kotlin
// Create and set a calendar
val c = GregorianCalendar()
c.set(1856, 7, 9)
// The constructor arguments are name, birthday, and nationality.
val tesla = Inventor("Nikola Tesla", c.time, "Serbian")
val parser = SpelExpressionParser()
var exp = parser.parseExpression("name") // Parse name as an expression
val name = exp.getValue(tesla) as String
// name == "Nikola Tesla"
exp = parser.parseExpression("name == 'Nikola Tesla'")
val result = exp.getValue(tesla, Boolean::class.java)
// result == true
4.1.1. Understanding EvaluationContext
The EvaluationContext interface is used when evaluating an expression to resolve properties, methods, or fields and to help perform type conversion. Spring provides two implementations.
• SimpleEvaluationContext: Exposes a subset of essential SpEL language features and configuration options, for categories of expressions that do not require the full extent of the SpEL language syntax and should be meaningfully restricted. Examples include but are not limited to data binding expressions and property-based filters.
• StandardEvaluationContext: Exposes the full set of SpEL language features and configuration options. You can use it to specify a default root object and to configure every available evaluation-related strategy.
SimpleEvaluationContext is designed to support only a subset of the SpEL language syntax. It excludes Java type references, constructors, and bean references. It also requires you to explicitly choose the level of support for properties and methods in expressions. By default, the create() static factory method enables only read access to properties. You can also obtain a builder to configure the exact level of support needed, targeting one or some combination of the following:
• Custom PropertyAccessor only (no reflection)
• Data binding properties for read-only access
• Data binding properties for read and write
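A sketch of choosing a support level with the builder. The Settings class and its locale property are illustrative only, not part of the chapter's sample classes:

```java
import org.springframework.expression.EvaluationContext;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.SimpleEvaluationContext;

public class ContextLevels {
    // Simple bean with a readable and writable property (illustrative).
    public static class Settings {
        private String locale = "en";
        public String getLocale() { return this.locale; }
        public void setLocale(String locale) { this.locale = locale; }
    }

    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();
        // Read-write data binding: property reads and writes are allowed,
        // but type references, constructors, and bean references are not.
        EvaluationContext context = SimpleEvaluationContext.forReadWriteDataBinding().build();
        Settings settings = new Settings();
        parser.parseExpression("locale").setValue(context, settings, "de");
        String locale = parser.parseExpression("locale").getValue(context, settings, String.class);
        // locale == "de"
    }
}
```

With forReadOnlyDataBinding() instead, the setValue call above would fail, which is exactly the restriction you want for untrusted or filter-style expressions.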
Type Conversion
By default, SpEL uses the conversion service available in Spring core (org.springframework.core.convert.ConversionService). This conversion service comes with many built-in converters for common conversions but is also fully extensible so that you can add custom conversions between types. Additionally, it is generics-aware. This means that, when you work with generic types in expressions, SpEL attempts conversions to maintain type correctness for any objects it encounters.
What does this mean in practice? Suppose assignment, using setValue(), is being used to set a List property. The type of the property is actually List<Boolean>. SpEL recognizes that the elements of the list need to be converted to Boolean before being placed in it. The following example shows how to do so:
Java
class Simple {
public List<Boolean> booleanList = new ArrayList<Boolean>();
}
Simple simple = new Simple();
simple.booleanList.add(true);
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
// "false" is passed in here as a String. SpEL and the conversion service
// will recognize that it needs to be a Boolean and convert it accordingly.
parser.parseExpression("booleanList[0]").setValue(context, simple, "false");
// b is false
Boolean b = simple.booleanList.get(0);
Kotlin
class Simple {
var booleanList: MutableList<Boolean> = ArrayList()
}
val simple = Simple()
simple.booleanList.add(true)
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
// "false" is passed in here as a String. SpEL and the conversion service
// will recognize that it needs to be a Boolean and convert it accordingly.
parser.parseExpression("booleanList[0]").setValue(context, simple, "false")
// b is false
val b = simple.booleanList[0]
4.1.2. Parser Configuration
It is possible to configure the SpEL expression parser by using a parser configuration object (org.springframework.expression.spel.SpelParserConfiguration). The configuration object controls the behavior of some of the expression components. For example, if you index into an array or collection and the element at the specified index is null, you can automatically create the element. This is useful when using expressions made up of a chain of property references. If you index into an array or list and specify an index that is beyond the end of the current size of the array or list, you can automatically grow the array or list to accommodate that index. The following example demonstrates how to automatically grow the list:
Java
class Demo {
public List<String> list;
}
// Turn on:
// - auto null reference initialization
// - auto collection growing
SpelParserConfiguration config = new SpelParserConfiguration(true, true);
ExpressionParser parser = new SpelExpressionParser(config);
Expression expression = parser.parseExpression("list[3]");
Demo demo = new Demo();
Object o = expression.getValue(demo);
// demo.list will now be a real collection of 4 entries
// Each entry is a new empty String
Kotlin
class Demo {
var list: List<String>? = null
}
// Turn on:
// - auto null reference initialization
// - auto collection growing
val config = SpelParserConfiguration(true, true)
val parser = SpelExpressionParser(config)
val expression = parser.parseExpression("list[3]")
val demo = Demo()
val o = expression.getValue(demo)
// demo.list will now be a real collection of 4 entries
// Each entry is a new empty String
4.1.3. SpEL Compilation
Spring Framework 4.1 includes a basic expression compiler. Expressions are usually interpreted, which provides a lot of dynamic flexibility during evaluation but does not provide optimum performance. For occasional expression usage, this is fine, but, when used by other components such as Spring Integration, performance can be very important, and there is no real need for the dynamism.
The SpEL compiler is intended to address this need. During evaluation, the compiler generates a Java class that embodies the expression behavior at runtime and uses that class to achieve much faster expression evaluation. Due to the lack of typing around expressions, the compiler uses information gathered during the interpreted evaluations of an expression when performing compilation. For example, it does not know the type of a property reference purely from the expression, but during the first interpreted evaluation, it finds out what it is. Of course, basing compilation on such derived information can cause trouble later if the types of the various expression elements change over time. For this reason, compilation is best suited to expressions whose type information is not going to change on repeated evaluations.
Consider the following basic expression:
someArray[0].someProperty.someOtherProperty < 0.1
Because the preceding expression involves array access, some property de-referencing, and numeric operations, the performance gain can be very noticeable. In an example micro benchmark run of 50000 iterations, it took 75ms to evaluate by using the interpreter and only 3ms using the compiled version of the expression.
Compiler Configuration
The compiler is not turned on by default, but you can turn it on in either of two different ways. You can turn it on by using the parser configuration process (discussed earlier) or by using a system property when SpEL usage is embedded inside another component. This section discusses both of these options.
The compiler can operate in one of three modes, which are captured in the org.springframework.expression.spel.SpelCompilerMode enum. The modes are as follows:
• OFF (default): The compiler is switched off.
• IMMEDIATE: In immediate mode, the expressions are compiled as soon as possible. This is typically after the first interpreted evaluation. If the compiled expression fails (typically due to a type changing, as described earlier), the caller of the expression evaluation receives an exception.
• MIXED: In mixed mode, the expressions silently switch between interpreted and compiled mode over time. After some number of interpreted runs, they switch to compiled form and, if something goes wrong with the compiled form (such as a type changing, as described earlier), the expression automatically switches back to interpreted form again. Sometime later, it may generate another compiled form and switch to it. Basically, the exception that the user gets in IMMEDIATE mode is instead handled internally.
IMMEDIATE mode exists because MIXED mode could cause issues for expressions that have side effects. If a compiled expression blows up after partially succeeding, it may have already done something that has affected the state of the system. If this has happened, the caller may not want it to silently re-run in interpreted mode, since part of the expression may be running twice.
After selecting a mode, use the SpelParserConfiguration to configure the parser. The following example shows how to do so:
Java
SpelParserConfiguration config = new SpelParserConfiguration(SpelCompilerMode.IMMEDIATE,
this.getClass().getClassLoader());
SpelExpressionParser parser = new SpelExpressionParser(config);
Expression expr = parser.parseExpression("payload");
MyMessage message = new MyMessage();
Object payload = expr.getValue(message);
Kotlin
val config = SpelParserConfiguration(SpelCompilerMode.IMMEDIATE,
this.javaClass.classLoader)
val parser = SpelExpressionParser(config)
val expr = parser.parseExpression("payload")
val message = MyMessage()
val payload = expr.getValue(message)
When you specify the compiler mode, you can also specify a classloader (passing null is allowed). Compiled expressions are defined in a child classloader created under any that is supplied. It is important to ensure that, if a classloader is specified, it can see all the types involved in the expression evaluation process. If you do not specify a classloader, a default classloader is used (typically the context classloader for the thread that is running during expression evaluation).
The second way to configure the compiler is for use when SpEL is embedded inside some other component and it may not be possible to configure it through a configuration object. In these cases, it is possible to use a system property. You can set the spring.expression.compiler.mode property to one of the SpelCompilerMode enum values (off, immediate, or mixed).
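The property can be supplied on the JVM command line (-Dspring.expression.compiler.mode=immediate) or set programmatically before SpEL is first used, as in this minimal sketch:

```java
public class CompilerModeProperty {
    public static void main(String[] args) {
        // Set before any SpEL classes are used, since the property is read
        // when the SpEL parser configuration initializes.
        System.setProperty("spring.expression.compiler.mode", "immediate");
        System.out.println(System.getProperty("spring.expression.compiler.mode")); // prints "immediate"
    }
}
```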
Compiler Limitations
Since Spring Framework 4.1, the basic compilation framework is in place. However, the framework does not yet support compiling every kind of expression. The initial focus has been on the common expressions that are likely to be used in performance-critical contexts. The following kinds of expression cannot be compiled at the moment:
• Expressions involving assignment
• Expressions relying on the conversion service
• Expressions using custom resolvers or accessors
• Expressions using selection or projection
More types of expression will be compilable in the future.
4.2. Expressions in Bean Definitions
You can use SpEL expressions with XML-based or annotation-based configuration metadata for defining BeanDefinition instances. In both cases, the syntax to define the expression is of the form #{ <expression string> }.
4.2.1. XML Configuration
A property or constructor argument value can be set by using expressions, as the following example shows:
<bean id="numberGuess" class="org.spring.samples.NumberGuess">
<property name="randomNumber" value="#{ T(java.lang.Math).random() * 100.0 }"/>
<!-- other properties -->
</bean>
All beans in the application context are available as predefined variables with their common bean name. This includes standard context beans such as environment (of type org.springframework.core.env.Environment) as well as systemProperties and systemEnvironment (of type Map<String, Object>) for access to the runtime environment.
The following example shows access to the systemProperties bean as a SpEL variable:
<bean id="taxCalculator" class="org.spring.samples.TaxCalculator">
<property name="defaultLocale" value="#{ systemProperties['user.region'] }"/>
<!-- other properties -->
</bean>
Note that you do not have to prefix the predefined variable with the # symbol here.
You can also refer to other bean properties by name, as the following example shows:
<bean id="numberGuess" class="org.spring.samples.NumberGuess">
<property name="randomNumber" value="#{ T(java.lang.Math).random() * 100.0 }"/>
<!-- other properties -->
</bean>
<bean id="shapeGuess" class="org.spring.samples.ShapeGuess">
<property name="initialShapeSeed" value="#{ numberGuess.randomNumber }"/>
<!-- other properties -->
</bean>
4.2.2. Annotation Configuration
To specify a default value, you can place the @Value annotation on fields, methods, and method or constructor parameters.
The following example sets the default value of a field variable:
Java
public class FieldValueTestBean {
@Value("#{ systemProperties['user.region'] }")
private String defaultLocale;
public void setDefaultLocale(String defaultLocale) {
this.defaultLocale = defaultLocale;
}
public String getDefaultLocale() {
return this.defaultLocale;
}
}
Kotlin
class FieldValueTestBean {
@Value("#{ systemProperties['user.region'] }")
var defaultLocale: String? = null
}
The following example shows the equivalent but on a property setter method:
Java
public class PropertyValueTestBean {
private String defaultLocale;
@Value("#{ systemProperties['user.region'] }")
public void setDefaultLocale(String defaultLocale) {
this.defaultLocale = defaultLocale;
}
public String getDefaultLocale() {
return this.defaultLocale;
}
}
Kotlin
class PropertyValueTestBean {
@Value("#{ systemProperties['user.region'] }")
var defaultLocale: String? = null
}
Autowired methods and constructors can also use the @Value annotation, as the following examples show:
Java
public class SimpleMovieLister {
private MovieFinder movieFinder;
private String defaultLocale;
@Autowired
public void configure(MovieFinder movieFinder,
@Value("#{ systemProperties['user.region'] }") String defaultLocale) {
this.movieFinder = movieFinder;
this.defaultLocale = defaultLocale;
}
// ...
}
Kotlin
class SimpleMovieLister {
private lateinit var movieFinder: MovieFinder
private lateinit var defaultLocale: String
@Autowired
fun configure(movieFinder: MovieFinder,
@Value("#{ systemProperties['user.region'] }") defaultLocale: String) {
this.movieFinder = movieFinder
this.defaultLocale = defaultLocale
}
// ...
}
Java
public class MovieRecommender {
private String defaultLocale;
private CustomerPreferenceDao customerPreferenceDao;
public MovieRecommender(CustomerPreferenceDao customerPreferenceDao,
@Value("#{systemProperties['user.country']}") String defaultLocale) {
this.customerPreferenceDao = customerPreferenceDao;
this.defaultLocale = defaultLocale;
}
// ...
}
Kotlin
class MovieRecommender(private val customerPreferenceDao: CustomerPreferenceDao,
@Value("#{systemProperties['user.country']}") private val defaultLocale: String) {
// ...
}
4.3. Language Reference
This section describes how the Spring Expression Language works. It covers the following topics:
4.3.1. Literal Expressions
The types of literal expressions supported are strings, numeric values (int, real, hex), boolean, and null. Strings are delimited by single quotation marks. To put a single quotation mark itself in a string, use two single quotation mark characters.
The following listing shows simple usage of literals. Typically, they are not used in isolation like this but, rather, as part of a more complex expression — for example, using a literal on one side of a logical comparison operator.
Java
ExpressionParser parser = new SpelExpressionParser();
// evals to "Hello World"
String helloWorld = (String) parser.parseExpression("'Hello World'").getValue();
double avogadrosNumber = (Double) parser.parseExpression("6.0221415E+23").getValue();
// evals to 2147483647
int maxValue = (Integer) parser.parseExpression("0x7FFFFFFF").getValue();
boolean trueValue = (Boolean) parser.parseExpression("true").getValue();
Object nullValue = parser.parseExpression("null").getValue();
Kotlin
val parser = SpelExpressionParser()
// evals to "Hello World"
val helloWorld = parser.parseExpression("'Hello World'").value as String
val avogadrosNumber = parser.parseExpression("6.0221415E+23").value as Double
// evals to 2147483647
val maxValue = parser.parseExpression("0x7FFFFFFF").value as Int
val trueValue = parser.parseExpression("true").value as Boolean
val nullValue = parser.parseExpression("null").value
Numbers support the use of the negative sign, exponential notation, and decimal points. By default, real numbers are parsed by using Double.parseDouble().
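The doubled single-quote escape described above looks like the following sketch:

```java
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class EscapedQuote {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();
        // Two single quotes inside the literal produce one quote in the result.
        String s = parser.parseExpression("'Tony''s Pizza'").getValue(String.class);
        // s == "Tony's Pizza"
    }
}
```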
4.3.2. Properties, Arrays, Lists, Maps, and Indexers
Navigating with property references is easy. To do so, use a period to indicate a nested property value. The instances of the Inventor class, pupin and tesla, were populated with data listed in the Classes used in the examples section. To navigate “down” and get Tesla’s year of birth and Pupin’s city of birth, we use the following expressions:
Java
// evals to 1856
int year = (Integer) parser.parseExpression("Birthdate.Year + 1900").getValue(context);
String city = (String) parser.parseExpression("placeOfBirth.City").getValue(context);
Kotlin
// evals to 1856
val year = parser.parseExpression("Birthdate.Year + 1900").getValue(context) as Int
val city = parser.parseExpression("placeOfBirth.City").getValue(context) as String
Case insensitivity is allowed for the first letter of property names. The contents of arrays and lists are obtained by using square bracket notation, as the following example shows:
Java
ExpressionParser parser = new SpelExpressionParser();
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
// Inventions Array
// evaluates to "Induction motor"
String invention = parser.parseExpression("inventions[3]").getValue(
context, tesla, String.class);
// Members List
// evaluates to "Nikola Tesla"
String name = parser.parseExpression("Members[0].Name").getValue(
context, ieee, String.class);
// List and Array navigation
// evaluates to "Wireless communication"
String invention = parser.parseExpression("Members[0].Inventions[6]").getValue(
context, ieee, String.class);
Kotlin
val parser = SpelExpressionParser()
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
// Inventions Array
// evaluates to "Induction motor"
val invention = parser.parseExpression("inventions[3]").getValue(
context, tesla, String::class.java)
// Members List
// evaluates to "Nikola Tesla"
val name = parser.parseExpression("Members[0].Name").getValue(
context, ieee, String::class.java)
// List and Array navigation
// evaluates to "Wireless communication"
val invention = parser.parseExpression("Members[0].Inventions[6]").getValue(
context, ieee, String::class.java)
The contents of maps are obtained by specifying the literal key value within the brackets. In the following example, because keys for the Officers map are strings, we can specify string literals:
Java
// Officer's Dictionary
Inventor pupin = parser.parseExpression("Officers['president']").getValue(
societyContext, Inventor.class);
// evaluates to "Idvor"
String city = parser.parseExpression("Officers['president'].PlaceOfBirth.City").getValue(
societyContext, String.class);
// setting values
parser.parseExpression("Officers['advisors'][0].PlaceOfBirth.Country").setValue(
societyContext, "Croatia");
Kotlin
// Officer's Dictionary
val pupin = parser.parseExpression("Officers['president']").getValue(
societyContext, Inventor::class.java)
// evaluates to "Idvor"
val city = parser.parseExpression("Officers['president'].PlaceOfBirth.City").getValue(
societyContext, String::class.java)
// setting values
parser.parseExpression("Officers['advisors'][0].PlaceOfBirth.Country").setValue(
societyContext, "Croatia")
4.3.3. Inline Lists
You can directly express lists in an expression by using {} notation.
Java
// evaluates to a Java list containing the four numbers
List numbers = (List) parser.parseExpression("{1,2,3,4}").getValue(context);
List listOfLists = (List) parser.parseExpression("{{'a','b'},{'x','y'}}").getValue(context);
Kotlin
// evaluates to a Java list containing the four numbers
val numbers = parser.parseExpression("{1,2,3,4}").getValue(context) as List<*>
val listOfLists = parser.parseExpression("{{'a','b'},{'x','y'}}").getValue(context) as List<*>
{} by itself means an empty list. For performance reasons, if the list is itself entirely composed of fixed literals, a constant list is created to represent the expression (rather than building a new list on each evaluation).
4.3.4. Inline Maps
You can also directly express maps in an expression by using {key:value} notation. The following example shows how to do so:
Java
// evaluates to a Java map containing the two entries
Map inventorInfo = (Map) parser.parseExpression("{name:'Nikola',dob:'10-July-1856'}").getValue(context);
Map mapOfMaps = (Map) parser.parseExpression("{name:{first:'Nikola',last:'Tesla'},dob:{day:10,month:'July',year:1856}}").getValue(context);
Kotlin
// evaluates to a Java map containing the two entries
val inventorInfo = parser.parseExpression("{name:'Nikola',dob:'10-July-1856'}").getValue(context) as Map<*, *>
val mapOfMaps = parser.parseExpression("{name:{first:'Nikola',last:'Tesla'},dob:{day:10,month:'July',year:1856}}").getValue(context) as Map<*, *>
{:} by itself means an empty map. For performance reasons, if the map is itself composed of fixed literals or other nested constant structures (lists or maps), a constant map is created to represent the expression (rather than building a new map on each evaluation). Quoting of the map keys is optional. The examples above do not use quoted keys.
4.3.5. Array Construction
You can build arrays by using the familiar Java syntax, optionally supplying an initializer to have the array populated at construction time. The following example shows how to do so:
Java
int[] numbers1 = (int[]) parser.parseExpression("new int[4]").getValue(context);
// Array with initializer
int[] numbers2 = (int[]) parser.parseExpression("new int[]{1,2,3}").getValue(context);
// Multi dimensional array
int[][] numbers3 = (int[][]) parser.parseExpression("new int[4][5]").getValue(context);
Kotlin
val numbers1 = parser.parseExpression("new int[4]").getValue(context) as IntArray
// Array with initializer
val numbers2 = parser.parseExpression("new int[]{1,2,3}").getValue(context) as IntArray
// Multi dimensional array
val numbers3 = parser.parseExpression("new int[4][5]").getValue(context) as Array<IntArray>
You cannot currently supply an initializer when you construct a multi-dimensional array.
4.3.6. Methods
You can invoke methods by using typical Java programming syntax. You can also invoke methods on literals. Variable arguments are also supported. The following examples show how to invoke methods:
Java
// string literal, evaluates to "bc"
String bc = parser.parseExpression("'abc'.substring(1, 3)").getValue(String.class);
// evaluates to true
boolean isMember = parser.parseExpression("isMember('Mihajlo Pupin')").getValue(
societyContext, Boolean.class);
Kotlin
// string literal, evaluates to "bc"
val bc = parser.parseExpression("'abc'.substring(1, 3)").getValue(String::class.java)
// evaluates to true
val isMember = parser.parseExpression("isMember('Mihajlo Pupin')").getValue(
societyContext, Boolean::class.java)
4.3.7. Operators
The Spring Expression Language supports the following kinds of operators:
Relational Operators
The relational operators (equal, not equal, less than, less than or equal, greater than, and greater than or equal) are supported by using standard operator notation. The following listing shows a few examples of operators:
Java
// evaluates to true
boolean trueValue = parser.parseExpression("2 == 2").getValue(Boolean.class);
// evaluates to false
boolean falseValue = parser.parseExpression("2 < -5.0").getValue(Boolean.class);
// evaluates to true
boolean trueValue = parser.parseExpression("'black' < 'block'").getValue(Boolean.class);
Kotlin
// evaluates to true
val trueValue = parser.parseExpression("2 == 2").getValue(Boolean::class.java)
// evaluates to false
val falseValue = parser.parseExpression("2 < -5.0").getValue(Boolean::class.java)
// evaluates to true
val trueValue = parser.parseExpression("'black' < 'block'").getValue(Boolean::class.java)
Greater-than and less-than comparisons against null follow a simple rule: null is treated as nothing (that is NOT as zero). As a consequence, any other value is always greater than null (X > null is always true) and no other value is ever less than nothing (X < null is always false).
If you prefer numeric comparisons instead, avoid number-based null comparisons in favor of comparisons against zero (for example, X > 0 or X < 0).
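These null-comparison rules can be checked directly, as in this small sketch:

```java
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class NullComparisons {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();
        // Any value is greater than null ...
        boolean gt = parser.parseExpression("1 > null").getValue(Boolean.class); // true
        // ... and no value is ever less than null.
        boolean lt = parser.parseExpression("1 < null").getValue(Boolean.class); // false
    }
}
```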
In addition to the standard relational operators, SpEL supports the instanceof and regular expression-based matches operator. The following listing shows examples of both:
Java
// evaluates to false
boolean falseValue = parser.parseExpression(
"'xyz' instanceof T(Integer)").getValue(Boolean.class);
// evaluates to true
boolean trueValue = parser.parseExpression(
"'5.00' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean.class);
//evaluates to false
boolean falseValue = parser.parseExpression(
"'5.0067' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean.class);
Kotlin
// evaluates to false
val falseValue = parser.parseExpression(
"'xyz' instanceof T(Integer)").getValue(Boolean::class.java)
// evaluates to true
val trueValue = parser.parseExpression(
"'5.00' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean::class.java)
//evaluates to false
val falseValue = parser.parseExpression(
"'5.0067' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean::class.java)
Be careful with primitive types, as they are immediately boxed up to the wrapper type, so 1 instanceof T(int) evaluates to false while 1 instanceof T(Integer) evaluates to true, as expected.
Each symbolic operator can also be specified as a purely alphabetic equivalent. This avoids problems where the symbols used have special meaning for the document type in which the expression is embedded (such as in an XML document). The textual equivalents are:
• lt (<)
• gt (>)
• le (<=)
• ge (>=)
• eq (==)
• ne (!=)
• div (/)
• mod (%)
• not (!)
All of the textual operators are case-insensitive.
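For instance, inside an XML attribute where < would otherwise need to be escaped as &lt;, the textual forms keep the expression readable. A small sketch:

```java
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class TextualOperators {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();
        // 'lt' is the textual equivalent of '<'
        boolean b1 = parser.parseExpression("2 lt 3").getValue(Boolean.class); // true
        // textual operators are case-insensitive
        boolean b2 = parser.parseExpression("2 LT 3").getValue(Boolean.class); // true
        // 'ne' is the textual equivalent of '!='
        boolean b3 = parser.parseExpression("2 ne 3").getValue(Boolean.class); // true
    }
}
```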
Logical Operators
SpEL supports the following logical operators:
• and (&&)
• or (||)
• not (!)
The following example shows how to use the logical operators:
Java
// -- AND --
// evaluates to false
boolean falseValue = parser.parseExpression("true and false").getValue(Boolean.class);
// evaluates to true
String expression = "isMember('Nikola Tesla') and isMember('Mihajlo Pupin')";
boolean trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);
// -- OR --
// evaluates to true
boolean trueValue = parser.parseExpression("true or false").getValue(Boolean.class);
// evaluates to true
String expression = "isMember('Nikola Tesla') or isMember('Albert Einstein')";
boolean trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);
// -- NOT --
// evaluates to false
boolean falseValue = parser.parseExpression("!true").getValue(Boolean.class);
// -- AND and NOT --
String expression = "isMember('Nikola Tesla') and !isMember('Mihajlo Pupin')";
boolean falseValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);
Kotlin
// -- AND --
// evaluates to false
val falseValue = parser.parseExpression("true and false").getValue(Boolean::class.java)
// evaluates to true
val expression = "isMember('Nikola Tesla') and isMember('Mihajlo Pupin')"
val trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean::class.java)
// -- OR --
// evaluates to true
val trueValue = parser.parseExpression("true or false").getValue(Boolean::class.java)
// evaluates to true
val expression = "isMember('Nikola Tesla') or isMember('Albert Einstein')"
val trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean::class.java)
// -- NOT --
// evaluates to false
val falseValue = parser.parseExpression("!true").getValue(Boolean::class.java)
// -- AND and NOT --
val expression = "isMember('Nikola Tesla') and !isMember('Mihajlo Pupin')"
val falseValue = parser.parseExpression(expression).getValue(societyContext, Boolean::class.java)
Mathematical Operators
You can use the addition operator on both numbers and strings. You can use the subtraction, multiplication, and division operators only on numbers. You can also use the modulus (%) and exponential power (^) operators. Standard operator precedence is enforced. The following example shows the mathematical operators in use:
Java
// Addition
int two = parser.parseExpression("1 + 1").getValue(Integer.class); // 2
String testString = parser.parseExpression(
"'test' + ' ' + 'string'").getValue(String.class); // 'test string'
// Subtraction
int four = parser.parseExpression("1 - -3").getValue(Integer.class); // 4
double d = parser.parseExpression("1000.00 - 1e4").getValue(Double.class); // -9000
// Multiplication
int six = parser.parseExpression("-2 * -3").getValue(Integer.class); // 6
double twentyFour = parser.parseExpression("2.0 * 3e0 * 4").getValue(Double.class); // 24.0
// Division
int minusTwo = parser.parseExpression("6 / -3").getValue(Integer.class); // -2
double one = parser.parseExpression("8.0 / 4e0 / 2").getValue(Double.class); // 1.0
// Modulus
int three = parser.parseExpression("7 % 4").getValue(Integer.class); // 3
int one = parser.parseExpression("8 / 5 % 2").getValue(Integer.class); // 1
// Operator precedence
int minusTwentyOne = parser.parseExpression("1+2-3*8").getValue(Integer.class); // -21
Kotlin
// Addition
val two = parser.parseExpression("1 + 1").getValue(Int::class.java) // 2
val testString = parser.parseExpression(
"'test' + ' ' + 'string'").getValue(String::class.java) // 'test string'
// Subtraction
val four = parser.parseExpression("1 - -3").getValue(Int::class.java) // 4
val d = parser.parseExpression("1000.00 - 1e4").getValue(Double::class.java) // -9000
// Multiplication
val six = parser.parseExpression("-2 * -3").getValue(Int::class.java) // 6
val twentyFour = parser.parseExpression("2.0 * 3e0 * 4").getValue(Double::class.java) // 24.0
// Division
val minusTwo = parser.parseExpression("6 / -3").getValue(Int::class.java) // -2
val one = parser.parseExpression("8.0 / 4e0 / 2").getValue(Double::class.java) // 1.0
// Modulus
val three = parser.parseExpression("7 % 4").getValue(Int::class.java) // 3
val one = parser.parseExpression("8 / 5 % 2").getValue(Int::class.java) // 1
// Operator precedence
val minusTwentyOne = parser.parseExpression("1+2-3*8").getValue(Int::class.java) // -21
The Assignment Operator
To set a property, use the assignment operator (=). This is typically done within a call to setValue but can also be done inside a call to getValue. The following listing shows both ways to use the assignment operator:
Java
Inventor inventor = new Inventor();
EvaluationContext context = SimpleEvaluationContext.forReadWriteDataBinding().build();
parser.parseExpression("Name").setValue(context, inventor, "Aleksandar Seovic");
// alternatively
String aleks = parser.parseExpression(
"Name = 'Aleksandar Seovic'").getValue(context, inventor, String.class);
Kotlin
val inventor = Inventor()
val context = SimpleEvaluationContext.forReadWriteDataBinding().build()
parser.parseExpression("Name").setValue(context, inventor, "Aleksandar Seovic")
// alternatively
val aleks = parser.parseExpression(
"Name = 'Aleksandar Seovic'").getValue(context, inventor, String::class.java)
4.3.8. Types
You can use the special T operator to specify an instance of java.lang.Class (the type). Static methods are invoked by using this operator as well. The StandardEvaluationContext uses a TypeLocator to find types, and the StandardTypeLocator (which can be replaced) is built with an understanding of the java.lang package. This means that T() references to types within java.lang do not need to be fully qualified, but all other type references must be. The following example shows how to use the T operator:
Java
Class dateClass = parser.parseExpression("T(java.util.Date)").getValue(Class.class);
Class stringClass = parser.parseExpression("T(String)").getValue(Class.class);
boolean trueValue = parser.parseExpression(
"T(java.math.RoundingMode).CEILING < T(java.math.RoundingMode).FLOOR")
.getValue(Boolean.class);
Kotlin
val dateClass = parser.parseExpression("T(java.util.Date)").getValue(Class::class.java)
val stringClass = parser.parseExpression("T(String)").getValue(Class::class.java)
val trueValue = parser.parseExpression(
"T(java.math.RoundingMode).CEILING < T(java.math.RoundingMode).FLOOR")
.getValue(Boolean::class.java)
4.3.9. Constructors
You can invoke constructors by using the new operator. You should use the fully qualified class name for all but the primitive types (int, float, and so on) and String. The following example shows how to use the new operator to invoke constructors:
Java
Inventor einstein = p.parseExpression(
"new org.spring.samples.spel.inventor.Inventor('Albert Einstein', 'German')")
.getValue(Inventor.class);
//create new inventor instance within add method of List
p.parseExpression(
"Members.add(new org.spring.samples.spel.inventor.Inventor('Albert Einstein', 'German'))")
.getValue(societyContext);
Kotlin
val einstein = p.parseExpression(
"new org.spring.samples.spel.inventor.Inventor('Albert Einstein', 'German')")
.getValue(Inventor::class.java)
//create new inventor instance within add method of List
p.parseExpression(
"Members.add(new org.spring.samples.spel.inventor.Inventor('Albert Einstein', 'German'))")
.getValue(societyContext)
4.3.10. Variables
You can reference variables in the expression by using the #variableName syntax. Variables are set by using the setVariable method on EvaluationContext implementations.
Valid variable names must be composed of one or more of the following supported characters.
• letters: A to Z and a to z
• digits: 0 to 9
• underscore: _
• dollar sign: $
The following example shows how to use variables.
Java
Inventor tesla = new Inventor("Nikola Tesla", "Serbian");
EvaluationContext context = SimpleEvaluationContext.forReadWriteDataBinding().build();
context.setVariable("newName", "Mike Tesla");
parser.parseExpression("Name = #newName").getValue(context, tesla);
System.out.println(tesla.getName()); // "Mike Tesla"
Kotlin
val tesla = Inventor("Nikola Tesla", "Serbian")
val context = SimpleEvaluationContext.forReadWriteDataBinding().build()
context.setVariable("newName", "Mike Tesla")
parser.parseExpression("Name = #newName").getValue(context, tesla)
println(tesla.name) // "Mike Tesla"
The #this and #root Variables
The #this variable is always defined and refers to the current evaluation object (against which unqualified references are resolved). The #root variable is always defined and refers to the root context object. Although #this may vary as components of an expression are evaluated, #root always refers to the root. The following examples show how to use the #this and #root variables:
Java
// create a list of integers
List<Integer> primes = new ArrayList<Integer>();
primes.addAll(Arrays.asList(2,3,5,7,11,13,17));
// create parser and set variable 'primes' as the list of integers
ExpressionParser parser = new SpelExpressionParser();
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
context.setVariable("primes", primes);
// all prime numbers > 10 from the list (using selection ?[...])
// evaluates to [11, 13, 17]
List<Integer> primesGreaterThanTen = (List<Integer>) parser.parseExpression(
"#primes.?[#this>10]").getValue(context);
Kotlin
// create a list of integers
val primes = ArrayList<Int>()
primes.addAll(listOf(2, 3, 5, 7, 11, 13, 17))
// create parser and set variable 'primes' as the list of integers
val parser = SpelExpressionParser()
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
context.setVariable("primes", primes)
// all prime numbers > 10 from the list (using selection ?[...])
// evaluates to [11, 13, 17]
val primesGreaterThanTen = parser.parseExpression(
"#primes.?[#this>10]").getValue(context) as List<Int>
4.3.11. Functions
You can extend SpEL by registering user-defined functions that can be called within the expression string. The function is registered through the EvaluationContext. The following example shows how to register a user-defined function:
Java
Method method = ...;
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
context.setVariable("myFunction", method);
Kotlin
val method: Method = ...
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
context.setVariable("myFunction", method)
For example, consider the following utility method that reverses a string:
Java
public abstract class StringUtils {
public static String reverseString(String input) {
StringBuilder backwards = new StringBuilder(input.length());
for (int i = 0; i < input.length(); i++) {
backwards.append(input.charAt(input.length() - 1 - i));
}
return backwards.toString();
}
}
Kotlin
fun reverseString(input: String): String {
val backwards = StringBuilder(input.length)
for (i in 0 until input.length) {
backwards.append(input[input.length - 1 - i])
}
return backwards.toString()
}
You can then register and use the preceding method, as the following example shows:
Java
ExpressionParser parser = new SpelExpressionParser();
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
context.setVariable("reverseString",
StringUtils.class.getDeclaredMethod("reverseString", String.class));
String helloWorldReversed = parser.parseExpression(
"#reverseString('hello')").getValue(context, String.class);
Kotlin
val parser = SpelExpressionParser()
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
context.setVariable("reverseString", ::reverseString.javaMethod)
val helloWorldReversed = parser.parseExpression(
"#reverseString('hello')").getValue(context, String::class.java)
4.3.12. Bean References
If the evaluation context has been configured with a bean resolver, you can look up beans from an expression by using the @ symbol. The following example shows how to do so:
Java
ExpressionParser parser = new SpelExpressionParser();
StandardEvaluationContext context = new StandardEvaluationContext();
context.setBeanResolver(new MyBeanResolver());
// This will end up calling resolve(context,"something") on MyBeanResolver during evaluation
Object bean = parser.parseExpression("@something").getValue(context);
Kotlin
val parser = SpelExpressionParser()
val context = StandardEvaluationContext()
context.setBeanResolver(MyBeanResolver())
// This will end up calling resolve(context,"something") on MyBeanResolver during evaluation
val bean = parser.parseExpression("@something").getValue(context)
To access a factory bean itself, you should instead prefix the bean name with an & symbol. The following example shows how to do so:
Java
ExpressionParser parser = new SpelExpressionParser();
StandardEvaluationContext context = new StandardEvaluationContext();
context.setBeanResolver(new MyBeanResolver());
// This will end up calling resolve(context,"&foo") on MyBeanResolver during evaluation
Object bean = parser.parseExpression("&foo").getValue(context);
Kotlin
val parser = SpelExpressionParser()
val context = StandardEvaluationContext()
context.setBeanResolver(MyBeanResolver())
// This will end up calling resolve(context,"&foo") on MyBeanResolver during evaluation
val bean = parser.parseExpression("&foo").getValue(context)
4.3.13. Ternary Operator (If-Then-Else)
You can use the ternary operator for performing if-then-else conditional logic inside the expression. The following listing shows a minimal example:
Java
String falseString = parser.parseExpression(
"false ? 'trueExp' : 'falseExp'").getValue(String.class);
Kotlin
val falseString = parser.parseExpression(
"false ? 'trueExp' : 'falseExp'").getValue(String::class.java)
In this case, the boolean false results in returning the string value 'falseExp'. A more realistic example follows:
Java
parser.parseExpression("Name").setValue(societyContext, "IEEE");
societyContext.setVariable("queryName", "Nikola Tesla");
expression = "isMember(#queryName)? #queryName + ' is a member of the ' " +
"+ Name + ' Society' : #queryName + ' is not a member of the ' + Name + ' Society'";
String queryResultString = parser.parseExpression(expression)
.getValue(societyContext, String.class);
// queryResultString = "Nikola Tesla is a member of the IEEE Society"
Kotlin
parser.parseExpression("Name").setValue(societyContext, "IEEE")
societyContext.setVariable("queryName", "Nikola Tesla")
expression = "isMember(#queryName)? #queryName + ' is a member of the ' " + "+ Name + ' Society' : #queryName + ' is not a member of the ' + Name + ' Society'"
val queryResultString = parser.parseExpression(expression)
.getValue(societyContext, String::class.java)
// queryResultString = "Nikola Tesla is a member of the IEEE Society"
See the next section on the Elvis operator for an even shorter syntax for the ternary operator.
4.3.14. The Elvis Operator
The Elvis operator is a shortening of the ternary operator syntax and is used in the Groovy language. With the ternary operator syntax, you usually have to repeat a variable twice, as the following example shows:
String name = "Elvis Presley";
String displayName = (name != null ? name : "Unknown");
Instead, you can use the Elvis operator (named for the resemblance to Elvis' hair style). The following example shows how to use the Elvis operator:
Java
ExpressionParser parser = new SpelExpressionParser();
String name = parser.parseExpression("name?:'Unknown'").getValue(new Inventor(), String.class);
System.out.println(name); // 'Unknown'
Kotlin
val parser = SpelExpressionParser()
val name = parser.parseExpression("name?:'Unknown'").getValue(Inventor(), String::class.java)
println(name) // 'Unknown'
The following listing shows a more complex example:
Java
ExpressionParser parser = new SpelExpressionParser();
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
Inventor tesla = new Inventor("Nikola Tesla", "Serbian");
String name = parser.parseExpression("Name?:'Elvis Presley'").getValue(context, tesla, String.class);
System.out.println(name); // Nikola Tesla
tesla.setName(null);
name = parser.parseExpression("Name?:'Elvis Presley'").getValue(context, tesla, String.class);
System.out.println(name); // Elvis Presley
Kotlin
val parser = SpelExpressionParser()
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
val tesla = Inventor("Nikola Tesla", "Serbian")
var name = parser.parseExpression("Name?:'Elvis Presley'").getValue(context, tesla, String::class.java)
println(name) // Nikola Tesla
tesla.setName(null)
name = parser.parseExpression("Name?:'Elvis Presley'").getValue(context, tesla, String::class.java)
println(name) // Elvis Presley
You can use the Elvis operator to apply default values in expressions. The following example shows how to use the Elvis operator in a @Value expression:
@Value("#{systemProperties['pop3.port'] ?: 25}")
This will inject a system property pop3.port if it is defined or 25 if not.
4.3.15. Safe Navigation Operator
The safe navigation operator is used to avoid a NullPointerException and comes from the Groovy language. Typically, when you have a reference to an object, you might need to verify that it is not null before accessing methods or properties of the object. To avoid this, the safe navigation operator returns null instead of throwing an exception. The following example shows how to use the safe navigation operator:
Java
ExpressionParser parser = new SpelExpressionParser();
EvaluationContext context = SimpleEvaluationContext.forReadOnlyDataBinding().build();
Inventor tesla = new Inventor("Nikola Tesla", "Serbian");
tesla.setPlaceOfBirth(new PlaceOfBirth("Smiljan"));
String city = parser.parseExpression("PlaceOfBirth?.City").getValue(context, tesla, String.class);
System.out.println(city); // Smiljan
tesla.setPlaceOfBirth(null);
city = parser.parseExpression("PlaceOfBirth?.City").getValue(context, tesla, String.class);
System.out.println(city); // null - does not throw NullPointerException!!!
Kotlin
val parser = SpelExpressionParser()
val context = SimpleEvaluationContext.forReadOnlyDataBinding().build()
val tesla = Inventor("Nikola Tesla", "Serbian")
tesla.setPlaceOfBirth(PlaceOfBirth("Smiljan"))
var city = parser.parseExpression("PlaceOfBirth?.City").getValue(context, tesla, String::class.java)
println(city) // Smiljan
tesla.setPlaceOfBirth(null)
city = parser.parseExpression("PlaceOfBirth?.City").getValue(context, tesla, String::class.java)
println(city) // null - does not throw NullPointerException!!!
4.3.16. Collection Selection
Selection is a powerful expression language feature that lets you transform a source collection into another collection by selecting from its entries.
Selection uses a syntax of .?[selectionExpression]. It filters the collection and returns a new collection that contains a subset of the original elements. For example, selection lets us easily get a list of Serbian inventors, as the following example shows:
Java
List<Inventor> list = (List<Inventor>) parser.parseExpression(
"Members.?[Nationality == 'Serbian']").getValue(societyContext);
Kotlin
val list = parser.parseExpression(
"Members.?[Nationality == 'Serbian']").getValue(societyContext) as List<Inventor>
Selection is possible upon both lists and maps. For a list, the selection criterion is evaluated against each individual list element. For a map, the selection criterion is evaluated against each map entry (an object of the Java type Map.Entry). Each map entry has its key and value accessible as properties for use in the selection.
The following expression returns a new map that consists of those elements of the original map where the entry value is less than 27:
Java
Map newMap = parser.parseExpression("map.?[value<27]").getValue();
Kotlin
val newMap = parser.parseExpression("map.?[value<27]").getValue()
In addition to returning all the selected elements, you can retrieve only the first or the last value. To obtain the first entry matching the selection, the syntax is .^[selectionExpression]. To obtain the last matching selection, the syntax is .$[selectionExpression].
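The first/last forms can be tried directly against a SpEL inline list literal. The following standalone sketch (not one of the chapter's examples; it assumes the spring-expression module is on the classpath) picks the first and last primes greater than 10 from the earlier list:

```java
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class FirstLastSelection {
	public static void main(String[] args) {
		ExpressionParser parser = new SpelExpressionParser();

		// first element matching the selection
		Integer first = parser.parseExpression(
				"{2, 3, 5, 7, 11, 13, 17}.^[#this > 10]").getValue(Integer.class);

		// last element matching the selection
		Integer last = parser.parseExpression(
				"{2, 3, 5, 7, 11, 13, 17}.$[#this > 10]").getValue(Integer.class);

		System.out.println(first + " " + last);
	}
}
```

Unlike .?[...], both forms return a single element rather than a collection.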
4.3.17. Collection Projection
Projection lets a collection drive the evaluation of a sub-expression, and the result is a new collection. The syntax for projection is .![projectionExpression]. For example, suppose we have a list of inventors but want the list of cities where they were born. Effectively, we want to evaluate 'placeOfBirth.city' for every entry in the inventor list. The following example uses projection to do so:
Java
// returns ['Smiljan', 'Idvor' ]
List placesOfBirth = (List) parser.parseExpression(
"Members.![placeOfBirth.city]").getValue(societyContext);
Kotlin
// returns ['Smiljan', 'Idvor' ]
val placesOfBirth = parser.parseExpression(
"Members.![placeOfBirth.city]").getValue(societyContext) as List<*>
You can also use a map to drive projection and, in this case, the projection expression is evaluated against each entry in the map (represented as a Java Map.Entry). The result of a projection across a map is a list that consists of the evaluation of the projection expression against each map entry.
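As a small sketch of map-driven projection (again assuming spring-expression on the classpath; the inline map literal is illustrative, not taken from the examples above), projecting value * 2 over each entry produces a list of doubled values:

```java
import java.util.List;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class MapProjection {
	public static void main(String[] args) {
		ExpressionParser parser = new SpelExpressionParser();

		// each map entry is exposed as a Map.Entry, so 'key' and
		// 'value' are available inside the projection expression
		List<?> doubled = parser.parseExpression(
				"{'a':1,'b':2,'c':3}.![value * 2]").getValue(List.class);

		System.out.println(doubled);
	}
}
```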
4.3.18. Expression Templating
Expression templates allow mixing literal text with one or more evaluation blocks. Each evaluation block is delimited with prefix and suffix characters that you can define. A common choice is to use #{ } as the delimiters, as the following example shows:
Java
String randomPhrase = parser.parseExpression(
"random number is #{T(java.lang.Math).random()}",
new TemplateParserContext()).getValue(String.class);
// evaluates to "random number is 0.7038186818312008"
Kotlin
val randomPhrase = parser.parseExpression(
"random number is #{T(java.lang.Math).random()}",
TemplateParserContext()).getValue(String::class.java)
// evaluates to "random number is 0.7038186818312008"
The string is evaluated by concatenating the literal text 'random number is ' with the result of evaluating the expression inside the #{ } delimiters (in this case, the result of calling the random() method). The second argument to the parseExpression() method is of the type ParserContext. The ParserContext interface is used to influence how the expression is parsed in order to support the expression templating functionality. The definition of TemplateParserContext follows:
Java
public class TemplateParserContext implements ParserContext {
public String getExpressionPrefix() {
return "#{";
}
public String getExpressionSuffix() {
return "}";
}
public boolean isTemplate() {
return true;
}
}
Kotlin
class TemplateParserContext : ParserContext {
override fun getExpressionPrefix(): String {
return "#{"
}
override fun getExpressionSuffix(): String {
return "}"
}
override fun isTemplate(): Boolean {
return true
}
}
4.4. Classes Used in the Examples
This section lists the classes used in the examples throughout this chapter.
Inventor.java
package org.spring.samples.spel.inventor;
import java.util.Date;
import java.util.GregorianCalendar;
public class Inventor {
private String name;
private String nationality;
private String[] inventions;
private Date birthdate;
private PlaceOfBirth placeOfBirth;
public Inventor(String name, String nationality) {
GregorianCalendar c= new GregorianCalendar();
this.name = name;
this.nationality = nationality;
this.birthdate = c.getTime();
}
public Inventor(String name, Date birthdate, String nationality) {
this.name = name;
this.nationality = nationality;
this.birthdate = birthdate;
}
public Inventor() {
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getNationality() {
return nationality;
}
public void setNationality(String nationality) {
this.nationality = nationality;
}
public Date getBirthdate() {
return birthdate;
}
public void setBirthdate(Date birthdate) {
this.birthdate = birthdate;
}
public PlaceOfBirth getPlaceOfBirth() {
return placeOfBirth;
}
public void setPlaceOfBirth(PlaceOfBirth placeOfBirth) {
this.placeOfBirth = placeOfBirth;
}
public void setInventions(String[] inventions) {
this.inventions = inventions;
}
public String[] getInventions() {
return inventions;
}
}
Inventor.kt
class Inventor(
var name: String,
var nationality: String,
var inventions: Array<String>? = null,
var birthdate: Date = GregorianCalendar().time,
var placeOfBirth: PlaceOfBirth? = null)
PlaceOfBirth.java
package org.spring.samples.spel.inventor;
public class PlaceOfBirth {
private String city;
private String country;
public PlaceOfBirth(String city) {
this.city=city;
}
public PlaceOfBirth(String city, String country) {
this(city);
this.country = country;
}
public String getCity() {
return city;
}
public void setCity(String s) {
this.city = s;
}
public String getCountry() {
return country;
}
public void setCountry(String country) {
this.country = country;
}
}
PlaceOfBirth.kt
class PlaceOfBirth(var city: String, var country: String? = null)
Society.java
package org.spring.samples.spel.inventor;
import java.util.*;
public class Society {
private String name;
public static String Advisors = "advisors";
public static String President = "president";
private List<Inventor> members = new ArrayList<Inventor>();
private Map officers = new HashMap();
public List getMembers() {
return members;
}
public Map getOfficers() {
return officers;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public boolean isMember(String name) {
for (Inventor inventor : members) {
if (inventor.getName().equals(name)) {
return true;
}
}
return false;
}
}
Society.kt
package org.spring.samples.spel.inventor
import java.util.*
class Society {
val Advisors = "advisors"
val President = "president"
var name: String? = null
val members = ArrayList<Inventor>()
val officers = mapOf<Any, Any>()
fun isMember(name: String): Boolean {
for (inventor in members) {
if (inventor.name == name) {
return true
}
}
return false
}
}
5. Aspect Oriented Programming with Spring
Aspect-oriented Programming (AOP) complements Object-oriented Programming (OOP) by providing another way of thinking about program structure. The key unit of modularity in OOP is the class, whereas in AOP the unit of modularity is the aspect. Aspects enable the modularization of concerns (such as transaction management) that cut across multiple types and objects. (Such concerns are often termed “crosscutting” concerns in AOP literature.)
One of the key components of Spring is the AOP framework. While the Spring IoC container does not depend on AOP (meaning you do not need to use AOP if you don’t want to), AOP complements Spring IoC to provide a very capable middleware solution.
Spring AOP with AspectJ pointcuts
Spring provides simple and powerful ways of writing custom aspects by using either a schema-based approach or the @AspectJ annotation style. Both of these styles offer fully typed advice and use of the AspectJ pointcut language while still using Spring AOP for weaving.
This chapter discusses the schema- and @AspectJ-based AOP support. The lower-level AOP support is discussed in the following chapter.
AOP is used in the Spring Framework to:
• Provide declarative enterprise services. The most important such service is declarative transaction management.
• Let users implement custom aspects, complementing their use of OOP with AOP.
If you are interested only in generic declarative services or other pre-packaged declarative middleware services such as pooling, you do not need to work directly with Spring AOP, and can skip most of this chapter.
5.1. AOP Concepts
Let us begin by defining some central AOP concepts and terminology. These terms are not Spring-specific. Unfortunately, AOP terminology is not particularly intuitive. However, it would be even more confusing if Spring used its own terminology.
• Aspect: A modularization of a concern that cuts across multiple classes. Transaction management is a good example of a crosscutting concern in enterprise Java applications. In Spring AOP, aspects are implemented by using regular classes (the schema-based approach) or regular classes annotated with the @Aspect annotation (the @AspectJ style).
• Join point: A point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution.
• Advice: Action taken by an aspect at a particular join point. Different types of advice include “around”, “before” and “after” advice. (Advice types are discussed later.) Many AOP frameworks, including Spring, model an advice as an interceptor and maintain a chain of interceptors around the join point.
• Pointcut: A predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name). The concept of join points as matched by pointcut expressions is central to AOP, and Spring uses the AspectJ pointcut expression language by default.
• Introduction: Declaring additional methods or fields on behalf of a type. Spring AOP lets you introduce new interfaces (and a corresponding implementation) to any advised object. For example, you could use an introduction to make a bean implement an IsModified interface, to simplify caching. (An introduction is known as an inter-type declaration in the AspectJ community.)
• Target object: An object being advised by one or more aspects. Also referred to as the “advised object”. Since Spring AOP is implemented by using runtime proxies, this object is always a proxied object.
• AOP proxy: An object created by the AOP framework in order to implement the aspect contracts (advise method executions and so on). In the Spring Framework, an AOP proxy is a JDK dynamic proxy or a CGLIB proxy.
• Weaving: Linking aspects with other application types or objects to create an advised object. This can be done at compile time (using the AspectJ compiler, for example), load time, or at runtime. Spring AOP, like other pure Java AOP frameworks, performs weaving at runtime.
Spring AOP includes the following types of advice:
• Before advice: Advice that runs before a join point but that does not have the ability to prevent execution flow proceeding to the join point (unless it throws an exception).
• After returning advice: Advice to be run after a join point completes normally (for example, if a method returns without throwing an exception).
• After throwing advice: Advice to be executed if a method exits by throwing an exception.
• After (finally) advice: Advice to be executed regardless of the means by which a join point exits (normal or exceptional return).
• Around advice: Advice that surrounds a join point such as a method invocation. This is the most powerful kind of advice. Around advice can perform custom behavior before and after the method invocation. It is also responsible for choosing whether to proceed to the join point or to shortcut the advised method execution by returning its own return value or throwing an exception.
Around advice is the most general kind of advice. Since Spring AOP, like AspectJ, provides a full range of advice types, we recommend that you use the least powerful advice type that can implement the required behavior. For example, if you need only to update a cache with the return value of a method, you are better off implementing an after returning advice than an around advice, although an around advice can accomplish the same thing. Using the most specific advice type provides a simpler programming model with less potential for errors. For example, you do not need to invoke the proceed() method on the JoinPoint used for around advice, and, hence, you cannot fail to invoke it.
All advice parameters are statically typed so that you work with advice parameters of the appropriate type (e.g. the type of the return value from a method execution) rather than Object arrays.
The concept of join points matched by pointcuts is the key to AOP, which distinguishes it from older technologies offering only interception. Pointcuts enable advice to be targeted independently of the object-oriented hierarchy. For example, you can apply an around advice providing declarative transaction management to a set of methods that span multiple objects (such as all business operations in the service layer).
5.2. Spring AOP Capabilities and Goals
Spring AOP is implemented in pure Java. There is no need for a special compilation process. Spring AOP does not need to control the class loader hierarchy and is thus suitable for use in a servlet container or application server.
Spring AOP currently supports only method execution join points (advising the execution of methods on Spring beans). Field interception is not implemented, although support for field interception could be added without breaking the core Spring AOP APIs. If you need to advise field access and update join points, consider a language such as AspectJ.
Spring AOP’s approach to AOP differs from that of most other AOP frameworks. The aim is not to provide the most complete AOP implementation (although Spring AOP is quite capable). Rather, the aim is to provide a close integration between AOP implementation and Spring IoC, to help solve common problems in enterprise applications.
Thus, for example, the Spring Framework’s AOP functionality is normally used in conjunction with the Spring IoC container. Aspects are configured by using normal bean definition syntax (although this allows powerful “auto-proxying” capabilities). This is a crucial difference from other AOP implementations. You cannot do some things easily or efficiently with Spring AOP, such as advise very fine-grained objects (typically, domain objects). AspectJ is the best choice in such cases. However, our experience is that Spring AOP provides an excellent solution to most problems in enterprise Java applications that are amenable to AOP.
Spring AOP never strives to compete with AspectJ to provide a comprehensive AOP solution. We believe that both proxy-based frameworks such as Spring AOP and full-blown frameworks such as AspectJ are valuable and that they are complementary, rather than in competition. Spring seamlessly integrates Spring AOP and IoC with AspectJ, to enable all uses of AOP within a consistent Spring-based application architecture. This integration does not affect the Spring AOP API or the AOP Alliance API. Spring AOP remains backward-compatible. See the following chapter for a discussion of the Spring AOP APIs.
One of the central tenets of the Spring Framework is that of non-invasiveness. This is the idea that you should not be forced to introduce framework-specific classes and interfaces into your business or domain model. However, in some places, the Spring Framework does give you the option to introduce Spring Framework-specific dependencies into your codebase. The rationale in giving you such options is because, in certain scenarios, it might be just plain easier to read or code some specific piece of functionality in such a way. However, the Spring Framework (almost) always offers you the choice: You have the freedom to make an informed decision as to which option best suits your particular use case or scenario.
One such choice that is relevant to this chapter is that of which AOP framework (and which AOP style) to choose. You have the choice of AspectJ, Spring AOP, or both. You also have the choice of either the @AspectJ annotation-style approach or the Spring XML configuration-style approach. The fact that this chapter chooses to introduce the @AspectJ-style approach first should not be taken as an indication that the Spring team favors the @AspectJ annotation-style approach over the Spring XML configuration-style.
See Choosing which AOP Declaration Style to Use for a more complete discussion of the “whys and wherefores” of each style.
5.3. AOP Proxies
Spring AOP defaults to using standard JDK dynamic proxies for AOP proxies. This enables any interface (or set of interfaces) to be proxied.
Spring AOP can also use CGLIB proxies. This is necessary to proxy classes rather than interfaces. By default, CGLIB is used if a business object does not implement an interface. As it is good practice to program to interfaces rather than classes, business classes normally implement one or more business interfaces. It is possible to force the use of CGLIB, in those (hopefully rare) cases where you need to advise a method that is not declared on an interface or where you need to pass a proxied object to a method as a concrete type.
It is important to grasp the fact that Spring AOP is proxy-based. See Understanding AOP Proxies for a thorough examination of exactly what this implementation detail actually means.
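To make the proxy-based nature concrete, the following plain-JDK sketch (not Spring API; all class and method names here are illustrative only) shows the dynamic proxy mechanism on which Spring AOP's default proxying builds. The `InvocationHandler` plays the role of the advice: every call made through the proxy passes through it before reaching the target.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class JdkProxyDemo {

	public interface AccountService {
		String transfer(String from, String to);
	}

	public static class AccountServiceImpl implements AccountService {
		public String transfer(String from, String to) {
			return "transferred " + from + "->" + to;
		}
	}

	// Wraps the target in a JDK dynamic proxy whose handler runs crosscutting
	// logic (analogous to a "before" advice) and then delegates to the target.
	public static String callThroughProxy() {
		AccountService target = new AccountServiceImpl();
		StringBuilder trace = new StringBuilder();
		InvocationHandler handler = (proxy, method, args) -> {
			trace.append("before ").append(method.getName()).append("|");
			return method.invoke(target, args);
		};
		AccountService proxyInstance = (AccountService) Proxy.newProxyInstance(
				AccountService.class.getClassLoader(),
				new Class<?>[] {AccountService.class},
				handler);
		trace.append(proxyInstance.transfer("a", "b"));
		return trace.toString();
	}

	public static void main(String[] args) {
		System.out.println(callThroughProxy());
	}
}
```

Because a JDK dynamic proxy can implement only interfaces, Spring falls back to CGLIB subclass proxies when the target class implements none, as described above.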
5.4. @AspectJ support
@AspectJ refers to a style of declaring aspects as regular Java classes annotated with annotations. The @AspectJ style was introduced by the AspectJ project as part of the AspectJ 5 release. Spring interprets the same annotations as AspectJ 5, using a library supplied by AspectJ for pointcut parsing and matching. The AOP runtime is still pure Spring AOP, though, and there is no dependency on the AspectJ compiler or weaver.
Using the AspectJ compiler and weaver enables use of the full AspectJ language and is discussed in Using AspectJ with Spring Applications.
5.4.1. Enabling @AspectJ Support
To use @AspectJ aspects in a Spring configuration, you need to enable Spring support for configuring Spring AOP based on @AspectJ aspects and auto-proxying beans based on whether or not they are advised by those aspects. By auto-proxying, we mean that, if Spring determines that a bean is advised by one or more aspects, it automatically generates a proxy for that bean to intercept method invocations and ensures that advice is executed as needed.
The @AspectJ support can be enabled with XML- or Java-style configuration. In either case, you also need to ensure that AspectJ’s aspectjweaver.jar library is on the classpath of your application (version 1.8 or later). This library is available in the lib directory of an AspectJ distribution or from the Maven Central repository.
Enabling @AspectJ Support with Java Configuration
To enable @AspectJ support with Java @Configuration, add the @EnableAspectJAutoProxy annotation, as the following example shows:
Java
@Configuration
@EnableAspectJAutoProxy
public class AppConfig {
}
Kotlin
@Configuration
@EnableAspectJAutoProxy
class AppConfig
Enabling @AspectJ Support with XML Configuration
To enable @AspectJ support with XML-based configuration, use the aop:aspectj-autoproxy element, as the following example shows:
<aop:aspectj-autoproxy/>
This assumes that you use schema support as described in XML Schema-based configuration. See the AOP schema for how to import the tags in the aop namespace.
5.4.2. Declaring an Aspect
With @AspectJ support enabled, any bean defined in your application context with a class that is an @AspectJ aspect (has the @Aspect annotation) is automatically detected by Spring and used to configure Spring AOP. The next two examples show the minimal definition required for a not-very-useful aspect.
The first of the two examples shows a regular bean definition in the application context that points to a bean class that has the @Aspect annotation:
<bean id="myAspect" class="org.xyz.NotVeryUsefulAspect">
<!-- configure properties of the aspect here -->
</bean>
The second of the two examples shows the NotVeryUsefulAspect class definition, which is annotated with the org.aspectj.lang.annotation.Aspect annotation:
Java
package org.xyz;
import org.aspectj.lang.annotation.Aspect;
@Aspect
public class NotVeryUsefulAspect {
}
Kotlin
package org.xyz
import org.aspectj.lang.annotation.Aspect
@Aspect
class NotVeryUsefulAspect
Aspects (classes annotated with @Aspect) can have methods and fields, the same as any other class. They can also contain pointcut, advice, and introduction (inter-type) declarations.
Autodetecting aspects through component scanning
You can register aspect classes as regular beans in your Spring XML configuration or autodetect them through classpath scanning — the same as any other Spring-managed bean. However, note that the @Aspect annotation is not sufficient for autodetection in the classpath. For that purpose, you need to add a separate @Component annotation (or, alternatively, a custom stereotype annotation that qualifies, as per the rules of Spring’s component scanner).
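For example, combining the two annotations as follows makes the aspect eligible for autodetection through classpath scanning (a sketch reusing the NotVeryUsefulAspect class from the earlier example):

```java
package org.xyz;

import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Component // makes the class eligible for classpath scanning
@Aspect    // marks it as an aspect once it is a Spring-managed bean
public class NotVeryUsefulAspect {
}
```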
Advising aspects with other aspects?
In Spring AOP, aspects themselves cannot be the targets of advice from other aspects. The @Aspect annotation on a class marks it as an aspect and, hence, excludes it from auto-proxying.
5.4.3. Declaring a Pointcut
Pointcuts determine join points of interest and thus enable us to control when advice executes. Spring AOP only supports method execution join points for Spring beans, so you can think of a pointcut as matching the execution of methods on Spring beans. A pointcut declaration has two parts: a signature comprising a name and any parameters and a pointcut expression that determines exactly which method executions we are interested in. In the @AspectJ annotation-style of AOP, a pointcut signature is provided by a regular method definition, and the pointcut expression is indicated by using the @Pointcut annotation (the method serving as the pointcut signature must have a void return type).
An example may help make this distinction between a pointcut signature and a pointcut expression clear. The following example defines a pointcut named anyOldTransfer that matches the execution of any method named transfer:
Java
@Pointcut("execution(* transfer(..))") // the pointcut expression
private void anyOldTransfer() {} // the pointcut signature
Kotlin
@Pointcut("execution(* transfer(..))") // the pointcut expression
private fun anyOldTransfer() {} // the pointcut signature
The pointcut expression that forms the value of the @Pointcut annotation is a regular AspectJ 5 pointcut expression. For a full discussion of AspectJ’s pointcut language, see the AspectJ Programming Guide (and, for extensions, the AspectJ 5 Developer’s Notebook) or one of the books on AspectJ (such as Eclipse AspectJ, by Colyer et al., or AspectJ in Action, by Ramnivas Laddad).
Supported Pointcut Designators
Spring AOP supports the following AspectJ pointcut designators (PCD) for use in pointcut expressions:
• execution: For matching method execution join points. This is the primary pointcut designator to use when working with Spring AOP.
• within: Limits matching to join points within certain types (the execution of a method declared within a matching type when using Spring AOP).
• this: Limits matching to join points (the execution of methods when using Spring AOP) where the bean reference (Spring AOP proxy) is an instance of the given type.
• target: Limits matching to join points (the execution of methods when using Spring AOP) where the target object (application object being proxied) is an instance of the given type.
• args: Limits matching to join points (the execution of methods when using Spring AOP) where the arguments are instances of the given types.
• @target: Limits matching to join points (the execution of methods when using Spring AOP) where the class of the executing object has an annotation of the given type.
• @args: Limits matching to join points (the execution of methods when using Spring AOP) where the runtime type of the actual arguments passed have annotations of the given types.
• @within: Limits matching to join points within types that have the given annotation (the execution of methods declared in types with the given annotation when using Spring AOP).
• @annotation: Limits matching to join points where the subject of the join point (the method being executed in Spring AOP) has the given annotation.
Other pointcut types
The full AspectJ pointcut language supports additional pointcut designators that are not supported in Spring: call, get, set, preinitialization, staticinitialization, initialization, handler, adviceexecution, withincode, cflow, cflowbelow, if, @this, and @withincode. Use of these pointcut designators in pointcut expressions interpreted by Spring AOP results in an IllegalArgumentException being thrown.
The set of pointcut designators supported by Spring AOP may be extended in future releases to support more of the AspectJ pointcut designators.
Because Spring AOP limits matching to only method execution join points, the preceding discussion of the pointcut designators gives a narrower definition than you can find in the AspectJ programming guide. In addition, AspectJ itself has type-based semantics and, at an execution join point, both this and target refer to the same object: the object executing the method. Spring AOP is a proxy-based system and differentiates between the proxy object itself (which is bound to this) and the target object behind the proxy (which is bound to target).
Due to the proxy-based nature of Spring’s AOP framework, calls within the target object are, by definition, not intercepted. For JDK proxies, only public interface method calls on the proxy can be intercepted. With CGLIB, public and protected method calls on the proxy are intercepted (and even package-visible methods, if necessary). However, common interactions through proxies should always be designed through public signatures.
Note that pointcut definitions are generally matched against any intercepted method. If a pointcut is strictly meant to be public-only, even in a CGLIB proxy scenario with potential non-public interactions through proxies, it needs to be defined accordingly.
If your interception needs include method calls or even constructors within the target class, consider the use of Spring-driven native AspectJ weaving instead of Spring’s proxy-based AOP framework. This constitutes a different mode of AOP usage with different characteristics, so be sure to make yourself familiar with weaving before making a decision.
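The self-invocation limitation described above can be demonstrated with a plain JDK dynamic proxy (not Spring API; the interface and class names are illustrative only). A method invoked internally via `this` goes straight to the target object and never passes through the proxy, so it cannot be intercepted:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class SelfInvocationDemo {

	public interface Service {
		void outer();
		void inner();
	}

	public static class ServiceImpl implements Service {
		public void outer() {
			// Self-invocation: this call goes directly to 'this',
			// never through the proxy, so it cannot be intercepted.
			inner();
		}
		public void inner() {
		}
	}

	// Returns the names of the methods actually intercepted when outer()
	// is called through a JDK dynamic proxy around the target.
	public static List<String> interceptedCalls() {
		List<String> intercepted = new ArrayList<>();
		Service target = new ServiceImpl();
		InvocationHandler handler = (proxy, method, args) -> {
			intercepted.add(method.getName());
			return method.invoke(target, args);
		};
		Service proxied = (Service) Proxy.newProxyInstance(
				Service.class.getClassLoader(),
				new Class<?>[] {Service.class},
				handler);
		proxied.outer();
		return intercepted;
	}

	public static void main(String[] args) {
		System.out.println(interceptedCalls());
	}
}
```

Only outer appears in the intercepted list; the nested inner() call bypasses the proxy entirely. The same effect occurs with Spring AOP proxies, which is why native AspectJ weaving is recommended when internal calls must be advised.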
Spring AOP also supports an additional PCD named bean. This PCD lets you limit the matching of join points to a particular named Spring bean or to a set of named Spring beans (when using wildcards). The bean PCD has the following form:
Java
bean(idOrNameOfBean)
Kotlin
bean(idOrNameOfBean)
The idOrNameOfBean token can be the name of any Spring bean. Limited wildcard support that uses the * character is provided, so, if you establish some naming conventions for your Spring beans, you can write a bean PCD expression to select them. As is the case with other pointcut designators, the bean PCD can be used with the && (and), || (or), and ! (negation) operators, too.
The bean PCD is supported only in Spring AOP and not in native AspectJ weaving. It is a Spring-specific extension to the standard PCDs that AspectJ defines and is, therefore, not available for aspects declared in the @Aspect model.
The bean PCD operates at the instance level (building on the Spring bean name concept) rather than at the type level only (to which weaving-based AOP is limited). Instance-based pointcut designators are a special capability of Spring’s proxy-based AOP framework and its close integration with the Spring bean factory, where it is natural and straightforward to identify specific beans by name.
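A short sketch of the bean PCD in use (the aspect name is hypothetical; the pattern assumes service beans are named with a Service suffix):

```java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class ServiceMonitor {

	// Matches the execution of any method on Spring beans whose bean
	// names end in "Service" (for example, tradeService, accountService).
	@Before("bean(*Service)")
	public void logEntry() {
		// ...
	}
}
```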
Combining Pointcut Expressions
You can combine pointcut expressions by using &&, || and !. You can also refer to pointcut expressions by name. The following example shows three pointcut expressions:
Java
@Pointcut("execution(public * *(..))")
private void anyPublicOperation() {} (1)
@Pointcut("within(com.xyz.someapp.trading..*)")
private void inTrading() {} (2)
@Pointcut("anyPublicOperation() && inTrading()")
private void tradingOperation() {} (3)
1 anyPublicOperation matches if a method execution join point represents the execution of any public method.
2 inTrading matches if a method execution is in the trading module.
3 tradingOperation matches if a method execution represents any public method in the trading module.
Kotlin
@Pointcut("execution(public * *(..))")
private fun anyPublicOperation() {} (1)
@Pointcut("within(com.xyz.someapp.trading..*)")
private fun inTrading() {} (2)
@Pointcut("anyPublicOperation() && inTrading()")
private fun tradingOperation() {} (3)
1 anyPublicOperation matches if a method execution join point represents the execution of any public method.
2 inTrading matches if a method execution is in the trading module.
3 tradingOperation matches if a method execution represents any public method in the trading module.
It is a best practice to build more complex pointcut expressions out of smaller named components, as shown earlier. When referring to pointcuts by name, normal Java visibility rules apply (you can see private pointcuts in the same type, protected pointcuts in the hierarchy, public pointcuts anywhere, and so on). Visibility does not affect pointcut matching.
Sharing Common Pointcut Definitions
When working with enterprise applications, developers often want to refer to modules of the application and particular sets of operations from within several aspects. We recommend defining a “SystemArchitecture” aspect that captures common pointcut expressions for this purpose. Such an aspect typically resembles the following example:
Java
package com.xyz.someapp;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
@Aspect
public class SystemArchitecture {
/**
* A join point is in the web layer if the method is defined
* in a type in the com.xyz.someapp.web package or any sub-package
* under that.
*/
@Pointcut("within(com.xyz.someapp.web..*)")
public void inWebLayer() {}
/**
* A join point is in the service layer if the method is defined
* in a type in the com.xyz.someapp.service package or any sub-package
* under that.
*/
@Pointcut("within(com.xyz.someapp.service..*)")
public void inServiceLayer() {}
/**
* A join point is in the data access layer if the method is defined
* in a type in the com.xyz.someapp.dao package or any sub-package
* under that.
*/
@Pointcut("within(com.xyz.someapp.dao..*)")
public void inDataAccessLayer() {}
/**
* A business service is the execution of any method defined on a service
* interface. This definition assumes that interfaces are placed in the
* "service" package, and that implementation types are in sub-packages.
*
* If you group service interfaces by functional area (for example,
* in packages com.xyz.someapp.abc.service and com.xyz.someapp.def.service) then
* the pointcut expression "execution(* com.xyz.someapp..service.*.*(..))"
* could be used instead.
*
* Alternatively, you can write the expression using the 'bean'
* PCD, like so "bean(*Service)". (This assumes that you have
* named your Spring service beans in a consistent fashion.)
*/
@Pointcut("execution(* com.xyz.someapp..service.*.*(..))")
public void businessService() {}
/**
* A data access operation is the execution of any method defined on a
* dao interface. This definition assumes that interfaces are placed in the
* "dao" package, and that implementation types are in sub-packages.
*/
@Pointcut("execution(* com.xyz.someapp.dao.*.*(..))")
public void dataAccessOperation() {}
}
Kotlin
package com.xyz.someapp
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.Pointcut
@Aspect
class SystemArchitecture {
/**
* A join point is in the web layer if the method is defined
* in a type in the com.xyz.someapp.web package or any sub-package
* under that.
*/
@Pointcut("within(com.xyz.someapp.web..*)")
fun inWebLayer() {
}
/**
* A join point is in the service layer if the method is defined
* in a type in the com.xyz.someapp.service package or any sub-package
* under that.
*/
@Pointcut("within(com.xyz.someapp.service..*)")
fun inServiceLayer() {
}
/**
* A join point is in the data access layer if the method is defined
* in a type in the com.xyz.someapp.dao package or any sub-package
* under that.
*/
@Pointcut("within(com.xyz.someapp.dao..*)")
fun inDataAccessLayer() {
}
/**
* A business service is the execution of any method defined on a service
* interface. This definition assumes that interfaces are placed in the
* "service" package, and that implementation types are in sub-packages.
*
* If you group service interfaces by functional area (for example,
* in packages com.xyz.someapp.abc.service and com.xyz.someapp.def.service) then
* the pointcut expression "execution(* com.xyz.someapp..service.*.*(..))"
* could be used instead.
*
* Alternatively, you can write the expression using the 'bean'
* PCD, like so "bean(*Service)". (This assumes that you have
* named your Spring service beans in a consistent fashion.)
*/
@Pointcut("execution(* com.xyz.someapp..service.*.*(..))")
fun businessService() {
}
/**
* A data access operation is the execution of any method defined on a
* dao interface. This definition assumes that interfaces are placed in the
* "dao" package, and that implementation types are in sub-packages.
*/
@Pointcut("execution(* com.xyz.someapp.dao.*.*(..))")
fun dataAccessOperation() {
}
}
You can refer to the pointcuts defined in such an aspect anywhere you need a pointcut expression. For example, to make the service layer transactional, you could write the following:
<aop:config>
<aop:advisor
pointcut="com.xyz.someapp.SystemArchitecture.businessService()"
advice-ref="tx-advice"/>
</aop:config>
<tx:advice id="tx-advice">
<tx:attributes>
<tx:method name="*" propagation="REQUIRED"/>
</tx:attributes>
</tx:advice>
The <aop:config> and <aop:advisor> elements are discussed in Schema-based AOP Support. The transaction elements are discussed in Transaction Management.
Examples
Spring AOP users are likely to use the execution pointcut designator the most often. The format of an execution expression follows:
execution(modifiers-pattern? ret-type-pattern declaring-type-pattern?name-pattern(param-pattern)
throws-pattern?)
All parts except the returning type pattern (ret-type-pattern in the preceding snippet), the name pattern, and the parameters pattern are optional. The returning type pattern determines what the return type of the method must be in order for a join point to be matched. * is most frequently used as the returning type pattern. It matches any return type. A fully-qualified type name matches only when the method returns the given type. The name pattern matches the method name. You can use the * wildcard as all or part of a name pattern. If you specify a declaring type pattern, include a trailing . to join it to the name pattern component. The parameters pattern is slightly more complex: () matches a method that takes no parameters, whereas (..) matches any number (zero or more) of parameters. The (*) pattern matches a method that takes one parameter of any type. (*,String) matches a method that takes two parameters. The first can be of any type, while the second must be a String. Consult the Language Semantics section of the AspectJ Programming Guide for more information.
The following examples show some common pointcut expressions:
• The execution of any public method:
execution(public * *(..))
• The execution of any method with a name that begins with set:
execution(* set*(..))
• The execution of any method defined by the AccountService interface:
execution(* com.xyz.service.AccountService.*(..))
• The execution of any method defined in the service package:
execution(* com.xyz.service.*.*(..))
• The execution of any method defined in the service package or one of its sub-packages:
execution(* com.xyz.service..*.*(..))
• Any join point (method execution only in Spring AOP) within the service package:
within(com.xyz.service.*)
• Any join point (method execution only in Spring AOP) within the service package or one of its sub-packages:
within(com.xyz.service..*)
• Any join point (method execution only in Spring AOP) where the proxy implements the AccountService interface:
this(com.xyz.service.AccountService)
'this' is more commonly used in a binding form. See the section on Declaring Advice for how to make the proxy object available in the advice body.
• Any join point (method execution only in Spring AOP) where the target object implements the AccountService interface:
target(com.xyz.service.AccountService)
'target' is more commonly used in a binding form. See the Declaring Advice section for how to make the target object available in the advice body.
• Any join point (method execution only in Spring AOP) that takes a single parameter and where the argument passed at runtime is Serializable:
args(java.io.Serializable)
'args' is more commonly used in a binding form. See the Declaring Advice section for how to make the method arguments available in the advice body.
Note that the pointcut given in this example is different from execution(* *(java.io.Serializable)). The args version matches if the argument passed at runtime is Serializable, and the execution version matches if the method signature declares a single parameter of type Serializable.
• Any join point (method execution only in Spring AOP) where the target object has a @Transactional annotation:
@target(org.springframework.transaction.annotation.Transactional)
You can also use '@target' in a binding form. See the Declaring Advice section for how to make the annotation object available in the advice body.
• Any join point (method execution only in Spring AOP) where the declared type of the target object has an @Transactional annotation:
@within(org.springframework.transaction.annotation.Transactional)
You can also use '@within' in a binding form. See the Declaring Advice section for how to make the annotation object available in the advice body.
• Any join point (method execution only in Spring AOP) where the executing method has an @Transactional annotation:
@annotation(org.springframework.transaction.annotation.Transactional)
You can also use '@annotation' in a binding form. See the Declaring Advice section for how to make the annotation object available in the advice body.
• Any join point (method execution only in Spring AOP) which takes a single parameter, and where the runtime type of the argument passed has the @Classified annotation:
@args(com.xyz.security.Classified)
You can also use '@args' in a binding form. See the Declaring Advice section for how to make the annotation object(s) available in the advice body.
• Any join point (method execution only in Spring AOP) on a Spring bean named tradeService:
bean(tradeService)
• Any join point (method execution only in Spring AOP) on Spring beans having names that match the wildcard expression *Service:
bean(*Service)
Writing Good Pointcuts
During compilation, AspectJ processes pointcuts in order to optimize matching performance. Examining code and determining if each join point matches (statically or dynamically) a given pointcut is a costly process. (A dynamic match means the match cannot be fully determined from static analysis and that a test is placed in the code to determine if there is an actual match when the code is running). On first encountering a pointcut declaration, AspectJ rewrites it into an optimal form for the matching process. What does this mean? Basically, pointcuts are rewritten in DNF (Disjunctive Normal Form) and the components of the pointcut are sorted such that those components that are cheaper to evaluate are checked first. This means you do not have to worry about understanding the performance of various pointcut designators and may supply them in any order in a pointcut declaration.
However, AspectJ can work only with what it is told. For optimal matching performance, you should think about what you are trying to achieve and narrow the search space for matches as much as possible in the definition. The existing designators naturally fall into one of three groups: kinded, scoping, and contextual:
• Kinded designators select a particular kind of join point: execution, get, set, call, and handler.
• Scoping designators select a group of join points of interest (probably of many kinds): within and withincode
• Contextual designators match (and optionally bind) based on context: this, target, and @annotation
A well-written pointcut should include at least the first two types (kinded and scoping). You can include the contextual designators to match based on join point context or bind that context for use in the advice. Supplying only a kinded designator or only a contextual designator works but could affect weaving performance (time and memory used), due to extra processing and analysis. Scoping designators are very fast to match, and using them means AspectJ can very quickly dismiss groups of join points that should not be further processed. A good pointcut should always include one if possible.
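As a sketch, the following pointcut combines all three groups (the holder class and pointcut name are hypothetical; the package follows the earlier SystemArchitecture examples):

```java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class GoodPointcuts {

	// kinded (execution) narrows the join point kind, scoping (within)
	// narrows the types searched, and contextual (@annotation) matches
	// on join point context.
	@Pointcut("execution(public * *(..)) && " +
			"within(com.xyz.someapp.service..*) && " +
			"@annotation(org.springframework.transaction.annotation.Transactional)")
	public void transactionalServiceMethod() {}
}
```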
5.4.4. Declaring Advice
Advice is associated with a pointcut expression and runs before, after, or around method executions matched by the pointcut. The pointcut expression may be either a simple reference to a named pointcut or a pointcut expression declared in place.
Before Advice
You can declare before advice in an aspect by using the @Before annotation:
Java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
@Aspect
public class BeforeExample {
@Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
public void doAccessCheck() {
// ...
}
}
Kotlin
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.Before
@Aspect
class BeforeExample {
@Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
fun doAccessCheck() {
// ...
}
}
If we use an in-place pointcut expression, we could rewrite the preceding example as the following example:
Java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
@Aspect
public class BeforeExample {
@Before("execution(* com.xyz.myapp.dao.*.*(..))")
public void doAccessCheck() {
// ...
}
}
Kotlin
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.Before
@Aspect
class BeforeExample {
@Before("execution(* com.xyz.myapp.dao.*.*(..))")
fun doAccessCheck() {
// ...
}
}
After Returning Advice
After returning advice runs when a matched method execution returns normally. You can declare it by using the @AfterReturning annotation:
Java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterReturning;
@Aspect
public class AfterReturningExample {
@AfterReturning("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
public void doAccessCheck() {
// ...
}
}
Kotlin
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.AfterReturning
@Aspect
class AfterReturningExample {
@AfterReturning("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
fun doAccessCheck() {
// ...
	}
}
You can have multiple advice declarations (and other members as well), all inside the same aspect. We show only a single advice declaration in these examples to focus the effect of each one.
Sometimes, you need access in the advice body to the actual value that was returned. You can use the form of @AfterReturning that binds the return value to get that access, as the following example shows:
Java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterReturning;
@Aspect
public class AfterReturningExample {
@AfterReturning(
pointcut="com.xyz.myapp.SystemArchitecture.dataAccessOperation()",
returning="retVal")
public void doAccessCheck(Object retVal) {
// ...
}
}
Kotlin
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.AfterReturning
@Aspect
class AfterReturningExample {
@AfterReturning(pointcut = "com.xyz.myapp.SystemArchitecture.dataAccessOperation()", returning = "retVal")
fun doAccessCheck(retVal: Any) {
// ...
}
}
The name used in the returning attribute must correspond to the name of a parameter in the advice method. When a method execution returns, the return value is passed to the advice method as the corresponding argument value. A returning clause also restricts matching to only those method executions that return a value of the specified type (in this case, Object, which matches any return value).
Please note that it is not possible to return a totally different reference when using after returning advice.
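If you do need to hand a different reference back to the caller, use around advice instead. A sketch (the aspect name is hypothetical; the pointcut reuses the SystemArchitecture aspect from earlier):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class ReplaceReturnValueAspect {

	@Around("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
	public Object swapResult(ProceedingJoinPoint pjp) throws Throwable {
		Object retVal = pjp.proceed();
		// Unlike after-returning advice, around advice may hand back
		// a completely different reference to the caller.
		return retVal != null ? retVal : java.util.Collections.emptyList();
	}
}
```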
After Throwing Advice
After throwing advice runs when a matched method execution exits by throwing an exception. You can declare it by using the @AfterThrowing annotation, as the following example shows:
Java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterThrowing;
@Aspect
public class AfterThrowingExample {
@AfterThrowing("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
public void doRecoveryActions() {
// ...
}
}
Kotlin
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.AfterThrowing
@Aspect
class AfterThrowingExample {
@AfterThrowing("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
fun doRecoveryActions() {
// ...
}
}
Often, you want the advice to run only when exceptions of a given type are thrown, and you also often need access to the thrown exception in the advice body. You can use the throwing attribute to both restrict matching (if desired — use Throwable as the exception type otherwise) and bind the thrown exception to an advice parameter. The following example shows how to do so:
Java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterThrowing;
@Aspect
public class AfterThrowingExample {
@AfterThrowing(
pointcut="com.xyz.myapp.SystemArchitecture.dataAccessOperation()",
throwing="ex")
public void doRecoveryActions(DataAccessException ex) {
// ...
}
}
Kotlin
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.AfterThrowing
@Aspect
class AfterThrowingExample {
@AfterThrowing(pointcut = "com.xyz.myapp.SystemArchitecture.dataAccessOperation()", throwing = "ex")
fun doRecoveryActions(ex: DataAccessException) {
// ...
}
}
The name used in the throwing attribute must correspond to the name of a parameter in the advice method. When a method execution exits by throwing an exception, the exception is passed to the advice method as the corresponding argument value. A throwing clause also restricts matching to only those method executions that throw an exception of the specified type (DataAccessException, in this case).
After (Finally) Advice
After (finally) advice runs when a matched method execution exits. It is declared by using the @After annotation. After advice must be prepared to handle both normal and exception return conditions. It is typically used for releasing resources and similar purposes. The following example shows how to use after finally advice:
Java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.After;
@Aspect
public class AfterFinallyExample {
@After("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
public void doReleaseLock() {
// ...
}
}
Kotlin
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.After
@Aspect
class AfterFinallyExample {
@After("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
fun doReleaseLock() {
// ...
}
}
Around Advice
The last kind of advice is around advice. Around advice runs “around” a matched method’s execution. It has the opportunity to do work both before and after the method executes and to determine when, how, and even if the method actually gets to execute at all. Around advice is often used if you need to share state before and after a method execution in a thread-safe manner (starting and stopping a timer, for example). Always use the least powerful form of advice that meets your requirements (that is, do not use around advice if before advice would do).
Around advice is declared by using the @Around annotation. The first parameter of the advice method must be of type ProceedingJoinPoint. Within the body of the advice, calling proceed() on the ProceedingJoinPoint causes the underlying method to execute. The proceed method can also pass in an Object[]. The values in the array are used as the arguments to the method execution when it proceeds.
The behavior of proceed when called with an Object[] is a little different than the behavior of proceed for around advice compiled by the AspectJ compiler. For around advice written using the traditional AspectJ language, the number of arguments passed to proceed must match the number of arguments passed to the around advice (not the number of arguments taken by the underlying join point), and the value passed to proceed in a given argument position supplants the original value at the join point for the entity the value was bound to (do not worry if this does not make sense right now). The approach taken by Spring is simpler and a better match to its proxy-based, execution-only semantics. You only need to be aware of this difference if you compile @AspectJ aspects written for Spring and use proceed with arguments with the AspectJ compiler and weaver. There is a way to write such aspects that is 100% compatible across both Spring AOP and AspectJ, and this is discussed in the following section on advice parameters.
The following example shows how to use around advice:
Java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.ProceedingJoinPoint;
@Aspect
public class AroundExample {
@Around("com.xyz.myapp.SystemArchitecture.businessService()")
public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
// start stopwatch
Object retVal = pjp.proceed();
// stop stopwatch
return retVal;
}
}
Kotlin
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.Around
import org.aspectj.lang.ProceedingJoinPoint
@Aspect
class AroundExample {
@Around("com.xyz.myapp.SystemArchitecture.businessService()")
fun doBasicProfiling(pjp: ProceedingJoinPoint): Any {
// start stopwatch
val retVal = pjp.proceed()
// stop stopwatch
return retVal
}
}
The value returned by the around advice is the return value seen by the caller of the method. For example, a simple caching aspect could return a value from a cache if it has one and invoke proceed() if it does not. Note that proceed may be invoked once, many times, or not at all within the body of the around advice. All of these are legal.
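The caching idea mentioned above can be sketched as follows. This is a minimal illustration only, not Spring's cache abstraction: the aspect class name, the naive key derivation from the method signature and arguments, and the reuse of the businessService() pointcut are all assumptions for the sake of the example.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class SimpleCachingAspect {

	// Naive in-memory cache keyed by method signature plus arguments (illustration only)
	private final Map<String, Object> cache = new ConcurrentHashMap<>();

	@Around("com.xyz.myapp.SystemArchitecture.businessService()")
	public Object cacheResult(ProceedingJoinPoint pjp) throws Throwable {
		String key = pjp.getSignature().toLongString() + Arrays.toString(pjp.getArgs());
		Object cached = cache.get(key);
		if (cached != null) {
			return cached; // cache hit: proceed() is never invoked
		}
		Object result = pjp.proceed();
		if (result != null) { // ConcurrentHashMap does not permit null values
			cache.put(key, result);
		}
		return result;
	}
}
```

Note that the caller sees whatever this advice returns, whether that value came from the cache or from the underlying method.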
Advice Parameters
Spring offers fully typed advice, meaning that you declare the parameters you need in the advice signature (as we saw earlier for the returning and throwing examples) rather than work with Object[] arrays all the time. We see how to make argument and other contextual values available to the advice body later in this section. First, we take a look at how to write generic advice that can find out about the method the advice is currently advising.
Access to the Current JoinPoint
Any advice method may declare, as its first parameter, a parameter of type org.aspectj.lang.JoinPoint (note that around advice is required to declare a first parameter of type ProceedingJoinPoint, which is a subclass of JoinPoint). The JoinPoint interface provides a number of useful methods:
• getArgs(): Returns the method arguments.
• getThis(): Returns the proxy object.
• getTarget(): Returns the target object.
• getSignature(): Returns a description of the method that is being advised.
• toString(): Prints a useful description of the method being advised.
See the javadoc for more detail.
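A simple before advice can combine several of these methods, for example to log each matched invocation. This sketch assumes the SystemArchitecture pointcuts shown earlier; the aspect class name is hypothetical.

```java
import java.util.Arrays;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class JoinPointLoggingAspect {

	@Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
	public void logInvocation(JoinPoint jp) {
		// getSignature() describes the advised method; getArgs() returns its arguments
		System.out.println("Invoking: " + jp.getSignature().toShortString());
		System.out.println("With args: " + Arrays.toString(jp.getArgs()));
		// getTarget() is the advised object itself; getThis() is the Spring proxy wrapping it
	}
}
```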
Passing Parameters to Advice
We have already seen how to bind the returned value or exception value (using after returning and after throwing advice). To make argument values available to the advice body, you can use the binding form of args. If you use a parameter name in place of a type name in an args expression, the value of the corresponding argument is passed as the parameter value when the advice is invoked. An example should make this clearer. Suppose you want to advise the execution of DAO operations that take an Account object as the first parameter, and you need access to the account in the advice body. You could write the following:
Java
@Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation() && args(account,..)")
public void validateAccount(Account account) {
// ...
}
Kotlin
@Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation() && args(account,..)")
fun validateAccount(account: Account) {
// ...
}
The args(account,..) part of the pointcut expression serves two purposes. First, it restricts matching to only those method executions where the method takes at least one parameter, and the argument passed to that parameter is an instance of Account. Second, it makes the actual Account object available to the advice through the account parameter.
Another way of writing this is to declare a pointcut that “provides” the Account object value when it matches a join point, and then refer to the named pointcut from the advice. This would look as follows:
Java
@Pointcut("com.xyz.myapp.SystemArchitecture.dataAccessOperation() && args(account,..)")
private void accountDataAccessOperation(Account account) {}
@Before("accountDataAccessOperation(account)")
public void validateAccount(Account account) {
// ...
}
Kotlin
@Pointcut("com.xyz.myapp.SystemArchitecture.dataAccessOperation() && args(account,..)")
private fun accountDataAccessOperation(account: Account) {
}
@Before("accountDataAccessOperation(account)")
fun validateAccount(account: Account) {
// ...
}
See the AspectJ programming guide for more details.
The proxy object (this), target object (target), and annotations (@within, @target, @annotation, and @args) can all be bound in a similar fashion. The next two examples show how to match the execution of methods annotated with an @Auditable annotation and extract the audit code:
The first of the two examples shows the definition of the @Auditable annotation:
Java
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Auditable {
AuditCode value();
}
Kotlin
@Retention(AnnotationRetention.RUNTIME)
@Target(AnnotationTarget.FUNCTION)
annotation class Auditable(val value: AuditCode)
The second of the two examples shows the advice that matches the execution of @Auditable methods:
Java
@Before("com.xyz.lib.Pointcuts.anyPublicMethod() && @annotation(auditable)")
public void audit(Auditable auditable) {
AuditCode code = auditable.value();
// ...
}
Kotlin
@Before("com.xyz.lib.Pointcuts.anyPublicMethod() && @annotation(auditable)")
fun audit(auditable: Auditable) {
val code = auditable.value()
// ...
}
Advice Parameters and Generics
Spring AOP can handle generics used in class declarations and method parameters. Suppose you have a generic type like the following:
Java
public interface Sample<T> {
void sampleGenericMethod(T param);
void sampleGenericCollectionMethod(Collection<T> param);
}
Kotlin
interface Sample<T> {
fun sampleGenericMethod(param: T)
fun sampleGenericCollectionMethod(param: Collection<T>)
}
You can restrict interception of method types to certain parameter types by typing the advice parameter to the parameter type for which you want to intercept the method:
Java
@Before("execution(* ..Sample+.sampleGenericMethod(*)) && args(param)")
public void beforeSampleMethod(MyType param) {
// Advice implementation
}
Kotlin
@Before("execution(* ..Sample+.sampleGenericMethod(*)) && args(param)")
fun beforeSampleMethod(param: MyType) {
// Advice implementation
}
This approach does not work for generic collections. So you cannot define a pointcut as follows:
Java
@Before("execution(* ..Sample+.sampleGenericCollectionMethod(*)) && args(param)")
public void beforeSampleMethod(Collection<MyType> param) {
// Advice implementation
}
Kotlin
@Before("execution(* ..Sample+.sampleGenericCollectionMethod(*)) && args(param)")
fun beforeSampleMethod(param: Collection<MyType>) {
// Advice implementation
}
To make this work, we would have to inspect every element of the collection, which is not reasonable, as we also cannot decide how to treat null values in general. To achieve something similar to this, you have to type the parameter to Collection<?> and manually check the type of the elements.
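The Collection<?> workaround described above can be sketched as follows, reusing the MyType and Sample types from the preceding examples (so this fragment assumes those types exist):

```java
import java.util.Collection;

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class GenericCollectionAspect {

	@Before("execution(* ..Sample+.sampleGenericCollectionMethod(*)) && args(param)")
	public void beforeSampleMethod(Collection<?> param) {
		for (Object element : param) {
			// null elements cannot be attributed to any type; decide per use case
			if (element instanceof MyType) {
				// handle elements of the type you care about
			}
		}
	}
}
```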
Determining Argument Names
The parameter binding in advice invocations relies on matching names used in pointcut expressions to declared parameter names in advice and pointcut method signatures. Parameter names are not available through Java reflection, so Spring AOP uses the following strategy to determine parameter names:
• If the parameter names have been explicitly specified by the user, the specified parameter names are used. Both the advice and the pointcut annotations have an optional argNames attribute that you can use to specify the argument names of the annotated method. These argument names are available at runtime. The following example shows how to use the argNames attribute:
Java
@Before(value="com.xyz.lib.Pointcuts.anyPublicMethod() && target(bean) && @annotation(auditable)",
argNames="bean,auditable")
public void audit(Object bean, Auditable auditable) {
AuditCode code = auditable.value();
// ... use code and bean
}
Kotlin
@Before(value = "com.xyz.lib.Pointcuts.anyPublicMethod() && target(bean) && @annotation(auditable)", argNames = "bean,auditable")
fun audit(bean: Any, auditable: Auditable) {
val code = auditable.value()
// ... use code and bean
}
If the first parameter is of the JoinPoint, ProceedingJoinPoint, or JoinPoint.StaticPart type, you can leave out the name of the parameter from the value of the argNames attribute. For example, if you modify the preceding advice to receive the join point object, the argNames attribute need not include it:
Java
@Before(value="com.xyz.lib.Pointcuts.anyPublicMethod() && target(bean) && @annotation(auditable)",
argNames="bean,auditable")
public void audit(JoinPoint jp, Object bean, Auditable auditable) {
AuditCode code = auditable.value();
// ... use code, bean, and jp
}
Kotlin
@Before(value = "com.xyz.lib.Pointcuts.anyPublicMethod() && target(bean) && @annotation(auditable)", argNames = "bean,auditable")
fun audit(jp: JoinPoint, bean: Any, auditable: Auditable) {
val code = auditable.value()
// ... use code, bean, and jp
}
The special treatment given to the first parameter of the JoinPoint, ProceedingJoinPoint, and JoinPoint.StaticPart types is particularly convenient for advice instances that do not collect any other join point context. In such situations, you may omit the argNames attribute. For example, the following advice need not declare the argNames attribute:
Java
@Before("com.xyz.lib.Pointcuts.anyPublicMethod()")
public void audit(JoinPoint jp) {
// ... use jp
}
Kotlin
@Before("com.xyz.lib.Pointcuts.anyPublicMethod()")
fun audit(jp: JoinPoint) {
// ... use jp
}
• Using the 'argNames' attribute is a little clumsy, so if the 'argNames' attribute has not been specified, Spring AOP looks at the debug information for the class and tries to determine the parameter names from the local variable table. This information is present as long as the classes have been compiled with debug information ('-g:vars' at a minimum). The consequences of compiling with this flag on are: (1) your code is slightly easier to understand (reverse engineer), (2) the class file sizes are very slightly bigger (typically inconsequential), (3) the optimization to remove unused local variables is not applied by your compiler. In other words, you should encounter no difficulties by building with this flag on.
If an @AspectJ aspect has been compiled by the AspectJ compiler (ajc), even without debug information, you need not add the argNames attribute, as the compiler retains the needed information.
• If the code has been compiled without the necessary debug information, Spring AOP tries to deduce the pairing of binding variables to parameters (for example, if only one variable is bound in the pointcut expression, and the advice method takes only one parameter, the pairing is obvious). If the binding of variables is ambiguous given the available information, an AmbiguousBindingException is thrown.
• If all of the above strategies fail, an IllegalArgumentException is thrown.
Proceeding with Arguments
We remarked earlier that we would describe how to write a proceed call with arguments that works consistently across Spring AOP and AspectJ. The solution is to ensure that the advice signature binds each of the method parameters in order. The following example shows how to do so:
Java
@Around("execution(List<Account> find*(..)) && " +
"com.xyz.myapp.SystemArchitecture.inDataAccessLayer() && " +
"args(accountHolderNamePattern)")
public Object preProcessQueryPattern(ProceedingJoinPoint pjp,
String accountHolderNamePattern) throws Throwable {
String newPattern = preProcess(accountHolderNamePattern);
return pjp.proceed(new Object[] {newPattern});
}
Kotlin
@Around("execution(List<Account> find*(..)) && " +
"com.xyz.myapp.SystemArchitecture.inDataAccessLayer() && " +
"args(accountHolderNamePattern)")
fun preProcessQueryPattern(pjp: ProceedingJoinPoint,
accountHolderNamePattern: String): Any {
val newPattern = preProcess(accountHolderNamePattern)
return pjp.proceed(arrayOf<Any>(newPattern))
}
In many cases, you do this binding anyway (as in the preceding example).
Advice Ordering
What happens when multiple pieces of advice all want to run at the same join point? Spring AOP follows the same precedence rules as AspectJ to determine the order of advice execution. The highest precedence advice runs first "on the way in" (so, given two pieces of before advice, the one with highest precedence runs first). "On the way out" from a join point, the highest precedence advice runs last (so, given two pieces of after advice, the one with the highest precedence will run second).
When two pieces of advice defined in different aspects both need to run at the same join point, unless you specify otherwise, the order of execution is undefined. You can control the order of execution by specifying precedence. This is done in the normal Spring way by either implementing the org.springframework.core.Ordered interface in the aspect class or annotating it with the @Order annotation. Given two aspects, the aspect returning the lower value from Ordered.getValue() (or the annotation value) has the higher precedence.
As of Spring Framework 5.2.7, advice methods defined in the same @Aspect class that need to run at the same join point are assigned precedence based on their advice type in the following order, from highest to lowest precedence: @Around, @Before, @After, @AfterReturning, @AfterThrowing. Note, however, that due to the implementation style in Spring’s AspectJAfterAdvice, an @After advice method will effectively be invoked after any @AfterReturning or @AfterThrowing advice methods in the same aspect.
When two pieces of the same type of advice (for example, two @After advice methods) defined in the same @Aspect class both need to run at the same join point, the ordering is undefined (since there is no way to retrieve the source code declaration order through reflection for javac-compiled classes). Consider collapsing such advice methods into one advice method per join point in each @Aspect class or refactor the pieces of advice into separate @Aspect classes that you can order at the aspect level via Ordered or @Order.
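The aspect-level ordering described above can be sketched with @Order. The aspect names and the reuse of the businessService() pointcut are assumptions for illustration; in practice each class would live in its own source file.

```java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.annotation.Order;

// Lower order value = higher precedence, so this before advice runs first
@Aspect
@Order(1)
class SecurityCheckAspect {

	@Before("com.xyz.myapp.SystemArchitecture.businessService()")
	public void checkSecurity() {
		// ... verify caller permissions before any logging occurs
	}
}

@Aspect
@Order(2)
class EntryLoggingAspect {

	@Before("com.xyz.myapp.SystemArchitecture.businessService()")
	public void logEntry() {
		// ... runs after the security check on the way in
	}
}
```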
5.4.5. Introductions
Introductions (known as inter-type declarations in AspectJ) enable an aspect to declare that advised objects implement a given interface, and to provide an implementation of that interface on behalf of those objects.
You can make an introduction by using the @DeclareParents annotation. This annotation is used to declare that matching types have a new parent (hence the name). For example, given an interface named UsageTracked and an implementation of that interface named DefaultUsageTracked, the following aspect declares that all implementors of service interfaces also implement the UsageTracked interface (to expose statistics via JMX for example):
Java
@Aspect
public class UsageTracking {
@DeclareParents(value="com.xyz.myapp.service.*+", defaultImpl=DefaultUsageTracked.class)
public static UsageTracked mixin;
@Before("com.xyz.myapp.SystemArchitecture.businessService() && this(usageTracked)")
public void recordUsage(UsageTracked usageTracked) {
usageTracked.incrementUseCount();
}
}
Kotlin
@Aspect
class UsageTracking {
companion object {
@DeclareParents(value = "com.xyz.myapp.service.*+", defaultImpl = DefaultUsageTracked::class)
lateinit var mixin: UsageTracked
}
@Before("com.xyz.myapp.SystemArchitecture.businessService() && this(usageTracked)")
fun recordUsage(usageTracked: UsageTracked) {
usageTracked.incrementUseCount()
}
}
The interface to be implemented is determined by the type of the annotated field. The value attribute of the @DeclareParents annotation is an AspectJ type pattern. Any bean of a matching type implements the UsageTracked interface. Note that, in the before advice of the preceding example, service beans can be directly used as implementations of the UsageTracked interface. If accessing a bean programmatically, you would write the following:
Java
UsageTracked usageTracked = (UsageTracked) context.getBean("myService");
Kotlin
val usageTracked = context.getBean("myService") as UsageTracked
5.4.6. Aspect Instantiation Models
This is an advanced topic. If you are just starting out with AOP, you can safely skip it until later.
By default, there is a single instance of each aspect within the application context. AspectJ calls this the singleton instantiation model. It is possible to define aspects with alternate lifecycles. Spring supports AspectJ’s perthis and pertarget instantiation models (percflow, percflowbelow, and pertypewithin are not currently supported).
You can declare a perthis aspect by specifying a perthis clause in the @Aspect annotation. Consider the following example:
Java
@Aspect("perthis(com.xyz.myapp.SystemArchitecture.businessService())")
public class MyAspect {
private int someState;
@Before("com.xyz.myapp.SystemArchitecture.businessService()")
public void recordServiceUsage() {
// ...
}
}
Kotlin
@Aspect("perthis(com.xyz.myapp.SystemArchitecture.businessService())")
class MyAspect {
private val someState: Int = 0
@Before("com.xyz.myapp.SystemArchitecture.businessService()")
fun recordServiceUsage() {
// ...
}
}
In the preceding example, the effect of the 'perthis' clause is that one aspect instance is created for each unique service object that executes a business service (each unique object bound to 'this' at join points matched by the pointcut expression). The aspect instance is created the first time that a method is invoked on the service object. The aspect goes out of scope when the service object goes out of scope. Before the aspect instance is created, none of the advice within it executes. As soon as the aspect instance has been created, the advice declared within it executes at matched join points, but only when the service object is the one with which this aspect is associated. See the AspectJ Programming Guide for more information on per clauses.
The pertarget instantiation model works in exactly the same way as perthis, but it creates one aspect instance for each unique target object at matched join points.
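A pertarget aspect is declared the same way as the perthis example above, only with a pertarget clause. A minimal sketch (aspect name assumed), where the counter state is scoped to each unique target object:

```java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// One aspect instance is created per unique target object at matched join points
@Aspect("pertarget(com.xyz.myapp.SystemArchitecture.businessService())")
public class PerTargetUsageAspect {

	private int invocationCount; // independent counter for each target object

	@Before("com.xyz.myapp.SystemArchitecture.businessService()")
	public void countInvocation() {
		invocationCount++;
	}
}
```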
5.4.7. An AOP Example
Now that you have seen how all the constituent parts work, we can put them together to do something useful.
The execution of business services can sometimes fail due to concurrency issues (for example, a deadlock loser). If the operation is retried, it is likely to succeed on the next try. For business services where it is appropriate to retry in such conditions (idempotent operations that do not need to go back to the user for conflict resolution), we want to transparently retry the operation to avoid the client seeing a PessimisticLockingFailureException. This is a requirement that clearly cuts across multiple services in the service layer and, hence, is ideal for implementing through an aspect.
Because we want to retry the operation, we need to use around advice so that we can call proceed multiple times. The following listing shows the basic aspect implementation:
Java
@Aspect
public class ConcurrentOperationExecutor implements Ordered {
private static final int DEFAULT_MAX_RETRIES = 2;
private int maxRetries = DEFAULT_MAX_RETRIES;
private int order = 1;
public void setMaxRetries(int maxRetries) {
this.maxRetries = maxRetries;
}
public int getOrder() {
return this.order;
}
public void setOrder(int order) {
this.order = order;
}
@Around("com.xyz.myapp.SystemArchitecture.businessService()")
public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable {
int numAttempts = 0;
PessimisticLockingFailureException lockFailureException;
do {
numAttempts++;
try {
return pjp.proceed();
}
catch(PessimisticLockingFailureException ex) {
lockFailureException = ex;
}
} while(numAttempts <= this.maxRetries);
throw lockFailureException;
}
}
Kotlin
@Aspect
class ConcurrentOperationExecutor : Ordered {
private val DEFAULT_MAX_RETRIES = 2
private var maxRetries = DEFAULT_MAX_RETRIES
private var order = 1
fun setMaxRetries(maxRetries: Int) {
this.maxRetries = maxRetries
}
override fun getOrder(): Int {
return this.order
}
fun setOrder(order: Int) {
this.order = order
}
@Around("com.xyz.myapp.SystemArchitecture.businessService()")
fun doConcurrentOperation(pjp: ProceedingJoinPoint): Any {
var numAttempts = 0
var lockFailureException: PessimisticLockingFailureException
do {
numAttempts++
try {
return pjp.proceed()
} catch (ex: PessimisticLockingFailureException) {
lockFailureException = ex
}
} while (numAttempts <= this.maxRetries)
throw lockFailureException
}
}
Note that the aspect implements the Ordered interface so that we can set the precedence of the aspect higher than the transaction advice (we want a fresh transaction each time we retry). The maxRetries and order properties are both configured by Spring. The main action happens in the doConcurrentOperation around advice. Notice that, for the moment, we apply the retry logic to each businessService(). We try to proceed, and if we fail with a PessimisticLockingFailureException, we try again, unless we have exhausted all of our retry attempts.
The corresponding Spring configuration follows:
<aop:aspectj-autoproxy/>
<bean id="concurrentOperationExecutor" class="com.xyz.myapp.service.impl.ConcurrentOperationExecutor">
<property name="maxRetries" value="3"/>
<property name="order" value="100"/>
</bean>
To refine the aspect so that it retries only idempotent operations, we might define the following Idempotent annotation:
Java
@Retention(RetentionPolicy.RUNTIME)
public @interface Idempotent {
// marker annotation
}
Kotlin
@Retention(AnnotationRetention.RUNTIME)
annotation class Idempotent // marker annotation
We can then use the annotation to annotate the implementation of service operations. The change to the aspect to retry only idempotent operations involves refining the pointcut expression so that only @Idempotent operations match, as follows:
Java
@Around("com.xyz.myapp.SystemArchitecture.businessService() && " +
"@annotation(com.xyz.myapp.service.Idempotent)")
public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable {
// ...
}
Kotlin
@Around("com.xyz.myapp.SystemArchitecture.businessService() && " + "@annotation(com.xyz.myapp.service.Idempotent)")
fun doConcurrentOperation(pjp: ProceedingJoinPoint): Any {
// ...
}
5.5. Schema-based AOP Support
If you prefer an XML-based format, Spring also offers support for defining aspects using the aop namespace tags. The exact same pointcut expressions and advice kinds as when using the @AspectJ style are supported. Hence, in this section we focus on that syntax and refer the reader to the discussion in the previous section (@AspectJ support) for an understanding of writing pointcut expressions and the binding of advice parameters.
To use the aop namespace tags described in this section, you need to import the spring-aop schema, as described in XML Schema-based configuration. See the AOP schema for how to import the tags in the aop namespace.
Within your Spring configurations, all aspect and advisor elements must be placed within an <aop:config> element (you can have more than one <aop:config> element in an application context configuration). An <aop:config> element can contain pointcut, advisor, and aspect elements (note that these must be declared in that order).
The <aop:config> style of configuration makes heavy use of Spring’s auto-proxying mechanism. This can cause issues (such as advice not being woven) if you already use explicit auto-proxying through the use of BeanNameAutoProxyCreator or something similar. The recommended usage pattern is to use either only the <aop:config> style or only the AutoProxyCreator style and never mix them.
5.5.1. Declaring an Aspect
When you use the schema support, an aspect is a regular Java object defined as a bean in your Spring application context. The state and behavior are captured in the fields and methods of the object, and the pointcut and advice information are captured in the XML.
You can declare an aspect by using the <aop:aspect> element, and reference the backing bean by using the ref attribute, as the following example shows:
<aop:config>
<aop:aspect id="myAspect" ref="aBean">
...
</aop:aspect>
</aop:config>
<bean id="aBean" class="...">
...
</bean>
The bean that backs the aspect (aBean in this case) can of course be configured and dependency injected just like any other Spring bean.
5.5.2. Declaring a Pointcut
You can declare a named pointcut inside an <aop:config> element, letting the pointcut definition be shared across several aspects and advisors.
A pointcut that represents the execution of any business service in the service layer can be defined as follows:
<aop:config>
<aop:pointcut id="businessService"
expression="execution(* com.xyz.myapp.service.*.*(..))"/>
</aop:config>
Note that the pointcut expression itself is using the same AspectJ pointcut expression language as described in @AspectJ support. If you use the schema based declaration style, you can refer to named pointcuts defined in types (@Aspects) within the pointcut expression. Another way of defining the above pointcut would be as follows:
<aop:config>
<aop:pointcut id="businessService"
expression="com.xyz.myapp.SystemArchitecture.businessService()"/>
</aop:config>
Assume that you have a SystemArchitecture aspect as described in Sharing Common Pointcut Definitions.
Then declaring a pointcut inside an aspect is very similar to declaring a top-level pointcut, as the following example shows:
<aop:config>
<aop:aspect id="myAspect" ref="aBean">
<aop:pointcut id="businessService"
expression="execution(* com.xyz.myapp.service.*.*(..))"/>
...
</aop:aspect>
</aop:config>
In much the same way as an @AspectJ aspect, pointcuts declared by using the schema based definition style can collect join point context. For example, the following pointcut collects the this object as the join point context and passes it to the advice:
<aop:config>
<aop:aspect id="myAspect" ref="aBean">
<aop:pointcut id="businessService"
expression="execution(* com.xyz.myapp.service.*.*(..)) && this(service)"/>
<aop:before pointcut-ref="businessService" method="monitor"/>
...
</aop:aspect>
</aop:config>
The advice must be declared to receive the collected join point context by including parameters of the matching names, as follows:
Java
public void monitor(Object service) {
// ...
}
Kotlin
fun monitor(service: Any) {
// ...
}
When combining pointcut sub-expressions, && is awkward within an XML document, so you can use the and, or, and not keywords in place of &&, ||, and !, respectively. For example, the previous pointcut can be better written as follows:
<aop:config>
<aop:aspect id="myAspect" ref="aBean">
<aop:pointcut id="businessService"
expression="execution(* com.xyz.myapp.service.*.*(..)) and this(service)"/>
<aop:before pointcut-ref="businessService" method="monitor"/>
...
</aop:aspect>
</aop:config>
Note that pointcuts defined in this way are referred to by their XML id and cannot be used as named pointcuts to form composite pointcuts. The named pointcut support in the schema-based definition style is thus more limited than that offered by the @AspectJ style.
5.5.3. Declaring Advice
The schema-based AOP support uses the same five kinds of advice as the @AspectJ style, and they have exactly the same semantics.
Before Advice
Before advice runs before a matched method execution. It is declared inside an <aop:aspect> by using the <aop:before> element, as the following example shows:
<aop:aspect id="beforeExample" ref="aBean">
<aop:before
pointcut-ref="dataAccessOperation"
method="doAccessCheck"/>
...
</aop:aspect>
Here, dataAccessOperation is the id of a pointcut defined at the top (<aop:config>) level. To define the pointcut inline instead, replace the pointcut-ref attribute with a pointcut attribute, as follows:
<aop:aspect id="beforeExample" ref="aBean">
<aop:before
pointcut="execution(* com.xyz.myapp.dao.*.*(..))"
method="doAccessCheck"/>
...
</aop:aspect>
As we noted in the discussion of the @AspectJ style, using named pointcuts can significantly improve the readability of your code.
The method attribute identifies a method (doAccessCheck) that provides the body of the advice. This method must be defined for the bean referenced by the aspect element that contains the advice. Before a data access operation is executed (a method execution join point matched by the pointcut expression), the doAccessCheck method on the aspect bean is invoked.
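The bean backing the aspect is then a plain Java object that simply declares the advice method; it needs no AOP annotations, since the pointcut and advice wiring lives in the XML. A minimal sketch (class and package names assumed):

```java
// Plain object referenced by <aop:aspect ref="aBean">; no AspectJ annotations required
public class DataAccessChecker {

	public void doAccessCheck() {
		// runs before each data access operation matched by the pointcut
	}
}
```

It would be registered as an ordinary bean, for example <bean id="aBean" class="com.xyz.myapp.aspects.DataAccessChecker"/>, and configured or dependency injected like any other Spring bean.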
After Returning Advice
After returning advice runs when a matched method execution completes normally. It is declared inside an <aop:aspect> in the same way as before advice. The following example shows how to declare it:
<aop:aspect id="afterReturningExample" ref="aBean">
<aop:after-returning
pointcut-ref="dataAccessOperation"
method="doAccessCheck"/>
...
</aop:aspect>
As in the @AspectJ style, you can get the return value within the advice body. To do so, use the returning attribute to specify the name of the parameter to which the return value should be passed, as the following example shows:
<aop:aspect id="afterReturningExample" ref="aBean">
<aop:after-returning
pointcut-ref="dataAccessOperation"
returning="retVal"
method="doAccessCheck"/>
...
</aop:aspect>
The doAccessCheck method must declare a parameter named retVal. The type of this parameter constrains matching in the same way as described for @AfterReturning. For example, you can declare the method signature as follows:
Java
public void doAccessCheck(Object retVal) {...
Kotlin
fun doAccessCheck(retVal: Any) {...
After Throwing Advice
After throwing advice executes when a matched method execution exits by throwing an exception. It is declared inside an <aop:aspect> by using the after-throwing element, as the following example shows:
<aop:aspect id="afterThrowingExample" ref="aBean">
<aop:after-throwing
pointcut-ref="dataAccessOperation"
method="doRecoveryActions"/>
...
</aop:aspect>
As in the @AspectJ style, you can get the thrown exception within the advice body. To do so, use the throwing attribute to specify the name of the parameter to which the exception should be passed, as the following example shows:
<aop:aspect id="afterThrowingExample" ref="aBean">
<aop:after-throwing
pointcut-ref="dataAccessOperation"
throwing="dataAccessEx"
method="doRecoveryActions"/>
...
</aop:aspect>
The doRecoveryActions method must declare a parameter named dataAccessEx. The type of this parameter constrains matching in the same way as described for @AfterThrowing. For example, the method signature may be declared as follows:
Java
public void doRecoveryActions(DataAccessException dataAccessEx) {...
Kotlin
fun doRecoveryActions(dataAccessEx: DataAccessException) {...
After (Finally) Advice
After (finally) advice runs no matter how a matched method execution exits. You can declare it by using the after element, as the following example shows:
<aop:aspect id="afterFinallyExample" ref="aBean">
<aop:after
pointcut-ref="dataAccessOperation"
method="doReleaseLock"/>
...
</aop:aspect>
Around Advice
The last kind of advice is around advice. Around advice runs "around" a matched method execution. It has the opportunity to do work both before and after the method executes and to determine when, how, and even if the method actually gets to execute at all. Around advice is often used to share state before and after a method execution in a thread-safe manner (starting and stopping a timer, for example). Always use the least powerful form of advice that meets your requirements. Do not use around advice if before advice can do the job.
You can declare around advice by using the aop:around element. The first parameter of the advice method must be of type ProceedingJoinPoint. Within the body of the advice, calling proceed() on the ProceedingJoinPoint causes the underlying method to execute. The proceed method may also be called with an Object[]. The values in the array are used as the arguments to the method execution when it proceeds. See Around Advice for notes on calling proceed with an Object[]. The following example shows how to declare around advice in XML:
<aop:aspect id="aroundExample" ref="aBean">
<aop:around
pointcut-ref="businessService"
method="doBasicProfiling"/>
...
</aop:aspect>
The implementation of the doBasicProfiling advice can be exactly the same as in the @AspectJ example (minus the annotation, of course), as the following example shows:
Java
public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
// start stopwatch
Object retVal = pjp.proceed();
// stop stopwatch
return retVal;
}
Kotlin
fun doBasicProfiling(pjp: ProceedingJoinPoint): Any {
// start stopwatch
val retVal = pjp.proceed()
// stop stopwatch
return retVal
}
Advice Parameters
The schema-based declaration style supports fully typed advice in the same way as described for the @AspectJ support — by matching pointcut parameters by name against advice method parameters. See Advice Parameters for details. If you wish to explicitly specify argument names for the advice methods (not relying on the detection strategies previously described), you can do so by using the arg-names attribute of the advice element, which is treated in the same manner as the argNames attribute in an advice annotation (as described in Determining Argument Names). The following example shows how to specify an argument name in XML:
<aop:before
pointcut="com.xyz.lib.Pointcuts.anyPublicMethod() and @annotation(auditable)"
method="audit"
arg-names="auditable"/>
The arg-names attribute accepts a comma-delimited list of parameter names.
The following slightly more involved example of the XSD-based approach shows some around advice used in conjunction with a number of strongly typed parameters:
Java
package x.y.service;
public interface PersonService {
Person getPerson(String personName, int age);
}
public class DefaultPersonService implements PersonService {
public Person getPerson(String name, int age) {
return new Person(name, age);
}
}
Kotlin
package x.y.service
interface PersonService {
fun getPerson(personName: String, age: Int): Person
}
class DefaultPersonService : PersonService {
override fun getPerson(name: String, age: Int): Person {
return Person(name, age)
}
}
Next up is the aspect. Notice that the profile(..) method accepts a number of strongly typed parameters, the first of which happens to be the join point used to proceed with the method call. The presence of this parameter is an indication that profile(..) is to be used as around advice, as the following example shows:
Java
package x.y;
import org.aspectj.lang.ProceedingJoinPoint;
import org.springframework.util.StopWatch;
public class SimpleProfiler {
public Object profile(ProceedingJoinPoint call, String name, int age) throws Throwable {
StopWatch clock = new StopWatch("Profiling for '" + name + "' and '" + age + "'");
try {
clock.start(call.toShortString());
return call.proceed();
} finally {
clock.stop();
System.out.println(clock.prettyPrint());
}
}
}
Kotlin
import org.aspectj.lang.ProceedingJoinPoint
import org.springframework.util.StopWatch
class SimpleProfiler {
fun profile(call: ProceedingJoinPoint, name: String, age: Int): Any {
val clock = StopWatch("Profiling for '$name' and '$age'")
try {
clock.start(call.toShortString())
return call.proceed()
} finally {
clock.stop()
println(clock.prettyPrint())
}
}
}
Finally, the following example XML configuration effects the execution of the preceding advice for a particular join point:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop"
xsi:schemaLocation="
http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/aop https://www.springframework.org/schema/aop/spring-aop.xsd">
<!-- this is the object that will be proxied by Spring's AOP infrastructure -->
<bean id="personService" class="x.y.service.DefaultPersonService"/>
<!-- this is the actual advice itself -->
<bean id="profiler" class="x.y.SimpleProfiler"/>
<aop:config>
<aop:aspect ref="profiler">
<aop:pointcut id="theExecutionOfSomePersonServiceMethod"
expression="execution(* x.y.service.PersonService.getPerson(String,int))
and args(name, age)"/>
<aop:around pointcut-ref="theExecutionOfSomePersonServiceMethod"
method="profile"/>
</aop:aspect>
</aop:config>
</beans>
Consider the following driver script:
Java
import org.springframework.beans.factory.BeanFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import x.y.service.PersonService;
public final class Boot {
public static void main(final String[] args) throws Exception {
BeanFactory ctx = new ClassPathXmlApplicationContext("x/y/plain.xml");
PersonService person = (PersonService) ctx.getBean("personService");
person.getPerson("Pengo", 12);
}
}
Kotlin
fun main() {
val ctx = ClassPathXmlApplicationContext("x/y/plain.xml")
val person = ctx.getBean("personService") as PersonService
person.getPerson("Pengo", 12)
}
With such a Boot class, we would get output similar to the following on standard output:
StopWatch 'Profiling for 'Pengo' and '12'': running time (millis) = 0
-----------------------------------------
ms % Task name
-----------------------------------------
00000 ? execution(getPerson)
Advice Ordering
When multiple pieces of advice need to execute at the same join point (executing method) the ordering rules are as described in Advice Ordering. The precedence between aspects is determined via the order attribute in the <aop:aspect> element or by either adding the @Order annotation to the bean that backs the aspect or by having the bean implement the Ordered interface.
In contrast to the precedence rules for advice methods defined in the same @Aspect class, when two pieces of advice defined in the same <aop:aspect> element both need to run at the same join point, the precedence is determined by the order in which the advice elements are declared within the enclosing <aop:aspect> element, from highest to lowest precedence.
For example, given an around advice and a before advice defined in the same <aop:aspect> element that apply to the same join point, to ensure that the around advice has higher precedence than the before advice, the <aop:around> element must be declared before the <aop:before> element.
As a general rule of thumb, if you find that you have multiple pieces of advice defined in the same <aop:aspect> element that apply to the same join point, consider collapsing such advice methods into one advice method per join point in each <aop:aspect> element or refactor the pieces of advice into separate <aop:aspect> elements that you can order at the aspect level.
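As a sketch of aspect-level ordering (the pointcut id and bean references here are hypothetical), the order attribute can be set directly on the <aop:aspect> elements, with a lower value taking higher precedence:

```xml
<aop:config>
    <!-- lower order value = higher precedence: securityAspect's advice runs first -->
    <aop:aspect id="securityAspect" ref="securityChecker" order="1">
        <aop:before pointcut-ref="businessService" method="checkAccess"/>
    </aop:aspect>
    <aop:aspect id="loggingAspect" ref="requestLogger" order="2">
        <aop:before pointcut-ref="businessService" method="logEntry"/>
    </aop:aspect>
</aop:config>
```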
5.5.4. Introductions
Introductions (known as inter-type declarations in AspectJ) let an aspect declare that advised objects implement a given interface and provide an implementation of that interface on behalf of those objects.
You can make an introduction by using the aop:declare-parents element inside an aop:aspect. You can use the aop:declare-parents element to declare that matching types have a new parent (hence the name). For example, given an interface named UsageTracked and an implementation of that interface named DefaultUsageTracked, the following aspect declares that all implementors of service interfaces also implement the UsageTracked interface (in order to expose statistics through JMX, for example):
<aop:aspect id="usageTrackerAspect" ref="usageTracking">
<aop:declare-parents
types-matching="com.xyz.myapp.service.*+"
implement-interface="com.xyz.myapp.service.tracking.UsageTracked"
default-impl="com.xyz.myapp.service.tracking.DefaultUsageTracked"/>
<aop:before
pointcut="com.xyz.myapp.SystemArchitecture.businessService()
and this(usageTracked)"
method="recordUsage"/>
</aop:aspect>
The class that backs the usageTracking bean would then contain the following method:
Java
public void recordUsage(UsageTracked usageTracked) {
usageTracked.incrementUseCount();
}
Kotlin
fun recordUsage(usageTracked: UsageTracked) {
usageTracked.incrementUseCount()
}
The interface to be implemented is determined by the implement-interface attribute. The value of the types-matching attribute is an AspectJ type pattern. Any bean of a matching type implements the UsageTracked interface. Note that, in the before advice of the preceding example, service beans can be directly used as implementations of the UsageTracked interface. To access a bean programmatically, you could write the following:
Java
UsageTracked usageTracked = (UsageTracked) context.getBean("myService");
Kotlin
val usageTracked = context.getBean("myService") as UsageTracked
5.5.5. Aspect Instantiation Models
The only supported instantiation model for schema-defined aspects is the singleton model. Other instantiation models may be supported in future releases.
5.5.6. Advisors
The concept of “advisors” comes from the AOP support defined in Spring and does not have a direct equivalent in AspectJ. An advisor is like a small self-contained aspect that has a single piece of advice. The advice itself is represented by a bean and must implement one of the advice interfaces described in Advice Types in Spring. Advisors can take advantage of AspectJ pointcut expressions.
Spring supports the advisor concept with the <aop:advisor> element. You most commonly see it used in conjunction with transactional advice, which also has its own namespace support in Spring. The following example shows an advisor:
<aop:config>
<aop:pointcut id="businessService"
expression="execution(* com.xyz.myapp.service.*.*(..))"/>
<aop:advisor
pointcut-ref="businessService"
advice-ref="tx-advice"/>
</aop:config>
<tx:advice id="tx-advice">
<tx:attributes>
<tx:method name="*" propagation="REQUIRED"/>
</tx:attributes>
</tx:advice>
As well as the pointcut-ref attribute used in the preceding example, you can also use the pointcut attribute to define a pointcut expression inline.
To define the precedence of an advisor so that the advice can participate in ordering, use the order attribute to define the Ordered value of the advisor.
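A minimal sketch of an ordered advisor declaration (reusing the beans from the preceding example):

```xml
<aop:config>
    <aop:pointcut id="businessService"
        expression="execution(* com.xyz.myapp.service.*.*(..))"/>
    <!-- order gives this advisor an explicit precedence among all advice at matched join points -->
    <aop:advisor
        pointcut-ref="businessService"
        advice-ref="tx-advice"
        order="1"/>
</aop:config>
```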
5.5.7. An AOP Schema Example
This section shows how the concurrent locking failure retry example from An AOP Example looks when rewritten with the schema support.
The execution of business services can sometimes fail due to concurrency issues (for example, a deadlock loser). If the operation is retried, it is likely to succeed on the next try. For business services where it is appropriate to retry in such conditions (idempotent operations that do not need to go back to the user for conflict resolution), we want to transparently retry the operation to avoid the client seeing a PessimisticLockingFailureException. This is a requirement that clearly cuts across multiple services in the service layer and, hence, is ideal for implementing through an aspect.
Because we want to retry the operation, we need to use around advice so that we can call proceed multiple times. The following listing shows the basic aspect implementation (which is a regular Java class that uses the schema support):
Java
public class ConcurrentOperationExecutor implements Ordered {
private static final int DEFAULT_MAX_RETRIES = 2;
private int maxRetries = DEFAULT_MAX_RETRIES;
private int order = 1;
public void setMaxRetries(int maxRetries) {
this.maxRetries = maxRetries;
}
public int getOrder() {
return this.order;
}
public void setOrder(int order) {
this.order = order;
}
public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable {
int numAttempts = 0;
PessimisticLockingFailureException lockFailureException;
do {
numAttempts++;
try {
return pjp.proceed();
}
catch(PessimisticLockingFailureException ex) {
lockFailureException = ex;
}
} while(numAttempts <= this.maxRetries);
throw lockFailureException;
}
}
Kotlin
class ConcurrentOperationExecutor : Ordered {
private val DEFAULT_MAX_RETRIES = 2
private var maxRetries = DEFAULT_MAX_RETRIES
private var order = 1
fun setMaxRetries(maxRetries: Int) {
this.maxRetries = maxRetries
}
override fun getOrder(): Int {
return this.order
}
fun setOrder(order: Int) {
this.order = order
}
fun doConcurrentOperation(pjp: ProceedingJoinPoint): Any {
var numAttempts = 0
var lockFailureException: PessimisticLockingFailureException
do {
numAttempts++
try {
return pjp.proceed()
} catch (ex: PessimisticLockingFailureException) {
lockFailureException = ex
}
} while (numAttempts <= this.maxRetries)
throw lockFailureException
}
}
Note that the aspect implements the Ordered interface so that we can set the precedence of the aspect higher than the transaction advice (we want a fresh transaction each time we retry). The maxRetries and order properties are both configured by Spring. The main action happens in the doConcurrentOperation around advice method. We try to proceed. If we fail with a PessimisticLockingFailureException, we try again, unless we have exhausted all of our retry attempts.
This class is identical to the one used in the @AspectJ example, but with the annotations removed.
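The retry control flow can be exercised in isolation. The following is a minimal plain-Java sketch of the same loop, with no Spring types: the LockFailure exception and the Callable target are stand-ins for PessimisticLockingFailureException and the advised method.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryDemo {

    // Stand-in for PessimisticLockingFailureException
    static class LockFailure extends RuntimeException {}

    // Mirrors the control flow of doConcurrentOperation: retry until
    // success or until maxRetries additional attempts are exhausted.
    static <T> T retry(Callable<T> op, int maxRetries) throws Exception {
        int numAttempts = 0;
        LockFailure lockFailure;
        do {
            numAttempts++;
            try {
                return op.call();
            } catch (LockFailure ex) {
                lockFailure = ex;
            }
        } while (numAttempts <= maxRetries);
        throw lockFailure;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger calls = new AtomicInteger();
        // Fails twice, then succeeds: with maxRetries = 2 this completes normally.
        String result = retry(() -> {
            if (calls.incrementAndGet() < 3) {
                throw new LockFailure();
            }
            return "ok";
        }, 2);
        System.out.println(result + " after " + calls.get() + " attempts");
    }
}
```

Note that, exactly as in the aspect, the last failure is rethrown once the retry budget is spent.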
The corresponding Spring configuration is as follows:
<aop:config>
<aop:aspect id="concurrentOperationRetry" ref="concurrentOperationExecutor">
<aop:pointcut id="idempotentOperation"
expression="execution(* com.xyz.myapp.service.*.*(..))"/>
<aop:around
pointcut-ref="idempotentOperation"
method="doConcurrentOperation"/>
</aop:aspect>
</aop:config>
<bean id="concurrentOperationExecutor"
class="com.xyz.myapp.service.impl.ConcurrentOperationExecutor">
<property name="maxRetries" value="3"/>
<property name="order" value="100"/>
</bean>
Notice that, for the time being, we assume that all business services are idempotent. If this is not the case, we can refine the aspect so that it retries only genuinely idempotent operations, by introducing an Idempotent annotation and using it to annotate the implementation of service operations, as the following example shows:
Java
@Retention(RetentionPolicy.RUNTIME)
public @interface Idempotent {
// marker annotation
}
Kotlin
@Retention(AnnotationRetention.RUNTIME)
annotation class Idempotent // marker annotation
The change to the aspect to retry only idempotent operations involves refining the pointcut expression so that only @Idempotent operations match, as follows:
<aop:pointcut id="idempotentOperation"
expression="execution(* com.xyz.myapp.service.*.*(..)) and
@annotation(com.xyz.myapp.service.Idempotent)"/>
5.6. Choosing which AOP Declaration Style to Use
Once you have decided that an aspect is the best approach for implementing a given requirement, how do you decide between using Spring AOP or AspectJ and between the Aspect language (code) style, the @AspectJ annotation style, or the Spring XML style? These decisions are influenced by a number of factors including application requirements, development tools, and team familiarity with AOP.
5.6.1. Spring AOP or Full AspectJ?
Use the simplest thing that can work. Spring AOP is simpler than using full AspectJ, as there is no requirement to introduce the AspectJ compiler / weaver into your development and build processes. If you only need to advise the execution of operations on Spring beans, Spring AOP is the right choice. If you need to advise objects not managed by the Spring container (such as domain objects, typically), you need to use AspectJ. You also need to use AspectJ if you wish to advise join points other than simple method executions (for example, field get or set join points and so on).
When you use AspectJ, you have the choice of the AspectJ language syntax (also known as the “code style”) or the @AspectJ annotation style. Clearly, if you do not use Java 5+, the choice has been made for you: Use the code style. If aspects play a large role in your design, and you are able to use the AspectJ Development Tools (AJDT) plugin for Eclipse, the AspectJ language syntax is the preferred option. It is cleaner and simpler because the language was purposefully designed for writing aspects. If you do not use Eclipse or have only a few aspects that do not play a major role in your application, you may want to consider using the @AspectJ style, sticking with regular Java compilation in your IDE, and adding an aspect weaving phase to your build script.
5.6.2. @AspectJ or XML for Spring AOP?
If you have chosen to use Spring AOP, you have a choice of @AspectJ or XML style. There are various tradeoffs to consider.
The XML style may be most familiar to existing Spring users, and it is backed by genuine POJOs. When using AOP as a tool to configure enterprise services, XML can be a good choice (a good test is whether you consider the pointcut expression to be a part of your configuration that you might want to change independently). With the XML style, it is arguably clearer from your configuration which aspects are present in the system.
The XML style has two disadvantages. First, it does not fully encapsulate the implementation of the requirement it addresses in a single place. The DRY principle says that there should be a single, unambiguous, authoritative representation of any piece of knowledge within a system. When using the XML style, the knowledge of how a requirement is implemented is split across the declaration of the backing bean class and the XML in the configuration file. When you use the @AspectJ style, this information is encapsulated in a single module: the aspect. Secondly, the XML style is slightly more limited in what it can express than the @AspectJ style: Only the “singleton” aspect instantiation model is supported, and it is not possible to combine named pointcuts declared in XML. For example, in the @AspectJ style you can write something like the following:
Java
@Pointcut("execution(* get*())")
public void propertyAccess() {}
@Pointcut("execution(org.xyz.Account+ *(..))")
public void operationReturningAnAccount() {}
@Pointcut("propertyAccess() && operationReturningAnAccount()")
public void accountPropertyAccess() {}
Kotlin
@Pointcut("execution(* get*())")
fun propertyAccess() {}
@Pointcut("execution(org.xyz.Account+ *(..))")
fun operationReturningAnAccount() {}
@Pointcut("propertyAccess() && operationReturningAnAccount()")
fun accountPropertyAccess() {}
In the XML style you can declare the first two pointcuts:
<aop:pointcut id="propertyAccess"
expression="execution(* get*())"/>
<aop:pointcut id="operationReturningAnAccount"
expression="execution(org.xyz.Account+ *(..))"/>
The downside of the XML approach is that you cannot define the accountPropertyAccess pointcut by combining these definitions.
The @AspectJ style supports additional instantiation models and richer pointcut composition. It has the advantage of keeping the aspect as a modular unit. It also has the advantage that the @AspectJ aspects can be understood (and thus consumed) both by Spring AOP and by AspectJ. So, if you later decide you need the capabilities of AspectJ to implement additional requirements, you can easily migrate to a classic AspectJ setup. On balance, the Spring team prefers the @AspectJ style for custom aspects beyond simple configuration of enterprise services.
5.7. Mixing Aspect Types
It is perfectly possible to mix @AspectJ style aspects (by using the auto-proxying support), schema-defined <aop:aspect> aspects, <aop:advisor> declared advisors, and even proxies and interceptors in other styles in the same configuration. All of these are implemented by using the same underlying support mechanism and can co-exist without any difficulty.
5.8. Proxying Mechanisms
Spring AOP uses either JDK dynamic proxies or CGLIB to create the proxy for a given target object. JDK dynamic proxies are built into the JDK, whereas CGLIB is a common open-source class definition library (repackaged into spring-core).
If the target object to be proxied implements at least one interface, a JDK dynamic proxy is used. All of the interfaces implemented by the target type are proxied. If the target object does not implement any interfaces, a CGLIB proxy is created.
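The JDK-proxy case can be illustrated with nothing but the standard library. The following sketch (the interface and class names are made up for illustration) shows that the generated proxy implements every interface of the target, which is how Spring's JDK dynamic proxies behave as well:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class JdkProxyDemo {

    public interface Greeter { String greet(); }
    public interface Closer { void close(); }

    public static class Target implements Greeter, Closer {
        public String greet() { return "hello"; }
        public void close() {}
    }

    public static void main(String[] args) {
        Target target = new Target();
        // Delegate every invocation straight to the target; Spring's JDK proxies
        // work similarly, running the advice chain before delegating.
        InvocationHandler handler = (proxy, method, methodArgs) -> method.invoke(target, methodArgs);
        Object proxy = Proxy.newProxyInstance(
                JdkProxyDemo.class.getClassLoader(),
                target.getClass().getInterfaces(), // all interfaces of the target are proxied
                handler);
        // The proxy implements both interfaces, just as a Spring JDK proxy would.
        System.out.println(proxy instanceof Greeter);
        System.out.println(proxy instanceof Closer);
        System.out.println(((Greeter) proxy).greet());
    }
}
```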
If you want to force the use of CGLIB proxying (for example, to proxy every method defined for the target object, not only those implemented by its interfaces), you can do so. However, you should consider the following issues:
• With CGLIB, final methods cannot be advised, as they cannot be overridden in runtime-generated subclasses.
• As of Spring 4.0, the constructor of your proxied object is NOT called twice anymore, since the CGLIB proxy instance is created through Objenesis. Only if your JVM does not allow for constructor bypassing might you see double invocations and corresponding debug log entries from Spring’s AOP support.
To force the use of CGLIB proxies, set the value of the proxy-target-class attribute of the <aop:config> element to true, as follows:
<aop:config proxy-target-class="true">
<!-- other beans defined here... -->
</aop:config>
To force CGLIB proxying when you use the @AspectJ auto-proxy support, set the proxy-target-class attribute of the <aop:aspectj-autoproxy> element to true, as follows:
<aop:aspectj-autoproxy proxy-target-class="true"/>
Multiple <aop:config/> sections are collapsed into a single unified auto-proxy creator at runtime, which applies the strongest proxy settings that any of the <aop:config/> sections (typically from different XML bean definition files) specified. This also applies to the <tx:annotation-driven/> and <aop:aspectj-autoproxy/> elements.
To be clear, using proxy-target-class="true" on <tx:annotation-driven/>, <aop:aspectj-autoproxy/>, or <aop:config/> elements forces the use of CGLIB proxies for all three of them.
5.8.1. Understanding AOP Proxies
Spring AOP is proxy-based. It is vitally important that you grasp the semantics of what that last statement actually means before you write your own aspects or use any of the Spring AOP-based aspects supplied with the Spring Framework.
Consider first the scenario where you have a plain-vanilla, un-proxied, nothing-special-about-it, straight object reference, as the following code snippet shows:
Java
public class SimplePojo implements Pojo {
public void foo() {
// this next method invocation is a direct call on the 'this' reference
this.bar();
}
public void bar() {
// some logic...
}
}
Kotlin
class SimplePojo : Pojo {
fun foo() {
// this next method invocation is a direct call on the 'this' reference
this.bar()
}
fun bar() {
// some logic...
}
}
If you invoke a method on an object reference, the method is invoked directly on that object reference, as the following image and listing show:
aop proxy plain pojo call
Java
public class Main {
public static void main(String[] args) {
Pojo pojo = new SimplePojo();
// this is a direct method call on the 'pojo' reference
pojo.foo();
}
}
Kotlin
fun main() {
val pojo = SimplePojo()
// this is a direct method call on the 'pojo' reference
pojo.foo()
}
Things change slightly when the reference that client code has is a proxy. Consider the following diagram and code snippet:
aop proxy call
Java
public class Main {
public static void main(String[] args) {
ProxyFactory factory = new ProxyFactory(new SimplePojo());
factory.addInterface(Pojo.class);
factory.addAdvice(new RetryAdvice());
Pojo pojo = (Pojo) factory.getProxy();
// this is a method call on the proxy!
pojo.foo();
}
}
Kotlin
fun main() {
val factory = ProxyFactory(SimplePojo())
factory.addInterface(Pojo::class.java)
factory.addAdvice(RetryAdvice())
val pojo = factory.proxy as Pojo
// this is a method call on the proxy!
pojo.foo()
}
The key thing to understand here is that the client code inside the main(..) method of the Main class has a reference to the proxy. This means that method calls on that object reference are calls on the proxy. As a result, the proxy can delegate to all of the interceptors (advice) that are relevant to that particular method call. However, once the call has finally reached the target object (the SimplePojo reference, in this case), any method calls that it may make on itself, such as this.bar() or this.foo(), are going to be invoked against the this reference, and not the proxy. This has important implications. It means that self-invocation is not going to result in the advice associated with a method invocation getting a chance to execute.
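The effect can be reproduced with a plain JDK dynamic proxy, no Spring involved (Pojo, SimplePojo, and the interception counter are illustrative stand-ins): the handler sees the external call to foo(), but not the internal this.bar() call.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class SelfInvocationDemo {

    public interface Pojo { void foo(); void bar(); }

    public static class SimplePojo implements Pojo {
        public void foo() {
            this.bar(); // direct call on 'this' -- bypasses any proxy
        }
        public void bar() {}
    }

    public static void main(String[] args) {
        AtomicInteger intercepted = new AtomicInteger();
        Pojo target = new SimplePojo();
        // Count each invocation that goes through the proxy, then delegate to the target.
        InvocationHandler countingHandler = (proxy, method, methodArgs) -> {
            intercepted.incrementAndGet();
            return method.invoke(target, methodArgs);
        };
        Pojo pojo = (Pojo) Proxy.newProxyInstance(
                SelfInvocationDemo.class.getClassLoader(),
                new Class<?>[] { Pojo.class },
                countingHandler);
        pojo.foo(); // foo() is intercepted; the nested this.bar() is not
        System.out.println("intercepted calls: " + intercepted.get()); // prints 1, not 2
    }
}
```

If bar() carried advice (transactional behavior, for example), the self-invoked call would silently skip it, which is exactly the trap described above.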
Okay, so what is to be done about this? The best approach (the term, “best,” is used loosely here) is to refactor your code such that the self-invocation does not happen. This does entail some work on your part, but it is the best, least-invasive approach. The next approach is absolutely horrendous, and we hesitate to point it out, precisely because it is so horrendous. You can (painful as it is to us) totally tie the logic within your class to Spring AOP, as the following example shows:
Java
public class SimplePojo implements Pojo {
public void foo() {
// this works, but... gah!
((Pojo) AopContext.currentProxy()).bar();
}
public void bar() {
// some logic...
}
}
Kotlin
class SimplePojo : Pojo {
fun foo() {
// this works, but... gah!
(AopContext.currentProxy() as Pojo).bar()
}
fun bar() {
// some logic...
}
}
This totally couples your code to Spring AOP, and it makes the class itself aware of the fact that it is being used in an AOP context, which flies in the face of AOP. It also requires some additional configuration when the proxy is being created, as the following example shows:
Java
public class Main {
public static void main(String[] args) {
ProxyFactory factory = new ProxyFactory(new SimplePojo());
factory.addInterface(Pojo.class);
factory.addAdvice(new RetryAdvice());
factory.setExposeProxy(true);
Pojo pojo = (Pojo) factory.getProxy();
// this is a method call on the proxy!
pojo.foo();
}
}
Kotlin
fun main() {
val factory = ProxyFactory(SimplePojo())
factory.addInterface(Pojo::class.java)
factory.addAdvice(RetryAdvice())
factory.isExposeProxy = true
val pojo = factory.proxy as Pojo
// this is a method call on the proxy!
pojo.foo()
}
Finally, it must be noted that AspectJ does not have this self-invocation issue because it is not a proxy-based AOP framework.
5.9. Programmatic Creation of @AspectJ Proxies
In addition to declaring aspects in your configuration by using either <aop:config> or <aop:aspectj-autoproxy>, it is also possible to programmatically create proxies that advise target objects. For the full details of Spring’s AOP API, see the next chapter. Here, we want to focus on the ability to automatically create proxies by using @AspectJ aspects.
You can use the org.springframework.aop.aspectj.annotation.AspectJProxyFactory class to create a proxy for a target object that is advised by one or more @AspectJ aspects. The basic usage for this class is very simple, as the following example shows:
Java
// create a factory that can generate a proxy for the given target object
AspectJProxyFactory factory = new AspectJProxyFactory(targetObject);
// add an aspect, the class must be an @AspectJ aspect
// you can call this as many times as you need with different aspects
factory.addAspect(SecurityManager.class);
// you can also add existing aspect instances, the type of the object supplied must be an @AspectJ aspect
factory.addAspect(usageTracker);
// now get the proxy object...
MyInterfaceType proxy = factory.getProxy();
Kotlin
// create a factory that can generate a proxy for the given target object
val factory = AspectJProxyFactory(targetObject)
// add an aspect, the class must be an @AspectJ aspect
// you can call this as many times as you need with different aspects
factory.addAspect(SecurityManager::class.java)
// you can also add existing aspect instances, the type of the object supplied must be an @AspectJ aspect
factory.addAspect(usageTracker)
// now get the proxy object...
val proxy = factory.getProxy<Any>()
See the javadoc for more information.
5.10. Using AspectJ with Spring Applications
Everything we have covered so far in this chapter is pure Spring AOP. In this section, we look at how you can use the AspectJ compiler or weaver instead of or in addition to Spring AOP if your needs go beyond the facilities offered by Spring AOP alone.
Spring ships with a small AspectJ aspect library, which is available stand-alone in your distribution as spring-aspects.jar. You need to add this to your classpath in order to use the aspects in it. Using AspectJ to Dependency Inject Domain Objects with Spring and Other Spring aspects for AspectJ discuss the content of this library and how you can use it. Configuring AspectJ Aspects by Using Spring IoC discusses how to dependency inject AspectJ aspects that are woven using the AspectJ compiler. Finally, Load-time Weaving with AspectJ in the Spring Framework provides an introduction to load-time weaving for Spring applications that use AspectJ.
5.10.1. Using AspectJ to Dependency Inject Domain Objects with Spring
The Spring container instantiates and configures beans defined in your application context. It is also possible to ask a bean factory to configure a pre-existing object, given the name of a bean definition that contains the configuration to be applied. spring-aspects.jar contains an annotation-driven aspect that exploits this capability to allow dependency injection of any object. The support is intended to be used for objects created outside of the control of any container. Domain objects often fall into this category because they are often created programmatically with the new operator or by an ORM tool as a result of a database query.
The @Configurable annotation marks a class as being eligible for Spring-driven configuration. In the simplest case, you can use it purely as a marker annotation, as the following example shows:
Java
package com.xyz.myapp.domain;
import org.springframework.beans.factory.annotation.Configurable;
@Configurable
public class Account {
// ...
}
Kotlin
package com.xyz.myapp.domain
import org.springframework.beans.factory.annotation.Configurable
@Configurable
class Account {
// ...
}
When used as a marker annotation in this way, Spring configures new instances of the annotated type (Account, in this case) by using a bean definition (typically prototype-scoped) with the same name as the fully-qualified type name (com.xyz.myapp.domain.Account). Since the default name for a bean is the fully-qualified name of its type, a convenient way to declare the prototype definition is to omit the id attribute, as the following example shows:
<bean class="com.xyz.myapp.domain.Account" scope="prototype">
<property name="fundsTransferService" ref="fundsTransferService"/>
</bean>
If you want to explicitly specify the name of the prototype bean definition to use, you can do so directly in the annotation, as the following example shows:
Java
package com.xyz.myapp.domain;
import org.springframework.beans.factory.annotation.Configurable;
@Configurable("account")
public class Account {
// ...
}
Kotlin
package com.xyz.myapp.domain
import org.springframework.beans.factory.annotation.Configurable
@Configurable("account")
class Account {
// ...
}
Spring now looks for a bean definition named account and uses that as the definition to configure new Account instances.
You can also use autowiring to avoid having to specify a dedicated bean definition at all. To have Spring apply autowiring, use the autowire property of the @Configurable annotation. You can specify either @Configurable(autowire=Autowire.BY_TYPE) or @Configurable(autowire=Autowire.BY_NAME) for autowiring by type or by name, respectively. As an alternative, it is preferable to specify explicit, annotation-driven dependency injection for your @Configurable beans through @Autowired or @Inject at the field or method level (see Annotation-based Container Configuration for further details).
Finally, you can enable Spring dependency checking for the object references in the newly created and configured object by using the dependencyCheck attribute (for example, @Configurable(autowire=Autowire.BY_NAME,dependencyCheck=true)). If this attribute is set to true, Spring validates after configuration that all properties (which are not primitives or collections) have been set.
Note that using the annotation on its own does nothing. It is the AnnotationBeanConfigurerAspect in spring-aspects.jar that acts on the presence of the annotation. In essence, the aspect says, “after returning from the initialization of a new object of a type annotated with @Configurable, configure the newly created object using Spring in accordance with the properties of the annotation”. In this context, “initialization” refers to newly instantiated objects (for example, objects instantiated with the new operator) as well as to Serializable objects that are undergoing deserialization (for example, through readResolve()).
One of the key phrases in the above paragraph is “in essence”. For most cases, the exact semantics of “after returning from the initialization of a new object” are fine. In this context, “after initialization” means that the dependencies are injected after the object has been constructed. This means that the dependencies are not available for use in the constructor bodies of the class. If you want the dependencies to be injected before the constructor bodies execute and thus be available for use in the body of the constructors, you need to define this on the @Configurable declaration, as follows:
Java
@Configurable(preConstruction = true)
Kotlin
@Configurable(preConstruction = true)
You can find more information about the language semantics of the various pointcut types in AspectJ in this appendix of the AspectJ Programming Guide.
For this to work, the annotated types must be woven with the AspectJ weaver. You can either use a build-time Ant or Maven task to do this (see, for example, the AspectJ Development Environment Guide) or load-time weaving (see Load-time Weaving with AspectJ in the Spring Framework). The AnnotationBeanConfigurerAspect itself needs to be configured by Spring (in order to obtain a reference to the bean factory that is to be used to configure new objects). If you use Java-based configuration, you can add @EnableSpringConfigured to any @Configuration class, as follows:
Java
@Configuration
@EnableSpringConfigured
public class AppConfig {
}
Kotlin
@Configuration
@EnableSpringConfigured
class AppConfig {
}
If you prefer XML based configuration, the Spring context namespace defines a convenient context:spring-configured element, which you can use as follows:
<context:spring-configured/>
Instances of @Configurable objects created before the aspect has been configured result in a message being issued to the debug log and no configuration of the object taking place.
Is there a module to mask user url with a temporary url which only last for a day and next time it generates another url for the same user?
I want to hide the users' UID/username so that no two users can talk to each other directly but can still view each other's profiles with a temporary url.
I checked the pathauto module, but it generates a permanent dynamic url, not temporary ones.
Edit: I used the uuid_link module for this and ran cron to generate uuids for users.
It's very unlikely that such a module exists; this sounds like a very specific use case that wouldn't warrant a dedicated project. You're going to have to implement your own custom module to pull this off.
You could go an aliasing route in a manner similar to pathauto, but that is likely to leave the source paths (e.g. user/XXX) accessible. A better approach is to create a simple module that:
1. Uses a hook_cron() implementation to periodically generate random hash to UID mappings and store them in a custom table.
2. Implements a hook_menu() to register a dynamic path like user/%random_hash that maps the random hash paths from #1 to a user profile page (generated in the same manner as user/%user)
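The two hooks above can be sketched in a language-agnostic way. The following Python sketch (hypothetical function names; a real Drupal implementation would persist the mapping in a database table from hook_cron()) shows the core idea of rotating hash-to-UID mappings and resolving them:

```python
import hashlib
import secrets

def rotate_user_tokens(uids, day_salt=None):
    """Build a fresh random-hash -> UID mapping; run once per day (e.g. from cron).

    Every run uses a new salt, so each run yields new hashes and
    yesterday's URLs stop resolving.
    """
    day_salt = day_salt or secrets.token_hex(16)
    mapping = {}
    for uid in uids:
        token = hashlib.sha256(f"{day_salt}:{uid}".encode()).hexdigest()[:16]
        mapping[token] = uid
    return mapping

def resolve(mapping, token):
    """Map a path component from user/<token> back to a UID, or None if unknown/expired."""
    return mapping.get(token)
```

Because the rotation runs again on the next cron pass, a token handed out yesterday simply stops resolving today.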
• yes I took this approach. I was thinking that there was already a dedicated module for this requirement. – Villie Nov 12 '15 at 3:48
Empty dash table
df1 = pd.DataFrame({"Symbol":['alpha','beta','gamma'],"Currency":['USD','USD','USD'],"Price unit":['1','1','1'],
"Trade Unit":['Kg','Kg','Kg'], "Lot Size":['15','1','1'],"Tick Size":['1','.01','.01']})
dash_table.DataTable(id ="Con_specs",columns = [{"name":x ,"id": 'specs_{}'.format(i)} for i,x in enumerate(df1.columns)],
data = df1.to_dict('records'))
[screenshot: the table renders with column headings but empty cells]
The table content is not showing up in the cells. Please let me know where I am going wrong.
Hi @PG55 and welcome to the forum!
“id” needs to be column names in df1. “name” can be whatever you want to show as column headings in the table.
So this should work:
columns = [{"name":'specs_{}'.format(i),"id": x } for i,x in enumerate(df1.columns)],
Or if you just want the column heading like in your image:
`columns = [{"name":i,"id": i } for i in df1.columns]`
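The reason the table came up empty is that DataTable looks up each cell by the column's id in the row dicts produced by df.to_dict('records'). Here is a plain-Python sketch of that lookup (no Dash required), using a trimmed-down version of the example DataFrame:

```python
# Row dicts as df.to_dict('records') would produce for two of the columns
rows = [{"Symbol": "alpha", "Currency": "USD"},
        {"Symbol": "beta", "Currency": "USD"}]

col_names = ["Symbol", "Currency"]

# Original (broken): ids like "specs_0" never match the row keys
broken = [{"name": x, "id": f"specs_{i}"} for i, x in enumerate(col_names)]

# Fixed: id must be the actual column/key name
fixed = [{"name": i, "id": i} for i in col_names]

def cells_found(columns, rows):
    """Count how many (row, column) lookups succeed, as DataTable would do."""
    return sum(1 for r in rows for c in columns if c["id"] in r)

print(cells_found(broken, rows))  # 0 -> empty table
print(cells_found(fixed, rows))   # 4 -> every cell filled
```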
Voila!! Thanks a lot!!
|
__label__pos
| 0.988146 |
ASA2k - 11 months ago
HTML Question
How to make an Image display/hide using a button
I'm using the following code to make an image appear when a button is pressed:
<html>
<head>
<title>Image Display Test</title>
<script type="text/javascript">
<!--
function showImage(){
document.getElementById('loadingImage').style.visibility="visible";
}
-->
</script>
</head>
<body>
<input type="button" value="Show Button" onclick="showImage();"/>
<img id="loadingImage" src="pickups1.png" style="visibility:hidden"/>
</body>
</html>
So far it works, but the image then remains on the screen. Is there any way to make the image disappear if the button is clicked again?
Answer
You can check whether the image is visible and then hide/show it accordingly:
var elem = document.getElementById('loadingImage');
if (elem.style.visibility === 'visible') {
elem.style.visibility = 'hidden';
} else {
elem.style.visibility = 'visible';
}
or using a shortcut (the ternary operator):
var elem = document.getElementById('loadingImage');
elem.style.visibility = elem.style.visibility === 'visible' ? 'hidden' : 'visible';
|
__label__pos
| 0.998922 |
KB User's Guide - Advanced HTML - Create an HTML Calculator
This document is a demonstration of how to insert fields in a KB document that allow users to perform simple calculations.
Example
=
0
Creating a Basic Calculator
Paste the following into the HTML Body field editor:
<form id="calc" oninput="updateOutput()">
<input name="x" value="0" type="number">
<select name="op" onchange="updateOutput()">
<option value="0">+</option>
<option value="1">−</option>
<option value="2">×</option>
<option value="3">÷</option>
</select>
<input name="y" value="0" type="number">
<div class="equals"> = </div>
<output name="z" for="x y">0</output>
</form>
<br style="clear:both;">
Then, paste the following into the JavaScript/CSS field:
<!-- JAVASCRIPT -->
<script>
function updateOutput() {
//calculate
//get form
var form = document.getElementById("calc");
//get output
var out = form.elements["z"];
//get two numbers
//replace all instances of parseFloat with parseInt if needing to force integers
var num1 = parseFloat(form.elements["x"].value);
var num2 = parseFloat(form.elements["y"].value);
//get operator
var operator = parseFloat(form.elements["op"].value);
//set output depending on operator
switch(operator)
{
//add
case 0: out.value = num1+num2;
break;
//subtract
case 1: out.value = num1-num2;
break;
//multiply
case 2: out.value = num1*num2;
break;
//divide
case 3: out.value = (num1/num2).toFixed(2);//only two digits after decimal place
break;
default:
break;
}
}
</script>
<!-- CSS -->
<style>
/*number inputs*/
#page-content input[type="number"], #viewDraft input[type="number"] {
width:50px; height:30px;
text-align:center;
margin:3px;
float:left;}
/*select and equals elements*/
#page-content select, #page-content .equals, #viewDraft select, #viewDraft .equals {
margin:3px;
float:left;}
/*output element*/
#page-content output, #viewDraft output {
display:block;
border:1px solid #333333;
border-radius:5px;
min-width:25px; height:25px;
margin:3px; padding:2px;
text-align:center;
background:#000000;
color:#ffffff;
float:left; }
</style>
Customizing the Calculator
Below are some common ways to customize the calculator. For each example, assume that any surrounding code not shown is present and unchanged from the original format above.
• Note: The example calculators serve to illustrate changes to the input fields only; they will not process actual calculations. This method only allows one working calculator on the page.
Remove a Calculation Type
You can limit your calculator to only certain operations, e.g. only addition and subtraction. To do so, simply delete or comment out the options you'd like to exclude from the HTML.
In the example below, the options for multiplication and division have been removed:
<select name="op" onchange="updateOutput()">
<option value="0">+</option>
<option value="1">−</option>
</select>
Example:
=
0
Define Accepted Value Increments
You can specify the numeric increments that you expect to be input by end users, e.g. numbers that are a multiple of .25. Please note that this calculator does not contain a validator, so it will not actually reject any inputs; rather, it will simply highlight the input box in red if the user inputs something other than the desired increment (though this effect does not exist in all browsers).
In the example below, both input fields are set to look for values in increments of .25:
<form id="calc" oninput="updateOutput()">
<input name="x" value="0" type="number" step=".25" >
<select name="op" onchange="updateOutput()">
<option value="0">+</option>
<option value="1">−</option>
<option value="2">×</option>
<option value="3">÷</option>
</select>
<input name="y" value="0" type="number" step=".25" >
<div class="equals"> = </div>
<output name="z" for="x y">0</output>
</form>
<br style="clear:both;">
Example:
=
0
Define a Default Input Value
You can set one or both input fields to default to a value other than zero, e.g. you want the first input field to initially display a value of 10 when the page first loads. Please note that this will not prevent the user from changing the default value, nor will it enforce any limits on the accepted value (i.e. the user can both increase and decrease the input value from the default).
In the example below, the first input field is set to default to a value of 10, and the second input field is set to default to a value of 5:
<form id="calc" oninput="updateOutput()">
<input name="x" value="10" type="number">
<select name="op" onchange="updateOutput()">
<option value="0">+</option>
<option value="1">−</option>
<option value="2">×</option>
<option value="3">÷</option>
</select>
<input name="y" value="5" type="number">
<div class="equals"> = </div>
<output name="z" for="x y">0</output>
</form>
<br style="clear:both;">
Example:
=
0
Set Maximum/Minimum Restrictions on Input
You can specify minimum and/or maximum accepted values for your input fields, e.g. only allow values between 1 and 100. Please note that this calculator does not contain a validator, so it will not actually reject any inputs; rather, it will simply highlight the input box in red if the user inputs something outside of the desired range (though this effect does not exist in all browsers).
In the example below, the first input field is set to accept any number between 1 and 100, while the second input field is set to accept any number higher than 5 (with no upper limit):
<form id="calc" oninput="updateOutput()">
<input name="x" value="0" type="number" min="1" max="100" >
<select name="op" onchange="updateOutput()">
<option value="0">+</option>
<option value="1">−</option>
<option value="2">×</option>
<option value="3">÷</option>
</select>
<input name="y" value="0" type="number" min="5" >
<div class="equals"> = </div>
<output name="z" for="x y">0</output>
</form>
<br style="clear:both;">
Example:
=
0
Process Values as Integers (Remove Decimals)
You can set the calculator to ignore any input decimals and only process values as integers, e.g. process 2.1+3.2 as 2+3. Please note that this simply removes any digits after the decimal point; it does not round to the nearest integer. Additionally, this only pertains to inputs, so if you are allowing division, the output can still include a decimal.
To do this, make the following changes to these lines of JavaScript (leaving all other JavaScript as-is):
//replace all instances of parseFloat with parseInt if needing to force integers
var num1 = parseInt(form.elements["x"].value);
var num2 = parseInt(form.elements["y"].value);
//get operator
var operator = parseInt(form.elements["op"].value);
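The truncating behavior described above (drop the fraction, don't round) can be illustrated outside JavaScript as well. A small Python sketch mimicking what parseInt does to a plain numeric string:

```python
import math

def parse_int_like(value):
    """Mimic JS parseInt on a plain numeric string: keep only the integer
    part, truncating toward zero rather than rounding."""
    return math.trunc(float(value))

print(parse_int_like("2.9"))   # 2, not 3
print(parse_int_like("-2.9"))  # -2, not -3
print(parse_int_like("2.1") + parse_int_like("3.2"))  # 5, i.e. 2 + 3
```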
This calculator is based on the following guide: http://www.developerdrive.com/2012/06/creating-a-web-page-calculator-using-the-html5-output-element/
Keywords: web calculate calculations css javascript addition subtraction multiplication multiply division divide
Doc ID: 60307
Owner: Leah S.
Group: KB User's Guide
Created: 2016-01-29 14:24 CDT
Updated: 2019-04-25 16:38 CDT
Sites: KB Demo, KB User's Guide
|
__label__pos
| 0.96429 |
2008-03-16
Duplicating an image with ImageMagick (PerlMagick)
I couldn't work out how to copy an image into a separate object so that changes to the copy wouldn't affect the original, and after some searching I finally found the answer: Clone().
With the code below, minify.jpg and magnify.jpg come out properly shrunk and enlarged, and you can confirm that $image itself is unaffected.
#!/usr/bin/perl
use strict;
use warnings;
use Image::Magick;
# Load the source image
my $image = Image::Magick->new;
$image->Read('source.jpg');
# Clone the image object twice
my $imageA = $image->Clone();
my $imageB = $image->Clone();
# Shrink one copy and enlarge the other
$imageA->Minify();
$imageB->Magnify();
# Write each image to a file
$imageA->Write('jpg:minify.jpg');
$imageB->Write('jpg:magnify.jpg');
$image->Write('jpg:original.jpg');
print "content-type:text/plain\n\n";
print 'ok';
exit;
posted by kazina | Comment(0) | TrackBack(0)
Azure OpenAI Generative AI with Weaviate
New Documentation
The model provider integration pages are new and still undergoing improvements. We appreciate any feedback on this forum thread.
Weaviate's integration with Azure OpenAI's APIs allows you to access their models' capabilities directly from Weaviate.
Configure a Weaviate collection to use an Azure OpenAI generative AI model, and Weaviate will perform retrieval augmented generation (RAG) using the specified model and your Azure OpenAI API key.
More specifically, Weaviate will perform a search, retrieve the most relevant objects, and then pass them to the Azure OpenAI generative model to generate outputs.
RAG integration illustration
Requirements
Weaviate configuration
Your Weaviate instance must be configured with the Azure OpenAI generative AI integration (generative-openai) module.
For Weaviate Cloud (WCD) users
This integration is enabled by default on Weaviate Cloud (WCD) serverless instances.
For self-hosted users
API credentials
You must provide a valid Azure OpenAI API key to Weaviate for this integration. Go to Azure OpenAI to sign up and obtain an API key.
Provide the API key to Weaviate using one of the following methods:
• Set the AZURE_APIKEY environment variable that is available to Weaviate.
• Provide the API key at runtime, as shown in the examples below.
import weaviate
from weaviate.classes.init import Auth
import os
# Recommended: save sensitive data as environment variables
azure_key = os.getenv("AZURE_APIKEY")
headers = {
"X-Azure-Api-Key": azure_key,
}
client = weaviate.connect_to_weaviate_cloud(
cluster_url=weaviate_url, # `weaviate_url`: your Weaviate URL
auth_credentials=Auth.api_key(weaviate_key), # `weaviate_key`: your Weaviate API key
headers=headers
)
# Work with Weaviate
client.close()
Configure collection
Configure a Weaviate collection to use an OpenAI generative AI model as follows:
Select the model to be used by specifying the Azure resource name.
from weaviate.classes.config import Configure
client.collections.create(
"DemoCollection",
generative_config=Configure.Generative.azure_openai(
resource_name="<azure-resource-name>",
deployment_id="<azure-deployment-id>",
)
# Additional parameters not shown
)
Retrieval augmented generation
After configuring the generative AI integration, perform RAG operations, either with the single prompt or grouped task method.
Single prompt
Single prompt RAG integration generates individual outputs per search result
To generate text for each object in the search results, use the single prompt method.
The example below generates outputs for each of the n search results, where n is specified by the limit parameter.
When creating a single prompt query, use braces {} to interpolate the object properties you want Weaviate to pass on to the language model. For example, to pass on the object's title property, include {title} in the query.
collection = client.collections.get("DemoCollection")
response = collection.generate.near_text(
query="A holiday film", # The model provider integration will automatically vectorize the query
single_prompt="Translate this into French: {title}",
limit=2
)
for obj in response.objects:
print(obj.properties["title"])
print(f"Generated output: {obj.generated}") # Note that the generated output is per object
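Conceptually, the {title}-style placeholders behave like ordinary template substitution applied to each retrieved object's properties. The following Python sketch only illustrates that idea; it is not Weaviate's actual implementation:

```python
def fill_prompt(template, properties):
    """Substitute {property} placeholders with values from a single object's
    properties -- roughly how a single prompt is expanded per search result."""
    return template.format_map(properties)

# One prompt is generated per retrieved object
objects = [{"title": "Home Alone"}, {"title": "Elf"}]
prompts = [fill_prompt("Translate this into French: {title}", obj) for obj in objects]
for p in prompts:
    print(p)
```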
Grouped task
Grouped task RAG integration generates one output for the set of search results
To generate one text for the entire set of search results, use the grouped task method.
In other words, when you have n search results, the generative model generates one output for the entire group.
collection = client.collections.get("DemoCollection")
response = collection.generate.near_text(
query="A holiday film", # The model provider integration will automatically vectorize the query
grouped_task="Write a fun tweet to promote readers to check out these films.",
limit=2
)
print(f"Generated output: {response.generated}") # Note that the generated output is per query
for obj in response.objects:
print(obj.properties["title"])
References
Generative parameters
Configure the following generative parameters to customize the model behavior.
from weaviate.classes.config import Configure
client.collections.create(
"DemoCollection",
generative_config=Configure.Generative.azure_openai(
resource_name="<azure-resource-name>",
deployment_id="<azure-deployment-id>",
# # These parameters are optional
# frequency_penalty=0,
# max_tokens=500,
# presence_penalty=0,
# temperature=0.7,
# top_p=0.7,
# base_url="<custom-azure-url>"
)
# Additional parameters not shown
)
For further details on these parameters, consult the Azure OpenAI API documentation.
Available models
See the Azure OpenAI documentation for a list of available models and their regional availability.
Further resources
Other integrations
Code examples
Once the integrations are configured at the collection, the data management and search operations in Weaviate work identically to any other collection. See the following model-agnostic examples:
• The how-to: manage data guides show how to perform data operations (i.e. create, update, delete).
• The how-to: search guides show how to perform search operations (i.e. vector, keyword, hybrid) as well as retrieval augmented generation.
References
Questions and feedback
If you have any questions or feedback, let us know in the user forum.
Corona SDK: unable to delete multiple newImage objects
The code is as follows:
NumCount = 1
function MeTouch(e)
if e.phase == "ended" then
if NumCount > 5 then
NumCount = 1
end
if PngGroup then
PngGroup:removeSelf()
PngGroup = nil
end
for i = 1,NumCount do
PngGroup = display.newImage("apple.png")
if i > 3 then
PngGroup.x = centerX + (50 * i/3)
PngGroup.y = 70 + (50 * (i/3))
else
PngGroup.x = centerX + (50 * i)
PngGroup.y = 70;
end
end
NumCount = NumCount +1
end
end
Runtime:addEventListener( "touch", MeTouch )
Each touch deletes the original image and loads a new one, but a single touch event may load multiple images.
For example, with the variable NumCount = 2, the code fragment is as follows:
for i = 1,NumCount do
PngGroup = display.newImage("apple.png")
Testing shows that loading two images works fine, but when deleting, only the last one is removed; the fragment is as follows:
PngGroup:removeSelf()
PngGroup = nil
In other words, although the image objects all use the name PngGroup, they are in fact two distinct objects, so the name PngGroup alone cannot be used to delete multiple objects.
The solution is to store each PngGroup in a table, for example:
local tmpTable = {} -- create a table
for i = 1,NumCount do
PngGroup = display.newImage("apple.png")
table.insert(tmpTable,PngGroup) -- add the object to the table
end
To delete all PngGroup objects, simply remove the objects in the table one by one.
for i = #tmpTable, 1, -1 do
local child = table.remove(tmpTable, i) -- Remove from table
if child ~= nil then
child:removeSelf()
child = nil
end
end
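The same track-then-remove pattern can be sketched in Python for comparison (the Sprite class below is a hypothetical stand-in for a Corona display object):

```python
class Sprite:
    """Stand-in for a display object with a removeSelf() method."""
    def __init__(self):
        self.alive = True

    def remove_self(self):
        self.alive = False

spawned = []
for _ in range(3):           # spawn several objects under one variable name
    obj = Sprite()
    spawned.append(obj)      # remember each one, not just the last

removed = []
# Remove them all; iterate in reverse so indices stay valid while popping
for i in range(len(spawned) - 1, -1, -1):
    child = spawned.pop(i)
    child.remove_self()
    removed.append(child)

print(len(spawned))  # 0 -> every object was removed, not only the last
```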
Popular posts from this blog
A Simple Shell Script Tutorial
1. Overview
In many situations we need a fixed set of commands that can be repeated or that make decisions; storing these commands in a text file and handing them to the shell for execution is what we call a script.
Shell scripts are usually given the .sh extension. Although file extensions are not required on Linux, having one makes these files easier to manage.
Suppose we have a shell script named test.sh; first, write the following content with a text editor:
#!/bin/bash
echo Hello World
The first line is required; it defines which shell you want to use. Linux offers many shells,
such as ksh and bash, but their syntax differs, so we first need to declare which shell to use.
The echo on the second line prints a string; by default it displays the string that follows, "Hello World", on the screen.
After saving test.sh, you can run it in one of the following ways:
1. Type sh test.sh directly
2. Change the permissions of test.sh to make it executable:
chmod a+x test.sh
Then run it directly:
./test.sh
In a shell script, "#" marks a comment; everything after the # is treated as a comment and ignored by the program.
For example:
#pwd
ls -l
The shell only executes ls -l and does not execute pwd.
A ";" separates commands, for example:
pwd;ls -l
pwd
ls -l
Both forms execute pwd and then ls -l.
2. Using Variables
In shell scripts, all variables are treated as strings, so there is no need to declare a variable type.
Defining a variable and using it in the shell look different.
For example, define a variable id, set its value to 2013001, and then print the variable's value:
id=2013001 -> no "$" prefix when defining a variable
echo $id -> a "$" prefix is required when using a variable
Note that there must be no spaces on either side of the equals sign, or an error will occur.
Here is another example:
dir=/home/oracle
ls $dir
Here we set the variable dir to /home/oracle, and then use the ls command with the variable dir.
The command becomes ls /home/oracle, so all files in that directory are listed.
Let's look at one more example, showing how to use a variable to define another variable:
$ tmppath=/tmp
$ tmpfile=$tmppath/abc.txt
$ ec…
How to insert a line break in Line
When typing text in the desktop version of Line and needing a line break, I used to write the text in Notepad first and paste it in, which gave me line breaks, but doing it that way seems to defeat Line's convenience.
So I looked it up, and it turns out there is a setting that makes Enter insert a line break instead of sending the message.
After completing the setting, use Alt+Enter to send a message, and Enter will insert a line break.
Using tasklist and taskkill to kill a process on Windows
Windows 7 / Windows 8 kill process
On Linux, killing a process is usually done with ps combined with kill.
For example: ps -ef | grep [PROCESS NAME]
kill -9 [PID]
On Windows, you would normally open Task Manager to force-quit an application, but if you want to script it, you must switch to the command-line equivalents.
TASKLIST [/S system [/U username [/P [password]]]]
[/M [module] | /SVC | /V] [/FI filter] [/FO format] [/NH]
TASKKILL [/S system [/U username [/P [password]]]]
{ [/FI filter] [/PID processid | /IM imagename] } [/T] [/F]
(tasklist: query processes; taskkill: kill processes)
For example, to kill a running Notepad (notepad):
1. Query Notepad's process information
C:\> tasklist |find /i "notepad.exe"
notepad.exe 6092 Console 1 5,832 K
2. From the output above, Notepad's PID is 6092
C:\> taskkill /f /PID 6092
SUCCESS: The process with PID 6092 has been terminated.
taskkill parameters used:
/f: force-terminate the process
/PID: specify the PID of the process to terminate
3. You can also kill the process directly by image name
C:\ taskkill /f /im notepad.exe
SUCCESS: The process "notepad.exe" with PID 6092 has been terminated.
Below is a script I tested myself that can kill multiple instances of the same program, for example when three Notepads are open at once:
@echo off
for /f "tokens=2 delims= " %%c in ('tasklist /FI "imagename eq notepad.exe" /FO table /NH&…
wangshl.dll
Process name: Imaging voor Windows 95
Application using this process: Imaging voor Windows 95
Recommended: Check your system for invalid registry entries.
What is wangshl.dll doing on my computer?
wangshl.dll is a WANGSHL DLL This process is still being reviewed. If you have some information about it feel free to send us an email at pl[at]uniblue[dot]com
Non-system processes like wangshl.dll originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues.
Is wangshl.dll harmful?
wangshl.dll has not been assigned a security rating yet.
wangshl.dll is unrated
Can I stop or remove wangshl.dll?
Most non-system processes that are running can be stopped because they are not involved in running your operating system. Scan your system now to identify unused processes that are using up valuable resources. wangshl.dll is used by 'Imaging voor Windows 95'.This is an application created by 'Wang Laboratories, Inc.'. To stop wangshl.dll permanently uninstall 'Imaging voor Windows 95' from your system. Uninstalling applications can leave invalid registry entries, accumulating over time.
Is wangshl.dll CPU intensive?
This process is not considered CPU intensive. However, running too many processes on your system may affect your PC’s performance. To reduce system overload, you can use the Microsoft System Configuration Utility to manually find and disable processes that launch upon start-up.
Why is wangshl.dll giving me errors?
Process related issues are usually related to problems encountered by the application that runs it. A safe way to stop these errors is to uninstall the application and run a system scan to automatically identify any PC issues.
changeset 9476:c827ad8c1101
8022447: Fix doclint warnings in java.awt.image Reviewed-by: darcy
author prr
date Tue, 06 Aug 2013 17:12:37 -0700
parents fe04f40cf469
children 9314c199003d
files src/share/classes/java/awt/image/BufferStrategy.java src/share/classes/java/awt/image/BufferedImage.java src/share/classes/java/awt/image/ByteLookupTable.java src/share/classes/java/awt/image/ColorModel.java src/share/classes/java/awt/image/DirectColorModel.java src/share/classes/java/awt/image/ImageProducer.java src/share/classes/java/awt/image/IndexColorModel.java src/share/classes/java/awt/image/MemoryImageSource.java src/share/classes/java/awt/image/MultiPixelPackedSampleModel.java src/share/classes/java/awt/image/PixelGrabber.java src/share/classes/java/awt/image/RGBImageFilter.java src/share/classes/java/awt/image/ShortLookupTable.java src/share/classes/java/awt/image/SinglePixelPackedSampleModel.java src/share/classes/java/awt/image/WritableRaster.java
diffstat 14 files changed, 44 insertions(+), 43 deletions(-) [+]
--- a/src/share/classes/java/awt/image/BufferStrategy.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/BufferStrategy.java Tue Aug 06 17:12:37 2013 -0700
@@ -55,7 +55,7 @@
* Alternatively, the contents of the back buffer can be copied, or
* <i>blitted</i> forward in a chain instead of moving the video pointer.
* <p>
- * <pre>
+ * <pre>{@code
* Double buffering:
*
* *********** ***********
@@ -72,7 +72,7 @@
* * * <------ * * <----- * *
* *********** *********** ***********
*
- * </pre>
+ * }</pre>
* <p>
* Here is an example of how buffer strategies can be created and used:
* <pre><code>
--- a/src/share/classes/java/awt/image/BufferedImage.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/BufferedImage.java Tue Aug 06 17:12:37 2013 -0700
@@ -602,12 +602,12 @@
* the raster has been premultiplied with alpha.
* @param properties <code>Hashtable</code> of
* <code>String</code>/<code>Object</code> pairs.
- * @exception <code>RasterFormatException</code> if the number and
+ * @exception RasterFormatException if the number and
* types of bands in the <code>SampleModel</code> of the
* <code>Raster</code> do not match the number and types required by
* the <code>ColorModel</code> to represent its color and alpha
* components.
- * @exception <code>IllegalArgumentException</code> if
+ * @exception IllegalArgumentException if
* <code>raster</code> is incompatible with <code>cm</code>
* @see ColorModel
* @see Raster
@@ -927,7 +927,7 @@
* each color component in the returned data when
* using this method. With a specified coordinate (x, y) in the
* image, the ARGB pixel can be accessed in this way:
- * </p>
+ * <p>
*
* <pre>
* pixel = rgbArray[offset + (y-startY)*scansize + (x-startX)]; </pre>
@@ -1131,7 +1131,7 @@
* @return an {@link Object} that is the property referred to by the
* specified <code>name</code> or <code>null</code> if the
* properties of this image are not yet known.
- * @throws <code>NullPointerException</code> if the property name is null.
+ * @throws NullPointerException if the property name is null.
* @see ImageObserver
* @see java.awt.Image#UndefinedProperty
*/
@@ -1144,7 +1144,7 @@
* @param name the property name
* @return an <code>Object</code> that is the property referred to by
* the specified <code>name</code>.
- * @throws <code>NullPointerException</code> if the property name is null.
+ * @throws NullPointerException if the property name is null.
*/
public Object getProperty(String name) {
if (name == null) {
@@ -1196,7 +1196,7 @@
* @param h the height of the specified rectangular region
* @return a <code>BufferedImage</code> that is the subimage of this
* <code>BufferedImage</code>.
- * @exception <code>RasterFormatException</code> if the specified
+ * @exception RasterFormatException if the specified
* area is not contained within this <code>BufferedImage</code>.
*/
public BufferedImage getSubimage (int x, int y, int w, int h) {
@@ -1388,7 +1388,7 @@
* @param tileY the y index of the requested tile in the tile array
* @return a <code>Raster</code> that is the tile defined by the
* arguments <code>tileX</code> and <code>tileY</code>.
- * @exception <code>ArrayIndexOutOfBoundsException</code> if both
+ * @exception ArrayIndexOutOfBoundsException if both
* <code>tileX</code> and <code>tileY</code> are not
* equal to 0
*/
@@ -1558,7 +1558,7 @@
* @return <code>true</code> if the tile specified by the specified
* indices is checked out for writing; <code>false</code>
* otherwise.
- * @exception <code>ArrayIndexOutOfBoundsException</code> if both
+ * @exception ArrayIndexOutOfBoundsException if both
* <code>tileX</code> and <code>tileY</code> are not equal
* to 0
*/
--- a/src/share/classes/java/awt/image/ByteLookupTable.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/ByteLookupTable.java Tue Aug 06 17:12:37 2013 -0700
@@ -171,7 +171,7 @@
* @exception ArrayIndexOutOfBoundsException if <code>src</code> is
* longer than <code>dst</code> or if for any element
* <code>i</code> of <code>src</code>,
- * <code>(src[i]&0xff)-offset</code> is either less than
+ * {@code (src[i]&0xff)-offset} is either less than
* zero or greater than or equal to the length of the
* lookup table for any band.
*/
--- a/src/share/classes/java/awt/image/ColorModel.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/ColorModel.java Tue Aug 06 17:12:37 2013 -0700
@@ -692,12 +692,12 @@
* <code>DataBuffer.TYPE_INT</code>.
* @param inData an array of pixel values
* @return the value of the green component of the specified pixel.
- * @throws <code>ClassCastException</code> if <code>inData</code>
+ * @throws ClassCastException if <code>inData</code>
* is not a primitive array of type <code>transferType</code>
- * @throws <code>ArrayIndexOutOfBoundsException</code> if
+ * @throws ArrayIndexOutOfBoundsException if
* <code>inData</code> is not large enough to hold a pixel value
* for this <code>ColorModel</code>
- * @throws <code>UnsupportedOperationException</code> if this
+ * @throws UnsupportedOperationException if this
* <code>tranferType</code> is not supported by this
* <code>ColorModel</code>
*/
--- a/src/share/classes/java/awt/image/DirectColorModel.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/DirectColorModel.java Tue Aug 06 17:12:37 2013 -0700
@@ -642,12 +642,12 @@
* @param inData the specified pixel
* @return the alpha component of the specified pixel, scaled from
* 0 to 255.
- * @exception <code>ClassCastException</code> if <code>inData</code>
+ * @exception ClassCastException if <code>inData</code>
* is not a primitive array of type <code>transferType</code>
- * @exception <code>ArrayIndexOutOfBoundsException</code> if
+ * @exception ArrayIndexOutOfBoundsException if
* <code>inData</code> is not large enough to hold a pixel value
* for this <code>ColorModel</code>
- * @exception <code>UnsupportedOperationException</code> if this
+ * @exception UnsupportedOperationException if this
* <code>tranferType</code> is not supported by this
* <code>ColorModel</code>
*/
@@ -1055,7 +1055,7 @@
* begin retrieving the color and alpha components
* @return an <code>int</code> pixel value in this
* <code>ColorModel</code> corresponding to the specified components.
- * @exception <code>ArrayIndexOutOfBoundsException</code> if
+ * @exception ArrayIndexOutOfBoundsException if
* the <code>components</code> array is not large enough to
* hold all of the color and alpha components starting at
* <code>offset</code>
@@ -1097,9 +1097,9 @@
* and alpha components
* @return an <code>Object</code> representing an array of color and
* alpha components.
- * @exception <code>ClassCastException</code> if <code>obj</code>
+ * @exception ClassCastException if <code>obj</code>
* is not a primitive array of type <code>transferType</code>
- * @exception <code>ArrayIndexOutOfBoundsException</code> if
+ * @exception ArrayIndexOutOfBoundsException if
* <code>obj</code> is not large enough to hold a pixel value
* for this <code>ColorModel</code> or the <code>components</code>
* array is not large enough to hold all of the color and alpha
--- a/src/share/classes/java/awt/image/ImageProducer.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/ImageProducer.java Tue Aug 06 17:12:37 2013 -0700
@@ -100,11 +100,11 @@
* <code>ImageProducer</code> should respond by executing
* the following minimum set of <code>ImageConsumer</code>
* method calls:
- * <pre>
+ * <pre>{@code
* ic.setHints(TOPDOWNLEFTRIGHT | < otherhints >);
* ic.setPixels(...); // As many times as needed
* ic.imageComplete();
- * </pre>
+ * }</pre>
* @param ic the specified <code>ImageConsumer</code>
* @see ImageConsumer#setHints
*/
--- a/src/share/classes/java/awt/image/IndexColorModel.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/IndexColorModel.java Tue Aug 06 17:12:37 2013 -0700
@@ -98,6 +98,7 @@
* Index values greater than or equal to the map size, but less than
* 2<sup><em>n</em></sup>, are undefined and return 0 for all color and
* alpha components.
+ * </a>
* <p>
* For those methods that use a primitive array pixel representation of
* type <code>transferType</code>, the array length is always one.
--- a/src/share/classes/java/awt/image/MemoryImageSource.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/MemoryImageSource.java Tue Aug 06 17:12:37 2013 -0700
@@ -37,7 +37,7 @@
* uses an array to produce pixel values for an Image. Here is an example
* which calculates a 100x100 image representing a fade from black to blue
* along the X axis and a fade from black to red along the Y axis:
- * <pre>
+ * <pre>{@code
*
* int w = 100;
* int h = 100;
@@ -52,12 +52,12 @@
* }
* Image img = createImage(new MemoryImageSource(w, h, pix, 0, w));
*
- * </pre>
+ * }</pre>
* The MemoryImageSource is also capable of managing a memory image which
* varies over time to allow animation or custom rendering. Here is an
* example showing how to set up the animation source and signal changes
* in the data (adapted from the MemoryAnimationSourceDemo by Garth Dickie):
- * <pre>
+ * <pre>{@code
*
* int pixels[];
* MemoryImageSource source;
@@ -96,7 +96,7 @@
* }
* }
*
- * </pre>
+ * }</pre>
*
* @see ImageProducer
*
--- a/src/share/classes/java/awt/image/MultiPixelPackedSampleModel.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/MultiPixelPackedSampleModel.java Tue Aug 06 17:12:37 2013 -0700
@@ -52,14 +52,14 @@
* <code>x, y</code> from <code>DataBuffer</code> <code>data</code>
* and storing the pixel data in data elements of type
* <code>dataType</code>:
- * <pre>
+ * <pre>{@code
* int dataElementSize = DataBuffer.getDataTypeSize(dataType);
* int bitnum = dataBitOffset + x*pixelBitStride;
* int element = data.getElem(y*scanlineStride + bitnum/dataElementSize);
* int shift = dataElementSize - (bitnum & (dataElementSize-1))
* - pixelBitStride;
* int pixel = (element >> shift) & ((1 << pixelBitStride) - 1);
- * </pre>
+ * }</pre>
*/
public class MultiPixelPackedSampleModel extends SampleModel
--- a/src/share/classes/java/awt/image/PixelGrabber.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/PixelGrabber.java Tue Aug 06 17:12:37 2013 -0700
@@ -35,7 +35,7 @@
* The PixelGrabber class implements an ImageConsumer which can be attached
* to an Image or ImageProducer object to retrieve a subset of the pixels
* in that image. Here is an example:
- * <pre>
+ * <pre>{@code
*
* public void handlesinglepixel(int x, int y, int pixel) {
* int alpha = (pixel >> 24) & 0xff;
@@ -65,7 +65,7 @@
* }
* }
*
- * </pre>
+ * }</pre>
*
* @see ColorModel#getRGBdefault
*
@@ -165,8 +165,8 @@
* accumulated in the default RGB ColorModel. If the forceRGB
* parameter is true, then the pixels will be accumulated in the
* default RGB ColorModel anyway. A buffer is allocated by the
- * PixelGrabber to hold the pixels in either case. If (w < 0) or
- * (h < 0), then they will default to the remaining width and
+ * PixelGrabber to hold the pixels in either case. If {@code (w < 0)} or
+ * {@code (h < 0)}, then they will default to the remaining width and
* height of the source data when that information is delivered.
* @param img the image to retrieve the image data from
* @param x the x coordinate of the upper left corner of the rectangle
@@ -233,10 +233,10 @@
* behaves in the following ways, depending on the value of
* <code>ms</code>:
* <ul>
- * <li> If <code>ms</code> == 0, waits until all pixels are delivered
- * <li> If <code>ms</code> > 0, waits until all pixels are delivered
+ * <li> If {@code ms == 0}, waits until all pixels are delivered
+ * <li> If {@code ms > 0}, waits until all pixels are delivered
* as timeout expires.
- * <li> If <code>ms</code> < 0, returns <code>true</code> if all pixels
+ * <li> If {@code ms < 0}, returns <code>true</code> if all pixels
* are grabbed, <code>false</code> otherwise and does not wait.
* </ul>
* @param ms the number of milliseconds to wait for the image pixels
--- a/src/share/classes/java/awt/image/RGBImageFilter.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/RGBImageFilter.java Tue Aug 06 17:12:37 2013 -0700
@@ -39,7 +39,7 @@
* The only method which needs to be defined to create a useable image
* filter is the filterRGB method. Here is an example of a definition
* of a filter which swaps the red and blue components of an image:
- * <pre>
+ * <pre>{@code
*
* class RedBlueSwapFilter extends RGBImageFilter {
* public RedBlueSwapFilter() {
@@ -56,7 +56,7 @@
* }
* }
*
- * </pre>
+ * }</pre>
*
* @see FilteredImageSource
* @see ImageFilter
--- a/src/share/classes/java/awt/image/ShortLookupTable.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/ShortLookupTable.java Tue Aug 06 17:12:37 2013 -0700
@@ -114,7 +114,7 @@
* @exception ArrayIndexOutOfBoundsException if <code>src</code> is
* longer than <code>dst</code> or if for any element
* <code>i</code> of <code>src</code>,
- * <code>(src[i]&0xffff)-offset</code> is either less than
+ * {@code (src[i]&0xffff)-offset} is either less than
* zero or greater than or equal to the length of the
* lookup table for any band.
*/
@@ -165,7 +165,7 @@
* @exception ArrayIndexOutOfBoundsException if <code>src</code> is
* longer than <code>dst</code> or if for any element
* <code>i</code> of <code>src</code>,
- * <code>(src[i]&0xffff)-offset</code> is either less than
+ * {@code (src[i]&0xffff)-offset} is either less than
* zero or greater than or equal to the length of the
* lookup table for any band.
*/
--- a/src/share/classes/java/awt/image/SinglePixelPackedSampleModel.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/SinglePixelPackedSampleModel.java Tue Aug 06 17:12:37 2013 -0700
@@ -57,10 +57,10 @@
* The following code illustrates extracting the bits of the sample
* representing band <code>b</code> for pixel <code>x,y</code>
* from DataBuffer <code>data</code>:
- * <pre>
+ * <pre>{@code
* int sample = data.getElem(y * scanlineStride + x);
* sample = (sample & bitMasks[b]) >>> bitOffsets[b];
- * </pre>
+ * }</pre>
*/
public class SinglePixelPackedSampleModel extends SampleModel
--- a/src/share/classes/java/awt/image/WritableRaster.java Tue Aug 06 17:11:29 2013 -0700
+++ b/src/share/classes/java/awt/image/WritableRaster.java Tue Aug 06 17:12:37 2013 -0700
@@ -372,13 +372,13 @@
* integral type and less than or equal to 32 bits in size, then calling
* this method is equivalent to executing the following code for all
* <code>x,y</code> addresses valid in both Rasters.
- * <pre>
+ * <pre>{@code
* Raster srcRaster;
* WritableRaster dstRaster;
* for (int b = 0; b < srcRaster.getNumBands(); b++) {
* dstRaster.setSample(x, y, b, srcRaster.getSample(x, y, b));
* }
- * </pre>
+ * }</pre>
* Thus, when copying an integral type source to an integral type
* destination, if the source sample size is greater than the destination
* sample size for a particular band, the high order bits of the source
Use Google Apps Script to collect form responses on a static website and get notified on Slack, all without setting up a server.
Collect form responses using Google Apps Script
Ravgeet Dhillon
Updated on Oct 08, 2021 in Development
⏱ 17 min read
Blog banner for Collect form responses using Google Apps Script
Most of the time you are building static websites, yet almost all of them have components like forms and comments where you want to collect user responses. Setting up a dedicated server for a backend and database is an option, but it comes with a cost overhead. Thankfully, you can set up this entire system using a serverless architecture.
In this blog, you will learn to use Google Apps Script as the backend and a Google Spreadsheet for data persistence to collect form responses from your static website. This approach lets you set up forms on GitHub Pages, Netlify, or any other hosting provider. As a bonus, you will also add a webhook to notify your Leads team on Slack whenever a new form is filled in.
Creating a Google Spreadsheet
1. Create a new Google Spreadsheet and name the sheet as Sheet1.
2. Add the following fields in the top row of your spreadsheet. Make sure you name them correctly because you will be using these names in your HTML form.
Format for Google Spreadsheet for collecting form responses
Google Spreadsheet to collect form responses
Creating a Slack Bot
To notify your Leads team on Slack, you need to create a Slack bot.
1. Go to https://api.slack.com/apps and click Create New App.
2. Give your app a name and choose your Development Workspace from the dropdown.
3. Once you have created an app, you need to turn on the Incoming Webhook feature and create a new webhook URL.
4. Create a new webhook by clicking Add New Webhook to Workspace and choose the channel you want the notifications to be posted in. Your webhook URL should look like this https://hooks.slack.com/services/T0160Uxxxxx/B0187Nxxxxx/4AZixxswHVxxxxxxxxxxxxxx. If you have access to a terminal, you can test the webhook as well by sending a POST request using cURL.
curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello, World!"}' https://hooks.slack.com/services/T0160Uxxxxx/B0187Nxxxxx/4AZixxswHVxxxxxxxxxxxxxx
Setup name of your Slack app and development workspace
Creating a Google Apps Script Project
Now comes the most important and interesting part of the project. Google Apps Script is written in JavaScript, so even if you only have basic JavaScript knowledge, setting it up will be a breeze.
1. Create a new project at https://script.google.com/home.
2. Create a new script file from File > New > Script, name it Form.gs, and add the following code to it:
// new property service
const SCRIPT_PROP = PropertiesService.getScriptProperties()
function doGet(e) {
return handleResponse(e)
}
function handleResponse(e) {
// this prevents concurrent access overwriting data
// you want a public lock, one that locks for all invocations
const lock = LockService.getPublicLock()
lock.waitLock(30000) // wait 30 seconds before conceding defeat
try {
// next set where you write the data - you could write to multiple/alternate destinations
const doc = SpreadsheetApp.openById(SCRIPT_PROP.getProperty('key'))
const sheet = doc.getSheetByName(SHEET_NAME)
const headRow = 1
const headers = sheet.getRange(1, 1, 1, sheet.getLastColumn()).getValues()[0]
const nextRow = sheet.getLastRow() + 1 // get next row
const row = []
// loop through the header columns
for (const i in headers) {
switch (headers[i]) {
case 'timestamp':
row.push(new Date())
break
default:
const str = e.parameter[headers[i]] || ''
row.push(str.trim().substring(0, CHARACTER_LIMIT))
break
}
}
// add data to the spreadsheet
sheet.getRange(nextRow, 1, 1, row.length).setValues([row])
// send thanks email to customer
const emailStatus = notifyCustomer(row)
// send notification to slack
postToSlack(row, emailStatus)
// return json success results
return ContentService
.createTextOutput(JSON.stringify({'result': 'success'}))
.setMimeType(ContentService.MimeType.JSON)
}
catch (e) {
// if error then log it and return response
Logger.log(e)
return ContentService
.createTextOutput(JSON.stringify({'result': 'error'}))
.setMimeType(ContentService.MimeType.JSON)
}
finally {
// release lock
lock.releaseLock()
}
}
function setup() {
const doc = SpreadsheetApp.getActiveSpreadsheet()
SCRIPT_PROP.setProperty('key', doc.getId())
}
Don't forget to run the setup function once. It connects your project with the Google Spreadsheet and grants the right permissions.
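The header-driven loop in handleResponse can be tried outside Apps Script. The sketch below (plain Node.js; the `e.parameter` object is a stub, not the real Apps Script event) shows how each spreadsheet header picks its value out of the request parameters:

```javascript
// Stand-alone sketch of the row-building loop in handleResponse.
// `e.parameter` is stubbed here; in Apps Script it is populated from
// the query string of the GET request.
const CHARACTER_LIMIT = 1000;
const headers = ['timestamp', 'name', 'email', 'phone', 'service', 'notes'];
const e = {
  parameter: {
    name: 'Jane', email: '[email protected]',
    phone: '12345', service: 'Web', notes: 'Hi '
  }
};

const row = [];
for (const h of headers) {
  if (h === 'timestamp') {
    row.push(new Date());             // filled in server-side, not by the form
  } else {
    const str = e.parameter[h] || ''; // missing fields become empty strings
    row.push(str.trim().substring(0, CHARACTER_LIMIT));
  }
}

console.log(row.slice(1)); // [ 'Jane', '[email protected]', '12345', 'Web', 'Hi' ]
```

Because the loop is driven by the header row, adding a new column to the spreadsheet plus a matching form field is all it takes to capture an extra value.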
1. Create a new script file from File > New > Script and name it Email.gs. In this file, you will write the code that sends an email back to the customer on your behalf.
2. Add the following code to this script file:
function notifyCustomer(data) {
const name = data[1]
const message = 'Hi ' + name + '. Your response has been received. We will get in touch with you shortly.'
// check if you can send an email
if (MailApp.getRemainingDailyQuota() > 0) {
const email = data[2]
// send the email on your behalf
MailApp.sendEmail({
to: email,
subject: 'Thanks for contacting RavSam Web Solutions.',
body: message
})
return true
}
return false
}
1. Create a new script file from File > New > Script and name it Slack.gs.
2. In this file, you will write the code that notifies your Leads team on form submission.
3. Add the following code to this script file:
function postToSlack(data, emailSent) {
const name = data[1]
const email = data[2]
const phone = data[3]
const service = data[4]
const notes = data[5]
// check if email was sent
const emailStatus = emailSent ? 'Email Sent' : 'Email Not Sent'
// create a message format
const payload = {
'attachments': [{
'text': 'Lead Details',
'fallback': 'New Customer Lead has been received',
'pretext': 'New Customer Lead has been received',
'fields': [
{
'title': 'Full Name',
'value': name,
'short': true
},
{
'title': 'Phone',
'value': '<tel:' + phone + '|' + phone + '>',
'short': true
},
{
'title': 'Service',
'value': service,
'short': true
},
{
'title': 'Email',
'value': emailStatus + ' to <mailto:' + email + '|' + email + '>',
'short': false
},
{
'title': 'Notes',
'value': notes,
'short': false
},
],
'mrkdwn_in': ['text', 'fields'],
'footer': 'Developed by <https://www.ravsam.in|RavSam Web Solutions>',
}]
}
// prepare the data to be sent with POST request
const options = {
'method' : 'post',
'contentType' : 'application/json',
'payload' : JSON.stringify(payload)
}
// send a post request to your webhook URL
return UrlFetchApp.fetch(webhookUrl, options)
}
1. Finally, create a script file from File > New > Script and name it Variables.gs. In this file, you will store the constant variables referenced across the project.
2. Add the following code to this script file:
// enter sheet name where data is to be written below
const SHEET_NAME = 'Sheet1'
// set a max character limit for each form field
const CHARACTER_LIMIT = 1000
// slack bot webhook URL
const webhookUrl = 'https://hooks.slack.com/services/T0160Uxxxxx/B0187Nxxxxx/4AZixxswHVxxxxxxxxxxxxxx'
So your project is ready, but there is still one last thing to do: you need to deploy the project as a Web App so that you can access it over the Internet.
Deploying a Google Apps Script Project
At this point, you are done with code and now is the time to deploy your project as a Web App.
1. Visit Publish > Deploy as Web App...
2. Make sure you set Who has access to the app: to Anyone, even anonymous. This is important so that you can make an unauthenticated call to your Web App.
3. Finally, deploy the web app and copy the web app’s URL. The URL should look like this https://script.google.com/macros/s/AKfycbxSF9Y4V4qmZLxUbcaMB0Xhmjwqxxxxxxxxxxxxxxxxxxxxxxx/exec
Deploy the Google Apps Script project as a web app
Setting up an HTML Form
On your static website, add the following Bootstrap form:
<form id="contact-form" class="needs-validation" role="form" novalidate>
<div class="row">
<div class="col-md-6">
<div class="form-group">
<input type="text" name="name" class="form-control" placeholder="Full Name" required>
</div>
</div>
<div class="col-md-6">
<div class="form-group">
<input type="email" name="email" class="form-control" placeholder="Email" required>
</div>
</div>
<div class="col-md-6">
<div class="form-group">
<input type="tel" name="phone" class="form-control" placeholder="Mobile No." required>
</div>
</div>
<div class="col-md-6">
<div class="form-group">
<input type="text" name="service" class="form-control" placeholder="Service" required>
</div>
</div>
<div class="col-12">
<div class="form-group">
<textarea class="form-control rounded" rows="8" name="notes" placeholder="Any Notes" required></textarea>
</div>
</div>
<div class="col-12 mt-3">
<button class="btn btn-primary" type="submit" name="submit">Submit request -></button>
</div>
</div>
</form>
You need to make sure that the form fields' names are the same as the headers in the Google Spreadsheet.
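A quick sanity check — a hedged sketch, with the header list assumed to mirror your spreadsheet's top row — is to compare the form's field names against the expected headers:

```javascript
// Sketch: verify the HTML form's field names match the spreadsheet headers.
// In a browser, formFieldNames would come from the DOM, e.g.:
//   [...document.querySelectorAll('#contact-form [name]')].map(el => el.name)
const sheetHeaders = ['timestamp', 'name', 'email', 'phone', 'service', 'notes'];
const formFieldNames = ['name', 'email', 'phone', 'service', 'notes', 'submit'];

// 'timestamp' is generated server-side and 'submit' is just the button,
// so exclude them before comparing.
const missing = sheetHeaders
  .filter(h => h !== 'timestamp')
  .filter(h => !formFieldNames.includes(h));

console.log(missing.length === 0 ? 'All form fields match.' : missing);
```

Any header reported as missing would land in the spreadsheet as an empty cell, so it is worth running this check whenever you add a column.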
Setting up Javascript
Finally, you need to add some JavaScript to make an AJAX call to the Google Apps Script:
<script src="/assets/jquery/dist/jquery.min.js"></script>
<script src="/assets/popper.js/dist/umd/popper.min.js"></script>
<script src="/assets/bootstrap/dist/js/bootstrap.min.js"></script>
<script>
// for validating the forms
(function () {
'use strict';
window.addEventListener(
'load', function () {
const formObject = $('#contact-form');
const form = formObject[0];
if (form != undefined) {
form.addEventListener(
'submit',
function (event) {
const submitBtn = $('button[name="submit"]')[0];
submitBtn.disabled = true;
submitBtn.innerHTML = 'Submitting request...';
if (form.checkValidity() === false) {
submitBtn.disabled = false;
submitBtn.innerHTML = 'Submit request ->';
event.preventDefault();
event.stopPropagation();
}
else {
const url = 'https://script.google.com/macros/s/AKfycbxSF9Y4V4qmZLxUbcaMB0Xhmjwqxxxxxxxxxxxxxxxxxxxxxxx/exec';
const redirectSuccessUrl = '/thanks/';
const redirectFailedUrl = '/failed/';
const xhr = $.ajax({
url: url,
method: 'GET',
dataType: 'json',
data: formObject.serialize(),
success: function (data) {
submitBtn.disabled = false;
submitBtn.innerHTML = 'Submit request ->';
$(location).attr('href', redirectSuccessUrl);
},
error: function (data) {
submitBtn.disabled = false;
submitBtn.innerHTML = 'Submit request ->';
$(location).attr('href', redirectFailedUrl);
},
});
event.preventDefault();
event.stopPropagation();
}
form.classList.add('was-validated');
},
false
);
}
},
false
);
})();
</script>
If the form submission is successful, your customer will be redirected to the Thanks page. However, if anything goes wrong, your customer will be redirected to the Failed page.
Results
Reload your website to reflect the changes you made. Fill in the form with all the required details and submit it.
Fill out the sample website form
Fill out the website form
Hurray! You have received a notification sent by your Customer Leads bot.
Notification received in the Slack channel
Next, check your Google Spreadsheet as well and see whether the form response was recorded or not. You can see in the screenshot below that the form response has been successfully stored in the spreadsheet.
Form response recorded in Google Spreadsheet
Using this workflow, you can get in touch with your customers as soon as possible and convert the leads into happy clients. Moreover, there is no need to set up servers and databases for collecting form responses on your website. You can use the same approach to collect comments on your blog posts as well.
ABOUT AUTHOR
Ravgeet Dhillon
Ravgeet is a Co-Founder and Developer at RavSam. He helps startups, businesses, open-source organizations with Digital Product Development and Technical Content Writing. He is a fan of Jamstack and likes to work with React, Vue, Flutter, Strapi, Node, Laravel and Python. He loves to play outdoor sports and cycles every day.
Axiomatic Approach
solakis
Active member
Dec 9, 2012
380
Given the following axioms:
For all a,b,c we have:
1) a+b = b+a
2) a+(b+c)=(a+b)+c
3) ab = ba
4) a(bc) = (ab)c
5) a(b+c) =ab+ac
NOTE: here the multiplication sign (.) between the variables has been omitted
6) There is a number called 0 such that for all a,
a+0 = a
7) For each a, there is a number -a such that,
a+(-a) = 0
8) There is a number called 1 (different from 0) such that for any a,
a1 = a
9) For each a which is different from 0, there exists a number called 1/a such that:
a.(1/a) = 1.
For any numbers a,b
10) either a<b or a>b or a=b
11) if a<b and b<c then a<c
12) if a<b then a+c<b+c for any c
13) if a<b and c>0 then ac<bc for any c.
Then by using only the axioms stated above prove:
A) a0 = 0, B) 0<1
In trying to prove A I followed the proof shown below:
1) 0+0=0 .......................by using axiom 6
2) (0+0)x =0x.................by multiplying both sides by x
3) 0x+0x =0x.....................by using axiom 5
4) (0x+0x) +(-0x) =0x+(-0x) ..................by adding (-0x) to both sides
5) 0x +[0x+(-0x)]= 0x+(-0x)....................by using axiom 2
6) 0x +0 = 0 .....................by using axiom 7
7) 0x = 0 ..........................by using axiom 6
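Written compactly, the same chain of equalities (same axioms, same order) is:

$$\begin{aligned}
0x &= (0+0)x = 0x + 0x &&\text{axioms 6, 3, 5}\\
0 &= 0x + (-0x) = (0x + 0x) + (-0x) &&\text{axiom 7, then substitution}\\
&= 0x + \bigl(0x + (-0x)\bigr) = 0x + 0 = 0x &&\text{axioms 2, 7, 6}
\end{aligned}$$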
For B ,I could not show a proof based only on the axioms stated above
Is my proof for A 100% correct?
Evgeny.Makarov
Well-known member
MHB Math Scholar
Jan 30, 2012
2,533
3)0x+ox =0x.....................by using axiom 5
You need to use commutativity first. Also note that adding the same term to both sides or multiplying both sides by the same number is, in fact, an application of an equality axiom. It is sometimes considered a part of first-order logic and is thus not listed. Finally, note that -0x in the proof is -(0x), not (-0)x.
Yes, with the correction about commutativity before distributivity, the proof of A) is correct. I'll think about a proof of B).
solakis
Active member
Dec 9, 2012
380
You need to use commutativity first. Also note that adding the same term to both sides or multiplying both sides by the same number is, in fact, an application of an equality axiom. It is sometimes considered a part of first-order logic and is thus not listed. Finally, note that -0x in the proof is -(0x), not (-0)x.
Yes, with the correction about commutativity before distributivity, the proof of A) is correct. I'll think about a proof of B).
Itried the following proof for B:
1) 1<0 or 1>0........................by axioms 8 and 10
2) For 1>0 there is nothing to prove
3) For 1<0 we have :
4) 1+(-1)< 0+(-1).........................by axiom 12
5) 1+(-1)< (-1) +0.........................by axiom 1
6) 0< -1........................................by axioms 6 and 7
7) Now here we need a definition which is missing.
The definition is : for a,b a<b <=> b>a
8) So according to the above definition we have:
-1>0
9) -1>0
10) 1(-1)< 0(-1) ........................by axiom 13
11) 1(-1)< 0..............................by part A (0A=0)
12) 1(-1)+1<0+1........................BY axiom 12
13) 1.1+ 1(-1)<1+0....................by axioms 1,3 and 8
14) 1(1+(-1))< 1.........................by axioms 5 and 6
15) 1.0 <1.................................by axiom 7
16) 0<1.....................................by part A
Evgeny.Makarov
Well-known member
MHB Math Scholar
Jan 30, 2012
2,533
I believe your proof of B is correct. Well done.
solakis
Active member
Dec 9, 2012
380
I believe your proof of B is correct. Well done.
That means that the department of Mathematics of the University of the Witwatersrand in Johannesburg in the year 1965 gave a wrong question in the final exams of Mathematics I.
They repeated the same mistake the year 1966 ,although they slightly changed the order axioms as shown below:
1) exactly one of a>b,a<b or a=b holds
2) if a>b ,b>c then a>c
3) if c>0 ,a>b then ac>bc
4) if a>b then a+c>b+c for any c
Here they also asked for a proof of :
A) a0 = 0 and B) 1>0
again based only on the axioms stated above
I think the set of order axioms that will produce a proof of 1>0 without the definition a>b <=> b<a is the following:
1) exactly one of a>b,b>a or a=b holds
2) if a>b ,b>c then a>c
3) if c>0 ,a>b then ac>bc
4) if a>b then a+c>b+c for any c
Evgeny.Makarov
Well-known member
MHB Math Scholar
Jan 30, 2012
2,533
Yes, you need a connection between $a<b$ and $b>a$. Maybe the authors of the problem believed that the equivalence is assumed. Similarly to how in the context of a field one often writes $x/y$ meaning $x\cdot y^{-1}$, maybe they assumed that $b>a$ is defined to mean $a<b$. This approach is used, for example, in the Coq theorem prover in regard to real numbers.
opacity of images over video
gpakat's picture
When placing a graphic with an irregular form created with a transparent channel (e.g. the colour white defined as transparent in a PNG file) over an interactive video on a slide, the irregular form displays correctly over the interactive video, because the white channel becomes transparent (when the opacity setting is set to 0). However, the same graphic placed over an interactive video using the interactive video editor displays as a rectangular shape, because the transparent channel shows up as white. This is undesirable for creating interesting interactions. The same is true for Text items placed over interactive video. Why is this discrepancy there, and can it be fixed?
falcon's picture
Thank you for reporting. I think it's a missing feature in Interactive Video: it doesn't support setting a background color/alpha and always uses white.
Can't get the Vuex data in a component — how do I fix it?
Bounty: 5 beans [Solved] Resolved on 2018-10-19 15:41
Problem description
I'm building the topic detail page for cnodejs. cnode-article-vx.vue and author.vue share a common parent component, detail.vue. cnode-article-vx.vue tells Vuex to send a request for the topic data, which is saved in Vuex.
In author.vue, I want to read the author from the topic data saved in Vuex and then send a request for that user's data. But I get an error when accessing the author.
Relevant code
cnode-article-vx.vue
methods: {
// tell Vuex to fetch the article detail and pass the author name and replies upward
getDetailVuex(){
this.$store.commit('showSpinMu', {show: true});
let id = this.$route.params.id;
let accesstoken = Cookies.get('accesstoken');
if(accesstoken){
//登录了
this.$store.dispatch('getDetailAc', {id, accesstoken});
}else{
//没登录
this.$store.dispatch('getDetailAc', {id});
}
}
}
vuex
actions: {
// fetch the article detail
getDetailAc(store, payload) {
let accesstoken = payload.accesstoken ? payload.accesstoken : '';
return getDetail(payload.id, { accesstoken }).then(({ data }) => {
store.commit('detailMu', { detail: data.data });
});
}
},
mutations: {
// article detail
detailMu(state, payload) {
state.detail = payload.detail;
state.spinShow = false;
console.log(state.detail.author.loginname);
}
}
author.vue
computed: {
// get the author name from Vuex
authorName(){
return this.$store.state.detail.author.loginname;
}
}
What result did you expect, and what error message do you actually see?
In Vuex I can print the loginname, but author.vue reports the following error:
Uncaught (in promise) TypeError: Cannot read property 'loginname' of undefined
How can I solve this?
zanetti's homepage zanetti | Beginner Level 1 | Beans: 128
Asked: 2018-10-18 20:48
Best answer
The child component's computed property runs first, before your state.detail has been assigned — at that point it has no author or loginname property.
So inside authorName, commit a value before returning, e.g.:
computed: {
// get the author name from Vuex
authorName(){
this.$store.commit('detailMu', {detail: {author: {loginname: "456"}}});
return this.$store.state.detail.author.loginname;
}
}
Earned beans: 5
你风致 | Veteran Level 4 | Beans: 2211 | 2018-10-19 11:21
That is indeed the cause, but loginname is obtained by cnode-article-vx.vue telling Vuex to send the request. I tried reading loginname from Vuex in a computed property of cnode-article-vx.vue, passing it up to detail.vue and back down to author.vue, and committing first as you suggested — it still errors.
It feels like Vuex isn't a good fit for this case. In the end I sent the request directly in cnode-article-vx.vue to fetch the topic data and passed loginname down to author.vue through the components — that works and is easier. Thanks for your answer~
zanetti | Beans: 128 (Beginner Level 1) | 2018-10-19 15:40
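For reference — an editor's sketch, not part of the original thread — the computed property can instead guard against the not-yet-loaded state, so it returns a placeholder until the async action commits:

```javascript
// Stand-in for the Vuex store: detail is null until the fetch resolves.
const store = { state: { detail: null } };

// Defensive computed property: tolerate the initial empty state.
function authorName() {
  const detail = store.state.detail;
  return (detail && detail.author && detail.author.loginname) || '';
}

console.log(authorName()); // '' before the fetch resolves

// Simulate the mutation committing the fetched topic data.
store.state.detail = { author: { loginname: 'nswbmw' } };
console.log(authorName()); // 'nswbmw' after the mutation runs
```

In a real component the same guard goes inside `computed`, and the template simply re-renders once the mutation fills state.detail — no commit inside the getter needed.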
Other answers (1)
Your loginname is undefined — are you sure the field name isn't wrong?
徒然喜欢你 | Beans: 1741 (Shrimp Level 3) | 2018-10-19 08:40
The field name is correct; the topic data received looks like this:
"author":{
"loginname":"nswbmw",
"avatar_url":"https://avatars0.githubusercontent.com/u/4279697?v=4&s=120"
}
And I can print the username in the mutation:
console.log(state.detail.author.loginname);
Upvote (0) Downvote (0) zanetti | Beans: 128 (Beginner Level 1) | 2018-10-19 14:02