Asyncio-compatible timeout context manager
async_timeout_python310-4.0.2-3-any
The context manager is useful in cases when you want to apply timeout logic around a block of code, or when asyncio.wait_for() is not suitable. It is also much faster than asyncio.wait_for(), because timeout does not create a new task.
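A minimal sketch of the pattern this package provides, using only the Python standard library. The real package exposes `async_timeout.timeout`; the `Timeout` class below is a simplified stand-in for illustration, showing why no extra task is needed: a timer simply cancels the current task.

```python
import asyncio

# Simplified sketch of an asyncio timeout context manager. Instead of
# wrapping the awaited code in a new task (as asyncio.wait_for does),
# it schedules a cancellation of the *current* task.
class Timeout:
    def __init__(self, delay):
        self._delay = delay
        self._timed_out = False

    async def __aenter__(self):
        loop = asyncio.get_running_loop()
        task = asyncio.current_task()
        # Arrange for the current task to be cancelled after `delay` seconds.
        self._handle = loop.call_later(self._delay, self._on_timeout, task)
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self._handle.cancel()
        # Translate our own cancellation into a TimeoutError.
        if self._timed_out and exc_type is asyncio.CancelledError:
            raise asyncio.TimeoutError from exc
        return False

    def _on_timeout(self, task):
        self._timed_out = True
        task.cancel()


async def main():
    try:
        async with Timeout(0.1):
            await asyncio.sleep(10)  # takes far longer than allowed
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(main()))  # -> timed out
```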
Name
async_timeout_python310
Repository
HaikuPorts
Repository Source
haikuports_x86_64
Version
4.0.2-3
Download Size
6.8 KB
Source available
No
Categories
None
Version Views
2
Decentralized and Centralized Systems
Answer to question 1
> Decentralized and Centralized Systems
> Decentralized system
In computing terms, a decentralized network architecture distributes workloads among several machines, instead of relying on a single central server. This trend has evolved from the rapid advancements of desktop and laptop computers, which now offer performance well beyond the needs of most business applications; meaning the extra compute power can be put to use for distributed processing.
As its name implies, decentralized systems don’t have one central owner. Instead, they use multiple central owners, each of which usually stores a copy of the resources users can access.
A decentralized system can be just as vulnerable to crashes as a centralized one. However, it is by design more tolerant of faults: when one or more central owners or servers fail, the others can continue to provide data access to users.
Resources remain active as long as at least one of the central servers continues to operate. Usually, this means that system owners can repair faulty servers and address any other problems while the system itself continues to run as usual.
Server crashes in a decentralized system may affect the performance and limit access to some data. But in terms of overall system uptime, this system offers a big improvement over a centralized system.
However, decentralized systems are still prone to the same security and privacy risks to users as centralized systems. And while their fault tolerance is higher, this comes at a price. Maintaining a decentralized system is usually more expensive.
> Centralized System
In a centralized system, all users are connected to a central network owner or “server”. The central owner stores data, which other users can access, and also user information. This user information may include user profiles, user-generated content, and more. A centralized system is easy to set up and can be developed quickly.
Answer to question 2
> Comparison between Centralized system and Decentralized system
Ultimately, network centralization was created as a way to improve efficiency and take advantage of potential economies of scale. Decentralization, on the other hand, looks to improve the speed and flexibility of your network by localizing processing power to the individual user. Here are some of the differences between them:
> Maintenance
Centralized system: Low
Decentralized system: Moderate
> Scalability
Centralized system: Low
Decentralized system: Moderate
> Development
Centralized system: High
Decentralized system: Moderate
> Revolution
Centralized system: Low
Decentralized system: High
Answer to question 3
> Advantage and disadvantage of Decentralized system and centralized system
1. Decentralization may be, in part, merely the result of circumstances.
2. Centralization helps to coordinate work and put all activities on track. There are certain special circumstances forcing managers to reserve authority and centralize decision-making power.
3. Actions can be focused on what needs to be done, and duplication can be put to rest.
4. It becomes easy to get all units and people to do the same thing, in the same way, at the same time, without wasting resources and energy.
5. Centralization is often the best way to handle emergency situations, such as declining sales, cutting costs, using resources productively, or pushing a competitor to the wall.
However, centralization makes it extremely difficult for managers to process the bundles of data regarding products, markets, costs, finances, people etc., in quick time and take decisions in an appropriate manner. The managers are burdened with a great amount of detailed and exhaustive work; they have to work for painfully long hours and take a stuffed briefcase (of problems) home with them.
Centralization forces top management to possess a broad view that may be beyond their capacity (Carvel). The vast amount of power given to a few people may be abused (power corrupts absolutely, and may be used as a ‘whip’). More dangerously, the fortunes of the organization depend on the health and vitality of top management people. The organization is highly vulnerable to what happens to its dynamic and talented top management people. Centralization floods communication lines to a few individuals at the top of the organization.
As a result, the speed of communications upward and decisions processes are slow. Lastly, centralization kills the initiative, self-reliance and judgment of lower level personnel. It inhibits the development of operating personnel.
> Disadvantages of Decentralisation:
Decentralization is not a sure bet. If not carried out properly, it can prove to be a troublesome exercise, in the following ways:
1. Conflict:
Decentralization puts increased pressure on divisional heads to realize profits at any cost. This may lead to inter-divisional rivalry leading to bitter fights. Each divisional head might be tempted to build his own empire at the cost of others. Problems of coordination and control may also arise when such ‘mini-companies’ or ‘little empires’ exist within an organisation.
2. Cost:
Decentralization results in a duplication of staff effort. To be independent, each division should have access to purchasing, personnel, marketing and other specialists. As a result each division is expected to carry a large group of staff specialists at enormous cost.
3. Some Disadvantages of Decentralization Relate to the Profit-Centre Concept:
Often capable and competent individuals may not be available to run the divisionalised organizations.
4. Freedom of action may lead to diversity of decisions. Many a time, remote control from headquarters may prove ineffective as the enterprise grows.
5. Decentralization demands training programmes that may be time-consuming and highly expensive.
> Advantages of Centralized System
1. Easy to physically secure. It is easy to secure and service the server and client nodes by virtue of their location.
2. Smooth and elegant personal experience. A client has a dedicated system that he uses (for example, a personal computer), and the company has a similar system that can be modified to suit custom needs.
3. Dedicated resources (memory, CPU cores, etc.).
4. More cost-efficient for systems up to a certain limit. Central systems take less money to set up, so they have an edge when small systems have to be built.
5. Quick updates are possible. There is only one machine to update.
6. Easy detachment of a node from the system. Just remove the connection of the client node from the server and, voila, the node is detached.
> Disadvantages of Centralized System –
1. Highly dependent on network connectivity. The system can fail if the nodes lose connectivity, as there is only one central node.
2. No graceful degradation of the system; failure of the entire system is abrupt.
3. Less possibility of data backup. If the server node fails and there is no backup, you lose the data straight away.
4. Difficult server maintenance. There is only one server node, and for availability reasons it is inefficient to take it down for maintenance.
> Answer to question 4
• Centralized organizational structures rely on one individual to make decisions and provide direction for the company. Small businesses often use this structure, since the owner is responsible for the company’s business operations.
• Decentralized organizational structures often have several individuals responsible for making business decisions and running the business. Decentralized organizations rely on a team environment at different levels in the business. Individuals at each level in the business may have some autonomy to make business decisions.
Structural Advantages of Centralized Organizations
Centralized organizations can be extremely efficient regarding business decisions. Business owners typically develop the company’s mission and vision, and set objectives for managers and employees to follow when achieving these goals.
Use of Expertise in Decentralized Organizations
Decentralized organizations utilize individuals with a variety of expertise and knowledge for running various business operations. A broad-based management team helps to ensure the company has knowledgeable directors or managers to handle various types of business situations.
Structural Disadvantages of Centralized Organizations
Centralized organizations can suffer from the negative effects of several layers of bureaucracy. These businesses often have multiple management layers stretching from the owner down to frontline operations. Business owners responsible for making every decision in the company may require more time to accomplish these tasks, which can result in sluggish business operations.
Structural Disadvantages of Decentralized Organizations
Decentralized organizations can struggle with multiple individuals having different opinions on a particular business decision. As such, these businesses can face difficulties trying to get everyone on the same page when making decisions.
Answer to question 5
> A key difference between a typical database and a blockchain is the way the data is structured. A blockchain collects information together in groups, also known as blocks, that hold sets of information. Blocks have certain storage capacities and, when filled, are chained onto the previously filled block, forming a chain of data known as the “blockchain.” All new information that follows that freshly added block is compiled into a newly formed block that will then also be added to the chain once filled.
A database structures its data into tables whereas a blockchain, like its name implies, structures its data into chunks (blocks) that are chained together. This makes it so that all blockchains are databases but not all databases are blockchains. This system also inherently makes an irreversible timeline of data when implemented in a decentralized nature. When a block is filled it is set in stone and becomes a part of this timeline. Each block in the chain is given an exact timestamp when it is added to the chain.
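The block-chaining idea described above can be sketched in a few lines of Python. This is a toy illustration only: the field names and hashing scheme are simplified stand-ins, not Bitcoin's actual block format.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """A block records its data, a timestamp, and the hash of the
    previous block, chaining them so history cannot be rewritten."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "prev_hash": prev_hash,
    }
    # Hash the block's contents; later blocks will reference this value.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# Build a tiny chain: each block points at the hash of the one before it.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("tx: alice -> bob", chain[-1]["hash"]))
chain.append(make_block("tx: bob -> carol", chain[-1]["hash"]))

def is_valid(chain):
    # Every block's prev_hash must match the actual hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == chain[i - 1]["hash"]
        for i in range(1, len(chain))
    )

print(is_valid(chain))  # True

# Tampering with an earlier block breaks every later link, because the
# next block still points at the old hash.
chain[1]["data"] = "tx: alice -> mallory"
chain[1]["hash"] = hashlib.sha256(
    json.dumps({k: chain[1][k] for k in ("timestamp", "data", "prev_hash")},
               sort_keys=True).encode()
).hexdigest()
print(is_valid(chain))  # False
```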
Decentralization
For the purpose of understanding blockchain, it is instructive to view it in the context of how it has been implemented by Bitcoin. Like a database, Bitcoin needs a collection of computers to store its blockchain. For Bitcoin, this blockchain is just a specific type of database that stores every Bitcoin transaction ever made. In Bitcoin’s case, and unlike most databases, these computers are not all under one roof, and each computer or group of computers is operated by a unique individual or group of individuals.
Imagine that a company owns a server comprised of 10,000 computers with a database holding all of its clients' account information. This company has a warehouse containing all of these computers under one roof and has full control of each of these computers and all the information within them. Similarly, Bitcoin consists of thousands of computers, but each computer or group of computers that holds its blockchain is in a different geographic location, and they are all operated by separate individuals or groups of people. The computers that make up Bitcoin's network are called nodes.
In this model, Bitcoin’s blockchain is used in a decentralized way.
Thanks professor.
Troubleshooting
Use this section to troubleshoot issues with your Puppet Comply installation.
Reset your Comply password
If you forget your password, you can reset it in the user admin console.
1. SSH into your Comply node and run the following commands to retrieve the admin username and password:
kubectl exec $(kubectl get pod -l app.kubernetes.io/name=comply-auth -o jsonpath="{.items[0].metadata.name}") -- /bin/bash -c 'cat /etc/keycloak/admin-user'
kubectl exec $(kubectl get pod -l app.kubernetes.io/name=comply-auth -o jsonpath="{.items[0].metadata.name}") -- /bin/bash -c 'cat /etc/keycloak/admin-password'
2. Navigate to https://<COMPLY-HOSTNAME>/auth/admin using the FQDN of your Comply node.
3. Log in using the credentials from step 1.
4. Navigate to Users.
5. Click View all users, select the user account you want to update, and click Edit.
6. Select the Credentials tab, then reset the password.
Access logs
If you run into issues with Puppet Comply, you can download the relevant log files. The Comply logs are stored in Puppet Application Manager.
1. Log into Puppet Application Manager at https://<PUPPET-APPLICATION-MANAGER-ADDRESS>:8800.
2. Select the Troubleshoot tab, and click Analyse Comply.
3. Download the bundle of log files.
Resolve Comply domain
If the Puppet Comply gatekeeper is unable to resolve the Comply domain, try the following troubleshooting steps.
When you assign a hostname to Comply, it needs to be resolved by the pods in your Kubernetes cluster. A preflight check verifies the domain you specified in the configuration is resolvable. You must ensure that the nodes can resolve their own hostnames, through either local host mapping or a reachable DNS server.
1. To verify whether your hostname is resolvable, run the following command:
kubectl exec $(kubectl get pod -l app=kotsadm -o jsonpath="{.items[0].metadata.name}") -- /bin/sh -c 'curl -sI <hostname>'
If the hostname was resolved, the command returns an exit code 0 with no output.
If the hostname cannot be resolved, the command returns an exit code 6. Proceed to step 2 to add DNS entries.
2. To add DNS entries for CoreDNS, run the following command to open the CoreDNS configuration maps:
kubectl -n kube-system edit configmaps coredns
3. Add a hosts entry below the kubernetes block. This is where you configure the DNS entry for Comply. For example:
kubernetes cluster.local in-addr.arpa ip6.arpa {
    pods insecure
    fallthrough in-addr.arpa ip6.arpa
    ttl 30
}
hosts {
    10.23.24.25 comply.mycompany.net comply   # IP_address canonical_hostname [aliases...]
    fallthrough
}
prometheus :9153
4. Run the command from step 1 to verify whether the DNS entry was updated:
kubectl exec $(kubectl get pod -l app=kotsadm -o jsonpath="{.items[0].metadata.name}") -- /bin/sh -c 'curl -sI <hostname>'
5. Re-run the preflight checks.
Resolve failed assessor upgrade
If an upgrade of the assessor has failed on one of your nodes, try the following troubleshooting step.
If the upgrade of an assessor on a node fails, the node is marked in red on the Inventory page. Failures may be due to network issues. If that is the case, Comply attempts to upgrade the node once connectivity returns. An hourly background task runs to check if nodes have been upgraded or not. If a node does not upgrade and remains red on the Inventory page, run the Puppet agent. If the upgrade continues to fail, see the Puppet agent logs for more information.
Hi readers, how are you all? In today’s article, I’m going to explain the differences between HDD, SSD, and SSHD. If you are going to build a PC or purchase a laptop, you must be aware of the components you are going to use. One of the key components of your computer is storage.
To store our data, some kind of storage is required, whether internal or external. Internal storage is built into our phones, laptops, and computers; for external storage, we can expand a phone’s capacity with a micro SD card, and we use external hard disks with computers.
You must have heard the names HDD, SSD, and SSHD before or after buying a computer. Although all of them are meant for storage, if you want to build a PC you should know the difference between them, which one is better, and why.
What is SSD?
The full name of SSD is solid-state drive, which means there are no moving parts in it. It is also a non-volatile device: power is not required to retain the data stored in it.
Once the data is stored and the power is cut, the stored data is not lost. Both RAM and SSDs are chip-based memory, but RAM is volatile memory while an SSD is non-volatile memory.
Since it does not have any mechanical parts, it is also better than an HDD in terms of power consumption, and it performs much faster than an HDD. The only drawback is that its price is much higher than an HDD’s.
How do SSDs work?
This is flash memory. Flash memory was invented by Fujio Masuoka while working at Toshiba. As noted above, data in flash memory is not lost even when power is cut.
SSDs are built mostly from transistors. They use NAND-based flash memory, which is made up of floating-gate transistors. This design consists of two gates: a floating gate and a control gate.
To program a single memory cell, a voltage is applied to the control gate so that electrons are drawn toward it, and the floating gate traps them. The electrons then remain trapped near the floating gate for years. In this way, a cell (one bit) is charged. Later, MLC and TLC (multi-level cell and triple-level cell) devices were developed, which store multiple bits of information in a single cell.
Advantages of SSD
• As all the data is saved in chips, its speed is very high: it can read and write fast.
• Uses very little power.
• Small in size and very light in weight.
Disadvantages Of SSD
• Much more expensive than an HDD.
• Very high-capacity SSDs are rare.
• Not easily found in the regular market.
What is HDD?
The full name of HDD is hard disk drive, which is an electromagnetic, non-volatile storage device. It consists of moving parts: a spinning platter, coated with a layer of magnetic material, that revolves around a spindle.
To read/write data, a head mounted at the end of an actuator arm moves above the platter. The drive's performance depends on its RPM (revolutions per minute). An HDD has two main types of components:
1. Mechanical components
These are the components we can see: the spindle, magnetic platter, actuator, read/write head, and motor.
2. Electrical components
All magnetic hard drives have a microprocessor, along with associated (auxiliary) memory, mounted on a printed circuit board.
How do HDDs work?
Whenever data has to be written, an electrical current in the write head magnetizes the disk surface, encoding the data in binary. To read the data, the process is reversed: the magnetic surface induces an electrical current in the read head. HDDs are connected to the computer’s motherboard via cables, and the binary 0s and 1s are decoded by the computer.
What is SATA?
SATA means Serial Advanced Technology Attachment. It is an interface that is better and faster than PATA (Parallel ATA), and it is the standard through which data is transferred between the HDD and the computer.
Colour code factor of HDDs
Whenever we go to buy an HDD or build a PC, our attention is usually only on storage capacity. But you should know that HDDs come in six types, colour-coded according to their performance and intended use.
The WD (Western Digital) colour codes and descriptions are given below:
WD Blue: These drives are for general consumers and general-purpose use, for people who do light work rather than heavy workloads. They are currently available in the market in capacities ranging from 500 GB to 1 TB.
WD Black: These drives are better for those who do heavy tasks such as mixing, animation, and graphics work. They offer the best performance for such users.
WD Green: This drive is meant for secondary storage. Its performance is quite slow, so it is used to store data that is not accessed frequently. Its specialty is that drives of this colour do not corrupt quickly.
WD Purple: These drives are for tasks that run 24 hours a day and consist mostly of large, continuous writes, such as surveillance footage from CCTV cameras. They have good write endurance.
WD Red: These drives are for NAS-related tasks (NAS means network-attached storage). If you want to host your own site or customers’ sites by creating your own server, this HDD is a better fit; hosting companies use it for round-the-clock file serving and data sharing.
WD Gold: These are enterprise-class drives, for storing and sharing large amounts of data in real time.
Advantages Of HDD
• HDDs are very cheap.
• High-capacity HDDs are very easy to find.
• You can purchase one from almost any company.
Disadvantage Of HDD
• Data read/write speed is low.
• Uses more power.
• After 5-6 years it starts having problems, because it is a mechanical device with moving parts inside.
• Larger in size and heavier in weight.
What is SSHD?
SSHD means solid-state hybrid drive. It has components of both an SSD and an HDD. It looks like an HDD and has all the parts of an HDD, with a separate space for the SSD. Nowadays, buying a 1 TB SSHD typically gets you 8 GB of SSD storage, with the rest performing as an HDD.
This means that, like SSDs and HDDs, it is also a physical storage device. The difference is that in this one hybrid drive we get the mixed performance of both.
How do SSHDs work?
A computer with an SSHD installed automatically decides which of your data will be stored in which part of the SSHD. Generally, the computer’s OS (operating system) and boot files are stored in the SSD part.
The rest of the files, software, multimedia, etc. are stored in the HDD portion. These decisions are controlled by the SSHD itself. One more thing to note: any file or app that you use repeatedly is automatically moved to the SSD portion.
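The hot-data promotion idea can be sketched in Python. This is a toy model only: the real decisions are made by the SSHD's firmware, and the capacity and access counts below are made-up illustrative values.

```python
from collections import Counter

# Toy sketch of the hybrid drive's caching idea: blocks that are read
# most often get promoted to the small, fast SSD portion, while
# everything else stays on the large HDD portion.
SSD_CAPACITY = 2  # how many "hot" blocks the SSD portion can hold

access_counts = Counter()

def read_block(block_id):
    """Record an access and report which tier would serve this block."""
    access_counts[block_id] += 1
    hot = {b for b, _ in access_counts.most_common(SSD_CAPACITY)}
    return "SSD" if block_id in hot else "HDD"

# OS boot files get read over and over, so they end up on the SSD tier.
for _ in range(5):
    read_block("boot")
for _ in range(4):
    read_block("browser")
read_block("old-photo")

print(read_block("boot"))       # SSD (most frequently accessed)
print(read_block("old-photo"))  # HDD (rarely accessed)
```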
Advantages of SSHD
• It is reliable in use and gives high speed along with large space.
• It has fewer moving parts than an HDD.
• An SSHD is less expensive than an SSD and can be used in the long term.
Disadvantage of SSHD
The HDD portion of an SSHD is fragile, so there is a higher chance of damage if the drive is dropped.
Name: ___________________
Date:___________________
Grade 3 - Mathematics
6.6 Subtracting Money
Method:
1. Start from the rightmost numbers.
2. Then move on to the tenths, ones, and tens.
Example:
1. Start from the rightmost numbers. The result is 8 minus 6, which is 2.
2. The next is 5 minus 1 which is 4.
3. The next is 4 minus 2 which is 2.
4. The next is 8 minus 3, which is 5.
Answer: $52.42
Example:
1. Start from the rightmost numbers. Borrow 1 from the previous place; the result is 17 minus 8, which is 9.
2. The next is 7 minus 4 which is 3.
3. The next is 5 minus 3 which is 2.
4. The next is 8 minus 1, which is 7.
Answer: $72.39
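For readers checking answers on a computer, the same kind of subtraction can be done with Python's exact decimal arithmetic (shown here with the amounts from Question 1 below):

```python
from decimal import Decimal

# Checking a money subtraction with exact decimal arithmetic. Decimal is
# used instead of floats because binary floats can introduce tiny
# rounding errors when working with cents.
difference = Decimal("54.30") - Decimal("20.43")
print(difference)  # 33.87
```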
Directions: Subtract the following. Also write at least ten examples of your own.
Question 1: $ 54.30
- $ 20.43
Answer:
Question 2: $ 56.34
- $ 22.16
Answer:
Question 3: $ 67.81
- $ 20.24
Answer:
Question 4: $ 46.31
- $ 28.19
Answer:
Question 5: $ 23.64
- $ 5.05
Answer:
© 2003-2007 kwizNET Learning System LLC. All rights reserved. This material may not be reproduced, displayed, modified or distributed without the express prior written permission of the copyright holder. For permission, contact [email protected]
For unlimited printable worksheets & more, go to http://www.kwizNET.com.
Math 116: Study Guide - Chapter 6
1. Determine the order of the matrix
2. Write the system of linear equations as an augmented matrix. Do not solve the system
3. Solve the system of linear equations using Gauss Jordan elimination
4. Solve the system of linear equations using Cramer's Rule
5. Given two 2x2 matrices A and B, find: A + B, 3A, AB, A squared, A inverse, the determinant of A. Also evaluate a function, f(A).
6. Use a determinant to find the equation of a line passing through the given points. The model is given.
7. True or False - 5 parts. You should definitely know about commutativity and division of both scalars and matrices.
8. Some statements are given. You must decide if performing those operations will return an row-equivalent matrix. Four parts.
9. Solve the matrix equations for X. Three parts. Know that when you factor a scalar out of a matrix, you need to multiply the scalar by I; for example, AX - 5X = (A - 5I)X, not (A - 5)X.
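The point about factoring scalars can be checked numerically. A small sketch with made-up 2x2 matrices, using plain nested lists:

```python
# Check, with a concrete 2x2 example, that AX - 5X factors as (A - 5I)X.
# Subtracting a scalar from a matrix is undefined, so the scalar must be
# multiplied by the identity matrix first.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def matsub(P, Q):
    return [[P[i][j] - Q[i][j] for j in range(len(P[0]))]
            for i in range(len(P))]

A = [[2, 1], [0, 3]]
X = [[1, 4], [2, 5]]
I = [[1, 0], [0, 1]]

lhs = matsub(matmul(A, X), [[5 * x for x in row] for row in X])   # AX - 5X
rhs = matmul(matsub(A, [[5 * e for e in row] for row in I]), X)   # (A - 5I)X
print(lhs == rhs)  # True
```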
10. Multiply two matrices together.
11. Solve a 3x3 system of linear equations using Gauss-Jordan elimination.
Notes
# 1 2 3 4 5 6 7 8 9 10 11 Tot
Pts 3 3 6 6 14 3 5 4 6 4 6 60
Angular 8|9 Drag and Drop File Uploading with MongoDB & Multer
Last updated on by Digamber
In this Angular 8|9 drag and drop file uploading tutorial, we will learn to upload multiple image files to a MongoDB database using Node and Express. We will create a basic Angular app with a custom directive that provides the drag and drop functionality.
Tutorial Objective
• Building Angular drag and drop file uploading Layout with HTML/CSS
• Creating a Node server to upload image files
• Creating Custom Drag and Drop directive
• Using Multer for Multiple file uploading
• Multiple files uploading with progress bar
Install Angular App
Let’s start by installing basic Angular app, run the following command:
ng new angular-dragdrop-fileupload
Then, navigate to the newly created Angular project:
cd angular-dragdrop-fileupload
Next, create an Angular component for the drag and drop file upload.
ng g c drag-drop
Next, run command to install Bootstrap.
npm install bootstrap
Add the Bootstrap CSS path to the styles array in the angular.json file.
"styles": [
"node_modules/bootstrap/dist/css/bootstrap.min.css",
"src/styles.css"
]
Run command to start your Angular project.
ng serve --open
Build Node/Express Server
Build a Node server with Express to store the uploaded files in the MongoDB database. We will use Multer to handle the image files, along with other NPM packages.
Run the command from Angular project’s root to generate backend folder:
mkdir backend && cd backend
Create separate package.json for node server.
npm init
Run command to install required NPM packages.
npm install body-parser cors express mongoose multer --save
Also, install the nodemon NPM module; it restarts the server whenever any change occurs in the server code.
npm install nodemon --save-dev
Define MongoDB Database
Create database folder inside the backend folder and also create a file backend/database/db.js in it.
module.exports = {
db: 'mongodb://localhost:27017/meanfileupload'
}
Define Mongoose Schema
Create models folder inside the backend directory, then create a file User.js and place the following code inside of it.
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
// Define Schema
let userSchema = new Schema({
_id: mongoose.Schema.Types.ObjectId,
avatar: {
type: Array
},
}, {
collection: 'users'
})
module.exports = mongoose.model('User', userSchema)
Build File Upload REST API with Multer & Express
Let’s first create a folder named public inside the backend folder. This is the folder where we will store all the uploaded files.
Run the command from the backend folder’s root.
mkdir public
Create a routes folder inside the backend folder, then create a file user.routes.js inside it. Here we will import the express, multer, and mongoose NPM modules and use them to build the REST API for storing multiple files in the MongoDB database.
Add the code given below inside user.routes.js.
let express = require('express'),
multer = require('multer'),
mongoose = require('mongoose'),
router = express.Router();
// Multer File upload settings
const DIR = './public/';
const storage = multer.diskStorage({
destination: (req, file, cb) => {
cb(null, DIR);
},
filename: (req, file, cb) => {
const fileName = file.originalname.toLowerCase().split(' ').join('-');
cb(null, fileName)
}
});
var upload = multer({
storage: storage,
// limits: {
// fileSize: 1024 * 1024 * 5
// },
fileFilter: (req, file, cb) => {
if (file.mimetype == "image/png" || file.mimetype == "image/jpg" || file.mimetype == "image/jpeg") {
cb(null, true);
} else {
cb(null, false);
return cb(new Error('Only .png, .jpg and .jpeg format allowed!'));
}
}
});
// User model
let User = require('../models/User');
router.post('/create-user', upload.array('avatar', 6), (req, res, next) => {
const reqFiles = []
const url = req.protocol + '://' + req.get('host')
for (var i = 0; i < req.files.length; i++) {
reqFiles.push(url + '/public/' + req.files[i].filename)
}
const user = new User({
_id: new mongoose.Types.ObjectId(),
avatar: reqFiles
});
user.save().then(result => {
console.log(result);
res.status(201).json({
message: "Done upload!",
userCreated: {
_id: result._id,
avatar: result.avatar
}
})
}).catch(err => {
console.log(err),
res.status(500).json({
error: err
});
})
})
router.get("/", (req, res, next) => {
User.find().then(data => {
res.status(200).json({
message: "User list retrieved successfully!",
users: data
});
});
});
module.exports = router;
We used Multer’s upload.array() method to upload multiple files to the server. This method takes two arguments: first, the form field name under which the files are sent ('avatar' here); second, the maximum number of files that can be uploaded at a time. Then we defined the reqFiles array, in which we store each uploaded file’s path as a full URL.
Configure Node/Express Server
Create server.js file inside the backend folder. Then, place the following code inside the server.js file.
let express = require('express'),
mongoose = require('mongoose'),
cors = require('cors'),
bodyParser = require('body-parser'),
dbConfig = require('./database/db');
// Routes to Handle Request
const userRoute = require('./routes/user.routes')
// MongoDB Setup
mongoose.Promise = global.Promise;
mongoose.connect(dbConfig.db, {
useNewUrlParser: true
}).then(() => {
    console.log('Database successfully connected')
},
error => {
console.log('Database could not be connected: ' + error)
}
)
// Setup Express.js
const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({
extended: false
}));
app.use(cors());
// Make Images "Uploads" Folder Publicly Available
app.use('/public', express.static('public'));
// API Route
app.use('/api', userRoute)
const port = process.env.PORT || 4000;
const server = app.listen(port, () => {
console.log('Connected to port ' + port)
})
// Error
app.use((req, res, next) => {
// Error goes via `next()` method
setImmediate(() => {
next(new Error('Something went wrong'));
});
});
app.use(function (err, req, res, next) {
console.error(err.message);
if (!err.statusCode) err.statusCode = 500;
res.status(err.statusCode).send(err.message);
});
Start Node Server
Open a terminal and run the following command to start the MongoDB server.
mongod
Then, open another terminal and run the following command.
nodemon server.js
Next, you can check that the Node server is running at the following URL: http://localhost:4000/api
Method  URL
GET     http://localhost:4000/api
POST    http://localhost:4000/api/create-user
You can test the Angular file-uploading REST API URLs in Postman:
Angular Drag and Drop File Upload
Create Angular 9 Drag and Drop File Uploading Directive
In this step, we will create a HostBinding and HostListeners to manage the drag-and-drop functionality for the Angular file-upload task.
Run the following command to create a directive in the Angular project.
ng g d drag-drop-file-upload
In the drag-drop-file-upload.directive.ts file, we will define three HostListeners (dragover, dragleave and drop) along with a HostBinding for the background color.
import { Directive, EventEmitter, Output, HostListener, HostBinding } from '@angular/core';
@Directive({
selector: '[appDragDropFileUpload]'
})
export class DragDropFileUploadDirective {
@Output() fileDropped = new EventEmitter<any>();
@HostBinding('style.background-color') private background = '#ffffff';
// Dragover Event
@HostListener('dragover', ['$event']) dragOver(event) {
event.preventDefault();
event.stopPropagation();
this.background = '#e2eefd';
}
// Dragleave Event
@HostListener('dragleave', ['$event']) public dragLeave(event) {
event.preventDefault();
event.stopPropagation();
this.background = '#ffffff'
}
// Drop Event
@HostListener('drop', ['$event']) public drop(event) {
event.preventDefault();
event.stopPropagation();
this.background = '#ffffff';
const files = event.dataTransfer.files;
if (files.length > 0) {
this.fileDropped.emit(files)
}
}
}
Create Angular 9 Service
We need to create an Angular service; in this file we will create a method that makes an HTTP POST request to store the uploaded files in the MongoDB database.
We use JavaScript's FormData() API to send the Reactive Form values to the server. To track the file-upload progress, set the reportProgress and observe options on the HTTP method.
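The upload progress percentage that reportProgress enables is computed later from the event's loaded and total byte counts. That arithmetic can be sketched on its own (uploadProgress is a hypothetical helper, not an Angular API):

```javascript
// Hypothetical helper: percentage of bytes uploaded, rounded,
// mirroring the HttpEventType.UploadProgress handling used later.
function uploadProgress(loaded, total) {
  return Math.round((loaded / total) * 100);
}

console.log(uploadProgress(512, 2048)); // 25
console.log(uploadProgress(2048, 2048)); // 100
```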
import { Injectable } from '@angular/core';
import { Observable, throwError } from 'rxjs';
import { catchError } from 'rxjs/operators';
import { HttpErrorResponse, HttpClient } from '@angular/common/http';
@Injectable({
providedIn: 'root'
})
export class DragdropService {
constructor(private http: HttpClient) { }
addFiles(images: File) {
var arr = []
var formData = new FormData();
arr.push(images);
arr[0].forEach((item, i) => {
formData.append('avatar', arr[0][i]);
})
return this.http.post('http://localhost:4000/api/create-user', formData, {
reportProgress: true,
observe: 'events'
}).pipe(
catchError(this.errorMgmt)
)
}
errorMgmt(error: HttpErrorResponse) {
let errorMessage = '';
if (error.error instanceof ErrorEvent) {
// Get client-side error
errorMessage = error.error.message;
} else {
// Get server-side error
errorMessage = `Error Code: ${error.status}\nMessage: ${error.message}`;
}
console.log(errorMessage);
return throwError(errorMessage);
}
}
Create Drag and Drop File Upload Component
Now, we will create the layout for the drag-and-drop file-upload component. In this tutorial we are using Reactive Forms to collect the files and the Node server to store them in the MongoDB database.
Import ReactiveFormsModule in the app.module.ts file to enable reactive forms.
import { ReactiveFormsModule } from '@angular/forms';
@NgModule({
declarations: [...],
imports: [
ReactiveFormsModule
],
bootstrap: [...]
})
export class AppModule { }
Next, add the code inside the app/drag-drop.component.html file.
<div class="container fileUploadWrapper">
<form [formGroup]="form">
<div class="row">
<!-- Progress Bar -->
<div class="col-md-12" *ngIf="progress">
<div class="progress form-group">
<div class="progress-bar progress-bar-striped bg-success" role="progressbar"
[style.width.%]="progress">
</div>
</div>
</div>
<div class="col-md-12">
<div class="fileupload" appDragDropFileUpload (click)="fileField.click()"
(fileDropped)="upload($event)">
<span class="ddinfo">Choose a file or drag here</span>
<input type="file" name="avatars" #fileField (change)="upload($event.target.files)" hidden multiple>
</div>
</div>
<div class="col-md-12">
<div class="image-list" *ngFor="let file of fileArr; let i = index">
<div class="profile">
<img [src]="sanitize(file['url'])" alt="">
</div>
<p>{{file.item.name}}</p>
</div>
<p class="message">{{msg}}</p>
</div>
</div>
</form>
</div>
To style the Angular drag-and-drop file-upload component, navigate to styles.css and paste the following code.
* {
box-sizing: border-box;
}
body {
margin: 0;
padding: 25px 0 0 0;
background: #291464;
}
.container {
margin-top: 30px;
max-width: 500px;
}
.progress {
margin-bottom: 30px;
}
.fileupload {
background-image: url("./assets/upload-icon.png");
background-repeat: no-repeat;
background-size: 100px;
background-position: center;
background-color: #ffffff;
height: 200px;
width: 100%;
cursor: pointer;
/* border: 2px dashed #0f68ff; */
border-radius: 6px;
margin-bottom: 25px;
background-position: center 28px;
}
.ddinfo {
display: block;
text-align: center;
padding-top: 130px;
color: #a0a1a2;
}
.image-list {
display: flex;
width: 100%;
background: #C2DFFC;
border: 1px solid;
border-radius: 3px;
padding: 10px 10px 10px 15px;
margin-bottom: 10px;
}
.image-list p {
line-height: normal;
padding: 0;
margin: 0 0 0 14px;
display: inline-block;
position: relative;
top: -2px;
color: #150938;
font-size: 14px;
}
.message {
text-align: center;
color: #C2DFFC;
}
.remove {
background: transparent;
border: none;
cursor: pointer;
}
.profile {
width: 40px;
height: 40px;
overflow: hidden;
border-radius: 4px;
display: inline-block;
}
.profile img {
width: 100%;
}
.remove img {
width: 15px;
position: relative;
top: -2px;
}
.fileUploadWrapper .card-body {
max-height: 330px;
overflow: hidden;
overflow-y: auto;
}
@media(max-width: 767px) {
.container {
width: 280px;
margin: 20px auto 100px;
}
}
Paste the following code in app/drag-drop.component.ts file:
import { Component, OnInit } from '@angular/core';
import { FormBuilder, FormGroup, FormArray } from "@angular/forms";
import { DragdropService } from "../dragdrop.service";
import { HttpEvent, HttpEventType } from '@angular/common/http';
import { DomSanitizer } from '@angular/platform-browser';
@Component({
selector: 'app-drag-drop',
templateUrl: './drag-drop.component.html',
styleUrls: ['./drag-drop.component.css']
})
export class DragDropComponent implements OnInit {
fileArr = [];
imgArr = [];
fileObj = [];
form: FormGroup;
msg: string;
progress: number = 0;
constructor(
public fb: FormBuilder,
private sanitizer: DomSanitizer,
public dragdropService: DragdropService
) {
this.form = this.fb.group({
avatar: [null]
})
}
ngOnInit() { }
upload(e) {
const fileListAsArray = Array.from(e);
fileListAsArray.forEach((item, i) => {
const file = (e as HTMLInputElement);
const url = URL.createObjectURL(file[i]);
this.imgArr.push(url);
this.fileArr.push({ item, url: url });
})
this.fileArr.forEach((item) => {
this.fileObj.push(item.item)
})
// Set files form control
this.form.patchValue({
avatar: this.fileObj
})
this.form.get('avatar').updateValueAndValidity()
// Upload to server
this.dragdropService.addFiles(this.form.value.avatar)
.subscribe((event: HttpEvent<any>) => {
switch (event.type) {
case HttpEventType.Sent:
console.log('Request has been made!');
break;
case HttpEventType.ResponseHeader:
console.log('Response header has been received!');
break;
case HttpEventType.UploadProgress:
this.progress = Math.round(event.loaded / event.total * 100);
console.log(`Uploaded! ${this.progress}%`);
break;
case HttpEventType.Response:
console.log('File uploaded successfully!', event.body);
setTimeout(() => {
this.progress = 0;
this.fileArr = [];
this.fileObj = [];
this.msg = "File uploaded successfully!"
}, 3000);
}
})
}
// Clean Url
sanitize(url: string) {
return this.sanitizer.bypassSecurityTrustUrl(url);
}
}
Conclusion
Finally, the Angular 8|9 drag-and-drop multiple file uploading tutorial with MongoDB & Multer is complete.
Git Repo
Digamber
Digamber Rawat is from Uttarakhand, land of Gods, located in the northwestern part of India. He is a Data Scientist by profession and the primary author of this blog.
20,701 reputation
12871
visits member for 2 years, 8 months
seen 5 hours ago
1h
awarded Popular Question
5h
comment NDSolve giving the wrong solution?
... the one you showed y=-4*x/(1+3*x). The point is, these are singular solutions. There is a singularity in your ode, which is why NDSolve gives a 1/0 error. So, there is nothing wrong with NDSolve. Notice that DSolve did not solve this.
5h
comment NDSolve giving the wrong solution?
do not have time now, but quick comment: you have non-linear ode. A non-linear ode can admit a singular solution (which can't be obtained from the general solution by giving the constants of integration specific values). Another solution is y=-2. Another is y=(-4*((x^4-2*x^2+1)/(x^4+8*x^3+18*x^2+8*x+1))^(1/2)*x^4-20*((x^4-2*x^2+1)/(x^4+8*x^3+18*x^2+8*x+1))^(1/2)*x^3-4*x^4-24*((x^4-2*x^2+1)/(x^4+8*x^3+18*x^2+8*x+1))^(1/2)*x^2-16*x^3-20*((x^4-2*x^2+1)/(x^4+8*x^3+18*x^2+8*x+1))^(1/2)*x-32*x^2-4*((x^4-2*x^2+1)/(x^4+8*x^3+18*x^2+8*x+1))^(1/2)-16*x-4)/(3*x^4+8*x^3+14*x^2+8*x+3) as well as the..
15h
revised Creating a foggy image
typo
15h
answered Creating a foggy image
1d
comment Steps in row reduction?
Only way I know to look into LU more is by using the {lu,p,c}=LUDecomposition[m] call, where c gives the rows used for pivoting as mentioned in the help. You can't tell how to control the pivoting. There is no option there. from reference.wolfram.com/language/tutorial/… it says LUDecomposition, Inverse, RowReduce, and Det use Gaussian elimination with partial pivoting. That is about it.
1d
comment Output from LinearModelFit not showing entire result
There should be a windows saying show more, show all, etc...? reference.wolfram.com/language/tutorial/…
1d
comment Is this integral solvable?
You can try the free Wolfram cloud, it has Mathematica there for free wolfram.com/programming-cloud/pricing click on the free option. Also can try Wolfram alpha, it is supposed to be able to do integration as well.
1d
answered Different results of a definite integral $\int_0^{\cosh ^{-1}(a)} \frac{1}{\sqrt{a^2 \text{sech}^2(x)-1}}\, dx$
1d
comment How to remove error at large time in NDSolve
it works ok for me using V 10.01 on windows. moved the slider all the way to the right and I see no errors. [Mathematica graphics] Maybe you can upgrade to 10.01
Sep
19
awarded Generalist
Sep
18
comment Tracking Initialization
@MichaelE2 I know for Manipulate that the following happens, and I would assume the same for DyanmicModule, since Manipulate is just a DynamicModule: The body of Manipulate (the expression part) is checked for valid input/form, but it is not evaluated until Initialization is finished. The question of when Initialization is run or not is separate. But when it needs to be evaluated, then yes, the body/dynamics/expression will wait until Initialization is finished running.
Sep
18
comment Tracking Initialization
I do not think what I said is different from what you are saying. Initialization is evaluated to generate the dynamic expression first time it is needed to be displayed on the screen.
Sep
17
comment Tracking Initialization
I am not sure I understand what you mean by this tracking to be reflected in Dynamic updating Since Initialization is processed once from top down. (to generate the initial expression). Once this is done, Initialization basically goes away.
Sep
17
comment Tracking Initialization
You can always use Print :) DynamicModule[{x}, Dynamic[x], Initialization :> (x = 1; Print["x=2"]; x = 2; Print["x=3"]; x = 3) ]
Sep
17
asked StateSpaceRealization “ControllableCompanion” does not generate ControllableCompanion form from DE
Sep
16
comment Manipulate a parameter within the PlotLabel
f = {1, 2, 3, 4, 5, 6}; Manipulate[ Plot[Sin[f[[g]] x], {x, -2, 2}, PlotLabel -> Row[{"f = ", f[[g]]}]], {{g, 1, "index"}, 1, Length[f], 1, Appearance -> "Labeled"} ]
Sep
14
revised Wrong answer from DSolve?
added error message
Sep
14
answered Wrong answer from DSolve?
Sep
6
awarded differential-equations
Elementary Geometry for College Students (7th Edition)
Published by Cengage
ISBN 10: 978-1-337-61408-5
ISBN 13: 978-1-33761-408-5
Chapter 10 - Section 10.6 - The Three-Dimensional Coordinate System - Exercises - Page 491: 43
Answer
$P = (5,6,5)$
Work Step by Step
$l_1: (x,y,z) = (2,3,-1)+n(1,1,2)$ $l_2: (x,y,z) = (7,7,2)+r(-2,-1,3)$ Let $P = (x,y,z)$. The x-coordinates of $l_1$ and $l_2$ must both equal $x$: $x = 2+n(1) = 7+r(-2)$, so $n = 5-2r$. The y-coordinates of $l_1$ and $l_2$ must both equal $y$: $y = 3+n(1) = 7+r(-1)$, so $n = 4-r$. We can equate the two expressions for $n$ to find $r$: $5-2r = 4-r$, so $r = 1$. We can use $l_2$ to find $P$: $P = (x,y,z) = (7,7,2)+r(-2,-1,3)$ $P = (7,7,2)+(1)(-2,-1,3)$ $P = (7-2,7-1,2+3)$ $P = (5,6,5)$
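The result can be double-checked numerically. This sketch (pointOnLine is a hypothetical helper) evaluates both parametric lines at the parameter values found above, n = 4 - r = 3 and r = 1:

```javascript
// Evaluate p + t*d for a 3D parametric line given as point p and direction d.
function pointOnLine(p, d, t) {
  return p.map((c, i) => c + t * d[i]);
}

const onL1 = pointOnLine([2, 3, -1], [1, 1, 2], 3);  // n = 3
const onL2 = pointOnLine([7, 7, 2], [-2, -1, 3], 1); // r = 1
console.log(onL1, onL2); // both are [5, 6, 5]
```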
It is possible to use colors on the Unix command-line interface.
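As a small illustration, here is how a program can emit colored output using ANSI SGR escape sequences. This assumes a terminal that honors ANSI escapes; code 31 selects a red foreground and 0 resets:

```javascript
// Wrap a string in ANSI escape codes: ESC[31m = red foreground, ESC[0m = reset.
const red = (s) => `\u001b[31m${s}\u001b[0m`;

console.log(red('error: something went wrong'));
```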
5
votes
2 answers
642 views
Man page highlight color
To colorize my man pages, I put this code from archlinux.org into .bashrc: man() { env LESS_TERMCAP_mb=$'\E[01;31m' \ LESS_TERMCAP_md=$'\E[01;38;5;74m' \ LESS_TERMCAP_me=$'\E[0m' \ ...
5
votes
5 answers
5k views
How do you colorize only some keywords for a bash script?
I am running some unit test code. The unit test code outputs regular text. There is a lot of the text so I want to highlight for the user important keywords. In this case the keywords are "PASS" and ...
1
vote
1 answer
2k views
Echo/Printing text in the color of a given hex code (regardless of Xresources/Xdefaults)
is there any way to echo/print text in the color of a given hex code (#000000, #FFFFFF, etc.) regardless of one's own Xresources/Xdefaults color definitions? Like, if Bob had in his ...
1
vote
1 answer
306 views
bash prompt - long command circle back to same line after adding color
Note: I am using Putty and my TERM is set to XTERM. I have added the color to my bash prompt as PS1="\[\033[0;32m\]\d \t \u\e[1;33m@\H /\W $ \[\033[1;37m\]" just to simplify PS1 in English - ...
17
votes
1 answer
8k views
How can I simply adjust monitor color temperture in X?
I have a new 27" iMac, on which I run Fedora 15 (with no Apple OS) — just boot straight to Linux. The colors are basically accurate to my eye, but the white point is much higher than I'd like (that ...
14
votes
2 answers
784 views
Colored output?
When I execute a command from a terminal that prints coloured output (such as ls or gcc), the coloured output is printed. From my understanding, the process is actually outputting ANSI escape codes, ...
8
votes
2 answers
20k views
set gnome terminal background/text color from bash script
I would like to setup my gnome terminal's background(#002b36) and foreground color in ubuntu 13, using bash script. I tried gconftool but couldn't succeed. GCONFTOOL-2(1) User ...
7
votes
2 answers
663 views
How to determine the current color of the console output?
I know that, if a coloured terminal is available, one can colour the output of it using escape characters. But is there a possibility to find out, which colour the output is currently being displayed ...
6
votes
1 answer
1k views
Is it possible to use named colors in Zsh beyond ANSI names?
I know there are ways to use ANSI color names in Zsh (such as red), but Zsh supports 256 colors by number. I'm curious if there's any way to refer to the non-ANSI colors by a name? (Without just ...
5
votes
4 answers
330 views
Coloring output of forked processes
I have a runscript that starts some processes and sends them to the background mongod & pid_mongo=$! redis-server & pid_redis=$! # etc. All these processes then output concurrently to ...
4
votes
3 answers
799 views
cascaded grep matches color code as pattern
I am piping output of one grep comand into another grep. The first grep is using --color=always, so that the first match is colored. In practice, that means that the match is enclosed between two ...
3
votes
1 answer
2k views
dircolors on zsh: Unrecognized keywords: MULTIHARDLINK, RESET & CAPABILITY
I am trying to get the solarized color theme to work in my terminal. I read the instructions here but I get the following dircolors error: dircolors: `/home/avazquez/.dircolors_zsh':90: unrecognized ...
3
votes
2 answers
554 views
Print console colors
Wrote a bash function to display the console colors. But, it seems some of the colors are not possible to show that way! (?) Also, note the strange "bright black"! (Note: The below screendump is of ...
2
votes
1 answer
190 views
What are some methods I can use to create colorful MOTD messages when logging in?
From time to time I've come across colorful ASCII art styled messages when logging into a server. How are these messages constructed?
2
votes
4 answers
655 views
How to change color of a character while tailing and tr
I regularly use this command line but I would like to change the color of the "|" to green in the output. Does anyone know how I can accomplish this? tail -f file.log | tr '\001' '|' | grep TEST
1
vote
4 answers
294 views
Set, backup and restore colors in the terminal
I use this program to display all color available in the terminal. That waht I get: This program doesn't show me colors below 100. Why? Is there a way to display them. Is the are way to change ...
6
votes
2 answers
405 views
How to colourise hidden files in `ls` file listings?
LS_COLORS environmental variable lets you decide on colours for different file types for ls command, such as, directories, regular files, links etc. I suppose that dot files are considered a variation ...
4
votes
1 answer
2k views
Pass colors from ls through pipe to awk 'print' statement
This is a follow-up to my question from yesterday, Show sum of file sizes in directory listing. Thanks to Zero Piraeus and a point in the right direction by Mauritz Hansen, I now have function ...
4
votes
3 answers
2k views
Prevent watch breaking colors
I'm altering scripts at work that monitor log files to single out certain items and colorize them. The final output is a list of 6-digit numbers in several columns. I've been able to add a ~ to the ...
4
votes
3 answers
2k views
Overriding the shade of color displayed
I have to work on systems which display some colors that are hard to read. I ssh into these systems, but don't have management permission to change the colors they display. Is there any way I can ...
3
votes
1 answer
71 views
Clear to end of line uses the wrong background color in tmux
In ZSH prompt expansion, the command %E is supposed to "Clear to end of line." This works: However, it does not work in tmux: The issue seems to be with BCE (Background Color Erase). In screen, I ...
3
votes
1 answer
1k views
Why doesn't terminal show color schema once logged as root?
I'm currently using Debian 7 Wheezy, and I've noticed that the colors palette on terminal (which might be useful to identify among a large set of files and folders) are not enabled once I log as root. ...
3
votes
3 answers
3k views
Colorize Bash Console Color
I need to be able to set my CentOS 6.4 bash prompt color to yellow. I've managed to find where to set this (.bashrc) and the ANSI color for yellow (\e[0;33m). I've setup my prompt as follows: ...
3
votes
1 answer
1k views
LS_COLORS settings for specific types of files
I'm trying to set up my color scheme for ls, and I'm having trouble finding information about exactly what parameters I have to work with, or where those come from. And especially setting colors for ...
2
votes
1 answer
55 views
Clear to end of line uses the wrong background color in screen
In ZSH prompt expansion, the command %E is supposed to "Clear to end of line." This works. We see it in the grey bar going all the way across. However, if I call "screen", the %E stops working: ...
2
votes
1 answer
213 views
No colour in MOTD
I recently bought a Raspberry Pi, and have started playing around with it. After changing my MOTD, (to include colours), the colour codes are coming up as raw-text instead of executing. I am ...
2
votes
2 answers
321 views
Change PS1 color when connected to other host via SSH
I'm trying to change PS1 look based on what host I'm connected in using SSH. My current PS1: PS1='\[\e[1;32m\]\u@\h\[\e[1;34m\] \w\[\e[1;31m\]$(__git_ps1)\[\e[1;0;37m\] \$\[\e[0m\] ' For host host1 ...
2
votes
0 answers
80 views
Is it possible to modify the colors in bash? [duplicate]
Possible Duplicate: Is it possible to configure Bash so that STDERR can be a different color than STDOUT? In bash, the only color I know how to modify is the prompt, using \[ and \]. From ...
2
votes
2 answers
3k views
Setting up LS colors with a human-readable script in tcsh
I have a shell script (set_up_my_ls_colors.sh) that, if I call from my shell, it configures my color scheme for ls. The nice thing about the script is that it allows me to configure colors in a ...
1
vote
0 answers
72 views
Adjusting colors for mutt's status line in multi-account setup
Question How can mutt be configured correctly so, no matter the number of accounts it handles, color settings can be adjusted per account along with the folder-hooks? Details In a mutt ...
1
vote
1 answer
785 views
cp /etc/DIR_COLORS ~/.dir_colors not responding
I'm using CentOS 6.5 and Putty. My problem is that directory file names are shown in dark blue color which is hard to read. I google searched and found this link; basically it's copying the DIR_COLORS ...
1
vote
2 answers
851 views
How can I list LS_COLORS in colour?
I recall that eval "dircolors -b" used to display the colours that LS_COLORS was using, based on the file types or extensions. It was not simply the colour values that were displayed but the colours ...
0
votes
1 answer
393 views
why are colors mixed up in CentOS 7 terminal?
I have CentOS 7 running as a guest OS inside a VirtualBox host. I opened a terminal to write some commands, but the colors are all messed up as shown in the following print screen: It looks ...
0
votes
1 answer
42 views
Why dircolors don't return enything for bash called from php
I have this code in php: header("Content-Type: text/plain"); exec("/bin/bash -c 'dircolors -b'", $result); echo implode("\n", $result); but it return LS_COLORS=''; export LS_COLORS Why I'm not ...
0
votes
3 answers
462 views
Bash in php exec in webpage don't colors for png and mp3 files on Linux
I have code like this in php: header("Content-Type: text/plain"); exec("/bin/bash -c 'ls --color=always'", $result); echo implode("\n", $result); and I've got result (escape is not visible) ...
0
votes
1 answer
2k views
How to highlight the whole log-line in color with multitail
I'm trying to use multitail to tail logs with color highlights. I defined a custom color scheme in multitail.conf, something like this: colorscheme:my-color cs_re:red:^\[E cs_re:yellow:^\[W ...
Calculate Values
Created:12/26/2000
Description:
This sample demonstrates how to perform a calculation outside of an edit session. Calculations done outside of an edit session are normally faster than calculations within an edit session, but there is no way to undo the changes.
The first routine performs a simple calculation where all records in an integer field are set to 10. The second routine uses VBA to determine the length of each feature and writes this value to the appropriate row in the table. The last routine is a function called by the first two routines; it returns True if the data you are using can be edited outside of an edit session. This sample mimics the behavior of the Calculate Values command in the column context menu of the table window.
To see how to perform calculations within an edit session, see the examples in the help for the ICalculator and ICalculatorCallback Interfaces.
How to use:
1. Paste the code into VBA.
2. Make sure ArcMap is not in edit mode.
3. Select the feature layer that you want to calculate. The calc_VBA routine requires that the layer have polyline geometry.
4. For the calc_Simple script, make sure that a short field named shortfld is in the selected layer. The calc_VBA script also requires a double field named len. Alternatively, you can adjust the scripts to use different field names.
5. Run the sample.
Public Sub calc_Simple()
On Error GoTo EH
Dim pDoc As IMxDocument
Set pDoc = ThisDocument
' Get the layer that is selected in the TOC
' it must be a polygon layer
Dim pFeatLayer As IFeatureLayer
Dim pFeatClass As IFeatureClass
Dim pUnKnown As IUnknown
Set pUnKnown = pDoc.SelectedLayer
If pUnKnown Is Nothing Then
MsgBox "Must have a layer selected in the table of contents."
Exit Sub
End If
Set pFeatLayer = pUnKnown
Set pFeatClass = pFeatLayer.FeatureClass
' This calculation is to be done outside of an edit session.
Dim pEditor As IEditor
Dim pID As New UID
pID = "esriCore.Editor"
Set pEditor = Application.FindExtensionByCLSID(pID)
If pEditor.EditState = esriStateEditing Then
MsgBox "This sample requires that ArcMap is not in edit mode"
Exit Sub
End If
' Also, check to see if the selected layer supports editing without
' an edit session
If Not CanEditWOEditSession(pFeatClass) Then
MsgBox "This layer cannot be edited outside of an edit session"
Exit Sub
End If
' Find the field named shortfld
Dim pCalc As ICalculator
Dim pTable As ITable
Dim pField As IField
Dim intFldIndex As Integer
Set pTable = pFeatClass
intFldIndex = pTable.FindField("shortfld")
If intFldIndex = -1 Then
MsgBox "There must be a field named shortfld in the layer"
Exit Sub
End If
' Perform the calculation. Make sure to use an update cursor when
' editing outside of an edit session.
Dim pCursor As ICursor
Set pCalc = New Calculator
Set pCursor = pFeatClass.Update(Nothing, True)
With pCalc
Set .Cursor = pCursor
.Expression = "10"
.Field = "shortfld"
End With
pCalc.Calculate
Exit Sub
EH:
MsgBox Err.Number & " " & Err.Description
End Sub
Public Sub calc_VBA()
On Error GoTo EH
Dim pDoc As IMxDocument
Set pDoc = ThisDocument
' Get the layer that is selected in the TOC
' it must be a line layer
Dim pFeatLayer As IFeatureLayer
Dim pFeatClass As IFeatureClass
Dim pUnKnown As IUnknown
Set pUnKnown = pDoc.SelectedLayer
If pUnKnown Is Nothing Then
MsgBox "Must have a line layer selected in the table of contents."
Exit Sub
End If
Set pFeatLayer = pUnKnown
Set pFeatClass = pFeatLayer.FeatureClass
If Not pFeatClass.ShapeType = esriGeometryPolyline Then
MsgBox "Selected layer must be a line layer"
Exit Sub
End If
' This calculation is to be done outside of an edit session.
Dim pEditor As IEditor
Dim pID As New UID
pID = "esriCore.Editor"
Set pEditor = Application.FindExtensionByCLSID(pID)
If pEditor.EditState = esriStateEditing Then
MsgBox "This sample requires that ArcMap is not in edit mode"
Exit Sub
End If
' Also, check to see if the selected layer supports editing without
' an edit session
If Not CanEditWOEditSession(pFeatClass) Then
MsgBox "This layer cannot be edited outside of an edit session"
Exit Sub
End If
' Find the field named len
Dim pCalc As ICalculator
Dim pTable As ITable
Dim pField As IField
Dim intFldIndex As Integer
Set pTable = pFeatClass
intFldIndex = pTable.FindField("len")
If intFldIndex = -1 Then
MsgBox "There must be a field named len in the layer"
Exit Sub
End If
' Perform the calculation
Set pCalc = New Calculator
Dim pCursor As ICursor
Set pCursor = pFeatClass.Update(Nothing, True)
With pCalc
Set .Cursor = pCursor
.PreExpression = "Dim dblLength as double" & vbNewLine & _
"Dim pCurve As ICurve" & vbNewLine & _
"Set pCurve = [Shape]" & vbNewLine & _
"dblLength = pCurve.Length"
.Expression = "dblLength"
.Field = "len"
End With
pCalc.Calculate
Set pCursor = Nothing
Exit Sub
EH:
MsgBox Err.Number & " " & Err.Description
End Sub
' Returns TRUE if the Table can be edited outside of an edit session
Private Function CanEditWOEditSession(pTable As ITable) As Boolean
Dim pVersionedObject As IVersionedObject
Dim pObjClassInfo2 As IObjectClassInfo2
Dim bolVersioned As Boolean
Dim bolEditable As Boolean
' See if the data is versioned
If Not TypeOf pTable Is IVersionedObject Then
bolVersioned = False
Else
Set pVersionedObject = pTable
bolVersioned = pVersionedObject.IsRegisteredAsVersioned
End If
' Check the CanBypassEditSession property
Set pObjClassInfo2 = pTable
bolEditable = pObjClassInfo2.CanBypassEditSession
If bolEditable And Not bolVersioned Then
CanEditWOEditSession = True
Else
CanEditWOEditSession = False
End If
End Function
SRM 615 Div2 Medium LongLongTripDiv2
Problem
SRM 615 - TopCoder Wiki
Given that each jump covers either distance 1 or distance B, determine whether you can reach exactly distance D after T jumps in a single direction.
Solution
The total distance reached grows monotonically with the number of distance-B jumps, so the required number of distance-B jumps can be found with a binary search.
According to the editorial, this can be computed exactly in O(1). Hmm, I see.
class LongLongTripDiv2
{
public:
string isAble(long long D, int T, int B)
{
long long low = 0;
long long up = T;
while (low <= up) {
long long mid = (low + up) / 2;
long long v = mid * B + (T - mid);
if (v == D) return "Possible";
else if (v < D) {
low = mid + 1;
}else {
up = mid - 1;
}
}
return "Impossible";
}
};
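The editorial's O(1) observation mentioned above can be sketched as follows: with k jumps of length B, the total distance is k*B + (T - k) = D, so k = (D - T)/(B - 1) must be a non-negative integer no greater than T, with B = 1 handled as a special case. Note this JavaScript sketch ignores the 64-bit range of the original problem:

```javascript
// O(1) check: k jumps of length B and (T - k) jumps of length 1 reach D
// exactly when k = (D - T) / (B - 1) is an integer with 0 <= k <= T.
function isAble(D, T, B) {
  if (B === 1) return D === T ? 'Possible' : 'Impossible';
  const diff = D - T;
  if (diff < 0 || diff % (B - 1) !== 0) return 'Impossible';
  return diff / (B - 1) <= T ? 'Possible' : 'Impossible';
}

console.log(isAble(10, 4, 3)); // Possible (three jumps of 3 plus one jump of 1)
console.log(isAble(10, 3, 3)); // Impossible
```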
Google goggles
Discussion in 'iPad Apps' started by mickeymost1, Nov 18, 2011.
1. mickeymost1
mickeymost1 iPF Novice
Joined:
Oct 4, 2011
Messages:
25
Thanks Received:
0
Trophy Points:
0
Ratings:
+0 / 0
Hello, is there anyone who can tell me if I can get and use Google Goggles on my iPad 2? Also, is it free to download and use, and how do I use it? Thanks very much
Last edited by a moderator: Nov 18, 2011
2. Mickey330
Mickey330 Administrator Staff Member
Joined:
Aug 30, 2010
Messages:
11,614
Thanks Received:
1,861
Trophy Points:
113
Location:
Western NY state (USA)
Ratings:
+2,047 / 0
The Google Search app (free at the App Store) has Google Goggles in it, and it works on the iPhone. I only have an iPad1, so I am unsure if it works on the iPad2.
Have you downloaded it to see if it works? Wouldn't hurt to try, seeing as how it's free...
Marilyn
3. twerppoet
twerppoet iPad Legend
Joined:
Jan 8, 2011
Messages:
17,205
Thanks Received:
2,444
Trophy Points:
113
Location:
Walla Walla, WA
Ratings:
+2,975 / 1
The Google app does include Google Goggles, but you won't find it under the Apps button at the bottom. Instead, on the search screen there is camera icon. You tap that to go to camera mode, where you can take a picture and see the colorful squares scan your picture.
The Google app is universal and works fine on the iPad 2; however the camera is not as good, so don't expect the results to be as reliable. Though in my three tests (2 DVD covers and a jar of JIF peanut butter) both devices worked fine. I did have to log in with my Google account on the iPad before it would give me results. There was a button provided at the bottom of the camera page warning me, and providing a quick way to log in.
In my experience most of the UPC and other scanning software works fine on the iPad 2, though if it is not universal you have to put up with the small or 2x pixel view. The camera typically requires a bit more care to position and focus. It is more sensitive to bad light, and of course more awkward to position correctly.
4. A.K
A.K iPF Novice
Try the Google app, which includes Google Goggles and voice search... amazing stuff, I have to say. The browser is as close to Chrome as we can get on an iPad. If it were not for Safari being this good, that would be my default browser.
Understanding Linux File Permissions: How to Use Chmod 777
by admin
If you're a new Linux user, you've probably encountered the chmod command at some point early on. Perhaps someone told you to "chmod 777" to move a file to a certain folder, and it worked! So what does the chmod command do, and what do the numbers mean?
This article will discuss everything you need to know about Linux file permissions. It's important to know this to understand the chmod command and the numbers that correspond to certain access levels. Whether you use Ubuntu, Fedora, or a more exotic Linux distro, you should understand when it's okay to set permissions to 777 using the chmod command and when you should use a different setting.
How Linux File Permissions Work
In Linux, the operating system determines who can access a certain file based on file permission, ownership, and attributes. The system allows you, the owner or admin, to enable access restrictions to various files and directories. You can improve the security of your system by giving access only to users and programs you trust.
Understanding User Classes
A specific user and a group own every single file and directory. This means there are three categories of users to which you can assign a certain level of access. These users are classified as follows:
• Owner
• Group
• Others
You can see these groups visually in Ubuntu by right-clicking on any directory, selecting Properties, and going to the Permissions tab.
The Owner is the person with all the power. Usually, they have full access to every file and directory and can change the file permissions of other users as well.
The Group consists of a number of users that have a certain level of access to a file or directory given by the Owner. For example, a group of users can be excluded from modifying a file while being granted access to view that file.
The Others class simply represents guest users that don’t fall into the other two categories. By default, their level of access is usually restricted. It’s up to the Owner to determine what guests users can or can’t do.
Understanding File Permission Levels
As the Owner you can assign three levels of access to your files and directories:
1. Read: It gives you limited access to a file or directory. All you can do is read the file or view the directory’s contents. You can’t edit files, and you can’t remove or add any new files to the directory.
2. Write: It lets you read and edit files. If you assign this level of access to a directory, you can also remove or add files.
3. Execute: It’s only important when running or executing files. For example, you can’t run a script or a program without permission to Execute.
By combining Classes and Permissions, you can control how much access a specific user has to a file or directory.
Permission Symbols and Numbers Explained
File permissions are represented numerically or symbolically. You can use both symbols and numbers to change file and directory permissions. The easiest method is with numbers, but you should also understand the symbols. So let’s take a look at the symbols behind file permissions first.
File Permission Symbols
You can view your permissions for all content in a certain directory if you type the following command in the terminal:
ls -l
You can navigate to any directory by using the cd command. If you’re a complete beginner, check out our article on basic Linux commands.
In our example, the directory contains two other directories and one file. The permissions are written using ten symbols: a leading type symbol followed by nine permission symbols, which can be split into triplets for easier understanding. Let's examine the first set of permissions for the Books directory:
drwxrwxr-x
Let’s split it for readability:
d rwx rwx r-x
The first symbol is d, and it stands for directory. It can also be a dash symbol if it’s a file, as you can see in the third set of permissions for the Outline.docx file.
Next, we have three groups of symbols. The first group represents the Owner’s permission levels, the second group is for the Group class, and the third represents Others.
Each set of 3 symbols means read, write, execute – in that order. So the Owner has permission to read, write, and execute all files and directories found inside the Test directory. Here’s a visual representation:
When you see a dash symbol instead of r, w, or x, it means that permission doesn’t exist.
File Permission Numbers
The numeric format for file permissions is simple. In essence, the file permission codes have three digits:
• The first one is for the file owner.
• The second one represents the file’s group.
• The last digit is for everyone else.
The digits range from 0 to 7 where:
• 4 = read.
• 2 = write.
• 1 = execute.
• 0 = no permission.
The permission digit of each class is determined by summing up the values of the permissions. In other words, each digit for each class can be the sum of 4, 2, 1, and 0. Here’s a full list of permissions:
• 0 (0 + 0 + 0) = The user class doesn’t have any permissions.
• 1 (0 + 0 + 1) = Execute permission only.
• 2 (0 + 2 + 0) = Write permission only.
• 3 (0 + 2 + 1) = Write and execute permissions.
• 4 (4 + 0 + 0) = Read permission only.
• 5 (4 + 0 + 1) = Read and execute permissions.
• 6 (4 + 2 + 0) = Read and write permissions.
• 7 (4 + 2 + 1) = All permissions.
For example, a 644 permission means that the file owner has read and write permissions, while the other two classes have only read permission. Setting permissions by using the number format requires only basic math.
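To make the arithmetic concrete, here is a small Python sketch (illustrative code, not from the article) that assembles the three-digit mode from per-class permission sets:

```python
# read = 4, write = 2, execute = 1; each class digit is the sum of its values.
READ, WRITE, EXECUTE = 4, 2, 1

def mode_digits(owner, group, others):
    """Combine per-class permission sets into a 3-digit mode string."""
    return "".join(str(sum(perms)) for perms in (owner, group, others))

# Owner: read + write, Group: read, Others: read
print(mode_digits({READ, WRITE}, {READ}, {READ}))  # prints 644
# Everyone gets everything
print(mode_digits({READ, WRITE, EXECUTE},
                  {READ, WRITE, EXECUTE},
                  {READ, WRITE, EXECUTE}))         # prints 777
```

The same sums work in reverse: reading a digit like 5 back as 4 + 1 tells you the class has read and execute but not write.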
Permission 777
As you’ve probably already guessed, a 777 permission gives read, write, and execute permissions to all three user classes. In other words, anyone who has access to your system can read, modify, and execute files. Use it only when you trust all your users and don’t need to worry about security breaches.
Permission 777 is used often because it’s convenient, but you should use it sparingly. In fact, we recommend never using it because the security risks are too great. An unauthorized user could compromise your system or, for example, change your website to distribute malware.
You should give permission 755 instead. That way, you as the file owner have full access to a certain file or directory, while everyone else can read and execute, but not make any modifications without your approval.
Modifying File Permissions with Chmod
You can change file permissions with the help of the chmod command. The most basic way of using this command, without any other options, is as follows:
chmod 777 filename
Replace “filename” with the name of the file and its path.
Keep in mind that the only users with the power to change file permissions are those with root access, the file owners, and anyone else with sudo powers.
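For scripting, the same numeric modes work from Python's standard library; this is an illustrative sketch (it creates and removes a throwaway temporary file) applying the recommended 755:

```python
import os
import stat
import tempfile

# Create a throwaway file, then give it mode 755 (rwxr-xr-x).
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o755)                  # the octal literal is the numeric format above
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                       # prints 0o755
os.remove(path)
```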
(in-package "COMMON-LISP-USER")
;;------------------------------------------------------------------------------
;;
;; File: ->LIST.LISP
;; Created: 2/25/93
;; Author: Will Fitzgerald
;;
;; Description: Simple conversion utilities for strings to lists
;;
;;------------------------------------------------------------------------------
(defmethod ->list ((self string) &key
(start 0)
(char-bag '(#\Space))
(test #'(lambda (ch) (not (member ch char-bag :test 'char=))))
(post-process 'identity))
"Converts SELF into a list,
starting at START;
dividing words at boundaries defined by characters in CHAR-BAG,
or at boundaries defined by TEST;
each item is run through POST-PROCESS as it is created. POST-PROCESS can
be destructive (eg, NSTRING-DOWNCASE)."
(labels ((->list* (position)
(let* ((pos (position-if-not test self :start position))
(new-pos (if pos (position-if test self :start pos) nil)))
(cond
((and pos new-pos)
(cons (funcall post-process (subseq self position pos))
(->list* new-pos)))
(pos (list (funcall post-process (subseq self position pos))))
(t (list (funcall post-process (subseq self position))))))))
(let ((pos (position-if test self :start start)))
(if pos (->list* pos) nil))))
(defmethod ->symbols ((self string) &optional (package *package*))
"Converts a string into a list of symbols interned into PACKAGE, ignoring
everything but alphanumerics and dashes."
(->list self
:post-process #'(lambda (str)
(intern (nstring-upcase str) package))
:test #'(lambda (ch) (or (alphanumericp ch)
(char= ch #\-)))))
(defmethod ->symbols ((self null) &optional (package *package*))
(declare (ignore package)) nil)
Compile Time Polymorphism in C++
Last Updated on December 26, 2023 by Ankit Kochar
Compile-time polymorphism is a fundamental concept in C++ programming that facilitates flexibility and efficiency in code execution. It allows developers to create versatile programs by enabling functions and classes to exhibit different behaviors based on the context they are used in, all resolved during compilation rather than runtime. In C++, compile-time polymorphism is primarily achieved through function overloading, templates, and inheritance, providing a powerful mechanism for creating reusable and adaptable code structures.
This article delves into the core principles of compile-time polymorphism in C++, exploring its various implementations, advantages, and best practices. From understanding the fundamental concepts to implementing them in practical scenarios, this guide aims to equip developers with the knowledge needed to leverage compile-time polymorphism effectively in their C++ projects.
What is Polymorphism in C++?
Polymorphism is an important aspect of object-oriented programming: using inheritance and virtual functions, it makes it possible to treat objects of different classes as though they were objects of the same class. In other words, polymorphism means "many forms".
A real-life example of polymorphism is the concept of a "vehicle". A vehicle can be a car, a truck, a motorcycle, or any other mode of transportation. Each type of vehicle has its own unique characteristics and behavior, but they all share certain common traits, such as the ability to move from one place to another.
There are mainly two types of polymorphism:
• Compile Time Polymorphism in C++: Compile time polymorphism in C++ is invoked during the compile time of the program. There are two ways to achieve compile-time polymorphism in C++:
• Function Overloading
• Operator Overloading
• Run Time Polymorphism in C++: Run time polymorphism in C++ is invoked during the run time of the program. There is one way to achieve the run time polymorphism in C++:
• Function Overriding
What is Compile Time Polymorphism in C++?
Compile-time polymorphism in C++ is a type of polymorphism that refers to the ability of a programming language to determine the appropriate method or function to call at compile time, based on the types of the arguments being passed.
There are a couple of ways to achieve compile time polymorphism in C++.
1. Function Overloading:
The C++ language’s function overloading feature enables us to define multiple functions that have the same name but different parameters. When we want to perform the same operation on various data types, or when we want to offer different levels of functionality depending on the quantity or kind of arguments passed, this can be helpful.
In C++, we simply define two or more functions with the same name but different parameters to overload a function. The compiler chooses which version of the function to call based on the number, type, and ordering of the arguments passed.
Let’s take an example to understand the compile time polymorphism in C++ using the function overloading.
// compile time polymorphism in C++
#include <iostream>
#include <bits/stdc++.h>
using namespace std;
class PrepBytes {
public:
    // Function with two int parameters
    void add(int x, int y)
{
std::cout << "The Sum of "<< x <<" and "<<y <<" is: " << x+y<<"\n";
}
    // Function with the same name but
    // three int parameters
void add(int x, int y, int z)
{
std::cout << "The Sum of "<< x <<", "<< y <<" and "<<z <<" is: " << x+y+z << "\n";
}
    // Function with the same name and
    // two double parameters
void add(double x, double y)
{
std::cout << "The Sum of "<< x <<" and "<<y <<" is: " << x+y<< "\n";
}
void add(double x, double y, double z)
{
std::cout << "The Sum of "<< x << ", " << y <<" and "<< z <<" is: " << x+y+z<< "\n";
}
};
int main()
{
PrepBytes obj;
obj.add(7,8);
obj.add(10,14,16);
obj.add(4.5, 6.2);
obj.add(1.3, 4.6, 7.2);
return 0;
}
Output:
The Sum of 7 and 8 is: 15
The Sum of 10, 14 and 16 is: 40
The Sum of 4.5 and 6.2 is: 10.7
The Sum of 1.3, 4.6 and 7.2 is: 13.1
In the above C++ program, the class Prepbytes contains several methods with the same name add. During compile time, it will be decided which method to call based on the given arguments in the function parameter. For example, if we give 3 int data type arguments in the add function then it will call the second add function.
2. Operator Overloading:
In C++, a feature called operator overloading enables operators like +, -, *, /, and others to be redefined for user-defined data types. As a result, we can specify what the operator does when used with objects belonging to our own class, which enables us to write more logical and expressive code.
In C++, we define a function whose name consists of the operator keyword followed by the symbol we want to overload. For instance, we might define a function with the following signature to overload the addition operator + for the class "MyClass":
MyClass operator+(const MyClass& obj) const;
The ampersand (&) indicates that the function takes its argument by reference rather than by value and the keyword "const" indicates that it does not modify the object it is called on.
Let’s take an example to understand the compile time polymorphism in C++ using operator overloading.
// compile time ploymorphism in C++
#include <iostream>
using namespace std;
class Complex {
public:
Complex(double r = 0.0, double i = 0.0) : real(r), imag(i) {}
Complex operator+(const Complex& obj) const {
return Complex(real + obj.real, imag + obj.imag);
}
void display() const {
cout << real << " + " << imag << "i" << endl;
}
private:
double real, imag;
};
int main() {
Complex c1(1.0, 2.0), c2(2.0, 3.0);
Complex c3 = c1 + c2; // calls the overloaded + operator
c3.display();
return 0;
}
Output:
3 + 5i
In this example, we define a class called "Complex" that represents complex numbers with real and imaginary components. We overload the addition operator (+) by defining a method called "operator+" that takes a Complex object by const reference and returns a new Complex object representing the sum of the two Complex objects.
Conclusion
In conclusion, compile-time polymorphism stands as a cornerstone of C++ programming, offering a robust way to enhance code reusability, flexibility, and performance. By utilizing techniques like function overloading, templates, and inheritance, developers can create versatile and efficient programs that adapt to different data types and contexts without sacrificing performance.
Understanding the nuances of compile-time polymorphism not only enhances code readability and maintainability but also enables the development of scalable and adaptable software solutions. Mastery of these concepts empowers C++ programmers to write more concise, flexible, and efficient code, contributing to the creation of robust applications across various domains.
FAQs of compile time polymorphism in C++
Here are some FAQs related to Compile Time Polymorphism in C++.
1. What is the difference between compile-time polymorphism and runtime polymorphism in C++?
Compile-time polymorphism, achieved through techniques like function overloading, templates, and static polymorphism (using inheritance), resolves method calls at compile time. Runtime polymorphism, commonly achieved through dynamic polymorphism using virtual functions and inheritance, resolves method calls at runtime based on the actual object type.
2. How does function overloading contribute to compile-time polymorphism?
Function overloading allows the creation of multiple functions with the same name but different parameters within the same scope. During compilation, the appropriate function to execute is determined based on the number and types of arguments passed to it, contributing to compile-time polymorphism.
3. What are templates, and how are they related to compile-time polymorphism?
Templates in C++ enable the creation of generic functions and classes that can work with any data type. They allow for compile-time instantiation of code based on different data types, facilitating compile-time polymorphism by generating specific code for each data type at compile time.
4. What are the advantages of using compile-time polymorphism in C++?
Compile-time polymorphism offers several benefits, including improved code reusability, performance optimization through early binding, better error detection during compilation, and enhanced readability by providing a clear structure for different behaviors based on context.
5. When should I use compile-time polymorphism in my C++ programs?
Use compile-time polymorphism when you want to create flexible and efficient code that can handle multiple data types or contexts without sacrificing performance. It is beneficial when the behavior of functions or classes needs to be determined at compile time rather than runtime.
6. What are the advantages of compile-time polymorphism?
Compile-time polymorphism can help you write cleaner, more efficient code by allowing you to reuse function names and reduce redundancy. It also helps catch errors at compile time instead of at runtime, which can save you time and effort in debugging.
A Different Society
Reply Mon 25 Jan, 2010 07:58 am
First I have two things to ask of the readers before they read this. I am writing a political sci-fi novel and need opinions. Here are my first two conceptions that I hope for my fellow philosophical enthusiasts to keep in mind.
1. The technology mentioned is, in the context of the story, already attained and used.
2. By whatever means, the new government mentioned has been able to use the technology in a widespread fashion without resistance.
So, by way of technology, how would a society act when no emotions are used? As in, for example, the movie Equilibrium, where society takes injections that suppress emotions. Art in this world is gone: no music, no poetry, no writing, nothing. People are trapped and unable to do anything individual. To desensitize the world to past atrocities, everyone in the world has been named after certain people, such as Adolf Hitler, Josef Stalin, or Mussolini. Do you [the reader] think that this type of desensitizing would work?
Arjuna
Reply Mon 25 Jan, 2010 10:03 am
@gotmilk9991,
gotmilk9991;122375 wrote:
First I have two things to ask of the readers before they read this. I am writing a political sci-fi novel and need opinions. Here are my first two conceptions that I hope for my fellow philosophical enthusiasts to keep in mind.
1. The tehnology mentioned is, in the context of the story, already attained and used.
2. By whatever ways, this new government mentioned, has been able to use the technology in a widespread fashion without resistance.
So by way of technology how would a society act when no emotions are used? As in, for example, the movie Equilibrium, where society takes injections that supress' emotions. Art in this world is gone, no music, no poetry, no writing, nothing. People are trapt and unable to do anything individual, to desensitize the world to past atrocities, everyone in the world has been named after certain people, such as Adolf Hilter, Josef Stalin, or Mussolini, do you [the reader] think that this type of desenstizing would work?
Our history is part of our identity. It sounds like your fictional culture wants to consciously shape its own identity. So it works to take "nature" into its hands. What would be the motive for that?
This is kin to dystopic visions like We, 1984, and Brave New World, all of which propose societies in which the powerful are all malice and no love... thus the societies described are nothing more than representations of self mutilation... headed toward self annihilation... (like in the case of Communism.) Each book suggests that life and identity grow and evolve on their own... like plants... we let them grow.. we can't make them grow because as much as we may hope to supersede nature, we don't own the magic of nature.. we don't own life, we are life.
3k1yp2
Reply Tue 26 Jan, 2010 11:33 am
@gotmilk9991,
They wouldn't act because they wouldn't care; they would just do. They would focus on survival, I guess. Sorry if it's a breach of the rules to be on here even though I don't fit the age criteria.
Pepijn Sweep
Reply Tue 26 Jan, 2010 12:39 pm
@gotmilk9991,
Sure about the novel? Maybe it's more fun to see what you can do with all this data. Sorry to see it doesn't work well to prevent terrorism, financial implosions etc.
Deckard
Reply Wed 27 Jan, 2010 02:31 pm
@Pepijn Sweep,
It would be challenging to write the narrative in the first person. How would a person completely without emotion write? Why would a person without emotion write?
curiouscat
Reply Mon 8 Mar, 2010 04:12 pm
@Deckard,
What is the goal of a society without emotion?
mister kitten
Reply Mon 8 Mar, 2010 04:14 pm
@curiouscat,
curiouscat;137676 wrote:
What is the goal of a society without emotion?
Show us we need them.
curiouscat
Reply Mon 8 Mar, 2010 04:26 pm
@mister kitten,
mister kitten;137678 wrote:
Show us we need them.
good point. Otherwise we'd just be organic robots.
mister kitten
Reply Mon 8 Mar, 2010 08:36 pm
@curiouscat,
curiouscat;137683 wrote:
good point. Otherwise we'd just be organic robots.
Eating, sh*ting, sleeping, repeating...
Pyrrho
Reply Tue 9 Mar, 2010 01:16 pm
@gotmilk9991,
gotmilk9991;122375 wrote:
First I have two things to ask of the readers before they read this. I am writing a political sci-fi novel and need opinions. Here are my first two conceptions that I hope for my fellow philosophical enthusiasts to keep in mind.
1. The tehnology mentioned is, in the context of the story, already attained and used.
2. By whatever ways, this new government mentioned, has been able to use the technology in a widespread fashion without resistance.
So by way of technology how would a society act when no emotions are used? As in, for example, the movie Equilibrium, where society takes injections that supress' emotions. Art in this world is gone, no music, no poetry, no writing, nothing. People are trapt and unable to do anything individual, to desensitize the world to past atrocities, everyone in the world has been named after certain people, such as Adolf Hilter, Josef Stalin, or Mussolini, do you [the reader] think that this type of desenstizing would work?
If one were successful in completely eliminating emotions, all of the people would die, because they would do nothing. The reason you eat is because you want to eat; take away the desire, and you do not eat. The reason you get out of the way of a speeding car in the road is because you do not want to be hit by it. Take away the desire to not be hit, and you no longer have a motive to get out of the way.
"Desire", of course, is feeling, which means it is emotion.
This, by the way, is one of the absurdities of the original idea of Vulcans presented in Star Trek, as a life of pure logic is impossible. Without emotion, there would be no goals at all, and so one would not act at all. Of course, in Star Trek, they quickly abandoned the idea of Vulcans being purely logical after all, but that is beside the point.
Logic and reason are good for being able to find the means to achieve one's goals, so they are extremely important. But they never set any ultimate goals.
pshingle
Reply Sat 13 Mar, 2010 08:03 pm
@gotmilk9991,
I personally believe that the narrative would be much more interesting if told from the point of view of an alternative character, perhaps an individual who has avoided the stripping of emotion. The basis of a good narrative novel is to have a main protagonist, antagonist, climax, conclusion, etc. The novel would do well to examine every available aspect of this new society, if only to provide contrast to the lives that we lead now.
Copyright © 2016 MadLab, LLC :: Terms of Service :: Privacy Policy :: Page generated in 0.04 seconds on 09/29/2016 at 11:10:59
I'm trying to work out the running time of BUILD-HEAP in the heapsort algorithm.
BUILD-HEAP(A)
heapsize := size(A);
for i := floor(heapsize/2) downto 1
do HEAPIFY(A, i);
end for
END
$$\sum_{h=0}^{\lfloor \lg n \rfloor} \left\lceil \frac{n}{2^{h+1}} \right\rceil O(h) = O(n)$$
suppose this is the tree
4 .. height2
/ \
2 6 .. height 1
/\ /\
1 3 5 7 .. height 0
What I understand here: $O(h)$ means the worst case for heapify at each node, so the height is $\lg n$ when the node is the root. For example, to heapify the nodes 2, 1, 3 it takes $\log_2 3 \approx 1.58$, and the height of root node 2 is 1, so the call to HEAPIFY is $\log_2 n = \text{height} = O(h)$.
I'm not sure about this $\frac{n}{2^{h+1}}$: is it the number of nodes at any given height? Suppose the height is 1 and the number of nodes is 3, such as 2, 1, 3; then $\frac{n}{2^{h+1}} = \frac{3}{2^{0+1}} = 1.5 \approx 2$, so when the height is 0 there are at most two nodes. Am I correct?
Suppose the given height is 0, so it is the last layer; then when the number of nodes is 7, $\frac{n}{2^{h+1}} = \frac{n}{2^{0+1}} = \frac{7}{2} = 3.5 \approx 4$? That gives {1, 3, 5, 7} if the root is 4.
Is the summation up to $\lg n$ because it sums over all the heights at which heapify is done?
And last, to count the big-O: BUILD-HEAP will call HEAPIFY $\frac{n}{2}$ times, and each call is $\log_2 n$ = the height of the root, so $O(\frac{n}{2} \cdot \log_2 n)$?
Please correct me if I am wrong, thanks!
https://www.growingwiththeweb.com/data-structures/binary-heap/build-heap-proof/ is the reference I used; I also read about this in the CLRS book.
Even if I don't really understand your point, I'll try to figure out an intuitive proof of the build-heap time complexity, starting from a simple recursive algorithm for building a heap.
You have a collection of n unsorted elements, called A, and from this collection you want to build a heap (in your case a min-heap, because you are maintaining the minimum of the collection in the root).
So, modelling the heap you want to create as a binary tree, it must show some properties:
1. complete: the tree is completely filled up to the second-to-last level, and the leaves are packed to the left.
2. logarithmic height: if the number of nodes in the heap is n, then the height is O(log n) (you can prove this from the first property).
3. order relation between nodes: in particular, if v is the parent of w in the tree then $$ value(v) \le value(w) $$
Now think of A as an unsorted binary tree; you can recursively construct a heap with these steps:
1. Make recursive call on the left sub-tree
2. Make recursive call on the right sub-tree
3. Call heapify on the root of the current tree
heapify
Heapify on a node v simply compares v with its children, and exchanges v with the smallest child if that child's value is less than v's. Repeat this until no child of v has a smaller value.
heapify complexity
So in heapify you make, at most, as many exchanges as the tree is high, and since the only operation you perform at each level is a comparison with the children (at most 2),
=> the time complexity is O(log n), by the 2nd property.
Now, to compute the overall complexity, you can set up a recurrence, considering the worst case in which A is a perfect binary tree (complete, with the last level filled).
So let's assume A is a perfect binary tree; the recurrence is:
$$ T(n) = 2\,T\!\left(\frac{n-1}{2}\right) + O(\log n) $$
because each recursive step makes one call on each half of the tree, excluding the root, and a step with parameter n pays $$ O(\log n) $$ for heapify.
So by Master theorem (https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms))
$$T(n) = O(n) $$
so you can deduce that building a heap from an unsorted collection of values is linear in the number of elements in the collection.
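As a sanity check on that linear bound, here is a small Python sketch of the same bottom-up BUILD-HEAP procedure (illustrative code building a min-heap, not taken from any source):

```python
# Bottom-up BUILD-HEAP: sift down each internal node, from floor(n/2)
# (index n//2 - 1 in 0-based indexing) down to the root.
def build_min_heap(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        _sift_down(a, i, n)
    return a

def _sift_down(a, i, n):
    # Repeatedly swap a[i] with its smallest child until the
    # min-heap order (parent <= children) holds.
    while True:
        left, right, smallest = 2 * i + 1, 2 * i + 2, i
        if left < n and a[left] < a[smallest]:
            smallest = left
        if right < n and a[right] < a[smallest]:
            smallest = right
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

data = [4, 2, 6, 1, 3, 5, 7]   # the tree from the question
build_min_heap(data)
print(data)                     # prints [1, 2, 5, 4, 3, 6, 7] (minimum at the root)
```

Each of the roughly n/2 sift-downs is cheap near the leaves and O(log n) only near the root, which is exactly why the total work sums to O(n).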
I know this maybe doesn't answer your question, but I hope it can help!
Chat Completion API - System prompt doesn't work all the time
const systemPrompt = `You are ShopXY Bot, a helpful and friendly assistant for an ecommerce website.
For every customer question, produce JSON output with the following keys: action, orderNumber, response. Here are the specifics:
- For every question about a product or product type, produce JSON '{"action": "search_product", "orderNumber": 0, "response": <response>}'.
- For every question about order or delivery, capture order number and then produce JSON '{"action": "track_order", "orderNumber": <orderNumber>, "response": <response>}'.
- If the customer has received damaged items, capture order number and then produce JSON '{"action": "upload_proof", "orderNumber": <orderNumber>, "response": <response>}'.
- If the customer wants to cancel the order, capture order number and then produce JSON '{"action": "cancel_order", "orderNumber": <orderNumber>, "response": <response>}'.
If you don't know the answer, say 'Please contact support!'`;
this.conversation.conversationHistoryWithContextInfo.push({ 'role': 'system', 'content': systemPrompt });
const chatRequest: CreateChatCompletionRequest = {
  model: 'gpt-3.5-turbo',
  messages: this.conversation.conversationHistoryWithContextInfo as ChatCompletionRequestMessage[],
  temperature: 0,
  max_tokens: 512
};
const response = await this.openai.createChatCompletion(chatRequest).then(result => result.data);
In the above code, I set the context for the chat completion API with the system message. It works as expected (produces JSON) sometimes; other times it doesn't work: it doesn't produce the JSON and replies with some other text. I also set the temperature to 0.
What am I missing here? Should I add anything to the prompt to make it work consistently all the time?
Often the system prompt is more like a “soft” suggestion rather than instructions, while the user prompt is more solid. I use system more to guide answers and tone rather than something strict like format. I would have this part in system: “You are ShopXY Bot, a helpful and friendly assistant for an ecommerce website.”
And then the rest in a user prompt.
It might be a bit vulnerable to prompt injections that way, though.
Thank you for the suggestion. I tried that, but it didn’t work. Maybe I’ve to rephrase the prompt in a better way.
Providing some examples can help, as can some repetition. Skip to the middle of this article I wrote—to the “task-specific completions” section—for some examples.
Actually, you might want to try the full approach laid out in the article. Separate the classification from the completion, and then provide a single very rigid JSON template for each task’s completion.
If you don’t want to refactor your approach this much, I’d start by adding a few extra user/assistant exchanges to your API call with some examples, like “Hey, my product arrived in pieces and I’m not happy about it!” along with the expected result. It can also help to be explicit about what the response should be if the user doesn’t provide the necessary details for filling in placeholders.
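A rough sketch of what such a few-shot exchange could look like (shown in Python for brevity; the message wording and the assistant reply are illustrative, not actual model output):

```python
import json

# Hypothetical few-shot exchange appended before the live user message so the
# model sees the exact JSON shape it is expected to produce.
few_shot = [
    {"role": "user",
     "content": "Hey, my product arrived in pieces and I'm not happy about it!"},
    {"role": "assistant",
     "content": json.dumps({
         "action": "upload_proof",
         "orderNumber": 0,
         "response": "Sorry to hear that! Could you share your order number?",
     })},
]

# These dicts mirror the ChatCompletionRequestMessage objects and would be
# pushed onto conversationHistoryWithContextInfo before the real message.
print(few_shot[1]["content"])
```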
Also, as @smuzani mentioned, at present, the system prompt is probably not the best place for these instructions.
I read your article. I think that is an interesting approach when you cannot fit every scenario in a single prompt, given the token limit and making the AI model to respond consistently. Yes, I’ll try that, thank you.
Dupin cyclide
From Wikipedia, the free encyclopedia
A Dupin cyclide
In mathematics, a Dupin cyclide or cyclide of Dupin is any geometric inversion of a standard torus, cylinder or double cone. In particular, these latter are themselves examples of Dupin cyclides. They were discovered by (and named after) Charles Dupin in his 1803 dissertation under Gaspard Monge.[1] The key property of a Dupin cyclide is that it is a channel surface (envelope of a one-parameter family of spheres) in two different ways. This property means that Dupin cyclides are natural objects in Lie sphere geometry.
Dupin cyclides are often simply known as cyclides, but the latter term is also used to refer to a more general class of quartic surfaces which are important in the theory of separation of variables for the Laplace equation in three dimensions.
Dupin cyclides were investigated not only by Dupin, but also by A. Cayley and J.C. Maxwell.
Today, Dupin cyclides are used in computer-aided design (CAD), because cyclide patches have rational representations and are suitable for blending canal surfaces (cylinders, cones, tori, and others).
Definitions and properties
There are several equivalent definitions of Dupin cyclides. In \R^3, they can be defined as the images under any inversion of tori, cylinders and double cones. This shows that the class of Dupin cyclides is invariant under Möbius (or conformal) transformations. In complex space \C^3 these three latter varieties can be mapped to one another by inversion, so Dupin cyclides can be defined as inversions of the torus (or the cylinder, or the double cone).
Since a standard torus is the orbit of a point under a two-dimensional abelian subgroup of the Möbius group, it follows that the cyclides are as well, and this provides a second way to define them.
A third property which characterizes Dupin cyclides is that their curvature lines are all circles (possibly through the point at infinity). Equivalently, the curvature spheres, which are the spheres tangent to the surface with radii equal to the reciprocals of the principal curvatures at the point of tangency, are constant along the corresponding curvature lines: they are the tangent spheres containing the corresponding curvature lines as great circles. Equivalently again, both sheets of the focal surface degenerate to conics.[2] It follows that any Dupin cyclide is a channel surface (i.e., the envelope of a one-parameter family of spheres) in two different ways, and this gives another characterization.
The definition in terms of spheres shows that the class of Dupin cyclides is invariant under the larger group of all Lie sphere transformations; any two Dupin cyclides are Lie-equivalent. They form (in some sense) the simplest class of Lie-invariant surfaces after the spheres, and are therefore particularly significant in Lie sphere geometry.[3]
The definition also means that a Dupin cyclide is the envelope of the one-parameter family of spheres tangent to three given mutually tangent spheres. It follows that it is tangent to infinitely many Soddy's hexlet configurations of spheres.
Parametric and implicit representation
(CS): A Dupin cyclide can be represented in two ways as the envelope of a one-parameter pencil of spheres, i.e. it is a canal surface with two directrices. The pair of directrices consists either of an ellipse and a hyperbola or of two parabolas. In the first case the cyclide is called elliptic, in the second case parabolic. In both cases the conics are contained in two mutually orthogonal planes. In extreme cases (if the ellipse is a circle) the hyperbola degenerates to a line and the cyclide is a torus of revolution.
A further special property of a cyclide is:
(CL): Any curvature line of a Dupin cyclide is a circle.
Elliptic cyclides
An elliptic cyclide can be represented parametrically by the following formulas (see the external links):
x=\frac{d(c-a\cos u\cos v)+b^2\cos u}{a-c\cos u \cos v} \ ,
y=\frac{b\sin u (a-d\cos v)}{a-c\cos u \cos v} \ ,
z=\frac{b\sin v (c \cos u-d)}{a-c\cos u \cos v} \ ,
0\le u,v <2\pi \ .
The numbers a,b,c,d fulfill the conditions a>b>0, c^2=a^2-b^2, d\ge 0 and determine the shape of the ellipse \frac{x^2}{a^2}+\frac{y^2}{b^2}=1, z=0 and the hyperbola \frac{x^2}{c^2}-\frac{z^2}{b^2}=1, y=0.
For u = const and v = const, respectively, one gets the curvature lines (circles) of the surface.
The corresponding implicit representation is:
(x^2+y^2+z^2+b^2-d^2)^2-4(ax-cd)^2-4b^2y^2=0 \ .
In case of a=b one gets c=0, i.e., the ellipse is a circle and the hyperbola degenerates to a line. The corresponding cyclides are tori of revolution.
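As a numerical sanity check (a sketch added here, not part of the original article), one can verify that points produced by the parametric formulas satisfy the implicit equation; the parameter values a, b, d below are arbitrary choices with a > b > 0 and d ≥ 0:

```python
import math

a, b, d = 2.0, 1.0, 0.5
c = math.sqrt(a * a - b * b)  # c^2 = a^2 - b^2

def point(u, v):
    """A point of the elliptic cyclide from the parametric representation."""
    denom = a - c * math.cos(u) * math.cos(v)
    x = (d * (c - a * math.cos(u) * math.cos(v)) + b * b * math.cos(u)) / denom
    y = b * math.sin(u) * (a - d * math.cos(v)) / denom
    z = b * math.sin(v) * (c * math.cos(u) - d) / denom
    return x, y, z

def implicit(x, y, z):
    """Left-hand side of the implicit equation; zero on the surface."""
    r2 = x * x + y * y + z * z
    return (r2 + b * b - d * d) ** 2 - 4 * (a * x - c * d) ** 2 - 4 * b * b * y * y

for u, v in [(0.0, 0.0), (0.7, 1.9), (2.5, 4.1), (5.3, 0.4)]:
    assert abs(implicit(*point(u, v))) < 1e-8
print("all sample points satisfy the implicit equation")
```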
(elliptic) Dupin cyclides for design parameters a, b, c, d:
d = 0: symmetric horn cyclide
0 < d < c: horn cyclide
d = c: horn cyclide
c < d: ring cyclide
d = a: ring cyclide
a < d: spindle cyclide
Parabolic cyclides
A parabolic cyclide can be represented by the following parametric representation:
x=\frac{p}{2}\, \frac{2v^2+k(1-u^2-v^2)}{1+u^2+v^2} \ ,
y=pu\, \frac{v^2+k}{1+u^2+v^2} \ ,
z=pv\, \frac{1+u^2-k}{1+u^2+v^2} \ ,
-\infty<u,v<\infty \ .
The number p determines the shape of both parabolas: y^2=p^2-2px, z=0 and z^2=2px, y=0.
A corresponding implicit representation is
(x+(\frac{k}{2}-1)p)(x^2+y^2+z^2- \frac{k^2p^2}{4})+pz^2=0 \ .
parabolic Dupin cyclides for design parameters p = 1, k:
k = 0.5: ring cyclide
k = 1: horn cyclide
k = 1.5: horn cyclide
Remark: when displaying the circles, gaps appear that are caused by the necessary restriction of the parameters u, v.
Dupin cyclides and geometric inversions
ring cyclide generated by an inversion of a cylinder at a sphere (magenta)
parabolic ring cyclide generated by an inversion of a cylinder containing the origin
horn cyclide generated by an inversion of a cone
ring cyclide generated by an inversion of a torus
A useful property for investigating cyclides is:
(I): Any Dupin cyclide is the image of either a right circular cylinder, a right circular double cone, or a torus of revolution under an inversion (reflection in a sphere).
The inversion in the sphere with equation x^2+y^2+z^2=R^2 can be described analytically by:
(x,y,z) \rightarrow \frac{R^2\cdot(x,y,z)}{x^2+y^2+z^2} \ .
The most important properties of an inversion in a sphere are:
1. Spheres and circles are mapped onto spheres and circles.
2. Planes and lines containing the origin (center of inversion) are mapped onto themselves.
3. Planes and lines not containing the origin are mapped onto spheres or circles passing through the origin.
4. An inversion is involutory (identical to its inverse mapping).
5. An inversion preserves angles.
Arbitrary surfaces can be mapped by an inversion. The formulas above yield parametric or implicit representations of the image surface whenever the original surface is given parametrically or implicitly. For a parametric surface one gets:
(x(u,v),y(u,v),z(u,v)) \rightarrow \frac{R^2\cdot(x(u,v),y(u,v),z(u,v))}{x(u,v)^2+y(u,v)^2+z(u,v)^2} \ .
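As a small numerical sketch (added here, not part of the article), property 4 above can be checked directly by applying the inversion twice to a sample point:

```python
def invert(p, R=1.0):
    """Inversion in the sphere x^2 + y^2 + z^2 = R^2."""
    x, y, z = p
    s = R * R / (x * x + y * y + z * z)
    return (s * x, s * y, s * z)

p = (0.3, -1.2, 2.0)
q = invert(invert(p, R=2.0), R=2.0)  # applying the same inversion twice

# Up to floating-point error, q recovers the original point p.
assert all(abs(pi - qi) < 1e-12 for pi, qi in zip(p, q))
print(q)
```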
But only for right circular cylinders, right circular cones, and tori of revolution does one get Dupin cyclides, and vice versa.
Example cylinder
a) Lines that do not contain the origin are mapped by an inversion in a sphere (magenta in the picture) onto circles containing the origin, so the image of the cylinder is a ring cyclide whose circles mutually touch at the origin. The line segments shown in the picture are mapped onto circle segments. The spheres which touch the cylinder on the inner side are mapped onto a first pencil of spheres which generate the cyclide as a canal surface. The images of the tangent planes of the cylinder become the second pencil of spheres touching the cyclide; the latter pass through the origin.
b) The second example inverts a cylinder that contains the origin. Lines passing through the origin are mapped onto themselves. Hence the surface is unbounded and a parabolic cyclide.
Example cone
The lines generating the cone are mapped onto circles which intersect at the origin and at the image of the cone's vertex. The image of the cone is a double horn cyclide. The picture shows the images of the line segments (of the cone), which are actually circle segments.
Example torus
Both pencils of circles on the torus (shown in the picture) are mapped onto the corresponding pencils of circles on the cyclide. In the case of a self-intersecting torus one would get a spindle cyclide.
Separation of variables
Dupin cyclides are a special case of a more general notion of a cyclide, which is a natural extension of the notion of a quadric surface. Whereas a quadric can be described as the zero-set of a second order polynomial in Cartesian coordinates (x_1, x_2, x_3), a cyclide is given by the zero-set of a second order polynomial in (x_1, x_2, x_3, r^2), where r^2 = x_1^2 + x_2^2 + x_3^2. Thus it is a quartic surface in Cartesian coordinates, with an equation of the form:
A r^4 + \sum_{i=1}^3 P_i x_i r^2 + \sum_{i,j=1}^3 Q_{ij} x_i x_j + \sum_{i=1}^3 R_i x_i + B = 0
where Q is a 3×3 matrix, P and R are 3-dimensional vectors, and A and B are constants.[4]
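To illustrate (a sketch added here, not part of the article), expanding the elliptic Dupin equation (x^2+y^2+z^2+b^2-d^2)^2 - 4(ax-cd)^2 - 4b^2y^2 = 0 puts it in this general form with A = 1, P = 0, Q diagonal, R = (8acd, 0, 0), and B = (b^2-d^2)^2 - 4c^2d^2; the two expressions can be compared numerically:

```python
import random

a, b, c, d = 2.0, 1.0, 3.0 ** 0.5, 0.5  # with c^2 = a^2 - b^2

# Coefficients obtained by expanding (r^2 + b^2 - d^2)^2 and distributing
# the 2(b^2-d^2) r^2 term over x^2 + y^2 + z^2.
A = 1.0
Qxx = 2 * (b * b - d * d) - 4 * a * a
Qyy = 2 * (b * b - d * d) - 4 * b * b
Qzz = 2 * (b * b - d * d)
Rx = 8 * a * c * d
B = (b * b - d * d) ** 2 - 4 * c * c * d * d

def dupin(x, y, z):
    r2 = x * x + y * y + z * z
    return (r2 + b * b - d * d) ** 2 - 4 * (a * x - c * d) ** 2 - 4 * b * b * y * y

def general(x, y, z):
    r2 = x * x + y * y + z * z
    return A * r2 * r2 + Qxx * x * x + Qyy * y * y + Qzz * z * z + Rx * x + B

random.seed(0)
for _ in range(100):
    x, y, z = (random.uniform(-3, 3) for _ in range(3))
    assert abs(dupin(x, y, z) - general(x, y, z)) < 1e-8 * max(1.0, abs(dupin(x, y, z)))
```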
Families of cyclides give rise to various cyclidic coordinate geometries.
In Maxime Bôcher's 1891 dissertation, Ueber die Reihenentwickelungen der Potentialtheorie, it was shown that the Laplace equation in three variables can be solved using separation of variables in 17 conformally distinct quadric and cyclidic coordinate geometries. Many other cyclidic geometries can be obtained by studying R-separation of variables for the Laplace equation.[5]
References
• Cecil, Thomas E. (1992), Lie sphere geometry, New York: Universitext, Springer-Verlag, ISBN 978-0-387-97747-8 .
• Eisenhart, Luther P. (1960), "§133 Cyclides of Dupin", A Treatise on the Differential Geometry of Curves and Surfaces, New York: Dover, pp. 312–314 .
• Hilbert, David; Cohn-Vossen, Stephan (1999), Geometry and the Imagination, American Mathematical Society, ISBN 0-8218-1998-4 .
• Moon, Parry; Spencer, Domina Eberle (1961), Field Theory Handbook: including coordinate systems, differential equations, and their solutions, Springer, ISBN 0-387-02732-7 .
• O'Connor, John J.; Robertson, Edmund F. (2000), "Pierre Charles François Dupin", MacTutor History of Mathematics archive .
• Pinkall, Ulrich (1986), "§3.3 Cyclides of Dupin", in G. Fischer, Mathematical Models from the Collections of Universities and Museums, Braunschweig, Germany: Vieweg, pp. 28–30 .
• Miller, Willard (1977), Symmetry and Separation of Variables .
• A. Cayley: On the cyclide. In: Quarterly Journal of Pure and Applied Mathematics. 12, 1873, p. 148–163.
• V. Chandru, D. Dutta, C.M. Hoffmann: On the geometry of Dupin cyclides. In: The Visual Computer. 1989 (5), p. 277–290.
• C. Dupin: Applications de Geometrie et de Mechanique. Bachelier, Paris 1822.
• F. Klein, W. Blaschke: Vorlesungen Über Höhere Geometrie. Springer-Verlag, 1926, ISBN 978-3-642-98494-5, p. 56.
• J. C. Maxwell: On the cyclide. In: Quarterly Journal of Pure and Applied Mathematics. 9, 1868, p. 111–126.
• M. J. Pratt: Cyclide Blending in Solid Modelling. In: Wolfgang Strasser, Hans-Peter Seidel (eds.): Theory and Practice in Geometric Modelling. Springer-Verlag, 1989, ISBN 0-387-51472-4, p. 235.
• Y. L. Srinivas, V. Kumar, D. Dutta: Surface design using cyclide patches. In: Computer-Aided Design. Volume 28, Issue 4, 1996, p. 263–276.
External links
Swift Language Tour Part 1: Named Parameters
This is the first post in a series of posts I plan to run on some of the features I find most interesting in Apple's new programming language. I am also keeping a GitHub repo up to date with most of the content that I will write about here.
In Swift, by default, a function takes anonymous parameters:
func printTwoStrings(strA: String, strB: String) {
println("\(strA) \(strB)")
}
printTwoStrings("Hello", "World")
Sometimes, it can be more readable if you specify a named parameter. You can do this by specifying the parameter name before declaring the local variable name.
func sayHello(to name: String) {
println("Hello, \(name)")
}
sayHello(to: "Bob")
If you want to expose an external name with the same name as your local variable, you can do so by prefixing the variable with a #.
func sayHello(#to: String) {
println("Hello, \(to)")
}
sayHello(to: "Bob")
Default values and Named Parameters
If you specify a default value, Swift will add an external name to the parameter for you.
var counter = 0
func increment(by: Int = 1) {
counter += by
}
increment()
println(counter) // prints 1
increment(by: 5)
println(counter) // prints 6
Methods and Named Parameters
To make Swift look more "Objective-C like", Apple decided to have all but the first parameter be named by default on instance methods. For example:
class Person {
var name: String
init(name: String) {
self.name = name
}
func say(message: String, to: String) {
println("\(self.name) says: \(message) to \(to)")
}
}
var bob = Person(name: "Bob")
bob.say("Hello", to: "Paul")
If you prefer to define a differently named local variable, you can do so by explicitly setting it:
class Person {
var name: String
init(name: String) {
self.name = name
}
func say(message: String, to otherPerson: String) {
// We now have named the variable otherPerson
println("\(self.name) says: \(message) to \(otherPerson)")
}
}
var bob = Person(name: "Bob")
bob.say("Hello", to: "Paul")
You can also specify that you want the parameter to be anonymous by using _ for the external name.
class Person {
var name: String
init(name: String) {
self.name = name
}
func say(message: String, _ otherPerson: String) {
// We now have named the variable otherPerson
println("\(self.name) says: \(message) to \(otherPerson)")
}
}
var bob = Person(name: "Bob")
bob.say("Hello", "Paul")
Named parameters are an interesting addition that allow for something of a hybrid between Ruby's splat operator and the more traditional anonymous parameters found in other languages like C, JavaScript, etc. This concludes my first post on Swift. Check back for more language features soon.
Ryan Schmukler
I'm a software developer, hacker and entrepreneur. I love making cool things. Follow me on Github or Twitter.
Brooklyn, New York
January 5, 2022
What is Cyber Security?
What is Cyber Security or Computer Security or Information Technology Security? It's the practice of protecting computers, mobile phones, servers, electronic systems, IoT, and other devices, networks, and data from malicious attacks that disrupt the services they provide.
Cyber Security can be classified into some common categories:
🢂 Application and System security
🢂 Network security
🢂 Information and Data security
🢂 Disaster recovery and business continuity
🢂 Operational security
🢂 End-user Learning
The threat is real, and it is growing.
Cyber threats are increasing in number and becoming more complex every year. There are even reports that the amount of data exposed doubles every year. As the pandemic raged on, medical services became the most targeted, with retailers and public, government, and private entities targeted as well. Cybercriminals aim to acquire financial and medical data. In fact, any business that uses networks can be attacked for its customer data or even for corporate espionage.
Spending on Cyber Security to combat these threats is already critical and may reach billions by next year. This is probably why a lot of governments have taken the hint and are already giving guidance so that better and more effective Cyber Security practices can be put in place.
The NIST, or National Institute of Standards and Technology, has forged a Cyber Security Framework, you can take a look at their website here: https://www.nist.gov/cyberframework
Skimming through the framework, you'll know they recommend one thing: constant, vigilant monitoring.
Now, how do Cyber Criminals do the deed? Watch out for these common tricks:
1. Malware
Malware is a portmanteau of "malicious" and "software". It is any software, application, code, or script used to damage or disrupt a computer. It's spread through email, downloads, or even through external drives. Most of the time the motive is money, but attacks can also be politically motivated.
Types of Malware
➽ Virus: Like the biological ones, it replicates, attaches to a clean host file then infects your system.
➽ Trojans: Named for the Trojan Horse, a Trojan hides inside real software. When the user opens that software, it seems to run normally, while the Trojan is released in the background and can now cause damage.
➽ Spyware: Another word combination: spy + software. It's malware that records and steals information, such as passwords or credit card numbers.
➽ Adware: Malware inside advertising software.
➽ Botnets: Networks of malware-infected computers, used by cybercriminals to perform tasks over the internet without permission from the legitimate users.
➽ Ransomware: Ah, the infamous one. Whether you are a tech person or not, this kind of malware has been in the news. It's malware that locks down a user's files and data, even entire systems. The files and data are held for ransom until the perpetrators are paid.
2. SQL injection
Through an SQL (structured query language) injection, a cybercriminal can take control of a database and steal data from it. Using a malicious SQL statement, cybercriminals exploit vulnerabilities in database applications to insert malicious code and take control. Imagine all the kinds of information they have access to when this happens.
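As an illustration (a minimal, self-contained sketch with a hypothetical table, not tied to any particular product), here is the difference between a vulnerable query built by string concatenation and a safe parameterized query:

```python
import sqlite3

# Set up a throwaway in-memory database with one example row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: the payload becomes part of the SQL text and rewrites the query.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver treats the input strictly as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] -- the injected OR clause matched every row
print(safe)        # [] -- no user is literally named "' OR '1'='1"
```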
3. Phishing
That email from your financial institution, or your boss, that looks so real asking you for some sensitive information? That’s a Phishing email. Phishing attacks steal your information, like credit card data, or app PINs, for financial gain. It can also take the form of SMS, called Smishing.
4. Man-in-the-middle attack
Nope, not that Michael Jackson song. A man-in-the-middle attack happens when a cybercriminal comes between two parties and intercepts their communication. For example, on open Wi-Fi in a cafe, an attacker can intercept the data going from one person's device to the internet by using tools that impersonate the Wi-Fi access point.
5. Denial-of-service attack
A denial-of-service attack happens when a website or a computer system is bombarded with traffic that overwhelms its networks and servers. Essentially, the system becomes unusable because it can no longer serve legitimate requests, and service grinds to a halt.
These are just the most common forms of Cyber threats. As technology evolves and takes other forms, we will continue to seek better protection methods and the criminals will find other ways to create cyber security threats.
That’s what Cyber Security or Computer Security or Information Technology Security is in a nutshell. That’s our first peek into this deep dark world. Now that you have an idea about how these threats are initiated, delivered, and get inside your system, our next blog will deal with how you can protect and arm yourself from these threats and cyber attacks. Don’t worry, Boom Logic’s got your back.
Tips Manage Users and Access with Amazon IAM
ngoctru511
New member
**How to Manage Users and Access with Amazon IAM**
Amazon Identity and Access Management (IAM) is a web service that helps you manage access to AWS resources. IAM users can be granted different levels of access to resources, such as the ability to create, delete, or modify resources. You can also use IAM to create groups of users and assign permissions to those groups.
**Creating Users**
To create a user, you can use the AWS Management Console or the AWS CLI. In the AWS Management Console, go to the IAM console and click **Users**. Then, click **Add user** and enter a username and password for the user. You can also choose to enable multi-factor authentication for the user.
In the AWS CLI, you can create a user by running the following command:
```
aws iam create-user --user-name <username>
```
You can then use the following command to set a console password for the user:
```
aws iam create-login-profile --user-name <username> --password <password>
```
**Assigning Permissions**
Once you have created a user, you need to assign permissions to the user. You can do this by attaching policies to the user directly, or by creating a **role** that the user is allowed to assume. A role is a collection of permissions that can be assumed by users or AWS services.
To create a role, you can use the AWS Management Console or the AWS CLI. In the AWS Management Console, go to the IAM console and click **Roles**. Then, click **Create role** and select the type of role you want to create.
In the AWS CLI, you can create a role by running the following command:
```
aws iam create-role --role-name <role-name> --assume-role-policy-document <policy-document>
```
The `policy-document` parameter is a JSON document that specifies the permissions that the role will have. You can find more information about policy documents in the [IAM documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html).
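For example, a minimal trust (assume-role) policy document might look like the following; the EC2 service principal is purely illustrative, so substitute whoever should be allowed to assume the role:

```python
import json

# A minimal trust (assume-role) policy document. "2012-10-17" is the standard
# IAM policy language version string.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# The CLI expects the document as a JSON string, so something like
# json.dumps(trust_policy) would be passed as --assume-role-policy-document.
print(json.dumps(trust_policy, indent=2))
```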
Once you have created a role, you can attach a permissions policy to it by running the following command:
```
aws iam attach-role-policy --role-name <role-name> --policy-arn <policy-arn>
```
The `policy-arn` parameter is the Amazon Resource Name (ARN) of the policy that you want to attach to the role.
**Managing Access Keys**
When you create a user, you can also create access keys for the user (for example with `aws iam create-access-key`). An access key is a pair of strings: an **access key ID** and a **secret access key**. The access key ID identifies the key, and the secret access key is used to sign requests to AWS services.
You can view your user's access keys in the AWS Management Console or the AWS CLI. In the AWS Management Console, go to the IAM console and click **Users**. Then, click the **Access Keys** tab for the user.
In the AWS CLI, you can view your user's access keys by running the following command:
```
aws iam list-access-keys --user-name <username>
```
You can also create, delete, and rotate your user's access keys using the AWS Management Console or the AWS CLI.
**Managing Groups**
You can use groups to manage users who have similar permissions. You can create a group and then add users to the group. When you add a user to a group, the user inherits the permissions of the group.
To create a group, you can use the AWS Management Console or the AWS CLI. In the AWS Management Console, go to the IAM console and click **Groups**. Then, click **Create group** and enter a name for the group.
In the AWS CLI, you can create a group by running the following command:
```
aws iam create-group --group-name <group-name>
```
You can then add users to the group with the `aws iam add-user-to-group` command, passing `--group-name` and `--user-name`.
programming Qt applications in python
Step 1: simple hello world
Step 2: a button
Step 3: a more structured approach
Step 4: using Qt designer
This tutorial aims to provide a hands-on guide to learn the basics of building a small Qt application in python.
To follow this tutorial, you should have basic Python knowledge; knowledge of Qt, however, is not necessary. I'm using Linux in these examples and am assuming you already have a working installation of Python and PyQt. To test that, open a Python shell by simply typing python in a console to start the interactive interpreter and type
>>> import qt
If this doesn't yield an error message, you should be ready to roll. The examples in this tutorial are kept as easy as possible, showing useful ways to write and structure your program. It is important that you read the source code of the example files, most of the stuff that is done is explained in the code. Use the examples and try to change things, play around with them. This is the best way to get comfortable with it.
Hello, world!
Let's start easy. Popping up a window and displaying something. The following small program will popup a window showing "Hello world!", obviously.
#!/usr/bin/env python
import sys
from qt import *
# We instantiate a QApplication passing the arguments of the script to it:
a = QApplication(sys.argv)
# Add a basic widget to this application:
# The first argument is the text we want this QWidget to show, the second
# one is the parent widget. Since Our "hello" is the only thing we use (the
# so-called "MainWidget", it does not have a parent.
hello = QLabel("Hello world!",None)
# We have to let the application know which widget is its MainWidget ...
a.setMainWidget(hello)
# ... and that it should be shown.
hello.show()
# Now we can start it.
a.exec_loop()
download source of hello.py
About 7 lines of code, and that's about as easy as it can get.
A button
Let's add some interaction! We'll replace the label saying "Hello, World!" with a button and assign an action to it. This assignment is done by connecting a signal (an event which is sent out when the button is pushed) to a slot (an action, normally a function that is run in the case of that event).
#!/usr/bin/env python
import sys
from qt import *
a = QApplication(sys.argv)
# Our function to call when the button is clicked
def sayHello():
print "Hello, World!"
# Instantiate the button
hellobutton = QPushButton("Say 'Hello world!'",None)
# And connect the action "sayHello" to the event "button has been clicked"
a.connect(hellobutton, SIGNAL("clicked()"), sayHello)
# The rest is known already...
a.setMainWidget(hellobutton)
hellobutton.show()
a.exec_loop()
download source of hellobutton.py
Urgh, that looks like a crappy approach
You can imagine that coding this way is neither scalable nor the way you'll want to continue working. So let's make that stuff pythonic, adding structure and actually using object-orientation in it. We create our own application class, derived from a QApplication, and put the customization of the application into its methods: one method to build up the widgets and a slot which contains the code that's executed when a signal is received.
#!/usr/bin/env python
import sys
from qt import *
class HelloApplication(QApplication):
def __init__(self, args):
""" In the constructor we're doing everything to get our application
started, which is basically constructing a basic QApplication by
its __init__ method, then adding our widgets and finally starting
the exec_loop."""
QApplication.__init__(self, args)
self.addWidgets()
self.exec_loop()
def addWidgets(self):
""" In this method, we're adding widgets and connecting signals from
these widgets to methods of our class, the so-called "slots"
"""
self.hellobutton = QPushButton("Say 'Hello world!'",None)
self.connect(self.hellobutton, SIGNAL("clicked()"), self.slotSayHello)
self.setMainWidget(self.hellobutton)
self.hellobutton.show()
def slotSayHello(self):
""" This is an example slot, a method that gets called when a signal is
emitted """
print "Hello, World!"
# Only actually do something if this script is run standalone, so we can test our
# application, but we're also able to import this program without actually running
# any code.
if __name__ == "__main__":
app = HelloApplication(sys.argv)
download source of helloclass.py (Note that the above code does exactly the same as the second example, but it's scalable.)
gui coding sucks
... so we want to use Qt3 Designer for creating our GUI. In the picture, you can see a simple GUI, with the names of the widgets in green letters. What we are going to do is
• We compile the .ui file from Qt designer into a python class
• We subclass that class and use it as our mainWidget
• This way, we're able to change the user interface afterwards from Qt designer, without having it messing around in the code we added.
pyuic testapp_ui.ui -o testapp_ui.py
makes a python file from it which we can work with.
The way our program works can be described like this:
• We fill in the lineedit
• Clicking the add button will be connected to a method that reads the text from the lineedit, makes a listviewitem out of it and adds that to our listview.
• Clicking the deletebutton will delete the currently selected item from the listview.
• Here's the heavily commented code:
#!/usr/bin/env python
from testapp_ui import TestAppUI
from qt import *
import sys
class HelloApplication(QApplication):
def __init__(self, args):
""" In the constructor we're doing everything to get our application
started, which is basically constructing a basic QApplication by
its __init__ method, then adding our widgets and finally starting
the exec_loop."""
QApplication.__init__(self,args)
# We pass None since it's the top-level widget, we could in fact leave
# that one out, but this way it's easier to add more dialogs or widgets.
self.maindialog = TestApp(None)
self.setMainWidget(self.maindialog)
self.maindialog.show()
self.exec_loop()
class TestApp(TestAppUI):
def __init__(self,parent):
# Run the parent constructor and connect the slots to methods.
TestAppUI.__init__(self,parent)
self._connectSlots()
# The listview is initially empty, so the deletebutton will have no effect,
# we grey it out.
self.deletebutton.setEnabled(False)
def _connectSlots(self):
# Connect our two methods to SIGNALS the GUI emits.
self.connect(self.addbutton,SIGNAL("clicked()"),self._slotAddClicked)
self.connect(self.deletebutton,SIGNAL("clicked()"),self._slotDeleteClicked)
def _slotAddClicked(self):
# Read the text from the lineedit,
text = self.lineedit.text()
# if the lineedit is not empty,
if len(text):
# insert a new listviewitem ...
lvi = QListViewItem(self.listview)
# with the text from the lineedit and ...
lvi.setText(0,text)
# clear the lineedit.
self.lineedit.clear()
# The deletebutton might be disabled, since we're sure that there's now
# at least one item in it, we enable it.
self.deletebutton.setEnabled(True)
def _slotDeleteClicked(self):
# Remove the currently selected item from the listview.
self.listview.takeItem(self.listview.currentItem())
# Check if the list is empty - if yes, disable the deletebutton.
if self.listview.childCount() == 0:
self.deletebutton.setEnabled(False)
if __name__ == "__main__":
app = HelloApplication(sys.argv)
download source of testapp.py download ui file testapp_ui.ui download compiled ui testapp_ui.py
useful to know
Creating the GUI in Qt Designer does not only make it easier to create the GUI, but it's a great learning tool, too. You can see what a widget looks like, see what's available in Qt and have a look at properties you might want to use.
The C++ API documentation is also a very useful (read: necessary) tool when working with PyQt. The API is translated pretty straightforwardly, so after having trained a little, you'll find the developer API docs one of the tools you really need. When working from KDE, Konqueror's default shortcut is qt:[widgetname], so [alt]+[F2], "qt:qbutton" directly takes you to the right API documentation page. Trolltech's doc section has much more documentation which you might want to have a look at.
The examples in this tutorial have been created using Qt 3.3. I might update the tutorial when there's a usable version of the Qt bindings for Qt 4 available, but at this moment, using PyQt4 does not make sense.
This document is published under the GNU Free Documentation License.
07-12-2005, 19:22 h
© Sebastian Kügler
Back to Blog
26th April 2024
Progressive Web Apps PWA vs Native Apps: How to Choose One? A Complete Comparison
For a business that’s venturing into app development for the first time, deciding what type of app will work best for your business can be a little difficult. Some terms you’ll likely come across frequently include Progressive Web Apps, hybrid apps, and native apps.
The debate between PWA vs native apps is a long-standing one, with each one presenting valid pros and cons. In this comprehensive guide, we’ll explore everything you need to know about Progressive Web Apps and native apps. We’ll also consider the pros and cons of each of these app options so you can make an informed decision about which of these is best for your business.
• A Progressive Web App is a flexible web application that combines some of the best features of websites and mobile apps.
• A native mobile app is a software developed for a specific mobile operating system, which is why it typically has native capabilities.
• The primary difference between a Progressive Web Application and a native app is in how they’re built.
• PWAs are easier and cheaper to build; native mobile apps are more time- and resource-intensive, but they tend to deliver superior native performance.
What is PWA?
Before diving into the differences between a Progressive Web App and a native app, there’s a need to understand how each of them works. A Progressive Web App is a flexible web application that combines some of the best features of websites and mobile apps. PWAs are built using web technologies like HTML, CSS, and JavaScript frameworks that work across different platforms.
Unlike native apps that you’ll have to download and install on your mobile phone, Progressive Web Apps load directly in a web browser but still deliver a full-screen experience similar to what you get with an app. A PWA can also be added to the home screen of your device with an icon. This icon creates a shortcut to the PWA, but it’s not a full app installation since it essentially acts like a website when opened.
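The home-screen installation behavior is typically driven by a web app manifest — a small JSON file the page links to. As a rough, hypothetical sketch (all values here are made up for illustration):

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#0a84ff",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The page would then reference it with something like `<link rel="manifest" href="/manifest.json">` in its HTML head, and the browser uses the name and icons when the user adds the app to the home screen.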
To learn more about PWAs and find answers to questions like what is a Progressive Web App, read our article on this topic. You’ll find more information about the definition, features, and examples of Progressive Web Apps in this article.
What are Native Apps?
A native mobile app is software developed for a specific mobile operating system (usually Android or iOS). They are built using the operating system’s native programming languages and tools. This allows them to access the full capabilities of the hardware and software of the mobile device on which they work such as the camera, GPS function, accelerometer, push notifications, and so on.
Since they’re designed specifically for the phone’s operating system, native apps generally deliver a faster and smoother experience to users. Some native apps may even work without an internet connection. Native apps are downloaded from the app store and often require regular updates.
Difference Between Progressive Web App and Native App
To truly understand the key differences between Progressive Web Apps and native apps, you'll have to evaluate how they're developed and what each of them can do. Here's an overview of the differences between native apps and PWAs.
How They’re Written
The primary difference between a Progressive Web Application and a native app is in how they’re built. Progressive Web Apps are primarily designed to run inside a web browser like a website. Consequently, they’re built with web technologies such as HTML, CSS, and JavaScript. This allows them to work on pretty much any device with a web browser.
Native apps, on the other hand, are built specifically for the operating system where you need them to work. Developers use native programming languages for each platform to build the app. iOS apps are built with Swift and Objective-C while Android apps are built with Java and Kotlin.
Cost of Development
When it comes to speed and overall cost of development, Progressive Web Apps are generally cheaper and easier to build compared to native mobile apps. That’s because they’re built with technologies that work across platforms. All you need is a single code base that loads seamlessly across all devices and operating systems.
For native applications, you’ll have to build at least two versions of the site (for iOS and Android) using different programming languages. This means double the time and double the development efforts. In most cases, you’ll have to hire separate development teams to handle the development of each version of the app. Updating and maintaining each version of the app can also be resource-intensive.
Distribution
Native apps are difficult to distribute compared to PWAs. Apart from building separate versions of your native app for different platforms, you’ll also have to submit each version to a separate app store to get it to the users. Users can only download iOS apps on Apple’s App Store while Android apps are on Google Play Store. Other less popular options include Amazon’s App Store, Huawei App Gallery, and Windows Phone Store.
Getting your apps on these stores can be a little complicated. That’s because they often set a stringent set of requirements that all apps must meet before they can be published. This includes both technical requirements and ethical guidelines. Some platforms also require developers to pay a fee to maintain a developer account and submit apps. You’ll also have to invest in App Store optimization to make your apps more visible on the store and get it across to more users.
With Progressive Web Apps, there’s no need to upload or install an app package. Users can easily find your app online especially if you invest in optimizing the app for search engines which makes it easier to find your app organically.
Trust
All a user needs to access a Progressive Web App is a web browser and a URL. Without the cumbersome requirements set by app stores, it’s difficult to trust Progressive Web Apps to be safe and secure. The stringent technical and ethical requirements of each store prevent developers from rolling out poor-quality apps to users. This boosts their reliability and gives users more confidence to download and install the application on their devices.
PWA vs Native Performance
Native apps are developed separately for iOS and Android devices. This ensures that the app is tailor-made for each operating system and can access the full capabilities of the devices on which they operate. By accessing the built-in phone features like the GPS, camera, or fingerprint sensor, native apps can deliver more advanced capabilities.
With Progressive Web Apps, the developer simply creates a responsive web interface and publishes it. The performance and how this web app will be displayed is up to the user’s browser and screen parameters. While most Progressive Web Apps today try to strike a balance between a responsive website and apps, the experience isn’t always the same across all devices and browsers.
As a corollary, the cost of the smooth performance of native apps is that they’re often heavyweight and resource-intensive. PWAs on the other hand are built to be lightweight. Since nothing is installed on your device, they take up very little memory space on your device. For instance, Starbucks’ Progressive Web App uses 99.84% less space compared to its iOS app. Similarly, X’s (formerly Twitter) Progressive Web App is only about 1-3% of the size of its native app.
Offline Capability
Another impressive capability of native apps is the possibility of working offline. In many web apps, users can still access some basic information and app functionality even when they’re not connected to the internet. Many modern PWAs are starting to borrow a leaf from this as well and may work offline using cached data.
This allows the app to display certain parts of the app to users until the device can be connected to a network. However, anything that isn’t part of the web page’s caching system cannot be displayed until connectivity is restored.
[Image: PWA vs native comparison chart]
What to Choose? PWA vs Native Pros and Cons
To choose between a Progressive Web App and a native app, you’ll have to weigh the advantages of PWA over native apps and vice versa. Each of these applications has its unique benefits and downsides as summarized in the section above. Which of these options to go for depends on what’s most important to you as a developer.
For instance, if you’re on a limited budget or you want to roll out an app quickly, Progressive Web Apps might be ideal for you. However, if performance and platform compatibility are a big deal for you, then a native app might be a better option. Here’s a summary of the pros and cons of Progressive Web Apps and native apps.
Advantages of PWA
• Better Access: A Progressive Web App works on any device with a web browser.
• Faster development and lower cost: Only one app needs to be built and it leverages existing web technologies which makes development quicker and cheaper compared to native counterparts.
• No installation required: This makes it easier for users to access the app. Not installing any app package also means the app doesn’t take up much space on the user’s device.
• Easier maintenance: Progressive Web Apps require less maintenance and are easier to update.
• No app store restrictions: Avoids cumbersome app store approval processes and potential rejections.
You can learn more about the pros and cons of Progressive Web Applications in our detailed article about PWA benefits
Disadvantages of PWA
• Limited functionality: Since PWAs don’t have access to native device capabilities, they’re often limited. This may make this option unsuitable for building complex apps.
• Trust issues: PWAs are not scrutinized by native app stores, which raises issues about their reliability.
Advantages of Native
• Rich features and functionality: Native apps get full access to native device features such as the camera, GPS, app notifications, and so on, delivering a powerful and seamless user experience.
• Better performance: Native apps deliver smoother and faster performance compared to PWAs.
• Enhanced security: App stores have stringent measures in place to ensure only the most secure native apps are rolled out.
• App Store discoverability: Users can easily find native apps through the App Store search.
• Better personalization: Many native apps can collect user data using sensors and other hardware features to personalize user experience.
Disadvantages of Native
• Access is limited to native mobile devices: Requires separate development for different operating systems (iOS and Android). Native apps also require installation on your device.
• Development costs and time: Native apps cost more to build. The development process is also time-consuming and may require different teams which makes it unsuitable for businesses with limited resources.
• Cumbersome app store approval process: Needs to go through app store approval which can add time and potential for rejection.
Conclusion – Comparison of PWA Apps vs Native Apps
Considering all the benefits and downsides of Progressive Web Apps and native applications, it’s difficult to say which of these apps works better. However, Progressive Web Apps are often the preferred choice for many companies because of the ease of access they offer users. These apps are not limited to a specific operating system since they run within the web browser of any device.
Another major benefit of Progressive Web Apps is how easy and cheap the PWA development process can be. You can learn more about the process of building a PWA and get started with building your custom mobile app by partnering with reliable web app developers like CrustLab.
Whether it’s an e-commerce app, business application, or any other type of mobile app, CrustLab’s mobile app development services will help you develop top-quality Progressive Web Apps to elevate your brand’s recognition and online presence. Contact us today to schedule a free consultation and get started with PWA mobile app development for your business.
FAQ
01. Why is a PWA better than a native app?
The main benefit of a Progressive Web App over a native app is that PWAs are built to work across various operating systems and mobile devices. They run in the web browser, which means you can open a PWA whether you’re using an iOS, Android, or Windows device. The cross-platform compatibility of Progressive Web Apps also makes them easier to build. Unlike native apps where you have to build for multiple operating systems, you’ll only have to build one progressive app for all devices, making them faster and cheaper to build.
02. Can PWA replace native?
Progressive Web Apps offer several benefits over native apps. They’re accessible across various mobile platforms thanks to their no-installation design and are generally easier to build and distribute. However, this does not mean they are better than native apps or that they can replace them. Native apps deliver better performance and a smoother experience for mobile users.
03. What is the difference between a PWA and a native app?
The essential difference between a Progressive Web App and a typical native app is in how they’re built. A progressive app is written with web technologies like HTML, CSS, and JavaScript. Consequently, they work like conventional websites and run directly inside a web browser. A native app is built with the native programming language of a specific operating system. They have to be installed on the device and are designed to have full access to the features and hardware of the mobile operating system on which they have been built.
04. Should I build a PWA or a native app?
To decide whether you should build a Progressive Web App or a native app, you have to determine which option would be the perfect fit for your specific needs. If you want an app that will be accessible across different platforms, then a progressive app would be ideal for you. However, if your priority is performance and access to device features, then a native app would be better. You also have to consider the cost, timeline, and resources you are willing to dedicate to your project.
05. How do I convert PWA to the native app?
The easiest way to convert a Progressive Web App to a native app is to wrap the PWA in a native app container. Tools like Cordova or Capacitor allow you to package the code and JavaScript files for your PWA within a native app shell so it can deliver some native app functionalities. However, the app’s core logic will still be based on the original web code. The alternative option is to rebuild the app from scratch with native code. This is a more cost and time-intensive approach since it means you’ll have to rewrite the app’s code base using native programming languages for each platform. But you’ll get a smooth-functioning native application this way.
5 releases
0.1.5 Mar 27, 2022
0.1.4 Mar 18, 2022
0.1.3 Mar 3, 2022
0.1.2 Feb 23, 2022
0.1.1 Feb 23, 2022
#8 in #sniffing
MIT/Apache and LGPL-3.0
19KB
425 lines
Nets is a Rust crate for accessing the packet sniffing capabilities of pcap. It is built on rust-pcap/pcap.
Features:
• List Devices
• parse http request/response
• display http header information
Depends:
• rust-pcap
• http
• Linux/MacOSX libpcap, Windows WinPcap
License:
• "MIT OR Apache-2.0"
Install
git clone https://github.com/asmcos/nets
cd nets
cargo build
Demo
Ok(ParsedPacket { len: 0, timestamp: "", headers: [Tcp(TcpHeader { source_port: 50683, dest_port: 443, sequence_no: 286770016, ack_no: 0, data_offset: 11, reserved: 0, flag_urg: false, flag_ack: false, flag_psh: false, flag_rst: false, flag_syn: true, flag_fin: false, window: 65535, checksum: 14832, urgent_pointer: 0, options: None }), Ipv4(IPv4Header { version: 4, ihl: 20, tos: 0, length: 64, id: 0, flags: 2, fragment_offset: 0, ttl: 64, protocol: TCP, chksum: 11203, source_addr: 192.168.1.5, dest_addr: 12.27.16.10 }), Ether(EthernetFrame { source_mac: MacAddress([0, 116, 111, 112, 113, 122]), dest_mac: MacAddress([20, 113, 18, 15, 0, 10]), ethertype: IPv4 })], remaining: [] })
• http parse
Ok(Complete(330)),Request { method: Some("GET"), path: Some("/js/polyfill.min.js?features=es6"), version: Some(1), headers: [Header { name: "Host", value: "rustai.cn" }, Header { name: "Connection", value: "keep-alive" }, Header { name: "User-Agent", value: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.83 Safari/537.36" }, Header { name: "Accept", value: "*/*" }, Header { name: "Referer", value: "http://rustai.cn/" }, Header { name: "Accept-Encoding", value: "gzip, deflate" }, Header { name: "Accept-Language", value: "zh-CN,zh;q=0.9" }] }
Dependencies
~3–4.5MB
~87K SLoC
Friday, August 27, 2010
Fluent Assertions 1.2.3 released
After a week of testing in some of our projects, it is time to remove the beta marker from release 1.2.3 of our fluent assertion framework, Fluent Assertions. It’s just a small release, as was the previous one, but it still adds some nice additions requested by the community. This is the official list of changes:
• Added an As<T>() extension method to downcast an object in a fluent way.
• Added a ShouldNotThrow() and ShouldNotThrow<TException>() extension method for asserting that a (particular) exception did not occur.
• Fixed a NullReferenceException when comparing the properties of an object to the null-valued properties of another.
• Fixed an IndexOutOfRangeException when comparing an empty collection with another for equality.
• Fixed a bug where two equivalent collections that contained duplicate items were not seen as equivalent
• Minor improvements to the verification messages.
• Refactored the internal structure of the exception-related assertions, and removed an extra layer of inheritance. Should improve the extensibility.
As usual, you can download Fluent Assertions 1.2.3 from its resident CodePlex site.
Wednesday, August 25, 2010
ALM Practices Part 11: Modeling the business domain using Domain Models
What is it?
A domain model is typically depicted by a UML class diagram in which the classes and associations represent the business concepts and the relationships between them. Although you can use a piece of paper to draw up a domain model, in most cases a UML case tool is better suited for that. Business concepts such as orders, customers or contracts are represented by classes stereotyped as entity. Associations are used to illustrate the roles which that entity fulfills in the relationship with other entities. An example of a domain model representing a recipe registration application might look like this:
[Class diagram omitted: example domain model for the recipe registration application]
Why would you do it?
• Because a domain model centralizes the concepts and behavior of the business domain at one place
• Because while creating it, you'll run into many new important questions on both the static and dynamic aspects of the domain, including constraints, relationships, the business process, roles and responsibilities
• Because it can be used to verify that the developers and other team members have the same view on the domain as the business people have.
What’s the minimum you need to do?
• You should clearly communicate that the domain model should not be confused with a data model. It does not imply anything about the database schema and should not contain specific decisions or changes because of that database.
• Make sure all entities, association roles, attributes, operations and other documentation comply with the Ubiquitous Language.
• Write in your native language, unless company policy says otherwise.
• Avoid bidirectional associations. They make understanding the life cycle of related entities more difficult and introduce technical complexity that ripples throughout the entire system. I also found that bidirectional associations reduce the obviousness of the user interface, because a bidirectional relationship clouds the parent-child obviousness.
• Always use directional associations. Base those on the conceptional relationship between the entities, not on the way you intend to query the domain for presentation purposes.
• Always add roles to your associations. They may seem obvious, but by thinking about the role, you may find misinterpretations or an opportunity to get more insights in its purpose.
• Don't specify a multiplicity of 1, but always specify the others explicitly.
• Use a composite association when the child's lifecycle is dictated by the parent (like the Recipe-Ingredient association).
• Don't include any technical terms or .NET-specific types such as string or integer. Use text, number, money or another type from the Ubiquitous Language instead.
• Make sure that the individual class models are small and do not show more than what the model is intended for.
• If an entity has a status attribute, carefully consider what impact each status has on the editability and associations of that entity. If these constraint become very complex, consider modeling the State Pattern or choose a design in which every state is represented by a dedicated subclass of the entity.
• Do not include mutually exclusive associations. If you need these, your entity is probably representing multiple conflicting concepts and should be split up into multiple entities. Again, if these are related to a status attribute, see the previous remark.
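The State pattern mentioned above can be sketched briefly. This is a minimal, hypothetical Python illustration using the recipe domain; the class names and rules are invented for the example, not taken from the model:

```python
# A minimal, hypothetical sketch of the State pattern for an entity whose
# editability depends on its status.
class RecipeState:
    def can_edit(self) -> bool:
        raise NotImplementedError

class Draft(RecipeState):
    def can_edit(self) -> bool:
        return True               # drafts may be freely edited

class Published(RecipeState):
    def can_edit(self) -> bool:
        return False              # published recipes are read-only

class Recipe:
    def __init__(self, name: str):
        self.name = name
        self.state: RecipeState = Draft()

    def publish(self):
        self.state = Published()  # state transition

    def rename(self, new_name: str):
        # The constraint lives in the state object, not in if/else chains.
        if not self.state.can_edit():
            raise PermissionError("recipe is not editable in this state")
        self.name = new_name
```

Each status becomes its own class, so adding a new status (or new status-dependent rules) no longer means touching every method of the entity.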
What’s the usual thing to do?
• Group related entities that change together, have a dependent lifecycle or form a conceptual whole into a so-called aggregate and choose one entity as the entity that controls access to the others. This is called the aggregate root.
• My rule of thumb for finding the right aggregate roots is to create a diagram that only contains the aggregate roots. That diagram should include the most important concepts of your domain and clearly illustrate its dependencies.
• Include detailed constraints on the maximum length of text attributes, ranges of numbers and dates.
• If such constraints tend to repeat themselves for the same kind of data (e.g. an email address, ISBN number, credit card number or currency), consider introducing new classes representing domain-specific primitives with the associated constraints. Stereotype these as a value object and use them instead of primitive types. For instance, the above domain model relies on the following value objects.
[Diagram omitted: value objects used in the example domain model]
• If you find that the same set of entities have completely different meanings and dynamics depending on the department or organizational unit you ask, and you don't see a chance to get them aligned, you may need to introduce different domain models per organizational unit. This practice is often referred to as having more than one bounded context. In other words, different organizational units require different interpretations of the same domain concepts.
• Consider reading InfoQ's Domain Driven Design Quickly to get a quick-start into the world of Eric Evans' Domain Driven Design.
• Consider using Sparx’s UML case tool Enterprise Architect since it can generate Microsoft Word documents and a HTML version of the models, making it easy to share the models with the business people.
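The value-object idea from the list above can also be shown in code. Here is a minimal, hypothetical Python sketch of an email-address value object; the validation rule is deliberately simplistic and illustrative, not taken from the post:

```python
import re

class EmailAddress:
    """A domain-specific primitive: an immutable value object that
    enforces its own constraints on construction."""
    _PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # simplistic rule

    def __init__(self, value: str):
        if not self._PATTERN.match(value):
            raise ValueError(f"not a valid email address: {value!r}")
        self._value = value

    @property
    def value(self) -> str:
        return self._value

    # Value objects compare by value, not by identity.
    def __eq__(self, other):
        return isinstance(other, EmailAddress) and self._value == other._value

    def __hash__(self):
        return hash(self._value)

address = EmailAddress("cook@example.com")   # ok
# EmailAddress("not-an-email")               # would raise ValueError
```

Because the constraint lives in one place, every entity that holds an EmailAddress gets the validation for free instead of repeating it per attribute.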
Tuesday, August 10, 2010
ALM Practices Part 10: Work Item Tracking
What is it?
Using Team Foundation Server’s User Story, Task and Bug work item types as the central unit of work for all the activities done within a project.
Why would you do it?
• Because it adds traceability between the functional requirements and the way these requirements have been realized on a technical level.
• Because it introduces a common view of the project between developers and the business people.
• Because it allows you to keep track of why certain code changes were made.
• Because it creates an automatic link between the work items representing functional changes, bug fixes and other code changes and the automatic builds.
• Because it reduces the chance of ending up with loose ends.
What’s the usual thing to do?
• Track each and every activity that needs to be done in a project as a work item.
• Use that work item as a central placeholder for all related communication, code changes, discussions, etc.
How do you do that?
• Register all functional and non-functional requirements as User Story work items and all bugs as Bug work items.
• Also include non-coding related activities such as deployment activities, writing documentation, assisting the marketing department and such. This improves the transparency of the work that is done in a project.
• Break these down into one or more linked Task work items and use these for each and every source control check-in.
• If you are faced with a research task which is difficult to estimate, consider using a Spike with a time-boxed amount of work.
• Never check in anything without associating it with a task.
• Never check in anything without providing a clear description of the set of changes (as Check-In Comments).
• Use Check-In Policies to require developers to provide the comment and the associated work item(s).
• Rigorously file all loose ends and ‘things to do at some time’ as a Task, preferably linked to the associated Bug or User Story.
• Consider adding a custom Storyotype field to the User Story work item type to differentiate between functional and non-functional requirements and to assist in scoping the User Stories.
Tuesday, August 03, 2010
ALM Practices Part 9: Ubiquitous Language
What is it?
The Ubiquitous Language is a unified language consisting of verbs and nouns from the business domain of an enterprise system and should be used by all team members, stakeholders and other involved persons.
Why would you do it?
• Because it creates a bridge of understanding between the developers and the business people.
• Because it forces different stakeholders (possibly working in different departments) to use the same name for the same concept.
• Because it improves the readability of the domain model for both developers and business people.
• Because it improves traceability from functional design, through the user interface all the way into the details of the code base.
What’s the usual thing to do?
• Maintain a list of verbs and nouns from the domain and promote its usage during verbal communication, in documentation, in user interfaces, and in the entire codebase.
How do you do that?
• Let the business people propose a proper verb or noun for the concept.
• Be aware of the fact that different stakeholders may use the same name for different concepts, or different names for the same concepts. Make sure they choose a single unique name for each individual concept.
• If a name conflicts with a common engineering concept and serious confusion is expected, consider proposing an alternative name.
• Unless it is company policy to communicate in English, write your domain model in the native language of the business stakeholders.
• If your policy is to write code in English and the Ubiquitous Language is in your native language, create a glossary that maps the verbs and nouns from the native domain model into the English translations. Be very pro-active in enforcing these specific translations, because from experience I know developers have a tendency for coming up with alternative translations.
• Be aware of names that are often used interchangeably but represent the same concept. Force a single word for it.
• Be aware of concepts with multiple intentions. They may be different things and require different entities in your domain model.
• Also be aware of concepts that appear to have mutually exclusive business rules in different circumstances. In most cases, introducing different concepts (and thus different entities) can prevent significant maintenance burden and confusion.
• If, throughout the project, you come up with a better name for a concept, make sure you adapt your entire code base, functional documentation and the application itself. If you don’t, you end up with many different names for the same concept, causing significant confusion in the project.
[April-2021]Valid NSE7_SDW-6.4 PDF Dumps and NSE7_SDW-6.4 VCE Free Download in Braindump2go[Q11-Q27]
April/2021 Latest Braindump2go NSE7_SDW-6.4 Exam Dumps with PDF and VCE Free Updated Today! Following are some new NSE7_SDW-6.4 Real Exam Questions!
QUESTION 11
Which components make up the secure SD-WAN solution?
A. FortiGate, FortiManager, FortiAnalyzer, and FortiDeploy
B. Application, antivirus, and URL, and SSL inspection
C. Datacenter, branch offices, and public cloud
D. Telephone, ISDN, and telecom network
Correct Answer: A
QUESTION 12
Refer to the exhibit.
Which two statements about the status of the VPN tunnel are true? (Choose two.)
A. There are separate virtual interfaces for each dial-up client.
B. VPN static routes are prevented from populating the FortiGate routing table.
C. FortiGate created a single IPsec virtual interface that is shared by all clients.
D. 100.64.3.1 is one of the remote IP address that comes through index interface 1.
Correct Answer: CD
QUESTION 13
Refer to exhibits.
Exhibit A shows the SD-WAN rules and exhibit B shows the traffic logs. The SD-WAN traffic logs reflect how FortiGate processed traffic.
Which two statements about how the configured SD-WAN rules are processing traffic are true? (Choose two.)
A. The implicit rule overrides all other rules because parameters widely cover sources and destinations.
B. SD-WAN rules are evaluated in the same way as firewall policies: from top to bottom.
C. The All_Access_Rules rule load balances Vimeo application traffic among SD-WAN member interfaces.
D. The initial session of an application goes through a learning phase in order to apply the correct rule.
Correct Answer: AB
QUESTION 14
What are the two minimum configuration requirements for an outgoing interface to be selected once the SD-WAN logical interface is enabled? (Choose two.)
A. Specify outgoing interface routing cost.
B. Configure SD-WAN rules interface preference.
C. Select SD-WAN balancing strategy.
D. Specify incoming interfaces in SD-WAN rules.
Correct Answer: AB
QUESTION 15
Refer to the exhibit.
Based on the exhibit, which statement about FortiGate re-evaluating traffic is true?
A. The type of traffic defined and allowed on firewall policy ID 1 is UDP.
B. Changes have been made on firewall policy ID 1 on FortiGate.
C. Firewall policy ID 1 has source NAT disabled.
D. FortiGate has terminated the session after a change on policy ID 1.
Correct Answer: B
QUESTION 16
What are two reasons why FortiGate would be unable to complete the zero-touch provisioning process? (Choose two.)
A. The FortiGate cloud key has not been added to the FortiGate cloud portal.
B. FortiDeploy has connected with FortiGate and provided the initial configuration to contact FortiManager.
C. FortiGate has obtained a configuration from the platform template in FortiGate cloud.
D. A factory reset performed on FortiGate.
E. The zero-touch provisioning process has completed internally, behind FortiGate.
Correct Answer: AE
QUESTION 17
Which two statements reflect the benefits of implementing the ADVPN solution to replace conventional VPN topologies? (Choose two.)
A. It creates redundant tunnels between hub-and-spokes, in case failure takes place on the primary links.
B. It dynamically assigns cost and weight between the hub and the spokes, based on the physical distance.
C. It ensures that spoke-to-spoke traffic no longer needs to flow through the tunnels through the hub.
D. It provides direct connectivity between all sites by creating on-demand tunnels between spokes.
Correct Answer: CD
QUESTION 18
Refer to the exhibit.
Based on output shown in the exhibit, which two commands can be used by SD-WAN rules? (Choose two.)
A. set cost 15.
B. set source 100.64.1.1.
C. set priority 10.
D. set load-balance-mode source-ip-based.
Correct Answer: CD
QUESTION 19
Refer to the exhibit.
Which two statements about the debug output are correct? (Choose two.)
A. The debug output shows per-IP shaper values and real-time readings.
B. This traffic shaper drops traffic that exceeds the set limits.
C. Traffic being controlled by the traffic shaper is under 1 Kbps.
D. FortiGate provides statistics and reading based on historical traffic logs.
Correct Answer: AB
QUESTION 20
In the default SD-WAN minimum configuration, which two statements are correct when traffic matches the default implicit SD-WAN rule? (Choose two.)
A. Traffic has matched none of the FortiGate policy routes.
B. Matched traffic failed RPF and was caught by the rule.
C. The FIB lookup resolved interface was the SD-WAN interface.
D. An absolute SD-WAN rule was defined and matched traffic.
Correct Answer: AC
QUESTION 21
Refer to the exhibit.
Which statement about the trace evaluation by FortiGate is true?
A. Packets exceeding the configured maximum concurrent connection limit are denied by the per-IP shaper.
B. The packet exceeded the configured bandwidth and was dropped based on the priority configuration.
C. The packet exceeded the configured maximum bandwidth and was dropped by the shared shaper.
D. Packets exceeding the configured concurrent connection limit are dropped based on the priority configuration.
Correct Answer: A
QUESTION 22
Refer to the exhibit.
FortiGate has multiple dial-up VPN interfaces incoming on port1 that match only FIRST_VPN.
Which two configuration changes must be made to both IPsec VPN interfaces to allow incoming connections to match all possible IPsec dial-up interfaces? (Choose two.)
A. Specify a unique peer ID for each dial-up VPN interface.
B. Use different proposals between the interfaces.
C. Configure the IKE mode to be aggressive mode.
D. Use unique Diffie Hellman groups on each VPN interface.
Correct Answer: BD
QUESTION 23
Refer to exhibits.
Exhibit A shows the firewall policy and exhibit B shows the traffic shaping policy.
The traffic shaping policy is being applied to all outbound traffic; however, inbound traffic is not being evaluated by the shaping policy.
Based on the exhibits, what configuration change must be made in which policy so that traffic shaping can be applied to inbound traffic?
A. The guaranteed-10mbps option must be selected as the per-IP shaper option.
B. The guaranteed-10mbps option must be selected as the reverse shaper option.
C. A new firewall policy must be created and SD-WAN must be selected as the incoming interface.
D. The reverse shaper option must be enabled and a traffic shaper must be selected.
Correct Answer: B
QUESTION 24
Refer to the exhibit.
What must you configure to enable ADVPN?
A. ADVPN should only be enabled on unmanaged FortiGate devices.
B. Each VPN device has a unique pre-shared key configured separately on phase one.
C. The protected subnets should be set to address object to all (0.0.0.0/0).
D. On the hub VPN, only the device needs additional phase one settings.
Correct Answer: B
QUESTION 25
Which two statements describe how IPsec phase 1 main mode is different from aggressive mode when performing IKE negotiation? (Choose two.)
A. A peer ID is included in the first packet from the initiator, along with suggested security policies.
B. XAuth is enabled as an additional level of authentication, which requires a username and password.
C. A total of six packets are exchanged between an initiator and a responder instead of three packets.
D. The use of Diffie Hellman keys is limited by the responder and needs initiator acceptance.
Correct Answer: BC
QUESTION 26
What are two benefits of using FortiManager to organize and manage the network for a group of FortiGate devices? (Choose two.)
A. It simplifies the deployment and administration of SD-WAN on managed FortiGate devices.
B. It improves SD-WAN performance on the managed FortiGate devices.
C. It sends probe signals as health checks to the beacon servers on behalf of FortiGate.
D. It acts as a policy compliance entity to review all managed FortiGate devices.
E. It reduces WAN usage on FortiGate devices by acting as a local FortiGuard server.
Correct Answer: AD
QUESTION 27
What would best describe the SD-WAN traffic shaping mode that bases itself on a percentage of available bandwidth?
A. Per-IP shaping mode
B. Shared policy shaping mode
C. Interface-based shaping mode
D. Reverse policy shaping mode
Correct Answer: B
Resources From:
1.2021 Latest Braindump2go NSE7_SDW-6.4 Exam Dumps (PDF & VCE) Free Share:
https://www.braindump2go.com/nse7-sdw-6-4.html
2.2021 Latest Braindump2go NSE7_SDW-6.4 PDF and NSE7_SDW-6.4 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1ZF64HYe3ZFxcWI0ZQMeUB0F-CvAjFBqg?usp=sharing
3.2021 Free Braindump2go NSE7_SDW-6.4 Exam Questions Download:
https://www.braindump2go.com/free-online-pdf/NSE7_SDW-6.4-PDF-Dumps(12-27).pdf
https://www.braindump2go.com/free-online-pdf/NSE7_SDW-6.4-VCE-Dumps(1-11).pdf
Free Resources from Braindump2go,We Devoted to Helping You 100% Pass All Exams!
Java if-then and if-then-else statements (translated from Java tutorials)
Source: Internet
Author: User
From http://www.cnblogs.com/ggjucheng/archive/2012/12/16/2820834.html
English from http://docs.oracle.com/javase/tutorial/java/nutsandbolts/if.html
If-then statement
If-then statements are the most basic of all control flow statements. They tell your program to execute a certain section of code only if a particular test evaluates to true. For example, the Bicycle class could allow the brakes to decrease the bicycle's speed only if the bicycle is already in motion. One possible implementation of the applyBrakes method is as follows:
void applyBrakes() {
    // the "if" clause: bicycle must be moving
    if (isMoving) {
        // the "then" clause: decrease current speed
        currentSpeed--;
    }
}
If this test evaluates to false (meaning that the bicycle is not in motion), control jumps to the end of the if-then statement.
In addition, the opening and closing braces are optional, provided that the "then" clause contains only one statement:
void applyBrakes() {
    // same as above, but without braces
    if (isMoving)
        currentSpeed--;
}
Deciding when to omit the braces is a matter of personal taste. Omitting them can make the code more brittle: if a second statement is later added to the "then" clause, a common mistake is forgetting to add the newly required braces. The compiler cannot catch this sort of error; the program will just produce the wrong results.
If-then-else statement
When the "if" clause is calculated as false, the if-then-else statement provides the second path for program execution. You canIn the applybrakes method, the if-then-else statement is used. When the bicycle is not in motion, the system requests the brakes to slow down and execute some actions. In this case, this behavior simply outputs an error message saying that the bicycle has stopped.
void applyBrakes() {
    if (isMoving) {
        currentSpeed--;
    } else {
        System.err.println("The bicycle has " +
            "already stopped!");
    }
}
The following program, IfElseDemo, assigns a grade based on the value of a test score: an A for a score of 90% or above, a B for a score of 80% or above, and so on.
class IfElseDemo {
    public static void main(String[] args) {
        int testscore = 76;
        char grade;

        if (testscore >= 90) {
            grade = 'A';
        } else if (testscore >= 80) {
            grade = 'B';
        } else if (testscore >= 70) {
            grade = 'C';
        } else if (testscore >= 60) {
            grade = 'D';
        } else {
            grade = 'F';
        }
        System.out.println("Grade = " + grade);
    }
}
The output result of the program is:
Grade = C
You may have noticed that the value of testscore can satisfy more than one expression in the compound statement: 76 >= 70 and 76 >= 60. However, once a condition is satisfied, the appropriate statements are executed (grade = 'C';) and the remaining conditions are not evaluated.
How to create simple one way drivers
#1
There was discussion elsewhere about it being too hard to create simple, ad hoc drivers when someone needs to get at least simple support for a device into place, if there's not a driver available currently. Since there are so many devices, this isn't a terribly unlikely scenario.
So, I figured I'd provide a quick overview of how you can go about doing such a thing. It's not that hard, and I doubt it would be much easier in any other product. But of course lots of people have the perception that everything in CQC is extremely complex, so I guess not to many people bother to look into it.
Anyway, my own bitterness aside, there are two ways to approach this.
The Passthrough Driver
[Indent]The simplest way is to use the passthrough driver. Here is the driver documentation, but I'll also provide some higher level guidance here.
http://www.charmedquark.com/Web2/ExtCont...hrough.htm
Basically, this driver allows you to just send through raw commands to the device. So it's very simple but it's the most limited of the schemes available. It has two fields:
• SendBinMsg - Send a command to a device that uses a binary protocol
• SendTextMsg - Send a command to a device that uses a text protocol
Every device will fall into one of these. And of course since text is just numbers as well (with special meaning assigned to those numbers, e.g. 32 is a space, 65 is an upper case A, in the example of ASCII text), you can always use the SendBinMsg if you want to. But usually it's easier to send it as text if the device defines it's commands in terms of text strings, since it's easier for you to read and write.
When you install the driver, it will ask you two questions. One is the text encoding, which is only used if the device accepts text formatted commands. Most of the time the encoding will be ASCII, and that is the default but you can pick another. This will allow the driver to convert the text into the appropriate numbers, via the type of predefined mapping I mentioned above, and send those numbers to the device, in the form of a sequence of bytes.
The other prompt, also for text style devices only, is the extra characters that need to be added to the text messages, which it refers to as the 'text decoration'. There has to be some way for the device to recognize a message. It is most common for there to be two schemes. One is a start/stop scheme, where two special characters are used as the first and last character of the message. The other is an end line type scheme, where one or two special characters indicate the end of a command. These are often referred to by us geeks as 'delimiters', because they indicate the limits of the message.
The driver supports some very commonly used schemes. If your device uses text messages with one of those types of message delimiting schemes, you should select that scheme. If your device uses one not supported, you will have to send your messages as binary, so that you can support the required delimiter scheme yourself. But most devices will use one of the supported ones.
So, let's say, to take an example from a real device, the command to power on an A/V receiver zone is:
Code:
PWON<CR>
This is a text type control protocol, so you are just sending readable text (or semi-readable since they are usually heavily abbreviated for efficiency.) And, in this case, the delimiting scheme is that the commands end with a carriage return character. So you should have indicated the CR terminated text decoration scheme when you installed the driver.
So, if you want to power on the device, you would do this action command:
Code:
Devices::FieldWrite(MyDenon.SendTextMsg, PWON);
That would send the text string PWON to the driver. It would take that, apply the CR to the end of the string, convert that combined text into a sequence of bytes, and it would send the resulting bytes to the device. All other commands would work similarly. You would look up in the control protocol document the form of the command you want to send, and you would just send that text string.
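What the driver does under the hood can be sketched in a few lines of Python. This is just an illustration of the behavior described above, not CQC's actual code, and the function name is made up:

```python
def build_text_msg(cmd, decoration="\r", encoding="ascii"):
    """Apply the configured text decoration (here a trailing CR) and
    encode the text into the byte sequence sent to the device."""
    return (cmd + decoration).encode(encoding)

# Writing PWON to the SendTextMsg field would send these five bytes:
payload = build_text_msg("PWON")
```

The same sketch works for any of the delimiting schemes: the decoration is just whatever start/stop or end-of-line characters the device expects.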
If the device is binary, i's a little more complicated, but not much so. Instead of sending readable text, you must send the raw numbers. You are still actually typing out the numbers as text of course, but the driver knows in this case that it has to convert the characters not into their ASCII values, but to treat them as written out numbers. So 10 doesn't become the two binary values 31 30 (ASCII 1 and 0 digits), it becomes the binary value 10.
In this case those numbers must be in hexadecimal format, which means base 16. That might sound overly geeky, but in most cases the numbers will be presented in that format in the protocol documentation, and it also means that every byte can be written as exactly two characters, since the range of values a single byte can contain (in hex) is 00 to FF. Any values less than 10 (hex) should be given a leading zero, e.g. 0A. So a command might look like:
Code:
50 0A FF 11 13
The driver will parse this string, and convert every two character block into a binary number. It will then send those numbers as is to the device. So the above command would cause five bytes to be sent to the device. If there is any sort of delimiter scheme used, you have to just write it into the value you send. But most binary schemes don't use delimiters, because it's difficult to insure that those characters wouldn't also show up as actual command values. So the above command becomes:
Code:
Devices::FieldWrite(MyDenon.SendTextMsg, 50 0A FF 11 13);
That's it. Basically the passthrough driver just gives you a means to send raw commands to the device, so you can do whatever the protocol allows you to do. If you only need a few small commands, it's pretty practical.
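The hex parsing the driver performs can also be modeled in Python. Again, this is just an illustration of the described behavior, with an invented helper name:

```python
def parse_hex_command(text):
    """Convert a space-separated hex string such as '50 0A FF 11 13'
    into the raw bytes that would be sent to the device."""
    return bytes(int(token, 16) for token in text.split())

# The example command above becomes five raw bytes on the wire.
payload = parse_hex_command("50 0A FF 11 13")
```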
[/Indent]
Dean Roddey
Explorans ex terminum defectum
#2
The PDL Language
[INDENT]The passthrough driver is quite useful. It's simple and you can quickly gen up a handful of commands that you might need to get basic control over a device. However it has some big deficiencies:
• You are hard coding device specific commands into automation logic, instead of dealing with meaningful field names like Power, Mute, etc...
• If a command takes a number of possible varitions (such as source input selection, or audio processing mode selection), you have to separately build up the commands for each of them and send them literally.
• You have no way to get back information from the device, i.e. the control is one way only.
The PDL language can get around all of those issues without introducing a lot more complexity. Though, we won't deal with the two way thing here. You can do two way drivers using PDL, but we are looking here at how you can get quick, simple control over devices, so we will concentrate on one way PDL drivers, i.e. outgoing commands only.
Here is an example of a simple, one-way PDL driver. Basically all it has to do is define the fields the driver will support, and then to define what will be sent out to the device when one of those fields is written to:
Code:
[CQCProto Version="2.0" Encoding="ISO-8859-1"]
ProtocolInfo=
TextEncoding="ASCII";
ProtocolType="OneWay";
EndProtocolInfo;
Fields=
Field=Power
Type=Boolean;
Access=Write;
EndField;
EndFields;
WriteCmds=
WriteCmd=Power
Send=
BoolSel(&WriteVal, "PWON", "PWOFF");
"\r";
EndSend;
EndWriteCmd;
EndWriteCmds;
As you can see there is not much to it. There is an opening line that you would use as is. In this case, the protocol is text based, so we indicate that the text encoding is ASCII. And we indicate it is a one way protocol, which means that we only have to provide the two sections mentioned above.
In the Fields= section, we define a field named Power. It is a boolean field, so it has True/False values, and it is write only (they would all always be write only in a one-way driver.) In the WriteCmds= section, we indicate what to send to the device when each field is written to. There is a WriteCmd= block for each field. Inside that is a Send= block to indicate what to send. In a two way device there would also be a block to indicate what replies to expect, but we don't deal with that here.
The Send= block is just a sequence of values to concatenate together and send. Since this is an ASCII device, it's just text strings to send. In our case, we want to send PWON if the user writes True to the field and PWOFF if the user writes False. So we use a simple 'expression' called BoolSel, which just takes a boolean value and based on that returns one of two other values. In our case we pass it &WriteVal, which is the value that was written to the field. So, if the user writes True, it returns PWON, else it returns PWOFF. We then on the next line indicate we want to add a \r, which represents a carriage return.
The driver evaluates any expressions, then builds up the final string by concatenating all the values (in order) together, then converts the string to the equivalent byte values and sends them out.
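That evaluate-then-concatenate-then-encode sequence can be modeled in Python. This is a sketch of the semantics only, not the PDL runtime itself:

```python
def bool_sel(write_val, if_true, if_false):
    """Model of the PDL BoolSel() expression."""
    return if_true if write_val else if_false

def build_power_msg(write_val):
    """Evaluate the Power field's Send= block: the BoolSel result and
    the trailing CR are concatenated, then encoded to ASCII bytes."""
    return (bool_sel(write_val, "PWON", "PWOFF") + "\r").encode("ascii")
```

So writing True to the field yields the bytes for PWON plus the CR, and writing False yields PWOFF plus the CR.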
You can see that this is not that much more complicated than the passthrough driver. But it means you are now dealing with real driver fields, and you don't have to have separate commands for every possible variation, such as off and on in this case. You have one field that takes the value written and incorporates it into the message to send.
Once you have your file written, which should be named along the lines of MakeModel.CQCProto, so something like MyDevice.CQCProto, you can then test it out. For now it doesn't matter where you save it.
You will also need to create a manifest file for the driver. This file tells CQC about the driver, what type of device connection it uses and so forth. Here is a minimal example that would be fine for the above driver, assuming a socket connection:
Code:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE CQCCfg:DriverCfg PUBLIC
"urn:charmedquark.com:CQC-DrvManifest.DTD"
"CQCDriverCfg.DTD">
<CQCCfg:DriverCfg>
<!-- Define the server driver config -->
<CQCCfg:ServerCfg CQCCfg:LibName="MyDevice"
CQCCfg:DrvType="GenProto"/>
<!-- Define the common driver config -->
<CQCCfg:CommonCfg CQCCfg:Author="Bubba Jones"
CQCCfg:Contact="[email protected]"
CQCCfg:Description="Simple, one way driver for
my device"
CQCCfg:DisplayName="My Device"
CQCCfg:Category="Miscellaneous"
CQCCfg:Make="MyStuff"
CQCCfg:Model="MyDevice"
CQCCfg:Version="1.0"/>
<CQCCfg:ConnCfg CQCCfg:Editable="True">
CQCCfg:Port="5780" CQCCfg:SockProto="SockProto_TCP"/>
</CQCCfg:ConnCfg>
</CQCCfg:DriverCfg>
The manifest is a little technical, so if you need help, just ask on the forum. It's not too hard to set up though once you've done one. Basically you define the server side driver, by indicating the name (MyDevice as we named it previously) and indicate it's a generic protocol (PDL) driver. Then there's some descriptive info that is displayed to the user when he selects the driver. Then there is information describing the serial port or socket connection info, as relevant. Save this using the same base name as the PDL file, so in our case MyDevice.Manifest. For now, it doesn't matter where you save it.
Once you have the manifest, you can test it by running the "Develop PDL Drivers" option from the Windows start menu. Use the Session -> New Session menu option to start a new session. Open up your manifest file. It will then prompt you for the CQCProto file as well, so select that. You will then go through the standard driver installation wizard to set up the driver, just as you would when you normally load it. If there are errors in your manifest file it will tell you the line and describe the error. So edit it and save it and try to load it again until you get it happy. Ask on the forum if you can't figure it out.
You can now use the Start button to start the driver running. If there are any errors in your PDL file, it will complain when you press Start, and tell you the line where the error occurred. Make the required changes and save the file and press Start again. Iterate this process until you get no more errors.
Once there are no errors, the driver will start running. Since it's a completely one way driver, it won't do anything until you write a field. You will just see that it went through the usual driver steps of initializing, getting its comm resource (opening the socket connection in this case), and got connected to the device, like this:
[Image: PDLDebug1.png]
So let's test it by changing a field. First, make sure the "Send Bytes" check box is checked. That will make it show you the outgoing bytes you are sending. Then select a field and press the Change Fld... button. Enter a new value to send, and press Save. If it works, the debugger will show you the bytes you sent. It doesn't show the text, it actually shows the raw bytes that the text was converted into, so that you can be sure that you are seeing exactly what was sent.
[Image: PDLDebug2.png]
You should also see that the device responded correctly. If not, something is wrong with your command, so check the bytes shown in the output to make sure they are correct. If not, press Stop to stop the session, edit your PDL file, then press Start to start it up again and try again until you get the desired result.[/INDENT]
Dean Roddey
Explorans ex terminum defectum
#3
[indent]
continued...
Adding more fields is much more of the same. In order to handle some field types you will want to learn more about the available PDL expressions, which are in the Driver Development technical document on the web site. One common thing you will need to do is map a readable text value to some device specific value. The most common reason you do this is for enumerated fields. So for instance, the device has a playback command whose values are PB1, PB2, and PB3, for run, stop and pause. You wouldn't want to make the user deal with those heavily abbreviated values. You would want to let the user deal with human readable values, like Run, Stop and Pause. That means your field will get those values written to it, but it must map them to the device specific values. To deal with this, PDL provides a mapping mechanism. Here is the PDL driver updated to support such a field.
Code:
[CQCProto Version="2.0" Encoding="ISO-8859-1"]
ProtocolInfo=
TextEncoding="ASCII";
ProtocolType="OneWay";
EndProtocolInfo;
Maps=
// A map from the readable playback values to the device's values
Map=PlaybackMap
Type=Card1;
Items=
Item="Run" , 49;
Item="Stop" , 50;
Item="Pause" , 51;
EndItems;
EndMap;
EndMaps;
Fields=
Field=Power
Type=Boolean;
Access=Write;
EndField;
Field=Playback
Type=String;
Access=Write;
Limits="Enum: Run, Stop, Pause";
EndField;
EndFields;
WriteCmds=
WriteCmd=Power
Send=
BoolSel(&WriteVal, "PWON", "PWOFF");
"\r";
EndSend;
EndWriteCmd;
WriteCmd=Playback
Send=
"PB";
MapTo(PlaybackMap, &WriteVal);
"\r";
EndSend;
EndWriteCmd;
EndWriteCmds;
Here you see that we have added a Maps= section, and within that we defined a map. A map has an entry for each pair of values you want to be able to map between. So we need one each for Run, Stop and Pause, which map to the values 49, 50, and 51 respectively (the ASCII values for 1, 2, and 3.) In the Playback field's Send block, we use the MapTo() expression to map the value written to the field, using the PlaybackMap map. So writing Pause to the field will result in the string "PB3\r" being sent to the device (which in ASCII maps to the hex byte values "50 42 33 0D"), and we should see that in the debug output.
[Image: PDLDebug3.png]
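The map lookup can be modeled the same way in Python. This is an illustration only; note that the mapped value is the ASCII code of the digit the device expects:

```python
playback_map = {"Run": 49, "Stop": 50, "Pause": 51}

def map_to(mapping, write_val):
    """Model of the PDL MapTo() expression: look up the device value
    for the human readable value written to the field."""
    return chr(mapping[write_val])  # 49/50/51 are ASCII '1'/'2'/'3'

def build_playback_msg(write_val):
    # "PB" literal, the mapped digit, a trailing CR, encoded to ASCII
    return ("PB" + map_to(playback_map, write_val) + "\r").encode("ascii")

msg = build_playback_msg("Pause")   # b'PB3\r'
hex_bytes = msg.hex(" ").upper()    # '50 42 33 0D', as in the debug output
```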
Once you are happy with your driver, use Tools -> Package Driver to package up the driver. You can then import this package file and your driver will show up in CQC. Any time you want to make changes, just work with your local copies of the PDL/Manifest files and make the desired changes. Package them up again and import the package. The new files will overwrite the previous ones. Do a Reconfigure on the driver to pick up the changes.
[/Indent]
Dean Roddey
Explorans ex terminum defectum
Boomerange - 3 months ago
Javascript Question
How to initiate Gmaps in hidden element
I have one element (#hidden-element) which is hidden by default. When I click on a button (#btn-toggle) I want to make this element visible. For now, everything is fine. The element really shows up, but if I click on the button for the first time, the map won't show up. Then I click to hide the element, then again to show the hidden element for the second time, and now the map is here.
So, my question is, how can I be sure that the map will show up the first time (I think I have to initialize the map or something like that), and can I somehow destroy the map object? And is destroying the map object even necessary?
$(document).ready(function(){
// function for showing the map with markers - gmaps4rails gem
function showMap(){
var handler = Gmaps.build('Google');
handler.buildMap({ internal: {id: 'multi_markers'}}, function(){
var markers = handler.addMarkers([
{ lat: 43, lng: 3.5},
{ lat: 45, lng: 4},
{ lat: 47, lng: 3.5},
{ lat: 49, lng: 4},
{ lat: 51, lng: 3.5}
]);
handler.bounds.extendWith(markers);
handler.fitMapToBounds();
});
}
// this will hide the element when document is ready
$("#hidden-element").hide();
// if I click on button with ID btn-toggle, element will shows up or hide
$("#btn-toggle").click(function(){
if ($('#hidden-element').is(':hidden')){
// is it necessary to have a condition and check if the element is hidden or visible?
} else if ($('#hidden-element').is(':visible')){
// element is hidden? Show it!
showMap();
}
$("#hidden-element").toggle(1000);
});
});
Answer
So it is not an error in the library. I think Gmaps4Rails doesn't know the precise position of the element while it is still animating into view. You have to ensure that the element is fully visible and then show the map:
$("#hidden-element").toggle(1000).promise().done(function(){
  if ($('#hidden-element').is(':visible')){
    showMap();
  }
});
This basically means the map will load after the element is fully visible and has taken its place on the screen.
reactive swift example
Alibabacloud.com offers a wide variety of articles about reactive Swift examples; you can easily find reactive Swift example information here online. Articles in this collection include:
• A web backend example built with Swift on Ubuntu ("Getting Started with Swift"): a simple HTTP server that listens for incoming connections on port 9080 and sends a "Hello World" message back to each client.
• A highly adaptive TableView cell in Swift: a cell whose height adjusts to its content using Auto Layout constraints.
• How to write a singleton in Swift: unlike Objective-C, Swift offers several ways to write a singleton; when a .swift file contains more than one class, declaring a static shared property is simpler than the dispatch_once approach.
• The singleton design pattern in Swift: a "static let shared = SwiftSingleton()" property guarantees that only one instance is ever created.
• Constructor rules in Swift inheritance: subclasses do not inherit the parent class's constructors by default; constructor overloads follow the constructor-chain rules, and constructors are inherited automatically only under specific conditions.
• A multi-thread safety lock example for iOS development in Swift: to safely access the same resource from different threads, access must be serialized; in Objective-C the most common approach is @synchronized.
• A custom loading progress bar component in Swift: encapsulating commonly used functionality into reusable components keeps code concise, and custom components can also be used from a storyboard.
• File download and resumable (breakpoint) download with the Alamofire HTTP library in Swift, including saving a downloaded file to a custom directory under the user's Documents directory.
• Advanced Swift generics syntax: generics increase the expressiveness of the language and reduce redundancy, but the syntax can become confusing for complex implementations.
• [Swift] Day 18 & 19: a simple example app that shows three small animals in a UITableView, persists data with NSUserDefaults, and loads more data from Wikipedia in a UIWebView.
• A TableView cell with a fixed image width and adaptive height in Swift, including loading the images from the network instead of locally.
• A QQ login interface layout example in Swift: a QQ number field with a number pad keyboard and a password field with placeholder text.
• A Swift singleton example: useful for instances that should only be initialized once, such as an audio player.
• Managing an FMDB database in Swift with a singleton manager class.
• Loading new data when a TableView scrolls to the last row (pull-up to load more) in Swift, complementing pull-down to refresh.
• A heap sort implementation in Swift: build the heap from the elements, then repeatedly swap the root node with the last node and re-adjust the remaining heap; the overall time complexity is O(n log n).
2016-03-05
Can a computer (or human) simulate itself?
A virtual machine is a common example of simulation. It can take the description of a (virtual) computer state and continue the execution from that point on. You can simulate a huge computer given a computer with more memory than the virtual machine, although it might be much slower. However, such a virtual machine does not answer the problem, because it normally simulates a computer with less memory.
The question is, can a computer simulate a virtual computer that has the same capabilities as itself? Before answering this, let's exclude some trivial (forbidden) cases of "simulations":
1. A calculator will always do exactly what it does; it is identical with itself, and we will not call this a simulation of itself.
2. Copying the exact state of another identical computer in a "twin brother" will not be considered simulation.
3. Starting 2 identical computers with the same input will not be considered like one is simulating the other.
Normally you can simulate any system that has discrete states using a bigger computer. Discrete states mean a finite number of logical states, as opposed to physical systems where you might not be able to describe the whole state (think quantum physics) or systems where the expected behavior depends on possible outside influences (think gravity). We don't want to simulate any possible outside influences here (cosmic rays), just the expected behavior according to the system's specifications.
The requirement here is that a separate "simulator" program, running on an identical computer, will receive the state of the "computer to be simulated" and be able to simulate it from that point on. This should be possible for any state that the simulated computer might have. The simulation should be distinct from the computer that is simulated: you should be able to tell that the computer is now simulating a certain program in a virtual computer, as distinct from just running that program. You cannot get extra memory when simulating; you have to store the whole state of the simulated computer inside the simulating computer. After the simulated computer's state is read from an external storage medium, this medium must be removed and cannot be used during simulation.
We are not concerned about simulating what is happening in the actual hardware (transistors, electrons), but just the logical state that is observable by binary output devices. The speed is not of concern here.
Mathematically not...
Intuitively, you need more memory in order to accommodate the state of the computer that you want to simulate and the "simulator" program. However, this does not automatically exclude the possibility.
You can think about storing the simulated computer's state on the hard drive and loading only parts when needed. This would not work, however, as the simulated computer might also have its hard drive full of useful data that needs to be stored.
Actually, any computer can be simulated by a Turing machine, using a Universal Turing Machine. We usually consider computers to be "Turing complete", so they can theoretically simulate any other Turing machine, like another computer. However, there is one big difference: the computer does not have an infinite tape like the Turing machine does. The Universal Turing Machine benefits from an infinite tape, so it can always accommodate enough extra memory to simulate an arbitrarily big other Turing machine (like a computer). However, our simulating computer does not have more memory than ... itself.
Not having more memory than itself is not automatically a proof that a computer cannot simulate itself. For example, you could receive the state of the simulated computer compressed, so you can decompress it on demand. Or the program could somehow be hidden in the hardware specifications and always present in the state of the simulated computer... You can imagine various tricks, like a program that, when executed, is able to print its own source code (a quine). We need to find a better argument than these intuitions.
A sketch of a proof is this: if our computer can simulate any state of itself, it should also be able to simulate itself while it simulates itself, and so on. We can create an infinite series of distinct states that our computer should be able to be in: simulating the simulation of the simulation of the simulation ... *N ... of itself. Our computer has a limited number of states (even if there are many of them), so it cannot accommodate an infinite series of nested simulations. Therefore, a finite-memory computer will not be able to simulate all the functionality of... itself.
Practically, almost yes
If we add a little more memory, just enough to store the simulator program, it is, however, possible to simulate a computer, the same as we do with virtual machines. Similarly, you can simulate any computer functionality that does not use a small region of its memory for the program. In this case the series of nested simulations will have bigger and bigger memory for the simulator, or smaller and smaller memory for the simulated computer.
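To make the practical point concrete, here is a small illustrative sketch (in JavaScript, with made-up numbers): if every nesting level must reserve some fixed amount of memory for the simulator program, the depth of nested simulations a finite machine can hold is always finite.

```javascript
// Illustrative only: "memory" is modeled as a plain byte count, and the
// simulator program is assumed to take a fixed overhead at every level.
function maxNestingDepth(totalMemory, simulatorSize) {
  let depth = 0;
  let available = totalMemory;
  // Each level reserves simulatorSize bytes for the simulator program;
  // whatever remains is all the next inner machine can have.
  while (available > simulatorSize) {
    available -= simulatorSize;
    depth += 1;
  }
  return depth;
}

console.log(maxNestingDepth(1024, 100)); // a finite depth, never infinite
```

With any positive simulator overhead the loop terminates, which mirrors the argument above: the infinite series of simulation-of-simulation states cannot fit in finite memory.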
Can a human simulate itself?
If we consider the mind of a human as the product of a biological computer that has a finite representation, the same should also apply to humans. A human will not be able to fully simulate itself; however, it is not impossible to simulate a subset of its abilities, like: "if I were put in this situation, I would do this".
More interesting is the ability of a human to simulate another human's mental process or feeling through empathy. A human can somehow deeply understand what another human is experiencing.
Actually, any communication act is an attempt to pass to another human a simulation of one's own state of mind. The simulation is never identical, but the intent is to transmit the thought as faithfully as possible.
Please share this article if you find it interesting. Thank you.
How to Track Website Visitors by IP Address for Improved Analytics and Security
Published on August 21, 2023
Do you want to track IP addresses for your website? Knowing the IP address of your website visitors can be incredibly valuable for a variety of reasons, such as identifying potential threats or understanding your target audience better. By using a tracking code, you can easily gather this information and analyze it.
IP address tracking involves capturing the unique numerical identifier that is assigned to every device connected to the internet. This code acts as a virtual address, allowing data to be transmitted between devices. When someone visits your website, you can use a tracking code to record their IP address.
But how does a tracking code work? A tracking code is a small snippet of JavaScript that you insert into your website's HTML code. This code is executed whenever a visitor loads your website, allowing it to collect and transmit data to a tracking system. The tracking system processes and stores this information, making it accessible for analysis.
What is an IP Address?
An IP address is a unique numerical label assigned to each device on a computer network that uses the Internet Protocol for communication. It serves two main purposes: identifying the host or network interface and providing the location of the device in the network.
An IP address consists of a series of numbers separated by periods. The most common format used is IPv4, which has four sections of numbers ranging from 0 to 255. For example, 192.168.0.1 is an IPv4 address. However, with the increasing number of connected devices, IPv6 has been introduced, which uses a longer format with eight sections of numbers and letters.
When it comes to tracking, an IP address plays a crucial role. By assigning a unique IP address to each device, it enables websites, servers, and online services to identify and communicate with the devices. This allows for the delivery of content, tracking user activity, and providing personalized experiences.
IPv4 format: 192.168.0.1
IPv6 format: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
Tracking an IP address can be done using a tracking code or script embedded in a website or an application. When a device accesses the website or uses the application, the code captures the IP address and sends it to the server. The server can then analyze the IP address to determine the location of the device or gather other relevant information for tracking purposes.
It's important to note that IP addresses can be dynamic or static. Dynamic IP addresses are temporary and can change each time a device connects to the network, while static IP addresses are permanent and stay the same. Tracking dynamic IP addresses might be more challenging as they can be reassigned to different devices.
In conclusion, an IP address is a unique identifier that allows devices to communicate and be tracked on a network. It consists of a series of numbers or numbers and letters, and it plays a crucial role in tracking user activity and providing personalized experiences on the internet.
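As a small illustration of the IPv4 format described above, here is a sketch (in JavaScript; the function name is my own) of a format check for dotted-quad addresses:

```javascript
// Checks the IPv4 dotted-quad format: four dot-separated numbers, each 0-255.
// This validates the format only; it says nothing about whether the address
// is actually reachable or assigned.
function isValidIPv4(address) {
  const parts = address.split(".");
  if (parts.length !== 4) return false;
  return parts.every((part) => {
    if (!/^\d{1,3}$/.test(part)) return false;
    const n = Number(part);
    return n >= 0 && n <= 255;
  });
}

console.log(isValidIPv4("192.168.0.1")); // true
console.log(isValidIPv4("999.168.0.1")); // false (each section is at most 255)
```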
Why should you track IP address?
Tracking IP addresses can provide valuable information about the location and identity of individuals accessing your website or online services. By using a tracking code, you can monitor and analyze the IP addresses of your visitors, gaining insights into their geographic location, network provider, and even the type of device they are using.
This information can be crucial for various reasons:
1. Security: Tracking IP addresses can help identify potential security threats or suspicious activities on your website. By monitoring IP addresses, you can detect unauthorized access attempts, block malicious users, and protect sensitive information.
2. Geographical targeting: Knowing the location of your website visitors based on their IP addresses allows you to tailor your content, products, or services to specific regions or markets. This can help increase engagement and conversion rates by delivering personalized experiences.
3. Analytics and marketing: Tracking IP addresses enables you to gather data about user behavior, such as the pages they visit, the duration of their visits, and their navigation patterns. This data can be used to optimize your website, improve user experience, and develop targeted marketing campaigns.
4. Fraud prevention: By tracking IP addresses, you can detect and prevent fraudulent activities, such as multiple registrations from the same IP or suspicious transactions. This can help protect your business and your customers from scams and identity theft.
In conclusion, tracking IP addresses can provide valuable insights into your website visitors, enhance security measures, optimize your marketing efforts, and prevent fraud. By utilizing a tracking code, you can harness the power of IP address tracking for the benefit of your online presence.
How to track IP address with a tracking code?
If you want to track the IP address of a visitor on your website, you can do so by using a tracking code. This tracking code is a unique identifier that allows you to collect information about the visitor's IP address.
To start tracking the IP address, you need to embed the tracking code into your website. This code is usually provided by a tracking service or analytics platform. It is essential to place this code on every page that you want to track.
Once the code is in place, it will start recording the IP addresses of your website's visitors. The tracking code collects this information in the background without the visitor knowing. It assigns a unique identifier to each IP address, allowing you to differentiate between different users.
With the help of the tracking code, you can monitor and analyze the IP addresses of your website visitors. This data can provide valuable insights into the geographical location of your audience and help you customize your website content or marketing campaigns accordingly.
However, it's essential to remember the ethical and legal implications of tracking IP addresses. Be sure to comply with the applicable laws and regulations regarding privacy and data protection. Always inform your visitors about the use of tracking codes and obtain their consent if necessary.
In conclusion, tracking the IP address of your website visitors can be done by embedding a tracking code into your website. This code allows you to collect and analyze valuable data about your audience's geographical location. Just remember to handle this information responsibly and comply with privacy laws.
What is a tracking code?
A tracking code is a unique code or identifier used to track and monitor various activities on a website or online platform. In the context of IP addresses, a tracking code is used to collect information about the visitors of a website, specifically their IP addresses.
Where can you get a tracking code?
Tracking codes are typically provided by various IP tracking services or analytics platforms. These codes can be obtained by signing up for an account with these services and accessing the provided tracking code.
Some popular IP tracking service providers include:
• Google Analytics: a widely used analytics platform that provides tracking code snippets for website owners to track visitor data, including IP addresses.
• Clicky: another popular analytics service that offers tracking code snippets to monitor website traffic and the IP addresses of visitors.
• Matomo (formerly Piwik): an open-source analytics platform that offers tracking code snippets for website owners to track visitor data and IP addresses.
Once you have signed up for an account with one of these services, you can usually find the tracking code by navigating to the settings or tracking section of the platform. From there, you can copy the provided tracking code and insert it into the HTML of your website to start tracking IP addresses.
How does a tracking code work?
A tracking code is a small piece of code that is placed on a website to track visitors and their behavior. It is usually embedded in a web page's HTML code and is invisible to the site visitors. The tracking code collects various data, including the IP address of the visitor.
The IP address is a unique numerical label assigned to each device connected to a computer network. It acts as an identifier for the device and can provide information about the visitor's geographical location.
When a visitor accesses a website with a tracking code, the code automatically captures the visitor's IP address. This information is then sent to a tracking server, where it is processed and stored. The tracking server can analyze the IP address to determine the visitor's location, internet service provider, and other details.
Benefits of tracking IP address:
Tracking IP addresses can provide valuable insights to website owners and marketers. Here are some benefits:
1. Visitor Analytics: Tracking IP addresses allows website owners to gather data on visitor demographics, device types, and browsing behavior. This information can help optimize the website's content, design, and user experience.
2. Personalization: IP address tracking can enable personalized experiences for visitors based on their geographical location. For example, a website can display content in the visitor's language or offer location-specific promotions.
3. Security: Tracking IP addresses can help identify potential security threats, such as suspicious or malicious activity originating from specific IP addresses. This information can be used to enhance website security measures.
Implementation of the tracking code:
To track IP addresses, a tracking code needs to be implemented correctly on a website. This typically involves placing the code within the website's HTML code, preferably before the </body> closing tag. Once implemented, the tracking code will start capturing visitor data, including their IP addresses.
It's important to note that tracking IP addresses must be done in compliance with privacy laws and regulations. Website owners should provide clear information about data collection practices and offer options for visitors to opt-out if desired.
How to insert a tracking code on your website?
If you want to track the IP addresses of your website visitors, you can do so by inserting a tracking code into your website's HTML code. This tracking code will be responsible for gathering the necessary information about the visitors, including their IP addresses.
Step 1: Obtain a tracking code
The first step in inserting a tracking code is to obtain one. There are various tracking code providers available online that offer different functionalities. Research and choose a tracking code provider that suits your needs.
Step 2: Access your website's HTML code
To insert the tracking code, you need to access your website's HTML code. This can usually be done through a website management platform or a text editor. Locate the HTML file of the page you want to track.
Step 3: Insert the tracking code
Once you have accessed the HTML code, find the appropriate place to insert the tracking code. It is usually recommended to insert the code just before the closing </body> tag.
To insert the tracking code, copy the code provided by your tracking code provider and paste it into the desired location within the HTML code. Make sure to save the changes after inserting the code.
Now, every time a visitor accesses your website, the tracking code will capture their IP address and send it to your tracking code provider. You will then be able to analyze the collected data and gain insights into your website's traffic.
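Putting the three steps together, a page with the tracking code placed just before the closing </body> tag might look like the sketch below. The script URL and site ID are placeholders; use the exact snippet your provider gives you.

```html
<html>
  <head>
    <title>My page</title>
  </head>
  <body>
    <p>Page content...</p>

    <!-- Tracking code goes here, just before </body>.
         Replace this placeholder with your provider's snippet. -->
    <script src="https://tracker.example.com/track.js" data-site-id="YOUR_SITE_ID"></script>
  </body>
</html>
```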
What information can you get from tracking an IP address?
When you track an IP address, you can gather various pieces of information about the device and its location. This information includes:
• Geolocation: Tracking an IP address allows you to determine the country, city, and even the approximate latitude and longitude of the device.
• Internet Service Provider (ISP): By tracking the IP address, you can also find out which ISP provides the internet connection for the device.
• Network Type: You can determine whether the IP address belongs to a residential, commercial, or other type of network.
• Proxy Detection: Tracking an IP address can help identify whether the connection is going through a proxy server, which can mask the true location of the device.
• Threat Level: Some IP tracking tools provide information about the potential threat level associated with the IP address, including whether it has been flagged for suspicious activity.
• Internet Service: In addition to the ISP, tracking an IP address can give you insights into the type of internet connection used, such as broadband, mobile, or satellite.
Overall, tracking an IP address can provide valuable information for various purposes, including geotargeting, cybersecurity, and network analysis. However, it's important to note that the accuracy and availability of this information may vary depending on the tracking tool used and the specific IP address being tracked.
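Geolocation itself is typically done by looking the address up in a database that maps address ranges to locations. The sketch below (in JavaScript) shows the idea with a tiny hypothetical table; real services use large, regularly updated databases, and the entries here are invented sample data:

```javascript
// Hypothetical sample data; real geolocation databases map numeric ranges,
// not string prefixes, and contain millions of entries.
const sampleRanges = [
  { prefix: "192.168.", location: "private network (not routable)" },
  { prefix: "10.", location: "private network (not routable)" },
];

function lookupLocation(ip) {
  const match = sampleRanges.find((range) => ip.startsWith(range.prefix));
  return match ? match.location : "unknown";
}

console.log(lookupLocation("192.168.0.1")); // "private network (not routable)"
```

An address that matches no known range simply comes back as "unknown", which is one reason geolocation results are approximate rather than exact.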
Can you track someone's exact location with an IP address?
Tracking someone's exact location based solely on their IP address is not possible. While an IP address can provide some information about the general geographic location of a device, it cannot pinpoint the exact location of an individual. IP addresses are assigned to different devices by internet service providers (ISPs) and can be shared among multiple users, making it difficult to accurately track an individual's location.
However, with the help of advanced techniques and cooperation from ISPs, law enforcement agencies and certain organizations can sometimes trace an IP address to a specific individual or location. This is typically done through legal channels and requires a court order.
It's important to note that even when an IP address is traced to a specific location, it may not provide the exact physical address of the device. Instead, it may only provide information about the general area or city where the device is located.
Additionally, individuals can take steps to protect their privacy and prevent their exact location from being tracked through their IP address. Using a virtual private network (VPN) or anonymizing services can help mask an IP address and make it more difficult to trace.
In conclusion, while an IP address can provide some information about the general geographic location of a device, tracking someone's exact location with an IP address alone is not possible.
How accurate is IP address tracking?
Tracking IP addresses can be a useful tool for various purposes, such as identifying potential threats, monitoring website traffic, or delivering location-specific content. However, it's important to understand that IP address tracking is not always 100% accurate.
An IP address is a unique numerical code assigned to each device connected to a network. It can provide information about the general location of the device, but it cannot pinpoint the exact physical address or identify the specific individual using the device.
The accuracy of IP address tracking depends on several factors. First, the IP address itself may not always accurately reflect the user's location. For example, someone may be using a virtual private network (VPN) to mask their true IP address, making it appear as if they are accessing the internet from a different location.
Additionally, IP addresses can be dynamic, meaning they change regularly. This can make it challenging to track a specific user over time, especially if they are using different devices or internet connections.
Furthermore, IP address tracking relies on databases that map IP addresses to geographic locations. These databases may not always be up to date or accurate, leading to potential discrepancies in determining a user's location.
Despite these limitations, IP address tracking can still provide valuable insights and help in many situations. However, it's important to consider that it should not be solely relied upon for precise location information or as a tool for personal identification.
Overall, while IP address tracking can provide a general idea of a user's location, its accuracy is subject to various factors. For more accurate and granular location tracking, additional methods, such as GPS or other device-specific technologies, may be required.
Are there any legal implications of tracking IP addresses?
When it comes to tracking IP addresses, there can be legal implications that need to be considered. It is important to understand the laws and regulations surrounding the tracking of IP addresses in your jurisdiction.
In some countries, tracking IP addresses without proper consent may be considered a violation of privacy laws. Individuals have the right to privacy, and their personal information, including their IP addresses, may be protected. Therefore, it is crucial to ensure that you are in compliance with the applicable privacy laws before tracking IP addresses.
Furthermore, if you are planning to use the tracked IP addresses for any purpose, such as targeted advertising or website analytics, you may also need to comply with additional regulations. These may include data protection laws and regulations, which govern the collection, storage, and use of personal data. Failure to comply with these laws can lead to legal consequences.
It is also important to note that tracking IP addresses can have different legal implications depending on the context. For example, if you are tracking IP addresses for security purposes, such as investigating cybersecurity threats or identifying potential hackers, there may be different legal considerations involved.
To ensure that you are tracking IP addresses legally, it is recommended to consult with a legal professional who specializes in privacy and data protection laws. They can provide guidance and help you navigate the legal implications associated with tracking IP addresses in your specific jurisdiction.
How to track IP address without a tracking code?
In order to track an IP address without using a tracking code, you can rely on various methods and tools available online. Here are some ways to accomplish this:
1. Using online IP lookup tools
There are several websites that offer free IP lookup services. Simply visit one of these websites and enter the IP address you want to track. The website will provide you with information such as the location, owner, and other details associated with that IP address.
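A lookup like this can also be scripted. The sketch below is illustrative only: rather than calling a live service, it parses a canned JSON response whose field names are made up for the example, not any specific provider's schema.

```python
import json

# Hypothetical response from a JSON-based IP geolocation service
# (field names are illustrative, not a real provider's schema).
sample_response = json.dumps({
    "ip": "93.184.216.34",
    "country": "United States",
    "city": "Norwell",
    "isp": "Example ISP",
})

def summarize_lookup(raw_json):
    """Extract the fields most lookup sites display for an IP address."""
    data = json.loads(raw_json)
    return f"{data['ip']} -> {data['city']}, {data['country']} ({data['isp']})"

print(summarize_lookup(sample_response))
```

In a real script, the JSON would come from an HTTP request to the lookup service instead of the canned string above.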
2. Analyzing server logs
If you have access to the server logs, you can find the IP addresses of the visitors in the log files. By analyzing these logs, you can gather information about the IP addresses that accessed your website, including their location and other relevant details.
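As a minimal sketch of this idea, the snippet below pulls the client IP from the first field of common/combined-format access log lines and counts requests per address. The log lines themselves are fabricated examples.

```python
import re
from collections import Counter

# Made-up lines in Apache/Nginx "combined" log format.
log_lines = [
    '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 2326',
    '198.51.100.4 - - [10/Oct/2023:13:55:40 +0000] "GET /about HTTP/1.1" 200 512',
    '203.0.113.7 - - [10/Oct/2023:13:56:02 +0000] "GET /contact HTTP/1.1" 200 417',
]

# In this format, the client IP is the first whitespace-delimited field.
ip_pattern = re.compile(r'^(\d{1,3}(?:\.\d{1,3}){3})')

def count_visitor_ips(lines):
    """Count how many requests each IP address made."""
    hits = Counter()
    for line in lines:
        match = ip_pattern.match(line)
        if match:
            hits[match.group(1)] += 1
    return hits

print(count_visitor_ips(log_lines))
```

The same loop works over a real log file by iterating the file object instead of the in-memory list.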
3. Using web analytics tools
Web analytics tools like Google Analytics can provide you with valuable information about the visitors to your website, including their IP addresses. These tools can help you track the IP address of the visitors, along with other metrics such as pageviews, bounce rate, and more.
4. Checking email headers
If you received an email from the person you want to track, you can check the email headers to find their IP address. Most email clients allow you to view the headers of the email, which will contain the IP address of the sender.
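One way to automate this is with Python's standard-library email parser: read the Received headers and pull out the first bracketed IPv4 address. The raw message below is fabricated for the example.

```python
import re
from email import message_from_string

# A fabricated raw email whose Received header carries the sender's IP.
raw_email = (
    "Received: from mail.example.org (mail.example.org [198.51.100.23])\n"
    "\tby mx.example.com with ESMTP; Tue, 10 Oct 2023 13:55:36 +0000\n"
    "From: sender@example.org\n"
    "Subject: Hello\n"
    "\n"
    "Message body.\n"
)

def sender_ip(raw):
    """Return the first IPv4 address found in a Received header."""
    msg = message_from_string(raw)
    for header in msg.get_all("Received", []):
        match = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", header)
        if match:
            return match.group(1)
    return None

print(sender_ip(raw_email))
```

Note that real messages often pass through several relays, so the Received headers list multiple hops; the one closest to the sender is usually the last Received header added.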
By using these methods, you can track an IP address without the need for a tracking code. However, it's important to note that tracking someone's IP address without their consent may violate privacy laws, so make sure to use these methods responsibly and within legal boundaries.
What are some popular IP address tracking tools?
There are several popular IP address tracking tools available that can help you track the location and other details of an IP address. These tools use various methods to gather information and provide it to the user. Here are some of the popular IP address tracking tools:
• IP Address Tracker: This tool allows you to track the IP address and provides information about its location, ISP, and other details. It also allows you to view the IP address history and generate reports.
• What Is My IP: This tool displays the current public IP address used for your internet connection and provides information about its location, hostname, and ISP.
• IP Location Finder: This tool helps you find the location of an IP address by providing details such as country, city, and latitude/longitude coordinates.
• IP Tracker: This tool allows you to track the IP address and provides information about its location, hostname, ISP, and other details. It also offers information about the IP subnet and provides a traceroute feature.
• IP WHOIS Lookup: This tool allows you to look up the details of an IP address, including the owner's contact information, location, and registration date. It also provides information about the Autonomous System (AS) associated with the IP address.
These are just a few examples of the popular IP address tracking tools available. Each tool has its own features and capabilities, so you can choose the one that best suits your needs for tracking IP addresses.
How to protect your IP address from being tracked?
When it comes to online activities, it's important to prioritize your privacy and security. One way to do this is by taking steps to protect your IP address from being tracked. By implementing a few simple measures, you can safeguard yourself from prying eyes and keep your online activities private.
Use a Virtual Private Network (VPN)
One of the most effective ways to protect your IP address is by using a Virtual Private Network (VPN). A VPN encrypts your internet connection and routes it through a secure server, making it difficult for anyone to track your IP address or monitor your online activities. With a VPN, your IP address is masked, allowing you to browse the internet anonymously.
Disable Geolocation Services
Geolocation services can track your IP address and provide information about your physical location. To prevent this, disable geolocation services on your devices. In most cases, you can find this option in your device settings or browser settings. By turning off geolocation services, you can maintain your privacy and prevent your IP address from being tracked.
Use Anti-Tracking Tools
To further protect your IP address from tracking, consider using anti-tracking tools. These tools block tracking codes embedded in websites and prevent them from collecting your IP address or other data. Some popular anti-tracking tools include browser extensions that automatically block tracking scripts and cookies, giving you greater control over your online privacy.
In conclusion, taking steps to protect your IP address from tracking is crucial for maintaining your online privacy. By using a VPN, disabling geolocation services, and utilizing anti-tracking tools, you can safeguard your IP address and ensure your online activities remain private.
What are the limitations of IP address tracking?
While tracking an IP address can provide valuable information, there are certain limitations to consider. It's important to understand these limitations to get a clearer picture of what can and cannot be achieved through this method of tracking.
1. Inaccuracy:
IP address tracking relies on public and private databases to identify the geographical location of the IP address. However, these databases are not always up-to-date or accurate. Therefore, there is a chance that the tracking information may not reflect the actual physical location of the device.
2. Shared IP addresses:
Some internet service providers (ISPs) use shared IP addresses, meaning multiple devices share the same IP address. This makes it difficult to track the specific device associated with a particular IP address, as it could be any of the devices within that shared network.
3. Proxy servers and VPNs:
Proxy servers and virtual private networks (VPNs) allow users to mask their IP address and appear as if they are browsing from a different location. This makes it challenging to accurately track the IP address to its original source.
4. Dynamic IP addresses:
Many internet service providers assign dynamic IP addresses to their users. This means that the IP address assigned to a device can change over time. Tracking an IP address under such circumstances becomes more difficult, as the address may no longer be associated with the same device.
Summary of limitations:
• Inaccuracy: IP address tracking can be inaccurate due to outdated or incorrect databases.
• Shared IP addresses: Multiple devices can share the same IP address, making it difficult to track a specific device.
• Proxy servers and VPNs: Proxy servers and VPNs can hide the original IP address and location, making tracking challenging.
• Dynamic IP addresses: IP addresses can change over time, making it harder to track a specific device.
Can you track an IP address on a mobile device?
Yes, it is possible to track the IP address of a mobile device. An IP address is a unique identifier assigned to each device that connects to the internet. This includes not only computers and laptops but also smartphones and tablets.
Tracking an IP address on a mobile device can provide valuable information about the device's location and activities. Law enforcement agencies, for example, may use IP tracking to investigate crimes or locate suspects.
There are different methods that can be used to track an IP address on a mobile device. One common method involves embedding a tracking code within a website or an email. When the user accesses the website or opens the email on their mobile device, the tracking code collects information about their IP address, location, and other details.
Using a tracking code
In order to track an IP address on a mobile device, a tracking code must be placed on a webpage or an email. This code can be written in various programming languages such as JavaScript or PHP. When the code runs, it triggers a request to the tracking server, which records the IP address that request came from and stores it for analysis.
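To make the mechanism concrete, here is a minimal server-side sketch in Python (WSGI) rather than JavaScript or PHP: it serves a 1x1 GIF and records the requester's IP from the request environment. The in-memory list stands in for a real database, and the simulated request below shows how a WSGI server would deliver the client's address.

```python
# Minimal WSGI app: serves a 1x1 GIF and logs the caller's IP address.
TRANSPARENT_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
    b"\x02\x02D\x01\x00;"
)
logged_ips = []  # stand-in for a real database or analytics backend

def tracking_pixel_app(environ, start_response):
    # REMOTE_ADDR is set by the WSGI server to the client's IP address.
    logged_ips.append(environ.get("REMOTE_ADDR", "unknown"))
    start_response("200 OK", [("Content-Type", "image/gif"),
                              ("Content-Length", str(len(TRANSPARENT_GIF)))])
    return [TRANSPARENT_GIF]

# Simulate one request the way a WSGI server would deliver it.
fake_environ = {"REMOTE_ADDR": "203.0.113.7", "PATH_INFO": "/pixel.gif"}
body = tracking_pixel_app(fake_environ, lambda status, headers: None)
print(logged_ips)
```

The key point is that the server never needs client-side code to learn the IP: every HTTP request already arrives from it.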
It's important to note that tracking someone's IP address without their consent may raise privacy concerns. Therefore, it is essential to adhere to legal and ethical guidelines when it comes to IP tracking.
Conclusion:
While it is possible to track an IP address on a mobile device, it is important to approach this practice responsibly and with respect for privacy. Understanding the potential implications and legal considerations is crucial to ensure proper use of IP tracking methods.
How to track IP address in real-time?
If you want to track an IP address in real-time, you can use a tracking code. This code is inserted into a website or an application and can gather information about the IP address of the user. With this code, you can track the location, device, and even the browsing behavior of the user.
Tracking codes are usually embedded in the header or footer of a webpage, and they are triggered when the page is loaded. Once the code is triggered, it can send the IP address to a tracking service or store it in a database for further analysis.
There are different tracking code technologies available, such as JavaScript or server-side scripting languages. JavaScript is commonly used because it runs on the client-side and can gather more information about the user's device and browsing behavior. Server-side scripting languages like PHP or Python can also be used to track the IP address and perform additional analysis on the server side.
When using a tracking code, it is important to comply with privacy regulations and inform the users about the data that is being collected. This can be done through a privacy policy or a consent form that the user needs to agree to before the tracking code is activated.
Tracking IP addresses in real-time can provide valuable insights for businesses and organizations. It can help to optimize website performance, analyze user behavior, and personalize the user experience. However, it is important to use this technology responsibly and ensure that user privacy is respected.
What are the alternatives to IP address tracking?
While IP address tracking can be a useful tool for tracking website visitors and gathering information, it is not the only method available. There are several alternatives that can be used to track users without relying solely on IP address tracking.
1. Tracking Cookies
One alternative to IP address tracking is the use of tracking cookies. When a user visits a website, a cookie is stored on their device that tracks their browsing behavior and preferences. This allows website owners to gather information about their visitors and tailor their content and advertising accordingly.
2. User Accounts
Another alternative is to require users to create an account in order to access certain features or content on a website. By tracking user activity within their account, website owners can gather information about user behavior and preferences without relying on IP addresses.
3. Device Fingerprinting
Device fingerprinting is a technique that involves gathering information about a user's device, such as its operating system, browser version, and screen resolution. This information can be used to create a unique identifier for each device, allowing website owners to track users without relying on IP addresses.
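A simple way to turn such attributes into an identifier is to hash them in a canonical order. The sketch below is a toy illustration; the attribute names are examples, and real fingerprinting systems combine many more signals.

```python
import hashlib

def device_fingerprint(attributes):
    """Derive a stable identifier from device attributes.

    Sorting the keys makes the hash independent of dict ordering;
    the attribute names below are illustrative examples only.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

device_a = {"os": "Windows 11", "browser": "Firefox 118", "screen": "1920x1080"}
device_b = {"screen": "1920x1080", "browser": "Firefox 118", "os": "Windows 11"}

# The same attributes in a different order yield the same fingerprint.
print(device_fingerprint(device_a) == device_fingerprint(device_b))
```

Because the identifier is derived from the device itself, it persists even when the IP address changes, which is exactly what makes fingerprinting an alternative to IP tracking.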
4. Third-Party Tracking Services
Many third-party tracking services, such as Google Analytics, offer alternative methods of tracking website visitors. These services use a combination of techniques, including cookies and device fingerprinting, to gather information about users and their behavior on a website.
By utilizing these alternative tracking methods, website owners can gather a wealth of information about their users without relying solely on IP address tracking. Each method has its own advantages and disadvantages, so it's important to carefully consider which ones are most appropriate for your specific tracking needs.
How to track IP address in Google Analytics?
Tracking the IP address of your website visitors can provide valuable insights into the location and demographics of your audience. Google Analytics offers an easy and effective way to track IP addresses using a tracking code.
To start tracking IP addresses in Google Analytics, you will need to obtain your tracking code from the Google Analytics website. Once you have the code, you can add it to the header section of your website's HTML code.
The tracking code consists of a unique identifier that is associated with your website. When a visitor accesses your website, the code collects information about their IP address and sends it to Google Analytics.
To view the IP address information in Google Analytics, you can navigate to the "Audience" section and then click on "Geo" and "Network". Here, you will find detailed reports on the geographical location and network provider of your website visitors.
It's important to note that tracking IP addresses in Google Analytics must be done in compliance with privacy laws and regulations. Make sure to inform your visitors about the data you collect and obtain their consent if necessary.
Steps to track IP address in Google Analytics:
1. Obtain your tracking code from the Google Analytics website.
2. Add the tracking code to the header section of your website's HTML code.
3. Access the "Audience" section in Google Analytics.
4. Click on "Geo" and "Network" to view IP address information.
What are the benefits of tracking IP addresses for businesses?
Tracking IP addresses can provide businesses with valuable insights and benefits. Here are some key advantages:
1. Enhanced security
By tracking IP addresses, businesses can identify potential security threats and take proactive measures to protect their systems and sensitive data. They can track suspicious IP addresses and block them from accessing their network, reducing the risk of cyberattacks and data breaches.
2. Targeted marketing
Tracking IP addresses allows businesses to gather information about their website visitors. With this data, they can analyze visitor demographics, geographic location, and browsing behavior to better understand their target audience. This knowledge enables businesses to create personalized marketing campaigns and deliver targeted ads, resulting in higher conversion rates and improved ROI.
In addition, businesses can track IP addresses to determine which marketing channels are driving the most traffic and conversions. By tracking the IP addresses of visitors who arrived through specific campaigns or advertisements, businesses can assess the effectiveness of their marketing strategies and make data-driven decisions to optimize their marketing efforts.
3. Fraud detection and prevention
Tracking IP addresses can help businesses identify and prevent fraudulent activities. By monitoring IP addresses associated with suspicious or fraudulent behavior, businesses can detect patterns and anomalies that indicate possible fraudulent transactions or unauthorized access attempts. This allows them to take immediate action to mitigate the risk and protect their customers and businesses from financial losses.
Overall, tracking IP addresses empowers businesses with valuable insights that can help improve security, tailor marketing strategies, and detect fraudulent activities. It is a powerful tool for any business looking to optimize their online presence and protect their digital assets.
How to track IP address in WordPress?
If you want to track the IP address of your WordPress website visitors, there are several methods you can use. Monitoring the IP address can provide valuable insight into your website traffic and help you identify potential security threats or suspicious activity.
Method 1: Using a Plugin
One of the easiest ways to track IP addresses in WordPress is by using a plugin. There are many plugins available in the WordPress plugin directory that can help you monitor and log visitor IPs. Simply search for "IP tracking" or "visitor logging" plugins and choose the one that suits your needs. Install and activate the plugin, and it will automatically start tracking the IP addresses of your website visitors.
Method 2: Analyzing Server Logs
Another method to track IP addresses in WordPress is by analyzing the server logs. Most hosting providers offer access to server logs, which contain information about all requests made to your website, including the IP addresses of the visitors. You can use tools like AWStats, Webalizer, or Google Analytics to analyze the server logs and extract the IP addresses of your visitors.
Note: Analyzing server logs requires some technical knowledge, so it may not be suitable for beginners.
Whichever method you choose, tracking IP addresses can provide valuable data that can help you optimize your website and enhance its security. So, consider implementing IP tracking on your WordPress website today!
What is the difference between tracking IP address and tracking cookies?
When it comes to tracking, both IP address and cookies play a crucial role. However, there are some key differences between them:
• Mechanism: IP address tracking uses the unique numerical label that identifies a device's network connection, while tracking cookies rely on a small piece of data stored in the user's browser.
• Data collected: IP address tracking reveals the user's approximate location and internet service provider, while tracking cookies record browsing behavior and preferences.
• Persistence: An IP address rarely changes unless the user switches networks, whereas tracking cookies can be easily deleted or blocked by the user.
In summary, IP address tracking provides information about the user's location and network connection, while tracking cookies track a user's browsing behavior and can be easily managed by the user. Both methods have their benefits and drawbacks, and their use depends on the specific tracking needs and goals.
How to track IP address in social media?
Tracking the IP address of someone in social media can be a useful tool for various reasons. Whether it's to gather information, increase security, or monitor online activities, understanding how to track an IP address in social media can provide valuable insights.
One way to track an IP address in social media is by using a tracking code. This code can be embedded in a link or an image, and when clicked or viewed, it will collect the IP address of the user. This information can then be analyzed to determine the user's location and other relevant details.
The tracking code can be generated using various online tools or programming languages. It is important to ensure that the code is properly implemented to avoid any legal or ethical implications. Additionally, it is essential to consider the privacy and consent of the individuals being tracked.
Once the tracking code is in place, the IP address can be logged and stored for further analysis. With this data, you can gain insights into the demographics of your social media audience, identify potential threats or suspicious activities, and improve your overall social media strategy.
It's important to note that tracking someone's IP address in social media should be done responsibly and within legal boundaries. It's always recommended to consult with legal professionals or experts in the field to ensure compliance with relevant laws and regulations.
Tracking IP addresses in social media can be a powerful tool when used ethically and responsibly. It can provide valuable information and insights that can help improve online security, understand user behavior, and enhance social media strategies.
What are the ethical considerations of tracking IP addresses?
When it comes to tracking IP addresses, there are several ethical considerations that need to be taken into account. While tracking IP addresses can be a useful tool for various purposes, it is important to recognize and respect the privacy rights of individuals.
One of the main ethical concerns of tracking IP addresses is the potential invasion of privacy. Every individual has the right to privacy, and tracking their IP address without their consent can be seen as a violation of this right. It is important to obtain proper consent or have a legitimate reason for tracking someone's IP address.
Furthermore, the data collected through IP tracking can be quite personal and sensitive. It is crucial to handle this data with care and ensure that it is stored securely. Unauthorized access or misuse of this data can lead to serious privacy breaches and can be considered unethical.
Transparency is another important ethical consideration when it comes to tracking IP addresses. Users should be informed about the tracking practices in place and given the opportunity to opt out if they wish to do so. Providing clear and understandable information about the purpose of the tracking and how the collected data will be used is essential.
Additionally, it is important to use the tracked IP addresses for legitimate purposes only. Tracking IP addresses for malicious intent or using the data for unethical activities is highly unethical. It is important to ensure that the tracking is carried out in a responsible manner and that the collected data is used in a way that respects ethical guidelines.
Summary: Tracking IP addresses raises ethical concerns regarding privacy, storage and security of personal data, transparency, and responsible use.
How to track IP address in email campaigns?
Tracking the IP address of recipients in your email campaigns can provide valuable insights into the location and behavior of your audience. By tracking IP addresses, you can gather information such as the geographic location of your recipients, their internet service provider, and any suspicious or unusual activity.
To track IP addresses in your email campaigns, you can insert a tracking code in your emails. This code, usually an invisible one-pixel image, allows you to collect information about the recipients' IP addresses when they open your emails. Such tracking pixels can be generated by various email marketing platforms or tools.
Once the tracking code is embedded in your emails, every time a recipient opens your email, the code will send information about their IP address to your tracking system. This data can then be analyzed to understand the demographics and engagement patterns of your audience.
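A common pattern is to give each recipient a unique pixel URL so an open event can be tied back to a specific person as well as an IP. The domain and parameter names below are placeholders, not a real service.

```python
from urllib.parse import urlencode

def pixel_url(base, campaign_id, recipient_id):
    """Build a per-recipient tracking-pixel URL.

    When the email client fetches this image, the server's log entry
    records both the recipient id and the requesting IP address.
    """
    query = urlencode({"c": campaign_id, "r": recipient_id})
    return f"{base}/open.gif?{query}"

# Placeholder domain and ids for illustration.
url = pixel_url("https://track.example.com", "fall-sale", "user-42")
print(url)

# The HTML snippet dropped into the email body:
print(f'<img src="{url}" width="1" height="1" alt="">')
```

Each open then appears in the server log as a request for that unique URL, joining recipient identity and IP address in one record.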
Tracking IP addresses can help you personalize your email campaigns based on the location of your recipients. For example, if you have different offers or promotions for specific regions, you can segment your email list based on IP addresses and send targeted content to each segment.
Additionally, tracking IP addresses can help you identify any suspicious activity or potential threats to your email campaigns. If you notice multiple opens from the same IP address or unusual patterns, you can take necessary actions such as reviewing your email list or blocking certain IP addresses.
It's important to note that tracking IP addresses should be done in compliance with privacy laws and regulations. Make sure to inform your recipients about the tracking and provide them with an option to opt out if they prefer not to be tracked.
In conclusion, tracking IP addresses in email campaigns can provide valuable insights into your audience's location and behavior. By using tracking codes, you can personalize your emails and identify any suspicious activity. However, it's crucial to prioritize privacy and ensure compliance with relevant regulations.
What are some common misconceptions about IP address tracking?
Many people have misconceptions about IP address tracking and the associated tracking code. Here are some common misconceptions:
1. IP address tracking can provide the exact physical location of a person
Contrary to popular belief, IP address tracking cannot provide the exact physical location of a person. While it can provide information about the general geographic area associated with an IP address, it is not accurate enough to pinpoint a person's exact location.
2. IP address tracking can track individual devices
Another misconception is that IP address tracking can track individual devices. In reality, a public IP address is typically assigned to a router or modem, which many devices behind it share. Therefore, it is not possible to track an individual device solely based on its IP address.
However, by combining other tracking methods and data, such as cookies or login information, it may be possible to track a specific device to some extent.
3. IP address tracking is always accurate and reliable
While IP address tracking can be a useful tool, it is not always accurate and reliable. IP addresses can be dynamic, meaning they can change over time or with each new internet connection. This can lead to inaccuracies and make it difficult to track a specific IP address consistently.
Furthermore, IP address tracking relies on databases that may not always have up-to-date or complete information. This can result in incorrect or outdated location data.
It is also worth noting that some individuals may use methods to mask or hide their IP addresses, making it even more challenging to track them accurately.
4. IP address tracking can provide personal and identifying information
IP address tracking is limited to providing general information about the geographic area associated with an IP address. It does not provide personal or identifying information about the individual using the IP address.
Any claims or advertisements suggesting that IP address tracking can provide personal details such as names, addresses, or contact information are misleading and should be treated with caution.
In conclusion, it is important to understand the limitations and misconceptions associated with IP address tracking. While it can provide some general information about the geographic area associated with an IP address, it is not a foolproof method for tracking individuals or obtaining personal information.
Always use IP address tracking responsibly and in compliance with privacy laws and regulations.
How to track IP address for free?
Tracking someone's IP address can be a useful tool for various reasons, such as identifying the location of website visitors or detecting potential malicious activities. Luckily, there are free tools available that allow you to track IP addresses easily and efficiently.
1. Online IP trackers
There are numerous online IP tracking tools that provide you with the ability to track IP addresses for free. These tools usually require you to input the target IP address, and they will display various information about it, such as the location, internet service provider, and even the approximate latitude and longitude of the IP address.
2. Tracing using the command prompt
If you prefer a more hands-on approach, you can use the command prompt on your computer to track IP addresses. By following a few simple steps, you can obtain valuable information about an IP address.
1. Open the command prompt on your computer by pressing the Windows key + R and typing "cmd".
2. In the command prompt window, type "tracert [IP address]" and press Enter. Replace "[IP address]" with the actual IP address you want to track.
3. The command prompt will display a list of IP addresses along the route to the target IP address, allowing you to identify the path it takes.
By utilizing these free methods, you can easily track IP addresses and gain valuable insights about their origins and potential activities. However, it's essential to remember that tracking someone's IP address should always be done ethically and within the boundaries of the law.
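Before spending a lookup on an address, Python's standard `ipaddress` module can tell you for free whether it is even globally routable; private and loopback addresses will never resolve to a meaningful public location.

```python
import ipaddress

def classify(ip_string):
    """Label an address as loopback, private, or global (public)."""
    ip = ipaddress.ip_address(ip_string)
    if ip.is_loopback:
        return "loopback"
    if ip.is_private:
        return "private"
    if ip.is_global:
        return "global"
    return "other"

for addr in ["192.168.1.10", "127.0.0.1", "8.8.8.8"]:
    print(addr, "->", classify(addr))
```

Only addresses classified as global are worth submitting to an online tracker or geolocation database.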
What are the future trends in IP address tracking?
As technology continues to advance, the future of IP address tracking is likely to bring about several trends that will revolutionize how we track IP addresses and gather information. Here are some of the key trends to watch out for:
1. Enhanced Geolocation Accuracy
One of the main goals in IP address tracking is to determine the geolocation of the IP address accurately. In the future, advancements in tracking technology are likely to lead to enhanced geolocation accuracy. This will allow businesses and organizations to pinpoint the exact location of an IP address with greater precision, providing valuable insights for targeted marketing campaigns and fraud prevention.
2. Mobile Tracking
In the age of smartphones and mobile devices, tracking IP addresses on mobile platforms will become increasingly important. As more people access the internet using their mobile devices, tracking codes will need to be adapted to gather information from mobile users. This will enable businesses to gather valuable data on mobile browsing habits and target their marketing efforts accordingly.
3. Advanced Tracking Codes
The development of advanced tracking codes will be crucial in the future of IP address tracking. These codes will not only track the IP address but also gather additional information such as browsing habits, device information, and user behavior. This will provide businesses with a comprehensive understanding of their users and allow for more personalized and targeted marketing campaigns.
4. Privacy Concerns
With the increasing prevalence of online tracking, privacy concerns will continue to be a major factor in the future of IP address tracking. As tracking technology advances, there will be a greater need for regulations and policies to protect user privacy. Striking a balance between tracking for legitimate purposes and respecting user privacy rights will be a key challenge.
In conclusion, the future of IP address tracking holds great potential for advancements in geolocation accuracy, mobile tracking, advanced tracking codes, and privacy protection. These trends will shape the way businesses gather and utilize IP address information to improve their marketing strategies and prevent fraud.
Question-answer:
What is an IP address?
An IP address is a unique numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication.
Why would I want to track an IP address?
Tracking an IP address can be useful for various reasons such as identifying the geographical location of a user, monitoring online activities, preventing fraud, and enhancing network security.
How can I track an IP address?
To track an IP address, you can use a tracking code, also known as a web beacon or pixel tag. This code is embedded in a website or email, and when a user interacts with the website or opens the email, the code sends information back to the tracking system, including the user's IP address.
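The mechanism described in this answer can be sketched in code. The snippet below is an illustrative server-side sketch, not part of any specific tracking product: when the embedded pixel is requested, the server records the client's IP address, preferring the first entry of the `X-Forwarded-For` header (set by proxies) over the raw TCP peer address. All names here (`extract_client_ip`, `TRANSPARENT_GIF`) are assumptions for illustration.

```python
# Smallest valid 1x1 transparent GIF, commonly served as a web beacon.
TRANSPARENT_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
    b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
    b"\x00\x02\x02D\x01\x00;"
)

def extract_client_ip(headers: dict, peer_ip: str) -> str:
    """Prefer the first address in X-Forwarded-For (added by proxies),
    falling back to the TCP peer address."""
    forwarded = headers.get("X-Forwarded-For", "")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return peer_ip

# Example: a request arriving through a reverse proxy.
ip = extract_client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.2"}, "10.0.0.2")
```

In a real deployment the pixel bytes would be returned as the HTTP response body while `ip` is written to the tracking system's log.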
Is tracking IP addresses legal?
The legality of tracking IP addresses depends on the jurisdiction and purpose. In general, tracking IP addresses for legitimate purposes such as network security or preventing fraud is legal. However, using IP tracking for malicious activities or invading someone's privacy may be illegal. It is important to comply with applicable laws and regulations when tracking IP addresses.
What are the limitations of tracking IP addresses?
While tracking IP addresses can provide valuable information, there are some limitations to consider. IP addresses can be dynamic, meaning they change over time. Additionally, IP addresses may be shared by multiple devices within a network, making it difficult to pinpoint the exact user. Furthermore, users can use methods such as VPNs or proxy servers to conceal their true IP address.
Disney Patent | Augmented Reality Device With Predefined Object Data
Patent: Augmented Reality Device With Predefined Object Data
Publication Number: 10217289
Publication Date: 20190226
Applicants: Disney Enterprises, Inc.
Abstract
Embodiments capture one or more images of a visual scene that includes a first physical object. A first region of the first physical object to apply one or more augmentations to is determined. Embodiments determine which configuration the first physical object is currently in. The first physical object is configured to be physically manipulatable into each of a plurality of configurations. A sequence of frames is rendered for display in which the first region of the first physical object is animated in a predefined manner depicting a virtual light source within the first physical object, based on the determined configuration of the first physical object, by applying the one or more augmentations to a first virtual object generated based on predefined geometric information corresponding to a determined object type of the first physical object. The rendered sequence of frames is output for display using one or more display devices.
BACKGROUND
Field of the Invention
The present invention generally relates to a human-computer interface and more specifically to techniques for recognizing and displaying predefined objects on an augmented reality device.
Description of the Related Art
Computer graphics technology has come a long way since video games were first developed. Relatively inexpensive 3D graphics engines now provide nearly photo-realistic interactive game play on hand-held video game, home video game and personal computer hardware platforms costing only a few hundred dollars. These video game systems typically include a hand-held controller, game controller, or, in the case of a hand-held video game platform, an integrated controller. A user or player uses the controller to send commands or other instructions to the video game system to control a video game or other simulation being played. For example, the controller may be provided with a manipulator (e.g., a joystick) and buttons operated by the user.
Many hand-held gaming devices include some form of camera device which may be used to capture an image or a series of images of a physical, real-world scene. The captured images can then be displayed, for instance, on a display of the hand-held gaming device. Certain devices may be configured to insert virtual objects into the captured images before the images are displayed. Additionally, other devices or applications may enable users to draw or paint within a captured image of a physical scene. However, as such alterations apply only to a single image of the physical scene, subsequent captured images of the physical scene from different perspectives may not incorporate the user’s alterations.
SUMMARY
Embodiments provide a method, computer-readable memory and augmented reality device for displaying a first physical object. The method, computer-readable memory and augmented reality device include capturing, using one or more camera devices, one or more images of a visual scene that includes a first physical object. The method, computer-readable memory and augmented reality device also include determining a first region of the first physical object to apply one or more augmentations to. Additionally, the method, computer-readable memory and augmented reality device include determining which one of a plurality of configurations the first physical object is currently in, wherein the first physical object is configured to be physically manipulatable into each of the plurality of configurations. The method, computer-readable memory and augmented reality device include rendering a sequence of frames for display in which the first region of the first physical object is animated in a predefined manner depicting a virtual light source within the first physical object, based on the determined configuration of the first physical object, by applying the one or more augmentations to a first virtual object generated based on predefined geometric information corresponding to a determined object type of the first physical object. The method, computer-readable memory and augmented reality device further include outputting the rendered sequence of frames for display using one or more display devices.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above recited aspects are attained and can be understood in detail, a more particular description of embodiments of the invention, briefly summarized above, may be had by reference to the appended drawings.
It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
FIG. 1 is a block diagram illustrating an augmented reality device configured with an augmented reality component, according to one embodiment described herein.
FIG. 2 illustrates an augmented reality device viewing an augmented reality toy, according to one embodiment described herein.
FIG. 3 is a screenshot of the screen of the augmented reality device shown in FIG. 2, according to one embodiment described herein.
FIG. 4 is a flow diagram illustrating a method for displaying an augmented reality toy on an augmented reality device, according to one embodiment described herein.
FIG. 5 is a block diagram illustrating an augmented reality device configured with an augmented reality component, according to one embodiment described herein.
DETAILED DESCRIPTION
Generally, embodiments of the invention provide techniques for displaying content on an augmented reality device. As used herein, an augmented reality device refers to any device capable of displaying a real-time view of a physical, real-world environment while altering elements within the displayed view of the environment. As such, unlike a virtual reality device which displays a view of virtual world, an augmented reality device displays a view of the real world but augments elements using computer graphics technology. Such an augmented reality device may include and/or be communicatively coupled to a camera device (or multiple camera devices) used to capture a view of the real-world environment and may further include computer software and/or hardware configured to augment elements of the captured scene. For example, an augmented reality device could capture a series of images of a coffee cup sitting on top of a table, modify the series of images so that the coffee cup appears as an animated cartoon character and display the modified series of images in real-time to a user. As such, when the user looks at the augmented reality device, the user sees an augmented view of the physical real-world environment in which the user is located.
Embodiments provide techniques for displaying an augmented reality toy on an augmented reality device. Software on the augmented reality device may capture a visual scene for display using one or more cameras of the augmented reality device. The visual scene includes the augmented reality toy. For example, cameras could be used to capture one or more images of a toy castle sitting atop a table. The software could identify the augmented reality toy as a first predetermined object type, based on one or more object identifiers associated with the first physical object. For example, the toy castle could include a marker that corresponds to a particular object type, where different types of augmented reality toys are labeled with different markers, each corresponding to a respective object type. Additionally, the marker could be embedded using a material that is difficult or impossible to see with the human eye (e.g., an infrared-absorbing ink). In such an example, the augmented reality device could be configured with an infrared camera capable of detecting the embedded marker and, upon detecting the embedded marker, the software could determine the predetermined object type that the particular marker corresponds to (e.g., a particular type of toy castle).
Furthermore, the software could be configured to identify the object type of the augmented reality toy based on its shape. For instance, the software on the augmented reality device could analyze the visual scene to determine a plurality of edges of the first physical object within the visual scene, and could use the determined plurality of edges to identify the predetermined object type of the augmented reality toy. In one embodiment, the augmented reality toy is configured with a transmitter (e.g., a RF transmitter) that transmits a signal with embedded data specifying an object type identification code. Software on the augmented reality device could receive the signal and could determine the predetermined object type based on the specified object type identification code.
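The identification step described in the two paragraphs above could be sketched as a simple registry lookup: each detection channel (visual/infrared marker, RF signal payload) yields a type identifier that is resolved against a table of known toy types. The registry contents and function names below are illustrative assumptions, not details from the patent.

```python
# Hypothetical registry mapping detected identifiers to object types.
MARKER_REGISTRY = {
    "QR-0041": "toy_castle",
    "QR-0042": "ice_castle",
    "RF-0x1A": "spaceship",
}

def identify_object_type(marker_id=None, rf_type_code=None):
    """Return the predetermined object type, preferring the visual marker
    and falling back to the RF-broadcast identifier; None if unknown."""
    for key in (marker_id, rf_type_code):
        if key is not None and key in MARKER_REGISTRY:
            return MARKER_REGISTRY[key]
    return None
```

Combining channels this way also supports the cross-check mentioned later, where a visual identification is confirmed against a broadcast signal.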
Additionally, the software could retrieve predefined geometric information corresponding to the first predetermined object type. The geometric information could specify, for instance, dimensions of objects in the predetermined object type, the shape of the objects in the predetermined object type, and so on. Additionally, the geometric information could identify one or more effect areas on the objects in the predetermined object type. Continuing the example, the toy castle could include several windows and these could be identified as effect areas in the geometric data for the castle.
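One minimal way to organize the predefined geometric information just described is a per-type record of dimensions plus named effect areas (regions to augment, such as the castle windows). The field names and values below are hypothetical, chosen only to illustrate the structure.

```python
from dataclasses import dataclass, field

@dataclass
class EffectArea:
    name: str            # e.g. "window_left"
    augmentation: str    # e.g. "emit_light"
    bounds: tuple        # (x, y, w, h) in the object's local coordinates

@dataclass
class ObjectGeometry:
    object_type: str
    dimensions: tuple    # (width, height, depth), e.g. in centimetres
    effect_areas: list = field(default_factory=list)

# Illustrative entry for the toy castle example.
castle = ObjectGeometry(
    object_type="toy_castle",
    dimensions=(30.0, 40.0, 25.0),
    effect_areas=[
        EffectArea("window_left", "emit_light", (2, 10, 4, 6)),
        EffectArea("window_right", "emit_light", (24, 10, 4, 6)),
    ],
)
```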
The software could then render a sequence of frames for display in which an appearance of the first physical object is augmented, based on the predefined geometric information. For example, the window effect areas on the toy castle could be augmented to appear as if light is emitting from the windows. As another example, one or more animated virtual characters could be depicted on, in or around the augmented reality toy. For instance, an animated virtual character could be shown through the castle windows in the sequence of frames walking around a room within the toy castle. Of note, such a depiction may be shown, even though the toy castle itself may not include any interior rooms. For example, when viewed outside of the augmented reality device, the toy castle could be a plastic castle with several stickers depicting windows. However, when viewed through the augmented reality device, these windows could appear as realistic windows emitting light from an interior room of the castle, and with one or more animated virtual characters moving throughout the room. Advantageously, doing so provides an improved experience for users of the augmented reality toy.
FIG. 1 is a block diagram illustrating an augmented reality device configured with a display correction component, according to one embodiment of the present invention. As shown, the augmented reality device 100 includes an augmented reality component 110, camera devices 120, a display device 130 and an accelerometer 140. The camera devices 120 may include cameras for capturing a visual scene. As used herein, a visual scene refers to a view(s) of the real-world environment in which the device 100 is being used. For instance, a visual scene may be a series of images of a real-world environment. The camera devices 120 may also include one or more user-facing cameras. The augmented reality component 110 could use such a user-facing camera device 120 to, e.g., determine an angle at which the user is viewing the display device 130. Generally, the accelerometer 140 is a device capable of measuring the physical (or proper) acceleration of the augmented reality device 100. The augmented reality component 110 may use the accelerometer 140 to, e.g., determine when the position of the augmented reality device 100 is changing, which could indicate the user’s viewing angle of the display device 130 is also changing.
Generally, the augmented reality component 110 is configured to recognize augmented reality toys within a visual scene (e.g., a series of frames captured using the camera devices 120) and to adjust the depiction of the visual scene on the augmented reality device based on predefined data associated with the augmented reality toys. For instance, the augmented reality component 110 could analyze a visual scene captured using the cameras 120 and identify augmented reality toys within the visual scene. More specifically, as the visual scene represents a three-dimensional space (i.e., the physical environment captured using the cameras 120), the augmented reality component 110 could determine an area of three-dimensional space occupied by each identified augmented reality toy. For example, the augmented reality component 110 could be preconfigured with geometric data that defines geometric properties (e.g., size, shape, color, etc.) for particular toys, and could use the geometric data to identify instances of the predefined toys within the visual scene and the three-dimensional space each object occupies.
In one embodiment, the augmented reality toy is configured with a transmitter (e.g., a radio frequency (RF) transmitter) that sends out a signal encoded with data specifying a type identifier. In such an embodiment, the augmented reality component 110 could receive the signal (e.g., using a receiver or transceiver on the augmented reality device 100) and could determine the type identifier encoded within the signal. The augmented reality component 110 could then determine the type of the toy, based on the type identifier.
In another embodiment, the augmented reality toy may contain an embedded marker that identifies the type of the toy. For instance, the augmented reality toy could contain a quick response (QR) code that specifies a type identifier corresponding to the type of the toy. More generally, however, any type of marker capable of identifying a type or a type identifier may be used. The augmented reality component 110 could then detect the embedded marker (e.g., using a camera 120 of the augmented reality device 100) and could determine the type of the toy based on the embedded marker. In a particular embodiment, the embedded marker is impossible or difficult to detect using the human eye. For example, the embedded marker could be expressed using an infrared-absorbing material that is invisible or nearly invisible to the human eye, and the augmented reality component 110 could be configured to detect the embedded marker using one or more infrared cameras on the augmented reality device 100. Advantageously, doing so allows the marker to be embedded in the augmented reality toy without disrupting the aesthetics of the toy.
Upon identifying an augmented reality toy within the visual scene, the augmented reality component 110 could then retrieve predefined data associated with the identified toy. For example, the augmented reality component 110 could determine that the augmented reality toy is a castle and could retrieve augmentation data associated with the castle object type. Such augmentation data could specify, for instance, one or more areas of the castle toy to augment and how the areas should be augmented. As an example, the physical castle toy could include several stickers that depict windows of the castle, and the augmentation data could specify that these stickers should be augmented to appear as real windows that emit light. Additionally, the augmentation data could specify that the augmented windows should depict one or more animated virtual characters shown within the castle. The augmented reality component 110 could then render a series of frames depicting an augmented virtual scene based on the augmentation data. Advantageously, by recognizing the physical toy as a particular type of augmented reality toy, the augmented reality component 110 can provide augmentations that are specific to the particular type of toy, thereby enhancing the appearance of the toy and the user’s experience with the toy.
Additionally, the augmented reality component 110 could depict interactions between virtual characters and the augmented reality toy based on the type of the toy. For instance, upon detecting an arctic castle toy, the augmented reality component 110 could generate a series of frames depicting an ice patch next to the toy. Moreover, upon determining that a virtual character within the augmented reality scene is coming into contact with the ice patch, the augmented reality component 110 could depict the virtual character as slipping on the ice. Advantageously, doing so helps to create a more immersive and improved experience for users of the augmented reality toy.
In addition to identifying the type of the augmented reality toy, the augmented reality component 110 can use predefined geometric data associated with the type of toy to augment the augmented reality toy’s appearance. For instance, such geometric data could specify the shape and dimensions of a staircase on the augmented reality castle, and the augmented reality component 110 could use this information to render frames realistically depicting a virtual character walking up the steps of the toy castle. Additionally, by pre-configuring the augmented reality component 110 with geometric data specifying the shape of the stairs, the augmented reality component 110 does not need to approximate the shape and size of the stairs based on the toy’s appearance in the captured visual scene.
Additionally, the augmented reality component 110 on the augmented reality device could measure one or more environmental illumination characteristics of the environment in which the augmented reality device is located. Environmental illumination characteristics could include, for instance, a position of a light source within the environment, an angle of the light source, an indication of whether the light source is omnidirectional, a color of the light source, an intensity of the light source, and a reflectivity value of the first physical object. Based on these measured characteristics, the augmented reality component 110 could adjust the appearance of the augmented first physical object and of virtual characters/objects within the augmented reality scene. For instance, the augmented reality component 110 could identify one or more shadows within the visual scene and could render shadows for one or more virtual characters or objects within the augmented reality scene based on the identified shadows. As an example, the augmented reality component 110 could determine that a toy castle has a shadow on the left side of the captured visual scene, indicating that a light source is shining on the toy castle from the right side of the captured visual scene. In such an example, the augmented reality component 110 could render shadows for virtual objects and characters in the augmented reality scene, based on a virtual light source shining from the right side of the augmented reality scene.
While the aforementioned examples refer to identifying light sources based on shadows of physical objects within the captured visual scene, these examples are without limitation and it is contemplated that numerous other techniques could be used to identify light sources within the physical environment. For instance, the augmented reality device 100 could be configured with multiple cameras positioned on multiple, different sides of the device 100, and the augmented reality component 110 could use images from these other cameras to identify light sources positioned throughout the physical environment. As another example, the rendered sequence of frames could depict a virtual pond positioned on the table next to the toy castle and could augment the appearance of the virtual pond to show reflections from one or more light sources within the environment. Moreover, the augmented reality component 110 could depict these reflections as having an effect on other virtual objects/characters or the physical toy within the augmented reality scene. For instance, the augmented reality component 110 could depict light reflected from the virtual pond shining onto the walls of the toy castle. Doing so provides a more dynamic and realistic augmented reality world that is capable of adapting to the environment in which the augmented reality device is located.
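The shadow-based inference described above can be sketched with a deliberately simplified 2D screen-space model: if a physical object's shadow is offset to one side, the light source lies in the opposite direction, and virtual objects should cast shadows with a matching offset. The functions below are an illustrative sketch under that assumption, not the patent's method.

```python
def light_direction_from_shadow(object_pos, shadow_pos):
    """Unit vector pointing from the shadow toward the light source."""
    dx = object_pos[0] - shadow_pos[0]
    dy = object_pos[1] - shadow_pos[1]
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (dx / length, dy / length)

def virtual_shadow_offset(light_dir, distance=1.0):
    """Place a virtual object's shadow on the side opposite the light."""
    return (-light_dir[0] * distance, -light_dir[1] * distance)

# Castle at x=10, shadow observed at x=6 -> light comes from the right,
# so virtual objects cast shadows toward the left.
light = light_direction_from_shadow((10.0, 0.0), (6.0, 0.0))
offset = virtual_shadow_offset(light, distance=3.0)
```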
In addition to augmenting the appearance of the augmented reality toy, the augmented reality component 110 could also augment the acoustics of the toy. For instance, the augmented reality component 110 could be configured to recognize a stuffed animal dog toy, and when viewing the toy dog with the augmented reality device, the augmented reality component 110 could play sound effects associated with the toy dog. For instance, when the user views the toy dog with the augmented reality device, the augmented reality component 110 could render a series of frames depicting the toy dog as an animated dog and could further play sound effects corresponding to the animation (e.g., a barking noise when the animated dog barks).
Additionally, the augmented reality component 110 could be configured to depict interactions between animated virtual characters and the augmented reality toy based on a set of dynamics rules. The dynamics rules may define dynamics interactions for visual scenes displayed on the augmented reality device. In one embodiment, the dynamics rules used may be determined based on the type of augmented reality toy in the visual scene. As an example, a spaceship augmented reality toy could be associated with a set of low-gravity dynamics rules and the augmented reality component 110, upon detecting the visual scene includes the spaceship toy, could apply the set of low-gravity dynamics rules to virtual characters within the augmented reality scene.
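The per-toy dynamics rules above can be sketched as a rule-set lookup keyed by the detected object type: a spaceship toy selects a low-gravity constant, everything else falls back to a default. The constants and names are illustrative assumptions.

```python
# Hypothetical dynamics rule sets keyed by toy type.
DYNAMICS_RULES = {
    "default": {"gravity": 9.8},
    "spaceship": {"gravity": 1.6},   # low-gravity rule set
}

def step_vertical_velocity(object_type, velocity, dt):
    """Advance a virtual character's vertical velocity one timestep under
    the gravity rule associated with the detected toy type."""
    rules = DYNAMICS_RULES.get(object_type, DYNAMICS_RULES["default"])
    return velocity - rules["gravity"] * dt

v_castle = step_vertical_velocity("toy_castle", 0.0, 1.0)
v_ship = step_vertical_velocity("spaceship", 0.0, 1.0)
```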
While the aforementioned example describes an embodiment configured to augment a three-dimensional toy’s appearance, such an example is without limitation and is provided for illustrative purposes only. Moreover, it is explicitly contemplated that embodiments can be configured to interact with two-dimensional objects as well. For example, an augmented reality device could be configured to recognize and augment images shown on the pages of a story book. As an example, a first page of a story book could include a picture of a castle and could include an embedded marker (e.g., a unique symbol embedded in the page using an infrared ink). In such an example, the augmented reality component 110 could capture a visual scene including the page of the book and could further detect the embedded marker (e.g., using an infrared camera). The augmented reality component 110 could then render frames depicting the castle on the page as having one or more augmentations. For example, the augmented castle could appear to stand out from the page and have a three-dimensional appearance. Additionally, a virtual character could be shown moving about the page and interacting with the castle. Advantageously, doing so allows the two-dimensional picture of the castle to, in effect, “come alive” with an altered appearance and/or interactions with virtual characters in the augmented reality world, thereby enhancing the user’s experience with the story book.
FIG. 2 illustrates an augmented reality device viewing an augmented reality toy, according to one embodiment described herein. As shown, the scene 240 includes a toy castle 210 sitting atop a table 215. Additionally, the scene 240 includes an augmented reality device 100 that is viewing the toy castle 210 and is rendering and displaying one or more frames depicting an augmented reality scene on its display device 245. As discussed above, the augmented reality component 110 could identify the type of the toy castle 210 (e.g., based on the appearance of the toy castle, based on a type identifier encoded in a signal, based on an embedded marker, etc.) and could augment the appearance of the toy castle 210 as shown on the augmented reality device 100 based on the determined type.
As shown on the display device 245, a number of different augmentations have been applied to the toy castle. FIG. 3 shows a screenshot of the screen of the augmented reality device shown in FIG. 2, according to one embodiment described herein. Here, the screenshot 300 includes a visual depiction 310 of the castle 210 and a number of different augmentations. The augmentations include a moat 320, fireworks 330, ponies 340, a tree 350 and a drawbridge 360. Of note, the physical toy castle 210 depicted in FIG. 2 does not include any of the augmentations 320, 330, 340, 350 and 360, but instead these augmentations have been created and applied to the castle’s appearance on the augmented reality device 100 based on a determination of the toy type of the physical castle 210.
Moreover, the various augmentations may be static virtual objects or animated virtual objects. For instance, in the depicted example, the drawbridge 360 could appear as static, while the fireworks 330 could appear as an animated virtual object. Additionally, the various virtual objects depicted in the augmented reality scene may appear to interact with one another. For instance, the ponies 340 could appear to walk around the augmented reality scene and could enter the castle by crossing the drawbridge 360. Furthermore, in some situations, the virtual objects 340 may appear as fully or partially occluded by other virtual objects or by the toy castle 310. For example, as the ponies cross the drawbridge, they could be partially or fully occluded by the castle 310. In one embodiment, the augmented reality component 110 is configured to optimize the depicted scene by performing occlusion culling operations for one or more of the virtual objects.
In one embodiment, the virtual objects in the augmented reality scene are depicted as visually affecting other objects (both virtual and physical) within the scene. For instance, as the fireworks 330 explode, the augmented reality component 110 could augment the appearance of the castle 310 (i.e., the physical toy) so that light from the exploding fireworks appears to reflect off the castle 310. Additionally, the augmented reality component 110 could augment the appearance of the water 320 (i.e., a virtual object) to show the reflection of the exploding fireworks.
As discussed above, the augmented reality component 110 is configured to determine an object type of the toy castle and to generate the augmentations based on the determined type. Thus, while the particular castle includes augmentations such as fireworks 330 and a moat 320, a different toy castle (e.g., an ice castle) could include other, different augmentations (e.g., Eskimos, polar bears, etc.). More generally, it is broadly contemplated that any type of augmented reality toy and virtual objects may be used, consistent with the functionality described herein. Advantageously, by determining an object type of the augmented reality toy and by generating the augmented reality scene based on the determined object type, embodiments can realistically depict augmentations for the augmented reality toy and can tailor the augmentations to be contextually relevant to the augmented reality toy.
In one embodiment, the augmented reality component 110 is configured to render virtual characters that interact in different ways with the physical toy, based on a state of the physical toy. For instance, assume that a second castle toy includes a physical drawbridge that a child can open and close. When the second castle toy is viewed with the augmented reality device, the augmented reality component 110 could determine which state the drawbridge is currently in and could render animated virtual characters accordingly. Thus, for example, an animated virtual character could appear to walk across the drawbridge and enter the castle when the drawbridge is lowered (i.e., a first state), and the animated virtual character could appear to be trapped either inside or outside of the castle when the drawbridge is raised (i.e., a second state). Advantageously, doing so provides a more immersive and interactive experience for users of the augmented reality device.
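The state-dependent rendering in the drawbridge example can be sketched as a small state machine: the toy's current physical configuration selects which character behavior is valid. State and behavior names below are hypothetical labels for illustration.

```python
from enum import Enum

class DrawbridgeState(Enum):
    LOWERED = "lowered"
    RAISED = "raised"

def character_behavior(state: DrawbridgeState, inside_castle: bool) -> str:
    """Select an animation for a virtual character based on the physical
    toy's detected state."""
    if state is DrawbridgeState.LOWERED:
        return "cross_drawbridge"
    return "trapped_inside" if inside_castle else "trapped_outside"
```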
In a particular embodiment, the augmented reality component 110 on a first augmented reality device is configured to synchronize with a second augmented reality component 110 on a second augmented reality device. Such synchronization can occur between local augmented reality components, remote augmented reality components, or a combination therebetween. For example, two separate users with two separate augmented reality devices could view the same toy castle at the same time. In such an example, the augmented reality components 110 on each of the augmented reality devices could synchronize with one another, such that each of the two users sees the same augmentations to the castle occurring at the same time. As another example, a remote user could be viewing a separate instance of the toy castle remotely from the two users, but could be in contact (e.g., via video conferencing) with the two users. In such an example, the augmented reality component 110 on the remote user’s augmented reality device could synchronize (e.g., over a network such as the Internet) with the augmented reality devices of the two local users, such that the two local users and the remote user all see the same augmentations occurring at the same time on their augmented reality devices. Advantageously, doing so helps to provide a more immersive experience for users of the augmented reality device, when there are multiple users viewing the same physical (and/or different instances of the physical object) at the same time using augmented reality devices.
FIG. 4 is a flow diagram illustrating a method for displaying an augmented reality toy on an augmented reality device, according to one embodiment described herein. As shown, the method 400 begins at block 410, where the augmented reality component 110 captures a visual scene. For instance, the augmented reality component 110 could use one or more cameras 120 on the augmented reality device 100 to capture the visual scene.
The augmented reality component 110 then identifies an augmented reality toy within the visual scene and determines an object type of the augmented reality toy (block 415). As discussed above, the augmented reality component 110 could identify the augmented reality toy using a variety of techniques. For instance, the augmented reality component 110 could be preconfigured with geometric data (e.g., size, shape, coloration, etc.) for various types of augmented reality toys, and the augmented reality component 110 could use the geometric data to identify the augmented reality toy within the visual scene as a particular object type. As an example, where the augmented reality component 110 is configured with the geometric data for several different types of augmented reality toys, the augmented reality component 110 could determine which set of geometric data best matches the toy in the visual scene.
In one embodiment, the augmented reality component 110 is configured to identify the augmented toy by detecting a marker embedded within the toy. For example, a QR code could be embedded in the toy using an infrared material (e.g., an infrared ink), such that the QR code is difficult if not impossible to see with the human eye. In such an example, the augmented reality component 110 could use one or more infrared cameras on the augmented reality device to detect the QR code and could then determine an object type corresponding to the detected QR code.
In a particular embodiment, the augmented reality toy is configured with an RF transmitter (or transceiver) that transmits a signal encoded with an object type identifier. The augmented reality component 110 could then use an RF receiver (or transceiver) in the augmented reality device 100 to receive the signal. The augmented reality component 110 could then analyze the received signal to determine the object type identifier encoded within it and could determine the object type based on this identifier. Generally, any of the aforementioned techniques may be used for identifying the augmented reality toy, or a combination of these techniques may be used. For instance, the augmented reality component 110 could identify the toy as a first object type based on the geometric data and could confirm the identification by verifying that a signal specifying the first object type is being broadcast.
Upon identifying the augmented reality toy, the augmented reality component 110 augments the appearance of the augmented reality toy within the visual scene displayed on the augmented reality device (block 420). For instance, the augmented reality component 110 could determine one or more augmentations associated with the determined object type of the augmented reality toy, and could render one or more frames depicting the determined augmentation(s) applied to the captured visual scene. As an example, where the augmented reality toy is a toy castle, the augmented reality component 110 could determine that this toy is associated with a fireworks augmentation. The augmented reality component 110 could then render frames depicting virtual fireworks going off above the toy castle. Additionally, the rendered frames may augment the appearance of the toy castle as well based on the applied augmentations. For instance, the toy castle's appearance could be augmented so that it appears light from the virtual fireworks is reflecting onto the castle displayed on the augmented reality device 100. Once the augmented reality component 110 renders frames depicting one or more augmentations to the visual scene, the frames are output for display (block 425) and the method 400 ends.
Additionally, the augmented reality component 110 could be configured to use the predefined geometric data for the augmented reality toy in generating the augmentations. For instance, assume that the toy castle includes a flight of stairs leading up to the walls of the castle, and that the predefined geometric data specifies the shape and dimensions of these stairs. The augmented reality component 110 could then use the predefined geometric data to depict a virtual character walking up the flight of stairs. By preconfiguring the augmented reality device 100 with data specifying the size and shape of the stairs, the augmented reality component 110 can more accurately and realistically render frames depicting a virtual character walking up the stairs. Doing so enhances the overall appearance of the rendered frames and thus may improve the user’s experience with the augmented reality device and the augmented reality toy as well.
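The capture/identify/augment flow of blocks 410 through 425 can be sketched in a few lines of Python. This is purely illustrative: the function names, the geometry fields, and the best-match scoring rule below are my own assumptions, not part of the described embodiment.

```python
# Hypothetical geometric data preconfigured on the device, keyed by object type.
GEOMETRY_DB = {
    "castle": {"width": 30, "height": 40, "color": "grey"},
    "pirate_ship": {"width": 50, "height": 25, "color": "brown"},
}

def identify_object_type(observed, db=GEOMETRY_DB):
    """Pick the object type whose stored geometry best matches the observation
    (block 415): count how many stored attributes agree with what was seen."""
    def score(stored):
        return sum(1 for key, value in stored.items() if observed.get(key) == value)
    return max(db, key=lambda name: score(db[name]))

def augment(object_type, state):
    """Choose an augmentation (block 420) based on object type and the
    physical state of the toy, e.g. the drawbridge example above."""
    if object_type == "castle":
        if state == "drawbridge_lowered":
            return "character crosses drawbridge"
        return "character waits at the gate"
    return "no augmentation"
```

Under this sketch, the same physical castle produces different virtual behavior depending on its drawbridge state, which is the state-dependent rendering the paragraph above describes.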
FIG. 5 is a block diagram illustrating an augmented reality device configured with a surface painting component, according to one embodiment described herein. In this example, the augmented reality device 100 includes, without limitation, a processor 500, storage 505, memory 510, I/O devices 520, a network interface 525, camera devices 120, a display device 130 and an accelerometer device 140. Generally, the processor 500 retrieves and executes programming instructions stored in the memory 510. Processor 500 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, GPUs having multiple execution paths, and the like. The memory 510 is generally included to be representative of a random access memory. The network interface 525 enables the augmented reality device 100 to connect to a data communications network (e.g., wired Ethernet connection or an 802.11 wireless network). Further, while the depicted embodiment illustrates the components of a particular augmented reality device 100, one of ordinary skill in the art will recognize that augmented reality devices may use a variety of different hardware architectures. Moreover, it is explicitly contemplated that embodiments of the invention may be implemented using any device or computer system capable of performing the functions described herein.
The memory 510 represents any memory sufficiently large to hold the necessary programs and data structures. Memory 510 could be one or a combination of memory devices, including Random Access Memory, nonvolatile or backup memory (e.g., programmable or Flash memories, read-only memories, etc.). In addition, memory 510 and storage 505 may be considered to include memory physically located elsewhere; for example, on another computer communicatively coupled to the augmented reality device 100. Illustratively, the memory 510 includes an augmented reality component 110 and an operating system 515. The operating system 515 generally controls the execution of application programs on the augmented reality device 100. Examples of operating system 515 include UNIX, a version of the Microsoft Windows.RTM. operating system, and distributions of the Linux.RTM. operating system. (Note: Linux is a trademark of Linus Torvalds in the United States and other countries.) Additional examples of operating system 515 include custom operating systems for gaming consoles, including the custom operating systems for systems such as the Nintendo DS.RTM. and Sony PSP.RTM..
The I/O devices 520 represent a wide variety of input and output devices, including displays, keyboards, touch screens, and so on. For instance, the I/O devices 520 may include a display device used to provide a user interface. As an example, the display may provide a touch sensitive surface allowing the user to select different applications and options within an application (e.g., to select an instance of digital media content to view). Additionally, the I/O devices 520 may include a set of buttons, switches or other physical device mechanisms for controlling the augmented reality device 100. For example, the I/O devices 520 could include a set of directional buttons used to control aspects of a video game played using the augmented reality device 100.
The augmented reality component 110 generally is configured to render frames for display on the augmented reality device that depict an augmented reality toy. The augmented reality component 110 could capture a visual scene for display. Here, the visual scene could include a first physical object captured using the camera devices 120. The augmented reality component 110 could identify the first physical object as a first predetermined object type, based on one or more object identifiers associated with the first physical object. Examples of such identifiers may include an embedded marker within the first physical object, a signal received from a transmitter associated with the first physical object, and so on. The augmented reality component 110 may also retrieve predefined geometric information corresponding to the first predetermined object type. The augmented reality component 110 may then render a sequence of frames for display in which an appearance of the first physical object is augmented, based on the predefined geometric information.
In the preceding, reference is made to embodiments of the invention. However, the invention is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the invention. Furthermore, although embodiments of the invention may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the invention. Thus, the preceding aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
Aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Embodiments of the invention may be provided to end users through a cloud computing infrastructure. Cloud computing generally refers to the provision of scalable computing resources as a service over a network. More formally, cloud computing may be defined as a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Thus, cloud computing allows a user to access virtual computing resources (e.g., storage, data, applications, and even complete virtualized computing systems) in “the cloud,” without regard for the underlying physical systems (or locations of those systems) used to provide the computing resources.
Typically, cloud computing resources are provided to a user on a pay-per-use basis, where users are charged only for the computing resources actually used (e.g. an amount of storage space consumed by a user or a number of virtualized systems instantiated by the user). A user can access any of the resources that reside in the cloud at any time, and from anywhere across the Internet. In context of the present invention, a user may access environmental illumination data available in the cloud. For example, an augmented reality component 110 could execute on an augmented reality device 100 operated by a user and collect environment illumination data pertaining to the user's current environment. In such a case, the augmented reality component 110 could transmit the collected data to a computing system in the cloud for storage. When the user again returns to the same environment, the augmented reality component 110 could query the computer system in the cloud to retrieve the environmental illumination data and could then use the retrieved data to realistically model lighting effects on painted objects within an augmented reality scene displayed on the augmented reality device 100. Doing so allows a user to access this information from any device or computer system attached to a network connected to the cloud (e.g., the Internet).
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
python — How do I properly ignore exceptions?

When you just want to do a try-except without handling the exception, how do you do that in Python?

Is the following the right way to do it?
try:
shutil.rmtree(path)
except:
pass
How to properly ignore exceptions

There are several ways to do this.

However, the choice of example has a simple solution that does not cover the general case.

Specific to this example:

Instead of
try:
shutil.rmtree(path)
except:
pass
do this:
shutil.rmtree(path, ignore_errors=True)
That is an argument specific to shutil.rmtree. You can see the help for it by doing the following, and you will find that it can also allow for error-handling functionality as well.
>>> import shutil
>>> help(shutil.rmtree)
Since that only covers the narrow case of the example, I will further demonstrate how to handle this if those keyword arguments do not exist.

The general approach

New in Python 3.4:

You can import the suppress context manager:

from contextlib import suppress

But only suppress the most specific exception:

with suppress(FileNotFoundError):
    shutil.rmtree(path)

You will silently ignore a FileNotFoundError:
>>> with suppress(FileNotFoundError):
... shutil.rmtree('bajkjbkdlsjfljsf')
...
>>>
docs
As with any other mechanism that completely suppresses exceptions, this context manager should be used only to cover very specific errors where silently continuing with program execution is known to be the right thing to do.

Note that suppress and FileNotFoundError are only available in Python 3.

If you want your code to work in Python 2 as well, see the next section:

Python 2 & 3:

When you just want to do a try/except without handling the exception, how do you do it in Python?

Is the following the right way to do it?

try:
    shutil.rmtree(path)
except:
    pass

For Python 2-compatible code, pass is the correct way to write a no-op statement. But when you do a bare except:, that is the same as doing except BaseException:, which includes GeneratorExit, KeyboardInterrupt, and SystemExit, and in general you do not want to catch those things.

In fact, you should be as specific in naming the exception as you can.

Here is part of the Python (2) exception hierarchy, and as you can see, if you catch more general exceptions, you can hide problems you did not expect:
BaseException
+-- SystemExit
+-- KeyboardInterrupt
+-- GeneratorExit
+-- Exception
+-- StopIteration
+-- StandardError
| +-- BufferError
| +-- ArithmeticError
| | +-- FloatingPointError
| | +-- OverflowError
| | +-- ZeroDivisionError
| +-- AssertionError
| +-- AttributeError
| +-- EnvironmentError
| | +-- IOError
| | +-- OSError
| | +-- WindowsError (Windows)
| | +-- VMSError (VMS)
| +-- EOFError
... and so on
You probably want to catch an OSError here, and maybe the exception you do not care about is if there is no directory.

We can get that specific error number from the errno library, and reraise if we do not have it:
import errno
try:
shutil.rmtree(path)
except OSError as error:
if error.errno == errno.ENOENT: # no such file or directory
pass
else: # we had an OSError we didn't expect, so reraise it
raise
Note that a bare raise raises the original exception, which is probably what you want in this case. Written more concisely, since we do not really need to explicitly pass with code in the exception handling:
try:
shutil.rmtree(path)
except OSError as error:
if error.errno != errno.ENOENT: # no such file or directory
raise
When you just want to do a try-catch without handling the exception, how do you do it in Python?

It depends on what you mean by "handling."

If you mean to catch it without taking any action, the code you posted will work.

If you mean that you want to take action on an exception without stopping the exception from going up the stack, then you want something like this:
try:
do_something()
except:
handle_exception()
raise #re-raise the exact same exception that was thrown
Handling exceptions in Python: if you have some suspicious code that may raise an exception, you can defend your program by placing the suspicious code in a try: block.
try:
# your statements .............
except ExceptionI:
# your statments.............
except ExceptionII:
# your statments..............
else:
# your statments
In Python we handle exceptions similarly to other languages, but with some differences in syntax, for example:

try:
    # your code in which an exception can occur
except <a particular exception name can go here>:
    # the exception can also be referenced here, e.g. ZeroDivisionError()
    # now your code
# a finally block can also be added
finally:
    # your code...

It is generally considered best practice to only catch the errors you are interested in. In the case of shutil.rmtree, it is probably OSError:
>>> shutil.rmtree("/fake/dir")
Traceback (most recent call last):
[...]
OSError: [Errno 2] No such file or directory: '/fake/dir'
If you want to silently ignore that error, you would do:
try:
shutil.rmtree(path)
except OSError:
pass
Why? Say you (somehow) accidentally pass the function an integer instead of a string, like:

shutil.rmtree(2)

It will give the error "TypeError: coercing to Unicode: need string or buffer, int found"; you probably do not want to ignore that, and it can be difficult to debug.

If you definitely want to ignore all errors, catch Exception rather than a bare except: statement. Again, why?

Not specifying an exception catches every exception, including the SystemExit exception which, for example, sys.exit() uses:
>>> try:
... sys.exit(1)
... except:
... pass
...
>>>
Compare that to the following, which correctly exits:
>>> try:
... sys.exit(1)
... except Exception:
... pass
...
shell:~$
If you want to write ever better-behaved code, the OSError exception can represent various errors, but in the example above we only want to ignore Errno 2, so we could be even more specific:
try:
shutil.rmtree(path)
except OSError, e:
if e.errno == 2:
# suppress "No such file or directory" error
pass
else:
# reraise the exception, as it's an unexpected error
raise
You could also import errno and change the if to if e.errno == errno.ENOENT:.
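Putting the errno.ENOENT suggestion together, a self-contained version might look like this (the helper name quiet_rmtree is mine, not from the answer):

```python
import errno
import shutil

def quiet_rmtree(path):
    """Remove a directory tree, ignoring only 'No such file or directory'."""
    try:
        shutil.rmtree(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            # an OSError we did not expect, so reraise it
            raise

# A missing path is silently ignored; any other OSError still propagates.
quiet_rmtree("/tmp/this_path_really_should_not_exist_123456789")
```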
First, I will quote Jack O'Connor's answer from another thread. The referenced thread was closed, so I am writing it here:

"There is a new way to do this coming in Python 3.4:
from contextlib import suppress
with suppress(Exception):
# your code
Here is the commit that added it: http://hg.python.org/cpython/rev/406b47c64480

And here is the author, Raymond Hettinger, talking about this and all sorts of other Python hotness: https://youtu.be/OSGv2VnC0go?t=43m23s

My addition to this is the Python 2.7 equivalent:
from contextlib import contextmanager
@contextmanager
def ignored(*exceptions):
try:
yield
except exceptions:
pass
Then you use it just like in Python 3.4:
with ignored(Exception):
# your code
Java regular expressions

Eleven years ago I was busy preparing for the college entrance exam. I remember being fascinated by computers back then; I bought a computer newspaper almost every week to read about hardware and software, and would sneak it out to read in class.

By chance I came across Java, the language of internet development, and downloaded some beginner video tutorials from Shangxuetang (尚学堂). The lecturer, Ma Shibing, taught with great humour; listening to his lectures was a real pleasure, and from then on I was on the road into IT.

Recently I wanted to build a data scraper, which requires regular expressions, and I also felt like revisiting Ma Shibing's old lecture videos, so I organised the following Java regular-expression study notes:

1. Regular expression basics
2. Scraping email addresses from a page
3. Code-line statistics

Regular expression basics:
public static void main(String[] args) {
    // A first look at Java regular expressions
    p("abc".matches("..."));                // a single "." matches one character
    p("a8729a".replaceAll("\\d", "-"));     // replacement; in Java two \\ stand for one \

    // compile first, then run
    Pattern p = Pattern.compile("[a-z]{3}");
    Matcher m = p.matcher("fgh");
    p(m.matches());
    p("fgh".matches("[a-z]{3}"));           // shorthand for the above


    // a first look at . * + ?
    p("a".matches("."));                    // . matches one character
    p("aa".matches("aa"));
    p("aaaa".matches("a*"));                // * means zero or more
    p("".matches("a*"));
    p("aaaa".matches("a+"));                // + means one or more
    p("aaaa".matches("a?"));                // ? means zero or one
    p("".matches("a?"));
    p("a".matches("a?"));
    p("214523145234532".matches("\\d{3,100}"));  // digits, 3 to 100 of them
    p("192.168.0.aaa".matches("\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}"));  // IP address validation
    p("192".matches("[0-2][0-9][0-9]"));


    // ranges
    p("a".matches("[abc]"));                // one of the characters a, b or c
    p("a".matches("[^abc]"));               // any character that is not a, b or c
    p("A".matches("[a-zA-Z]"));
    p("A".matches("[a-z]|[A-Z]"));
    p("A".matches("[a-z[A-Z]]"));
    p("R".matches("[A-Z&&[RFG]]"));         // intersection


    // character classes: \s \w \d \b \
    /**
     \s: matches \t\n\x0B\f\r, and \S: anything that is not \s
     \w: matches [a-zA-Z_0-9], the word characters, and \W: anything that is not \w
     \d: matches 0-9, and \D: anything that is not 0-9
     */
    p(" \n\r\t".matches("\\s{4}"));
    p(" ".matches("\\S"));
    p("a_8".matches("\\w{3}"));
    p("abc888&^%".matches("[a-z]{1,3}\\d+[&^#%]+"));
    p("\\".matches("\\\\"));


    // POSIX style (the POSIX/Linux standard notation)
    p("a".matches("\\p{Lower}"));

    // boundaries: ^ $ \b (word boundary); note that ^ inside [] means negation
    p("hello sir".matches("^h.*"));         // ^ : starts with h
    p("hello sir".matches(".*ir$"));        // $ : zero or more characters, ending in ir
    p("hello sir".matches("^h[a-z]{1,3}o\\b.*"));
    p("hellosir".matches("^h[a-z]{1,3}o\\b.*"));
    // white lines
    p(" \n".matches("^[\\s&&[^\\n]]*\\n$"));
    p("aaa 8888c".matches(".*\\d{4}."));
    p("aaa 8888c".matches(".*\\b\\d{4}."));
    p("aaa8888c".matches(".*\\d{4}."));
    p("aaa8888c".matches(".*\\b\\d{4}."));


    // email
    p("[email protected]".matches("[\\w[.-]]+@[\\w[.-]]+\\.[\\w]+"));

    /*// lookup methods: matches, find, lookingAt
    Pattern p = Pattern.compile("\\d{3,5}");
    String s = "123-34345-234-00";
    Matcher m = p.matcher(s);
    p(m.matches());
    m.reset();
    p(m.find());
    p(m.start() + "-" + m.end());
    p(m.find());
    p(m.start() + "-" + m.end());
    p(m.find());
    p(m.start() + "-" + m.end());
    p(m.find());
    //p(m.start() + "-" + m.end());
    p(m.lookingAt());
    p(m.lookingAt());
    p(m.lookingAt());
    p(m.lookingAt());*/


    // string replacement
    /*
    Pattern p = Pattern.compile("java", Pattern.CASE_INSENSITIVE);
    Matcher m = p.matcher("java Java JAVa JaVa IloveJAVA you hateJava afasdfasdf");
    StringBuffer buf = new StringBuffer();
    int i=0;
    while(m.find()) {
        i++;
        if(i%2 == 0) {
            m.appendReplacement(buf, "java");
        } else {
            m.appendReplacement(buf, "JAVA");
        }
    }
    m.appendTail(buf);
    p(buf);
    */

    // grouping
    /*
    Pattern p = Pattern.compile("(\\d{3,5})([a-z]{2})");
    String s = "123aa-34345bb-234cc-00";
    Matcher m = p.matcher(s);
    while(m.find()) {
        p(m.group());
    }
    */

    // quantifiers
    /*
    Pattern p = Pattern.compile(".{3,10}+[0-9]");
    String s = "aaaa5bbbb68";
    Matcher m = p.matcher(s);
    if(m.find())
        p(m.start() + "-" + m.end());
    else
        p("not match!");
    */

    // non-capturing groups
    /*
    Pattern p = Pattern.compile(".{3}(?=a)");
    String s = "444a66b";
    Matcher m = p.matcher(s);
    while(m.find()) {
        p(m.group());
    }
    */

    // back references
    /*
    Pattern p = Pattern.compile("(\\d(\\d))\\2");
    String s = "122";
    Matcher m = p.matcher(s);
    p(m.matches());
    */

    // flags shorthand
    //Pattern p = Pattern.compile("java", Pattern.CASE_INSENSITIVE);
    // p("Java".matches("(?i)(java)"));
}

public static void p(Object o) {
    System.out.println(o);
}
Scraping email addresses from a page

public static void main(String[] args) {
    try {
        BufferedReader br = new BufferedReader(new FileReader("D:\\share\\courseware\\1043633.html"));
        String line = "";
        while((line=br.readLine()) != null) {
            parse(line);
        }
    } catch (FileNotFoundException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}

private static void parse(String line) {
    Pattern p = Pattern.compile("[\\w[.-]]+@[\\w[.-]]+\\.[\\w]+");
    Matcher m = p.matcher(line);
    while(m.find()) {
        System.out.println(m.group());
    }
}
Code-line statistics

static long normalLines = 0;
static long commentLines = 0;
static long whiteLines = 0;

public static void main(String[] args) {
    File f = new File("D:\\share\\JavaProjects\\TankWar1.9.11\\src");
    File[] codeFiles = f.listFiles();
    for(File child : codeFiles){
        if(child.getName().matches(".*\\.java$")) {
            parse(child);
        }
    }

    System.out.println("normalLines:" + normalLines);
    System.out.println("commentLines:" + commentLines);
    System.out.println("whiteLines:" + whiteLines);
}

private static void parse(File f) {
    BufferedReader br = null;
    boolean comment = false;
    try {
        br = new BufferedReader(new FileReader(f));
        String line = "";
        while((line = br.readLine()) != null) {
            line = line.trim();
            if(line.matches("^[\\s&&[^\\n]]*$")) {
                whiteLines ++;
            } else if (line.startsWith("/*") && !line.endsWith("*/")) {
                commentLines ++;
                comment = true;
            } else if (line.startsWith("/*") && line.endsWith("*/")) {
                commentLines ++;
            } else if (true == comment) {
                commentLines ++;
                if(line.endsWith("*/")) {
                    comment = false;
                }
            } else if (line.startsWith("//")) {
                commentLines ++;
            } else {
                normalLines ++;
            }
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if(br != null) {
            try {
                br.close();
                br = null;
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
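As a quick cross-check outside of Java, the same line-classification idea can be sketched in Python. The function and regex below are my own simplification, not from the original post, and unlike the Java version they do not carry block-comment state across lines:

```python
import re

def classify(line):
    """Classify one source line as 'white', 'comment', or 'normal'.
    Simplified: multi-line /* ... */ state is not tracked between calls."""
    line = line.strip()
    if re.match(r"^\s*$", line):
        return "white"
    if line.startswith("//") or line.startswith("/*"):
        return "comment"
    return "normal"

# Tally a tiny sample, mirroring the counters in the Java program.
sample = ["", "// a comment", "int x = 1;", "/* block */"]
counts = {}
for ln in sample:
    kind = classify(ln)
    counts[kind] = counts.get(kind, 0) + 1
```

Running the classifier interactively like this is a convenient way to experiment with the blank-line and comment-detection rules before porting them back into the Java parser.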
Follow me on WeChat to learn together and discuss more technical topics.
posted @ 2017-03-06 16:46 liyuan3210
Copyright © University of Cambridge. All rights reserved.
'Take Your Dog for a Walk' printed from http://nrich.maths.org/
Each day Mr Pearson takes his dog for a walk.
You can see him on the interactivity below.
Try moving Mr Pearson and his dog using your computer mouse. The graph shows how far Mr Pearson is walking from his house after a certain amount of time.
What happens to the graph once Mr Pearson gets back to his house after his walk?
Can you make a curved line on the graph?
Describe how Mr Pearson must walk to create this curve.
How must Mr Pearson walk to make the curve steeper?
And can you make the curve shallower? How does Mr Pearson walk this time?
Some graphs for you to try to reproduce can be found in the notes.
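To see why steady walking gives a straight line while speeding up gives a curve that gets steeper, it helps to tabulate some distances. The numbers below are made up purely for illustration:

```python
# Illustrative distance-time values (metres after t seconds).
def walk_constant(speed, t):
    # constant speed: distance grows by the same amount each second,
    # so the distance-time graph is a straight line
    return speed * t

def walk_speeding_up(a, t):
    # steadily speeding up: distance grows by a bigger amount each second,
    # so the distance-time graph curves upwards and gets steeper
    return 0.5 * a * t * t

line = [walk_constant(1.5, t) for t in range(5)]      # 0.0, 1.5, 3.0, 4.5, 6.0
curve = [walk_speeding_up(0.5, t) for t in range(5)]  # 0.0, 0.25, 1.0, 2.25, 4.0
```

The constant-speed values rise by the same 1.5 m every second, while the speeding-up values rise by a growing amount (0.25, then 0.75, then 1.25, then 1.75), which is exactly what makes that curve steeper as time goes on.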
How to Create a Real-Time Application with Vue.js and WebSockets

2023-06-16 11:39:14

Vue.js is a popular front-end framework that can be used to build all kinds of web applications. WebSockets is a protocol for two-way communication over a network, suited to applications that need real-time data transfer. In this article we will learn how to combine Vue.js with WebSockets to build a real-time application. First, let's look at how to use WebSockets in Vue.js.
// Create the WebSocket object
let socket = new WebSocket("ws://localhost:8080");

// Listen for the WebSocket open event
socket.addEventListener("open", () => {
    console.log("WebSocket connection opened");
});

// Listen for the WebSocket close event
socket.addEventListener("close", () => {
    console.log("WebSocket connection closed");
});

// Listen for the WebSocket error event
socket.addEventListener("error", (error) => {
    console.error("WebSocket error: " + error);
});

// Listen for the WebSocket message event
socket.addEventListener("message", (event) => {
    console.log("WebSocket message received: " + event.data);
});
In the code above, we first create a WebSocket object. Then we use the addEventListener method to listen for the WebSocket open, close, error, and message events. When a message event arrives, we print the message to the console.
// Vue.js component
Vue.component("chat-room", {
    template: `
        <div>
            <h1>Chat Room</h1>
            <ul>
                <li v-for="message in messages" :key="message">{{ message }}</li>
            </ul>
            <input v-model="messageText" @keyup.enter="sendMessage" placeholder="Type your message here">
        </div>
    `,
    data() {
        return {
            messageText: "",
            messages: []
        };
    },
    methods: {
        sendMessage() {
            // Send the message to the WebSocket server
            socket.send(this.messageText);
            // Add the message to the message list
            this.messages.push(this.messageText);
            // Clear the input field
            this.messageText = "";
        }
    }
});

// Create the Vue.js application
new Vue({
    el: "#app"
});
The code above is a Vue.js component that defines a chat room containing a message list and a text input box for sending new messages. When a message is sent, it is added to the messages array and sent to the server over the WebSocket. The component also uses the v-for directive to display every message in the list.

Next, let's look at how to use WebSockets on the server side.
const WebSocket = require("ws");
// 创建WebSocket服务器
const server = new WebSocket.Server({ port: 8080 });
// 监听WebSocket的连接事件
server.on("connection", (socket) => {
console.log("WebSocket连接已建立");
// 监听WebSocket的消息事件
socket.on("message", (message) => {
console.log("收到WebSocket消息:" + message);
//向所有客户端发送消息
server.clients.forEach((client) => {
if (client.readyState === WebSocket.OPEN) {
client.send(message);
}
});
});
// 监听WebSocket的关闭事件
socket.on("close", () => {
console.log("WebSocket连接已关闭");
});
});
上述代码是用Node.js编写的WebSocket服务器端代码。当客户端连接到服务器时,服务器将监听这个连接的消息事件。在收到任何消息时,服务器会将消息发送给所有连接到服务器的客户端。在连接关闭时,服务器会将一个连接关闭事件发送给所有客户端。
最后,我们需要启动WebSocket服务器。我们可以使用以下命令在命令行中运行服务器代码:
node server.js
Run the Vue.js application code in a browser and visit localhost:8080; send a few messages between two tabs and you will see the messages update in real time.
• Author:
• Original link:
Updated: 2023-06-16 11:39:14
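One practical refinement: the client above gives up silently if the connection drops. A common pattern is to reconnect with exponential backoff. The sketch below is my own illustration (the `backoffDelay` and `connectWithRetry` helpers and the delay values are assumptions, not part of the original tutorial):

```javascript
// Exponential backoff delay: base * 2^attempt, capped at maxMs.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Reconnect with growing delays whenever the socket closes.
// Assumes a browser (or ws-polyfilled) WebSocket constructor.
function connectWithRetry(url, attempt = 0) {
  const socket = new WebSocket(url);
  socket.addEventListener("close", () => {
    setTimeout(() => connectWithRetry(url, attempt + 1), backoffDelay(attempt));
  });
  return socket;
}
```

With this wrapper, a temporary server restart no longer strands the chat client; the delay doubles on each failed attempt up to a 30-second ceiling.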
/Doc/library/framework.rst
http://unladen-swallow.googlecode.com/

:mod:`FrameWork` --- Interactive application framework
======================================================

.. module:: FrameWork
   :platform: Mac
   :synopsis: Interactive application framework.
   :deprecated:


The :mod:`FrameWork` module contains classes that together provide a framework
for an interactive Macintosh application. The programmer builds an application
by creating subclasses that override various methods of the base classes,
thereby implementing the functionality wanted. Overriding functionality can
often be done on various different levels, i.e. to handle clicks in a single
dialog window in a non-standard way it is not necessary to override the complete
event handling.

.. note::

   This module has been removed in Python 3.x.

Work on the :mod:`FrameWork` has pretty much stopped, now that :mod:`PyObjC` is
available for full Cocoa access from Python, and the documentation describes
only the most important functionality, and not in the most logical manner at
that. Examine the source or the examples for more details. The following are
some comments posted on the MacPython newsgroup about the strengths and
limitations of :mod:`FrameWork`:


.. epigraph::

   The strong point of :mod:`FrameWork` is that it allows you to break into the
   control-flow at many different places. :mod:`W`, for instance, uses a different
   way to enable/disable menus and that plugs right in leaving the rest intact.
   The weak points of :mod:`FrameWork` are that it has no abstract command
   interface (but that shouldn't be difficult), that its dialog support is minimal
   and that its control/toolbar support is non-existent.

The :mod:`FrameWork` module defines the following functions:


.. function:: Application()

   An object representing the complete application. See below for a description of
   the methods. The default :meth:`__init__` routine creates an empty window
   dictionary and a menu bar with an apple menu.


.. function:: MenuBar()

   An object representing the menubar. This object is usually not created by the
   user.


.. function:: Menu(bar, title[, after])

   An object representing a menu. Upon creation you pass the ``MenuBar`` the menu
   appears in, the *title* string and a position (1-based) *after* where the menu
   should appear (default: at the end).


.. function:: MenuItem(menu, title[, shortcut, callback])

   Create a menu item object. The arguments are the menu to create, the item title
   string and optionally the keyboard shortcut and a callback routine. The callback
   is called with the arguments menu-id, item number within menu (1-based), current
   front window and the event record.

   Instead of a callable object the callback can also be a string. In this case
   menu selection causes the lookup of a method in the topmost window and the
   application. The method name is the callback string with ``'domenu_'``
   prepended.

   Calling the ``MenuBar`` :meth:`fixmenudimstate` method sets the correct dimming
   for all menu items based on the current front window.


.. function:: Separator(menu)

   Add a separator to the end of a menu.


.. function:: SubMenu(menu, label)

   Create a submenu named *label* under menu *menu*. The menu object is returned.


.. function:: Window(parent)

   Creates a (modeless) window. *Parent* is the application object to which the
   window belongs. The window is not displayed until later.


.. function:: DialogWindow(parent)

   Creates a modeless dialog window.


.. function:: windowbounds(width, height)

   Return a ``(left, top, right, bottom)`` tuple suitable for creation of a window
   of given width and height. The window will be staggered with respect to previous
   windows, and an attempt is made to keep the whole window on-screen. However, the
   window will always be the exact size given, so parts may be offscreen.


.. function:: setwatchcursor()

   Set the mouse cursor to a watch.


.. function:: setarrowcursor()

   Set the mouse cursor to an arrow.

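Tying the functions above together, a minimal application might be sketched as follows. This is a pseudocode-style illustration based only on the signatures documented here; :mod:`FrameWork` existed only on classic MacPython, so it cannot run on modern systems:

```
from FrameWork import Application, Menu, MenuItem

class MyApp(Application):
    def makeusermenus(self):
        # Append menus to self.menubar, as described above.
        self.filemenu = Menu(self.menubar, "File")
        MenuItem(self.filemenu, "Quit", "Q", self.domenu_quit)

    def domenu_quit(self, *args):
        self._quit()   # preferred over raising self

    def getabouttext(self):
        return "A minimal FrameWork application."

MyApp().mainloop()
```
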
.. _application-objects:

Application Objects
-------------------

Application objects have the following methods, among others:


.. method:: Application.makeusermenus()

   Override this method if you need menus in your application. Append the menus to
   the attribute :attr:`menubar`.


.. method:: Application.getabouttext()

   Override this method to return a text string describing your application.
   Alternatively, override the :meth:`do_about` method for more elaborate "about"
   messages.


.. method:: Application.mainloop([mask[, wait]])

   This routine is the main event loop, call it to set your application rolling.
   *Mask* is the mask of events you want to handle, *wait* is the number of ticks
   you want to leave to other concurrent applications (default 0, which is probably
   not a good idea). While raising *self* to exit the mainloop is still supported
   it is not recommended: call ``self._quit()`` instead.

   The event loop is split into many small parts, each of which can be overridden.
   The default methods take care of dispatching events to windows and dialogs,
   handling drags and resizes, Apple Events, events for non-FrameWork windows, etc.

   In general, all event handlers should return ``1`` if the event is fully handled
   and ``0`` otherwise (because the front window was not a FrameWork window, for
   instance). This is needed so that update events and such can be passed on to
   other windows like the Sioux console window. Calling :func:`MacOS.HandleEvent`
   is not allowed within *our_dispatch* or its callees, since this may result in an
   infinite loop if the code is called through the Python inner-loop event handler.


.. method:: Application.asyncevents(onoff)

   Call this method with a nonzero parameter to enable asynchronous event handling.
   This will tell the inner interpreter loop to call the application event handler
   *async_dispatch* whenever events are available. This will cause FrameWork window
   updates and the user interface to remain working during long computations, but
   will slow the interpreter down and may cause surprising results in non-reentrant
   code (such as FrameWork itself). By default *async_dispatch* will immediately
   call *our_dispatch* but you may override this to handle only certain events
   asynchronously. Events you do not handle will be passed to Sioux and such.

   The old on/off value is returned.


.. method:: Application._quit()

   Terminate the running :meth:`mainloop` call at the next convenient moment.


.. method:: Application.do_char(c, event)

   The user typed character *c*. The complete details of the event can be found in
   the *event* structure. This method can also be provided in a ``Window`` object,
   which overrides the application-wide handler if the window is frontmost.


.. method:: Application.do_dialogevent(event)

   Called early in the event loop to handle modeless dialog events. The default
   method simply dispatches the event to the relevant dialog (not through the
   ``DialogWindow`` object involved). Override if you need special handling of
   dialog events (keyboard shortcuts, etc).


.. method:: Application.idle(event)

   Called by the main event loop when no events are available. The null-event is
   passed (so you can look at mouse position, etc).

.. _window-objects:

Window Objects
--------------

Window objects have the following methods, among others:


.. method:: Window.open()

   Override this method to open a window. Store the Mac OS window-id in
   :attr:`self.wid` and call the :meth:`do_postopen` method to register the window
   with the parent application.


.. method:: Window.close()

   Override this method to do any special processing on window close. Call the
   :meth:`do_postclose` method to cleanup the parent state.


.. method:: Window.do_postresize(width, height, macoswindowid)

   Called after the window is resized. Override if more needs to be done than
   calling ``InvalRect``.


.. method:: Window.do_contentclick(local, modifiers, event)

   The user clicked in the content part of a window. The arguments are the
   coordinates (window-relative), the key modifiers and the raw event.


.. method:: Window.do_update(macoswindowid, event)

   An update event for the window was received. Redraw the window.


.. method:: Window.do_activate(activate, event)

   The window was activated (``activate == 1``) or deactivated (``activate == 0``).
   Handle things like focus highlighting, etc.


.. _controlswindow-object:

ControlsWindow Object
---------------------

ControlsWindow objects have the following methods besides those of ``Window``
objects:


.. method:: ControlsWindow.do_controlhit(window, control, pcode, event)

   Part *pcode* of control *control* was hit by the user. Tracking and such has
   already been taken care of.


.. _scrolledwindow-object:

ScrolledWindow Object
---------------------

ScrolledWindow objects are ControlsWindow objects with the following extra
methods:


.. method:: ScrolledWindow.scrollbars([wantx[, wanty]])

   Create (or destroy) horizontal and vertical scrollbars. The arguments specify
   which you want (default: both). The scrollbars always have minimum ``0`` and
   maximum ``32767``.


.. method:: ScrolledWindow.getscrollbarvalues()

   You must supply this method. It should return a tuple ``(x, y)`` giving the
   current position of the scrollbars (between ``0`` and ``32767``). You can return
   ``None`` for either to indicate the whole document is visible in that direction.


.. method:: ScrolledWindow.updatescrollbars()

   Call this method when the document has changed. It will call
   :meth:`getscrollbarvalues` and update the scrollbars.


.. method:: ScrolledWindow.scrollbar_callback(which, what, value)

   Supplied by you and called after user interaction. *which* will be ``'x'`` or
   ``'y'``, *what* will be ``'-'``, ``'--'``, ``'set'``, ``'++'`` or ``'+'``. For
   ``'set'``, *value* will contain the new scrollbar position.


.. method:: ScrolledWindow.scalebarvalues(absmin, absmax, curmin, curmax)

   Auxiliary method to help you calculate values to return from
   :meth:`getscrollbarvalues`. You pass document minimum and maximum value and
   topmost (leftmost) and bottommost (rightmost) visible values and it returns the
   correct number or ``None``.


.. method:: ScrolledWindow.do_activate(onoff, event)

   Takes care of dimming/highlighting scrollbars when a window becomes frontmost.
   If you override this method, call this one at the end of your method.


.. method:: ScrolledWindow.do_postresize(width, height, window)

   Moves scrollbars to the correct position. Call this method initially if you
   override it.


.. method:: ScrolledWindow.do_controlhit(window, control, pcode, event)

   Handles scrollbar interaction. If you override it call this method first, a
   nonzero return value indicates the hit was in the scrollbars and has been
   handled.

.. _dialogwindow-objects:

DialogWindow Objects
--------------------

DialogWindow objects have the following methods besides those of ``Window``
objects:


.. method:: DialogWindow.open(resid)

   Create the dialog window, from the DLOG resource with id *resid*. The dialog
   object is stored in :attr:`self.wid`.


.. method:: DialogWindow.do_itemhit(item, event)

   Item number *item* was hit. You are responsible for redrawing toggle buttons,
   etc.
【Java】How to download a file from a URL
Sample
Main.java
Downloads the Yahoo logo
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
public class Main {
// ★ Rewrite these values for your own environment ★
private static final String InputData = "http://k.yimg.jp/images/top/sp2/cmn/logo-ns-131205.png";
private static final String OutputData = "C:\\Temp\\yahoobloglogo.png";
private static final int BufferSize = 4096;
public static void main(String[] args) {
try {
URL url = new URL(InputData);
HttpURLConnection urlConnection =
(HttpURLConnection) url.openConnection();
// If false, interactive dialogs with the user are not allowed.
urlConnection.setAllowUserInteraction(false);
// If true, the protocol automatically follows redirects.
urlConnection.setInstanceFollowRedirects(true);
// Set the URL request method to "GET"
urlConnection.setRequestMethod("GET");
urlConnection.connect();
// Get the status code from the HTTP response message
int httpStatusCode = urlConnection.getResponseCode();
if (httpStatusCode != HttpURLConnection.HTTP_OK) {
throw new Exception();
}
Main.writeStream(urlConnection.getInputStream(), OutputData);
System.out.println("Completed!!");
} catch (Exception ex) {
ex.printStackTrace();
}
}
private static void writeStream(InputStream inputStream, String outputPath)
throws Exception {
int availableByteNumber;
byte[] buffers = new byte[BufferSize];
try (DataInputStream dataInputStream = new DataInputStream(inputStream);
DataOutputStream outputStream = new DataOutputStream(
new BufferedOutputStream(new FileOutputStream(outputPath)))) {
while ((availableByteNumber = dataInputStream.read(buffers)) > 0) {
outputStream.write(buffers, 0, availableByteNumber);
}
} catch (Exception ex) {
throw ex;
}
}
}
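On Java 7 and later, the manual buffer-copy loop above can be replaced by java.nio.file.Files.copy. The sketch below is my own alternative version, not from the original post (class name and the command-line interface are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class SimpleDownload {
    // Copies everything from the stream into the given file path.
    static void saveStream(InputStream in, Path out) throws IOException {
        Files.copy(in, out, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // Usage: java SimpleDownload <url> <output-path>
        if (args.length < 2) {
            System.out.println("Usage: java SimpleDownload <url> <output-path>");
            return;
        }
        HttpURLConnection conn = (HttpURLConnection) new URL(args[0]).openConnection();
        conn.setRequestMethod("GET");
        try (InputStream in = conn.getInputStream()) {
            saveStream(in, Paths.get(args[1]));
        }
    }
}
```

Files.copy handles the buffering internally, so there is no hand-written read/write loop to get wrong.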
Note: an implementation that did NOT work
* See the related article below for details
File download with HttpURLConnection produces an empty (0-byte) file
http://blogs.yahoo.co.jp/dk521123/34832039.html
Related articles
File download with HttpURLConnection produces an empty (0-byte) file
http://blogs.yahoo.co.jp/dk521123/34832039.html
【Servlet】File download
http://blogs.yahoo.co.jp/dk521123/33641421.html
【Servlet】Download a file while compressing it to ZIP at the same time
http://blogs.yahoo.co.jp/dk521123/33647497.html
√ 20 Two Column Proof Worksheet
Reteaching Worksheet Holmdel from two column proof worksheet, image source: yumpu.com
Two-column proof worksheets: lesson worksheets and printables covering geometric proofs, proofs in two-column form, proving congruence, introductions to two-column proofs, writing a reason for every statement, using CPCTC with triangle congruence, and proving statements about segments and angles in geometry. Two-column proofs worksheets for congruent triangles are also included. Examples, solutions, videos, games, and activities help geometry students learn how to use two-column proofs. A two-column proof consists of a list of statements and the reasons why those statements are true: the statements go in the left column and the reasons in the right column. Two-column proof triangle worksheets cover congruent triangle proofs (SSS, SAS, ASA, AAS), proving statements about segments and angles, and beginning proofs.
Gallery of √ 20 Two Column Proof Worksheet
- Two Column Proof Worksheet Proving Triangles Congruent Worksheet for 10th Grade
- Two Column Proof Worksheet Chapter 3 Pre Test Worksheet
- Two Column Proof Worksheet Worksheet Congruent Triangles
- Two Column Proof Worksheet Reteaching Worksheet Holmdel
- Two Column Proof Worksheet Lesson 2 4 Congruent Supplements and Complements Objective
- Two Column Proof Worksheet Chapter 3 Proving Statements In Geometry Jmap Google Free
- Two Column Proof Worksheet Simple Geometry Proofs Worksheets Worksheets for Kids
- Two Column Proof Worksheet Algebraic Proofs
- Two Column Proof Worksheet Verifying Segment Relationships Worksheet for 10th Grade
- Two Column Proof Worksheet Free Triangle Congruence Worksheets
- Two Column Proof Worksheet Geometry Review Two Column Proofs Worksheet for 9th 11th
- Two Column Proof Worksheet Proving Lines Parallel with Triangle Congruence Sss Sas Aas
- Two Column Proof Worksheet Parallel Lines Proofs Practice Worksheet for 8th 11th
- Two Column Proof Worksheet Introducing Geometry Proofs A New Approach
- Two Column Proof Worksheet Exercises and Practice Proofs for Chapter 7
- Two Column Proof Worksheet Two Column Proof Worksheet Worksheet List
- Two Column Proof Worksheet Proofs Worksheet 1 Answers Nidecmege
- Two Column Proof Worksheet Math 9 Module 6
- Two Column Proof Worksheet 12 13 Geometry 2 5 Worksheet
- Two Column Proof Worksheet Geometric Proof Worksheet for 10th Grade
path: root/lib/clogger.rb
blob: a64ca092a8187b981b71b4c52a88110e5d5a9267 (plain)
# -*- encoding: binary -*-
require 'rack'
# See the README for usage instructions
class Clogger
# the version of Clogger, currently 0.6.0
VERSION = '0.6.0'
# :stopdoc:
OP_LITERAL = 0
OP_REQUEST = 1
OP_RESPONSE = 2
OP_SPECIAL = 3
OP_EVAL = 4
OP_TIME_LOCAL = 5
OP_TIME_UTC = 6
OP_REQUEST_TIME = 7
OP_TIME = 8
OP_COOKIE = 9
# support nginx variables that are less customizable than our own
ALIASES = {
'$request_time' => '$request_time{3}',
'$time_local' => '$time_local{%d/%b/%Y:%H:%M:%S %z}',
'$msec' => '$time{3}',
'$usec' => '$time{6}',
'$http_content_length' => '$content_length',
'$http_content_type' => '$content_type',
}
SPECIAL_VARS = {
:body_bytes_sent => 0,
:status => 1,
:request => 2, # REQUEST_METHOD PATH_INFO?QUERY_STRING HTTP_VERSION
:request_length => 3, # env['rack.input'].size
:response_length => 4, # like body_bytes_sent, except "-" instead of "0"
:ip => 5, # HTTP_X_FORWARDED_FOR || REMOTE_ADDR || -
:pid => 6, # getpid()
:request_uri => 7
}
private
CGI_ENV = Regexp.new('\A\$(' <<
%w(request_method content_length content_type
remote_addr remote_ident remote_user
path_info query_string script_name
server_name server_port
auth_type gateway_interface server_software path_translated
).join('|') << ')\z')
SCAN = /([^$]*)(\$+(?:env\{\w+(?:\.[\w\.]+)?\}|
e\{[^\}]+\}|
(?:request_)?time\{\d+\}|
time_(?:utc|local)\{[^\}]+\}|
\w*))?([^$]*)/x
def compile_format(str, opt = {})
rv = []
opt ||= {}
str.scan(SCAN).each do |pre,tok,post|
rv << [ OP_LITERAL, pre ] if pre && pre != ""
unless tok.nil?
if tok.sub!(/\A(\$+)\$/, '$')
rv << [ OP_LITERAL, $1 ]
end
compat = ALIASES[tok] and tok = compat
case tok
when /\A(\$*)\z/
rv << [ OP_LITERAL, $1 ]
when /\A\$env\{(\w+(?:\.[\w\.]+))\}\z/
rv << [ OP_REQUEST, $1 ]
when /\A\$e\{([^\}]+)\}\z/
rv << [ OP_EVAL, $1 ]
when /\A\$cookie_(\w+)\z/
rv << [ OP_COOKIE, $1 ]
when CGI_ENV, /\A\$(http_\w+)\z/
rv << [ OP_REQUEST, $1.upcase ]
when /\A\$sent_http_(\w+)\z/
rv << [ OP_RESPONSE, $1.downcase.tr('_','-') ]
when /\A\$time_local\{([^\}]+)\}\z/
rv << [ OP_TIME_LOCAL, $1 ]
when /\A\$time_utc\{([^\}]+)\}\z/
rv << [ OP_TIME_UTC, $1 ]
when /\A\$time\{(\d+)\}\z/
rv << [ OP_TIME, *usec_conv_pair(tok, $1.to_i) ]
when /\A\$request_time\{(\d+)\}\z/
rv << [ OP_REQUEST_TIME, *usec_conv_pair(tok, $1.to_i) ]
else
tok_sym = tok[1..-1].to_sym
if special_code = SPECIAL_VARS[tok_sym]
rv << [ OP_SPECIAL, special_code ]
else
raise ArgumentError, "unable to make sense of token: #{tok}"
end
end
end
rv << [ OP_LITERAL, post ] if post && post != ""
end
# auto-append a newline
last = rv.last or return rv
op = last.first
ors = opt[:ORS] || "\n"
if (op == OP_LITERAL && /#{ors}\z/ !~ last.last) || op != OP_LITERAL
rv << [ OP_LITERAL, ors ] if ors.size > 0
end
rv
end
def usec_conv_pair(tok, prec)
if prec == 0
[ "%d", 1 ] # stupid...
elsif prec > 6
raise ArgumentError, "#{tok}: too high precision: #{prec} (max=6)"
else
[ "%d.%0#{prec}d", 10 ** (6 - prec) ]
end
end
def need_response_headers?(fmt_ops)
fmt_ops.any? { |op| OP_RESPONSE == op[0] }
end
def need_wrap_body?(fmt_ops)
fmt_ops.any? do |op|
(OP_REQUEST_TIME == op[0]) || (OP_SPECIAL == op[0] &&
(SPECIAL_VARS[:body_bytes_sent] == op[1] ||
SPECIAL_VARS[:response_length] == op[1]))
end
end
# :startdoc:
end
require 'clogger/format'
begin
raise LoadError if ENV['CLOGGER_PURE'].to_i != 0
require 'clogger_ext'
rescue LoadError
require 'clogger/pure'
end
5.2.16 - AsynchConfig [U3 Datasheet]
Requires U3 hardware version 1.21+. Configures the U3 UART for asynchronous communication. On hardware version 1.30 the TX (transmit) and RX (receive) lines appear on FIO/EIO after any timers and counters, so with no timers/counters enabled, and pin offset set to 4, TX=FIO4 and RX=FIO5. On hardware version 1.21, the UART uses SDA for TX and SCL for RX. Communication is in the common 8/n/1 format. Similar to RS232, except that the logic is normal CMOS/TTL. Connection to an RS232 device will require a converter chip such as the MAX233, which inverts the logic and shifts the voltage levels.
Table 5.2.16-1. AsynchConfig Command Response

Command:

| Byte | Value |
|------|-------|
| 0 | Checksum8 |
| 1 | 0xF8 |
| 2 | 0x02 |
| 3 | 0x14 |
| 4 | Checksum16 (LSB) |
| 5 | Checksum16 (MSB) |
| 6 | 0x00 |
| 7 | AsynchOptions (Bit 7: Update, Bit 6: UARTEnable, Bit 5: Reserved) |
| 8 | BaudFactor LSB (1.30 only) |
| 9 | BaudFactor MSB |

Response:

| Byte | Value |
|------|-------|
| 0 | Checksum8 |
| 1 | 0xF8 |
| 2 | 0x02 |
| 3 | 0x14 |
| 4 | Checksum16 (LSB) |
| 5 | Checksum16 (MSB) |
| 6 | Errorcode |
| 7 | AsynchOptions |
| 8 | BaudFactor LSB (1.30 only) |
| 9 | BaudFactor MSB |
• AsynchOptions:
Bit 7: Update — If true, the new parameters are written (otherwise just a read is done).
Bit 6: UARTEnable — If true, the UART module is enabled. Note that no data can be transferred until pins have been assigned to the UART module using the ConfigIO function.
• BaudFactor16 (BaudFactor8): This 16-bit value sets the baud rate according to the following formula: BaudFactor16 = 2^16 – 48000000/(2 x Desired Baud). For example, a BaudFactor16 = 63036 provides a baud rate of 9600 bps. (With hardware revision 1.21, the value is only 8-bit and the formula is BaudFactor8 = 2^8 – TimerClockBase/(Desired Baud).)
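The BaudFactor16 formula is easy to sanity-check in code. A short sketch (the helper names and the integer-truncation rounding are my assumptions; the datasheet does not specify rounding):

```python
def baud_factor16(desired_baud, clock_hz=48_000_000):
    # BaudFactor16 = 2^16 - 48000000 / (2 * Desired Baud)
    return 2 ** 16 - clock_hz // (2 * desired_baud)

def baud_factor8(desired_baud, timer_clock_hz):
    # Hardware revision 1.21: BaudFactor8 = 2^8 - TimerClockBase / Desired Baud
    return 2 ** 8 - timer_clock_hz // desired_baud

print(baud_factor16(9600))  # 63036, matching the datasheet example
```

For 9600 bps, 48000000 / (2 x 9600) = 2500, and 65536 - 2500 = 63036, which agrees with the example above.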
Graphics perf counters meaning
My requirement is to analyze my game frames to locate the bottleneck draw calls and the compute-intense shaders in those draw calls. Here is my plan.
1. locate the bottleneck draw calls by looking the SM throughput counter:
sm__throughput.avg.pct_of_peak_sustained_elapsed (%)
2. locate the compute-intense shader stages by looking at the throughput of each shader stage:
[Counter group 1]
sm__cycles_active_shader_cs.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_gs.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_ps.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_tcs.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_tes.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_vs.avg.pct_of_peak_sustained_elapsed (%)
[Counter group 2]
sm__warps_active.sum
sm__warps_active_shader_vtg.sum
sm__warps_active_shader_ps.sum
sm__warps_active_shader_cs.sum
Does this plan sound reasonable?
By looking at my profiling results, I have the following two questions.
1. What is the meaning of the following counters?
sm__cycles_active_shader_cs.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_gs.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_ps.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_tcs.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_tes.avg.pct_of_peak_sustained_elapsed (%)
sm__cycles_active_shader_vs.avg.pct_of_peak_sustained_elapsed (%)
I assume
sm__cycles_active_shader_xx.avg.pct_of_peak_sustained_elapsed (%) =
sm__cycles_active_shader_xx.avg / sm__cycles_elapsed_shader_xx.avg. So the value should be less than 100 (< 100%). However, in one of the profiling results I am investigating, sm__cycles_active_shader_ps.avg.pct_of_peak_sustained_elapsed (%) is much larger than 100.
All columns below are .avg.pct_of_peak_sustained_elapsed (%):

| sm__cycles_active | ..._shader_cs | ..._shader_gs | ..._shader_ps | ..._shader_tcs | ..._shader_tes | ..._shader_vs | sm__cycles_elapsed |
|---|---|---|---|---|---|---|---|
| 99.12263 | 99.18941 | 0 | 0 | 0 | 0 | 0 | 100 |
| 96.12748 | 96.24648 | 0 | 0 | 0 | 0 | 0 | 100 |
| 96.62175 | 96.69736 | 0 | 0 | 0 | 0 | 0 | 100 |
| 95.00545 | 95.65731 | 0 | 0 | 0 | 0 | 0 | 100 |
| 94.49121 | 94.46158 | 0 | 0 | 0 | 0 | 0 | 100 |
| 95.29343 | 0 | 0 | 103.65118 | 0 | 0 | 0.01094 | 100 |
| 95.29874 | 95.22861 | 0 | 0 | 0 | 0 | 0 | 100 |
| 97.74141 | 97.69843 | 0 | 0 | 0 | 0 | 0 | 100 |
| 79.56786 | 0 | 0 | 173.34756 | 0 | 0 | 0.53782 | 100 |
| 75.76738 | 0 | 0 | 162.66343 | 0 | 0 | 0.72478 | 100 |
| 55.19795 | 0 | 0 | 0 | 0 | 0 | 54.24617 | 100 |
| 53.87546 | 0 | 0 | 0 | 0 | 0 | 54.24021 | 100 |
| 78.82896 | 0 | 0 | 54.93192 | 0 | 0 | 54.98424 | 100 |
| 83.45561 | 81.61367 | 0 | 0 | 0 | 0 | 0 | 100 |
| 58.72605 | 0 | 0 | 21.3328 | 0 | 0 | 45.36058 | 100 |
| 56.48805 | 0 | 0 | 46.95401 | 0 | 0 | 40.3861 | 100 |
| 96.22414 | 0 | 0 | 98.76607 | 0 | 0 | 0.00389 | 100 |
2. What is the relationship among the following counters?
sm__warps_active.sum
sm__warps_active_shader_vtg.sum
sm__warps_active_shader_ps.sum
sm__warps_active_shader_cs.sum
I assume sm__warps_active.sum >= sm__warps_active_shader_vtg.sum + sm__warps_active_shader_ps.sum + sm__warps_active_shader_cs.sum. However, this doesn't seem to be the case according to my data. In addition, why don't we have the counters sm__warps_active_shader_vs.sum, sm__warps_active_shader_tcs.sum, sm__warps_active_shader_tes.sum, and sm__warps_active_shader_gs.sum?
Your current plan will only help identify compute-intensive shaders. Optimizations are also important on low-throughput shaders. The latest Nsight Graphics GPU Trace tool has a lot of good features for understanding unit throughputs and mapping back to shaders via the shader profiler. I will defer to the graphics profiler team on the best method to use the tool.
The formula you specified is correct.
The sm__cycles_active_shader_{shader_type} increments by 1 per cycle if the SM has 1 or more warps of {shader_type} resident on the SM. sm__warps_active_shader_{shader_type} is required to determine how many warps.
The SM can run more than 1 shader type at a time so SUM_TYPES(sm__cycles_active_shader_{shader_type}.avg.pct_of_peak_sustained_elapsed) can exceed 100%.
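This point can be illustrated numerically. The cycle counts below are made up for illustration: when VS and PS warps are resident during overlapping cycles, each per-type counter accrues independently, so the per-type percentages need not sum to 100 or less:

```python
elapsed = 1000  # total SM cycles in the profiled range (made-up number)

# Cycles during which at least one warp of the given type was resident.
# Both types are resident simultaneously for part of the range.
active = {"vs": 400, "ps": 900}

pct = {k: 100.0 * v / elapsed for k, v in active.items()}
print(pct)                 # {'vs': 40.0, 'ps': 90.0}
print(sum(pct.values()))   # 130.0 -> exceeds 100% because the types overlap
```
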
The relationship you specified is correct. The formula using .avg or .sum is correct. The formula
sm__warps_active.avg.pct_of_peak_sustained_elapsed = SUM(sm__warps_active_shader_{vtg, ps, cs}.avg.pct_of_peak_sustained_elapsed) is not valid, as some shader types have a reduced limit. For example, VTG on many chips is limited to 32 warps whereas the PS, CS, and SM max are 48 (or 64). Adding the .pct_of_peak_sustained_elapsed values will exceed 100%.
The hardware PM signals do not exist. VTG covers vertex, tessellation (TCS/TES, DS/HS), and Mesh (amplification, mesh) shaders.
Hi Greg,
Thanks for the reply. @Greg
Currently I am focusing on compute-intense shaders, and will look at memory-bound shaders later on.
According to my profiling result, there are some draw calls with
sm__warps_active.sum < sm__warps_active_shader_vtg.sum + sm__warps_active_shader_ps.sum + sm__warps_active_shader_cs.sum, could you explain why?
When captured in the same pass I do not expect more than a ± 2% error in your formula. If you see a larger difference, then I would recommend filing a bug that includes your GPU, tools/sdk version, etc.
I am new to regular expressions in Java. I would like to extract a string by using regular expressions.
This is my String: "Hello,World"
I would like to extract the text after ",". The result should be "World". I tried this:
final Pattern pattern = Pattern.compile(",(.+?)");
final Matcher matcher = pattern.matcher("Hello,World");
matcher.find();
But what would be the next step?
4 Answers
Accepted answer:
You don't need Regex for this. You can simply split on comma and get the 2nd element from the array: -
System.out.println("Hello,World".split(",")[1]);
OUTPUT: -
World
But if you want to use Regex, you need to remove ? from your Regex.
? after + is used for Reluctant matching. It will only match W and stop there. You don't need that here. You need to match until it can match.
So use greedy matching instead.
Here's the code with modified Regex: -
final Pattern pattern = Pattern.compile(",(.+)");
final Matcher matcher = pattern.matcher("Hello,World");
if (matcher.find()) {
System.out.println(matcher.group(1));
}
OUTPUT: -
World
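A quick way to see the greedy/reluctant difference side by side (an illustrative sketch; the class and helper names are made up):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuantifierDemo {
    // Returns the first captured group for the given pattern, or null if no match.
    static String firstGroup(String regex, String input) {
        Matcher m = Pattern.compile(regex).matcher(input);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(firstGroup(",(.+)", "Hello,World"));   // greedy: World
        System.out.println(firstGroup(",(.+?)", "Hello,World"));  // reluctant: W
    }
}
```
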
Extending what you have, you need to remove the ? sign from your pattern to use the greedy matching and then process the matched group:
final Pattern pattern = Pattern.compile(",(.+)"); // removed your '?'
final Matcher matcher = pattern.matcher("Hello,World");
while (matcher.find()) {
String result = matcher.group(1);
// work with result
}
Other answers suggest different approaches to your problem and might offer a better solution for what you need.
System.out.println( "Hello,World".replaceAll(".*,(.*)","$1") ); // output is "World"
You are using a reluctant expression and will only select a single character W, whereas you can use a greedy one and print your matched group content:
final Pattern pattern = Pattern.compile(",(.+)");
final Matcher matcher = pattern.matcher("Hello,World");
if (matcher.find()) {
System.out.println(matcher.group(1));
}
Output:
World
See Regex Pattern doc
SSIS Components for MySQL
Build 20.0.7745
Using the Source Component
After Establishing a Connection to the data source, you can use the CData MySQL source component to pull data into your Data Flow task.
Querying MySQL Data with the Source Component
Follow the procedure below to connect to MySQL, retrieve data, and provide data to other components in the workflow.
1. In the SSIS Toolbox, drag the CData MySQL source component into the Data Flow task.
2. Double-click the CData MySQL source component. The CData MySQL Source Editor will display.
3. In the Connection Managers menu, select an available CData MySQL connection manager, or create a new instance if one is not already available.
4. Choose your Access Mode: "Table or View" or "SQL Statement". Select "Table or View" to use the GUI to select a table or view. Select "SQL Statement" to configure a statement of your choice.
5. Select the Columns tab and rename any output columns as desired.
When you execute the data flow, rows from your selected table or statement will be made available to the components in the data flow.
Building Parameterized Queries in the Expression Builder
After configuring a source component, you can then use the SSIS Expression Builder to access the SQL statement that the source component executes at run time.
The component will execute these queries as parameterized statements. Parameterized statements provide an efficient way to execute similar queries and mitigate SQL injection attacks.
1. In SSIS Designer, click the Control Flow tab.
2. In the Properties pane, click the button in the box for the Expressions property.
3. In the resulting Property Expressions Editor, click an empty row in the Property box and select the SQLStatement property of the CData MySQL source component from the drop-down menu. Then click the button in the row you just added. This displays the Expression Builder.
4. In the Expression box, you can create new SQL commands that use the variables available at run time as input parameters. Ensure that you enclose the expression in quotes. For example:
"SELECT * FROM Table WHERE FirstName = '" + @[User::Name] + "' AND Date > '" + (DT_WSTR, 50) DATEADD("day", -30, GETDATE()) + "'"
Copyright (c) 2021 CData Software, Inc. - All rights reserved.
The LP Procedure
COEF Statement
COEF variables ;
For the sparse input format, the COEF statement specifies the numeric variables in the problem data set that contain the coefficients in the model. The value of the coefficient variable in a given observation is the value of the coefficient in the column and row specified in the COLUMN and ROW variables in that observation. For multiple ROW variables, the LP procedure maps the ROW variables to the COEF variables on the basis of their order in the COEF and ROW statements. There must be the same number of COEF variables as ROW variables. If the COEF statement is omitted, the procedure looks for the default variable names that have the prefix _COEF.
Linear Functionals
1. Jul 11, 2012 #1
1. The problem statement, all variables and given/known data
Why does this not qualify as a linear functional based on the relation ##l(\alpha u+\beta v)=\alpha l(u)+\beta l(v)##?
##\displaystyle I(u)=\int_a^b u \frac{du}{dx} dx##
2. Relevant equations
where ##\alpha## and ##\beta## are real numbers and ##u## , ##v## are dependant variables.
3. The attempt at a solution
If we let ##\displaystyle I(v)=\int_a^b v \frac{dv}{dx} dx##
then ##l(\alpha u+\beta v)=##
##\displaystyle \int_a^b ( \alpha u \frac{du}{dx} dx + \beta v \frac{dv}{dx} dx)=\displaystyle \int_a^b \alpha u \frac{du}{dx} dx +\int_a^b \beta v \frac{dv}{dx} dx=\displaystyle \alpha\int_a^b u \frac{du}{dx} dx +\beta \int_a^b v \frac{dv}{dx} dx=\alpha l(u)+\beta l(v)##...? Thanks
3. Jul 11, 2012 #2
[tex] I(\alpha u) = \int_a^b \alpha u \frac{d (\alpha u)}{d x} dx [/tex]
4. Jul 11, 2012 #3
HallsofIvy
Very cursorily, it is not "linear" because it is a product of two terms involving u.
Your formula for [itex]l(\alpha u+ \beta v)[/itex] is incorrect. You are treating [itex](a+ b)^2[/itex] as if it were equal to [itex]a^2+ b^2[/itex], and that is not true.
You need
[tex]l(\alpha u+ \beta v)= \int_a^b (\alpha u+ \beta v)\frac{d(\alpha u+ \beta v)}{dx}dx[/tex]
[tex]= \int_a^b (\alpha u+ \beta v)(\alpha\frac{du}{dx}+ \beta\frac{dv}{dx})dx[/tex]
[tex]= \alpha^2 \int_a^b u\frac{du}{dx}dx+ \alpha\beta\int_a^b u\frac{dv}{dx}dx+ \alpha\beta\int_a^b v\frac{du}{dx}dx+ \beta^2\int_a^bv\frac{dv}{dx}dx[/tex]
5. Jul 11, 2012 #4
This is what I thought too, and that this nonlinearity has nothing to do with the relation in my first post... however,
in the book (from which I am self-studying finite element theory) it states
" a functional ##l(u)## is said to be linear in u iff it satisfies the relation..."
##l(\alpha u+\beta v)= \alpha l(u)+\beta l(v)##.....? How is this wrong?
6. Jul 11, 2012 #5
HallsofIvy
Well, as I showed in my first response, it is NOT [itex]\alpha l(u)+ \beta l(v)[/itex],
it is [itex]\alpha^2l(u)+ \beta^2l(v)[/itex] plus two additional terms!
7. Jul 12, 2012 #6
Ok, what about this one. Using the same relation ##l(αu+βv)=αl(u)+βl(v)## for a functional ##l(u)=\displaystyle \int_a^b f(x) u dx +c##? The book states this is not a linear functional...? Why? Heres my attempt..
let ##l(v)=\displaystyle \int_a^b g(x) v dx +d## then the LHS of the relation can be written as
##\displaystyle \int_a^b \alpha f(x) u dx +\alpha c + \displaystyle \int_a^b \beta g(x) v dx +\beta d=\alpha (\int_a^b f(x) u dx +c) + \beta (\displaystyle \int_a^b g(x) v dx +d)=αl(u)+βl(v)##...Why is this not a linear functional?
8. Jul 12, 2012 #7
Nonononononono!
First of all [itex]l(v)=\displaystyle\int\limits_a^b f(x)vdx+c[/itex], NOT [itex]l(v)=\displaystyle\int\limits_a^b g(x)vdx+d[/itex]. Who gave you the right to say [itex]g(x)[/itex] or [itex]d[/itex]? NOBODY!
[itex]l(\alpha u + \beta v) = \displaystyle\int\limits_a^b f(x)(\alpha u + \beta v)dx + c[/itex], not [itex]+\alpha c + \beta c[/itex]
9. Jul 12, 2012 #8
OK... then the RHS would end up like
## \displaystyle \alpha \int\limits_a^b f(x)u dx + \beta \int\limits_a^b f(x)vdx + c=\alpha l(u)+\beta l(v)+c##
What about the c, though? That's not in the relation.
10. Jul 12, 2012 #9
This isn't good, because
[itex]\alpha l(u) + \beta l(v)+c = \alpha \left(\displaystyle\int\limits_a^b f(x)udx+c\right)+\beta \left(\displaystyle\int\limits_a^b f(x)vdx+c\right)+c = \alpha\displaystyle\int\limits_a^b f(x)udx + \alpha c + \beta \displaystyle\int\limits_a^b f(x)vdx + \beta c + c \neq \displaystyle \alpha \int\limits_a^b f(x)u dx + \beta \int\limits_a^b f(x)vdx + c[/itex]
11. Jul 12, 2012 #10
So it cannot be a linear functional then... right? Looks good. Thanks!!
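As a numerical sanity check (my own sketch, not from the thread): since ##l(u)=\int_a^b u\,u'\,dx##, doubling ##u## should quadruple ##l(u)## if the ##\alpha^2## expansion earlier in the thread is right. A quick trapezoid-rule computation confirms it:

```python
import math

def l(u, du, a=0.0, b=1.0, n=10000):
    """Trapezoid approximation of the functional l(u) = ∫_a^b u(x) u'(x) dx."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    ys = [u(x) * du(x) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

base = l(math.sin, math.cos)              # l(u) with u = sin
scaled = l(lambda x: 2 * math.sin(x),     # l(2u)
           lambda x: 2 * math.cos(x))
print(scaled / base)  # -> 4.0, not 2.0: l(2u) = 4 l(u), so l is not linear
```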
I am currently working through this tutorial: Getting Started with jQuery
For the two examples below:
$("#orderedlist").find("li").each(function (i) {
$(this).append(" BAM! " + i);
});
$("#reset").click(function () {
$("form").each(function () {
this.reset();
});
});
Notice in the first example, we use $(this) to append some text inside of each li element. In the second example we use this directly when resetting the form.
$(this) seems to be used a lot more often than this.
My guess is in the first example, $() is converting each li element into a jQuery object which understands the append() function whereas in the second example reset() can be called directly on the form.
Basically we need $() for special jQuery-only functions.
Is this correct?
Yes, you only need $() when you're using jQuery. If you want jQuery's help for DOM work, just keep this in mind.
$(this)[0] === this
Basically every time you get a set of elements back jQuery turns it into a jQuery object. If you know you only have one result, it's going to be in the first element.
$("#myDiv")[0] === document.getElementById("myDiv");
And so on...
• 1
Is there a reason to use $(this)[0] over this if they're always the same? – Jay Jul 30 '16 at 5:53
• 2
@Jay If you prefer to type long than simply using 'this' then yes. $() is the jQuery constructor function. " 'this' is a reference to the DOM element of invocation. so basically, in $(this), you are just passing the this in $() as a parameter so that you could call jQuery methods and functions". – Juliver Galleto Jul 31 '16 at 19:29
• 1
@jay - There's no good reason to use $(this)[0] I was just using it to illustrate the concept. :) I do use $("#myDiv")[0] over document.getElementById("myDiv") though. – Spencer Ruport Sep 29 '17 at 16:47
$() is the jQuery constructor function.
this is a reference to the DOM element of invocation.
So basically, in $(this), you are just passing the this in $() as a parameter so that you could call jQuery methods and functions.
Yes, you need $(this) for jQuery functions, but when you want to access basic javascript methods of the element that don't use jQuery, you can just use this.
When using jQuery, it is usually advisable to use $(this). But if you know the difference (and you should learn it), sometimes it is more convenient and quicker to use just this. For instance:
$(".myCheckboxes").change(function(){
if(this.checked)
alert("checked");
});
is easier and purer than
$(".myCheckboxes").change(function(){
if($(this).is(":checked"))
alert("checked");
});
this is the element, $(this) is the jQuery object constructed with that element
$(".class").each(function(){
//the iterations current html element
//the classic JavaScript API is exposed here (such as .innerHTML and .appendChild)
var HTMLElement = this;
//the current HTML element is passed to the jQuery constructor
//the jQuery API is exposed here (such as .html() and .append())
var jQueryObject = $(this);
});
A deeper look
this (MDN) is contained in an execution context
The scope refers to the current Execution Context (ECMA). In order to understand this, it is important to understand the way execution contexts operate in JavaScript.
execution contexts bind this
When control enters an execution context (code is being executed in that scope) the environment for variables are setup (Lexical and Variable Environments - essentially this sets up an area for variables to enter which were already accessible, and an area for local variables to be stored), and the binding of this occurs.
jQuery binds this
Execution contexts form a logical stack. The result is that contexts deeper in the stack have access to previous variables, but their bindings may have been altered. Every time jQuery calls a callback function, it alters the this binding by using apply (MDN).
callback.apply( obj[ i ] )//where obj[i] is the current element
The result of calling apply is that inside of jQuery callback functions, this refers to the current element being used by the callback function.
For example, in .each, the callback function commonly used allows for .each(function(index,element){/*scope*/}). In that scope, this == element is true.
jQuery callbacks use the apply function to bind the function being called with the current element. This element comes from the jQuery object's element array. Each jQuery object constructed contains an array of elements which match the selector (jQuery API) that was used to instantiate the jQuery object.
$(selector) calls the jQuery function (remember that $ is a reference to jQuery, code: window.jQuery = window.$ = jQuery;). Internally, the jQuery function instantiates a function object. So while it may not be immediately obvious, using $() internally uses new jQuery(). Part of the construction of this jQuery object is to find all matches of the selector. The constructor will also accept html strings and elements. When you pass this to the jQuery constructor, you are passing the current element for a jQuery object to be constructed with. The jQuery object then contains an array-like structure of the DOM elements matching the selector (or just the single element in the case of this).
Once the jQuery object is constructed, the jQuery API is now exposed. When a jQuery api function is called, it will internally iterate over this array-like structure. For each item in the array, it calls the callback function for the api, binding the callback's this to the current element. This call can be seen in the code snippet above where obj is the array-like structure, and i is the iterator used for the position in the array of the current element.
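The apply-based rebinding described above can be sketched without jQuery. This is an illustrative toy `each`, not jQuery's actual source:

```javascript
// Toy version of a jQuery-like .each() that rebinds `this` to the
// current element via Function.prototype.apply.
function each(elements, callback) {
  for (var i = 0; i < elements.length; i++) {
    // Inside the callback, `this` becomes elements[i] -- just like jQuery.
    callback.apply(elements[i], [i, elements[i]]);
  }
}

var fakeElements = [{ tag: 'li' }, { tag: 'li' }];
var seen = [];
each(fakeElements, function (index) {
  seen.push(this.tag + index); // `this` is the current "element"
});
console.log(seen); // [ 'li0', 'li1' ]
```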
Yeah, by using $(this), you enable jQuery functionality for the object. By just using this, it only has generic JavaScript functionality.
this references a plain JavaScript DOM object, and $(this) wraps it in a jQuery object.
Example =>
// Get the name attribute and modify a CSS property through jQuery
var name = $(this).attr('name');
$(this).css('background-color', 'white');
// Get a form element and work with its data
// (note: you cannot assign to `this`; use a variable instead)
var form = document.getElementsByName("new_photo")[0];
var formData = new FormData(form);
// Call the blur method on a found input field, mixing both styles
$(this).find('input[type=text]')[0].blur();
// The above is equivalent to:
var input = $(this).find('input[type=text]')[0];
input.blur();
// Find the value of a text field with id "index-number"
var field = document.getElementById("index-number");
field.value;
// or with jQuery:
$('#index-number').val(); // Equivalent to the DOM .value read above
$('#index-number').css('color', '#000000');
/ Check-in [36e03162]
Overview
Comment:Remove dataType and includeTypes flags from function definitions. Added new P3_FUNCDEF type for P3 arguments on opcodes. Fixes to several user functions. 28 tests fail now. (CVS 1464)
Downloads: Tarball | ZIP archive | SQL archive
Timelines: family | ancestors | descendants | both | trunk
Files: files | file ages | folders
SHA1: 36e031625995b2f7baf7654d771ca8fb764a0085
User & Date: drh 2004-05-26 16:54:42
Context
2004-05-26
23:25
Refactoring of the vdbe Mem functions and the APIs that deal with them. The code will not compile in its current state. (CVS 1465) (check-in: bba6684d user: drh tags: trunk)
16:54
Remove dataType and includeTypes flags from function definitions. Added new P3_FUNCDEF type for P3 arguments on opcodes. Fixes to several user functions. 28 tests fail now. (CVS 1464) (check-in: 36e03162 user: drh tags: trunk)
13:27
Ensure the type of an sqlite3_value* is not modified by calls to sqlite3_value_*() calls. (CVS 1463) (check-in: ce8b1520 user: danielk1977 tags: trunk)
Changes
Hide Diffs Side-by-Side Diffs Ignore Whitespace Patch
Changes to src/date.c.
12 12 ** This file contains the C functions that implement date and time
13 13 ** functions for SQLite.
14 14 **
15 15 ** There is only one exported symbol in this file - the function
16 16 ** sqlite3RegisterDateTimeFunctions() found at the bottom of the file.
17 17 ** All other code has file scope.
18 18 **
19 -** $Id: date.c,v 1.24 2004/05/26 06:18:37 danielk1977 Exp $
19 +** $Id: date.c,v 1.25 2004/05/26 16:54:42 drh Exp $
20 20 **
21 21 ** NOTES:
22 22 **
23 23 ** SQLite processes all times and dates as Julian Day numbers. The
24 24 ** dates and times are stored as the number of days since noon
25 25 ** in Greenwich on November 24, 4714 B.C. according to the Gregorian
26 26 ** calendar system.
................................................................................
660 660 */
661 661
662 662 /*
663 663 ** julianday( TIMESTRING, MOD, MOD, ...)
664 664 **
665 665 ** Return the julian day number of the date specified in the arguments
666 666 */
667 -static void juliandayFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
667 +static void juliandayFunc(
668 + sqlite3_context *context,
669 + int argc,
670 + sqlite3_value **argv
671 +){
668 672 DateTime x;
669 673 if( isDate(argc, argv, &x)==0 ){
670 674 computeJD(&x);
671 675 sqlite3_result_double(context, x.rJD);
672 676 }
673 677 }
674 678
675 679 /*
676 680 ** datetime( TIMESTRING, MOD, MOD, ...)
677 681 **
678 682 ** Return YYYY-MM-DD HH:MM:SS
679 683 */
680 -static void datetimeFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
684 +static void datetimeFunc(
685 + sqlite3_context *context,
686 + int argc,
687 + sqlite3_value **argv
688 +){
681 689 DateTime x;
682 690 if( isDate(argc, argv, &x)==0 ){
683 691 char zBuf[100];
684 692 computeYMD_HMS(&x);
685 693 sprintf(zBuf, "%04d-%02d-%02d %02d:%02d:%02d",x.Y, x.M, x.D, x.h, x.m,
686 694 (int)(x.s));
687 695 sqlite3_result_text(context, zBuf, -1, 1);
................................................................................
689 697 }
690 698
691 699 /*
692 700 ** time( TIMESTRING, MOD, MOD, ...)
693 701 **
694 702 ** Return HH:MM:SS
695 703 */
696 -static void timeFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
704 +static void timeFunc(
705 + sqlite3_context *context,
706 + int argc,
707 + sqlite3_value **argv
708 +){
697 709 DateTime x;
698 710 if( isDate(argc, argv, &x)==0 ){
699 711 char zBuf[100];
700 712 computeHMS(&x);
701 713 sprintf(zBuf, "%02d:%02d:%02d", x.h, x.m, (int)x.s);
702 714 sqlite3_result_text(context, zBuf, -1, 1);
703 715 }
................................................................................
704 716 }
705 717
706 718 /*
707 719 ** date( TIMESTRING, MOD, MOD, ...)
708 720 **
709 721 ** Return YYYY-MM-DD
710 722 */
711 -static void dateFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
723 +static void dateFunc(
724 + sqlite3_context *context,
725 + int argc,
726 + sqlite3_value **argv
727 +){
712 728 DateTime x;
713 729 if( isDate(argc, argv, &x)==0 ){
714 730 char zBuf[100];
715 731 computeYMD(&x);
716 732 sprintf(zBuf, "%04d-%02d-%02d", x.Y, x.M, x.D);
717 733 sqlite3_result_text(context, zBuf, -1, 1);
718 734 }
................................................................................
733 749 ** %s seconds since 1970-01-01
734 750 ** %S seconds 00-59
735 751 ** %w day of week 0-6 sunday==0
736 752 ** %W week of year 00-53
737 753 ** %Y year 0000-9999
738 754 ** %% %
739 755 */
740 -static void strftimeFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
756 +static void strftimeFunc(
757 + sqlite3_context *context,
758 + int argc,
759 + sqlite3_value **argv
760 +){
741 761 DateTime x;
742 762 int n, i, j;
743 763 char *z;
744 764 const char *zFmt = sqlite3_value_data(argv[0]);
745 765 char zBuf[100];
746 766 if( zFmt==0 || isDate(argc-1, argv+1, &x) ) return;
747 767 for(i=0, n=1; zFmt[i]; i++, n++){
................................................................................
848 868 ** functions. This should be the only routine in this file with
849 869 ** external linkage.
850 870 */
851 871 void sqlite3RegisterDateTimeFunctions(sqlite *db){
852 872 static struct {
853 873 char *zName;
854 874 int nArg;
855 - int dataType;
856 875 void (*xFunc)(sqlite3_context*,int,sqlite3_value**);
857 876 } aFuncs[] = {
858 877 #ifndef SQLITE_OMIT_DATETIME_FUNCS
859 - { "julianday", -1, SQLITE_NUMERIC, juliandayFunc },
860 - { "date", -1, SQLITE_TEXT, dateFunc },
861 - { "time", -1, SQLITE_TEXT, timeFunc },
862 - { "datetime", -1, SQLITE_TEXT, datetimeFunc },
863 - { "strftime", -1, SQLITE_TEXT, strftimeFunc },
878 + { "julianday", -1, juliandayFunc },
879 + { "date", -1, dateFunc },
880 + { "time", -1, timeFunc },
881 + { "datetime", -1, datetimeFunc },
882 + { "strftime", -1, strftimeFunc },
864 883 #endif
865 884 };
866 885 int i;
867 886
868 887 for(i=0; i<sizeof(aFuncs)/sizeof(aFuncs[0]); i++){
869 888 sqlite3_create_function(db, aFuncs[i].zName, aFuncs[i].nArg, 0, 0, 0,
870 889 aFuncs[i].xFunc, 0, 0);
871 - if( aFuncs[i].xFunc ){
872 - sqlite3_function_type(db, aFuncs[i].zName, aFuncs[i].dataType);
873 - }
874 890 }
875 891 }
876 -
877 -
878 -
Changes to src/expr.c.
8 8 ** May you find forgiveness for yourself and forgive others.
9 9 ** May you share freely, never taking more than you give.
10 10 **
11 11 *************************************************************************
12 12 ** This file contains routines used for analyzing expressions and
13 13 ** for generating VDBE code that evaluates expressions in SQLite.
14 14 **
15 -** $Id: expr.c,v 1.127 2004/05/21 13:39:51 drh Exp $
15 +** $Id: expr.c,v 1.128 2004/05/26 16:54:43 drh Exp $
16 16 */
17 17 #include "sqliteInt.h"
18 18 #include <ctype.h>
19 19
20 20 char const *sqlite3AffinityString(char affinity){
21 21 switch( affinity ){
22 22 case SQLITE_AFF_INTEGER: return "i";
................................................................................
499 499 ** are made to pExpr:
500 500 **
501 501 ** pExpr->iDb Set the index in db->aDb[] of the database holding
502 502 ** the table.
503 503 ** pExpr->iTable Set to the cursor number for the table obtained
504 504 ** from pSrcList.
505 505 ** pExpr->iColumn Set to the column number within the table.
506 -** pExpr->dataType Set to the appropriate data type for the column.
507 506 ** pExpr->op Set to TK_COLUMN.
508 507 ** pExpr->pLeft Any expression this points to is deleted
509 508 ** pExpr->pRight Any expression this points to is deleted.
510 509 **
511 510 ** The pDbToken is the name of the database (the "X"). This value may be
512 511 ** NULL meaning that name is of the form Y.Z or Z. Any available database
513 512 ** can be used. The pTableToken is the name of the table (the "Y"). This
................................................................................
1220 1219 int nExpr = pList ? pList->nExpr : 0;
1221 1220 FuncDef *pDef;
1222 1221 int nId;
1223 1222 const char *zId;
1224 1223 getFunctionName(pExpr, &zId, &nId);
1225 1224 pDef = sqlite3FindFunction(pParse->db, zId, nId, nExpr, 0);
1226 1225 assert( pDef!=0 );
1227 - nExpr = sqlite3ExprCodeExprList(pParse, pList, pDef->includeTypes);
1226 + nExpr = sqlite3ExprCodeExprList(pParse, pList);
1228 1227 /* FIX ME: The following is a temporary hack. */
1229 1228 if( 0==sqlite3StrNICmp(zId, "classof", nId) ){
1230 1229 assert( nExpr==1 );
1231 1230 sqlite3VdbeAddOp(v, OP_Class, nExpr, 0);
1232 1231 }else{
1233 - sqlite3VdbeOp3(v, OP_Function, nExpr, 0, (char*)pDef, P3_POINTER);
1232 + sqlite3VdbeOp3(v, OP_Function, nExpr, 0, (char*)pDef, P3_FUNCDEF);
1234 1233 }
1235 1234 break;
1236 1235 }
1237 1236 case TK_SELECT: {
1238 1237 sqlite3VdbeAddOp(v, OP_MemLoad, pExpr->iColumn, 0);
1239 1238 break;
1240 1239 }
................................................................................
1342 1341 }
1343 1342 break;
1344 1343 }
1345 1344 }
1346 1345
1347 1346 /*
1348 1347 ** Generate code that pushes the value of every element of the given
1349 -** expression list onto the stack. If the includeTypes flag is true,
1350 -** then also push a string that is the datatype of each element onto
1351 -** the stack after the value.
1348 +** expression list onto the stack.
1352 1349 **
1353 1350 ** Return the number of elements pushed onto the stack.
1354 1351 */
1355 1352 int sqlite3ExprCodeExprList(
1356 1353 Parse *pParse, /* Parsing context */
1357 - ExprList *pList, /* The expression list to be coded */
1358 - int includeTypes /* TRUE to put datatypes on the stack too */
1354 + ExprList *pList /* The expression list to be coded */
1359 1355 ){
1360 1356 struct ExprList_item *pItem;
1361 1357 int i, n;
1362 1358 Vdbe *v;
1363 1359 if( pList==0 ) return 0;
1364 1360 v = sqlite3GetVdbe(pParse);
1365 1361 n = pList->nExpr;
1366 1362 for(pItem=pList->a, i=0; i<n; i++, pItem++){
1367 1363 sqlite3ExprCode(pParse, pItem->pExpr);
1368 - if( includeTypes ){
1369 - /** DEPRECATED. This will go away with the new function interface **/
1370 - sqlite3VdbeOp3(v, OP_String, 0, 0, "numeric", P3_STATIC);
1371 - }
1372 1364 }
1373 - return includeTypes ? n*2 : n;
1365 + return n;
1374 1366 }
1375 1367
1376 1368 /*
1377 1369 ** Generate code for a boolean expression such that a jump is made
1378 1370 ** to the label "dest" if the expression is true but execution
1379 1371 ** continues straight thru if the expression is false.
1380 1372 **
................................................................................
1710 1702 if( p && !createFlag && p->xFunc==0 && p->xStep==0 ){
1711 1703 return 0;
1712 1704 }
1713 1705 if( p==0 && pMaybe ){
1714 1706 assert( createFlag==0 );
1715 1707 return pMaybe;
1716 1708 }
1717 - if( p==0 && createFlag && (p = sqliteMalloc(sizeof(*p)))!=0 ){
1709 + if( p==0 && createFlag && (p = sqliteMalloc(sizeof(*p)+nName+1))!=0 ){
1718 1710 p->nArg = nArg;
1719 1711 p->pNext = pFirst;
1720 - p->dataType = pFirst ? pFirst->dataType : SQLITE_NUMERIC;
1721 - sqlite3HashInsert(&db->aFunc, zName, nName, (void*)p);
1712 + p->zName = (char*)&p[1];
1713 + memcpy(p->zName, zName, nName);
1714 + p->zName[nName] = 0;
1715 + sqlite3HashInsert(&db->aFunc, p->zName, nName, (void*)p);
1722 1716 }
1723 1717 return p;
1724 1718 }
Changes to src/func.c.
12 12 ** This file contains the C functions that implement various SQL
13 13 ** functions of SQLite.
14 14 **
15 15 ** There is only one exported symbol in this file - the function
16 16 ** sqliteRegisterBuildinFunctions() found at the bottom of the file.
17 17 ** All other code has file scope.
18 18 **
19 -** $Id: func.c,v 1.56 2004/05/26 06:18:37 danielk1977 Exp $
19 +** $Id: func.c,v 1.57 2004/05/26 16:54:43 drh Exp $
20 20 */
21 21 #include <ctype.h>
22 22 #include <math.h>
23 23 #include <stdlib.h>
24 24 #include <assert.h>
25 25 #include "sqliteInt.h"
26 26 #include "vdbeInt.h"
27 27 #include "os.h"
28 28
29 29 /*
30 30 ** Implementation of the non-aggregate min() and max() functions
31 31 */
32 -static void minmaxFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
33 - const char *zBest;
32 +static void minmaxFunc(
33 + sqlite3_context *context,
34 + int argc,
35 + sqlite3_value **argv
36 +){
34 37 int i;
35 - int (*xCompare)(const char*, const char*);
36 38 int mask; /* 0 for min() or 0xffffffff for max() */
37 - const char *zArg;
39 + int iBest;
38 40
39 41 if( argc==0 ) return;
40 42 mask = (int)sqlite3_user_data(context);
41 - zBest = sqlite3_value_data(argv[0]);
42 - if( zBest==0 ) return;
43 - zArg = sqlite3_value_data(argv[1]);
44 - if( zArg[0]=='n' ){
45 - xCompare = sqlite3Compare;
46 - }else{
47 - xCompare = strcmp;
48 - }
49 - for(i=2; i<argc; i+=2){
50 - zArg = sqlite3_value_data(argv[i]);
51 - if( zArg==0 ) return;
52 - if( (xCompare(zArg, zBest)^mask)<0 ){
53 - zBest = zArg;
43 + iBest = 0;
44 + for(i=1; i<argc; i++){
45 + if( (sqlite3MemCompare(argv[iBest], argv[i], 0)^mask)<0 ){
46 + iBest = i;
54 47 }
55 48 }
56 - sqlite3_result_text(context, zBest, -1, 1);
49 + sqlite3_result(context, argv[iBest]);
57 50 }
58 51
59 52 /*
60 53 ** Return the type of the argument.
61 54 */
62 -static void typeofFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
55 +static void typeofFunc(
56 + sqlite3_context *context,
57 + int argc,
58 + sqlite3_value **argv
59 +){
63 60 const char *z = 0;
64 - assert( argc==2 );
65 61 switch( sqlite3_value_type(argv[0]) ){
66 - case SQLITE3_NULL: z = "null" ; break;
67 - case SQLITE3_INTEGER: z = "integer" ; break;
68 - case SQLITE3_TEXT: z = "text" ; break;
69 - case SQLITE3_FLOAT: z = "real" ; break;
70 - case SQLITE3_BLOB: z = "blob" ; break;
62 + case SQLITE3_NULL: z = "null"; break;
63 + case SQLITE3_INTEGER: z = "integer"; break;
64 + case SQLITE3_TEXT: z = "text"; break;
65 + case SQLITE3_FLOAT: z = "real"; break;
66 + case SQLITE3_BLOB: z = "blob"; break;
71 67 }
72 68 sqlite3_result_text(context, z, -1, 0);
73 69 }
74 70
75 71 /*
76 72 ** Implementation of the length() function
77 73 */
78 -static void lengthFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
74 +static void lengthFunc(
75 + sqlite3_context *context,
76 + int argc,
77 + sqlite3_value **argv
78 +){
79 79 const char *z;
80 80 int len;
81 81
82 82 assert( argc==1 );
83 - z = sqlite3_value_data(argv[0]);
84 - if( z==0 ) return;
85 -#ifdef SQLITE_UTF8
86 - for(len=0; *z; z++){ if( (0xc0&*z)!=0x80 ) len++; }
87 -#else
88 - len = strlen(z);
89 -#endif
90 - sqlite3_result_int32(context, len);
83 + switch( sqlite3_value_type(argv[0]) ){
84 + case SQLITE3_BLOB:
85 + case SQLITE3_INTEGER:
86 + case SQLITE3_FLOAT: {
87 + sqlite3_result_int32(context, sqlite3_value_bytes(argv[0]));
88 + break;
89 + }
90 + case SQLITE3_TEXT: {
91 + const char *z = sqlite3_value_data(argv[0]);
92 + for(len=0; *z; z++){ if( (0xc0&*z)!=0x80 ) len++; }
93 + sqlite3_result_int32(context, len);
94 + break;
95 + }
96 + default: {
97 + sqlite3_result_null(context);
98 + break;
99 + }
100 + }
91 101 }
92 102
93 103 /*
94 104 ** Implementation of the abs() function
95 105 */
96 106 static void absFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
97 107 const char *z;
98 108 assert( argc==1 );
99 - z = sqlite3_value_data(argv[0]);
100 - if( z==0 ) return;
101 - if( z[0]=='-' && isdigit(z[1]) ) z++;
102 - sqlite3_result_text(context, z, -1, 1);
109 + switch( sqlite3_value_type(argv[0]) ){
110 + case SQLITE3_INTEGER: {
111 + sqlite3_result_int64(context, -sqlite3_value_int(argv[0]));
112 + break;
113 + }
114 + case SQLITE3_NULL: {
115 + sqlite3_result_null(context);
116 + break;
117 + }
118 + default: {
119 + sqlite3_result_double(context, -sqlite3_value_float(argv[0]));
120 + break;
121 + }
122 + }
103 123 }
104 124
105 125 /*
106 126 ** Implementation of the substr() function
107 127 */
108 -static void substrFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
128 +static void substrFunc(
129 + sqlite3_context *context,
130 + int argc,
131 + sqlite3_value **argv
132 +){
109 133 const char *z;
110 -#ifdef SQLITE_UTF8
111 134 const char *z2;
112 135 int i;
113 -#endif
114 136 int p1, p2, len;
137 +
115 138 assert( argc==3 );
116 139 z = sqlite3_value_data(argv[0]);
117 140 if( z==0 ) return;
118 141 p1 = sqlite3_value_int(argv[1]);
119 142 p2 = sqlite3_value_int(argv[2]);
120 -#ifdef SQLITE_UTF8
121 143 for(len=0, z2=z; *z2; z2++){ if( (0xc0&*z2)!=0x80 ) len++; }
122 -#else
123 - len = strlen(z);
124 -#endif
125 144 if( p1<0 ){
126 145 p1 += len;
127 146 if( p1<0 ){
128 147 p2 += p1;
129 148 p1 = 0;
130 149 }
131 150 }else if( p1>0 ){
132 151 p1--;
133 152 }
134 153 if( p1+p2>len ){
135 154 p2 = len-p1;
136 155 }
137 -#ifdef SQLITE_UTF8
138 156 for(i=0; i<p1 && z[i]; i++){
139 157 if( (z[i]&0xc0)==0x80 ) p1++;
140 158 }
141 159 while( z[i] && (z[i]&0xc0)==0x80 ){ i++; p1++; }
142 160 for(; i<p1+p2 && z[i]; i++){
143 161 if( (z[i]&0xc0)==0x80 ) p2++;
144 162 }
145 163 while( z[i] && (z[i]&0xc0)==0x80 ){ i++; p2++; }
146 -#endif
147 164 if( p2<0 ) p2 = 0;
148 165 sqlite3_result_text(context, &z[p1], p2, 1);
149 166 }
150 167
151 168 /*
152 169 ** Implementation of the round() function
153 170 */
................................................................................
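The hunks above replace the `#ifdef SQLITE_UTF8` byte-length fallback with an unconditional UTF-8 character count. The loop relies on the fact that UTF-8 continuation bytes always match the bit pattern `10xxxxxx`, so counting only the non-continuation bytes yields the character count. The same idiom can be exercised on its own; a minimal standalone sketch (plain C, no SQLite dependency, hypothetical helper name):

```c
#include <assert.h>

/* Count UTF-8 characters by skipping continuation bytes --
** the same loop used by lengthFunc() and substrFunc() above.
** Continuation bytes are 10xxxxxx, so (0xc0 & byte)==0x80. */
static int utf8_len(const char *z){
  int len = 0;
  for(; *z; z++){
    if( (0xc0 & *z) != 0x80 ) len++;  /* lead byte or ASCII: count it */
  }
  return len;
}
```

Note that the bitmask works even when `char` is signed: sign extension of a continuation byte still leaves the low eight bits intact, so `0xc0 & *z` compares equal to `0x80` exactly for `10xxxxxx` bytes.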
199 216 }
200 217
201 218 /*
202 219 ** Implementation of the IFNULL(), NVL(), and COALESCE() functions.
203 220 ** All three do the same thing. They return the first non-NULL
204 221 ** argument.
205 222 */
206 -static void ifnullFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
223 +static void ifnullFunc(
224 + sqlite3_context *context,
225 + int argc,
226 + sqlite3_value **argv
227 +){
207 228 int i;
208 229 for(i=0; i<argc; i++){
209 230 if( SQLITE3_NULL!=sqlite3_value_type(argv[i]) ){
210 - sqlite3_result_text(context, sqlite3_value_data(argv[i]), -1, 1);
231 + sqlite3_result(context, argv[i]);
211 232 break;
212 233 }
213 234 }
214 235 }
215 236
216 237 /*
217 238 ** Implementation of random(). Return a random integer.
218 239 */
219 -static void randomFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
240 +static void randomFunc(
241 + sqlite3_context *context,
242 + int argc,
243 + sqlite3_value **argv
244 +){
220 245 int r;
221 246 sqlite3Randomness(sizeof(r), &r);
222 247 sqlite3_result_int32(context, r);
223 248 }
224 249
225 250 /*
226 251 ** Implementation of the last_insert_rowid() SQL function. The return
................................................................................
228 253 */
229 254 static void last_insert_rowid(
230 255 sqlite3_context *context,
231 256 int arg,
232 257 sqlite3_value **argv
233 258 ){
234 259 sqlite *db = sqlite3_user_data(context);
235 - sqlite3_result_int32(context, sqlite3_last_insert_rowid(db));
260 + sqlite3_result_int64(context, sqlite3_last_insert_rowid(db));
236 261 }
237 262
238 263 /*
239 264 ** Implementation of the change_count() SQL function. The return
240 265 ** value is the same as the sqlite3_changes() API function.
241 266 */
242 -static void change_count(sqlite3_context *context, int arg, sqlite3_value **argv){
267 +static void change_count(
268 + sqlite3_context *context,
269 + int arg,
270 + sqlite3_value **argv
271 +){
243 272 sqlite *db = sqlite3_user_data(context);
244 273 sqlite3_result_int32(context, sqlite3_changes(db));
245 274 }
246 275
247 276 /*
248 277 ** Implementation of the last_statement_change_count() SQL function. The
249 278 ** return value is the same as the sqlite3_last_statement_changes() API
................................................................................
297 326 }
298 327
299 328 /*
300 329 ** Implementation of the NULLIF(x,y) function. The result is the first
301 330 ** argument if the arguments are different. The result is NULL if the
302 331 ** arguments are equal to each other.
303 332 */
304 -static void nullifFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
305 - const unsigned char *zX = sqlite3_value_data(argv[0]);
306 - const unsigned char *zY = sqlite3_value_data(argv[1]);
307 - if( zX!=0 && sqlite3Compare(zX, zY)!=0 ){
308 - sqlite3_result_text(context, zX, -1, 1);
333 +static void nullifFunc(
334 + sqlite3_context *context,
335 + int argc,
336 + sqlite3_value **argv
337 +){
338 + if( sqlite3MemCompare(argv[0], argv[1], 0)!=0 ){
339 + sqlite3_result(context, argv[0]);
309 340 }
310 341 }
311 342
312 343 /*
313 344 ** Implementation of the VERSION(*) function. The result is the version
314 345 ** of the SQLite library that is running.
315 346 */
316 -static void versionFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
347 +static void versionFunc(
348 + sqlite3_context *context,
349 + int argc,
350 + sqlite3_value **argv
351 +){
317 352 sqlite3_result_text(context, sqlite3_version, -1, 0);
318 353 }
319 354
320 355 /*
321 356 ** EXPERIMENTAL - This is not an official function. The interface may
322 357 ** change. This function may disappear. Do not write code that depends
323 358 ** on this function.
................................................................................
327 362 ** the argument. If the argument is NULL, the return value is the string
328 363 ** "NULL". Otherwise, the argument is enclosed in single quotes with
329 364 ** single-quote escapes.
330 365 */
331 366 static void quoteFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
332 367 const char *zArg = sqlite3_value_data(argv[0]);
333 368 if( argc<1 ) return;
334 - if( zArg==0 ){
335 - sqlite3_result_text(context, "NULL", 4, 0);
336 - }else if( sqlite3IsNumber(zArg, 0, TEXT_Utf8) ){
337 - sqlite3_result_text(context, zArg, -1, 1);
338 - }else{
339 - int i,j,n;
340 - char *z;
341 - for(i=n=0; zArg[i]; i++){ if( zArg[i]=='\'' ) n++; }
342 - z = sqliteMalloc( i+n+3 );
343 - if( z==0 ) return;
344 - z[0] = '\'';
345 - for(i=0, j=1; zArg[i]; i++){
346 - z[j++] = zArg[i];
347 - if( zArg[i]=='\'' ){
348 - z[j++] = '\'';
349 - }
369 + switch( sqlite3_value_type(argv[0]) ){
370 + case SQLITE3_NULL: {
371 + sqlite3_result_text(context, "NULL", 4, 0);
372 + break;
373 + }
374 + case SQLITE3_INTEGER:
375 + case SQLITE3_FLOAT: {
376 + sqlite3_result(context, argv[0]);
377 + break;
350 378 }
351 - z[j++] = '\'';
352 - z[j] = 0;
353 - sqlite3_result_text(context, z, j, 1);
354 - sqliteFree(z);
379 + case SQLITE3_BLOB: /*** FIX ME. Use a BLOB encoding ***/
380 + case SQLITE3_TEXT: {
381 + int i,j,n;
382 + const char *zArg = sqlite3_value_data(argv[0]);
383 + char *z;
384 +
385 + for(i=n=0; zArg[i]; i++){ if( zArg[i]=='\'' ) n++; }
386 + z = sqliteMalloc( i+n+3 );
387 + if( z==0 ) return;
388 + z[0] = '\'';
389 + for(i=0, j=1; zArg[i]; i++){
390 + z[j++] = zArg[i];
391 + if( zArg[i]=='\'' ){
392 + z[j++] = '\'';
393 + }
394 + }
395 + z[j++] = '\'';
396 + z[j] = 0;
397 + sqlite3_result_text(context, z, j, 1);
398 + sqliteFree(z);
399 + }
355 400 }
356 401 }
357 402
358 403 #ifdef SQLITE_SOUNDEX
359 404 /*
360 405 ** Compute the soundex encoding of a word.
361 406 */
................................................................................
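The TEXT branch of quoteFunc() above wraps its argument in single quotes and doubles any embedded quote, allocating `i+n+3` bytes (the string, one extra byte per embedded quote, two delimiters, and the terminator). That escaping step is small enough to test in isolation; a standalone sketch under a hypothetical helper name, not part of SQLite:

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Wrap zArg in single quotes, doubling embedded quotes,
** mirroring the TEXT branch of quoteFunc(). Caller frees
** the result; returns NULL on allocation failure. */
static char *sql_quote(const char *zArg){
  int i, j, n;
  char *z;
  for(i=n=0; zArg[i]; i++){ if( zArg[i]=='\'' ) n++; }
  z = malloc( i+n+3 );            /* text + doubled quotes + 2 delimiters + nul */
  if( z==0 ) return 0;
  z[0] = '\'';
  for(i=0, j=1; zArg[i]; i++){
    z[j++] = zArg[i];
    if( zArg[i]=='\'' ) z[j++] = '\'';
  }
  z[j++] = '\'';
  z[j] = 0;
  return z;
}
```

For example, `sql_quote("it's")` yields the seven-character string `'it''s'`, which is what the quoting convention of SQL requires.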
405 450 "abcdefghijklmnopqrstuvwxyz"
406 451 "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
407 452 "0123456789"
408 453 ".-!,:*^+=_|?/<> ";
409 454 int iMin, iMax, n, r, i;
410 455 unsigned char zBuf[1000];
411 456 if( argc>=1 ){
412 - iMin = atoi(sqlite3_value_data(argv[0]));
457 + iMin = sqlite3_value_int(argv[0]);
413 458 if( iMin<0 ) iMin = 0;
414 459 if( iMin>=sizeof(zBuf) ) iMin = sizeof(zBuf)-1;
415 460 }else{
416 461 iMin = 1;
417 462 }
418 463 if( argc>=2 ){
419 - iMax = atoi(sqlite3_value_data(argv[1]));
464 + iMax = sqlite3_value_int(argv[1]);
420 465 if( iMax<iMin ) iMax = iMin;
421 466 if( iMax>=sizeof(zBuf) ) iMax = sizeof(zBuf)-1;
422 467 }else{
423 468 iMax = 50;
424 469 }
425 470 n = iMin;
426 471 if( iMax>iMin ){
................................................................................
574 619 }else{
575 620 sqlite3VdbeMemCopy(pBest, pArg);
576 621 }
577 622 }
578 623 static void minMaxFinalize(sqlite3_context *context){
579 624 sqlite3_value *pRes;
580 625 pRes = (sqlite3_value *)sqlite3_get_context(context, sizeof(Mem));
581 -
582 626 if( pRes->flags ){
583 - switch( sqlite3_value_type(pRes) ){
584 - case SQLITE3_INTEGER:
585 - sqlite3_result_int32(context, sqlite3_value_int(pRes));
586 - break;
587 - case SQLITE3_FLOAT:
588 - sqlite3_result_double(context, sqlite3_value_float(pRes));
589 - case SQLITE3_TEXT:
590 - case SQLITE3_BLOB:
591 - sqlite3_result_text(context,
592 - sqlite3_value_data(pRes), sqlite3_value_bytes(pRes), 1);
593 - break;
594 - case SQLITE3_NULL:
595 - default:
596 - assert(0);
597 - }
627 + sqlite3_result(context, pRes);
598 628 }
599 629 }
600 630
601 631 /*
602 632 ** This function registered all of the above C functions as SQL
603 633 ** functions. This should be the only routine in this file with
604 634 ** external linkage.
605 635 */
606 636 void sqlite3RegisterBuiltinFunctions(sqlite *db){
607 637 static struct {
608 638 char *zName;
609 639 signed char nArg;
610 - signed char dataType;
611 640 u8 argType; /* 0: none. 1: db 2: (-1) */
612 641 void (*xFunc)(sqlite3_context*,int,sqlite3_value **);
613 642 } aFuncs[] = {
614 - { "min", -1, SQLITE_ARGS, 0, minmaxFunc },
615 - { "min", 0, 0, 0, 0 },
616 - { "max", -1, SQLITE_ARGS, 2, minmaxFunc },
617 - { "max", 0, 0, 2, 0 },
618 - { "typeof", 1, SQLITE_TEXT, 0, typeofFunc },
619 - { "classof", 1, SQLITE_TEXT, 0, typeofFunc }, /* FIX ME: hack */
620 - { "length", 1, SQLITE_NUMERIC, 0, lengthFunc },
621 - { "substr", 3, SQLITE_TEXT, 0, substrFunc },
622 - { "abs", 1, SQLITE_NUMERIC, 0, absFunc },
623 - { "round", 1, SQLITE_NUMERIC, 0, roundFunc },
624 - { "round", 2, SQLITE_NUMERIC, 0, roundFunc },
625 - { "upper", 1, SQLITE_TEXT, 0, upperFunc },
626 - { "lower", 1, SQLITE_TEXT, 0, lowerFunc },
627 - { "coalesce", -1, SQLITE_ARGS, 0, ifnullFunc },
628 - { "coalesce", 0, 0, 0, 0 },
629 - { "coalesce", 1, 0, 0, 0 },
630 - { "ifnull", 2, SQLITE_ARGS, 0, ifnullFunc },
631 - { "random", -1, SQLITE_NUMERIC, 0, randomFunc },
632 - { "like", 2, SQLITE_NUMERIC, 0, likeFunc },
633 - { "glob", 2, SQLITE_NUMERIC, 0, globFunc },
634 - { "nullif", 2, SQLITE_ARGS, 0, nullifFunc },
635 - { "sqlite_version",0,SQLITE_TEXT, 0, versionFunc},
636 - { "quote", 1, SQLITE_ARGS, 0, quoteFunc },
637 - { "last_insert_rowid", 0, SQLITE_NUMERIC, 1, last_insert_rowid },
638 - { "change_count", 0, SQLITE_NUMERIC, 1, change_count },
639 - { "last_statement_change_count",
640 - 0, SQLITE_NUMERIC, 1, last_statement_change_count },
643 + { "min", -1, 0, minmaxFunc },
644 + { "min", 0, 0, 0 },
645 + { "max", -1, 2, minmaxFunc },
646 + { "max", 0, 2, 0 },
647 + { "typeof", 1, 0, typeofFunc },
648 + { "classof", 1, 0, typeofFunc }, /* FIX ME: hack */
649 + { "length", 1, 0, lengthFunc },
650 + { "substr", 3, 0, substrFunc },
651 + { "abs", 1, 0, absFunc },
652 + { "round", 1, 0, roundFunc },
653 + { "round", 2, 0, roundFunc },
654 + { "upper", 1, 0, upperFunc },
655 + { "lower", 1, 0, lowerFunc },
656 + { "coalesce", -1, 0, ifnullFunc },
657 + { "coalesce", 0, 0, 0 },
658 + { "coalesce", 1, 0, 0 },
659 + { "ifnull", 2, 0, ifnullFunc },
660 + { "random", -1, 0, randomFunc },
661 + { "like", 2, 0, likeFunc },
662 + { "glob", 2, 0, globFunc },
663 + { "nullif", 2, 0, nullifFunc },
664 + { "sqlite_version", 0, 0, versionFunc},
665 + { "quote", 1, 0, quoteFunc },
666 + { "last_insert_rowid", 0, 1, last_insert_rowid },
667 + { "change_count", 0, 1, change_count },
668 + { "last_statement_change_count", 0, 1, last_statement_change_count },
641 669 #ifdef SQLITE_SOUNDEX
642 - { "soundex", 1, SQLITE_TEXT, 0, soundexFunc},
670 + { "soundex", 1, 0, soundexFunc},
643 671 #endif
644 672 #ifdef SQLITE_TEST
645 - { "randstr", 2, SQLITE_TEXT, 0, randStr },
673 + { "randstr", 2, 0, randStr },
646 674 #endif
647 675 };
648 676 static struct {
649 677 char *zName;
650 678 signed char nArg;
651 - signed char dataType;
652 679 u8 argType;
653 680 void (*xStep)(sqlite3_context*,int,sqlite3_value**);
654 681 void (*xFinalize)(sqlite3_context*);
655 682 } aAggs[] = {
656 - { "min", 1, 0, 0, minmaxStep, minMaxFinalize },
657 - { "max", 1, 0, 2, minmaxStep, minMaxFinalize },
658 - { "sum", 1, SQLITE_NUMERIC, 0, sumStep, sumFinalize },
659 - { "avg", 1, SQLITE_NUMERIC, 0, sumStep, avgFinalize },
660 - { "count", 0, SQLITE_NUMERIC, 0, countStep, countFinalize },
661 - { "count", 1, SQLITE_NUMERIC, 0, countStep, countFinalize },
683 + { "min", 1, 0, minmaxStep, minMaxFinalize },
684 + { "max", 1, 2, minmaxStep, minMaxFinalize },
685 + { "sum", 1, 0, sumStep, sumFinalize },
686 + { "avg", 1, 0, sumStep, avgFinalize },
687 + { "count", 0, 0, countStep, countFinalize },
688 + { "count", 1, 0, countStep, countFinalize },
662 689 #if 0
663 - { "stddev", 1, SQLITE_NUMERIC, 0, stdDevStep, stdDevFinalize },
690 + { "stddev", 1, 0, stdDevStep, stdDevFinalize },
664 691 #endif
665 692 };
666 - static const char *azTypeFuncs[] = { "min", "max", "typeof" };
667 693 int i;
668 694
669 695 for(i=0; i<sizeof(aFuncs)/sizeof(aFuncs[0]); i++){
670 696 void *pArg = aFuncs[i].argType==2 ? (void*)(-1) : db;
671 697 sqlite3_create_function(db, aFuncs[i].zName, aFuncs[i].nArg, 0, 0,
672 698 pArg, aFuncs[i].xFunc, 0, 0);
673 - if( aFuncs[i].xFunc ){
674 - sqlite3_function_type(db, aFuncs[i].zName, aFuncs[i].dataType);
675 - }
676 699 }
677 700 for(i=0; i<sizeof(aAggs)/sizeof(aAggs[0]); i++){
678 701 void *pArg = aAggs[i].argType==2 ? (void*)(-1) : db;
679 702 sqlite3_create_function(db, aAggs[i].zName, aAggs[i].nArg, 0, 0, pArg,
680 703 0, aAggs[i].xStep, aAggs[i].xFinalize);
681 - sqlite3_function_type(db, aAggs[i].zName, aAggs[i].dataType);
682 - }
683 -
684 - for(i=0; i<sizeof(azTypeFuncs)/sizeof(azTypeFuncs[0]); i++){
685 - int n = strlen(azTypeFuncs[i]);
686 - FuncDef *p = sqlite3HashFind(&db->aFunc, azTypeFuncs[i], n);
687 - while( p ){
688 - p->includeTypes = 1;
689 - p = p->pNext;
690 - }
691 704 }
692 705 sqlite3RegisterDateTimeFunctions(db);
693 706 }
694 -
695 -
696 -
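The `aFuncs[]` and `aAggs[]` tables above drive registration: one static array of name/arity/function-pointer entries, walked in a loop. The same table-driven pattern can be shown in miniature; a standalone sketch with hypothetical names, independent of SQLite's FuncDef machinery:

```c
#include <string.h>
#include <assert.h>

typedef int (*DemoFunc)(int);

static int negate(int x){ return -x; }
static int square(int x){ return x*x; }

/* A miniature function table in the style of aFuncs[]. */
static const struct {
  const char *zName;   /* SQL-style name of the function */
  DemoFunc xFunc;      /* Implementation */
} aDemo[] = {
  { "negate", negate },
  { "square", square },
};

/* Linear lookup over the table, as the registration loop iterates it. */
static DemoFunc demo_lookup(const char *zName){
  unsigned int i;
  for(i=0; i<sizeof(aDemo)/sizeof(aDemo[0]); i++){
    if( strcmp(aDemo[i].zName, zName)==0 ) return aDemo[i].xFunc;
  }
  return 0;
}
```

The `sizeof(a)/sizeof(a[0])` idiom for the table length is the same one the registration loops in sqlite3RegisterBuiltinFunctions() use.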
Changes to src/main.c.
10 10 **
11 11 *************************************************************************
12 12 ** Main file for the SQLite library. The routines in this file
13 13 ** implement the programmer interface to the library. Routines in
14 14 ** other files are for internal use by SQLite and should not be
15 15 ** accessed by users of the library.
16 16 **
17 -** $Id: main.c,v 1.193 2004/05/26 10:11:06 danielk1977 Exp $
17 +** $Id: main.c,v 1.194 2004/05/26 16:54:43 drh Exp $
18 18 */
19 19 #include "sqliteInt.h"
20 20 #include "os.h"
21 21 #include <ctype.h>
22 22
23 23 /*
24 24 ** A pointer to this structure is used to communicate information
................................................................................
408 408 }
409 409 return rc;
410 410 }
411 411
412 412 /*
413 413 ** Return the ROWID of the most recent insert
414 414 */
415 -int sqlite3_last_insert_rowid(sqlite *db){
415 +long long int sqlite3_last_insert_rowid(sqlite *db){
416 416 return db->lastRowid;
417 417 }
418 418
419 419 /*
420 420 ** Return the number of changes in the most recent call to sqlite3_exec().
421 421 */
422 422 int sqlite3_changes(sqlite *db){
................................................................................
691 691 }
692 692 rc = sqlite3_create_function(db, zFunctionName8, nArg, eTextRep,
693 693 iCollateArg, pUserData, xFunc, xStep, xFinal);
694 694 sqliteFree(zFunctionName8);
695 695 return rc;
696 696 }
697 697
698 -/*
699 -** Change the datatype for all functions with a given name. See the
700 -** header comment for the prototype of this function in sqlite.h for
701 -** additional information.
702 -*/
703 -int sqlite3_function_type(sqlite *db, const char *zName, int dataType){
704 - FuncDef *p = (FuncDef*)sqlite3HashFind(&db->aFunc, zName, strlen(zName));
705 - while( p ){
706 - p->dataType = dataType;
707 - p = p->pNext;
708 - }
709 - return SQLITE_OK;
710 -}
711 -
712 698 /*
713 699 ** Register a trace function. The pArg from the previously registered trace
714 700 ** is returned.
715 701 **
716 702 ** A NULL trace function means that no tracing is executes. A non-NULL
717 703 ** trace is a pointer to a function that is invoked at the start of each
718 704 ** sqlite3_exec().
................................................................................
1024 1010 db->onError = OE_Default;
1025 1011 db->priorNewRowid = 0;
1026 1012 db->magic = SQLITE_MAGIC_BUSY;
1027 1013 db->nDb = 2;
1028 1014 db->aDb = db->aDbStatic;
1029 1015 db->enc = def_enc;
1030 1016 /* db->flags |= SQLITE_ShortColNames; */
1031 - sqlite3HashInit(&db->aFunc, SQLITE_HASH_STRING, 1);
1017 + sqlite3HashInit(&db->aFunc, SQLITE_HASH_STRING, 0);
1032 1018 sqlite3HashInit(&db->aCollSeq, SQLITE_HASH_STRING, 0);
1033 1019 for(i=0; i<db->nDb; i++){
1034 1020 sqlite3HashInit(&db->aDb[i].tblHash, SQLITE_HASH_STRING, 0);
1035 1021 sqlite3HashInit(&db->aDb[i].idxHash, SQLITE_HASH_STRING, 0);
1036 1022 sqlite3HashInit(&db->aDb[i].trigHash, SQLITE_HASH_STRING, 0);
1037 1023 sqlite3HashInit(&db->aDb[i].aFKey, SQLITE_HASH_STRING, 1);
1038 1024 }
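The change to sqlite3_last_insert_rowid() above widens its return type from `int` to `long long int`. The point of the widening is that rowids past 2147483647 (2^31-1) no longer truncate; a standalone illustration with a hypothetical stand-in function, not the real API:

```c
#include <assert.h>

/* Hypothetical stand-in for a rowid getter. The value chosen
** lies beyond the 32-bit signed range, so it is representable
** under the new long long return type but not the old int one. */
static long long fake_last_insert_rowid(void){
  return 5000000000LL;
}
```

The companion change in func.c, switching the SQL-level `last_insert_rowid()` function from `sqlite3_result_int32` to `sqlite3_result_int64`, carries the same widening through to SQL results.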
Changes to src/select.c.
8 8 ** May you find forgiveness for yourself and forgive others.
9 9 ** May you share freely, never taking more than you give.
10 10 **
11 11 *************************************************************************
12 12 ** This file contains C code routines that are called by the parser
13 13 ** to handle SELECT statements in SQLite.
14 14 **
15 -** $Id: select.c,v 1.177 2004/05/26 10:11:06 danielk1977 Exp $
15 +** $Id: select.c,v 1.178 2004/05/26 16:54:44 drh Exp $
16 16 */
17 17 #include "sqliteInt.h"
18 18
19 19
20 20 /*
21 21 ** Allocate a new Select structure and return a pointer to that
22 22 ** structure.
................................................................................
2334 2334 /* Reset the aggregator
2335 2335 */
2336 2336 if( isAgg ){
2337 2337 sqlite3VdbeAddOp(v, OP_AggReset, 0, pParse->nAgg);
2338 2338 for(i=0; i<pParse->nAgg; i++){
2339 2339 FuncDef *pFunc;
2340 2340 if( (pFunc = pParse->aAgg[i].pFunc)!=0 && pFunc->xFinalize!=0 ){
2341 - sqlite3VdbeOp3(v, OP_AggInit, 0, i, (char*)pFunc, P3_POINTER);
2341 + sqlite3VdbeOp3(v, OP_AggInit, 0, i, (char*)pFunc, P3_FUNCDEF);
2342 2342 }
2343 2343 }
2344 2344 if( pGroupBy==0 ){
2345 2345 sqlite3VdbeAddOp(v, OP_String, 0, 0);
2346 2346 sqlite3VdbeAddOp(v, OP_AggFocus, 0, 0);
2347 2347 }
2348 2348 }
................................................................................
2408 2408 if( !pAgg->isAgg ) continue;
2409 2409 assert( pAgg->pFunc!=0 );
2410 2410 assert( pAgg->pFunc->xStep!=0 );
2411 2411 pDef = pAgg->pFunc;
2412 2412 pE = pAgg->pExpr;
2413 2413 assert( pE!=0 );
2414 2414 assert( pE->op==TK_AGG_FUNCTION );
2415 - nExpr = sqlite3ExprCodeExprList(pParse, pE->pList, pDef->includeTypes);
2415 + nExpr = sqlite3ExprCodeExprList(pParse, pE->pList);
2416 2416 sqlite3VdbeAddOp(v, OP_Integer, i, 0);
2417 2417 sqlite3VdbeOp3(v, OP_AggFunc, 0, nExpr, (char*)pDef, P3_POINTER);
2418 2418 }
2419 2419 }
2420 2420
2421 2421 /* End the database scan loop.
2422 2422 */
Changes to src/sqlite.h.in.
8 8 ** May you find forgiveness for yourself and forgive others.
9 9 ** May you share freely, never taking more than you give.
10 10 **
11 11 *************************************************************************
12 12 ** This header file defines the interface that the SQLite library
13 13 ** presents to client programs.
14 14 **
15 -** @(#) $Id: sqlite.h.in,v 1.80 2004/05/26 06:18:38 danielk1977 Exp $
15 +** @(#) $Id: sqlite.h.in,v 1.81 2004/05/26 16:54:44 drh Exp $
16 16 */
17 17 #ifndef _SQLITE_H_
18 18 #define _SQLITE_H_
19 19 #include <stdarg.h> /* Needed for the definition of va_list */
20 20
21 21 /*
22 22 ** Make sure we can call this stuff from C++.
................................................................................
159 159 ** the value of the INTEGER PRIMARY KEY column if there is such a column,
160 160 ** otherwise the key is generated at random. The unique key is always
161 161 ** available as the ROWID, OID, or _ROWID_ column.) The following routine
162 162 ** returns the integer key of the most recent insert in the database.
163 163 **
164 164 ** This function is similar to the mysql_insert_id() function from MySQL.
165 165 */
166 -int sqlite3_last_insert_rowid(sqlite*);
166 +long long int sqlite3_last_insert_rowid(sqlite*);
167 167
168 168 /*
169 169 ** This function returns the number of database rows that were changed
170 170 ** (or inserted or deleted) by the most recent called sqlite3_exec().
171 171 **
172 172 ** All changes are counted, even if they were later undone by a
173 173 ** ROLLBACK or ABORT. Except, changes associated with creating and
................................................................................
1120 1120 int iCollateArg,
1121 1121 void*,
1122 1122 void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
1123 1123 void (*xStep)(sqlite3_context*,int,sqlite3_value**),
1124 1124 void (*xFinal)(sqlite3_context*)
1125 1125 );
1126 1126
1127 -/*
1128 -** Use the following routine to define the datatype returned by a
1129 -** user-defined function. The second argument can be one of the
1130 -** constants SQLITE_NUMERIC, SQLITE_TEXT, or SQLITE_ARGS or it
1131 -** can be an integer greater than or equal to zero. When the datatype
1132 -** parameter is non-negative, the type of the result will be the
1133 -** same as the datatype-th argument. If datatype==SQLITE_NUMERIC
1134 -** then the result is always numeric. If datatype==SQLITE_TEXT then
1135 -** the result is always text. If datatype==SQLITE_ARGS then the result
1136 -** is numeric if any argument is numeric and is text otherwise.
1137 -*/
1138 -int sqlite3_function_type(
1139 - sqlite *db, /* The database there the function is registered */
1140 - const char *zName, /* Name of the function */
1141 - int datatype /* The datatype for this function */
1142 -);
1143 -#define SQLITE_NUMERIC (-1)
1144 -#define SQLITE_TEXT (-2)
1145 -#define SQLITE_ARGS (-3)
1146 -
1147 1127 /*
1148 1128 ** The next routine returns the number of calls to xStep for a particular
1149 1129 ** aggregate function instance. The current call to xStep counts so this
1150 1130 ** routine always returns at least 1.
1151 1131 */
1152 1132 int sqlite3_aggregate_count(sqlite3_context*);
1153 1133
................................................................................
1319 1299 ** characters) in the string passed as the second argument. If the third
1320 1300 ** parameter is negative, then the string is read up to the first nul
1321 1301 ** terminator character.
1322 1302 */
1323 1303 void sqlite3_result_error(sqlite3_context*, const char*, int);
1324 1304 void sqlite3_result_error16(sqlite3_context*, const void*, int);
1325 1305
1306 +/*
1307 +** Copy a function parameter into the result of the function.
1308 +*/
1309 +void sqlite3_result(sqlite3_context*, sqlite3_value*);
1310 +
1326 1311 #ifdef __cplusplus
1327 1312 } /* End of the 'extern "C"' block */
1328 1313 #endif
1329 1314 #endif
Changes to src/sqliteInt.h.
7 7 ** May you do good and not evil.
8 8 ** May you find forgiveness for yourself and forgive others.
9 9 ** May you share freely, never taking more than you give.
10 10 **
11 11 *************************************************************************
12 12 ** Internal interface definitions for SQLite.
13 13 **
14 -** @(#) $Id: sqliteInt.h,v 1.252 2004/05/26 06:58:44 danielk1977 Exp $
14 +** @(#) $Id: sqliteInt.h,v 1.253 2004/05/26 16:54:45 drh Exp $
15 15 */
16 16 #include "config.h"
17 17 #include "sqlite.h"
18 18 #include "hash.h"
19 19 #include "parse.h"
20 20 #include <stdio.h>
21 21 #include <stdlib.h>
................................................................................
156 156 ** This macro casts a pointer to an integer. Useful for doing
157 157 ** pointer arithmetic.
158 158 */
159 159 #define Addr(X) ((uptr)X)
160 160
161 161 /*
162 162 ** The maximum number of bytes of data that can be put into a single
163 -** row of a single table. The upper bound on this limit is 16777215
164 -** bytes (or 16MB-1). We have arbitrarily set the limit to just 1MB
165 -** here because the overflow page chain is inefficient for really big
166 -** records and we want to discourage people from thinking that
163 +** row of a single table. The upper bound on this limit is
164 +** 9223372036854775808 bytes (or 2**63). We have arbitrarily set the
165 +** limit to just 1MB here because the overflow page chain is inefficient
166 +** for really big records and we want to discourage people from thinking that
167 167 ** multi-megabyte records are OK. If your needs are different, you can
168 168 ** change this define and recompile to increase or decrease the record
169 169 ** size.
170 -**
171 -** The 16777198 is computed as follows: 238 bytes of payload on the
172 -** original pages plus 16448 overflow pages each holding 1020 bytes of
173 -** data.
174 170 */
175 171 #define MAX_BYTES_PER_ROW 1048576
176 -/* #define MAX_BYTES_PER_ROW 16777198 */
177 172
178 173 /*
179 174 ** If memory allocation problems are found, recompile with
180 175 **
181 176 ** -DMEMORY_DEBUG=1
182 177 **
183 178 ** to enable some sanity checking on malloc() and free(). To
................................................................................
330 325 #define TEXT_Utf16le 2
331 326 #define TEXT_Utf16be 3
332 327 #define TEXT_Utf16 (SQLITE3_BIGENDIAN?TEXT_Utf16be:TEXT_Utf16le)
333 328
334 329 /*
335 330 ** Each database is an instance of the following structure.
336 331 **
337 -** The sqlite.file_format is initialized by the database file
338 -** and helps determines how the data in the database file is
339 -** represented. This field allows newer versions of the library
340 -** to read and write older databases. The various file formats
341 -** are as follows:
342 -**
343 -** file_format==1 Version 2.1.0.
344 -** file_format==2 Version 2.2.0. Add support for INTEGER PRIMARY KEY.
345 -** file_format==3 Version 2.6.0. Fix empty-string index bug.
346 -** file_format==4 Version 2.7.0. Add support for separate numeric and
347 -** text datatypes.
348 -**
349 332 ** The sqlite.temp_store determines where temporary database files
350 333 ** are stored. If 1, then a file is created to hold those tables. If
351 334 ** 2, then they are held in memory. 0 means use the default value in
352 335 ** the TEMP_STORE macro.
353 336 **
354 337 ** The sqlite.lastRowid records the last insert rowid generated by an
355 338 ** insert statement. Inserts on views do not affect its value. Each
................................................................................
458 441 /*
459 442 ** Each SQL function is defined by an instance of the following
460 443 ** structure. A pointer to this structure is stored in the sqlite.aFunc
461 444 ** hash table. When multiple functions have the same name, the hash table
462 445 ** points to a linked list of these structures.
463 446 */
464 447 struct FuncDef {
465 - void (*xFunc)(sqlite3_context*,int,sqlite3_value**); /* Regular function */
466 - void (*xStep)(sqlite3_context*,int,sqlite3_value**); /* Aggregate function step */
467 - void (*xFinalize)(sqlite3_context*); /* Aggregate function finializer */
468 - signed char nArg; /* Number of arguments. -1 means unlimited */
469 - signed char dataType; /* Arg that determines datatype. -1=NUMERIC, */
470 - /* -2=TEXT. -3=SQLITE_ARGS */
471 - u8 includeTypes; /* Add datatypes to args of xFunc and xStep */
472 - void *pUserData; /* User data parameter */
473 - FuncDef *pNext; /* Next function with same name */
448 + char *zName; /* SQL name of the function */
449 + int nArg; /* Number of arguments. -1 means unlimited */
450 + void *pUserData; /* User data parameter */
451 + FuncDef *pNext; /* Next function with same name */
452 + void (*xFunc)(sqlite3_context*,int,sqlite3_value**); /* Regular function */
453 + void (*xStep)(sqlite3_context*,int,sqlite3_value**); /* Aggregate step */
 454 +  void (*xFinalize)(sqlite3_context*);                 /* Aggregate finalizer */
474 455 };
475 456
476 457 /*
477 458 ** information about each column of an SQL table is held in an instance
478 459 ** of this structure.
479 460 */
480 461 struct Column {
................................................................................
1252 1233 Table *sqlite3SrcListLookup(Parse*, SrcList*);
1253 1234 int sqlite3IsReadOnly(Parse*, Table*, int);
1254 1235 void sqlite3DeleteFrom(Parse*, SrcList*, Expr*);
1255 1236 void sqlite3Update(Parse*, SrcList*, ExprList*, Expr*, int);
1256 1237 WhereInfo *sqlite3WhereBegin(Parse*, SrcList*, Expr*, int, ExprList**);
1257 1238 void sqlite3WhereEnd(WhereInfo*);
1258 1239 void sqlite3ExprCode(Parse*, Expr*);
1259 -int sqlite3ExprCodeExprList(Parse*, ExprList*, int);
1240 +int sqlite3ExprCodeExprList(Parse*, ExprList*);
1260 1241 void sqlite3ExprIfTrue(Parse*, Expr*, int, int);
1261 1242 void sqlite3ExprIfFalse(Parse*, Expr*, int, int);
1262 1243 Table *sqlite3FindTable(sqlite*,const char*, const char*);
1263 1244 Table *sqlite3LocateTable(Parse*,const char*, const char*);
1264 1245 Index *sqlite3FindIndex(sqlite*,const char*, const char*);
1265 1246 void sqlite3UnlinkAndDeleteIndex(sqlite*,Index*);
1266 1247 void sqlite3Copy(Parse*, SrcList*, Token*, Token*, int);
Changes to src/tclsqlite.c.
7 7 ** May you do good and not evil.
8 8 ** May you find forgiveness for yourself and forgive others.
9 9 ** May you share freely, never taking more than you give.
10 10 **
11 11 *************************************************************************
12 12 ** A TCL Interface to SQLite
13 13 **
14 -** $Id: tclsqlite.c,v 1.71 2004/05/26 06:18:38 danielk1977 Exp $
14 +** $Id: tclsqlite.c,v 1.72 2004/05/26 16:54:46 drh Exp $
15 15 */
16 16 #ifndef NO_TCL /* Omit this whole file if TCL is unavailable */
17 17
18 18 #include "sqliteInt.h"
19 19 #include "tcl.h"
20 20 #include <stdlib.h>
21 21 #include <string.h>
................................................................................
860 860 pFunc = (SqlFunc*)Tcl_Alloc( sizeof(*pFunc) + nScript + 1 );
861 861 if( pFunc==0 ) return TCL_ERROR;
862 862 pFunc->interp = interp;
863 863 pFunc->pNext = pDb->pFunc;
864 864 pFunc->zScript = (char*)&pFunc[1];
865 865 strcpy(pFunc->zScript, zScript);
866 866 sqlite3_create_function(pDb->db, zName, -1, 0, 0, pFunc, tclSqlFunc, 0, 0);
867 - sqlite3_function_type(pDb->db, zName, SQLITE_NUMERIC);
868 867 break;
869 868 }
870 869
871 870 /*
872 871 ** $db last_insert_rowid
873 872 **
874 873 ** Return an integer which is the ROWID for the most recent insert.
................................................................................
1241 1240 Tcl_GlobalEval(interp, zMainloop);
1242 1241 }
1243 1242 return 0;
1244 1243 }
1245 1244 #endif /* TCLSH */
1246 1245
1247 1246 #endif /* !defined(NO_TCL) */
1248 -
1249 -
1250 -
Changes to src/vdbe.c.
39 39 **
40 40 ** Various scripts scan this source file in order to generate HTML
41 41 ** documentation, headers files, or other derived files. The formatting
42 42 ** of the code in this file is, therefore, important. See other comments
43 43 ** in this file for details. If in doubt, do not deviate from existing
44 44 ** commenting and indentation practices when changing or adding code.
45 45 **
46 -** $Id: vdbe.c,v 1.334 2004/05/26 13:27:00 danielk1977 Exp $
46 +** $Id: vdbe.c,v 1.335 2004/05/26 16:54:47 drh Exp $
47 47 */
48 48 #include "sqliteInt.h"
49 49 #include "os.h"
50 50 #include <ctype.h>
51 51 #include "vdbeInt.h"
52 52
53 53 /*
................................................................................
5860 5860 assert( (pTos->flags & MEM_Short)==0 || pTos->z==pTos->zShort );
5861 5861 assert( (pTos->flags & MEM_Short)!=0 || pTos->z!=pTos->zShort );
5862 5862 }else{
5863 5863 /* Cannot define a string subtype for non-string objects */
5864 5864 assert( (pTos->flags & (MEM_Static|MEM_Dyn|MEM_Ephem|MEM_Short))==0 );
5865 5865 }
5866 5866 /* MEM_Null excludes all other types */
5867 - assert( pTos->flags==MEM_Null || (pTos->flags&MEM_Null)==0 );
5867 + assert( (pTos->flags&(MEM_Str|MEM_Int|MEM_Real|MEM_Blob))==0
5868 + || (pTos->flags&MEM_Null)==0 );
5868 5869 }
5869 5870 if( pc<-1 || pc>=p->nOp ){
5870 5871 sqlite3SetString(&p->zErrMsg, "jump destination out of range", (char*)0);
5871 5872 rc = SQLITE_INTERNAL;
5872 5873 }
5873 5874 if( p->trace && pTos>=p->aStack ){
5874 5875 int i;
Changes to src/vdbe.h.
11 11 *************************************************************************
12 12 ** Header file for the Virtual DataBase Engine (VDBE)
13 13 **
14 14 ** This header defines the interface to the virtual database engine
15 15 ** or VDBE. The VDBE implements an abstract machine that runs a
16 16 ** simple program to access and modify the underlying database.
17 17 **
18 -** $Id: vdbe.h,v 1.83 2004/05/26 10:11:07 danielk1977 Exp $
18 +** $Id: vdbe.h,v 1.84 2004/05/26 16:54:48 drh Exp $
19 19 */
20 20 #ifndef _SQLITE_VDBE_H_
21 21 #define _SQLITE_VDBE_H_
22 22 #include <stdio.h>
23 23
24 24 /*
25 25 ** A single VDBE is an opaque structure named "Vdbe". Only routines
................................................................................
65 65 ** Allowed values of VdbeOp.p3type
66 66 */
67 67 #define P3_NOTUSED 0 /* The P3 parameter is not used */
68 68 #define P3_DYNAMIC (-1) /* Pointer to a string obtained from sqliteMalloc() */
69 69 #define P3_STATIC (-2) /* Pointer to a static string */
70 70 #define P3_POINTER (-3) /* P3 is a pointer to some structure or object */
71 71 #define P3_COLLSEQ (-4) /* P3 is a pointer to a CollSeq structure */
72 -#define P3_KEYINFO (-5) /* P3 is a pointer to a KeyInfo structure */
72 +#define P3_FUNCDEF (-5) /* P3 is a pointer to a FuncDef structure */
73 +#define P3_KEYINFO (-6) /* P3 is a pointer to a KeyInfo structure */
73 74
74 75 /* When adding a P3 argument using P3_KEYINFO, a copy of the KeyInfo structure
75 76 ** is made. That copy is freed when the Vdbe is finalized. But if the
76 77 ** argument is P3_KEYINFO_HANDOFF, the passed in pointer is used. It still
77 78 ** gets freed when the Vdbe is finalized so it still should be obtained
78 79 ** from a single sqliteMalloc(). But no copy is made and the calling
79 80 ** function should *not* try to free the KeyInfo.
80 81 */
81 -#define P3_KEYINFO_HANDOFF (-6)
82 +#define P3_KEYINFO_HANDOFF (-7)
82 83
83 84 /*
84 85 ** The following macro converts a relative address in the p2 field
85 86 ** of a VdbeOp structure into a negative number so that
86 87 ** sqlite3VdbeAddOpList() knows that the address is relative. Calling
87 88 ** the macro again restores the address.
88 89 */
Changes to src/vdbeaux.c.
512 512 break;
513 513 }
514 514 case P3_COLLSEQ: {
515 515 CollSeq *pColl = (CollSeq*)pOp->p3;
516 516 sprintf(zTemp, "collseq(%.20s)", pColl->zName);
517 517 zP3 = zTemp;
518 518 break;
519 + }
520 + case P3_FUNCDEF: {
521 + FuncDef *pDef = (FuncDef*)pOp->p3;
522 + char zNum[30];
523 + sprintf(zTemp, "%.*s", nTemp, pDef->zName);
524 + sprintf(zNum,"(%d)", pDef->nArg);
525 + if( strlen(zTemp)+strlen(zNum)+1<=nTemp ){
526 + strcat(zTemp, zNum);
527 + }
528 + zP3 = zTemp;
529 + break;
519 530 }
520 531 default: {
521 532 zP3 = pOp->p3;
522 533 if( zP3==0 ){
523 534 zP3 = "";
524 535 }
525 536 }
................................................................................
1865 1876 */
1866 1877 memcpy(&pMem->z[pMem->n], "\0\0", nulTermLen);
1867 1878 pMem->n += nulTermLen;
1868 1879 pMem->flags |= MEM_Term;
1869 1880 }
1870 1881
1871 1882 /*
1872 -** The following nine routines, named sqlite3_result_*(), are used to
1883 +** The following ten routines, named sqlite3_result_*(), are used to
1873 1884 ** return values or errors from user-defined functions and aggregate
1874 1885 ** operations. They are commented in the header file sqlite.h (sqlite.h.in)
1875 1886 */
1887 +void sqlite3_result(sqlite3_context *pCtx, sqlite3_value *pValue){
1888 + sqlite3VdbeMemCopy(&pCtx->s, pValue);
1889 +}
1876 1890 void sqlite3_result_int32(sqlite3_context *pCtx, int iVal){
1877 1891 MemSetInt(&pCtx->s, iVal);
1878 1892 }
1879 1893 void sqlite3_result_int64(sqlite3_context *pCtx, i64 iVal){
1880 1894 MemSetInt(&pCtx->s, iVal);
1881 1895 }
1882 1896 void sqlite3_result_double(sqlite3_context *pCtx, double rVal){
................................................................................
1914 1928 pCtx->isError = 1;
1915 1929 MemSetStr(&pCtx->s, z, n, TEXT_Utf8, 1);
1916 1930 }
1917 1931 void sqlite3_result_error16(sqlite3_context *pCtx, const void *z, int n){
1918 1932 pCtx->isError = 1;
1919 1933 MemSetStr(&pCtx->s, z, n, TEXT_Utf16, 1);
1920 1934 }
1921 -
negative tcp_tw_count and other TIME_WAIT weirdness?
John Salmon ([email protected])
Mon, 30 Jun 2003 17:25:16 -0700
I have several fairly busy servers reporting a negative value
for tcp_tw_count. For example:
bash-2.05a# cat /proc/net/sockstat
sockets: used 121
TCP: inuse 50 orphan 0 tw -65048 alloc 81 mem 26
UDP: inuse 15
RAW: inuse 1
FRAG: inuse 0 memory 0
bash-2.05a#
When I look at netstat -n, I see many (hundreds) connections
stuck in TIME_WAIT. They've been there for at least a few hours,
and probably much longer (days).
Is this expected behavior? A known bug?
FWIW, I'm using a RedHat kernel, 2.4.18-24.7.xsmp on a 2-processor Athlon
system. If this looks like a bug I'll try to reproduce it with
an unmodified kernel.
Thanks,
John Salmon
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Let us assume a sequence as follows:
$S_{n} = (S_{n-1} * c_{1} + c_{2})\text{ mod } m$
This is the pseudorandom generator found in most programming languages' random function.
It is known that a prime $m$ results in a more uniform distribution of random numbers, because it yields a larger period for $S_{n}$. Consequently, $m$ is typically chosen to be a prime number.
Why do prime numbers typically result in larger periods than factorable numbers for modulo arithmetic?
Have you seen the closed form formula for $S_n$? – Gerry Myerson Apr 16 '13 at 6:03
@GerryMyerson I haven't; I'll look that up. – Emrakul Apr 16 '13 at 6:04
1 Answer
According to Wikipedia, the period is at most $m$, and is equal to $m$ only if
1. $\gcd(c_2,m)=1$,
2. $p\mid m$ implies $p\mid c_1-1$ for all prime $p$, and
3. $4\mid m$ implies $4\mid c_1-1$.
So $m$ needn't be prime, but it's easiest to meet and to check these conditions if it is.
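To make those conditions concrete, here is a small brute-force sketch (parameter values picked purely for illustration) that measures the period of $S_n = (S_{n-1} \cdot c_1 + c_2) \bmod m$ starting from seed 0:

```javascript
// Brute-force the period of S_n = (S_{n-1} * c1 + c2) mod m, seeded at 0.
function lcgPeriod(c1, c2, m) {
  const seen = new Map(); // state -> step at which it first appeared
  let s = 0;
  for (let step = 0; ; step++) {
    if (seen.has(s)) return step - seen.get(s); // length of the cycle we re-entered
    seen.set(s, step);
    s = (s * c1 + c2) % m;
  }
}

// m = 16, c1 = 5, c2 = 3 satisfies all three conditions:
//   gcd(3, 16) = 1;  2 | 16 and 2 | (5 - 1);  4 | 16 and 4 | (5 - 1)
console.log(lcgPeriod(5, 3, 16)); // 16 — the full period m

// m = 15, c1 = 5, c2 = 2: gcd(2, 15) = 1, but 3 | 15 while 3 does not
// divide c1 - 1 = 4, so condition 2 fails and the period collapses.
console.log(lcgPeriod(5, 2, 15)); // 2
```

Note this is a seed-dependent cycle measurement, which matches the full period exactly in the full-period case; it is only meant to show the conditions doing their job.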
That's interesting. I'm curious how the proof of those three lemmas work, but I suppose I shall have to read the paper! Thank you! – Emrakul Apr 17 '13 at 4:20
A plan, sketch, drawing, or outline designed to demonstrate or explain how something works or to clarify the relationship between the parts of a whole.
8 votes · 2 answers · 152 views
Generating Sparklines as Diagrams QGIS 2.10
The following seems to be an amalgamation of questions Is it possible to represent points as sparklines in ArcGIS for Desktop? and Pie-Chart Coordinates in QGIS. Basically I have 5 sets of ...
3 votes · 2 answers · 6k views
How to draw bar diagrams on the map?
I need to draw simple histograms in QGIS version 1.8 linked to data from an imported DB. I need a double bar diagram (eg: male/female - in/out) but I only find the possibility to set pie charts. Is ...
1 vote · 1 answer · 614 views
How can I move diagrams?
Is there a way to locate diagrams and move them wherever you want in QGIS 1.7.4/ 1.8.0? I created a pie chart under the tab "diagram". I can mark those diagrams with a green box and move them via the ...
1 vote · 1 answer · 420 views
How to draw lines for diagrams?
Just before submitting a bug: in Diagrams, there is the possibility to use Position AroundPoint or OverPoint. Whichever I choose, the Line Options selection stays gray: I can't choose any line. So, I suggest ...
Error message while a delivery is sent out
Avatar
Level 4
Hi there,
One of our users got the following error while saving a delivery. Can someone explain what this means?
Key value ‘688586909’ is duplicated for elements ‘Query Condition (targetPart)’ of the document being edited. Please rename on of the collection elements.
Regards,
Priyanka
5 Replies
Avatar
Level 10
Hi Priyanka,
Looks like the value mentioned in the error message is used twice. Can you share screenshots of your delivery and the conditions you are trying to set?
Thanks,
Florent
Avatar
Level 10
Hi Priyanka,
Any news on this error?
Let me know,
Florent
Avatar
Level 3
Hi @florentlb
Did you manage to find a solution for this issue?
I'm experiencing the same error but regarding a url:
SCM-120009 Key value '{url}' is duplicated for elements 'URL (url)' of the document being edited. Please rename on of the collection elements.
This is when I use a continuous delivery.
Regards
Leonie
Avatar
Level 1
Hi All,
I am getting similar error and I am unable to even stop the workflow. Sharing the snapshot below.
Roshann_0-1606073942024.png
No GDI Calls between GetHdc and ReleaseHdc
By Dinesh Beniwal on Jul 28, 2010
In this article you will learn why no GDI calls should be made between GetHdc and ReleaseHdc.
This article has been excerpted from book "Graphics Programming with GDI+".
GDI+ currently has no support for raster operations. When we use R2_XOR pen operations, we use the Graphics.GetHdc() method to get the handle to the device context. While your application uses the HDC, GDI+ should not draw anything on the Graphics object until the Graphics.ReleaseHdc method is called. Every GetHdc call must be followed by a call to ReleaseHdc on a Graphics object, as in the following snippet:
IntPtr hdc1 = g1.GetHdc();
// Do something with hdc1
g1.ReleaseHdc (hdc1);
g2 = Graphics.FromImage (curBitmap);
IntPtr hdc1 = g1.GetHdc();
IntPtr hdc2 = g2.GetHdc();
BitBlt (hdc2, 0, 0,
    this.ClientRectangle.Width,
    this.ClientRectangle.Height,
    hdc1, 0, 0, 13369376); // 13369376 = 0x00CC0020 (SRCCOPY)
g2.DrawRectangle (Pens.Red, 40, 40, 200, 200); // GDI+ call before ReleaseHdc
g1.ReleaseHdc (hdc1);
g2.ReleaseHdc (hdc2);
If we make a GDI+ call after GetHdc, the system will throw an "object busy" exception. For example, in the preceding code snippet we make a DrawRectangle call after GetHdc and before ReleaseHdc. As a result we will get an exception saying, "The object is currently in use elsewhere."
Using GDI on a GDI+ Graphics Object Backed by a Bitmap
After a call to GetHdc on a Graphics object created from a bitmap, GDI returns a new HBITMAP structure. This bitmap does not contain the original image, but rather a sentinel pattern, which allows GDI+ to track changes to the bitmap. When ReleaseHdc is called, the changes are copied back to the original image. This type of device context is not suitable for raster operations, because the handle to the device context is considered write-only, and raster operations require it to be read-only. This approach may also degrade performance, because creating a new bitmap and saving changes back to the original bitmap may tie up your resources.
Dinesh Beniwal
I am working as VP Content Manager, responsible for content publishing, content development, and social relations. You can follow me on twitter @dbeniwal21
Creating unit tests in Uno.UI.Tests
Unit tests in Uno.UI.Tests run against a .NET Framework build of Uno.UI, which uses the 'real' Uno code for platform-independent components (eg the dependency-property system) and mocks platform-dependent aspects (eg actual rendering).
Adding tests here is closest to the 'traditional' unit test experience: you can run tests from the Visual Studio test window pane, easily debug the code you're modifying, etc. This is the ideal place to test platform-independent parts of the API, like dependency property behaviours and XAML-generated code.
Running tests in Uno.UI.Tests
1. Open and build the Uno.UI solution for the net461 target.
2. Open Test Explorer from the TEST menu.
3. Tests are listed under Uno.UI.Tests. You can run all tests or a subsection, with or without debugging. Tests run in a vanilla .NET Framework environment. (Note: You usually don't need to run Uno.Xaml.Tests tests locally, unless you're making changes to low-level XAML parsing in Uno.Xaml.)
Adding a new test
1. Locate the test class corresponding to the control or class you want to create a test for. If you need to add a new test class, create the file as Namespace_In_Snake_Case/ControlNameTests/Given_ControlName.cs. The class should be marked with the [TestClass] attribute.
2. Add tests for your cases, naming each one When_Your_Scenario and marking it with the [TestMethod] attribute. (For more information about the 'Given-When-Then' naming style, read https://martinfowler.com/bliki/GivenWhenThen.html.)
The mocking layer of Uno.UI for net461 has been added as needed, and depending on your case, you may encounter areas of functionality that aren't supported. Your options if that happens are either to add the missing mocking, or to add the test in Uno.UI.RuntimeTests instead.
Hypernova
A service for server-side rendering your JavaScript views
Join the chat at https://gitter.im/airbnb/hypernova
Why?
First and foremost, server-side rendering is a better user experience compared to just client-side rendering. The user gets the content faster, the webpage is more accessible when JS fails or is disabled, and search engines have an easier time indexing it.
Secondly, it provides a better developer experience. Writing the same markup twice both on the server in your preferred templating library and in JavaScript can be tedious and hard to maintain. Hypernova lets you write all of your view code in a single place without having to sacrifice the user’s experience.
How?
Diagram that visually explains how hypernova works
1. A user requests a page on your server.
2. Your server then gathers all the data it needs to render the page.
3. Your server uses a Hypernova client to submit an HTTP request to a Hypernova server.
4. Hypernova server computes all the views into an HTML string and sends them back to the client.
5. Your server then sends down the markup plus the JavaScript to the browser.
6. On the browser, JavaScript is used to progressively enhance the application and make it dynamic.
Terminology
• hypernova/server - Service that accepts data via HTTP request and responds with HTML.
• hypernova - The universal component that takes care of turning your view into the HTML structure it needs to server-render. On the browser it bootstraps the server-rendered markup and runs it.
• hypernova-${client} - This can be something like hypernova-ruby or hypernova-node. It is the client which gives your application the superpower of querying Hypernova and understanding how to fallback to client-rendering in case there is a failure.
Get Started
First you’ll need to install a few packages: the server, the browser component, and the client. For development purposes it is recommended to install them either alongside the code you wish to server-render or in the same application.
From here on out we’ll assume you’re using hypernova-ruby and React with hypernova-react.
Node
npm install hypernova --save
This package contains both the server and the client.
Next, lets configure the development server. To keep things simple we can put the configuration in your root folder, it can be named something like hypernova.js.
var hypernova = require('hypernova/server');
hypernova({
devMode: true,
getComponent(name) {
if (name === 'MyComponent.js') {
return require('./app/assets/javascripts/MyComponent.js');
}
return null;
},
port: 3030,
});
Only the getComponent function is required for Hypernova. All other configuration options are optional. Notes on getComponent can be found below.
We can run this server by starting it up with node.
node hypernova.js
If all goes well you should see a message that says "Connected". If there is an issue, a stack trace should appear in stderr.
Rails
If your server code is written in a language other than Ruby, you can build your own client for Hypernova. A spec exists that details how clients should function, as well as how they should fall back in case of failure.
Add this line to your application’s Gemfile:
gem 'hypernova'
And then execute:
$ bundle
Or install it yourself as:
$ gem install hypernova
Now let's add support on the Rails side for Hypernova. First, we’ll need to create an initializer.
config/initializers/hypernova_initializer.rb
Hypernova.configure do |config|
  config.host = "localhost"
  config.port = 3030 # The port where the node service is listening
end
In your controller, you’ll need an :around_filter so you can opt into Hypernova rendering of view partials.
class SampleController < ApplicationController
  around_filter :hypernova_render_support
end
And then in your view we call render_react_component.
<%= render_react_component('MyComponent.js', :name => 'Hypernova The Renderer') %>
JavaScript
Finally, let's set up MyComponent.js to be server-rendered. We will be using React to render.
const React = require('react');
const renderReact = require('hypernova-react').renderReact;
function MyComponent(props) {
  return <div>Hello, {props.name}!</div>;
}
module.exports = renderReact('MyComponent.js', MyComponent);
Visit the page and you should see your React component has been server-rendered. If you’d like to confirm, you can view the source of the page and look for data-hypernova-key. If you see a div filled with HTML then your component was server-rendered, if the div is empty then there was a problem and your component was client-rendered as a fall-back strategy.
If the div was empty, you can check stderr where you’re running the node service.
Debugging
The developer plugin for hypernova-ruby is useful for debugging issues with Hypernova and why it falls back to client-rendering. It’ll display a warning plus a stack trace on the page whenever a component fails to render server-side.
You can install the developer plugin in examples/simple/config/environments/development.rb
require 'hypernova'
require 'hypernova/plugins/development_mode_plugin'
Hypernova.add_plugin!(DevelopmentModePlugin.new)
You can also check the output of the server. The server outputs to stdout and stderr so if there is an error, check the process where you ran node hypernova.js and you should see the error.
Deploying
The recommended approach is running two separate servers, one that contains your server code and another that contains the Hypernova service. You’ll need to deploy the JavaScript code to the server that contains the Hypernova service as well.
Depending on how you have getComponent configured, you might need to restart your Hypernova service on every deploy. If getComponent caches any code then a restart is paramount so that Hypernova receives the new changes. Caching is recommended because it helps speed up the service.
FAQ
Isn’t sending an HTTP request slow?
There isn’t a lot of overhead or latency, especially if you keep the servers in close proximity to each other. It’s as fast as compiling many ERB templates and gives you the benefit of unifying your view code.
Why not an in-memory JS VM?
This is a valid option. If you’re looking for a siloed experience where the JS service is kept separate, then Hypernova is right for you. This approach also lends itself better to environments that don’t already have a JS VM available.
What if the server blows up?
If something bad happens while Hypernova is attempting to server-render your components it’ll default to failure mode where your page will be client-rendered instead. While this is a comfortable safety net, the goal is to server-render every request.
Pitfalls
These are pitfalls of server-rendering JavaScript code and are not specific to Hypernova.
• You’ll want to do any DOM-related manipulations in componentDidMount. componentDidMount runs on the browser but not the server, which means it’s safe to put DOM logic in there. Putting logic outside of the component, in the constructor, or in componentWillMount will cause the code to fail since the DOM isn’t present on the server.
• It is recommended that you run your code in a VM sandbox so that requests get a fresh new JavaScript environment. In the event that you decide not to use a VM, you should be aware that singleton patterns and globals run the risk of leaking memory and/or leaking data between requests. If you use createGetComponent you’ll get VM by default.
Clients
See clients.md
Browser
The included browser package is a barebones helper which renders markup on the server and then loads it on the browser.
List of compatible browser packages:
Server
Starting up a Hypernova server
const hypernova = require('hypernova/server');
hypernova({
getComponent: require,
});
Options, and their defaults
{
  // the limit at which body parser will throw
  bodyParser: {
    limit: 1024 * 1000,
  },
  // runs on a single process
  devMode: false,
  // how components will be retrieved
  getComponent: undefined,
  // if not overridden, default will return the number of reported cpus - 1
  getCPUs: undefined,
  // the host the app will bind to
  host: '0.0.0.0',
  // configure the default winston logger
  logger: {},
  // logger instance to use instead of the default winston logger
  loggerInstance: undefined,
  // the port the app will start on
  port: 8080,
  // default endpoint path
  endpoint: '/batch',
  // whether jobs in a batch are processed concurrently
  processJobsConcurrently: true,
  // arguments for server.listen, by default set to the configured [port, host]
  listenArgs: null,
  // default function to create an express app
  createApplication: () => express()
}
getComponent
This lets you provide your own implementation on how components are retrieved.
The most common use-case would be to use a VM to keep each module sandboxed between requests. You can use createGetComponent from Hypernova to retrieve a getComponent function that does this.
createGetComponent receives an Object whose keys are the component’s registered name and the value is the absolute path to the component.
const path = require('path');
hypernova({
  getComponent: createGetComponent({
    MyComponent: path.resolve(path.join('app', 'assets', 'javascripts', 'MyComponent.js')),
  }),
});
The simplest getComponent would be to use require. One drawback here is that your components would be cached between requests and thus could leak memory and/or data. Another drawback is that the files would have to exist relative to where this require is being used.
hypernova({
getComponent: require,
});
You can also fetch components asynchronously if you wish, and/or cache them. Just return a Promise from getComponent.
hypernova({
  getComponent(name) {
    return promiseFetch('https://MyComponent');
  },
});
getCPUs
This lets you specify the number of cores Hypernova will run workers on. Receives an argument containing the number of cores as reported by the OS.
If this method is not overridden, or if a falsy value is passed, the default method will return the number of reported cores minus 1.
loggerInstance
This lets you provide your own implementation of a logger as long as it has a log() method.
const winston = require('winston');
const options = {};
hypernova({
  loggerInstance: new winston.Logger({
    transports: [
      new winston.transports.Console(options),
    ],
  }),
});
processJobsConcurrently
This determines whether jobs in a batch are processed concurrently or serially. Serial execution is preferable if you use a renderer that is CPU bound and your plugins do not perform IO in the per job hooks.
createApplication
This lets you provide your own function that creates an express app. You are able to add your own express stuff like more routes, middlewares, etc. Notice that you must pass a function that returns an express app without calling the listen method!
const express = require('express');
const yourOwnAwesomeMiddleware = require('custom-middleware');
hypernova({
  createApplication: function() {
    const app = express();
    app.use(yourOwnAwesomeMiddleware);
    app.get('/health', function(req, res) {
      return res.status(200).send('OK');
    });
    // this is mandatory.
    return app;
  }
});
API
Browser
load
type DeserializedData = { [x: string]: any };
type ServerRenderedPair = { node: HTMLElement, data: DeserializedData };
function load(name: string): Array<ServerRenderedPair> {}
Looks up the server-rendered DOM markup and its corresponding script JSON payload and returns it.
serialize
type DeserializedData = { [x: string]: any };
function serialize(name: string, html: string, data: DeserializedData): string {}
Generates the markup that the browser will need to bootstrap your view on the browser.
toScript
type DeserializedData = { [x: string]: any };
type Attributes = { [x: string]: string };
function toScript(attrs: Attributes, props: DeserializedData): string {}
An interface that allows you to create extra script tags for loading more data on the browser.
fromScript
type DeserializedData = { [x: string]: any };
type Attributes = { [x: string]: string };
function fromScript(attrs: Attributes): DeserializedData {}
The inverse of toScript, this function runs on the browser and attempts to find and JSON.parse the contents of the server generated script. attrs is an object where the key will be a data-key to be placed on the element, and the value is the data attribute's value.
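To illustrate the idea behind this pair, here is a standalone sketch — not hypernova's actual implementation (the real fromScript locates the tag in the live DOM by its data attributes, whereas this sketch just round-trips a string):

```javascript
// Sketch of the toScript/fromScript idea: a JSON payload rides along in a
// <script type="application/json"> tag marked with data-* attributes, and the
// browser later finds the tag and JSON.parses its contents.
function toScriptSketch(attrs, props) {
  const dataAttrs = Object.keys(attrs)
    .map((key) => `data-${key}="${attrs[key]}"`)
    .join(' ');
  // Escaping "<" keeps a "</script>" inside the payload from closing the tag early.
  const json = JSON.stringify(props).replace(/</g, '\\u003c');
  return `<script type="application/json" ${dataAttrs}>${json}</script>`;
}

function fromScriptSketch(html) {
  // The real fromScript queries the DOM; here we just pull the payload back out.
  const match = html.match(/<script[^>]*>(.*)<\/script>/);
  return match ? JSON.parse(match[1]) : null;
}

const tag = toScriptSketch({ 'hypernova-key': 'MyComponentjs' }, { name: 'Hypernova' });
console.log(fromScriptSketch(tag)); // { name: 'Hypernova' }
```

The attribute names and the `Sketch` suffix are illustrative only; the point is the serialize-then-deserialize contract the two real functions share.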
Server
createGetComponent
type Files = { [key: string]: string };
type VMOptions = { cacheSize: number, environment?: () => any };
type GetComponent = (name: string) => any;
function createGetComponent(files: Files, vmOptions: VMOptions): GetComponent {}
Creates a getComponent function which can then be passed into Hypernova so it knows how to retrieve your components. createGetComponent will create a VM so all your bundles can run independently from each other on each request so they don’t interfere with global state. Each component is also cached at startup in order to help speed up run time. The files Object key is the component’s name and its value is the absolute path to the component.
createVM
type VMOptions = { cacheSize: number, environment?: () => any };
type Run = (name: string, code: string) => any;
type VMContainer = { exportsCache: any, run: Run };
function createVM(options: VMOptions): VMContainer {}
Creates a VM using Node’s vm module. Calling run will run the provided code and return its module.exports. exportsCache is an instance of lru-cache.
getFiles
function getFiles(fullPathStr: string): Array<{name: string, path: string}> {}
A utility function that allows you to retrieve all JS files recursively given an absolute path.
Module
Module is a class that mimics Node’s module interface. It makes require relative to whatever directory it’s run against and makes sure that each JavaScript module runs in its own clean sandbox.
loadModules
function loadModules(require: any, files: Array<string>): () => Module? {}
Loads all of the provided files into a Module that can be used as a parent Module inside a VM. This utility is useful when you need to pre-load a set of shims, shams, or JavaScript files that alter the runtime context. The require parameter is Node.js’ require function.
Blue measuring circle - how remove?
Since one of the recent updates I am continually activating what appears to be a new(?) measuring feature. It's annoying, as I am not sure what command, if any, activates it, or how I get rid of it.
While active the sort by distance feature in the overview is locked, and it is really getting in the way of hunting down victims, er, friends.
1. Is this feature new, and if so what is its formal name?
2. What are the commands that turn it on and off?
3. How do I get rid of it forever?
Cheers
Dr F
It is. It is there so that you can approach a specific point in space or send fighters to a specific point in space.
Most likely Q for Approach.
Don’t use Q.
That looks like the orbit ring. When you approach the range of orbiting, that’s what it will look like until you start orbiting the object of choice.
Thanks for the help - but doesn’t solve the issue:
Approach has no effect: I can be stationary in space with no objects selected and it is still measuring where my mouse is in space. Indeed it still tries to measure even when I am clicking on dialog boxes such as places or overview…
Its not the orbit circle - same color, but it measures distance and angle.
How do I turn this off? It appears to be on constantly. It gets in the way of any attempts at manual flying or orientation.
I have a custom keyboard map: can someone point me to what this is called in setup/modules? its driving me crazy…
The measuring feature is called ‘Tactical Overlay’ and can be turned off by pressing CTRL-D, or by clicking on the Tactical Overlay button left of your ship’s capacitor, health and stuff.
You don’t seem to know how it works. Let me explain it. It requires two mouse clicks. The steps on how you use it are:
1. Press Q-key for Approach and hold the key down.
2. Press the left mouse button to set the direction and horizontal distance.
3. Press the left mouse button again to set the up- or downward angle.
4. Release Q-key and the ship will now approach the position in space.
Basically, it lets you enter a 3D vector (a radial vector) that your ship will then approach in space. You can use it to fly to any free point in space and hold that position.
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.
Contributors: 36
Author Tokens Token Proportion Commits Commit Proportion
Rafael J. Wysocki 3739 79.49% 119 65.38%
Andy Grover 199 4.23% 1 0.55%
Ulf Hansson 139 2.95% 6 3.30%
Dmitry Torokhov 115 2.44% 1 0.55%
Shaohua Li 80 1.70% 2 1.10%
Mario Limonciello 59 1.25% 3 1.65%
Mika Westerberg 44 0.94% 5 2.75%
Huang Ying 41 0.87% 1 0.55%
Rui Zhang 35 0.74% 3 1.65%
Patrick Mochel 26 0.55% 4 2.20%
Sakari Ailus 22 0.47% 4 2.20%
Aaron Lu 21 0.45% 3 1.65%
David Howells 18 0.38% 1 0.55%
Daniel Drake 18 0.38% 1 0.55%
Raul E Rangel 17 0.36% 1 0.55%
Lin Ming 13 0.28% 3 1.65%
Heikki Krogerus 12 0.26% 1 0.55%
Ville Syrjälä 10 0.21% 1 0.55%
Len Brown 9 0.19% 3 1.65%
Björn Helgaas 8 0.17% 3 1.65%
Kenji Kaneshige 8 0.17% 1 0.55%
David Brownell 8 0.17% 1 0.55%
Tomeu Vizoso 8 0.17% 1 0.55%
Alex Williamson 8 0.17% 1 0.55%
Keith Busch 7 0.15% 1 0.55%
Taku Izumi 6 0.13% 1 0.55%
Lv Zheng 6 0.13% 1 0.55%
Dongdong Liu 6 0.13% 1 0.55%
Tri Vo 5 0.11% 1 0.55%
Jiri Kosina 5 0.11% 1 0.55%
Alan Stern 3 0.06% 1 0.55%
Kai-Heng Feng 2 0.04% 1 0.55%
Thomas Gleixner 2 0.04% 1 0.55%
Pavel Machek 2 0.04% 1 0.55%
Shanker Donthineni 2 0.04% 1 0.55%
Sumeet Pawnikar 1 0.02% 1 0.55%
Total 4704 182
// SPDX-License-Identifier: GPL-2.0-only
/*
* drivers/acpi/device_pm.c - ACPI device power management routines.
*
* Copyright (C) 2012, Intel Corp.
* Author: Rafael J. Wysocki <[email protected]>
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
#define pr_fmt(fmt) "PM: " fmt
#include <linux/acpi.h>
#include <linux/export.h>
#include <linux/mutex.h>
#include <linux/pm_qos.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
#include <linux/suspend.h>
#include "fan.h"
#include "internal.h"
/**
* acpi_power_state_string - String representation of ACPI device power state.
* @state: ACPI device power state to return the string representation of.
*/
const char *acpi_power_state_string(int state)
{
switch (state) {
case ACPI_STATE_D0:
return "D0";
case ACPI_STATE_D1:
return "D1";
case ACPI_STATE_D2:
return "D2";
case ACPI_STATE_D3_HOT:
return "D3hot";
case ACPI_STATE_D3_COLD:
return "D3cold";
default:
return "(unknown)";
}
}
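The string mapping above is easy to exercise outside the kernel. The sketch below mirrors the switch, assuming the ACPICA encoding where D0..D3cold are the integers 0..4; the local enum is a stand-in for the ACPI_STATE_* macros, not the kernel's definitions.

```c
#include <assert.h>
#include <string.h>

/* Local stand-ins for ACPI_STATE_D0..ACPI_STATE_D3_COLD; the 0..4
 * encoding mirrors the ACPICA headers (an assumption of this sketch). */
enum { D0, D1, D2, D3_HOT, D3_COLD };

/* Same mapping as acpi_power_state_string() above. */
static const char *power_state_string(int state)
{
	switch (state) {
	case D0:      return "D0";
	case D1:      return "D1";
	case D2:      return "D2";
	case D3_HOT:  return "D3hot";
	case D3_COLD: return "D3cold";
	default:      return "(unknown)";
	}
}
```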
static int acpi_dev_pm_explicit_get(struct acpi_device *device, int *state)
{
unsigned long long psc;
acpi_status status;
status = acpi_evaluate_integer(device->handle, "_PSC", NULL, &psc);
if (ACPI_FAILURE(status))
return -ENODEV;
*state = psc;
return 0;
}
/**
* acpi_device_get_power - Get power state of an ACPI device.
* @device: Device to get the power state of.
* @state: Place to store the power state of the device.
*
* This function does not update the device's power.state field, but it may
* update its parent's power.state field (when the parent's power state is
* unknown and the device's power state turns out to be D0).
*
* Also, it does not update power resource reference counters to ensure that
* the power state returned by it will be persistent and it may return a power
* state shallower than previously set by acpi_device_set_power() for @device
* (if that power state depends on any power resources).
*/
int acpi_device_get_power(struct acpi_device *device, int *state)
{
int result = ACPI_STATE_UNKNOWN;
struct acpi_device *parent;
int error;
if (!device || !state)
return -EINVAL;
parent = acpi_dev_parent(device);
if (!device->flags.power_manageable) {
/* TBD: Non-recursive algorithm for walking up hierarchy. */
*state = parent ? parent->power.state : ACPI_STATE_D0;
goto out;
}
/*
* Get the device's power state from power resources settings and _PSC,
* if available.
*/
if (device->power.flags.power_resources) {
error = acpi_power_get_inferred_state(device, &result);
if (error)
return error;
}
if (device->power.flags.explicit_get) {
int psc;
error = acpi_dev_pm_explicit_get(device, &psc);
if (error)
return error;
/*
* The power resources settings may indicate a power state
* shallower than the actual power state of the device, because
* the same power resources may be referenced by other devices.
*
* For systems predating ACPI 4.0 we assume that D3hot is the
* deepest state that can be supported.
*/
if (psc > result && psc < ACPI_STATE_D3_COLD)
result = psc;
else if (result == ACPI_STATE_UNKNOWN)
result = psc > ACPI_STATE_D2 ? ACPI_STATE_D3_HOT : psc;
}
/*
* If we were unsure about the device parent's power state up to this
* point, the fact that the device is in D0 implies that the parent has
* to be in D0 too, except if ignore_parent is set.
*/
if (!device->power.flags.ignore_parent && parent &&
parent->power.state == ACPI_STATE_UNKNOWN &&
result == ACPI_STATE_D0)
parent->power.state = ACPI_STATE_D0;
*state = result;
out:
acpi_handle_debug(device->handle, "Power state: %s\n",
acpi_power_state_string(*state));
return 0;
}
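The reconciliation between the power-resource-inferred state and _PSC above can be modelled standalone. The sketch assumes the ACPICA-style encoding (D0..D3cold as 0..4, "unknown" as 0xFF); reconcile_power_state() is a name invented here for illustration, not a kernel function.

```c
#include <assert.h>

/* Local stand-ins for the ACPI_STATE_* values; 0..4 with 0xFF as the
 * "unknown" sentinel mirrors the ACPICA headers (an assumption here). */
enum { D0, D1, D2, D3_HOT, D3_COLD, UNKNOWN = 0xFF };

/* Sketch of the reconciliation step in acpi_device_get_power(): the
 * state inferred from shared power resources may be shallower than the
 * device's actual state, so a deeper _PSC result wins (below D3cold),
 * and a bare _PSC result is capped at D3hot for pre-ACPI-4.0 systems. */
static int reconcile_power_state(int inferred, int psc)
{
	if (psc > inferred && psc < D3_COLD)
		return psc;
	if (inferred == UNKNOWN)
		return psc > D2 ? D3_HOT : psc;
	return inferred;
}
```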
static int acpi_dev_pm_explicit_set(struct acpi_device *adev, int state)
{
if (adev->power.states[state].flags.explicit_set) {
char method[5] = { '_', 'P', 'S', '0' + state, '\0' };
acpi_status status;
status = acpi_evaluate_object(adev->handle, method, NULL, NULL);
if (ACPI_FAILURE(status))
return -ENODEV;
}
return 0;
}
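The `'0' + state` trick used above to form the control method name can be checked in isolation; psx_name() below is a helper invented for this sketch.

```c
#include <assert.h>
#include <string.h>

/* Build the "_PSx" control method name for a numeric D-state (0..3),
 * the same way acpi_dev_pm_explicit_set() does. */
static const char *psx_name(int state)
{
	static char method[5];

	method[0] = '_';
	method[1] = 'P';
	method[2] = 'S';
	method[3] = '0' + state;
	method[4] = '\0';
	return method;
}
```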
/**
* acpi_device_set_power - Set power state of an ACPI device.
* @device: Device to set the power state of.
* @state: New power state to set.
*
* Callers must ensure that the device is power manageable before using this
* function.
*/
int acpi_device_set_power(struct acpi_device *device, int state)
{
int target_state = state;
int result = 0;
if (!device || !device->flags.power_manageable
|| (state < ACPI_STATE_D0) || (state > ACPI_STATE_D3_COLD))
return -EINVAL;
acpi_handle_debug(device->handle, "Power state change: %s -> %s\n",
acpi_power_state_string(device->power.state),
acpi_power_state_string(state));
/* Make sure this is a valid target state */
/* There is a special case for D0 addressed below. */
if (state > ACPI_STATE_D0 && state == device->power.state)
goto no_change;
if (state == ACPI_STATE_D3_COLD) {
/*
* For transitions to D3cold we need to execute _PS3 and then
* possibly drop references to the power resources in use.
*/
state = ACPI_STATE_D3_HOT;
/* If D3cold is not supported, use D3hot as the target state. */
if (!device->power.states[ACPI_STATE_D3_COLD].flags.valid)
target_state = state;
} else if (!device->power.states[state].flags.valid) {
acpi_handle_debug(device->handle, "Power state %s not supported\n",
acpi_power_state_string(state));
return -ENODEV;
}
if (!device->power.flags.ignore_parent) {
struct acpi_device *parent;
parent = acpi_dev_parent(device);
if (parent && state < parent->power.state) {
acpi_handle_debug(device->handle,
"Cannot transition to %s for parent in %s\n",
acpi_power_state_string(state),
acpi_power_state_string(parent->power.state));
return -ENODEV;
}
}
/*
* Transition Power
* ----------------
* In accordance with ACPI 6, _PSx is executed before manipulating power
* resources, unless the target state is D0, in which case _PS0 is
* supposed to be executed after turning the power resources on.
*/
if (state > ACPI_STATE_D0) {
/*
* According to ACPI 6, devices cannot go from lower-power
* (deeper) states to higher-power (shallower) states.
*/
if (state < device->power.state) {
acpi_handle_debug(device->handle,
"Cannot transition from %s to %s\n",
acpi_power_state_string(device->power.state),
acpi_power_state_string(state));
return -ENODEV;
}
/*
* If the device goes from D3hot to D3cold, _PS3 has been
* evaluated for it already, so skip it in that case.
*/
if (device->power.state < ACPI_STATE_D3_HOT) {
result = acpi_dev_pm_explicit_set(device, state);
if (result)
goto end;
}
if (device->power.flags.power_resources)
result = acpi_power_transition(device, target_state);
} else {
int cur_state = device->power.state;
if (device->power.flags.power_resources) {
result = acpi_power_transition(device, ACPI_STATE_D0);
if (result)
goto end;
}
if (cur_state == ACPI_STATE_D0) {
int psc;
/* Nothing to do here if _PSC is not present. */
if (!device->power.flags.explicit_get)
goto no_change;
/*
* The power state of the device was set to D0 last
* time, but that might have happened before a
* system-wide transition involving the platform
* firmware, so it may be necessary to evaluate _PS0
* for the device here. However, use extra care here
* and evaluate _PSC to check the device's current power
* state, and only invoke _PS0 if the evaluation of _PSC
* is successful and it returns a power state different
* from D0.
*/
result = acpi_dev_pm_explicit_get(device, &psc);
if (result || psc == ACPI_STATE_D0)
goto no_change;
}
result = acpi_dev_pm_explicit_set(device, ACPI_STATE_D0);
}
end:
if (result) {
acpi_handle_debug(device->handle,
"Failed to change power state to %s\n",
acpi_power_state_string(target_state));
} else {
device->power.state = target_state;
acpi_handle_debug(device->handle, "Power state changed to %s\n",
acpi_power_state_string(target_state));
}
return result;
no_change:
acpi_handle_debug(device->handle, "Already in %s\n",
acpi_power_state_string(state));
return 0;
}
EXPORT_SYMBOL(acpi_device_set_power);
int acpi_bus_set_power(acpi_handle handle, int state)
{
struct acpi_device *device = acpi_fetch_acpi_dev(handle);
if (device)
return acpi_device_set_power(device, state);
return -ENODEV;
}
EXPORT_SYMBOL(acpi_bus_set_power);
int acpi_bus_init_power(struct acpi_device *device)
{
int state;
int result;
if (!device)
return -EINVAL;
device->power.state = ACPI_STATE_UNKNOWN;
if (!acpi_device_is_present(device)) {
device->flags.initialized = false;
return -ENXIO;
}
result = acpi_device_get_power(device, &state);
if (result)
return result;
if (state < ACPI_STATE_D3_COLD && device->power.flags.power_resources) {
/* Reference count the power resources. */
result = acpi_power_on_resources(device, state);
if (result)
return result;
if (state == ACPI_STATE_D0) {
/*
* If _PSC is not present and the state inferred from
* power resources appears to be D0, it still may be
* necessary to execute _PS0 at this point, because
* another device using the same power resources may
* have been put into D0 previously and that's why we
* see D0 here.
*/
result = acpi_dev_pm_explicit_set(device, state);
if (result)
return result;
}
} else if (state == ACPI_STATE_UNKNOWN) {
/*
* No power resources and missing _PSC? Cross fingers and make
* it D0 in the hope that this is what the BIOS put the device into.
* [We tried to force D0 here by executing _PS0, but that broke
* Toshiba P870-303 in a nasty way.]
*/
state = ACPI_STATE_D0;
}
device->power.state = state;
return 0;
}
/**
* acpi_device_fix_up_power - Force device with missing _PSC into D0.
* @device: Device object whose power state is to be fixed up.
*
* Devices without power resources and _PSC, but having _PS0 and _PS3 defined,
* are assumed to be put into D0 by the BIOS. However, in some cases that may
* not be the case and this function should be used then.
*/
int acpi_device_fix_up_power(struct acpi_device *device)
{
int ret = 0;
if (!device->power.flags.power_resources
&& !device->power.flags.explicit_get
&& device->power.state == ACPI_STATE_D0)
ret = acpi_dev_pm_explicit_set(device, ACPI_STATE_D0);
return ret;
}
EXPORT_SYMBOL_GPL(acpi_device_fix_up_power);
static int fix_up_power_if_applicable(struct acpi_device *adev, void *not_used)
{
if (adev->status.present && adev->status.enabled)
acpi_device_fix_up_power(adev);
return 0;
}
/**
* acpi_device_fix_up_power_extended - Force device and its children into D0.
* @adev: Parent device object whose power state is to be fixed up.
*
* Call acpi_device_fix_up_power() for @adev and its children so long as they
* are reported as present and enabled.
*/
void acpi_device_fix_up_power_extended(struct acpi_device *adev)
{
acpi_device_fix_up_power(adev);
acpi_dev_for_each_child(adev, fix_up_power_if_applicable, NULL);
}
EXPORT_SYMBOL_GPL(acpi_device_fix_up_power_extended);
int acpi_device_update_power(struct acpi_device *device, int *state_p)
{
int state;
int result;
if (device->power.state == ACPI_STATE_UNKNOWN) {
result = acpi_bus_init_power(device);
if (!result && state_p)
*state_p = device->power.state;
return result;
}
result = acpi_device_get_power(device, &state);
if (result)
return result;
if (state == ACPI_STATE_UNKNOWN) {
state = ACPI_STATE_D0;
result = acpi_device_set_power(device, state);
if (result)
return result;
} else {
if (device->power.flags.power_resources) {
/*
* We don't need to really switch the state, but we need
* to update the power resources' reference counters.
*/
result = acpi_power_transition(device, state);
if (result)
return result;
}
device->power.state = state;
}
if (state_p)
*state_p = state;
return 0;
}
EXPORT_SYMBOL_GPL(acpi_device_update_power);
int acpi_bus_update_power(acpi_handle handle, int *state_p)
{
struct acpi_device *device = acpi_fetch_acpi_dev(handle);
if (device)
return acpi_device_update_power(device, state_p);
return -ENODEV;
}
EXPORT_SYMBOL_GPL(acpi_bus_update_power);
bool acpi_bus_power_manageable(acpi_handle handle)
{
struct acpi_device *device = acpi_fetch_acpi_dev(handle);
return device && device->flags.power_manageable;
}
EXPORT_SYMBOL(acpi_bus_power_manageable);
static int acpi_power_up_if_adr_present(struct acpi_device *adev, void *not_used)
{
if (!(adev->flags.power_manageable && adev->pnp.type.bus_address))
return 0;
acpi_handle_debug(adev->handle, "Power state: %s\n",
acpi_power_state_string(adev->power.state));
if (adev->power.state == ACPI_STATE_D3_COLD)
return acpi_device_set_power(adev, ACPI_STATE_D0);
return 0;
}
/**
* acpi_dev_power_up_children_with_adr - Power up children with valid _ADR
* @adev: Parent ACPI device object.
*
* Change the power states of the direct children of @adev that are in D3cold
* and hold valid _ADR objects to D0 in order to allow bus (e.g. PCI)
* enumeration code to access them.
*/
void acpi_dev_power_up_children_with_adr(struct acpi_device *adev)
{
acpi_dev_for_each_child(adev, acpi_power_up_if_adr_present, NULL);
}
/**
* acpi_dev_power_state_for_wake - Deepest power state for wakeup signaling
* @adev: ACPI companion of the target device.
*
* Evaluate _S0W for @adev and return the value produced by it or return
* ACPI_STATE_UNKNOWN on errors (including _S0W not present).
*/
u8 acpi_dev_power_state_for_wake(struct acpi_device *adev)
{
unsigned long long state;
acpi_status status;
status = acpi_evaluate_integer(adev->handle, "_S0W", NULL, &state);
if (ACPI_FAILURE(status))
return ACPI_STATE_UNKNOWN;
return state;
}
#ifdef CONFIG_PM
static DEFINE_MUTEX(acpi_pm_notifier_lock);
static DEFINE_MUTEX(acpi_pm_notifier_install_lock);
void acpi_pm_wakeup_event(struct device *dev)
{
pm_wakeup_dev_event(dev, 0, acpi_s2idle_wakeup());
}
EXPORT_SYMBOL_GPL(acpi_pm_wakeup_event);
static void acpi_pm_notify_handler(acpi_handle handle, u32 val, void *not_used)
{
struct acpi_device *adev;
if (val != ACPI_NOTIFY_DEVICE_WAKE)
return;
acpi_handle_debug(handle, "Wake notify\n");
adev = acpi_get_acpi_dev(handle);
if (!adev)
return;
mutex_lock(&acpi_pm_notifier_lock);
if (adev->wakeup.flags.notifier_present) {
pm_wakeup_ws_event(adev->wakeup.ws, 0, acpi_s2idle_wakeup());
if (adev->wakeup.context.func) {
acpi_handle_debug(handle, "Running %pS for %s\n",
adev->wakeup.context.func,
dev_name(adev->wakeup.context.dev));
adev->wakeup.context.func(&adev->wakeup.context);
}
}
mutex_unlock(&acpi_pm_notifier_lock);
acpi_put_acpi_dev(adev);
}
/**
* acpi_add_pm_notifier - Register PM notify handler for given ACPI device.
* @adev: ACPI device to add the notify handler for.
* @dev: Device to generate a wakeup event for while handling the notification.
* @func: Work function to execute when handling the notification.
*
* NOTE: @adev need not be a run-wake or wakeup device to be a valid source of
* PM wakeup events. For example, wakeup events may be generated for bridges
* if one of the devices below the bridge is signaling wakeup, even if the
* bridge itself doesn't have a wakeup GPE associated with it.
*/
acpi_status acpi_add_pm_notifier(struct acpi_device *adev, struct device *dev,
void (*func)(struct acpi_device_wakeup_context *context))
{
acpi_status status = AE_ALREADY_EXISTS;
if (!dev && !func)
return AE_BAD_PARAMETER;
mutex_lock(&acpi_pm_notifier_install_lock);
if (adev->wakeup.flags.notifier_present)
goto out;
status = acpi_install_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY,
acpi_pm_notify_handler, NULL);
if (ACPI_FAILURE(status))
goto out;
mutex_lock(&acpi_pm_notifier_lock);
adev->wakeup.ws = wakeup_source_register(&adev->dev,
dev_name(&adev->dev));
adev->wakeup.context.dev = dev;
adev->wakeup.context.func = func;
adev->wakeup.flags.notifier_present = true;
mutex_unlock(&acpi_pm_notifier_lock);
out:
mutex_unlock(&acpi_pm_notifier_install_lock);
return status;
}
/**
* acpi_remove_pm_notifier - Unregister PM notifier from given ACPI device.
* @adev: ACPI device to remove the notifier from.
*/
acpi_status acpi_remove_pm_notifier(struct acpi_device *adev)
{
acpi_status status = AE_BAD_PARAMETER;
mutex_lock(&acpi_pm_notifier_install_lock);
if (!adev->wakeup.flags.notifier_present)
goto out;
status = acpi_remove_notify_handler(adev->handle,
ACPI_SYSTEM_NOTIFY,
acpi_pm_notify_handler);
if (ACPI_FAILURE(status))
goto out;
mutex_lock(&acpi_pm_notifier_lock);
adev->wakeup.context.func = NULL;
adev->wakeup.context.dev = NULL;
wakeup_source_unregister(adev->wakeup.ws);
adev->wakeup.flags.notifier_present = false;
mutex_unlock(&acpi_pm_notifier_lock);
out:
mutex_unlock(&acpi_pm_notifier_install_lock);
return status;
}
bool acpi_bus_can_wakeup(acpi_handle handle)
{
struct acpi_device *device = acpi_fetch_acpi_dev(handle);
return device && device->wakeup.flags.valid;
}
EXPORT_SYMBOL(acpi_bus_can_wakeup);
bool acpi_pm_device_can_wakeup(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
return adev ? acpi_device_can_wakeup(adev) : false;
}
/**
* acpi_dev_pm_get_state - Get preferred power state of ACPI device.
* @dev: Device whose preferred target power state to return.
* @adev: ACPI device node corresponding to @dev.
* @target_state: System state to match the resultant device state.
* @d_min_p: Location to store the highest power state available to the device.
* @d_max_p: Location to store the lowest power state available to the device.
*
* Find the lowest power (highest number) and highest power (lowest number) ACPI
* device power states that the device can be in while the system is in the
* state represented by @target_state. Store the integer numbers representing
* those states in the memory locations pointed to by @d_max_p and @d_min_p,
* respectively.
*
* Callers must ensure that @dev and @adev are valid pointers and that @adev
* actually corresponds to @dev before using this function.
*
* Returns 0 on success or -ENODATA when one of the ACPI methods fails or
* returns a value that doesn't make sense. The memory locations pointed to by
* @d_max_p and @d_min_p are only modified on success.
*/
static int acpi_dev_pm_get_state(struct device *dev, struct acpi_device *adev,
u32 target_state, int *d_min_p, int *d_max_p)
{
char method[] = { '_', 'S', '0' + target_state, 'D', '\0' };
acpi_handle handle = adev->handle;
unsigned long long ret;
int d_min, d_max;
bool wakeup = false;
bool has_sxd = false;
acpi_status status;
/*
* If the system state is S0, the lowest power state the device can be
* in is D3cold, unless the device has _S0W and is supposed to signal
* wakeup, in which case the return value of _S0W has to be used as the
* lowest power state available to the device.
*/
d_min = ACPI_STATE_D0;
d_max = ACPI_STATE_D3_COLD;
/*
* If present, _SxD methods return the minimum D-state (highest power
* state) we can use for the corresponding S-states. Otherwise, the
* minimum D-state is D0 (ACPI 3.x).
*/
if (target_state > ACPI_STATE_S0) {
/*
* We rely on acpi_evaluate_integer() not clobbering the integer
* provided if AE_NOT_FOUND is returned.
*/
ret = d_min;
status = acpi_evaluate_integer(handle, method, NULL, &ret);
if ((ACPI_FAILURE(status) && status != AE_NOT_FOUND)
|| ret > ACPI_STATE_D3_COLD)
return -ENODATA;
/*
* We need to handle legacy systems where D3hot and D3cold are
* the same and 3 is returned in both cases, so fall back to
* D3cold if D3hot is not a valid state.
*/
if (!adev->power.states[ret].flags.valid) {
if (ret == ACPI_STATE_D3_HOT)
ret = ACPI_STATE_D3_COLD;
else
return -ENODATA;
}
if (status == AE_OK)
has_sxd = true;
d_min = ret;
wakeup = device_may_wakeup(dev) && adev->wakeup.flags.valid
&& adev->wakeup.sleep_state >= target_state;
} else if (device_may_wakeup(dev) && dev->power.wakeirq) {
/*
* The ACPI subsystem doesn't manage the wake bit for IRQs
* defined with ExclusiveAndWake and SharedAndWake. Instead we
* expect them to be managed via the PM subsystem. Drivers
* should call dev_pm_set_wake_irq to register an IRQ as a wake
* source.
*
* If a device has a wake IRQ attached we need to check the
* _S0W method to get the correct wake D-state. Otherwise we
* end up putting the device into D3cold which will more than
* likely disable wake functionality.
*/
wakeup = true;
} else {
/* ACPI GPE is specified in _PRW. */
wakeup = adev->wakeup.flags.valid;
}
/*
* If _PRW says we can wake up the system from the target sleep state,
* the D-state returned by _SxD is sufficient for that (we assume a
* wakeup-aware driver if wake is set). Still, if _SxW exists
* (ACPI 3.x), it should return the maximum (lowest power) D-state that
* can wake the system. _S0W may be valid, too.
*/
if (wakeup) {
method[3] = 'W';
status = acpi_evaluate_integer(handle, method, NULL, &ret);
if (status == AE_NOT_FOUND) {
/* No _SxW. In this case, the ACPI spec says that we
* must not go into any power state deeper than the
* value returned from _SxD.
*/
if (has_sxd && target_state > ACPI_STATE_S0)
d_max = d_min;
} else if (ACPI_SUCCESS(status) && ret <= ACPI_STATE_D3_COLD) {
/* Fall back to D3cold if ret is not a valid state. */
if (!adev->power.states[ret].flags.valid)
ret = ACPI_STATE_D3_COLD;
d_max = ret > d_min ? ret : d_min;
} else {
return -ENODATA;
}
}
if (d_min_p)
*d_min_p = d_min;
if (d_max_p)
*d_max_p = d_max;
return 0;
}
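The method-name construction used by acpi_dev_pm_get_state() follows the same pattern: the S-state digit comes from `'0' + target_state` and the trailing character is flipped from 'D' to 'W' for the wakeup variant. sx_method() below is a helper invented for this sketch.

```c
#include <assert.h>
#include <string.h>

/* Build the "_SxD"/"_SxW" method names evaluated above. */
static const char *sx_method(int target_state, char kind)
{
	static char method[5];

	method[0] = '_';
	method[1] = 'S';
	method[2] = '0' + target_state;
	method[3] = kind;
	method[4] = '\0';
	return method;
}
```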
/**
* acpi_pm_device_sleep_state - Get preferred power state of ACPI device.
* @dev: Device whose preferred target power state to return.
* @d_min_p: Location to store the upper limit of the allowed states range.
* @d_max_in: Deepest low-power state to take into consideration.
* Return value: Preferred power state of the device on success, -ENODEV
* if there's no 'struct acpi_device' for @dev, -EINVAL if @d_max_in is
* incorrect, or -ENODATA on ACPI method failure.
*
* The caller must ensure that @dev is valid before using this function.
*/
int acpi_pm_device_sleep_state(struct device *dev, int *d_min_p, int d_max_in)
{
struct acpi_device *adev;
int ret, d_min, d_max;
if (d_max_in < ACPI_STATE_D0 || d_max_in > ACPI_STATE_D3_COLD)
return -EINVAL;
if (d_max_in > ACPI_STATE_D2) {
enum pm_qos_flags_status stat;
stat = dev_pm_qos_flags(dev, PM_QOS_FLAG_NO_POWER_OFF);
if (stat == PM_QOS_FLAGS_ALL)
d_max_in = ACPI_STATE_D2;
}
adev = ACPI_COMPANION(dev);
if (!adev) {
dev_dbg(dev, "ACPI companion missing in %s!\n", __func__);
return -ENODEV;
}
ret = acpi_dev_pm_get_state(dev, adev, acpi_target_system_state(),
&d_min, &d_max);
if (ret)
return ret;
if (d_max_in < d_min)
return -EINVAL;
if (d_max > d_max_in) {
for (d_max = d_max_in; d_max > d_min; d_max--) {
if (adev->power.states[d_max].flags.valid)
break;
}
}
if (d_min_p)
*d_min_p = d_min;
return d_max;
}
EXPORT_SYMBOL(acpi_pm_device_sleep_state);
/**
* acpi_pm_notify_work_func - ACPI devices wakeup notification work function.
* @context: Device wakeup context.
*/
static void acpi_pm_notify_work_func(struct acpi_device_wakeup_context *context)
{
struct device *dev = context->dev;
if (dev) {
pm_wakeup_event(dev, 0);
pm_request_resume(dev);
}
}
static DEFINE_MUTEX(acpi_wakeup_lock);
static int __acpi_device_wakeup_enable(struct acpi_device *adev,
u32 target_state)
{
struct acpi_device_wakeup *wakeup = &adev->wakeup;
acpi_status status;
int error = 0;
mutex_lock(&acpi_wakeup_lock);
/*
* If the device wakeup power is already enabled, disable it and enable
* it again in case it depends on the configuration of subordinate
* devices and the conditions have changed since it was enabled last
* time.
*/
if (wakeup->enable_count > 0)
acpi_disable_wakeup_device_power(adev);
error = acpi_enable_wakeup_device_power(adev, target_state);
if (error) {
if (wakeup->enable_count > 0) {
acpi_disable_gpe(wakeup->gpe_device, wakeup->gpe_number);
wakeup->enable_count = 0;
}
goto out;
}
if (wakeup->enable_count > 0)
goto inc;
status = acpi_enable_gpe(wakeup->gpe_device, wakeup->gpe_number);
if (ACPI_FAILURE(status)) {
acpi_disable_wakeup_device_power(adev);
error = -EIO;
goto out;
}
acpi_handle_debug(adev->handle, "GPE%2X enabled for wakeup\n",
(unsigned int)wakeup->gpe_number);
inc:
if (wakeup->enable_count < INT_MAX)
wakeup->enable_count++;
else
acpi_handle_info(adev->handle, "Wakeup enable count out of bounds!\n");
out:
mutex_unlock(&acpi_wakeup_lock);
return error;
}
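The enable/disable pairing above can be modelled as a small state machine: the GPE is armed only on the 0 -> 1 transition and disarmed when the count returns to 0. This is a simplification invented for illustration; ACPICA's own GPE reference counting is collapsed into a single gpe_armed flag, and wakeup power handling and error paths are omitted.

```c
#include <assert.h>
#include <limits.h>

/* Simplified model of the wakeup enable counting in
 * __acpi_device_wakeup_enable()/acpi_device_wakeup_disable(). */
struct wakeup_model {
	int enable_count;
	int gpe_armed;
};

static void wakeup_enable(struct wakeup_model *w)
{
	if (w->enable_count == 0)
		w->gpe_armed = 1;
	if (w->enable_count < INT_MAX)
		w->enable_count++;
}

static void wakeup_disable(struct wakeup_model *w)
{
	if (w->enable_count == 0)
		return;
	if (--w->enable_count == 0)
		w->gpe_armed = 0;
}
```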
/**
* acpi_device_wakeup_enable - Enable wakeup functionality for device.
* @adev: ACPI device to enable wakeup functionality for.
* @target_state: State the system is transitioning into.
*
* Enable the GPE associated with @adev so that it can generate wakeup signals
* for the device in response to external (remote) events and enable wakeup
* power for it.
*
* Callers must ensure that @adev is a valid ACPI device node before executing
* this function.
*/
static int acpi_device_wakeup_enable(struct acpi_device *adev, u32 target_state)
{
return __acpi_device_wakeup_enable(adev, target_state);
}
/**
* acpi_device_wakeup_disable - Disable wakeup functionality for device.
* @adev: ACPI device to disable wakeup functionality for.
*
* Disable the GPE associated with @adev and disable wakeup power for it.
*
* Callers must ensure that @adev is a valid ACPI device node before executing
* this function.
*/
static void acpi_device_wakeup_disable(struct acpi_device *adev)
{
struct acpi_device_wakeup *wakeup = &adev->wakeup;
mutex_lock(&acpi_wakeup_lock);
if (!wakeup->enable_count)
goto out;
acpi_disable_gpe(wakeup->gpe_device, wakeup->gpe_number);
acpi_disable_wakeup_device_power(adev);
wakeup->enable_count--;
out:
mutex_unlock(&acpi_wakeup_lock);
}
/**
* acpi_pm_set_device_wakeup - Enable/disable remote wakeup for given device.
* @dev: Device to enable/disable to generate wakeup events.
* @enable: Whether to enable or disable the wakeup functionality.
*/
int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
{
struct acpi_device *adev;
int error;
adev = ACPI_COMPANION(dev);
if (!adev) {
dev_dbg(dev, "ACPI companion missing in %s!\n", __func__);
return -ENODEV;
}
if (!acpi_device_can_wakeup(adev))
return -EINVAL;
if (!enable) {
acpi_device_wakeup_disable(adev);
dev_dbg(dev, "Wakeup disabled by ACPI\n");
return 0;
}
error = __acpi_device_wakeup_enable(adev, acpi_target_system_state());
if (!error)
dev_dbg(dev, "Wakeup enabled by ACPI\n");
return error;
}
EXPORT_SYMBOL_GPL(acpi_pm_set_device_wakeup);
/**
* acpi_dev_pm_low_power - Put ACPI device into a low-power state.
* @dev: Device to put into a low-power state.
* @adev: ACPI device node corresponding to @dev.
* @system_state: System state to choose the device state for.
*/
static int acpi_dev_pm_low_power(struct device *dev, struct acpi_device *adev,
u32 system_state)
{
int ret, state;
if (!acpi_device_power_manageable(adev))
return 0;
ret = acpi_dev_pm_get_state(dev, adev, system_state, NULL, &state);
return ret ? ret : acpi_device_set_power(adev, state);
}
/**
* acpi_dev_pm_full_power - Put ACPI device into the full-power state.
* @adev: ACPI device node to put into the full-power state.
*/
static int acpi_dev_pm_full_power(struct acpi_device *adev)
{
return acpi_device_power_manageable(adev) ?
acpi_device_set_power(adev, ACPI_STATE_D0) : 0;
}
/**
* acpi_dev_suspend - Put device into a low-power state using ACPI.
* @dev: Device to put into a low-power state.
* @wakeup: Whether or not to enable wakeup for the device.
*
* Put the given device into a low-power state using the standard ACPI
* mechanism. Set up remote wakeup if desired, choose the state to put the
* device into (this checks if remote wakeup is expected to work too), and set
* the power state of the device.
*/
int acpi_dev_suspend(struct device *dev, bool wakeup)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
u32 target_state = acpi_target_system_state();
int error;
if (!adev)
return 0;
if (wakeup && acpi_device_can_wakeup(adev)) {
error = acpi_device_wakeup_enable(adev, target_state);
if (error)
return -EAGAIN;
} else {
wakeup = false;
}
error = acpi_dev_pm_low_power(dev, adev, target_state);
if (error && wakeup)
acpi_device_wakeup_disable(adev);
return error;
}
EXPORT_SYMBOL_GPL(acpi_dev_suspend);
/**
* acpi_dev_resume - Put device into the full-power state using ACPI.
* @dev: Device to put into the full-power state.
*
* Put the given device into the full-power state using the standard ACPI
* mechanism. Set the power state of the device to ACPI D0 and disable wakeup.
*/
int acpi_dev_resume(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
int error;
if (!adev)
return 0;
error = acpi_dev_pm_full_power(adev);
acpi_device_wakeup_disable(adev);
return error;
}
EXPORT_SYMBOL_GPL(acpi_dev_resume);
/**
* acpi_subsys_runtime_suspend - Suspend device using ACPI.
* @dev: Device to suspend.
*
* Carry out the generic runtime suspend procedure for @dev and use ACPI to put
* it into a runtime low-power state.
*/
int acpi_subsys_runtime_suspend(struct device *dev)
{
int ret = pm_generic_runtime_suspend(dev);
return ret ? ret : acpi_dev_suspend(dev, true);
}
EXPORT_SYMBOL_GPL(acpi_subsys_runtime_suspend);
/**
* acpi_subsys_runtime_resume - Resume device using ACPI.
* @dev: Device to Resume.
*
* Use ACPI to put the given device into the full-power state and carry out the
* generic runtime resume procedure for it.
*/
int acpi_subsys_runtime_resume(struct device *dev)
{
int ret = acpi_dev_resume(dev);
return ret ? ret : pm_generic_runtime_resume(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_runtime_resume);
#ifdef CONFIG_PM_SLEEP
static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
{
u32 sys_target = acpi_target_system_state();
int ret, state;
if (!pm_runtime_suspended(dev) || !adev || (adev->wakeup.flags.valid &&
device_may_wakeup(dev) != !!adev->wakeup.prepare_count))
return true;
if (sys_target == ACPI_STATE_S0)
return false;
if (adev->power.flags.dsw_present)
return true;
ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state);
if (ret)
return true;
return state != adev->power.state;
}
/**
* acpi_subsys_prepare - Prepare device for system transition to a sleep state.
* @dev: Device to prepare.
*/
int acpi_subsys_prepare(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
if (dev->driver && dev->driver->pm && dev->driver->pm->prepare) {
int ret = dev->driver->pm->prepare(dev);
if (ret < 0)
return ret;
if (!ret && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
return 0;
}
return !acpi_dev_needs_resume(dev, adev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
/**
* acpi_subsys_complete - Finalize device's resume during system resume.
* @dev: Device to handle.
*/
void acpi_subsys_complete(struct device *dev)
{
pm_generic_complete(dev);
/*
* If the device had been runtime-suspended before the system went into
* the sleep state it is going out of and it has never been resumed till
* now, resume it in case the firmware powered it up.
*/
if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
pm_request_resume(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_complete);
/**
* acpi_subsys_suspend - Run the device driver's suspend callback.
* @dev: Device to handle.
*
* Follow PCI and resume devices from runtime suspend before running their
* system suspend callbacks, unless the driver can cope with runtime-suspended
* devices during system suspend and there are no ACPI-specific reasons for
* resuming them.
*/
int acpi_subsys_suspend(struct device *dev)
{
if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
acpi_dev_needs_resume(dev, ACPI_COMPANION(dev)))
pm_runtime_resume(dev);
return pm_generic_suspend(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
/**
* acpi_subsys_suspend_late - Suspend device using ACPI.
* @dev: Device to suspend.
*
* Carry out the generic late suspend procedure for @dev and use ACPI to put
* it into a low-power state during system transition into a sleep state.
*/
int acpi_subsys_suspend_late(struct device *dev)
{
int ret;
if (dev_pm_skip_suspend(dev))
return 0;
ret = pm_generic_suspend_late(dev);
return ret ? ret : acpi_dev_suspend(dev, device_may_wakeup(dev));
}
EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
/**
* acpi_subsys_suspend_noirq - Run the device driver's "noirq" suspend callback.
* @dev: Device to suspend.
*/
int acpi_subsys_suspend_noirq(struct device *dev)
{
int ret;
if (dev_pm_skip_suspend(dev))
return 0;
ret = pm_generic_suspend_noirq(dev);
if (ret)
return ret;
/*
* If the target system sleep state is suspend-to-idle, it is sufficient
* to check whether or not the device's wakeup settings are good for
* runtime PM. Otherwise, the pm_resume_via_firmware() check will cause
* acpi_subsys_complete() to take care of fixing up the device's state
* anyway, if need be.
*/
if (device_can_wakeup(dev) && !device_may_wakeup(dev))
dev->power.may_skip_resume = false;
return 0;
}
EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
/**
* acpi_subsys_resume_noirq - Run the device driver's "noirq" resume callback.
* @dev: Device to handle.
*/
static int acpi_subsys_resume_noirq(struct device *dev)
{
if (dev_pm_skip_resume(dev))
return 0;
return pm_generic_resume_noirq(dev);
}
/**
* acpi_subsys_resume_early - Resume device using ACPI.
* @dev: Device to Resume.
*
* Use ACPI to put the given device into the full-power state and carry out the
* generic early resume procedure for it during system transition into the
* working state, but only do that if the device either defines an early
* resume handler or does not define power operations at all. Otherwise powering up
* of the device is postponed to the normal resume phase.
*/
static int acpi_subsys_resume_early(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int ret;
if (dev_pm_skip_resume(dev))
return 0;
if (pm && !pm->resume_early) {
dev_dbg(dev, "postponing D0 transition to normal resume stage\n");
return 0;
}
ret = acpi_dev_resume(dev);
return ret ? ret : pm_generic_resume_early(dev);
}
/**
* acpi_subsys_resume - Resume device using ACPI.
* @dev: Device to Resume.
*
* Use ACPI to put the given device into the full-power state if it has not been
* powered up during early resume phase, and carry out the generic resume
* procedure for it during system transition into the working state.
*/
static int acpi_subsys_resume(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int ret = 0;
if (!dev_pm_skip_resume(dev) && pm && !pm->resume_early) {
dev_dbg(dev, "executing postponed D0 transition\n");
ret = acpi_dev_resume(dev);
}
return ret ? ret : pm_generic_resume(dev);
}
/**
* acpi_subsys_freeze - Run the device driver's freeze callback.
* @dev: Device to handle.
*/
int acpi_subsys_freeze(struct device *dev)
{
/*
* Resume all runtime-suspended devices before creating a snapshot
* image of system memory, because the restore kernel generally cannot
* be expected to always handle them consistently and they need to be
* put into the runtime-active metastate during system resume anyway,
* so it is better to ensure that the state saved in the image will be
* always consistent with that.
*/
pm_runtime_resume(dev);
return pm_generic_freeze(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_freeze);
/**
* acpi_subsys_restore_early - Restore device using ACPI.
* @dev: Device to restore.
*/
int acpi_subsys_restore_early(struct device *dev)
{
int ret = acpi_dev_resume(dev);
return ret ? ret : pm_generic_restore_early(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_restore_early);
/**
* acpi_subsys_poweroff - Run the device driver's poweroff callback.
* @dev: Device to handle.
*
* Follow PCI and resume devices from runtime suspend before running their
* system poweroff callbacks, unless the driver can cope with runtime-suspended
* devices during system suspend and there are no ACPI-specific reasons for
* resuming them.
*/
int acpi_subsys_poweroff(struct device *dev)
{
if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
acpi_dev_needs_resume(dev, ACPI_COMPANION(dev)))
pm_runtime_resume(dev);
return pm_generic_poweroff(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_poweroff);
/**
* acpi_subsys_poweroff_late - Run the device driver's poweroff callback.
* @dev: Device to handle.
*
* Carry out the generic late poweroff procedure for @dev and use ACPI to put
* it into a low-power state during system transition into a sleep state.
*/
static int acpi_subsys_poweroff_late(struct device *dev)
{
int ret;
if (dev_pm_skip_suspend(dev))
return 0;
ret = pm_generic_poweroff_late(dev);
if (ret)
return ret;
return acpi_dev_suspend(dev, device_may_wakeup(dev));
}
/**
* acpi_subsys_poweroff_noirq - Run the driver's "noirq" poweroff callback.
* @dev: Device to suspend.
*/
static int acpi_subsys_poweroff_noirq(struct device *dev)
{
if (dev_pm_skip_suspend(dev))
return 0;
return pm_generic_poweroff_noirq(dev);
}
#endif /* CONFIG_PM_SLEEP */
static struct dev_pm_domain acpi_general_pm_domain = {
.ops = {
.runtime_suspend = acpi_subsys_runtime_suspend,
.runtime_resume = acpi_subsys_runtime_resume,
#ifdef CONFIG_PM_SLEEP
.prepare = acpi_subsys_prepare,
.complete = acpi_subsys_complete,
.suspend = acpi_subsys_suspend,
.resume = acpi_subsys_resume,
.suspend_late = acpi_subsys_suspend_late,
.suspend_noirq = acpi_subsys_suspend_noirq,
.resume_noirq = acpi_subsys_resume_noirq,
.resume_early = acpi_subsys_resume_early,
.freeze = acpi_subsys_freeze,
.poweroff = acpi_subsys_poweroff,
.poweroff_late = acpi_subsys_poweroff_late,
.poweroff_noirq = acpi_subsys_poweroff_noirq,
.restore_early = acpi_subsys_restore_early,
#endif
},
};
/**
* acpi_dev_pm_detach - Remove ACPI power management from the device.
* @dev: Device to take care of.
* @power_off: Whether or not to try to remove power from the device.
*
* Remove the device from the general ACPI PM domain and remove its wakeup
* notifier. If @power_off is set, additionally remove power from the device if
* possible.
*
* Callers must ensure proper synchronization of this function with power
* management callbacks.
*/
static void acpi_dev_pm_detach(struct device *dev, bool power_off)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
if (adev && dev->pm_domain == &acpi_general_pm_domain) {
dev_pm_domain_set(dev, NULL);
acpi_remove_pm_notifier(adev);
if (power_off) {
/*
* If the device's PM QoS resume latency limit or flags
* have been exposed to user space, they have to be
* hidden at this point, so that they don't affect the
* choice of the low-power state to put the device into.
*/
dev_pm_qos_hide_latency_limit(dev);
dev_pm_qos_hide_flags(dev);
acpi_device_wakeup_disable(adev);
acpi_dev_pm_low_power(dev, adev, ACPI_STATE_S0);
}
}
}
/**
* acpi_dev_pm_attach - Prepare device for ACPI power management.
* @dev: Device to prepare.
* @power_on: Whether or not to power on the device.
*
* If @dev has a valid ACPI handle that has a valid struct acpi_device object
* attached to it, install a wakeup notification handler for the device and
* add it to the general ACPI PM domain. If @power_on is set, the device will
* be put into the ACPI D0 state before the function returns.
*
* This assumes that the @dev's bus type uses generic power management callbacks
* (or doesn't use any power management callbacks at all).
*
* Callers must ensure proper synchronization of this function with power
* management callbacks.
*/
int acpi_dev_pm_attach(struct device *dev, bool power_on)
{
/*
* Skip devices whose ACPI companions match the device IDs below,
* because they require special power management handling incompatible
* with the generic ACPI PM domain.
*/
static const struct acpi_device_id special_pm_ids[] = {
ACPI_FAN_DEVICE_IDS,
{}
};
struct acpi_device *adev = ACPI_COMPANION(dev);
if (!adev || !acpi_match_device_ids(adev, special_pm_ids))
return 0;
/*
* Only attach the power domain to the first device if the
* companion is shared by multiple. This is to prevent doing power
* management twice.
*/
if (!acpi_device_is_first_physical_node(adev, dev))
return 0;
acpi_add_pm_notifier(adev, dev, acpi_pm_notify_work_func);
dev_pm_domain_set(dev, &acpi_general_pm_domain);
if (power_on) {
acpi_dev_pm_full_power(adev);
acpi_device_wakeup_disable(adev);
}
dev->pm_domain->detach = acpi_dev_pm_detach;
return 1;
}
EXPORT_SYMBOL_GPL(acpi_dev_pm_attach);
/**
* acpi_storage_d3 - Check if D3 should be used in the suspend path
* @dev: Device to check
*
* Return %true if the platform firmware wants @dev to be programmed
* into D3hot or D3cold (if supported) in the suspend path, or %false
* when there is no specific preference. On some platforms, if this
* hint is ignored, @dev may remain unresponsive after suspending the
* platform as a whole.
*
* Although the property has storage in the name it actually is
* applied to the PCIe slot and plugging in a non-storage device the
* same platform restrictions will likely apply.
*/
bool acpi_storage_d3(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
u8 val;
if (force_storage_d3())
return true;
if (!adev)
return false;
if (fwnode_property_read_u8(acpi_fwnode_handle(adev), "StorageD3Enable",
&val))
return false;
return val == 1;
}
EXPORT_SYMBOL_GPL(acpi_storage_d3);
/**
* acpi_dev_state_d0 - Tell if the device is in D0 power state
* @dev: Physical device the ACPI power state of which to check
*
* On a system without ACPI, return true. On a system with ACPI, return true if
* the current ACPI power state of the device is D0, or false otherwise.
*
* Note that the power state of a device is not well-defined after it has been
* passed to acpi_device_set_power() and before that function returns, so it is
* not valid to ask for the ACPI power state of the device in that time frame.
*
* This function is intended to be used in a driver's probe or remove
* function. See Documentation/firmware-guide/acpi/non-d0-probe.rst for
* more information.
*/
bool acpi_dev_state_d0(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
if (!adev)
return true;
return adev->power.state == ACPI_STATE_D0;
}
EXPORT_SYMBOL_GPL(acpi_dev_state_d0);
#endif /* CONFIG_PM */
Types of Proxy Servers, Transparent and Anonymous Proxies
Mike S.
A proxy server is a computer that offers a computer network service to allow clients to make indirect network connections to other network services. A client connects to the proxy server, then requests a connection, file, or other resource available on a different server. The proxy provides the resource either by connecting to the specified server or by serving it from a cache. In some cases, the proxy may alter the client's request or the server's response for various purposes.
The primary role of any proxy is to help you preserve your privacy, for example by hiding your IP address, and to help you reach domains on the net that may otherwise be blocked, for instance at work, in libraries, or in schools.
There are many different types of proxy servers out there; the following are some of the most commonly known.
• Anonymous Proxy - An anonymous proxy server, also known as a web proxy, generally attempts to anonymize web surfing by hiding the original IP address of the end user. This type of proxy server is typically difficult to track, and it provides reasonable anonymity for most users.
• Distorting Proxy - This type of proxy server identifies itself as a proxy server, but makes an incorrect original IP address available through the HTTP headers.
• High Anonymity Proxy - This type of proxy server does not identify itself as a proxy server and does not make the original IP address available. High anonymity proxies only include the REMOTE_ADDR header with the IP address of the proxy server, making it appear that the proxy server is the client.
• Intercepting Proxy - An intercepting proxy, also known as a transparent proxy, combines a proxy server with a gateway. Connections made by client browsers through the gateway are redirected through the proxy without client-side configuration. These types of proxies are commonly detectable by examining the HTTP headers on the server side.
• Reverse proxy - A reverse proxy is another common form of a proxy server and is generally used to pass requests from the Internet, through a firewall to isolated, private networks. It is used to prevent Internet clients from having direct, unmonitored access to sensitive data residing on content servers on an isolated network, or intranet. If caching is enabled, a reverse proxy can also lessen network traffic by serving cached information rather than passing all requests to actual content servers.
• Transparent Proxy - A transparent proxy is a server that satisfies the definition of a proxy, but does not enforce any local policies. It means that it does not add, delete or modify attributes or modify information within messages it forwards. These are generally used for their ability to cache websites and do not effectively provide any anonymity to those who use them. However, the use of a transparent proxy will get you around simple IP bans. Further, your web browser does not require special configuration and the cache is transparent to the end-user. This is also known as transparent forward proxy.
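Several of the proxy types above can be told apart on the server side by inspecting request headers such as Via and X-Forwarded-For. The sketch below is a rough Python illustration of that idea; the function name and the exact classification rules are our own simplification, not a standard.

```python
def classify_proxy(headers, connecting_ip, real_ip):
    """Rough server-side guess at the kind of proxy in use.

    headers: dict of HTTP request headers as seen by the server.
    connecting_ip: the address the TCP connection came from.
    real_ip: the client's true address (known here only for illustration).
    """
    forwarded = headers.get("X-Forwarded-For")
    via = headers.get("Via")
    if connecting_ip == real_ip:
        return "no proxy"
    if forwarded == real_ip:
        return "transparent proxy"   # real address leaked in the headers
    if forwarded is not None:
        return "distorting proxy"    # admits proxying, but the address is wrong
    if via is not None:
        return "anonymous proxy"     # identifies itself, hides the address
    return "high anonymity proxy"    # indistinguishable from an ordinary client
```

In practice a server only sees `connecting_ip` and the headers; the comparison against `real_ip` is included here just to make the distinctions concrete.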
Inspector plugins
The inspector dock supports custom plugins to create your own widgets for editing properties. This tutorial explains how to use the EditorInspectorPlugin and EditorProperty classes to write such plugins with the example of creating a custom value editor.
Setup
Just like Making plugins, we start out by making a new plugin, getting a plugin.cfg file created, and starting with our EditorPlugin. However, instead of using add_custom_node or add_control_to_dock, we'll use add_inspector_plugin.
tool
extends EditorPlugin

var plugin: EditorInspectorPlugin


func _enter_tree():
    # EditorInspectorPlugin is a resource, so we use `new()` instead of `instance()`.
    plugin = preload("res://addons/MyPlugin/MyInspectorPlugin.gd").new()
    add_inspector_plugin(plugin)


func _exit_tree():
    remove_inspector_plugin(plugin)
EditorInspectorPlugin
To actually connect into the Inspector, we create a EditorInspectorPlugin class. This script provides the "hooks" to the inspector. Thanks to this class, the editor will call the functions within the EditorInspectorPlugin while it goes through the process of building the UI for the inspector. The script is used to check if we should enable ourselves for any Object that is currently in the inspector (including any Resource that is embedded!).
Once enabled, EditorInspectorPlugin has methods that allow for adding EditorProperty nodes or just custom Control nodes to the beginning and end of the inspector for that Object, or for overriding or changing existing property editors.
# MyInspectorPlugin.gd
extends EditorInspectorPlugin
func can_handle(object):
    # Here you can specify which object types (classes) should be handled by
    # this plugin. For example if the plugin is specific to your player
    # class defined with `class_name MyPlayer`, you can do:
    # `return object is MyPlayer`
    # In this example we'll support all objects, so:
    return true


func parse_property(object, type, path, hint, hint_text, usage):
    # We will handle properties of type integer.
    if type == TYPE_INT:
        # Register *an instance* of the custom property editor that we'll define next.
        add_property_editor(path, MyIntEditor.new())
        # We return `true` to notify the inspector that we'll be handling
        # this integer property, so it doesn't need to parse other plugins
        # (including built-in ones) for an appropriate editor.
        return true
    else:
        return false
EditorProperty
Next, we define the actual EditorProperty custom value editor that we want instantiated to edit integers. This is a custom Control and we can add any kinds of additional nodes to make advanced widgets to embed in the inspector.
# MyIntEditor.gd
extends EditorProperty
class_name MyIntEditor
var updating = false
var spin = EditorSpinSlider.new()
func _init():
    # We'll add an EditorSpinSlider control, which is the same that the
    # inspector already uses for integer and float edition.
    # If you want to put the editor below the property name, use:
    # `set_bottom_editor(spin)`
    # Otherwise to put it inline with the property name use:
    add_child(spin)
    # To remember focus when selected back:
    add_focusable(spin)

    # Setup the EditorSpinSlider
    spin.set_min(0)
    spin.set_max(1000)
    spin.connect("value_changed", self, "_spin_changed")


func _spin_changed(value):
    if (updating):
        return
    emit_changed(get_edited_property(), value)


func update_property():
    var new_value = get_edited_object()[get_edited_property()]
    updating = true
    spin.set_value(new_value)
    updating = false
0895. Maximum Frequency Stack
895. Maximum Frequency Stack
Problem
Implement FreqStack, a class which simulates the operation of a stack-like data structure.
FreqStack has two functions:
push(int x), which pushes an integer x onto the stack. pop(), which removes and returns the most frequent element in the stack.
If there is a tie for most frequent element, the element closest to the top of the stack is removed and returned.
Example 1:
Input:
["FreqStack","push","push","push","push","push","push","pop","pop","pop","pop"],
[[],[5],[7],[5],[7],[4],[5],[],[],[],[]]
Output: [null,null,null,null,null,null,null,5,7,5,4]
Explanation:
After making six .push operations, the stack is [5,7,5,7,4,5] from bottom to top. Then:
pop() -> returns 5, as 5 is the most frequent.
The stack becomes [5,7,5,7,4].
pop() -> returns 7, as 5 and 7 is the most frequent, but 7 is closest to the top.
The stack becomes [5,7,5,4].
pop() -> returns 5.
The stack becomes [5,7,4].
pop() -> returns 4.
The stack becomes [5,7].
Note:
• Calls to FreqStack.push(int x) will be such that 0 <= x <= 10^9.
• It is guaranteed that FreqStack.pop() won’t be called if the stack has zero elements.
• The total number of FreqStack.push calls will not exceed 10000 in a single test case.
• The total number of FreqStack.pop calls will not exceed 10000 in a single test case.
• The total number of FreqStack.push and FreqStack.pop calls will not exceed 150000 across all test cases.
Problem Summary
Implement FreqStack, a class that simulates the operation of a stack-like data structure.
FreqStack has two functions:
• push(int x): pushes the integer x onto the stack.
• pop(): removes and returns the most frequent element in the stack. If there is a tie for the most frequent element, the element closest to the top of the stack is removed and returned.
Approach
FreqStack keeps a map of frequencies and a map grouping elements that share the same frequency. On push, the frequency of x is updated dynamically and x is appended to the group for its new frequency. On pop, the frequency recorded in the frequency map is decreased and the element is removed from the group for its old frequency.
Code
package leetcode
type FreqStack struct {
    freq    map[int]int
    group   map[int][]int
    maxfreq int
}

func Constructor895() FreqStack {
    hash := make(map[int]int)
    maxHash := make(map[int][]int)
    return FreqStack{freq: hash, group: maxHash}
}

func (this *FreqStack) Push(x int) {
    if _, ok := this.freq[x]; ok {
        this.freq[x]++
    } else {
        this.freq[x] = 1
    }
    f := this.freq[x]
    if f > this.maxfreq {
        this.maxfreq = f
    }
    this.group[f] = append(this.group[f], x)
}

func (this *FreqStack) Pop() int {
    tmp := this.group[this.maxfreq]
    x := tmp[len(tmp)-1]
    this.group[this.maxfreq] = this.group[this.maxfreq][:len(this.group[this.maxfreq])-1]
    this.freq[x]--
    if len(this.group[this.maxfreq]) == 0 {
        this.maxfreq--
    }
    return x
}

/**
 * Your FreqStack object will be instantiated and called as such:
 * obj := Constructor();
 * obj.Push(x);
 * param_2 := obj.Pop();
 */
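The same frequency-map-plus-groups idea translates directly to Python; the following is an equivalent sketch of the Go solution above (method names follow the problem statement):

```python
from collections import defaultdict

class FreqStack:
    def __init__(self):
        self.freq = defaultdict(int)    # value -> current frequency
        self.group = defaultdict(list)  # frequency -> stack of values pushed at that frequency
        self.maxfreq = 0

    def push(self, x):
        self.freq[x] += 1
        f = self.freq[x]
        if f > self.maxfreq:
            self.maxfreq = f
        self.group[f].append(x)

    def pop(self):
        x = self.group[self.maxfreq].pop()
        self.freq[x] -= 1
        if not self.group[self.maxfreq]:
            self.maxfreq -= 1
        return x
```

Replaying Example 1: pushing 5, 7, 5, 7, 4, 5 and popping four times yields 5, 7, 5, 4.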
Sep 18, 2021
Text folding
Does anyone have any good ideas for how to implement folding in ProseMirror. I would like to be able to fold/unfold text under a heading until the next heading at the same level.
The approach I am currently considering is to insert a small "-" icon ahead of each heading and set the hidden property on the contents when the icon is clicked, and unset it when it is clicked again. What is the best way of identifying the content to be hidden, setting the attribute, and removing it again using the ProseMirror API? Or should I use a different approach?
Hmm, haven’t done this but I’d suspect the steps are as follows:
1. Add “-” icon next to each heading
2. Upon click find the range you want to fold. Range will be from just after current heading to just before the next heading. Not sure of best way to find this position… consult docs. In the past I’ve had to walk over all of the nodes at the root level (pm.doc.child())
3. Add a mark range for this position (<= v0.10, >= v0.11 does not have this, @marijn is working on its replacement). Essentially giving this entire range a CSS class.
Wes,
Thanks a lot for the feedback. Am I correct in assuming that ranges are ephemeral, in the sense that if I persist pm.doc I will have to reapply the ranges? I.e., the ranges live in the editor, not in the doc.
Yes, that’s correct.
@maacl Did you ever end up with an implementation for text folding that you liked?
I’m new to PM and wanted to do something very similar to the original post.
Has anyone created something like this that I could use as a guide? If not, could someone outline the steps involved?
@cbeninati @zbum I never got around to implementing this, sorry.
I would like to be able to fold/unfold text under a heading until the next heading at the same level.
@maacl @cbeninati @zbum I have implemented folding of headings, which does exactly what you have described.
The implementation is basically
• find a slice of all the nodes up until a heading of the same level is encountered
• delete it from the doc
• store the stringified slice as a DOM attribute on the heading that was collapsed.
• for uncollapsing, parse the JSON string saved in the DOM attribute and append it below the heading.
K12 LibreTexts
7.4.1: Sums of Finite Geometric Series
Finding the Sum of a Finite Geometric Series
You are saving for summer camp. You deposit $100 on the first of each month into your savings account. The account grows at a rate of 0.5% per month. How much money is in your account on the first day of the 9th month?
Sum of Finite Geometric Series
We have discussed how to use the calculator to find the sum of any series provided we know the nth term rule. For a geometric series, however, there is a specific rule that can be used to find the sum algebraically. Let’s look at a finite geometric sequence and derive this rule.
Given \(\ a_{n}=a_{1} r^{n-1}\).
The sum of the first \(\ n\) terms of a geometric sequence is:
\(\ S_{n}=a_{1}+a_{1} r+a_{1} r^{2}+a_{1} r^{3}+\ldots+a_{1} r^{n-2}+a_{1} r^{n-1}\).
Now, factor out \(\ a_{1}\) to get \(\ a_{1}\left(1+r+r^{2}+r^{3}+\ldots+r^{n-2}+r^{n-1}\right)\). If we isolate what is in the parenthesis and multiply this sum by \(\ (1-r)\) as shown below, we can simplify the sum:
\(\ \begin{aligned}
(1-r)\left(1+r+r^{2}+r^{3}+\ldots+r^{n-2}+r^{n-1}\right) &=1+r+r^{2}+r^{3}+\ldots+r^{n-2}+r^{n-1}-r-r^{2}-r^{3}-r^{4}-\ldots-r^{n-1}-r^{n} \\
&=1-r^{n}
\end{aligned}\)
By multiplying the sum by \(\ 1-r\) we were able to cancel out all of the middle terms. However, we have changed the sum by a factor of \(\ 1-r\), so what we really need to do is multiply our sum by \(\ \frac{1-r}{1-r}\), or 1.
\(\ a_{1}\left(1+r+r^{2}+r^{3}+\ldots+r^{n-2}+r^{n-1}\right) \frac{1-r}{1-r}=\frac{a_{1}\left(1-r^{n}\right)}{1-r}\), which is the sum of a finite geometric series.
So, \(\ S_{n}=\frac{a_{1}\left(1-r^{n}\right)}{1-r}\).
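The closed form can be sanity-checked numerically by comparing it with a direct term-by-term sum. Here is a short Python sketch using exact rational arithmetic (the helper names are ours):

```python
from fractions import Fraction

def geom_sum(a1, r, n):
    """Closed form S_n = a1 * (1 - r**n) / (1 - r), valid for r != 1."""
    return a1 * (1 - r**n) / (1 - r)

def direct_sum(a1, r, n):
    """Term-by-term sum a1 + a1*r + ... + a1*r**(n-1)."""
    return sum(a1 * r**k for k in range(n))

# The two agree exactly for any r != 1, e.g. a1 = 2, r = 3, n = 5 gives 242.
assert geom_sum(Fraction(2), Fraction(3), 5) == direct_sum(Fraction(2), Fraction(3), 5) == 242
```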
Let's find the sum of the first ten terms of the geometric sequence \(\ a_{n}=\frac{1}{32}(-2)^{n-1}\). This could also be written as, "Let's find \(\ \sum_{n=1}^{10} \frac{1}{32}(-2)^{n-1}\)."
Using the formula, \(\ a_{1}=\frac{1}{32}\), \(\ r=-2\), and \(\ n=10\).
\(\ S_{10}=\frac{\frac{1}{32}\left(1-(-2)^{10}\right)}{1-(-2)}=\frac{\frac{1}{32}(1-1024)}{3}=-\frac{341}{32}\)
We can also use the calculator as shown below.
\(\ \operatorname{sum}\left(\operatorname{seq}\left(1 / 32(-2)^{x-1}, x, 1,10\right)\right)=-\frac{341}{32}\)
Now, let's find the first term and the \(\ n^{t h}\) term rule for a geometric series in which the sum of the first 5 terms is 242 and the common ratio is 3.
Plug in what we know to the formula for the sum and solve for the first term:
\(\ \begin{aligned}
242 &=\frac{a_{1}\left(1-3^{5}\right)}{1-3} \\
242 &=\frac{a_{1}(-242)}{-2} \\
242 &=121 a_{1} \\
a_{1} &=2
\end{aligned}\)
The first term is \(\ 2\) and \(\ a_{n}=2(3)^{n-1}\).
Finally, let's solve the following problem.
Charlie deposits $1000 on the first of each year into his investment account. The account grows at a rate of 8% per year. How much money is in the account on the first day of the 11th year?
First, consider what is happening here on the first day of each year. On the first day of the first year, $1000 is deposited. On the first day of the second year $1000 is deposited and the previously deposited $1000 earns 8% interest or grows by a factor of 1.08 (108%). On the first day of the third year another $1000 is deposited, the previous year’s deposit earns 8% interest and the original deposit earns 8% interest for two years (we multiply by 1.082):
Sum Year 1: 1000
Sum Year 2: 1000 + 1000(1.08)
Sum Year 3: 1000 + 1000(1.08) + 1000(1.08)2
Sum Year 4: 1000 + 1000(1.08) + 1000(1.08)2 + 1000(1.08)3
\(\ \quad\quad\quad\quad\)⋮
Sum Year 11: 1000 + 1000(1.08) + 1000(1.08)2 + 1000(1.08)3 + … + 1000(1.08)9 + 1000(1.08)10
∗ There are 11 terms in this series because on the first day of the 11th year we make our final deposit and the original deposit earns interest for 10 years.
This series is geometric. The first term is 1000, the common ratio is 1.08 and \(\ n=11\). Now we can calculate the sum using the formula and determine the value of the investment account at the start of the 11th year.
\(\ s_{11}=\frac{1000\left(1-1.08^{11}\right)}{1-1.08}=16645.48746 \approx \$ 16,645.49\)
Examples
Example 1
Earlier, you were asked to find how much money is in your account on the first day of the 9th month.
Solution
There are 9 terms in this series because on the first day of the 9th month you make your final deposit and the original deposit earns interest for 8 months.
This series is geometric. The first term is 100, the common ratio is 1.005 and n=9. Now we can calculate the sum using the formula and determine the value of the investment account at the start of the 9th month.
\(\ s_{9}=\frac{100\left(1-1.005^{9}\right)}{1-1.005} \approx 918.21\)
Therefore there is $918.21 in the account at the beginning of the ninth month.
Example 2
Evaluate \(\ \sum_{n=3}^{8} 2(-3)^{n-1}\)
Solution
Since we are asked to find the sum of the \(\ 3^{r d}\) through \(\ 8^{t h}\) terms, we will consider \(\ a_{3}\) as the first term. The third term is \(\ a_{3}=2(-3)^{2}=2(9)=18\). Since we are starting with term three, we will be summing 6 terms, \(\ a_{3}+a_{4}+a_{5}+a_{6}+a_{7}+a_{8}\), in total. We can use the rule for the sum of a geometric series now with \(\ a_{1}=18\), \(\ r=-3\) and \(\ n=6\) to find the sum:
\(\ \sum_{n=3}^{8} 2(-3)^{n-1}=\frac{18\left(1-(-3)^{6}\right)}{1-(-3)}=-3276\)
Example 3
If the sum of the first seven terms in a geometric series is \(\ \frac{215}{8}\) and \(\ r=-\frac{1}{2}\), find the first term and the \(\ n^{t h}\) term rule.
Solution
We can substitute what we know into the formula for the sum of a geometric series and solve for \(\ a_{1}\).
\(\ \begin{aligned}
\frac{215}{8} &=\frac{a_{1}\left(1-\left(-\frac{1}{2}\right)^{7}\right)}{1-\left(-\frac{1}{2}\right)} \\
\frac{215}{8} &=a_{1}\left(\frac{43}{64}\right) \\
a_{1} &=\left(\frac{64}{43}\right)\left(\frac{215}{8}\right)=40
\end{aligned}\)
The \(\ n^{t h}\) term rule is \(\ a_{n}=40\left(-\frac{1}{2}\right)^{n-1}\)
Example 4
Sam deposits $50 on the first of each month into an account which earns 0.5% interest each month. To the nearest dollar, how much is in the account right after Sam makes his last deposit on the first day of the fifth year (the 49th month).
Solution
The deposits that Sam make and the interest earned on each deposit generate a geometric series,
\(\ S_{49}=\underbrace{50}_{\text {last deposit }}+50(1.005)^{1}+50(1.005)^{2}+50(1.005)^{3}+\ldots+50(1.005)^{47}+\underbrace{50(1.005)^{48}}_{\text {first deposit }}\)
Note that the first deposit earns interest for 48 months and the final deposit does not earn any interest. Now we can find the sum using \(\ a_{1}=50\), \(\ r=1.005\) and \(\ n=49\).
\(\ S_{49}=\frac{50\left(1-(1.005)^{49}\right)}{(1-1.005)} \approx \$ 2768\)
Review
Use the formula for the sum of a geometric series to find the sum of the first five terms in each series.
1. \(\ a_{n}=36\left(\frac{2}{3}\right)^{n-1}\)
2. \(\ a_{n}=9(-2)^{n-1}\)
3. \(\ a_{n}=5(-1)^{n-1}\)
4. \(\ a_{n}=\frac{8}{25}\left(\frac{5}{2}\right)^{n-1}\)
5. \(\ a_{n}=\frac{2}{3}\left(-\frac{3}{4}\right)^{n-1}\)
Find the indicated sums using the formula and then check your answers with the calculator.
1. \(\ \sum_{n=1}^{4}(-1)\left(\frac{1}{2}\right)^{n-1}\)
2. \(\ \sum_{n=2}^{8}(128)\left(\frac{1}{4}\right)^{n-1}\)
3. \(\ \sum_{n=2}^{7} \frac{125}{64}\left(\frac{4}{5}\right)^{n-1}\)
4. \(\ \sum_{n=5}^{11} \frac{1}{32}(-2)^{n-1}\)
Given the sum and the common ratio, find the \(\ n^{t h}\) term rule for the series.
1. \(\ \sum_{n=1}^{6} a_{n}=-63\) and \(\ r=-2\)
2. \(\ \sum_{n=1}^{4} a_{n}=671\) and \(\ r=\frac{5}{6}\)
3. \(\ \sum_{n=1}^{5} a_{n}=122\) and \(\ r=-3\)
4. \(\ \sum_{n=2}^{7} a_{n}=-\frac{63}{2}\) and \(\ r=-\frac{1}{2}\)
Solve the following word problems using the formula for the sum of a geometric series.
1. Sapna’s grandparents deposit $1200 into a college savings account on her 5th birthday. They continue to make this birthday deposit each year until making the final deposit on her 18th birthday. If the account earns 5% interest annually, how much is there after the final deposit?
2. Jeremy wants to have save $10,000 in five years. If he makes annual deposits on the first of each year and the account earns 4.5% interest annually, how much should he deposit each year in order to have $10,000 in the account after the final deposit on the first of the 6th year. Round your answer to the nearest $100.
Answers for Review Problems
To see the Review answers, open this PDF file and look for section 11.10.
Vocabulary
Term | Definition
induction | Induction is a method of mathematical proof typically used to establish that a given statement is true for all positive integers.
series | A series is the sum of the terms of a sequence.
7.4.1: Sums of Finite Geometric Series is shared under a CK-12 license and was authored, remixed, and/or curated by CK-12 Foundation via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Confusion using pointers
This is a discussion on Confusion using pointers within the C Programming forums, part of the General Programming Boards category.
I just registered on this board for help with a problem I'm having. I recently had to edit an older assignment to use a linked list, and after completing the program it seg faults right off the bat. I understand the difficulty of diagnosing these, but I really have no idea where to start. I've had a lot of problems with seg faults in the past, usually involving arrays, pointers, or possibly using the wrong scanning function, but in general I must just be missing something.
Here's the beginning of my code; the rest is simple functions that work similarly to the add() function. The gist of the program is that it's supposed to open a file and read its contents into a linked list, then dynamically update the linked list based on what the user inputted, and then rewrite the file at the end. The file is also ciphered using a simple cipher before being written to the file.
Code:
#include<stdlib.h>
#include<stdio.h>
#include<string.h>
char* decipher(char* decipher);
char* cipher(char* cipher);
void menu(list* ptr);
list* add(char* bar[], list* ptr);
list* edit(char* bar[], list* ptr);
list* verify(char* foo[], list* ptr);
list* del(char* bar[], list* ptr);
void addToList(list* ptr, char* foo, char* poo, char* boo);
void printList(list* ptr);
//prototypes
#include"cipher.c"
#include"menu.c"
typedef struct list
{
char *user;
char *pass;
char *type;
struct list *next;
} list, node;
list* initializeList() {
char buffer[50];
node* initial;
node* ptr;
FILE* addUser = fopen("password.csv", "rt");
initial = (node*)malloc(sizeof(node));
if (addUser != NULL)
{
fgets(buffer, 50, addUser); //Initializes first node of list
initial->user = strtok(buffer, " ,");
initial->pass = strtok(buffer, " ,");
initial->type = strtok(buffer, "\n");
initial->next = NULL; //ready's addition of next node
ptr = initial;
while(!feof(addUser))
{
while(ptr->user != NULL)
ptr = ptr->next;
node* link = (node*)malloc(sizeof(node));
fgets(buffer, 50, addUser);
link->user = strtok(buffer, " ,");
link->pass = strtok(buffer, " ,");
link->type = strtok(buffer, "\n");
ptr->next = link;
}
}
fclose(addUser);
return initial;
}
int main(int argc, char* argv[]) {
printf("lolol");
list* linkList = initializeList();
char* choice = argv[1];
int i;
if (!strcmp("-menu", choice)){//If statement processes menu mode, if specified by user
menu(linkList);
printList(linkList);
free(linkList);
return (-1);
}
for (i = 2; i < argc; i++){
if (!strcmp(argv[i],"-menu"))//Determines if menu was incorrectly specified at command-line
{
printf("Syntax Error: -menu must be the first argument.\n");
return (-1);
}
}
if (!strcmp(choice, "-add"))
add(argv, linkList);
else if (!strcmp(choice, "-del"))
del(argv, linkList);
else if (!strcmp(choice, "-edit"))
edit(argv, linkList);
else if (!strcmp(choice, "verify"))
verify(argv, linkList);
else
{
printf("Syntax Error: possible arguments include -menu to enter menu-driven mode or,"
" -add, -del, -edit, or -verify followed by the entry arguments.\n");
return (-1);
}
printList(linkList);
free(linkList);
return 0;
}
list* add(char* array[], list* linkList){
node* ptr = linkList;
char *userCheck;
char *ciphered[3];
printf("How about here");
while (ptr->user != NULL)
{
userCheck = strdup(ptr->user);
userCheck = decipher(userCheck);
if (!strcmp(array[2], userCheck))
{
printf("Entry already exists"); //Checks if user is already present in the directory
return (-1); //Does not allow for more than one instance of the same name.
}
ptr = ptr->next;
}
ciphered[0] = strdup(cipher(array[2]));//Passes user, password and type to be ciphered
ciphered[1] = strdup(cipher(array[3]));
ciphered[2] = strdup(cipher(array[4]));
ptr = (node*)malloc(sizeof(node)); //creates new struct for contents to be added
addToList(ptr, ciphered[0], ciphered[1], ciphered[2]);
return linkList;
}
Any minor thoughts or things I could try would help as well.
2. #2
cas
cas is offline
Registered User
Join Date
Sep 2007
Posts
1,001
The first place that I see that is problematic is this:
Code:
while(ptr->user != NULL)
ptr = ptr->next;
I presume you mean while(ptr->next != NULL); otherwise you won't be able to detect if you fall off the end of the list.
In addition, I noticed that your usage of strtok() is not correct. The first argument on the first call to strtok() should be your string, but subsequent calls (while tokenizing the same string) should have the first argument set to NULL. Something like:
Code:
initial->user = strtok(buffer, " ,");
initial->pass = strtok(NULL, " ,");
initial->type = strtok(NULL, "\n");
...
link->user = strtok(buffer, " ,");
link->pass = strtok(NULL, " ,");
link->type = strtok(NULL, "\n");
It's also a bad idea to loop on the return value of feof(). feof() is not predictive, it's reactive; so it will not return true (ie EOF) until after a read function has noticed EOF. Instead, you want something like:
Code:
while(fgets(buffer, sizeof buffer, fp) != NULL)
That is, loop on the read function, because it will be happy to tell you when it found EOF. Looping on feof() will make it appear as though the last line of the file is read twice.
3. #3
Registered User
Join Date
Mar 2010
Posts
6
I updated my code for all the things you mentioned but it still seg faults off the start.
4. #4
cas
cas is offline
Registered User
Join Date
Sep 2007
Posts
1,001
Your next step (which should always be your first step on a segfault) is to use a debugger. I recommend Valgrind if it supports your platform. If not, gdb probably does. These will help you pinpoint the problem.
Valgrind is great because it can usually tell you where the problem occurred, as opposed to where the symptom (the segfault) is occurring.
5. #5
Registered User
Join Date
Mar 2010
Posts
6
Thanks! I didn't know programs like these existed. I'll get right on it.
6. #6
Registered User
Join Date
Mar 2010
Posts
6
I'm still having problems getting this to run. I've tried using valgrind and I've pinpointed some errors, but it's still not functioning correctly. I'm not sure what command/tool I should be using to pinpoint the problems in valgrind, but I've been using memcheck; aside from that I'm not sure what else to do.
If anyone can check whether I've defined everything properly and whether I have any null pointers or anything to that effect, that would be very helpful. I believe the problem most likely lies inside my initializer for the linked list I'm creating, but I'm still unsure as to the exact problem.
Code:
list* initializeList() {
char buffer[50];
node* initial = (node*)malloc(sizeof(node));
node* ptr = initial;
FILE* addUser = fopen("password.csv", "rt");
if (addUser != NULL)
{
fgets(buffer, 50, addUser); //Initializes first node of list
initial->user = strtok(buffer, " ,");
initial->pass = strtok(NULL, " ,");
initial->type = strtok(NULL, "\n");
initial->next = NULL; //ready's addition of next node
ptr = initial;
while(fgets(buffer, 50, addUser) != NULL)
{
while(ptr->next != NULL)
ptr = ptr->next;
node* link = (node*)malloc(sizeof(node));
fgets(buffer, 50, addUser);
link->user = strtok(buffer, " ,");
link->pass = strtok(NULL, " ,");
link->type = strtok(NULL, "\n");
ptr->next = link;
}
fclose(addUser);
}
return initial;
}
7. #7
Registered User claudiu's Avatar
Join Date
Feb 2010
Location
London, United Kingdom
Posts
2,094
If you want 50 characters in your buffer, you had better allocate space for 50+1 (the extra byte holds the string terminator \0).
8. #8
cas
cas is offline
Registered User
Join Date
Sep 2007
Posts
1,001
I suspect you don't want to be calling fgets() inside your loop like that; you'll be ignoring lines. Once you have something like:
Code:
while(fgets(buffer, 50, addUser) != NULL)
you need not call fgets() again. Each iteration of the loop will fill up "buffer" with (at most) 49 bytes from the file. By calling fgets() inside of the loop you're throwing away the line read by the fgets() call that's controlling the loop.
There's another substantial error that I completely missed in your initial post. When you are using strtok(), the return value from strtok() is a pointer inside the buffer you're tokenizing. You're not actually getting copies of the token. Thus each time you read a line, your previous tokens will get garbled because they're simply pointing inside of your single buffer. What's more, the array they point to is local to the function, so when it returns, any pointers to it become invalid. Your linked list entries all contain pointers to this (soon to be) invalid buffer, and that can cause problems; problems that even the mighty Valgrind might not be able to notice. You can solve this in a few ways. You might make user, pass, and type arrays and snprintf() the values to them; or you might use malloc() + strcpy() (or strdup() if you're targeting unix-like systems). This latter option would require a lot of free() calls if you want to avoid leaks.
You'll also want to set link->next to NULL. Otherwise it'll contain garbage.
As for valgrind, memcheck is the tool you'll want to focus on for now. The other tools are great, too, but for debugging memcheck is the way to go. When you're debugging, you always want to build with the -g flag (tells the compiler to include debugging symbols). Off the top of my head, the following are common memcheck errors:
invalid read of size n: you tried to access memory that you're not allowed to. If you tried to read an int from an invalid pointer, for example, it will (probably) tell you that you had an invalid read of 4 bytes (4 bytes being a typical size for int).
invalid write of size n: the same as the above but for storing, not retrieving
conditional move depends on uninitialized value / syscall points to uninitialized bytes / use of uninitialized value of size n: you tried to use something before giving it a value
There are probably more, but these will cover a lot of the issues you'll run into.
9. #9
Registered User
Join Date
Mar 2010
Posts
6
Very informative post, thanks a lot Cas. I'm a little confused as to how I can solve the problem of the local structure within the loop. If I only need to create a new struct on the condition that there are more lines to be read, how do I go about doing this within each iteration of the loop?
10. #10
cas
cas is offline
Registered User
Join Date
Sep 2007
Posts
1,001
There's no problem with a local struct (you have no local structs; they're all allocated). And those you are creating each iteration, since you're calling malloc() each time.
The problem is your array of char. It is local to the function. But that's OK, really, because you'll have to solve another problem first (the strtok() issue I mentioned), and once that's solved, the fact that "buffer" is local to the function doesn't matter. Example:
Code:
char *f(void)
{
char s[] = "foo";
return s; /* bad because you're returning a pointer to storage that is local to the function, storage which ceases to exist when the function returns */
}
char *f(void)
{
char s[] = "foo";
char *p = malloc(strlen(s) + 1);
strcpy(p, s);
return p; /* fine because you're returning a pointer to allocated storage, which exists until you call free() */
}
The fact that strtok() does not make copies of the tokens means you will have to make copies. Thus you'll be doing something more like the second function in the example above.
11. #11
Registered User
Join Date
Mar 2010
Posts
6
Thanks for all the help cas; with your help and that of another C whiz friend of mine I was able to get it running. He said you had the right idea and he helped me implement it correctly, thanks again!
Live Query change event not quite working in IE
agentfloyd
08-12-2008, 07:11 PM
I have a form with a set of fields that need to be added to dynamically and have a total calculated from the values of each. I start out with one instance of each field, then add more with the click of a button. Changes to the values of the initial set of fields trigger the calculations, but not changes to the values of any added fields.
the fields:
<tr>
<td></td>
<td><input type="text" name="day_rate[]" size="5" class="invoice_dayrate" /></td>
<td><input type="text" name="per_diem[]" size="5" /></td>
<td><input type="text" name="mileage[]" size="5" /></td>
<td><input type="text" name="lodging[]" size="5" /></td>
<td></td>
<td></td>
</tr>
and the javascript:
$('input.invoice_dayrate').livequery('change', function() {
$('#total_day_rate').val(0);
$('input.invoice_dayrate').each(function() {
var value = (this.value != '') ? parseFloat(this.value) : 0;
this.value = value.toFixed(2);
var subtotal = parseFloat($('#total_day_rate').val());
var subtotal = subtotal+value;
$('#total_day_rate').val(subtotal.toFixed(2));
});
update_total();
});
(JavaScript code for the other fields has been omitted; they all work the same as the Day Rate field)
I have tried several different selectors: $('input.invoice_dayrate'), $('.invoice_dayrate'), $("input[name='day_rate[]']"), $(":text[name='day_rate[]']"). All with the same result.
Also, the calculations are all done correctly when triggered. It's just that the calculations are only triggered when a change is made to a pre-existing field and not to fields that have been added.
Btw this all works brilliantly in Firefox, just having trouble in IE (7 to be precise, haven't tried in IE6).
How do I delete an Outlook account from my iPhone app?
If you’re no longer using an Outlook account, you can delete it from your iPhone. This will remove the account and all its associated data from your device.
Here’s how to delete an Outlook account from your iPhone:
1. Tap the Settings icon on your home screen.
2. Select Accounts & Passwords.
3. Select the Outlook account you want to delete.
4. Tap the Delete Account button.
5. Confirm that you want to delete the account by tapping the Delete button.
How to remove account on Outlook
Assuming that you would like to remove an Outlook account:
1. Go to Control Panel and find the Mail icon.
2. Under the Email tab, find the account you want to remove and click the minus sign.
3. A dialog box will appear. Choose the option to “Remove the account from this computer” and then click OK.
4. The account will be removed from Outlook.
Frequently Asked Questions about deleting an Outlook account from the iPhone app
Can I access my Outlook account from my phone?
If you’re using Outlook for email, you can access your account from your phone. The process is a bit different depending on the type of phone you’re using, but the general idea is the same.
If you’re using an iPhone, you can access your Outlook account by downloading the Outlook app from the App Store. Once you have the app installed, open it and sign in with your Outlook account information.
If you’re using an Android phone, you can access your Outlook account by downloading the Outlook app from the Google Play Store. Once you have the app installed, open it and sign in with your Outlook account information.
If you’re using a Windows Phone, you can access your Outlook account by downloading the Outlook app from the Windows Phone Store. Once you have the app installed, open it and sign in with your Outlook account information.
Once you’re signed in, you’ll be able to access your email, contacts, and calendar just like you would on a computer. You can also compose new emails and add new contacts.
How do I set up my Outlook email on my iPhone?
Assuming you already have an Outlook email account:
1. Tap the Settings app on your iPhone’s home screen.
2. Scroll down and tap Mail, Contacts, Calendars.
3. Tap Add Account under the “Accounts” section.
4. Tap Microsoft Exchange.
5. Enter your Outlook email address and password.
6. Tap Next.
7. Make sure the “Configure Manually” option is selected.
8. Enter the following information in the corresponding fields:
– Email: Your Outlook email address
– Domain: outlook.com
– Username: Your Outlook email address
– Password: Your Outlook password
9. Tap Next.
10. Select which Outlook features you want to sync with your iPhone.
11. Tap Save.
How do I access my email on Outlook app?
Open the Outlook app. Tap the icon with your initials in the lower-left corner. If you have multiple email accounts, tap the account you want to use. Enter your email address and password, then tap Sign in.
Why is my Outlook email not working on my iPhone?
If you’re having trouble getting your Outlook email to work on your iPhone, there are a few things you can try.
First, make sure that your Outlook account is set up correctly on your iPhone. To do this, go to Settings > Passwords & Accounts, and then select your Outlook account. If the account is set up correctly, you should see the account’s email address, as well as the server settings for incoming and outgoing mail.
If your Outlook account is set up correctly and you’re still having trouble, try deleting and re-adding the account on your iPhone. To do this, go to Settings > Passwords & Accounts, select your Outlook account, and then tap the Delete Account button. Once the account has been deleted, you can add it back by going to Settings > Passwords & Accounts, and then tapping the Add Account button.
If you’re still having trouble, you can try troubleshooting your iPhone’s connection to Outlook. To do this, go to Settings > Mail, select your Outlook account, and then tap the Account Info button. Next, tap the Fetch New Data button, and make sure that the Push slider is set to the ON position.
If you’re still having trouble, you can try contacting Microsoft’s support team for help.
Where is Outlook settings on iPhone?
If you’re using an iPhone and want to change your Outlook settings, here’s how to do it.
1. Tap the Settings icon on your homescreen.
2. Scroll down and tap Mail, Contacts, Calendars.
3. Tap Add Account under Accounts.
4. Tap Microsoft Exchange.
5. Enter your Outlook email address, then tap Next.
6. Enter your password, then tap Next.
7. Enter a description of the account, then tap Next.
8. Make sure the Configure Automatically switch is turned on, then tap Save.
9. Your iPhone will now configure your Outlook settings.
How do you add an account to Outlook Mobile App?
Adding an account to the Outlook Mobile App is a quick and easy process. Simply open the app and go to Settings, then Accounts. From there, you can add an Exchange, Office 365, or Outlook.com account by selecting the appropriate option and following the instructions. If you’re adding an Exchange account, you may need to enter some additional information, such as your server address. Once you’ve added your account, you’re ready to start using Outlook Mobile!
Why can’t I add an Outlook account to my iPhone?
If you’re trying to add an Outlook account to your iPhone, you may have noticed that there is no Outlook option available. So why can’t you add an Outlook account to your iPhone?
The answer is actually quite simple. Microsoft does not currently offer an Exchange ActiveSync license to any third-party companies, which means that Outlook cannot sync with an iPhone using this protocol.
So what does this mean for you? If you want to sync your Outlook account with your iPhone, you’ll need to use another method, such as Microsoft’s Outlook app or a third-party app that uses a different sync protocol.
While it may be inconvenient, this is actually a good thing. Exchange ActiveSync is a proprietary protocol that gives Microsoft a lot of control over how your data is synced and managed. By not offering it to third-party companies, Microsoft can ensure that Outlook works the way they want it to and that your data is always safe.
If you’re looking for an alternative to Outlook, there are a number of great email apps available that will work with your iPhone. And since most of them use the more open IMAP protocol, you’ll have a lot more control over your data and how it’s synced.
How do I add email account to Outlook?
Adding an email account to Outlook is a simple process that can be completed in just a few steps. First, open Outlook and click on the File tab. Next, click on the Add Account option. Enter the email address and password for the account you wish to add, then click on the Next button. Outlook will now begin to set up your account. Once the process is complete, click on the Finish button and you’re all set!
How do you reset Outlook app on iPhone?
Open the Outlook app on your iPhone.
Tap the Settings icon.
Under the Accounts section, tap the account you want to reset.
Tap the Delete Account button.
A confirmation message will appear. Tap the Delete Account button to confirm.
Why is Outlook app not working?
If you’re having trouble with the Outlook app on your iPhone or iPad, first make sure that you have the latest version of the app. Then, try these steps:
1. Make sure you have a connection to the Internet
If you’re using cellular data, make sure that cellular data is turned on for the Outlook app. To do this, go to Settings > Cellular, and turn on the switch next to Outlook.
If you’re using a Wi-Fi connection, make sure that you’re connected to the right Wi-Fi network.
2. Check your email account settings
In the Outlook app, go to Settings > Add Account. Compare the settings for your email account in the Outlook app with the settings for that account in your email provider’s website.
Here are some things to check:
The account type. This should be IMAP or POP.
The incoming and outgoing server names. These should match the settings for your email account in your email provider’s website.
The user name and password. These should match the settings for your email account in your email provider’s website.
The connection type. This should be SSL/TLS.
3. Delete and re-add your email account
In the Outlook app, go to Settings > Add Account, tap your email account, then tap Delete Account. After your account is deleted, go to Settings > Add Account, and add your account again.
4. Restart your device
Press and hold the Sleep/Wake button, then slide the button to turn off your device. To turn your device back on, press and hold the Sleep/Wake button until the Apple logo appears.
5. Uninstall and reinstall the Outlook app
To uninstall the Outlook app, press and hold the Outlook app icon. When the icon starts wiggling, tap the x that appears.
To reinstall the Outlook app, go to the App Store, then search for and download the Outlook app.
Why can’t I access Outlook on my phone?
If you can’t access Outlook on your phone, it could be because your phone’s operating system is not supported by Outlook, you don’t have an Exchange ActiveSync account, or your Exchange server is not available.
Outlook for Android and iOS supports a subset of Exchange server features. If your organization uses features that aren’t supported, you won’t be able to use Outlook on your phone.
To use Outlook on your phone, you need an Exchange ActiveSync account. Most Exchange accounts provided by organizations using Exchange Server 2010 or later support Exchange ActiveSync.
If you have an Exchange ActiveSync account, but you still can’t access Outlook on your phone, it could be because your Exchange server is unavailable. This could be due to maintenance, network problems, or other issues.
Conclusion
If you want to delete an Outlook account from your iPhone, you can do so by going to the Settings app and tapping on the Accounts & Passwords section. From there, simply select the Outlook account you want to delete and tap on the Delete Account button.
Johnny Lee Keynotes
The Johnny Lee keynotes examine how everyday technologies can be transformed into high-functioning...
Johnny Lee Demonstrates How to Transform Basic Technology into Useful Tools
By: John Ibbitson. References: ted
In his highly inventive technology presentation, Johnny Lee shows his audience how to hack a Wii video game remote controller and turn it into a valuable teaching aid for businesses and schools. By manipulating the way the remote works ever so slightly, Lee demonstrates how it can be used to create an interactive whiteboard with touchscreen technology at a fraction of the usual cost, as well as a head-mounted three-dimensional viewer. The latter of these two inventions has already been picked up as something the gaming industry intends to incorporate into future products. In this innovative presentation, Lee not only encourages his audience to think differently about technology and its various uses, but also demonstrates how one can quickly give online tutorials with the use of websites like YouTube. Just days after Johnny Lee posted these tutorials online, engineers, students and teachers from all over the world began embracing his lessons and sharing them through their own video tutorials. This thoroughly engaging seminar gives viewers information on how to replicate affordable versions of expensive technologies in their offices and classrooms.
Traction: 242 clicks in 161 w
Interest: > 3 minutes
Concept: Johnny Lee
Related: 17 examples / 13 photos
Segment: Neutral, 12-55+
Comparison Set: 6 similar articles, including: social media & gender, web-based evolution, and child programmers.
|
__label__pos
| 0.558647 |
Does Not Compute
One of the most basic ways to think about a computer program is that it is a device which takes in integers as inputs and spits out integers as outputs. The C# compiler, for example, takes in source code strings, and those source code strings are essentially nothing more than enormous binary numbers. The output of the compiler is either diagnostic text, or strings of IL and metadata, which are also just enormous binary numbers. Because the compiler is not perfect, in some rare cases it terminates abnormally with an internal error message. But those fatal error messages are also just big binary numbers. So let's take this as our basic model of a computer program: a computer program is a device that either (1) runs forever without producing output, or (2) computes a function that maps one integer to another.
So here's an interesting question: are there functions which cannot be computed, even in principle on a machine with arbitrarily much storage, by any C# program (*)?
We already know the answer to that question. Last year I pointed out that the Halting Problem is not solvable by any computer program, because the assumption that it is solvable leads to a logical contradiction. But the Halting Problem is just a function on integers. Let's say that the input of our function H is a number which when written out in binary is a Unicode string that might contain a C# program. The output is 1 if the program is an illegal C# program, 2 if it is a legal C# program which halts, and 3 if it is a legal C# program which does not halt. If it were possible to write a program that reliably computes function H and always terminates then it would be possible to use it to solve the Halting Problem, which we've shown is impossible. Therefore H is not a computable function.
Let's explore this a bit further. The "Turing Machine" model of computing is that a computer is a machine that has three kinds of storage: first, there's a fixed amount of "internal" storage that describes the current state of the processor, second, there is arbitrarily much "external" storage in the form of paper tape, disk drives, or whatever, that can contain binary data, and third, there is some way of identifying the "current position" being manipulated in the external storage. The Turing Machine also has strict rules that describe how to change the internal state, the external state, and the current position. One of the internal states is the "start" state, and one of the internal states is the "halt" state; once the machine gets to the halting state, it stops. Otherwise, it runs forever.
Without loss of generality, let's suppose that our Turing Machine's external storage is arbitrarily many bits, either zero or one, and that the internal storage is some fixed number of bits, say n. This is pretty restrictive, but we haven't actually lost anything fundamental here. Real computers of course give the appearance of manipulating storage that consists of 32 bit integers or 64 bit doubles or whatever, but at some level inside the processor, it is manipulating individual bits. There is no difference in principle between a machine that manipulates one bit at a time and a machine that manipulates 64 bits at a time; the latter is simply more convenient.
So then how many rules do we need to come up with for our Turing machine? A Turing machine with n bits of internal state has 2^n possible states, and there are two possibilities for the value at the "current position" in the external state. (**) So that means that there are 2^(n+1) state transition rules. Each transition rule will have to encode three things: (1) what are the n bits of the new internal state? (2) what value should the external state be changed to? and (3) how should we update the current position?
Again without loss of generality, we can update the current position by decreasing it by one, increasing it by one, or leaving it the same. In practice that is inconvenient, but in principle that is enough. So those are three possibilities. Thus, each state transition rule is one of 2 x 2^n x 3 possibilities. There are 2^(n+1) state transition rules. Therefore the total number of possible Turing Machines that have n bits of internal storage is 3 x 2^(n+1) raised to the 2^(n+1) power, which, yes, grows pretty quickly as n gets large, but which is clearly a finite number.
Each one of these n-bit Turing Machines essentially computes a function. You start it up with the external storage in a particular state and the machine either runs forever, or after some finite number of steps it halts. If it halts, then the output of the function is the value left behind in the external storage.
Again without loss of generality, let's consider the value computed by each one of those possible Turing machines when the external storage is initially all zeros. When given that starting configuration, each of those Turing machines either runs for some number of steps and then halts with the result, or it runs forever. Let's ignore the ones that run forever. Of the ones that are left, the ones that terminate, one of them must run the longest (***). That is, one of those machines that halts must have the largest number of steps taken before entering the halting state.
We therefore can come up with a function S that goes from integers to integers. The function S takes in n, the number of bits in the Turing Machine internal state, and gives you back the largest number of steps any of the possible n-bit Turing Machines that halts takes to halt. That is, S takes in the number of bits of internal storage and gives you back the amount of time you have to wait for the slowest of the n-bit machines that actually terminates, when it is started with empty external storage.
Is S a computable function? Can we write a computer program that computes it?
Your intuition should be telling you "no", but do you see why?
.
.
.
.
.
.
.
.
Because if S were computable then H would be computable too! All we'd have to do to compute H is to make a computer program that compiles a given C# program into a Turing Machine simulator that starts with an empty tape. We take the number of bits of state, n, of that Turing Machine, and compute S(n). Then we run the Turing Machine simulator and if it takes more than S(n) steps then we know that it must have been one of the n-bit Turing machines that runs forever. We'd then be able to reliably compute H in finite time. Since we already know that H is not reliably computable in finite time then we know that S must not be computable either.
The argument that I'm advancing here is known as the "Busy Beaver" argument because the n-bit Turing Machine that runs the longest is the "busiest beaver". I've tweaked the way that it is usually presented; rather than the n-bit Turing Machine that runs the longest before terminating, the "busiest beaver" is traditionally defined as the k-state Turing Machine that produces the largest output. The two characterizations are essentially equivalent though; neither version of the function is computable.
An interesting fact about the busy beaver function (either way you characterize it) is that the function grows enormously fast. It's easy to think of functions that grow quickly; even simple functions like n! or 2^n grow to astronomical levels for relatively small values of n, like 100. But our busiest beaver function S(n) grows faster than any computable function. That is, think of a function that grows quickly where you could write a program to compute its value in finite time; the busiest beaver function grows faster than your function, no matter how clever you are in coming up with a fast-growing function. Do you see why? You've got all the information you need here to work it out. (****)
(*) Of course, there is nothing special about C#; it is a general-purpose programming language. We'll take as given that if there is a function that cannot be computed in C# then that function cannot be computed by any program in any programming language.
(**) Of course, we don't need to state transitions from the halting state, but, whatever. We'll ignore that unimportant detail.
(***) Of course, there could be a tie for longest, but that doesn't matter.
(****) Of course, even if the busiest beaver function did not grow absurdly quickly, the fact that it clearly grows more than exponentially is evidence that our proposed technique for solving the Halting Problem would be impractical were it not impossible. Compiling a non-trivial C# program to a Turing Machine simulator would undoubtedly produce a machine with more than, say, 100 bits of state. There are an enormous number of possible Turing Machines with 100 bits of internal state, and the one that runs the longest before it halts undoubtedly runs longer than the universe will last.
Comments (11)
1. Gabe says:
So you CAN solve the Halting Problem!
Just for only really small computers.
Correct! The Halting Problem is solvable for Turing Machines that have five internal states or fewer. But as we've seen, that's only a small handful of possible machines, so it is tractable to analyze all of them. — Eric
2. Ted says:
Computing any infinitely long number, such as the square root of 2, will not halt.
What characterizes an "infinitely long number"? The number 1.0000… is "infinitely long" as well, but it can be computed, right? I see no reason why the square root of two is any more special than any other number. — Eric
Q: Can you cover why a local variable cannot be declared within the scope of a property, before the get? This would be quite helpful for all of the XAML/WPF properties with backing variables and OnPropertyChanged. Auto properties could be greatly enhanced by syntax similar to
public bool propertyA { get; set; when_modified { code snippet } }
The same reason why every unimplemented feature is not implemented: features are unimplemented by default. In order to become implemented a feature must be (1) thought of, (2) designed, (3) specified, (4) implemented, (5) tested, (6) documented and (7) shipped. Of those seven things, only the first has happened for your proposed feature. Therefore, no such feature. — Eric
3. Daniel Brückner says:
If there were a computable upper bound for the busy beaver function, we could just run a Turing machine for this number of steps to decide whether it halts or not.
4. Jeroen Mostert says:
@Gabe: sure. The computers we use are what's technically known as linear bounded automata — just like a Turing machine, but with finite storage. Those machines have finite state, and therefore, in principle, the halting problem is decidable for programs running on real computers. Just run the program and track the distinct states you've seen (where "state" is the contents of all memory of the machine). If a state ever repeats, you know the program will never terminate (because, barring hardware failure, the computer is deterministic and one state will always lead to the same next state). Otherwise, you will surely eventually run out of unique states to see, and then the program must stop if it hasn't done so already.
The problem is, of course, the "in principle", as Eric pointed out. It's absurdly impractical to solve the halting problem this way for any non-trivial program — the number of possible states isn't just astronomical, it's far beyond even that. However, the basic approach (try to show that we will never visit the exact same state twice) is used in various techniques to prove that programs terminate. Those just don't rely on naive enumeration, but on proving more involved properties over all possible states.
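Jeroen's scheme lends itself to a small sketch too. Assumptions: a deterministic toy "machine" that can serialize its complete state; real machine state spaces are of course far too large for this naive enumeration.

```javascript
// Decide halting for a deterministic machine with finite state by detecting
// a repeated state: same state twice means the same future forever, i.e. a loop.
// Practical only for toy state spaces.
function haltsFiniteState(machine) {
  var seen = {};
  while (!machine.halted) {
    var key = machine.stateKey(); // serialization of the complete machine state
    if (seen[key]) return false;  // state repeated: deterministic, so it loops
    seen[key] = true;
    machine.step();
  }
  return true;
}
```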
5. Ted says:
Pi and the square root of 2 are irrational numbers. Irrational numbers cannot be represented by A / B where A and B are integers, B <> 0 and also not represented by a terminating or repeating decimal.
That is correct. And yet *you* have managed to represent both pi and root two in finitely many bits in this very comment. What explains this apparent paradox? And if *you* can do it, then why can't a Turing Machine do the same? — Eric
6. Oded says:
This immediately reminded me of many themes of Douglas Hofstadter's book GEB and Godel's incompleteness theorem.
7. Necroman says:
Actually I've used this same argument during my Master's degree exams two years ago – that we cannot create a program for solving the Busy Beaver problem. If we had such a program, we would be able to solve the halting problem, and that is a contradiction.
8. Kalle Olavi Niemitalo says:
If your Turing machines consist of 2^(n+1) state transition rules, and each rule has 2 * 2^n * 3 possibilities, then the total number of possible machines is not (2 * 2^n * 3) * (2^(n+1)) = 3 * 2^(2n+2). It is (2 * 2^n * 3)^(2^(n+1)). For n=2 (i.e. 4 internal states), that makes 24^8 = 110075314176 machines, rather than 24*8 = 192. For five internal states, the handful would be larger yet.
Whoops, you are of course correct. I was typing faster than I was thinking. I'll fix the text. — Eric
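Kalle's corrected count is easy to verify mechanically; a quick sketch mirroring the counting in the comment (not anything from the post itself):

```javascript
// Kalle's counting: with n bits of internal state there are 2^(n+1)
// transition rules, and each rule independently picks one of 2 * 2^n * 3
// possibilities, so the counts multiply rather than add.
function machineCount(n) {
  var rules = Math.pow(2, n + 1);
  var choicesPerRule = 2 * Math.pow(2, n) * 3;
  return Math.pow(choicesPerRule, rules);
}
```

For n = 2 this gives 24^8 = 110075314176, matching the comment.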
9. Dan Sutton says:
So basically, what you're saying is that there's no limit to how inefficiently a program can be written… I thought we knew this already!
10. Richard Cox says:
@ Oded
That's hardly surprising; both Turing's and Gödel's work developed toward a proof that arithmetic is consistent (free of any internal contradictions), as posed by Hilbert (the second of his 23 problems, posed in 1900).
The answer was that a sufficiently interesting arithmetic – our normal arithmetic qualifies – cannot be proven to be consistent /and/ cannot be proven to be inconsistent. Hence "incompleteness".
To clarify: Godel's proof shows that every logical system has at least one of these three properties: either (1) it is possible to prove both a theorem and its opposite, making the system inconsistent, (2) there exist well-formed statements in the system that have no proof, and their negations have no proof either, and therefore the system is incomplete, or (3) the system is not sufficiently powerful to implement integer arithmetic, and therefore the system is weak. There are no complete, consistent, strong logical systems. — Eric
11. William Payne says:
So what is the big-Oh notation for the computational complexity of the busiest beaver? O(beaver)?
nexttick
JavaScript performance comparison
Revision 4 of this test case created by
Preparation code
<script>
Benchmark.prototype.setup = function() {
  // 1. MessageChannel: queue the callback and wake the other port.
  var nextTick1 = (function () {
    var channel = new MessageChannel();
    var queue = [];
    channel.port1.onmessage = function () {
      queue.shift()();
    };
    function nextTick(fn) {
      queue.push(fn);
      channel.port2.postMessage(0); // some browsers require an argument here
    }
    return nextTick;
  })();

  // 2. setTimeout(fn, 0): the baseline, clamped to several ms in many browsers.
  var nextTick2 = (function () {
    function nextTick(fn) {
      return setTimeout(fn, 0);
    }
    return nextTick;
  })();

  // 3. Image.onerror: a bogus data: URI fails to load and fires the handler.
  var nextTick3 = (function () {
    function nextTick(fn) {
      var image = new Image();
      image.onerror = fn;
      image.src = 'data:,foo';
    }
    return nextTick;
  })();

  // 4. script.onload: inject an empty script and run the callback on load.
  var nextTick4 = (function () {
    function nextTick(fn) {
      var script = document.createElement('script');
      script.onload = function () {
        document.body.removeChild(script);
        fn();
      };
      script.src = 'data:text/javascript,';
      document.body.appendChild(script);
    }
    return nextTick;
  })();

  // 5. Synchronous XHR against a data: URI.
  var nextTick5 = (function () {
    // FAILS ON SOME BROWSERS SO USE SETTIMEOUT INSTEAD
    function nextTick(fn) {
      var req = new XMLHttpRequest();
      req.open('GET', 'data:text/plain,foo', false);
      req.onreadystatechange = function () {
        req.onreadystatechange = null;
        fn();
      };
      req.send(null);
    }
    return nextTick;
  })();

  // 6. window.postMessage with a random key, to avoid clashing with other listeners.
  var nextTick6 = (function () {
    var key = 'nextTick__' + Math.random();
    var queue = [];
    window.addEventListener('message', function (e) {
      if (e.data !== key) {
        return;
      }
      queue.shift()();
    }, false);
    function nextTick(fn) {
      queue.push(fn);
      window.postMessage(key, '*');
    }
    return nextTick;
  })();

  // 7. requestAnimationFrame: tied to display refresh, not really "next tick".
  var nextTick7 = (function () {
    function nextTick(fn) {
      requestAnimationFrame(fn);
    }
    return nextTick;
  })();
};
</script>
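One variant not in this revision, worth comparing on modern engines: a microtask-based nextTick built on Promise.resolve(). Microtasks drain before timers and message events, so where Promise is available this usually beats every approach above (a sketch, not part of the benchmark):

```javascript
// Microtask-based nextTick (assumes an environment with ES6 Promise).
// Each call to then() on an already-resolved promise schedules fn as a microtask.
var nextTick8 = (function () {
  var resolved = Promise.resolve();
  return function nextTick(fn) {
    resolved.then(fn);
  };
})();
```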
Test runner
Tests
MessageChannel
// async test
nextTick1(function() {
deferred.resolve();
});
setTimeout
// async test
nextTick2(function() {
deferred.resolve();
});
Image.onerror
// async test
nextTick3(function() {
deferred.resolve();
});
script.onload
// async test
nextTick4(function() {
deferred.resolve();
});
window.onmessage
// async test
nextTick6(function() {
deferred.resolve();
});
requestAnimationFrame
nextTick7(function() {
deferred.resolve();
});
net.obsearch.index.ghs.impl
Class Sketch64Byte<O extends OBByte>
java.lang.Object
extended by net.obsearch.index.AbstractOBIndex<O>
extended by net.obsearch.index.pivot.AbstractPivotOBIndex<O>
extended by net.obsearch.index.bucket.AbstractBucketIndex<O,B,Q,BC>
extended by net.obsearch.index.sorter.AbstractBucketSorter<O,B,Q,BC,SketchProjection,CBitVector>
extended by net.obsearch.index.ghs.AbstractSketch64<O,BucketObjectByte<O>,OBQueryByte<O>,SleekBucketByte<O>>
extended by net.obsearch.index.ghs.impl.Sketch64Byte<O>
All Implemented Interfaces:
Index<O>, IndexByte<O>
public final class Sketch64Byte<O extends OBByte>
extends AbstractSketch64<O,BucketObjectByte<O>,OBQueryByte<O>,SleekBucketByte<O>>
implements IndexByte<O>
Nested Class Summary
protected class Sketch64Byte.KnnIterator
Implements a knn graph iteration over the whole dataset
Nested classes/interfaces inherited from class net.obsearch.index.bucket.AbstractBucketIndex
AbstractBucketIndex.BucketIterator
Field Summary
Fields inherited from class net.obsearch.index.ghs.AbstractSketch64
distortionStats, m, maskPivotSelector, pivotGrid
Fields inherited from class net.obsearch.index.sorter.AbstractBucketSorter
bucketCache, bucketPivotCount, kEstimators, projections, projectionStorage, userK
Fields inherited from class net.obsearch.index.bucket.AbstractBucketIndex
Buckets
Fields inherited from class net.obsearch.index.pivot.AbstractPivotOBIndex
intrinsicDimensionalityPairs, pivots, pivotSelector
Fields inherited from class net.obsearch.index.AbstractOBIndex
A, fact, isFrozen, stats, type
Fields inherited from interface net.obsearch.Index
ID_SIZE
Constructor Summary
Sketch64Byte()
Sketch64Byte(Class<O> type, IncrementalPairPivotSelector<O> pivotSelector, int m)
Create a new Sketch64Byte with m bits.
Method Summary
protected double distance(O a, O b)
byte[] fullMatchLite(O query, boolean filterSame)
This method returns a list of all the distances of the query against the DB.
BucketObjectByte<O> getBucket(O object)
Returns the bucket information for the given object.
protected int getCPSize()
Return the compact representation size
protected AbstractOBQuery<O> getKQuery(O object, int k)
Returns a k query for the given object.
protected Class<CBitVector> getPInstance()
SketchProjection getProjection(BucketObjectByte<O> bucket)
Compute the sketch for the given object.
protected SleekBucketByte<O> instantiateBucketContainer(byte[] data, byte[] address)
Get a bucket container from the given data.
Iterator<List<OBQueryByte<O>>> knnGraph(int k, byte r)
Performs a knn graph search
protected void maxKEstimationAux(O object)
Get the kMax closest objects.
protected int primitiveDataTypeSize()
Return the size in bytes of the underlying primitive datatype.
void searchOB(O object, byte r, Filter<O> filter, OBPriorityQueueByte<O> result)
Searches the Index and returns OBResult (ID, OB and distance) elements that are closer to "object".
void searchOB(O object, byte r, OBPriorityQueueByte<O> result)
Searches the Index and returns OBResult (ID, OB and distance) elements that are closer to "object".
Methods inherited from class net.obsearch.index.ghs.AbstractSketch64
bytesToCompactRepresentation, compactRepresentationToBytes, debugDist, freeze, updateDistance
Methods inherited from class net.obsearch.index.sorter.AbstractBucketSorter
bucketStats, calculateEstimators, close, estimateK, freezeDefault, getAddress, getAllObjects, getBucketContainer, getBucketPivotCount, getExpectedEP, getMaxK, init, initByteArrayBuckets, initCache, insertBucket, insertBucketBulk, loadMasks, maxKEstimation, printEstimation, searchBuckets, setExpectedError, setKAlpha, setMaxK, setSampleSize
Methods inherited from class net.obsearch.index.bucket.AbstractBucketIndex
debug, deleteAux, exists, getBuckets, getObjectFreeze, idMap, insertAux, insertAuxBulk, iterateBuckets
Methods inherited from class net.obsearch.index.pivot.AbstractPivotOBIndex
calculateIntrinsicDimensionality, createPivotsArray, getObjects, getPivotCount, selectPivots
Methods inherited from class net.obsearch.index.AbstractOBIndex
assertFrozen, bytesToObject, bytesToObject, clearACache, databaseSize, delete, emptyPivotsArray, findAux, getBox, getObject, getStats, getType, initStorageDevices, insert, insert, insertBulk, insertBulk, intrinsicDimensionality, isFrozen, isPreFreeze, isPreFreezeCheck, loadObject, loadPivots, objectToByteBuffer, objectToBytes, resetStats, serializePivots, setFixedRecord, setFixedRecord, setIdAutoGeneration, setPreFreeze, setPreFreezeCheck, totalBoxes
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface net.obsearch.Index
close, databaseSize, debug, delete, exists, freeze, getBox, getObject, getStats, getType, init, insert, insert, insertBulk, insertBulk, isFrozen, loadObject, resetStats, setPreFreezeCheck, totalBoxes
Constructor Detail
Sketch64Byte
public Sketch64Byte(Class<O> type,
IncrementalPairPivotSelector<O> pivotSelector,
int m)
throws OBStorageException,
OBException,
IOException
Create a new Sketch64Byte with m bits.
Parameters:
type - Type of object that will be stored
pivotSelector - Pivot selection strategy to be employed.
m - The number of bits
Throws:
OBStorageException
OBException
IOException
Sketch64Byte
public Sketch64Byte()
Method Detail
getBucket
public BucketObjectByte<O> getBucket(O object)
throws OBException,
InstantiationException,
IllegalAccessException
Description copied from class: AbstractBucketIndex
Returns the bucket information for the given object.
Specified by:
getBucket in class AbstractBucketIndex<O extends OBByte,BucketObjectByte<O extends OBByte>,OBQueryByte<O extends OBByte>,SleekBucketByte<O extends OBByte>>
Parameters:
object - The object that will be calculated
Returns:
The bucket information for the given object.
Throws:
IllegalAccessException
OBException
InstantiationException
getProjection
public SketchProjection getProjection(BucketObjectByte<O> bucket)
throws OBException
Compute the sketch for the given object.
Specified by:
getProjection in class AbstractBucketSorter<O extends OBByte,BucketObjectByte<O extends OBByte>,OBQueryByte<O extends OBByte>,SleekBucketByte<O extends OBByte>,SketchProjection,CBitVector>
Throws:
OBException
instantiateBucketContainer
protected SleekBucketByte<O> instantiateBucketContainer(byte[] data,
byte[] address)
throws InstantiationException,
IllegalAccessException,
OBException
Description copied from class: AbstractBucketIndex
Get a bucket container from the given data.
Specified by:
instantiateBucketContainer in class AbstractBucketIndex<O extends OBByte,BucketObjectByte<O extends OBByte>,OBQueryByte<O extends OBByte>,SleekBucketByte<O extends OBByte>>
Parameters:
data - The data from which the bucket container will be loaded.
Returns:
A new bucket container ready to be used.
Throws:
InstantiationException
IllegalAccessException
OBException
primitiveDataTypeSize
protected int primitiveDataTypeSize()
Description copied from class: AbstractBucketIndex
Return the size in bytes of the underlying primitive datatype.
Specified by:
primitiveDataTypeSize in class AbstractBucketIndex<O extends OBByte,BucketObjectByte<O extends OBByte>,OBQueryByte<O extends OBByte>,SleekBucketByte<O extends OBByte>>
Returns:
The size in bytes of the underlying primitive datatype.
searchOB
public void searchOB(O object,
byte r,
OBPriorityQueueByte<O> result)
throws NotFrozenException,
InstantiationException,
IllegalIdException,
IllegalAccessException,
OutOfRangeException,
OBException
Description copied from interface: IndexByte
Searches the Index and returns OBResult (ID, OB and distance) elements that are closer to "object". The closest element is at the beginning of the list and the farthest element is at the end of the list. You can control the size of the resulting set when you create the object "result". This becomes the k parameter of the search.
Specified by:
searchOB in interface IndexByte<O extends OBByte>
Parameters:
object - The object that has to be searched
r - The range to be used
result - A priority queue that will hold the result
Throws:
NotFrozenException - if the index has not been frozen.
InstantiationException - If there is a problem when instantiating objects O
IllegalIdException - This exception is left as a Debug flag. If you receive this exception please report the problem to: http://code.google.com/p/obsearch/issues/list
IllegalAccessException - If there is a problem when instantiating objects O
OutOfRangeException - If the distance of any object to any other object exceeds the range defined by the user.
OBException - User generated exception
getKQuery
protected AbstractOBQuery<O> getKQuery(O object,
int k)
throws OBException,
InstantiationException,
IllegalAccessException
Description copied from class: AbstractBucketSorter
Returns a k query for the given object.
Specified by:
getKQuery in class AbstractBucketSorter<O extends OBByte,BucketObjectByte<O extends OBByte>,OBQueryByte<O extends OBByte>,SleekBucketByte<O extends OBByte>,SketchProjection,CBitVector>
Parameters:
object - (query object)
k - the number of objects to accept in the query.
Returns:
A k query for the given object.
Throws:
OBException
InstantiationException
IllegalAccessException
searchOB
public void searchOB(O object,
byte r,
Filter<O> filter,
OBPriorityQueueByte<O> result)
throws NotFrozenException,
InstantiationException,
IllegalIdException,
IllegalAccessException,
OutOfRangeException,
OBException
Description copied from interface: IndexByte
Searches the Index and returns OBResult (ID, OB and distance) elements that are closer to "object". The closest element is at the beginning of the list and the farthest element is at the end of the list. You can control the size of the resulting set when you create the object "result". This becomes the k parameter of the search. The parameter "filter" is used to remove unwanted objects from the result (a select where clause). Users are responsible for implementing at least one filter that can be used with their O.
Specified by:
searchOB in interface IndexByte<O extends OBByte>
Parameters:
object - The object that has to be searched
r - The range to be used
filter - Used to remove unwanted objects from the result (a select where clause)
result - A priority queue that will hold the result
Throws:
NotFrozenException - if the index has not been frozen.
InstantiationException - If there is a problem when instantiating objects O
IllegalIdException - This exception is left as a Debug flag. If you receive this exception please report the problem to: http://code.google.com/p/obsearch/issues/list
IllegalAccessException - If there is a problem when instantiating objects O
OutOfRangeException - If the distance of any object to any other object exceeds the range defined by the user.
OBException - User generated exception
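The bounded result collection described above (a priority queue holding at most k of the closest matches) can be sketched in a few lines of Python. This is a conceptual illustration only, not OBSearch's actual implementation; the 1-D objects and absolute-difference metric are made up for the example.

```python
import heapq

def knn(objects, query, k, dist):
    """Keep the k closest objects to `query`, mimicking how a bounded
    priority queue such as OBPriorityQueueByte collects search results."""
    heap = []  # max-heap of (negated distance, object)
    for obj in objects:
        d = dist(query, obj)
        if len(heap) < k:
            heapq.heappush(heap, (-d, obj))
        elif d < -heap[0][0]:
            # closer than the current worst kept result: replace it
            heapq.heapreplace(heap, (-d, obj))
    # return (distance, object) pairs, closest first
    return sorted((-nd, obj) for nd, obj in heap)

result = knn([1, 5, 9, 12], query=6, k=2, dist=lambda a, b: abs(a - b))
# → [(1, 5), (3, 9)]
```

A `Filter` would simply be one extra predicate applied before an object is offered to the heap.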
knnGraph
public Iterator<List<OBQueryByte<O>>> knnGraph(int k,
byte r)
Performs a knn graph search
fullMatchLite
public byte[] fullMatchLite(O query,
boolean filterSame)
throws OBException,
IllegalAccessException,
InstantiationException
This method returns a list of all the distances of the query against the DB. This helps to calculate EP values in a cheaper way. results that are equal to the original object are added as Byte.MAX_VALUE
Specified by:
fullMatchLite in interface IndexByte<O extends OBByte>
Parameters:
query -
filterSame - if True we do not return objects o such that query.equals(o)
Returns:
Throws:
OBException
InstantiationException
IllegalAccessException
maxKEstimationAux
protected void maxKEstimationAux(O object)
throws OBException,
InstantiationException,
IllegalAccessException
Get the kMax closest objects. Count how many different bucket ids are there for each k and fill in accordingly the tables.
Specified by:
maxKEstimationAux in class AbstractBucketSorter<O extends OBByte,BucketObjectByte<O extends OBByte>,OBQueryByte<O extends OBByte>,SleekBucketByte<O extends OBByte>,SketchProjection,CBitVector>
Parameters:
object -
Throws:
OBException
InstantiationException
IllegalAccessException
distance
protected double distance(O a,
O b)
throws OBException
Overrides:
distance in class AbstractOBIndex<O extends OBByte>
Throws:
OBException
getCPSize
protected int getCPSize()
Description copied from class: AbstractBucketSorter
Return the compact representation size
Specified by:
getCPSize in class AbstractBucketSorter<O extends OBByte,BucketObjectByte<O extends OBByte>,OBQueryByte<O extends OBByte>,SleekBucketByte<O extends OBByte>,SketchProjection,CBitVector>
Returns:
getPInstance
protected Class<CBitVector> getPInstance()
Specified by:
getPInstance in class AbstractBucketSorter<O extends OBByte,BucketObjectByte<O extends OBByte>,OBQueryByte<O extends OBByte>,SleekBucketByte<O extends OBByte>,SketchProjection,CBitVector>
Copyright © 2007-2011 Arnoldo Jose Muller Molina. All Rights Reserved.
revised documentation of attr_(protected|accessible)
Revised wording and coherence between both docs, avoided the term "hacker" to refer to a malicious user, revised markup and structure.
• Loading branch information...
commit a49ebf6d7909c344d2fe570cb82c97fa271db03e 1 parent 82a1b93
@fxn fxn authored
Showing with 22 additions and 16 deletions.
1. +22 −16 activerecord/lib/active_record/base.rb
View
38 activerecord/lib/active_record/base.rb
@@ -860,9 +860,15 @@ def decrement_counter(counter_name, id)
end
- # Attributes named in this macro are protected from mass-assignment, such as <tt>new(attributes)</tt> and
- # <tt>attributes=(attributes)</tt>. Their assignment will simply be ignored. Instead, you can use the direct writer
- # methods to do assignment. This is meant to protect sensitive attributes from being overwritten by URL/form hackers. Example:
+ # Attributes named in this macro are protected from mass-assignment,
+ # such as <tt>new(attributes)</tt>,
+ # <tt>update_attributes(attributes)</tt>, or
+ # <tt>attributes=(attributes)</tt>.
+ #
+ # Mass-assignment to these attributes will simply be ignored, to assign
+ # to them you can use direct writer methods. This is meant to protect
+ # sensitive attributes from being overwritten by malicious users
+ # tampering with URLs or forms.
#
# class Customer < ActiveRecord::Base
# attr_protected :credit_rating
@@ -876,7 +882,8 @@ def decrement_counter(counter_name, id)
# customer.credit_rating = "Average"
# customer.credit_rating # => "Average"
#
- # To start from an all-closed default and enable attributes as needed, have a look at attr_accessible.
+ # To start from an all-closed default and enable attributes as needed,
+ # have a look at +attr_accessible+.
def attr_protected(*attributes)
write_inheritable_attribute("attr_protected", Set.new(attributes.map(&:to_s)) + (protected_attributes || []))
end
@@ -886,19 +893,18 @@ def protected_attributes # :nodoc:
read_inheritable_attribute("attr_protected")
end
- # Similar to the attr_protected macro, this protects attributes of your model from mass-assignment,
- # such as <tt>new(attributes)</tt> and <tt>attributes=(attributes)</tt>
- # however, it does it in the opposite way. This locks all attributes and only allows access to the
- # attributes specified. Assignment to attributes not in this list will be ignored and need to be set
- # using the direct writer methods instead. This is meant to protect sensitive attributes from being
- # overwritten by URL/form hackers. If you'd rather start from an all-open default and restrict
- # attributes as needed, have a look at attr_protected.
- #
- # ==== Attributes
+ # Specifies a white list of model attributes that can be set via
+ # mass-assignment, such as <tt>new(attributes)</tt>,
+ # <tt>update_attributes(attributes)</tt>, or
+ # <tt>attributes=(attributes)</tt>
#
- # * <tt>*attributes</tt> A comma separated list of symbols that represent columns _not_ to be protected
- #
- # ==== Examples
+ # This is the opposite of the +attr_protected+ macro: Mass-assignment
+ # will only set attributes in this list, to assign to the rest of
+ # attributes you can use direct writer methods. This is meant to protect
+ # sensitive attributes from being overwritten by malicious users
+ # tampering with URLs or forms. If you'd rather start from an all-open
+ # default and restrict attributes as needed, have a look at
+ # +attr_protected+.
#
# class Customer < ActiveRecord::Base
# attr_accessible :name, :nickname
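Outside of Rails, the whitelist semantics these revised docs describe can be sketched as a plain filtering step. The following Python sketch is illustrative only (the attribute names are hypothetical and this is not ActiveRecord's implementation): only whitelisted keys are mass-assigned, everything else is silently ignored.

```python
def mass_assign(record, attributes, accessible):
    """Apply only whitelisted attributes, silently ignoring the rest,
    in the spirit of attr_accessible."""
    for name, value in attributes.items():
        if name in accessible:
            record[name] = value
    return record

customer = mass_assign({}, {"name": "Ann", "credit_rating": "Excellent"},
                       accessible={"name", "nickname"})
# → {'name': 'Ann'}  (credit_rating was ignored, as a protected attribute)
```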
Asynchronous Microservice Architecture: RocketMQ
We all know that turning a microservice architecture into an asynchronous architecture only requires adding an MQ, and there are many open-source MQ frameworks on the market. Which one is the right choice?
1
What is MQ? How does it work?
MQ is short for Message Queue, which is a style of inter-service communication; a message is, in essence, a piece of data. Because an MQ processes and stores a project's messages in one central place, its main functions are decoupling, concurrency, and peak shaving.
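The idea of an MQ as a centrally-stored message structure can be illustrated with an in-process queue. This is a stand-in only: a real MQ such as RocketMQ is a networked, persistent, clustered service, but the contract is the same, producers put, consumers get, and neither knows about the other.

```python
import queue

mq = queue.Queue()          # the "broker": producer and consumer share only this

def produce(msg):
    mq.put(msg)             # the producer neither knows nor cares who consumes

def consume():
    return mq.get()         # the consumer neither knows nor cares who produced

produce("order-created")
produce("order-paid")
assert consume() == "order-created"   # FIFO delivery
assert consume() == "order-paid"
```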
1. Decoupling
Message producers and consumers do not care whether the other side exists; with the MQ middleware in between, the system as a whole becomes decoupled.
If services communicate via RPC, then when one service talks to hundreds of others and its communication interface changes, the interfaces of all those hundreds of services have to change with it, which is a real headache.
With an MQ, producers and consumers can each change on their own, and their changes do not affect other services, which achieves the goal of decoupling. Why decouple? Simply put, it is convenient and cuts out unnecessary work.
2. Concurrency
An MQ has a producer cluster and a consumer cluster, so even with hundreds of millions of client users everything runs in parallel, which greatly improves response speed.
3. Peak shaving
Because an MQ can store a very large volume of messages, it can buffer a flood of incoming requests first and then work through them concurrently at its own pace.
With RPC, every request is a direct interface call; when the request volume is huge, those calls, which are expensive, will inevitably crush the server.
The point of peak shaving is a better user experience and a system that stays stable while absorbing large bursts of requests.
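The peak-shaving idea above can be shown in miniature: a burst of requests is absorbed into a buffer and drained at a steady, server-friendly rate. The tick-based model here is a simplification of what a real broker plus consumer cluster does.

```python
from collections import deque

def shave(requests, capacity_per_tick):
    """Buffer a burst and drain it at a fixed rate per tick."""
    buf = deque(requests)            # the queue absorbs the spike
    ticks = []
    while buf:
        batch = [buf.popleft() for _ in range(min(capacity_per_tick, len(buf)))]
        ticks.append(batch)          # what the consumers handle this tick
    return ticks

ticks = shave(range(7), capacity_per_tick=3)
# → [[0, 1, 2], [3, 4, 5], [6]]
```

The burst of 7 requests never hits the workers all at once; they see at most 3 per tick.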
2
Which MQs are on the market?
A closer look at RocketMQ
There are many MQs on the market today, chiefly RabbitMQ, ActiveMQ, ZeroMQ, RocketMQ, Kafka, and so on, all of them open-source products. RabbitMQ used to be widely recommended and is an excellent product, but we will not go into it here. Kafka is the heavyweight for raw throughput, and we will not cover it here either.
Our focus is RocketMQ, a distributed messaging middleware open-sourced by Alibaba in 2012. It was later donated to the Apache Software Foundation and became a top-level Apache project on September 25, 2017.
As a home-grown middleware that has been through the baptism of several of Alibaba's Double 11 "mega events" with stable and excellent performance, it has in recent years been adopted by more and more Chinese companies for its high performance, low latency, and high reliability.
Feature overview diagram
As the diagram shows, RocketMQ supports scheduled and delayed messages, a capability RabbitMQ lacks.
RocketMQ's physical architecture
As can be seen here, RocketMQ involves four clusters: Producer, Name Server, Consumer, and Broker.
Producer cluster:
The producer cluster is responsible for generating messages and sending the messages produced by the business application systems on to consumers. RocketMQ provides three ways to send a message: synchronous, asynchronous, and one-way.
I. Normal messages
1. Synchronous send (diagram)
Key code for a synchronous send
try {
    SendResult sendResult = producer.send(msg);
    // Synchronous send: as long as no exception is thrown, the send succeeded
    if (sendResult != null) {
        System.out.println(new Date() + " Send mq message success. Topic is:" + msg.getTopic() + " msgId is: " + sendResult.getMessageId());
    }
} catch (Exception e) {
    System.out.println(new Date() + " Send mq message failed. Topic is:" + msg.getTopic());
    e.printStackTrace();
}
2. Asynchronous send (diagram)
Key code for an asynchronous send
producer.sendAsync(msg, new SendCallback() {
    @Override
    public void onSuccess(final SendResult sendResult) {
        // The message was sent successfully
        System.out.println("send message success. topic=" + sendResult.getTopic() + ", msgId=" + sendResult.getMessageId());
    }
    @Override
    public void onException(OnExceptionContext context) {
        System.out.println("send message failed. topic=" + context.getTopic() + ", msgId=" + context.getMessageId());
    }
});
3. One-way send (diagram)
A one-way send only transmits and does not wait for a reply, so it is the fastest mode, generally at the microsecond level, but messages may be lost.
Key code for a one-way send
producer.sendOneway(msg);
For the complete code of all three send modes, see the documentation: https://help.aliyun.com/document_detail/29547.html?spm=a2c4g.11186623.6.566.7e49793fuueSlB
II. Scheduled and delayed messages
Key code for sending a scheduled message
try {
    // Scheduled message, in milliseconds (ms): delivered at the specified timestamp (after the current time),
    // e.g. delivered at 2016-03-07 16:21:00. If set to a timestamp earlier than the current time,
    // the message is delivered to consumers immediately.
    long timeStamp = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse("2016-03-07 16:21:00").getTime();
    msg.setStartDeliverTime(timeStamp);
    // Send the message; as long as no exception is thrown, the send succeeded
    SendResult sendResult = producer.send(msg);
    System.out.println("MessageId:" + sendResult.getMessageId());
} catch (Exception e) {
    // The send failed and needs retry handling: resend this message, or persist the data for compensation
    System.out.println(new Date() + " Send mq message failed. Topic is:" + msg.getTopic());
    e.printStackTrace();
}
Key code for sending a delayed message
try {
    // Delayed message, in milliseconds (ms): delivered after the specified delay from now,
    // e.g. delivered 3 seconds later
    long delayTime = System.currentTimeMillis() + 3000;
    // Set the time at which the message should be delivered
    msg.setStartDeliverTime(delayTime);
    SendResult sendResult = producer.send(msg);
    // Synchronous send: as long as no exception is thrown, the send succeeded
    if (sendResult != null) {
        System.out.println(new Date() + " Send mq message success. Topic is:" + msg.getTopic() + " msgId is: " + sendResult.getMessageId());
    }
} catch (Exception e) {
    // The send failed and needs retry handling: resend this message, or persist the data for compensation
    System.out.println(new Date() + " Send mq message failed. Topic is:" + msg.getTopic());
    e.printStackTrace();
}
Notes
1. The msg.setStartDeliverTime parameter of scheduled and delayed messages must be set to a timestamp (in milliseconds) after the current time. If it is set to a timestamp in the past, the message is delivered to consumers immediately.
2. The msg.setStartDeliverTime parameter can be set to any time within the next 40 days (in milliseconds); beyond 40 days the send will fail.
3. StartDeliverTime is the time at which the server starts delivering to the consumer. If the consumer currently has a message backlog, scheduled and delayed messages queue up behind the backlog and cannot be delivered strictly at the configured time.
4. Because of possible clock differences between client and server, the actual delivery time of a message may deviate from the delivery time the client set.
5. Even after the delivery time of a scheduled or delayed message is set, the message is still subject to the 3-day retention limit. For example, if a scheduled message is set to be consumable only after 5 days and has still not been consumed by day 5, it will be deleted on day 8.
6. Apart from Java, no other language supports delayed messages.
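The scheduled/delayed delivery described above boils down to holding each message until its deliver-at time has passed. A toy scheduler makes the mechanism concrete (assumption: times here are plain numbers rather than wall-clock milliseconds, and this is not RocketMQ's storage design):

```python
import heapq

class DelayQueue:
    """Messages become visible only once their deliver-at time has passed."""
    def __init__(self):
        self._heap = []                      # (deliver_at, msg), earliest first

    def put(self, deliver_at, msg):
        heapq.heappush(self._heap, (deliver_at, msg))

    def poll(self, now):
        """Return every message whose deliver-at time is due at `now`."""
        out = []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap)[1])
        return out

q = DelayQueue()
q.put(10, "late")
q.put(3, "early")
assert q.poll(now=5) == ["early"]            # only the due message is visible
assert q.poll(now=20) == ["late"]
```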
Message publishing diagram
III. Transactional messages
RocketMQ provides distributed transaction support similar to X/Open XA to guarantee eventual consistency between the sending side and the MQ message. In essence it handles the distributed transaction on the MQ side by means of a "half message".
Diagram
The flow is:
1. The sender sends a message to the RocketMQ server.
2. After persisting the message successfully, the server ACKs the sender to confirm the message was sent; at this point the message is a half message.
3. The sender starts executing its local transaction logic.
4. Based on the result of the local transaction, the sender submits a second confirmation (Commit or Rollback) to the server. On Commit, the server marks the half message as deliverable and the subscriber will eventually receive it; on Rollback, the server deletes the half message and the subscriber will never receive it.
5. In special cases such as a network outage or an application restart, if the second confirmation of step 4 never reaches the server, the server initiates a check-back for the message after a fixed interval.
6. On receiving the check-back, the sender must inspect the final result of the local transaction for the corresponding message.
7. Based on that inspected final state, the sender submits the second confirmation again, and the server handles the half message as in step 4.
Notes on RocketMQ's half-message mechanism:
1. As step 6 shows, the sender must provide a transaction check-back interface.
2. The sender's messages are not guaranteed to be idempotent; when the ack does not come back, duplicate messages may exist.
3. The consumer side must handle idempotency itself.
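The half-message state machine in the steps above can be sketched as follows. This is a pedagogical model of the commit/rollback visibility rule, not RocketMQ's API or storage format:

```python
class HalfMessageBroker:
    """A message stays invisible until the sender commits; rollback deletes it."""
    def __init__(self):
        self.half = {}          # msg_id -> message, invisible to consumers
        self.deliverable = []   # what subscribers may actually receive

    def send_half(self, msg_id, msg):
        self.half[msg_id] = msg             # step 2: persisted, but only half sent

    def commit(self, msg_id):
        self.deliverable.append(self.half.pop(msg_id))   # step 4, Commit

    def rollback(self, msg_id):
        self.half.pop(msg_id, None)         # step 4, Rollback: never delivered

broker = HalfMessageBroker()
broker.send_half(1, "debit done")
broker.send_half(2, "debit failed")
broker.commit(1)
broker.rollback(2)
assert broker.deliverable == ["debit done"]
assert broker.half == {}
```

The check-back of steps 5 to 7 would simply be the broker asking the sender which of `commit` or `rollback` to call for a lingering entry in `half`.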
Core code
final BusinessService businessService = new BusinessService(); // the local transaction
TransactionProducer producer = ONSFactory.createTransactionProducer(properties,
        new LocalTransactionCheckerImpl());
producer.start();
Message msg = new Message("Topic", "TagA", "Hello MQ transaction===".getBytes());
try {
    SendResult sendResult = producer.send(msg, new LocalTransactionExecuter() {
        @Override
        public TransactionStatus execute(Message msg, Object arg) {
            // Message ID (bodies may be identical while message IDs differ;
            // the current message ID cannot be queried in the console)
            String msgId = msg.getMsgID();
            // crc32 of the message body; something else such as MD5 could also be used
            long crc32Id = HashUtil.crc32Code(msg.getBody());
            // The message ID and crc32Id are mainly used to prevent duplicate messages.
            // If the business itself is idempotent this can be ignored; otherwise use
            // msgId or crc32Id for idempotency. If messages must absolutely never repeat,
            // the recommended practice is to apply crc32 or MD5 to the message body.
            Object businessServiceArgs = new Object();
            TransactionStatus transactionStatus = TransactionStatus.Unknow;
            try {
                boolean isCommit = businessService.execbusinessService(businessServiceArgs);
                if (isCommit) {
                    // Local transaction succeeded: commit the message
                    transactionStatus = TransactionStatus.CommitTransaction;
                } else {
                    // Local transaction failed: roll the message back
                    transactionStatus = TransactionStatus.RollbackTransaction;
                }
            } catch (Exception e) {
                log.error("Message Id:{}", msgId, e);
            }
            System.out.println(msg.getMsgID());
            log.warn("Message Id:{}transactionStatus:{}", msgId, transactionStatus.name());
            return transactionStatus;
        }
    }, null);
} catch (Exception e) {
    // The send failed and needs retry handling: resend this message, or persist the data for compensation
    System.out.println(new Date() + " Send mq message failed. Topic is:" + msg.getTopic());
    e.printStackTrace();
}
For the complete code, see the documentation: https://help.aliyun.com/document_detail/29548.html?spm=a2c4g.11186623.6.570.5d5738a49FJl1t
Overall message-publishing diagram
Producers are completely stateless and can be deployed as a cluster.
Name Server cluster:
A NameServer is an almost stateless node that can be deployed as a cluster, with no information synchronized between nodes; the NameServer behaves much like a service registry.
Reportedly Alibaba's earlier NameServer was built on ZooKeeper; perhaps because ZooKeeper could not meet the demands of large-scale concurrency, the later NameServer was developed in-house at Alibaba.
The NameServer is essentially a routing table: it manages discovery and registration between Producers and Consumers.
Broker cluster:
Broker deployment is comparatively complex. Brokers are divided into Masters and Slaves: one Master can correspond to multiple Slaves, but a Slave can correspond to only one Master. Masters and Slaves are associated by specifying the same BrokerName and distinguished by BrokerId: a BrokerId of 0 denotes a Master, non-zero denotes a Slave. Multiple Masters can be deployed. Each Broker maintains long-lived connections to all nodes of the NameServer cluster and periodically registers its topic information with every NameServer.
Consumer cluster:
Subscription modes
RocketMQ supports the following two subscription modes:
Clustering: all Consumers identified by the same Group ID share the messages evenly. For example, if a Topic has 9 messages and a Group ID has 3 Consumer instances, then in clustering mode each instance takes an even share and consumes only 3 of the messages.
// Clustering subscription (the default if nothing is set)
properties.put(PropertyKeyConst.MessageModel, PropertyValueConst.CLUSTERING);
Broadcasting: every Consumer identified by the same Group ID consumes each message once. For example, if a Topic has 9 messages and a Group ID has 3 Consumer instances, then in broadcasting mode every instance consumes all 9 messages.
// Broadcasting subscription
properties.put(PropertyKeyConst.MessageModel, PropertyValueConst.BROADCASTING);
Key code for subscribing:
Consumer consumer = ONSFactory.createConsumer(properties);
consumer.subscribe("TopicTestMQ", "TagA||TagB", new MessageListener() { // subscribe to multiple tags
    public Action consume(Message message, ConsumeContext context) {
        System.out.println("Receive: " + message);
        return Action.CommitMessage;
    }
});
// Subscribe to another Topic
consumer.subscribe("TopicTestMQ-Other", "*", new MessageListener() { // subscribe to all tags
    public Action consume(Message message, ConsumeContext context) {
        System.out.println("Receive: " + message);
        return Action.CommitMessage;
    }
});
consumer.start();
Notes:
The consumer side must handle idempotency. Essentially no MQ does idempotency for you; the business side has to handle it, because doing it on the MQ side would add complexity to the MQ and seriously hurt its performance.
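The business-side deduplication the note above calls for can be as simple as remembering message IDs that were already handled. A sketch (in production the "seen" set would be persisted, e.g. in a database, so redeliveries across restarts are also caught):

```python
def make_idempotent(handler):
    """Wrap a consumer so redelivered messages (same msg_id) are handled once."""
    seen = set()
    def handle(msg_id, body):
        if msg_id in seen:
            return False          # duplicate delivery: skip
        seen.add(msg_id)
        handler(body)
        return True
    return handle

processed = []
handle = make_idempotent(processed.append)
handle("m-1", "pay $10")
handle("m-1", "pay $10")          # broker redelivered the same message
assert processed == ["pay $10"]   # the payment ran only once
```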
Message send/receive model
Creating primary and sub-accounts
Primary and sub-accounts exist because of permission management. Below is the flow for creating a primary account.
Detailed steps: https://help.aliyun.com/document_detail/34411.html?spm=a2c4g.11186623.6.555.38c57f91JXUK7o
Sub-account flow
Detailed steps: https://help.aliyun.com/document_detail/96402.html?spm=a2c4g.11186623.6.556.60194fedfSkxIB
3
MQ is a very important part
of microservice architecture
The birth of MQ offered a way to shift from synchronous to asynchronous architectural thinking, and it provides a sound approach to building stability into large-scale, high-concurrency business scenarios.
Martin Fowler stressed that the first rule of distributed computing is: don't distribute. That sounds quite reasonable, yet for enterprise application systems, as long as the system as a whole keeps evolving and multiple subsystems coexist, the rule is bound to be broken.
Fowler's rule partly expresses the hope that designers will treat distributed calls with caution, and partly reflects the inherent weaknesses of distributed systems themselves.
So microservices are no panacea; the architecture that fits is the best architecture.
What Is VPN?
Applies To: Windows Server 2008
Virtual private networks (VPNs) are point-to-point connections across a private or public network, such as the Internet. A VPN client uses special TCP/IP-based protocols, called tunneling protocols, to make a virtual call to a virtual port on a VPN server. In a typical VPN deployment, a client initiates a virtual point-to-point connection to a remote access server over the Internet. The remote access server answers the call, authenticates the caller, and transfers data between the VPN client and the organization’s private network.
To emulate a point-to-point link, data is encapsulated, or wrapped, with a header. The header provides routing information that enables the data to traverse the shared or public network to reach its endpoint. To emulate a private link, the data being sent is encrypted for confidentiality. Packets that are intercepted on the shared or public network are indecipherable without the encryption keys. The link in which the private data is encapsulated and encrypted is known as a VPN connection.
A VPN Connection
A VPN Connection
There are two types of VPN connections:
• Remote access VPN
• Site-to-site VPN
Remote access VPN
Remote access VPN connections enable users working at home or on the road to access a server on a private network using the infrastructure provided by a public network, such as the Internet. From the user’s perspective, the VPN is a point-to-point connection between the computer (the VPN client) and an organization’s server. The exact infrastructure of the shared or public network is irrelevant because it appears logically as if the data is sent over a dedicated private link.
Site-to-site VPN
Site-to-site VPN connections (also known as router-to-router VPN connections) enable organizations to have routed connections between separate offices or with other organizations over a public network while helping to maintain secure communications. A routed VPN connection across the Internet logically operates as a dedicated wide area network (WAN) link. When networks are connected over the Internet, as shown in the following figure, a router forwards packets to another router across a VPN connection. To the routers, the VPN connection operates as a data-link layer link.
A site-to-site VPN connection connects two portions of a private network. The VPN server provides a routed connection to the network to which the VPN server is attached. The calling router (the VPN client) authenticates itself to the answering router (the VPN server), and, for mutual authentication, the answering router authenticates itself to the calling router. In a site-to site VPN connection, the packets sent from either router across the VPN connection typically do not originate at the routers.
VPN Connecting Two Remote Sites Across the Internet
VPN Connecting Remote Sites Across the Internet
Properties of VPN connections
VPN connections that use PPTP, L2TP/IPsec, and SSTP have the following properties:
• Encapsulation
• Authentication
• Data encryption
Encapsulation
With VPN technology, private data is encapsulated with a header that contains routing information that allows the data to traverse the transit network. For examples of encapsulation, see VPN Tunneling Protocols.
Authentication
Authentication for VPN connections takes three different forms:
1. User-level authentication by using PPP authentication
To establish the VPN connection, the VPN server authenticates the VPN client that is attempting the connection by using a Point-to-Point Protocol (PPP) user-level authentication method and verifies that the VPN client has the appropriate authorization. If mutual authentication is used, the VPN client also authenticates the VPN server, which provides protection against computers that are masquerading as VPN servers.
2. Computer-level authentication by using Internet Key Exchange (IKE)
To establish an Internet Protocol security (IPsec) security association, the VPN client and the VPN server use the IKE protocol to exchange either computer certificates or a preshared key. In either case, the VPN client and server authenticate each other at the computer level. Computer certificate authentication is highly recommended because it is a much stronger authentication method. Computer-level authentication is only performed for L2TP/IPsec connections.
3. Data origin authentication and data integrity
To verify that the data sent on the VPN connection originated at the other end of the connection and was not modified in transit, the data contains a cryptographic checksum based on an encryption key known only to the sender and the receiver. Data origin authentication and data integrity are only available for L2TP/IPsec connections.
Data encryption
To ensure confidentiality of the data as it traverses the shared or public transit network, the data is encrypted by the sender and decrypted by the receiver. The encryption and decryption processes depend on both the sender and the receiver using a common encryption key.
Intercepted packets sent along the VPN connection in the transit network are unintelligible to anyone who does not have the common encryption key. The length of the encryption key is an important security parameter. You can use computational techniques to determine the encryption key. However, such techniques require more computing power and computational time as the encryption keys get larger. Therefore, it is important to use the largest possible key size to ensure data confidentiality.
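A toy illustration of the shared-key property described above: the same key must be used on both ends, and an interceptor without it recovers nothing useful. This is purely pedagogical; real VPN tunnels use vetted ciphers such as AES in authenticated modes, never an XOR keystream like this one.

```python
import hashlib
from itertools import cycle

def xor_stream(data, key):
    """Toy symmetric cipher: the same key both encrypts and decrypts."""
    keystream = hashlib.sha256(key).digest()     # derive a fixed keystream
    return bytes(b ^ k for b, k in zip(data, cycle(keystream)))

packet = b"private payload"
cipher = xor_stream(packet, b"common key")
assert cipher != packet                               # unintelligible in transit
assert xor_stream(cipher, b"common key") == packet    # same key restores it
assert xor_stream(cipher, b"wrong key") != packet     # interceptor gets garbage
```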
© 2014 Microsoft. All rights reserved.
[Q]: failed to decrypt if run from cron.
Gerik [email protected]
Tue Aug 21 05:09:01 2001
hi,
i have a perl script which successfully decrypt a file except when
executed from cron where i would only get a blank file.
i'm using GnuPG::Interface. I'm puzzled of where to fix.
the script is as below.
any input is greatly appreciated.
--------- my perl script, start ---------
#!/usr/local/bin/perl
use IO::Handle;
use IO::File;
use GnuPG::Interface;
$datadir = "/data";
$output_file = "./of";
$input_file = "./if";
$passphrase = "MyPassword";
chdir "$datadir" or die "Can't cd to data dir : $datadir \n";
&decrypt_input_file;
sub decrypt_input_file {
$input = IO::Handle->new();
$output = IO::File->new( ">$output_file");
$errfile = IO::File->new( ">>$TEMP_STDERR");
$passphrase_fh = IO::Handle->new();
$handles = GnuPG::Handles->new(stdin=>$input, stdout=>$output,
stderr=>$errfile, passphrase=>$passphrase_fh);
$handles->options( 'stdout' )->{direct} = 1;
$handles->options( 'stderr' )->{direct} = 1;
$gnupg = GnuPG::Interface->new();
$cipher_file = IO::File->new("<$input_file");
$pid = $gnupg->decrypt(handles=>$handles);
print $passphrase_fh $passphrase;
close $passphrase_fh;
print $input $_ while <$cipher_file>;
close $input;
close $cipher_file;
close $errfile;
waitpid $pid, 0;
close $output;
return (1);
}
Bug 655192 - Declare variables as late as possible in js_InitXMLClass. r=jorendorff
author: Jeff Walden <[email protected]>
date: Wed, 04 May 2011 16:54:24 -0400
changeset 71885 09326129c5c3a3b17a3d462597168562ff4e568f
parent 71884 b81cf9135dea1082e0917f2c48f44623f862d0ad
child 71886 d3daeb8ebbd678195d21065ddb49d16c055767dc
push id: unknown
push user: unknown
push date: unknown
reviewers: jorendorff
bugs: 655192
milestone: 7.0a1
Bug 655192 - Declare variables as late as possible in js_InitXMLClass. r=jorendorff
js/src/jsxml.cpp
--- a/js/src/jsxml.cpp
+++ b/js/src/jsxml.cpp
@@ -7137,66 +7137,61 @@ js_InitQNameClass(JSContext *cx, JSObjec
{
return js_InitClass(cx, obj, NULL, &js_QNameClass, QName, 2,
NULL, qname_methods, NULL, NULL);
}
JSObject *
js_InitXMLClass(JSContext *cx, JSObject *obj)
{
- JSObject *proto, *pobj;
- JSFunction *fun;
- JSXML *xml;
- JSProperty *prop;
- Shape *shape;
- jsval cval, vp[3];
-
/* Define the isXMLName function. */
if (!JS_DefineFunction(cx, obj, js_isXMLName_str, xml_isXMLName, 1, 0))
return NULL;
/* Define the XML class constructor and prototype. */
- proto = js_InitClass(cx, obj, NULL, &js_XMLClass, XML, 1,
- NULL, xml_methods,
- xml_static_props, xml_static_methods);
+ JSObject *proto = js_InitClass(cx, obj, NULL, &js_XMLClass, XML, 1,
+ NULL, xml_methods, xml_static_props, xml_static_methods);
if (!proto)
return NULL;
- xml = js_NewXML(cx, JSXML_CLASS_TEXT);
+ JSXML *xml = js_NewXML(cx, JSXML_CLASS_TEXT);
if (!xml)
return NULL;
proto->setPrivate(xml);
xml->object = proto;
METER(xml_stats.xmlobj);
/*
* Prepare to set default settings on the XML constructor we just made.
* NB: We can't use JS_GetConstructor, because it calls
* JSObject::getProperty, which is xml_getProperty, which creates a new
* XMLList every time! We must instead call js_LookupProperty directly.
*/
+ JSObject *pobj;
+ JSProperty *prop;
if (!js_LookupProperty(cx, proto,
ATOM_TO_JSID(cx->runtime->atomState.constructorAtom),
&pobj, &prop)) {
return NULL;
}
JS_ASSERT(prop);
- shape = (Shape *) prop;
- cval = Jsvalify(pobj->nativeGetSlot(shape->slot));
+ Shape *shape = (Shape *) prop;
+ jsval cval = Jsvalify(pobj->nativeGetSlot(shape->slot));
JS_ASSERT(VALUE_IS_FUNCTION(cx, cval));
/* Set default settings. */
+ jsval vp[3];
vp[0] = JSVAL_NULL;
vp[1] = cval;
vp[2] = JSVAL_VOID;
if (!xml_setSettings(cx, 1, vp))
return NULL;
/* Define the XMLList function and give it the same prototype as XML. */
- fun = JS_DefineFunction(cx, obj, js_XMLList_str, XMLList, 1, JSFUN_CONSTRUCTOR);
+ JSFunction *fun = JS_DefineFunction(cx, obj, js_XMLList_str, XMLList, 1, JSFUN_CONSTRUCTOR);
if (!fun)
return NULL;
if (!js_SetClassPrototype(cx, FUN_OBJECT(fun), proto,
JSPROP_READONLY | JSPROP_PERMANENT)) {
return NULL;
}
return proto;
}
9
I have the following documentation:
mc/keymap is a variable defined in `multiple-cursors-core.el'.
Its value is (keymap
(67108903 . mc-hide-unmatched-lines-mode)
(27 keymap
(118 . mc/cycle-backward))
(22 . mc/cycle-forward)
(return . multiple-cursors-mode)
(7 . mc/keyboard-quit))
Documentation:
Keymap while multiple cursors are active.
Main goal of the keymap is to rebind C-g and <return> to conclude
multiple cursors editing.
[back]
I know that C-g and C-v are 7 and 22, respectively, but I've no idea what any of the others are. Is there a function I can use to do this?
(equal (??? (kbd x)) x) => t
14
help-key-description is used to display a humanly-readable key in the documentation when you invoke describe-key (C-hk).
(help-key-description [22] nil) ;; --> "C-v"
(help-key-description [67108903] nil) ;; --> "C-'"
or
(string=
(help-key-description (kbd "C-g") nil)
"C-g")
;; --> t
19
I often have cause to typeset things like this in TeX:
enter image description here
When reading expressions like the one above, it's very difficult to tell which brackets match each other. I'd like to be able to define variants of '(' and ')' that were typeset differently depending on the nesting level, to make it easier to match up the parentheses.
I can envisage doing this by introducing commands that increment/decrement some counter (to track the nesting level), and then use the current value of the counter to select the appropriate typesetting mechanism. But I have the impression that using a counter in that way is not very TeXish. What is the best way to do this?
It's not central to the question, but the code that generated the excerpt above is:
\documentclass[a4paper]{article}
\usepackage{libertine}
\begin{document}
$\textrm{subset}(\textrm{applyfn}(\textrm{inverse}(\mathbf{f}),\textrm{union2}(\mathbf{A},\mathbf{B})),\textrm{applyfn}(\textrm{union2}(\textrm{applyfn}(\textrm{inverse}(\mathbf{f}),\mathbf{A}),\textrm{inverse}(\mathbf{f})),\mathbf{B}))$
\end{document}
A solution that involved replacing ( and ) in that with commands would be fine.
Incidentally, editors sometimes have a facility called 'rainbow parentheses' which achieves the same effect; cf.
enter image description here
I don't actually want to use colour, because it's too garish (and I don't want to be forced to use a colour printer), but the desired effect is essentially the same. Suggestions on subtler alternatives to colour would be very welcome.
Edit: greyscale (using Ryan Reich's method) is pretty but ineffective: enter image description here
underset numbers (a little distracting): enter image description here
underlines (easy matching but distracting): enter image description here
• Can you please tell us something about the samples, where they come from, what the syntax uses/allows...? I came up with using different sized parenthesis, indenting (would really blow up your code size wise i asume), subscripting and replacing the parentheses with different glyphs (braces, brackets, \langle) so far. The replacement seems problematic if your language already assigns meaning to the new symbols or the usable ones are too far of to be recognized as paranthesis like objects. – Max Dec 30 '12 at 22:02
• 4
For readability I suggest prettyprinting with proper indenting rather than different grouping characters. Many code editors will do this automatically. Perhaps there are TeX macros as well. See tug.org/TUGboat/tb15-3/tb44doumont.pdf and www.tug.org/TUGboat/tb19-3/tb60wolin.pdf (although what you need is more like a LISP prettyprinter). – Ethan Bolker Dec 30 '12 at 22:08
• @Max: the TeX I gave above was generated by a program that takes bits of mathematical language and translates them into first order logic. I did consider alternating between square and round brackets, but I was worried that it would distract readers more than it helped. – Mohan Dec 30 '12 at 22:13
• well let's consider dem one by one then. I guess the nesting level is too deep for glyph size to make sense, that already is problematic in normal math code. Different glyphs and indenting would be confusing to big. Leaves the sub/superscripting. How about different (50 :P) shades of grey, but i guess you'd have to apply that all of the enclosed symbols of the same indentation level, so the eye gets a chance to pick those subtleties up. – Max Dec 30 '12 at 22:20
• @Max Shades of grey was the first thing I was going to try! – Mohan Dec 30 '12 at 22:25
16
Using a counter in this way is actually very TeXish. I (like your commenters) don't know exactly what scheme will work for you, but here is how to implement something like what you want. Just change what \countlparen and \countrparen do in order to suit your own needs. I admit that as it stands it looks rather ugly. This is a very simple solution: it counts nesting levels but doesn't distinguish braces at a given level, so you still have to do some parsing yourself.
\documentclass{standalone}
\newcounter{parens}
\def\countlparen{%
\addtocounter{parens}{1}\lparen\ensuremath{_{\the\value{parens}}}%
}
\def\countrparen{%
\rparen\ensuremath{_{\the\value{parens}}}\addtocounter{parens}{-1}%
}
\let\lparen(
\let\rparen)
\begingroup
\catcode`(\active
\catcode`)\active
\gdef\countparens{%
\let(\countlparen
\let)\countrparen
}
\endgroup
\newenvironment{nested parentheses}
{%
\catcode`(\active
\catcode`)\active
\countparens
\setcounter{parens}{0}%
}
{}
\begin{document}
\begin{nested parentheses}
$f(g(x)h(k(x)))$
\end{nested parentheses}
f(g(x))
\begin{nested parentheses}
f(g(x)h(k(x)))
\end{nested parentheses}
\end{document}
enter image description here
• 5
You can have spaces in environment names!? – Seamus Dec 31 '12 at 11:33
• @Seamus: I left that in there just to get that reaction! – Ryan Reich Dec 31 '12 at 15:19
Profile script
Here is a script I have been working on to sweep a profile along a path. It currently has some problems/limitations but is basically in working order. It will probably be a while until I have time to develop it further.
#####################################################
# PROFILE.py - (c) Neil McAllister April 2003 #
#---------------------------------------------------#
# To use select first the path and then the #
# profile to be swept along the path, then #
# run script with ALT-P. #
# #
# The meshes used have certain restrictions: #
# - they must be planar (i.e. flat) #
# - they must have no faces - only edges #
# - they must make a single path (open or closed)#
# with no branches #
# #
# Currently the profile is assumed to be on the #
# XY plane. #
# #
#####################################################
print "--------------------------------------"
print " Profile script - (c) Neil McAllister"
print "--------------------------------------"
import Blender
from Blender import *
from Blender.Draw import *
from math import *
#####################################################
# Define abort function #
#####################################################
def abort():
print "ABORTING!"
raise Exception, "Error in geometry!"
#####################################################
# Main procedure #
#####################################################
if len(Object.GetSelected())!=2:
    print "Script requires exactly 2 meshes selected"
    abort()
pathobj=Object.GetSelected()[0]
profileobj=Object.GetSelected()[1]
print "Path Object: "+pathobj.name
print "Profile Object: "+profileobj.name
print
######################################################
print "Checking path object..."
path=NMesh.GetRawFromObject(pathobj.name)
pathverts=[]
pathfaces=[]
#####################################################
print "-> Calculating normal...",
x0,y0,z0=path.verts[0].co
x1,y1,z1=path.verts[1].co
x2,y2,z2=path.verts[2].co
pathNx=(y1-y0)*(z2-z0)-(z1-z0)*(y2-y0)
pathNy=(z1-z0)*(x2-x0)-(x1-x0)*(z2-z0)
pathNz=(x1-x0)*(y2-y0)-(y1-y0)*(x2-x0)
Nl = sqrt(pathNx*pathNx+pathNy*pathNy+pathNz*pathNz)
pathNx /= Nl
pathNy /= Nl
pathNz /= Nl
Nx=pathNx
Ny=pathNy
Nz=pathNz
print "DONE"
#####################################################
print "-> Checking if path is planar...",
planar=1
for a in range(2,len(path.verts)):
Vx,Vy,Vz=path.verts[a].co
xnv=abs((y1-y0)*(Vz-z0)-(z1-z0)*(Vy-y0))
ynv=abs((z1-z0)*(Vx-x0)-(x1-x0)*(Vz-z0))
znv=abs((x1-x0)*(Vy-y0)-(y1-y0)*(Vx-x0))
lnv = sqrt(xnv*xnv+ynv*ynv+znv*znv)
if lnv>0:
xnv /=lnv
ynv /= lnv
znv /= lnv
eps=.000001
if (abs(xnv-pathNx)>=eps)&(abs(ynv-pathNy)>=eps)&(abs(znv-pathNz)>=eps):
planar=0
if planar==0:
print "Mesh non-planar"
abort()
else:
print "OK"
#####################################################
print "-> Checking faces...",
onesided=1
for f in path.faces:
if len(f.v)>2:
onesided=0
if onesided==0:
print "Multi-sided faces found"
abort()
else:
print "OK"
print "Path OK"
print
#####################################################
print "Building path data..."
print "-> Building face and vertex list...",
for f in path.faces:
indices=[0,0]
for a in [0,1]:
v=f.v[a]
exists=0
for b in range(0,len(pathverts)):
v2=pathverts[b]
if (v[0]==v2[0])&(v[1]==v2[1])&(v[2]==v2[2]):
exists=1
indices[a]=b
if exists==0:
pathverts.append([v[0],v[1],v[2]])
indices[a]=len(pathverts)-1
pathfaces.append(indices)
print "DONE"
#####################################################
print "-> Building chain...",
pathchain=[]
f=pathfaces.pop(0)
pathchain.append(f[0])
pathchain.append(f[1])
while len(pathfaces)>0:
for a in range(0,len(pathfaces)):
f=pathfaces[a]
if f[0]==pathchain[0]:
pathchain.insert(0,f[1])
pathfaces.pop(a)
break
elif f[0]==pathchain[len(pathchain)-1]:
pathchain.append(f[1])
pathfaces.pop(a)
break
elif f[1]==pathchain[0]:
pathchain.insert(0,f[0])
pathfaces.pop(a)
break
elif f[1]==pathchain[len(pathchain)-1]:
pathchain.append(f[0])
pathfaces.pop(a)
break
pathcyclical=0
if pathchain[0]==pathchain[len(pathchain)-1]:
pathcyclical=1
pathchain.pop()
print "DONE"
print "Path data completed"
print
###########################################################################################################
print "Checking profile object..."
profile=NMesh.GetRawFromObject(profileobj.name)
profileverts=[]
profilefaces=[]
#####################################################
print "-> Calculating normal...",
x0,y0,z0=profile.verts[0].co
x1,y1,z1=profile.verts[1].co
x2,y2,z2=profile.verts[2].co
profileNx=(y1-y0)*(z2-z0)-(z1-z0)*(y2-y0)
profileNy=(z1-z0)*(x2-x0)-(x1-x0)*(z2-z0)
profileNz=(x1-x0)*(y2-y0)-(y1-y0)*(x2-x0)
Nl = sqrt(profileNx*profileNx+profileNy*profileNy+profileNz*profileNz)
profileNx /= Nl
profileNy /= Nl
profileNz /= Nl
print "DONE"
#####################################################
print "-> Checking if profile is planar...",
planar=1
for a in range(2,len(profile.verts)):
Vx,Vy,Vz=profile.verts[a].co
xnv=abs((y1-y0)*(Vz-z0)-(z1-z0)*(Vy-y0))
ynv=abs((z1-z0)*(Vx-x0)-(x1-x0)*(Vz-z0))
znv=abs((x1-x0)*(Vy-y0)-(y1-y0)*(Vx-x0))
lnv = sqrt(xnv*xnv+ynv*ynv+znv*znv)
xnv /= lnv
ynv /= lnv
znv /= lnv
eps=.000001
if (abs(xnv-profileNx)>=eps)&(abs(ynv-profileNy)>=eps)&(abs(znv-profileNz)>=eps):
planar=0
if planar==0:
print "Mesh non-planar"
abort()
else:
print "OK"
#####################################################
print "-> Checking faces...",
onesided=1
for f in profile.faces:
if len(f.v)>2:
onesided=0
if onesided==0:
print "Multi-sided faces found"
abort()
else:
print "OK"
print "Profile OK"
print
#####################################################
print "Building profile data..."
print "-> Building face and vertex list...",
for f in profile.faces:
indices=[0,0]
for a in [0,1]:
v=f.v[a]
exists=0
for b in range(0,len(profileverts)):
v2=profileverts[b]
if (v[0]==v2[0])&(v[1]==v2[1])&(v[2]==v2[2]):
exists=1
indices[a]=b
if exists==0:
profileverts.append([v[0],v[1],v[2]])
indices[a]=len(profileverts)-1
profilefaces.append(indices)
print "DONE"
#####################################################
print "-> Building chain...",
profilechain=[]
f=profilefaces.pop(0)
profilechain.append(f[0])
profilechain.append(f[1])
while len(profilefaces)>0:
for a in range(0,len(profilefaces)):
f=profilefaces[a]
if f[0]==profilechain[0]:
profilechain.insert(0,f[1])
profilefaces.pop(a)
break
elif f[0]==profilechain[len(profilechain)-1]:
profilechain.append(f[1])
profilefaces.pop(a)
break
elif f[1]==profilechain[0]:
profilechain.insert(0,f[0])
profilefaces.pop(a)
break
elif f[1]==profilechain[len(profilechain)-1]:
profilechain.append(f[0])
profilefaces.pop(a)
break
profilecyclical=0
if profilechain[0]==profilechain[len(profilechain)-1]:
profilecyclical=1
profilechain.pop()
print "DONE"
print "Profile data completed"
print
#####################################################
print "Building mesh..."
newMesh=NMesh.GetRaw()
print "-> Creating vertices...",
for a in range(0,len(pathchain)):
va=pathverts[pathchain[a]]
if (a==len(pathchain)-1)&(pathcyclical==0):
vc=pathverts[pathchain[a-1]]
Vx=va[0]-vc[0]
Vy=va[1]-vc[1]
Vz=va[2]-vc[2]
Px=Ny*Vz-Nz*Vy
Py=Nz*Vx-Nx*Vz
Pz=Nx*Vy-Ny*Vx
Pl=sqrt(Px*Px+Py*Py+Pz*Pz)
Px /= Pl
Py /= Pl
Pz /= Pl
elif (a==0)&(pathcyclical==0):
vb=pathverts[pathchain[a+1]]
Vx=vb[0]-va[0]
Vy=vb[1]-va[1]
Vz=vb[2]-va[2]
Px=Ny*Vz-Nz*Vy
Py=Nz*Vx-Nx*Vz
Pz=Nx*Vy-Ny*Vx
Pl=sqrt(Px*Px+Py*Py+Pz*Pz)
Px /= Pl
Py /= Pl
Pz /= Pl
else:
if a==len(pathchain)-1:
vb=pathverts[pathchain[0]]
else:
vb=pathverts[pathchain[a+1]]
if a==0:
vc=pathverts[pathchain[len(pathchain)-1]]
else:
vc=pathverts[pathchain[a-1]]
Vx=vb[0]-va[0]
Vy=vb[1]-va[1]
Vz=vb[2]-va[2]
Px1=Ny*Vz-Nz*Vy
Py1=Nz*Vx-Nx*Vz
Pz1=Nx*Vy-Ny*Vx
Pl=sqrt(Px1*Px1+Py1*Py1+Pz1*Pz1)
Px1 /= Pl
Py1 /= Pl
Pz1 /= Pl
Vx=va[0]-vc[0]
Vy=va[1]-vc[1]
Vz=va[2]-vc[2]
Px2=Ny*Vz-Nz*Vy
Py2=Nz*Vx-Nx*Vz
Pz2=Nx*Vy-Ny*Vx
Pl=sqrt(Px2*Px2+Py2*Py2+Pz2*Pz2)
Px2 /= Pl
Py2 /= Pl
Pz2 /= Pl
Px=Px1+Px2
Py=Py1+Py2
Pz=Pz1+Pz2
theta=acos(Px1*Px2+Py1*Py2+Pz1*Pz2)
Pl=sqrt(Px*Px+Py*Py+Pz*Pz)*cos(theta/2)
Px /= Pl
Py /= Pl
Pz /= Pl
for b in range(0,len(profilechain)):
vp=profileverts[profilechain[b]]
newMesh.verts.append(NMesh.Vert(va[0]+Px*vp[0]+Nx*vp[1], va[1]+Py*vp[0]+Ny*vp[1], va[2]+Pz*vp[0]+Nz*vp[1]))
points=len(profilechain)
print "DONE"
#####################################################
print "Creating faces...",
for a in range(0,len(newMesh.verts)-points,points):
for b in range(0,points-1):
f=NMesh.Face()
f.v.append(newMesh.verts[a+b])
f.v.append(newMesh.verts[a+b+1])
f.v.append(newMesh.verts[a+b+1+points])
f.v.append(newMesh.verts[a+b+points])
newMesh.faces.append(f)
if profilecyclical:
f=NMesh.Face()
f.v.append(newMesh.verts[a+points-1])
f.v.append(newMesh.verts[a])
f.v.append(newMesh.verts[a+points])
f.v.append(newMesh.verts[a+points*2-1])
newMesh.faces.append(f)
if pathcyclical==1:
a+=points
for b in range(0,points-1):
f=NMesh.Face()
f.v.append(newMesh.verts[a+b])
f.v.append(newMesh.verts[a+b+1])
f.v.append(newMesh.verts[b+1])
f.v.append(newMesh.verts[b])
newMesh.faces.append(f)
if profilecyclical:
f=NMesh.Face()
f.v.append(newMesh.verts[a+points-1])
f.v.append(newMesh.verts[a])
f.v.append(newMesh.verts[0])
f.v.append(newMesh.verts[points-1])
newMesh.faces.append(f)
print "DONE"
print "Mesh Completed"
print
#####################################################
NMesh.PutRaw(newMesh,"Test",1)
Redraw()
print "Finished!"
Some problems are that it doesn't copy the location, rotation and scale from the path object, so the mesh may land in a strange place. It is also difficult to control the direction of the profile - this is something I need to look into. Another problem is that it doesn't mitre the corners properly when the edges get too short and multiple segments overlap. Despite all this, I think it could be of some use.
Neil.
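For anyone who wants to reuse just the "Building chain" step from the script (ordering loose edge pairs into a single open or closed vertex path), here is that algorithm isolated as a standalone sketch in modern Python. Function and variable names are my own, and unlike the original it raises an error on branching input instead of looping:

```python
def build_chain(edges):
    """Order a list of (a, b) vertex-index edge pairs into a single chain.

    Mirrors the script's chain-building loop: repeatedly attach any edge
    that shares a vertex with either end of the growing chain. Returns the
    ordered vertex list and a flag telling whether the path is cyclical.
    """
    edges = list(edges)
    a, b = edges.pop(0)
    chain = [a, b]
    while edges:
        for i, (u, v) in enumerate(edges):
            if u == chain[0]:
                chain.insert(0, v); edges.pop(i); break
            elif u == chain[-1]:
                chain.append(v); edges.pop(i); break
            elif v == chain[0]:
                chain.insert(0, u); edges.pop(i); break
            elif v == chain[-1]:
                chain.append(u); edges.pop(i); break
        else:
            # No edge touches either end: branching or disconnected mesh.
            raise ValueError("edges do not form a single unbranched path")
    cyclical = chain[0] == chain[-1]
    if cyclical:
        chain.pop()  # drop the duplicated closing vertex, as the script does
    return chain, cyclical
```

An open path like `[(0, 1), (2, 1), (2, 3)]` comes back as `([0, 1, 2, 3], False)`, while a closed triangle `[(0, 1), (1, 2), (2, 0)]` yields `([0, 1, 2], True)`.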
Configure It Out with the Configuration Management Application Block
Most applications require some sort of configuration data, whether it is a file resource, a database connection string, user settings, a Web service URL, or simply organizational branding requirements. To address these issues prior to .NET, developers had to utilize some type of ASCII file such as an ini file, or they could use the Windows registry. Today with .NET, application configuration data can be stored in a specialized XML file called a configuration file.
Every .NET application has two configuration files that it uses for its settings. One is called the machine.config file and the other is called the app.config file for Windows applications or a web.config file for Web applications. The machine.config file stores configuration data at the machine level, applying the configuration data to all applications running on that particular computer. The application configuration file stores configuration data at the application level, and the configuration data applies only to the specific application for which it was created.
Using the .NET machine or application configuration files is an improvement over having to roll your own mechanism for reading configuration data, but it has some drawbacks.
• The machine and application configuration files are read only, which makes storing configuration data in the .NET configuration files at run time impossible.
• When storing environment-specific settings such as database connection strings in the configuration file, you have to deploy the file to all computers requiring the application. This can be an issue when an application is promoted from a development environment to production with new configuration data. It is possible to mistype new settings into the .NET configuration file.
• Security can be a critical issue in some environments. The configuration file is an XML document that can be read by anyone who has access to it. For Web applications, this is more difficult because ASP.NET prevents browsing the web.config file. However, with a Windows application configuration file, if a user has access to the executable, the user also has access to the configuration file.
To address these issues, you can always roll your own solution, but that can be time-consuming, and you probably will have to do a lot of refactoring as your needs grow. The other way to solve these issues is to find some implementation on the Web and tweak it to meet your needs. Microsoft, through its practices and patterns group, has created just such an implementation, called the Configuration Management Application Block or CMAB for short, which addresses the above-mentioned concerns and more.
The Configuration Management Application Block is part of a series of best practice implementations that Microsoft has put together. You can read an article on the Data Access Application Block in the November/December 2004 issue of CoDe Magazine, and an article on the Exception Management Block in the November/December 2002 issue of CoDe Magazine. (These articles are available in the DevX Premier Club.)
The Concept and Design of the CMAB
The CMAB addresses the needs of storing and retrieving application configuration data by creating a simple, consistent, extensible interface, including:
• A flexible data model, allowing storage of simple name-value pairs or complex hierarchical data such as an XML fragment of user preference data. This flexible representation of configuration data is handled by the Configuration Section Handler or CSH interface.
• An ability to write to any data store via the Configuration Storage Provider or CSP interface. This gives you the ability to store data as an XML file in a database of your choice, or anything else you can think of.
• A mechanism to handle security and data integrity. Storing data, such as a connection string in an XML file, can at times be less than ideal. The Data Protection Provider, or DPP interface, provides mechanisms for signing the stored configuration data and encrypting it, helping to ensure that your data is not viewed or edited by unauthorized eyes.
• An option to cache the data stored by any CSP. With some CSP implementations, it can also refresh the cache when the data changes.
The CMAB provides pre-built implementations of the Configuration Section Handler (CSH), Configuration Storage Provider (CSP), and Data Protection Provider (DPP). You can use these as is, tweak them to meet your specific needs, or create your own implementation from scratch.
Using the CMAB
With the high extensibility of the CMAB, some up-front legwork must be done to ensure a successful implementation. This is true for any new component you add to an existing or new application.
Planning
The first step is defining the application you are building. During this design phase, you need to determine what application data should be configurable. A good rule of thumb is that any data that can be affected by the application's environment (such as network infrastructure, geographical location, and external resources) should be considered configurable data and be stored in a central location.
The next step is to determine how to store the data. CMAB provides two mechanisms for storing configuration data, listed in Table 1.
Table 1: These Configuration Section Handlers (CSH) are included in CMAB.
CSH Class Implementations
Description
XmlHashtableSectionHandler
Implements a class that takes a Hashtable and serializes it into XmlNode. It also takes serialized XML data and deserializes it back into a Hashtable. This is perfect for name value type data.
XmlSerializerSectionHandler
Implements a class that takes any class that supports the .NET XmlSerializer class and serializes it to an XmlNode. It also handles the deserialization back into its original class.
Once you have determined what data to store and how to store it, you must determine where to store it. Should the configuration data be stored in one central location or multiple locations? Should it be easily modifiable or should the data be encrypted? See Table 2 for the Data Storage Providers in CMAB.
Table 2: These Configuration Storage Providers (CSPs) are included in CMAB.
CSP Class Implementations
Description
SqlStorage
The SqlStorage provider allows configuration data to be stored in a SQL Server database. The data is stored as an XML document in a text field, making it difficult to update directly. It is suggested that all updates are made through the CMAB.
XmlFileStorage
This implementation allows configuration data to be saved as an XML file that can be stored locally or on a network share. Read-only data can also be stored directly in the .NET configuration file.
RegistryStorage
The RegistryStorage implementation allows data to be stored in the Windows registry. This has the same issues with direct editing as the SqlStorage implementation, and can only apply settings to applications hosted on the local computer.
Getting Set Up
Now that you know what data to store, where to store it, and how it will be stored, you can start implementing the CMAB in your project. First, the CMAB must be compiled before you can add the necessary assembly references.
Once you have downloaded and installed CMAB, the default location for the code to compile it can be found in the C:\Program Files\Microsoft Application Blocks for .NET\Configuration Management\Code folder. There is a version for Visual Basic .NET in the VB folder and a C# version in the CS folder. Pick the language of your choice and select the appropriate Microsoft.ApplicationBlocks.ConfigurationManagement solution. You will also find three QuickStart sample solutions in the same folder.
Figure 1. Use the Add Reference screen to set up the assembly references to use the CMAB.
Once the solution is selected and opened in Visual Studio .NET, compile the solution and you’re ready to go. The two core CMAB assemblies that need to be referenced in your application are the Microsoft.ApplicationBlocks.ConfigurationManagement.dll, and the Microsoft.ApplicationBlocks.ConfigurationManagement.Interfaces.dll (see Figure 1). These assemblies can be found in the output folder of the specific language solution you used to compile the CMAB. One important thing to note is that if you plan to use the SqlStorage Storage Provider, you must also add Microsoft.ApplicationBlocks.Data.dll as an assembly reference to your application project.
With the necessary assembly references added to your project, it is time to dig into the application configuration file and add some entries to it. In Figure 2, you can see an example of a configuration file. The two main things you will have to do are add two custom configuration declaration elements to the <configSections> element and create a new custom configuration section element.
Figure 2. This Hashtable configuration example uses a separate XML file to store the configuration data and utilizes the refreshOnChange event.
If your application is a Windows application, make sure an app.config file already exists; for a Web application or Web service, the web.config file is already created for you.
At the top of the config file, look for an element named <configSections> within the <configuration> element. If <configSections> is not present in your application configuration file, add it in.
Next, add a child XML element to the <configSections> element. This element needs to be named <section>. The <section> element has two attributes: name and type. The name attribute contains the custom section name used to define the different configuration section handlers and which configuration storage provider to use with them. For now, you can make the value of the name attribute applicationConfigurationManagement.
Make the value of the type attribute:
Microsoft.ApplicationBlocks.ConfigurationManagement.ConfigurationManagerSectionHandler,Microsoft.ApplicationBlocks.ConfigurationManagement,Version=1.0.0.0,Culture=neutral,PublicKeyToken=null
Author’s Note: There are no spaces or breaks in the preceding code, and it all goes on one line.
Now that the applicationConfigurationManagement <section> element is created, the second <section> element must be added within <configSections>. The second element allows the CMAB to determine which configuration section handler to use. The second <section> element, like the first, has both a name and a type attribute.
For the purposes of this walk-through, use the CMAB built-in XML Hashtable Serializer Configuration Section Handler, so the value of the type attribute is:
Microsoft.ApplicationBlocks.ConfigurationManagement.XmlHashtableSectionHandler,Microsoft.ApplicationBlocks.ConfigurationManagement,Version=1.0.0.0,Culture=neutral,PublicKeyToken=null
Again, the preceding code is all one line with no spaces or breaks. The name attribute is myConfig. See Table 1 for a listing of the Configuration Section Handlers (CSH) included with CMAB.
Adding a Custom Configuration Section
Now that the custom configuration declarations have been added, the next step is to add the custom configuration section. Again looking at Figure 2, you can see the custom configuration section <applicationConfigurationManagement>. Notice that the element name matches the name attribute of the first custom configuration declaration <section>; this is because that declaration defines the class and assembly used to parse the specified custom configuration section.
Add the <applicationConfigurationManagement> element to the <configuration> element. Next, add a new child XML element called <configSection> with an attribute called name. The value of the name attribute should correspond with the name attribute in the custom configuration declaration <section> element that defines which configuration section handler to use.
Looking at the application configuration file in Listing 1, the value is myConfig. The <configSection> element can have up to three child XML elements:
• The <configCache> element, which handles the caching settings
• An element that handles the settings for the Configuration Storage Provider (CSP)
• An element that handles the settings for the Data Protection Provider (DPP)
The <configCache> element contains two attributes: one called enabled that can be set to true or false, and the other called refresh. For more details on how to set the refresh attribute's value, look at the section Caching Data later in this article. The example application configuration file in Listing 1 has the cache feature enabled and set to refresh every 15 minutes.
The CSP element, the only required child element, defines the data storage provider to use as well as any necessary attributes required by the specific storage provider. Therefore, if the configuration storage provider used a database, you would probably have an attribute to specify a connection string to connect to the database. Table 2 contains a list of the configuration data providers included with the CMAB. Table 3 contains a listing of the configuration storage providers and the corresponding attributes.
The DPP element specifies a data protection provider that can be used for signing and encrypting the stored configuration data. The CMAB comes with two DPPs, listed in Table 4. You can find the attributes for these two included DPPs in Table 5.
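To make the structure concrete, here is a sketch of what such a configuration file can look like. Treat it as illustrative only: the <configSections> declarations use the type strings shown above, but the element and attribute names of the storage provider entry are assumptions drawn from Table 3 rather than a verified listing.

```xml
<configuration>
  <configSections>
    <section name="applicationConfigurationManagement"
             type="Microsoft.ApplicationBlocks.ConfigurationManagement.ConfigurationManagerSectionHandler,Microsoft.ApplicationBlocks.ConfigurationManagement,Version=1.0.0.0,Culture=neutral,PublicKeyToken=null" />
    <section name="myConfig"
             type="Microsoft.ApplicationBlocks.ConfigurationManagement.XmlHashtableSectionHandler,Microsoft.ApplicationBlocks.ConfigurationManagement,Version=1.0.0.0,Culture=neutral,PublicKeyToken=null" />
  </configSections>

  <applicationConfigurationManagement defaultSection="myConfig">
    <configSection name="myConfig">
      <configCache enabled="true" refresh="0,15,30,45 * * * *" />
      <!-- Storage provider entry: element and attribute names assumed; see Table 3 -->
      <configProvider assembly="Microsoft.ApplicationBlocks.ConfigurationManagement"
                      type="Microsoft.ApplicationBlocks.ConfigurationManagement.Storage.XmlFileStorage"
                      path="myConfig.config" refreshOnChange="true"
                      signed="false" encrypted="false" />
    </configSection>
  </applicationConfigurationManagement>
</configuration>
```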
Table 3: Configuration Storage Provider (CSP) attributes.

Common to all included storage providers:
• Assembly: Specifies the assembly name that contains the storage provider. (Required.)
• Type: Specifies the storage provider class inside the assembly to use. (Required.)
• Signed: If set to true, the configuration data is signed and a data protection provider must be provided. (Optional.)
• Encrypted: If set to true, the configuration data is encrypted and a Data Protection Provider (DPP) must be provided. (Optional.)

SqlStorage:
• connectionStringRegKeyPath: Specifies the Windows registry path where the SQL connection string is stored. (Either this setting or connectionString is required.)
• connectionString: Specifies the SQL connection string to use when connecting to a SQL database. (Either this setting or connectionStringRegKeyPath is required.)
• getConfigSP: Specifies the stored procedure name used for returning configuration data. (Optional. Defaults to cmab_get_config.)
• setConfigSP: Specifies the stored procedure name used for saving configuration data to the database. (Optional. Defaults to cmab_set_config.)

XmlFileStorage:
• Path: Specifies the path where the configuration data XML file is stored. (Optional. Defaults to searching the application configuration file for a custom configuration element that matches the name attribute specified in the CSH custom configuration declaration.)
• refreshOnChange: If set to true when using the CMAB cache feature, any file modification made to the configuration data file refreshes the cached configuration data. (Optional.)

RegistryStorage:
• registryRoot: Specifies the registry root where the configuration is stored. (Required.)
• registrySubKey: Specifies the registry sub key where the configuration data is stored. (Required.)
Table 4: Out of the Box Data Protection Providers (DPPs).
DPP Class Implementations
Description
BCLDataProtection
Utilizes the .NET Cryptography libraries to handle encryption and decryption of configuration data. Utilizing this implementation means you will have to manage encryption keys manually.
DPAPIDataProtection
The DataProtection implementation utilizes the Win32 DPAPI or Data Protection API. This API handles the management of encryption keys for you. The encryption keys can be stored in the user key store, which, on a Windows NT domain with roaming profiles turned on, allows a user to encrypt and decrypt data on any computer on that domain. An alternative is to use the machine key store to allow anyone accessing that particular computer to encrypt and decrypt data.
Table 5: Data Protection Provider (DPP) attributes.

Common to all included data protection providers:
• Assembly: Specifies the assembly name that contains the data protection provider. (Required.)
• Type: Specifies the data protection provider class inside the assembly to use. (Required.)
• hashKeyRegistryPath: Specifies the Windows registry path where the hash key is stored. (Either this setting or hashKey is required.)
• hashKey: Specifies the hash key to use when encrypting data. (Either this setting or hashKeyRegistryPath is required.)

BCLDataProtection:
• symmetricKeyRegistryPath: Specifies the Windows registry path where the symmetric key is stored. (Either this setting or symmetricKey is required.)
• symmetricKey: Specifies the symmetric key to use when encrypting data. (Either this setting or symmetricKeyRegistryPath is required.)

DPAPIDataProtection:
• keyStore: Specifies the key store to use. (Optional. Defaults to using the machine key to encrypt the data.)
Reading Data
There are two overloaded Read methods. The first method accepts a string parameter that defines the section name you want to use. The section name defines which CSH and CSP will be used for retrieving the configuration data.
In Figure 2, the name attribute in the configSection and section elements are the same, thus passing in myConfig to the Read method will tell the configuration manager to use the XML Hashtable Configuration Section Handler, and the XML File Storage data provider for retrieving the configuration data.
public object GetAllSettings(string sectionName)
{
    return ConfigurationManager.Read(sectionName);
}
It is possible to have multiple section handlers and data providers used within the same application just by specifying them in the application configuration file.
The second overloaded Read method accepts no parameters, and it uses the defaultSection attribute of the applicationConfigurationManagement element to define which CSP and CSH to use. One very important thing to note is that the Read method that uses the defaultSection can only be used with Hashtables. If you want to use an object other than a Hashtable, the Read method must be called with the sectionName parameter.
public object GetMySetting(string key)
{
    Hashtable configData = (Hashtable)ConfigurationManager.Read();
    return configData[key];
}
Writing Data
Writing data to the CSP is just about as easy as reading it. There are two overloaded Write methods. The first overload accepts a section name and an object representing your configuration data; the second overload, using the defaultSection attribute, accepts only the object representing your configuration data. The same Hashtable-only rule that applies to the parameterless Read method also applies when using the Write method overload without the sectionName parameter.
public void SaveAllSettings(string sectionName, object data)
{
    ConfigurationManager.Write(sectionName, data);
}
Caching Data
For almost every application, performance can be a concern. The CMAB offers a solution by providing in-memory caching functionality. The caching is really handy when dealing with settings that change rarely.
With some CSPs, such as the included XmlFileStorage provider, an event can be raised within the CMAB to refresh the in-memory cache from the CSP. To use the CMAB caching, a configCache element must be added to the configSection element in your application’s .NET configuration file, and two attributes, enabled and refresh, must be set. The enabled attribute can be set to true or false, allowing caching to be turned on or off. The refresh attribute uses extended format notation.
The extended format consists of five values separated by spaces. These values can consist of an asterisk (*) to represent all possible numbers, a single number to indicate a single value, or a comma-separated list of numbers to indicate multiple values. The five values are, in this exact order: minutes, hours, day of month, month, day of week. Here are some examples of possible settings:
• 1 * * * *: The cache expires every hour, at 1 minute past the hour.
• 0,15,30,45 * * * *: The cache expires every 15 minutes (on the hour and at 15, 30, and 45 minutes past).
• 0 0,12 * * *: The cache expires at noon and at midnight every day.
• 0 0 * * 1,2,3,4,5: The cache expires at midnight only on weekdays.
• 0 12 5 * 0: The cache expires at noon on the first Sunday after the 5th day of every month.
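The refresh format behaves like a simplified cron expression. To make the semantics above concrete, here is a hedged Python sketch (illustrative only, not CMAB code) that checks a given time against such a spec:

```python
def matches(spec, minute, hour, day, month, weekday):
    """Check a time against a CMAB-style refresh spec.

    Field order is: minutes, hours, day of month, month, day of week.
    Each field is '*', a single number, or a comma-separated list of numbers.
    """
    fields = spec.split()
    values = (minute, hour, day, month, weekday)
    for field, value in zip(fields, values):
        if field == "*":
            continue  # wildcard: any value matches
        if value not in [int(n) for n in field.split(",")]:
            return False
    return True
```

For example, `matches("0 0 * * 1,2,3,4,5", 0, 0, 8, 7, 6)` is False because weekday 6 is not a weekday in the list, while the same spec matches midnight on any Monday through Friday.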
Getting Underneath the Hood
The extensible design of the CMAB allows the creation of new components that plug right in. This extensibility is provided by interfaces. In order to create a new component for the CMAB, inherit from the specific interface for the component being targeted and implement it.
The CSH Interface
The CSH uses the IConfigurationSectionHandler interface in the .NET System.Configuration namespace for reading data and provides a new interface, IConfigurationSectionHandlerWriter, for writing data. The IConfigurationSectionHandler is used to simplify the implementation for storing read-only configuration data in the application or machine configuration files. Providing this ability allows you to use the exact same implementation whether you want to use the standard .NET configuration files or an external data source to store configuration data.
The IConfigurationSectionHandler uses the Create method to deserialize an XML node into an object. It’s up to you to provide the implementation to deserialize the data in much the same way you would when using the IConfigurationSectionHandler.Create method with the .NET configuration files.
The IConfigurationSectionHandlerWriter inherits from IConfigurationSectionHandler and provides a new method that needs to be implemented, called Serialize. This is where you will implement the serialization of your object into an XML node.
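The division of labor between Create (deserialize) and Serialize can be sketched in a few lines of Python, with a dict standing in for the Hashtable and xml.etree standing in for the XmlNode. This is an illustration of the round trip a section handler performs, not the CMAB implementation:

```python
import xml.etree.ElementTree as ET

def serialize(data):
    """Serialize a dict of string settings into an XML node (the Serialize role)."""
    root = ET.Element("settings")
    for key, value in data.items():
        item = ET.SubElement(root, "item", {"key": key})
        item.text = value
    return root

def create(node):
    """Deserialize the XML node back into a dict (the Create role)."""
    return {item.get("key"): item.text for item in node.findall("item")}
```

A handler is correct when `create(serialize(settings))` returns the original settings unchanged, which is exactly what the CMAB expects when it writes a section to a storage provider and later reads it back.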
Configuration Storage Provider Interface
The CSP interfaces provide the means to actually read and write configuration data to and from the data storage provider whether it’s a database, an XML file, or something else. The CSP interfaces consist of the IConfigurationStorageReader interface for read-only operations and the IConfigurationStorageWriter interface for read-and-write operations. It is assumed that if you are going to write data that you are also going to read it as well.
The IConfigurationStorageReader interface has four things to implement: the Init and Read methods, the IsInitialized property, and the ConfigChanges event. The Init method is where you initialize your CSP, whether it grabs an XML file or sets up a connection to a database. The Read method returns an XmlNode from your storage provider. The IsInitialized property indicates whether or not the CSP has been initialized. If you want to support configuration change events, implement the ConfigChanges event.
The IConfigurationStorageWriter inherits from the IConfigurationStorageReader and adds a method called Write, which accepts an XmlNode as a parameter. This method is intended to save the serialized configuration data to the storage provider.
Data Protection Provider Interface
The IDataProtection interface provides the methods necessary to encrypt, decrypt, and sign (hash) data. The four methods that need to be implemented are Init, Encrypt, Decrypt, and ComputeHash.
The Init method initializes the DPP and sets up any variables needed by the specific DPP implementation. The Encrypt and Decrypt methods are self-explanatory. The ComputeHash method creates a hash signature of the provided data, which lets you detect when configuration data has been changed outside of the CMAB. How many times has the configuration settings fairy changed production settings without your knowledge? Now, at least, you will know right away.
One important thing to note is that the DPP is intended to work with the CSP. Hence the CSP must support and call the DPP interfaces to utilize your DPP implementation with your configuration data.
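The signing idea behind ComputeHash can be sketched in a few lines. This is an illustrative Python analogue, not the CMAB's .NET implementation; the `SECRET` key and helper names are assumptions for the example, and a real DPP would manage its key material carefully.

```python
# Sign the serialized configuration, then detect out-of-band edits by
# comparing signatures. A keyed HMAC is used so an attacker who can edit
# the file cannot simply recompute a plain hash.
import hashlib
import hmac

SECRET = b"config-signing-key"  # assumed key for this sketch

def compute_hash(data: bytes) -> str:
    # Returns a hex-encoded HMAC-SHA256 signature of the data
    return hmac.new(SECRET, data, hashlib.sha256).hexdigest()

original = b"<appSettings><retries>5</retries></appSettings>"
signature = compute_hash(original)

# The "settings fairy" edits the file behind your back:
tampered = b"<appSettings><retries>99</retries></appSettings>"
```

On the next read, recomputing the signature over the stored bytes and comparing it with the saved one reveals the tampering immediately.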
The Configuration Management Application Block provides a simple, consistent, and robust interface for handling your application configuration data. It comes with a series of out-of-the-box implementations that can be used right away, tweaked to perfection, or tossed out for a custom implementation that you build. Once the interfaces are implemented and configured, the CMAB's use could not be easier or more straightforward. Overall, the CMAB should be able to handle most, if not all, of your configuration data needs.
How to Add Comments to a Hugo Static Site
2023-12-13
Tags: HUGO NGINX PYTHON
My site is built with Hugo, which compiles static pages. On top of that, it supports dynamic features such as search and comments. I am covering the implementation in two articles; this is the second one.
This article explains how the comment feature is implemented; another article covers how to add search.
Preview
The comment feature looks like this:
Prerequisites
I briefly described how to set up the static site in an earlier article. To add dynamic content to a static site, you need:
• A cloud server under your own control
• A reverse proxy (e.g. nginx)
• A database (e.g. mysql or redis)
With these in place, let's look at the overall flow. Compared with the search feature, adding comments takes fewer steps.
How It Works
The basic flow is:
1. Choose a route name for comments that does not clash with the Hugo project's existing static routes; I use /comment/
2. Add a comment list and a submission form to every article page template in the Hugo project
3. Load a JavaScript file on every article page that requests the comment content from the backend.
4. Build a backend comment service that reads comments from the database.
5. In the nginx configuration, forward requests under /comment/ to the backend comment service
A simple deployment diagram:
The steps are described one by one below:
Choosing the Route
The route name must not duplicate anything under the Hugo project's content directory, because it is used by both Hugo and nginx. I use /comment/.
Creating the Comment List and Form
The comment list at the bottom of the page looks like this:
( ☝ the comment list in action )
Example HTML for the comment list and form:
<h2>精彩评论</h2>
<div class="border-0 divide-y mb-6" id="comment-container"></div>
<div class="border-0 w-full pt-4" id="comment">
    <textarea class="text-sm my-1 border-1 divide-y leading-6 border border-gray-400 w-full p-2" id="comment-content" name="comment-content" placeholder="必填" minlength="1" maxlength="1000" rows="5"></textarea>
    <div class="flex w-full flex-row flex-nowrap justify-between items-end md:justify-start md:items-center ">
        <div class="my-2 ">
            <label class="mr-1" for="comment-nick">称呼</label>
            <input class="w-40 md:w-auto border border-gray-300 px-1" id="comment-nick" placeholder="必填" name="comment-nick" autocomplete="on" maxlength="25">
        </div>
        <div class="mx-4" id="comment-prompt">
            <button id="comment-submit" style='color:rgba(200,200,200,1)' disabled class="border border-gray-400 justify-self-stretch rounded border w-24 px-2 my-2 hover:text-eureka">提交</button>
        </div>
    </div>
</div>
In the code above, the div with id comment-container is the comment list, which JavaScript will later fetch and fill in; the div with id comment is the form for submitting a comment.
For easier maintenance, don't put this snippet directly into the article page template. Save it as a separate partial instead, for example layouts/partials/components/comment.html, and reference it from the single-article template.
The single-article template is layouts/_default/single.html in the Hugo project. Edit that file and include the comment.html above at the appropriate place:
{{ with .Page.Params.comment }}
{{ partial "components/comment" . }}
{{ end }}
The .Page.Params.comment parameter is a switch added in the front matter that controls whether the current article has comments enabled.
The styling attributes of the page elements are omitted here, since they depend on each site's visual style.
No <form> element is used here; the comment content is submitted with vanilla JavaScript instead.
Requesting Comments from the Page
Create a new JavaScript file, say comment.js, under the static directory. Because the comment list sits near the bottom of the page, I want this script to run only after all page elements have loaded, so it goes right before </body>. Concretely, add the following to layouts/_default/baseof.html:
{{- if .IsPage }}
{{- $assets := .Site.Data.assets }}
<script defer src="{{ $assets.base64.js.url }}"></script>
<script defer src="{{ $assets.comment.js.url }}"></script>
{{- end }}
The $assets variable is defined in data/assets.yaml, which keeps the js file names in one place:
base64:
  js:
    url: /js/lib/base64.js
comment:
  js:
    url: /js/lib/comment.js
The base64.js file encodes the article path into a URL-safe string, keeping the request URL tidy.
The key parts of comment.js are:
var btn = document.getElementById('comment-submit');
// fetch and submit comments
function main(){
    // bind the submit button to send the comment
    if (btn !== null){
        btn.addEventListener('click', function(){
            var data = {
                path: window.location.pathname.trim(),
                nick: nick.value.trim() || '',
                content: content_area.value.trim(),
            }
            var r = new XMLHttpRequest();
            r.addEventListener('load', function () {
                var result = JSON.parse(r.responseText)
                var prompt = document.getElementById('comment-prompt');
                if(result.status === 'ok' && prompt !== null){
                    prompt.innerHTML = '感谢您的评论,将在审核后公布'
                }
            });
            r.open("POST", "/comment/", true);
            r.setRequestHeader('content-type', 'application/json')
            r.send(JSON.stringify(data));
            btn.setAttribute('disabled' ,'')
            hide(btn);
        })
    }
    // load the comment list
    if(window.location.pathname.length > 0){
        var r = new XMLHttpRequest();
        r.addEventListener('load', function () {
            var result = JSON.parse(r.responseText)
            if(result.status === 'ok'){
                populate(result)
            }
        });
        r.open("GET", "/comment/"+BASE64.urlsafe_encode(window.location.pathname), true);
        r.send(null);
    }
}
function populate(result){
    var comment_container = document.getElementById('comment-container');
    if (comment_container === null){
        return
    }
    if (result.output.length > 0){
        // render the comment list
    }else{
        comment_container.innerText = '暂无评论,欢迎您在下方留言'
    }
}
window.onload = main
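The BASE64.urlsafe_encode call above turns the article path into a URL-safe token before it is appended to /comment/. On the Python side, the standard library's base64 module provides the matching pair; a quick sketch of the round trip (the example path is illustrative):

```python
# URL-safe Base64: '+' and '/' are replaced with '-' and '_', so the
# encoded path can be embedded directly in a URL segment.
import base64

def urlsafe_encode(path: str) -> str:
    return base64.urlsafe_b64encode(path.encode("utf-8")).decode("ascii")

def urlsafe_decode(code: str) -> str:
    return base64.urlsafe_b64decode(code.encode("ascii")).decode("utf-8")

code = urlsafe_encode("/tech/pull-docker-images-behind-proxy/")
```

The backend view decodes this token to recover which article's comments are being requested.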
Next comes the backend: storing and serving the comment data.
Designing the Database
Storing comments requires a database; here I use mysql as an example.
After installing mysql, first create the database and table. To record which article each comment belongs to, the table stores the article's section, title, and other fields.
The table design looks like this:
+----+---------+---------------------------------+--------------------------+--------------+---------------------+--------+
| id | section | title | content | nick | time | state |
+----+---------+---------------------------------+--------------------------+--------------+---------------------+--------+
| 2 | tech | pull-docker-images-behind-proxy | 找了好久,终于找到想要的 | tony | 2023-01-09 15:55:27 | show |
+----+---------+---------------------------------+--------------------------+--------------+---------------------+--------+
The section and title fields are used to build the article's link; content holds the comment text; nick is the commenter's name; and state controls whether a comment is displayed.
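The schema above can be sketched with the standard library's sqlite3 module (the article uses mysql in production; column names match the table shown, and the inserted row mirrors the sample data):

```python
# In-memory sqlite3 version of the comments table for experimentation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE comment (
        id      INTEGER PRIMARY KEY,
        section TEXT NOT NULL,
        title   TEXT NOT NULL,
        content TEXT NOT NULL,
        nick    TEXT NOT NULL,
        time    TEXT NOT NULL,
        state   TEXT NOT NULL DEFAULT 'pending'
    )
""")
conn.execute(
    "INSERT INTO comment (section, title, content, nick, time, state)"
    " VALUES (?, ?, ?, ?, ?, ?)",
    ("tech", "pull-docker-images-behind-proxy", "找了好久,终于找到想要的",
     "tony", "2023-01-09 15:55:27", "show"),
)
# Only approved comments (state = 'show') are served to readers.
rows = conn.execute(
    "SELECT nick, state FROM comment WHERE state = 'show'").fetchall()
```

The state column defaulting to a non-visible value is what makes the "publish after review" moderation flow work.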
Building the Backend Comment Service
A simple approach is to build the service with the Flask framework. That could be an article of its own, so only the essential parts of the view functions are listed here:
# comment is a Blueprint whose common URL prefix is /comment
# read comment entries
@comment.route('/<string:code>', methods=['GET'])
def index(code):
    # variable preparation and exception handling omitted
    c = Comment.query.filter_by(section=section, title=title).all()
    comments = [
        {
            'id':x.id,
            'content':x.content,
            'nick':x.nick,
            'time':datetime.datetime.strftime(x.time, '%Y-%m-%d %H:%M')
        } for x in c if x.state == 'show']
    resp = {'output': comments, 'status':'ok'}
    status_code = 200
    return jsonify(resp), status_code

# write a comment entry
@comment.route('/', methods=['POST'])
def add_comment():
    # variable preparation and exception handling omitted
    c = Comment(
        section=section,
        title=title,
        nick=nick,
        content=content,
        time=datetime.datetime.now()
    )
    db.session.add(c)
    db.session.commit()
    resp = {'output': repr(c), 'status':'ok'}
    status_code = 200
    return jsonify(resp), status_code
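One part of the omitted "variable preparation" in the GET view is recovering section and title from the Base64 code in the URL. A sketch of how that could look (the real view would also validate input and handle errors; the helper name is mine):

```python
# Decode the URL-safe Base64 token back into an article path, then split
# the path into its section and title components.
import base64

def parse_code(code: str):
    path = base64.urlsafe_b64decode(code.encode("ascii")).decode("utf-8")
    # e.g. "/tech/pull-docker-images-behind-proxy/" -> two path segments
    parts = [p for p in path.split("/") if p]
    if len(parts) < 2:
        raise ValueError("unexpected path: " + path)
    return parts[0], parts[1]

code = base64.urlsafe_b64encode(
    b"/tech/pull-docker-images-behind-proxy/").decode("ascii")
section, title = parse_code(code)
```

These two values are exactly what the Comment.query.filter_by(section=section, title=title) lookup needs.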
With gunicorn and supervisord installed on top of that, the web service can be started; assume here that it listens on port 8888.
Configuring nginx
nginx needs to forward requests under /comment/ to the backend web service, while all other requests are still served as static files:
location ~ /comment/ {
    proxy_pass http://127.0.0.1:8888;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
This gives Hugo's static content a working dynamic comment feature alongside it.
You are welcome to repost this article, but please keep a link to the original source and do not use it for commercial purposes.
This is a personal site. Unless otherwise noted, all articles are original and licensed under CC-BY-NC.
Music Streaming Volumes Data
At Nomad Data we help you find the right dataset to address these types of needs and more. Submit your free data request describing your business use case and you'll be connected with data providers from our over 3,000 partners who can address your exact need.
Introduction
Understanding the dynamics of music streaming has always been a complex task. Historically, insights into music preferences, trends, and consumption patterns were gleaned from physical album sales, radio airplay, and later, digital downloads. Before the era of data analytics, stakeholders in the music industry relied on these antiquated methods, which provided a fragmented view of listeners' behaviors and preferences. The advent of music streaming services marked a significant shift, yet, without concrete data, industry professionals were often in the dark, making decisions based on intuition rather than evidence.
Before the proliferation of streaming services and the digital tracking of music consumption, the industry relied heavily on manual surveys, sales reports from record stores, and airplay charts from radio stations. These methods were not only time-consuming but also prone to inaccuracies and lacked the granularity needed for precise decision-making. The landscape began to change with the introduction of the internet and connected devices, which paved the way for digital music platforms and, subsequently, the collection of vast amounts of data on music consumption.
The importance of data in understanding music streaming cannot be overstated. In the past, weeks or even months could pass before trends were identified, by which time the market could have shifted entirely. Now, with real-time data, industry professionals can immediately identify changes in listening habits, emerging genres, and viral tracks, allowing for more agile and informed decision-making.
The transition to a data-driven approach in the music industry mirrors broader trends across sectors, where the digitization of processes and the advent of big data analytics have revolutionized how insights are gathered and acted upon. This shift has been particularly impactful in music streaming, where the volume of data generated by platforms offers unprecedented opportunities for analysis.
However, navigating this wealth of information requires a nuanced understanding of the types of data available and how they can be leveraged to glean insights. This article will explore various categories of data relevant to music streaming, shedding light on how they can help business professionals better understand streaming volumes, listener preferences, and market trends.
The evolution from traditional methods to data-driven strategies underscores the transformative power of data in the music industry. As we delve into the specifics of each data type, it becomes clear that the ability to analyze streaming data in real-time is not just an advantage but a necessity for staying competitive in today's rapidly changing music landscape.
Entertainment Data
History and Evolution
The category of entertainment data, particularly as it pertains to music streaming, has undergone significant evolution. Initially, the music industry relied on physical album sales and radio airplay to gauge popularity and trends. The digital age introduced downloads as a new metric, but it was the advent of streaming services that truly revolutionized data collection in the industry. Technology advances, including sophisticated analytics platforms and the widespread adoption of smartphones, have facilitated the collection and analysis of streaming data.
Entertainment data encompasses a wide range of information, from streaming volumes on platforms like Spotify, Apple Music, and Amazon Music, to listener demographics and behavior patterns. The roles and industries that have historically used this data include record labels, music producers, artists, and marketing professionals, all of whom rely on insights from streaming data to make strategic decisions.
The amount of data generated by music streaming services is accelerating at an unprecedented rate, thanks to the global adoption of these platforms and the continuous engagement of users. This explosion of data offers a wealth of opportunities for analysis but also presents challenges in terms of data management and interpretation.
Utilizing Entertainment Data for Insights
Entertainment data can be leveraged in numerous ways to gain insights into music streaming:
• Tracking Streaming Volumes: Understanding the total streaming volumes for popular platforms can help identify trends, popular genres, and emerging artists.
• Listener Demographics and Preferences: Analyzing who is listening to what and when can inform targeted marketing strategies and content creation.
• Market Activity Analysis: Platforms like Luminate offer comprehensive consumption data, providing insights into full market activity, which is crucial for compiling industry-standard charts and benchmarks.
By harnessing entertainment data, industry professionals can make informed decisions about artist development, playlist curation, and promotional strategies, ultimately leading to increased engagement and revenue.
Conclusion
The importance of data in understanding music streaming and making informed decisions cannot be overstated. The transition from traditional methods of gauging music popularity to a data-driven approach has transformed the industry, allowing for real-time insights and agile responses to market changes.
As organizations become increasingly data-driven, the discovery and analysis of relevant data types will be critical to maintaining a competitive edge. The music industry, with its rich sources of streaming data, stands at the forefront of this shift, leveraging insights to drive growth and innovation.
Looking ahead, the potential for new types of data to emerge and provide additional insights into music streaming is vast. From advanced listener analytics to predictive models for emerging trends, the future of music data analytics is bright, promising even deeper understandings of the complex dynamics of music consumption.
In conclusion, the role of data in the music industry has evolved from a supplementary tool to a central pillar of strategic decision-making. As we continue to explore and understand the various categories of data relevant to music streaming, the potential for innovation and growth in the industry is limitless.
Appendix
Industries and roles that could benefit from music streaming data include:
• Record Labels and Music Producers: For artist development and market positioning.
• Marketing Professionals: For targeted advertising and promotional campaigns.
• Investors and Consultants: For identifying growth opportunities within the music industry.
• Insurance Companies and Market Researchers: For risk assessment and market trend analysis.
The future of data in the music industry is not just about understanding what has happened but predicting what will happen next. Advances in AI and machine learning have the potential to unlock the value hidden in decades-old documents as well as modern streaming data, offering unprecedented insights into the music industry's future.
LibreOffice Module sw (master)
htmlgrin.cxx
1 /* -*- Mode: C++; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*- */
2 /*
3 * This file is part of the LibreOffice project.
4 *
5 * This Source Code Form is subject to the terms of the Mozilla Public
6 * License, v. 2.0. If a copy of the MPL was not distributed with this
7 * file, You can obtain one at http://mozilla.org/MPL/2.0/.
8 *
9 * This file incorporates work covered by the following license notice:
10 *
11 * Licensed to the Apache Software Foundation (ASF) under one or more
12 * contributor license agreements. See the NOTICE file distributed
13 * with this work for additional information regarding copyright
14 * ownership. The ASF licenses this file to you under the Apache
15 * License, Version 2.0 (the "License"); you may not use this file
16 * except in compliance with the License. You may obtain a copy of
17 * the License at http://www.apache.org/licenses/LICENSE-2.0 .
18 */
19
20 #include <memory>
21 #include <hintids.hxx>
22 #include <comphelper/string.hxx>
23 #include <vcl/svapp.hxx>
24 #include <vcl/wrkwin.hxx>
25 #include <svx/svxids.hrc>
27 #include <svl/stritem.hxx>
28 #include <svl/urihelper.hxx>
29 #include <editeng/fhgtitem.hxx>
30 #include <editeng/lrspitem.hxx>
31 #include <editeng/adjustitem.hxx>
32 #include <editeng/brushitem.hxx>
33 #include <editeng/colritem.hxx>
34 #include <editeng/boxitem.hxx>
35 #include <editeng/ulspitem.hxx>
36 #include <editeng/langitem.hxx>
38 #include <sfx2/docfile.hxx>
39 #include <sfx2/event.hxx>
40 #include <vcl/imap.hxx>
41 #include <svtools/htmltokn.h>
42 #include <svtools/htmlkywd.hxx>
43 #include <unotools/eventcfg.hxx>
44 #include <sal/log.hxx>
45 #include <osl/diagnose.h>
46
47 #include <fmtornt.hxx>
48 #include <fmturl.hxx>
49 #include <fmtsrnd.hxx>
50 #include <fmtinfmt.hxx>
51 #include <fmtcntnt.hxx>
52 #include <fmtanchr.hxx>
53 #include <fmtfsize.hxx>
54 #include <frmatr.hxx>
55 #include <charatr.hxx>
56 #include <frmfmt.hxx>
57 #include <charfmt.hxx>
58 #include <docary.hxx>
59 #include <docsh.hxx>
60 #include <pam.hxx>
61 #include <doc.hxx>
62 #include <ndtxt.hxx>
63 #include <shellio.hxx>
64 #include <poolfmt.hxx>
65 #include <IMark.hxx>
66 #include <ndgrf.hxx>
67 #include "htmlnum.hxx"
68 #include "swcss1.hxx"
69 #include "swhtml.hxx"
70 #include <numrule.hxx>
71 #include <fmtflcnt.hxx>
72 #include <IDocumentMarkAccess.hxx>
73
74 #include <vcl/graphicfilter.hxx>
75 #include <tools/urlobj.hxx>
76
77 using namespace ::com::sun::star;
78
80 {
83 { nullptr, 0 }
84 };
85
87 {
88 { OOO_STRING_SVTOOLS_HTML_VA_top, text::VertOrientation::LINE_TOP },
89 { OOO_STRING_SVTOOLS_HTML_VA_texttop, text::VertOrientation::CHAR_TOP },
90 { OOO_STRING_SVTOOLS_HTML_VA_middle, text::VertOrientation::CENTER },
91 { OOO_STRING_SVTOOLS_HTML_AL_center, text::VertOrientation::CENTER },
92 { OOO_STRING_SVTOOLS_HTML_VA_absmiddle, text::VertOrientation::LINE_CENTER },
95 { OOO_STRING_SVTOOLS_HTML_VA_absbottom, text::VertOrientation::LINE_BOTTOM },
96 { nullptr, 0 }
97 };
98
99 ImageMap *SwHTMLParser::FindImageMap( const OUString& rName ) const
100 {
101 OSL_ENSURE( rName[0] != '#', "FindImageMap: name begins with '#'!" );
102
103 if (m_pImageMaps)
104 {
105 for (auto &rpIMap : *m_pImageMaps)
106 {
107 if (rName.equalsIgnoreAsciiCase(rpIMap->GetName()))
108 {
109 return rpIMap.get();
110 }
111 }
112 }
113 return nullptr;
114 }
115
117 {
118 SwNodes& rNds = m_xDoc->GetNodes();
119 // on the first node of section #1
120 sal_uLong nIdx = rNds.GetEndOfAutotext().StartOfSectionIndex() + 1;
121 sal_uLong nEndIdx = rNds.GetEndOfAutotext().GetIndex();
122
123 SwGrfNode* pGrfNd;
124 while( m_nMissingImgMaps > 0 && nIdx < nEndIdx )
125 {
126 SwNode *pNd = rNds[nIdx + 1];
127 if( nullptr != (pGrfNd = pNd->GetGrfNode()) )
128 {
129 SwFrameFormat *pFormat = pGrfNd->GetFlyFormat();
130 SwFormatURL aURL( pFormat->GetURL() );
131 const ImageMap *pIMap = aURL.GetMap();
132 if( pIMap && pIMap->GetIMapObjectCount()==0 )
133 {
134 // The (empty) image map of the node will be either
135 // replaced with found image map or deleted.
136 ImageMap *pNewIMap =
137 FindImageMap( pIMap->GetName() );
138 aURL.SetMap( pNewIMap );
139 pFormat->SetFormatAttr( aURL );
140 if( !pGrfNd->IsScaleImageMap() )
141 {
142 // meanwhile the graphic size is known or the
143 // graphic don't need scaling
144 pGrfNd->ScaleImageMap();
145 }
146 m_nMissingImgMaps--; // search a map less
147 }
148 }
149 nIdx = rNds[nIdx]->EndOfSectionIndex() + 1;
150 }
151 }
152
153 void SwHTMLParser::SetAnchorAndAdjustment( sal_Int16 eVertOri,
154 sal_Int16 eHoriOri,
155 const SvxCSS1PropertyInfo &rCSS1PropInfo,
156 SfxItemSet& rFrameItemSet )
157 {
158 const SfxItemSet *pCntnrItemSet = nullptr;
159 auto i = m_aContexts.size();
160 while( !pCntnrItemSet && i > m_nContextStMin )
161 pCntnrItemSet = m_aContexts[--i]->GetFrameItemSet();
162
163 if( pCntnrItemSet )
164 {
165 // If we are in a container then the anchoring of the container is used.
166 rFrameItemSet.Put( *pCntnrItemSet );
167 }
168 else if( SwCSS1Parser::MayBePositioned( rCSS1PropInfo, true ) )
169 {
170 // If the alignment can be set via CSS1 options we use them.
171 SetAnchorAndAdjustment( rCSS1PropInfo, rFrameItemSet );
172 }
173 else
174 {
175 // Otherwise the alignment is set correspondingly the normal HTML options.
176 SetAnchorAndAdjustment( eVertOri, eHoriOri, rFrameItemSet );
177 }
178 }
179
180 void SwHTMLParser::SetAnchorAndAdjustment( sal_Int16 eVertOri,
181 sal_Int16 eHoriOri,
182 SfxItemSet& rFrameSet,
183 bool bDontAppend )
184 {
185 bool bMoveBackward = false;
186 SwFormatAnchor aAnchor( RndStdIds::FLY_AS_CHAR );
187 sal_Int16 eVertRel = text::RelOrientation::FRAME;
188
189 if( text::HoriOrientation::NONE != eHoriOri )
190 {
191 // determine paragraph indent
192 sal_uInt16 nLeftSpace = 0, nRightSpace = 0;
193 short nIndent = 0;
194 GetMarginsFromContextWithNumBul( nLeftSpace, nRightSpace, nIndent );
195
196 // determine horizontal alignment and wrapping
197 sal_Int16 eHoriRel;
198 css::text::WrapTextMode eSurround;
199 switch( eHoriOri )
200 {
202 eHoriRel = nLeftSpace ? text::RelOrientation::PRINT_AREA : text::RelOrientation::FRAME;
203 eSurround = css::text::WrapTextMode_RIGHT;
204 break;
206 eHoriRel = nRightSpace ? text::RelOrientation::PRINT_AREA : text::RelOrientation::FRAME;
207 eSurround = css::text::WrapTextMode_LEFT;
208 break;
209 case text::HoriOrientation::CENTER: // for tables
210 eHoriRel = text::RelOrientation::FRAME;
211 eSurround = css::text::WrapTextMode_NONE;
212 break;
213 default:
214 eHoriRel = text::RelOrientation::FRAME;
215 eSurround = css::text::WrapTextMode_PARALLEL;
216 break;
217 }
218
219 // Create a new paragraph, if the current one has frames
220 // anchored at paragraph/at char without wrapping.
221 if( !bDontAppend && HasCurrentParaFlys( true ) )
222 {
223 // When the paragraph only contains graphics then there
224 // is no need for bottom margin. Since here also with use of
225 // styles no margin should be created, set attributes to
226 // override!
227 sal_uInt16 nUpper=0, nLower=0;
228 GetULSpaceFromContext( nUpper, nLower );
229 InsertAttr( SvxULSpaceItem( nUpper, 0, RES_UL_SPACE ), true );
230
232
233 if( nUpper )
234 {
235 NewAttr(m_xAttrTab, &m_xAttrTab->pULSpace, SvxULSpaceItem(0, nLower, RES_UL_SPACE));
236 m_aParaAttrs.push_back( m_xAttrTab->pULSpace );
237 EndAttr( m_xAttrTab->pULSpace, false );
238 }
239 }
240
241 // determine vertical alignment and anchoring
242 const sal_Int32 nContent = m_pPam->GetPoint()->nContent.GetIndex();
243 if( nContent )
244 {
245 aAnchor.SetType( RndStdIds::FLY_AT_CHAR );
246 bMoveBackward = true;
247 eVertOri = text::VertOrientation::CHAR_BOTTOM;
248 eVertRel = text::RelOrientation::CHAR;
249 }
250 else
251 {
252 aAnchor.SetType( RndStdIds::FLY_AT_PARA );
253 eVertOri = text::VertOrientation::TOP;
254 eVertRel = text::RelOrientation::PRINT_AREA;
255 }
256
257 rFrameSet.Put( SwFormatHoriOrient( 0, eHoriOri, eHoriRel) );
258
259 rFrameSet.Put( SwFormatSurround( eSurround ) );
260 }
261 rFrameSet.Put( SwFormatVertOrient( 0, eVertOri, eVertRel) );
262
263 if( bMoveBackward )
265
266 if (aAnchor.GetAnchorId() == RndStdIds::FLY_AS_CHAR && !m_pPam->GetNode().GetTextNode())
267 {
268 eState = SvParserState::Error;
269 return;
270 }
271
272 aAnchor.SetAnchor( m_pPam->GetPoint() );
273
274 if( bMoveBackward )
276
277 rFrameSet.Put( aAnchor );
278 }
279
281 {
282 // automatically anchored frames must be moved forward by one position
283 if( RES_DRAWFRMFMT != pFlyFormat->Which() &&
284 (RndStdIds::FLY_AT_PARA == pFlyFormat->GetAnchor().GetAnchorId()) &&
285 css::text::WrapTextMode_THROUGH == pFlyFormat->GetSurround().GetSurround() )
286 {
287 m_aMoveFlyFrames.push_back( pFlyFormat );
288 m_aMoveFlyCnts.push_back( m_pPam->GetPoint()->nContent.GetIndex() );
289 }
290 }
291
292 /* */
293
295 OUString& rTypeStr ) const
296 {
297 SwDocShell *pDocSh = m_xDoc->GetDocShell();
298 SvKeyValueIterator* pHeaderAttrs = pDocSh ? pDocSh->GetHeaderAttributes()
299 : nullptr;
300 rType = GetScriptType( pHeaderAttrs );
301 rTypeStr = GetScriptTypeString( pHeaderAttrs );
302 }
303
304 namespace
305 {
306 bool allowAccessLink(const SwDoc& rDoc)
307 {
308 OUString sReferer;
309 SfxObjectShell * sh = rDoc.GetPersist();
310 if (sh != nullptr && sh->HasName())
311 {
312 sReferer = sh->GetMedium()->GetName();
313 }
314 return !SvtSecurityOptions().isUntrustedReferer(sReferer);
315 }
316 }
317
318 /* */
319
321 {
322 // and now analyze
323 OUString sAltNm, aId, aClass, aStyle, aMap, sHTMLGrfName;
324 OUString sGrfNm;
325 OUString aGraphicData;
326 sal_Int16 eVertOri = text::VertOrientation::TOP;
327 sal_Int16 eHoriOri = text::HoriOrientation::NONE;
328 bool bWidthProvided=false, bHeightProvided=false;
329 long nWidth=0, nHeight=0;
330 long nVSpace=0, nHSpace=0;
331
332 sal_uInt16 nBorder = (m_xAttrTab->pINetFormat ? 1 : 0);
333 bool bIsMap = false;
334 bool bPrcWidth = false;
335 bool bPrcHeight = false;
336 OUString sWidthAsString, sHeightAsString;
337 SvxMacroItem aMacroItem(RES_FRMMACRO);
338
339 ScriptType eDfltScriptType;
340 OUString sDfltScriptType;
341 GetDefaultScriptType( eDfltScriptType, sDfltScriptType );
342
343 const HTMLOptions& rHTMLOptions = GetOptions();
344 for (size_t i = rHTMLOptions.size(); i; )
345 {
346 SvMacroItemId nEvent = SvMacroItemId::NONE;
347 ScriptType eScriptType2 = eDfltScriptType;
348 const HTMLOption& rOption = rHTMLOptions[--i];
349 switch( rOption.GetToken() )
350 {
351 case HtmlOptionId::ID:
352 aId = rOption.GetString();
353 break;
354 case HtmlOptionId::STYLE:
355 aStyle = rOption.GetString();
356 break;
357 case HtmlOptionId::CLASS:
358 aClass = rOption.GetString();
359 break;
360 case HtmlOptionId::SRC:
361 sGrfNm = rOption.GetString();
362 if( !InternalImgToPrivateURL(sGrfNm) )
363 sGrfNm = INetURLObject::GetAbsURL( m_sBaseURL, sGrfNm );
364 break;
365 case HtmlOptionId::DATA:
366 aGraphicData = rOption.GetString();
367 if (!InternalImgToPrivateURL(aGraphicData))
368 aGraphicData = INetURLObject::GetAbsURL(
370 break;
371 case HtmlOptionId::ALIGN:
372 eVertOri =
373 rOption.GetEnum( aHTMLImgVAlignTable,
375 eHoriOri =
376 rOption.GetEnum( aHTMLImgHAlignTable );
377 break;
378 case HtmlOptionId::WIDTH:
379 // for now only store as pixel value!
380 nWidth = rOption.GetNumber();
381 sWidthAsString = rOption.GetString();
382 bPrcWidth = (sWidthAsString.indexOf('%') != -1);
383 if( bPrcWidth && nWidth>100 )
384 nWidth = 100;
385 // width|height = "auto" means viewing app decides the size
386 // i.e. proceed as if no particular size was provided
387 bWidthProvided = (sWidthAsString != "auto");
388 break;
389 case HtmlOptionId::HEIGHT:
390 // for now only store as pixel value!
391 nHeight = rOption.GetNumber();
392 sHeightAsString = rOption.GetString();
393 bPrcHeight = (sHeightAsString.indexOf('%') != -1);
394 if( bPrcHeight && nHeight>100 )
395 nHeight = 100;
396 // the same as above w/ HtmlOptionId::WIDTH
397 bHeightProvided = (sHeightAsString != "auto");
398 break;
399 case HtmlOptionId::VSPACE:
400 nVSpace = rOption.GetNumber();
401 break;
402 case HtmlOptionId::HSPACE:
403 nHSpace = rOption.GetNumber();
404 break;
405 case HtmlOptionId::ALT:
406 sAltNm = rOption.GetString();
407 break;
408 case HtmlOptionId::BORDER:
409 nBorder = static_cast<sal_uInt16>(rOption.GetNumber());
410 break;
411 case HtmlOptionId::ISMAP:
412 bIsMap = true;
413 break;
414 case HtmlOptionId::USEMAP:
415 aMap = rOption.GetString();
416 break;
417 case HtmlOptionId::NAME:
418 sHTMLGrfName = rOption.GetString();
419 break;
420
421 case HtmlOptionId::SDONLOAD:
422 eScriptType2 = STARBASIC;
423 [[fallthrough]];
424 case HtmlOptionId::ONLOAD:
425 nEvent = SvMacroItemId::OnImageLoadDone;
426 goto IMAGE_SETEVENT;
427
428 case HtmlOptionId::SDONABORT:
429 eScriptType2 = STARBASIC;
430 [[fallthrough]];
431 case HtmlOptionId::ONABORT:
432 nEvent = SvMacroItemId::OnImageLoadCancel;
433 goto IMAGE_SETEVENT;
434
435 case HtmlOptionId::SDONERROR:
436 eScriptType2 = STARBASIC;
437 [[fallthrough]];
438 case HtmlOptionId::ONERROR:
439 nEvent = SvMacroItemId::OnImageLoadError;
440 goto IMAGE_SETEVENT;
441 IMAGE_SETEVENT:
442 {
443 OUString sTmp( rOption.GetString() );
444 if( !sTmp.isEmpty() )
445 {
446 sTmp = convertLineEnd(sTmp, GetSystemLineEnd());
447 OUString sScriptType;
448 if( EXTENDED_STYPE == eScriptType2 )
449 sScriptType = sDfltScriptType;
450 aMacroItem.SetMacro( nEvent,
451 SvxMacro( sTmp, sScriptType, eScriptType2 ));
452 }
453 }
454 break;
455 default: break;
456 }
457 }
458
459 if (sGrfNm.isEmpty() && !aGraphicData.isEmpty())
460 sGrfNm = aGraphicData;
461
462 if( sGrfNm.isEmpty() )
463 return;
464
465 // When we are in a ordered list and the paragraph is still empty and not
466 // numbered, it may be a graphic for a bullet list.
467 if( !m_pPam->GetPoint()->nContent.GetIndex() &&
468 GetNumInfo().GetDepth() > 0 && GetNumInfo().GetDepth() <= MAXLEVEL &&
469 !m_aBulletGrfs[GetNumInfo().GetDepth()-1].isEmpty() &&
470 m_aBulletGrfs[GetNumInfo().GetDepth()-1]==sGrfNm )
471 {
472 SwTextNode* pTextNode = m_pPam->GetNode().GetTextNode();
473
474 if( pTextNode && ! pTextNode->IsCountedInList())
475 {
476 OSL_ENSURE( pTextNode->GetActualListLevel() == GetNumInfo().GetLevel(),
477 "Numbering level is wrong" );
478
479 pTextNode->SetCountedInList( true );
480
481 // It's necessary to invalidate the rule, because between the reading
482 // of LI and the graphic an EndAction could be called.
483 if( GetNumInfo().GetNumRule() )
484 GetNumInfo().GetNumRule()->SetInvalidRule( true );
485
486 // Set the style again, so that indent of the first line is correct.
488
489 return;
490 }
491 }
492
493 Graphic aGraphic;
494 INetURLObject aGraphicURL( sGrfNm );
495 if( aGraphicURL.GetProtocol() == INetProtocol::Data )
496 {
497 std::unique_ptr<SvMemoryStream> const pStream(aGraphicURL.getData());
498 if (pStream)
499 {
501 aGraphic = rFilter.ImportUnloadedGraphic(*pStream);
502 sGrfNm.clear();
503
504 if (!sGrfNm.isEmpty())
505 {
506 if (ERRCODE_NONE == rFilter.ImportGraphic(aGraphic, "", *pStream))
507 sGrfNm.clear();
508 }
509 }
510 }
511 else if (m_sBaseURL.isEmpty() || !aGraphicData.isEmpty())
512 {
513 // sBaseURL is empty if the source is clipboard
514 // aGraphicData is non-empty for <object data="..."> -> not a linked graphic.
515 if (ERRCODE_NONE == GraphicFilter::GetGraphicFilter().ImportGraphic(aGraphic, aGraphicURL))
516 sGrfNm.clear();
517 }
518
519 if (!sGrfNm.isEmpty())
520 {
521 aGraphic.SetDefaultType();
522 }
523
524 if (!nHeight || !nWidth)
525 {
526 Size aPixelSize = aGraphic.GetSizePixel(Application::GetDefaultDevice());
527 if (!bWidthProvided)
528 nWidth = aPixelSize.Width();
529 if (!bHeightProvided)
530 nHeight = aPixelSize.Height();
531 }
532
533 SfxItemSet aItemSet( m_xDoc->GetAttrPool(), m_pCSS1Parser->GetWhichMap() );
534 SvxCSS1PropertyInfo aPropInfo;
535 if( HasStyleOptions( aStyle, aId, aClass ) )
536 ParseStyleOptions( aStyle, aId, aClass, aItemSet, aPropInfo );
537
538 SfxItemSet aFrameSet( m_xDoc->GetAttrPool(),
540 if( !IsNewDoc() )
541 Reader::ResetFrameFormatAttrs( aFrameSet );
542
543 // set the border
544 long nHBorderWidth = 0, nVBorderWidth = 0;
545 if( nBorder )
546 {
547 nHBorderWidth = static_cast<long>(nBorder);
548 nVBorderWidth = static_cast<long>(nBorder);
549 SvxCSS1Parser::PixelToTwip( nVBorderWidth, nHBorderWidth );
550
551 ::editeng::SvxBorderLine aHBorderLine( nullptr, nHBorderWidth );
552 ::editeng::SvxBorderLine aVBorderLine( nullptr, nVBorderWidth );
553
554 if( m_xAttrTab->pINetFormat )
555 {
556 const OUString& rURL =
557 static_cast<const SwFormatINetFormat&>(m_xAttrTab->pINetFormat->GetItem()).GetValue();
558
559 m_pCSS1Parser->SetATagStyles();
560 sal_uInt16 nPoolId = static_cast< sal_uInt16 >(m_xDoc->IsVisitedURL( rURL )
561 ? RES_POOLCHR_INET_VISIT
562 : RES_POOLCHR_INET_NORMAL);
563 const SwCharFormat *pCharFormat = m_pCSS1Parser->GetCharFormatFromPool( nPoolId );
564 aHBorderLine.SetColor( pCharFormat->GetColor().GetValue() );
565 aVBorderLine.SetColor( aHBorderLine.GetColor() );
566 }
567 else
568 {
569 const SvxColorItem& rColorItem = m_xAttrTab->pFontColor ?
570 static_cast<const SvxColorItem &>(m_xAttrTab->pFontColor->GetItem()) :
571 m_xDoc->GetDefault(RES_CHRATR_COLOR);
572 aHBorderLine.SetColor( rColorItem.GetValue() );
573 aVBorderLine.SetColor( aHBorderLine.GetColor() );
574 }
575
576 SvxBoxItem aBoxItem( RES_BOX );
577 aBoxItem.SetLine( &aHBorderLine, SvxBoxItemLine::TOP );
578 aBoxItem.SetLine( &aHBorderLine, SvxBoxItemLine::BOTTOM );
579 aBoxItem.SetLine( &aVBorderLine, SvxBoxItemLine::LEFT );
580 aBoxItem.SetLine( &aVBorderLine, SvxBoxItemLine::RIGHT );
581 aFrameSet.Put( aBoxItem );
582 }
583
584 SetAnchorAndAdjustment( eVertOri, eHoriOri, aPropInfo, aFrameSet );
585
586 SetSpace( Size( nHSpace, nVSpace), aItemSet, aPropInfo, aFrameSet );
587
588 // set other CSS1 attributes
589 SetFrameFormatAttrs( aItemSet, HtmlFrameFormatFlags::Box, aFrameSet );
590
591 Size aTwipSz( bPrcWidth ? 0 : nWidth, bPrcHeight ? 0 : nHeight );
592 if( (aTwipSz.Width() || aTwipSz.Height()) && Application::GetDefaultDevice() )
593 {
594 if (bWidthProvided || bHeightProvided || // attributes imply pixel!
595 aGraphic.GetPrefMapMode().GetMapUnit() == MapUnit::MapPixel)
596 {
597 aTwipSz = Application::GetDefaultDevice()
598 ->PixelToLogic( aTwipSz, MapMode( MapUnit::MapTwip ) );
599 }
600 else
601 { // some bitmaps may have a size in metric units (e.g. PNG); use that
602 assert(aGraphic.GetPrefMapMode().GetMapUnit() < MapUnit::MapPixel);
603 aTwipSz = OutputDevice::LogicToLogic(aGraphic.GetPrefSize(),
604 aGraphic.GetPrefMapMode(), MapMode(MapUnit::MapTwip));
605 }
606 }
607
608 // convert CSS1 size to "normal" size
609 switch( aPropInfo.m_eWidthType )
610 {
611 case SVX_CSS1_LTYPE_TWIP:
612 aTwipSz.setWidth( aPropInfo.m_nWidth );
613 nWidth = 1; // != 0
614 bPrcWidth = false;
615 break;
616 case SVX_CSS1_LTYPE_PERCENTAGE:
617 aTwipSz.setWidth( 0 );
618 nWidth = aPropInfo.m_nWidth;
619 bPrcWidth = true;
620 break;
621 default:
622 ;
623 }
624 switch( aPropInfo.m_eHeightType )
625 {
626 case SVX_CSS1_LTYPE_TWIP:
627 aTwipSz.setHeight( aPropInfo.m_nHeight );
628 nHeight = 1; // != 0
629 bPrcHeight = false;
630 break;
631 case SVX_CSS1_LTYPE_PERCENTAGE:
632 aTwipSz.setHeight( 0 );
633 nHeight = aPropInfo.m_nHeight;
634 bPrcHeight = true;
635 break;
636 default:
637 ;
638 }
639
640 Size aGrfSz( 0, 0 );
641 bool bSetTwipSize = true; // Set Twip-Size on Node?
642 bool bChangeFrameSize = false; // Change frame format later?
643 bool bRequestGrfNow = false;
644 bool bSetScaleImageMap = false;
645 sal_uInt8 nPrcWidth = 0, nPrcHeight = 0;
646
647 // bPrcWidth / bPrcHeight means we have a percent size. If that's not the case and we have no
648 // size from nWidth / nHeight either, then inspect the image header.
649 if ((!bPrcWidth && !nWidth) && (!bPrcHeight && !nHeight) && allowAccessLink(*m_xDoc))
650 {
651 GraphicDescriptor aDescriptor(aGraphicURL);
652 if (aDescriptor.Detect(/*bExtendedInfo=*/true))
653 {
654 // Try to use size info from the image header before defaulting to
655 // HTML_DFLT_IMG_WIDTH/HEIGHT.
656 aTwipSz = Application::GetDefaultDevice()->PixelToLogic(aDescriptor.GetSizePixel(),
657 MapMode(MapUnit::MapTwip));
658 nWidth = aTwipSz.getWidth();
659 nHeight = aTwipSz.getHeight();
660 }
661 }
662
663 if( !nWidth || !nHeight )
664 {
665 // When the graphic is in a table, it will be requested immediately,
666 // so that it is available before the table is laid out.
667 if (m_xTable && !nWidth)
668 {
669 bRequestGrfNow = true;
670 IncGrfsThatResizeTable();
671 }
672
673 // The frame size is set later
674 bChangeFrameSize = true;
675 aGrfSz = aTwipSz;
676 if( !nWidth && !nHeight )
677 {
678 aTwipSz.setWidth( HTML_DFLT_IMG_WIDTH );
679 aTwipSz.setHeight( HTML_DFLT_IMG_HEIGHT );
680 }
681 else if( nWidth )
682 {
683 // a percentage value
684 if( bPrcWidth )
685 {
686 nPrcWidth = static_cast<sal_uInt8>(nWidth);
687 nPrcHeight = 255;
688 }
689 else
690 {
691 aTwipSz.setHeight( HTML_DFLT_IMG_HEIGHT );
692 }
693 }
694 else if( nHeight )
695 {
696 if( bPrcHeight )
697 {
698 nPrcHeight = static_cast<sal_uInt8>(nHeight);
699 nPrcWidth = 255;
700 }
701 else
702 {
703 aTwipSz.setWidth( HTML_DFLT_IMG_WIDTH );
704 }
705 }
706 }
707 else
708 {
709 // Width and height were given and don't need to be set
710 bSetTwipSize = false;
711
712 if( bPrcWidth )
713 nPrcWidth = static_cast<sal_uInt8>(nWidth);
714
715 if( bPrcHeight )
716 nPrcHeight = static_cast<sal_uInt8>(nHeight);
717 }
718
719 // set image map
720 aMap = comphelper::string::stripEnd(aMap, ' ');
721 if( !aMap.isEmpty() )
722 {
723 // Since we only know local image maps we just use everything
724 // after # as the name
725 sal_Int32 nPos = aMap.indexOf( '#' );
726 OUString aName;
727 if ( -1 == nPos )
728 aName = aMap ;
729 else
730 aName = aMap.copy(nPos+1);
731
732 ImageMap *pImgMap = FindImageMap( aName );
733 if( pImgMap )
734 {
735 SwFormatURL aURL; aURL.SetMap( pImgMap );// is copied
736
737 bSetScaleImageMap = !nPrcWidth || !nPrcHeight;
738 aFrameSet.Put( aURL );
739 }
740 else
741 {
742 ImageMap aEmptyImgMap( aName );
743 SwFormatURL aURL; aURL.SetMap( &aEmptyImgMap );// is copied
744 aFrameSet.Put( aURL );
745 m_nMissingImgMaps++; // image maps are missing
746
747 // the graphic has to be scaled during SetTwipSize, if we didn't
748 // set a size on the node or the size doesn't match the graphic size.
749 bSetScaleImageMap = true;
750 }
751 }
752
753 // observe minimum values !!
754 if( nPrcWidth )
755 {
756 OSL_ENSURE( !aTwipSz.Width(),
757 "Why is a width set if we already have percentage value?" );
758 aTwipSz.setWidth( aGrfSz.Width() ? aGrfSz.Width()
759 : HTML_DFLT_IMG_WIDTH );
760 }
761 else
762 {
763 aTwipSz.AdjustWidth(2*nVBorderWidth );
764 if( aTwipSz.Width() < MINFLY )
765 aTwipSz.setWidth( MINFLY );
766 }
767 if( nPrcHeight )
768 {
769 OSL_ENSURE( !aTwipSz.Height(),
770 "Why is a height set if we already have percentage value?" );
771 aTwipSz.setHeight( aGrfSz.Height() ? aGrfSz.Height()
772 : HTML_DFLT_IMG_HEIGHT );
773 }
774 else
775 {
776 aTwipSz.AdjustHeight(2*nHBorderWidth );
777 if( aTwipSz.Height() < MINFLY )
778 aTwipSz.setHeight( MINFLY );
779 }
780
781 SwFormatFrameSize aFrameSize( ATT_FIX_SIZE, aTwipSz.Width(), aTwipSz.Height() );
782 aFrameSize.SetWidthPercent( nPrcWidth );
783 aFrameSize.SetHeightPercent( nPrcHeight );
784 aFrameSet.Put( aFrameSize );
785
786 const SwNodeType eNodeType = m_pPam->GetNode().GetNodeType();
787 if (eNodeType != SwNodeType::Text && eNodeType != SwNodeType::Table)
788 return;
789
790 // passing empty sGrfNm here, means we don't want the graphic to be linked
791 SwFrameFormat *const pFlyFormat =
792 m_xDoc->getIDocumentContentOperations().InsertGraphic(
793 *m_pPam, sGrfNm, OUString(), &aGraphic,
794 &aFrameSet, nullptr, nullptr);
795 SwGrfNode *pGrfNd = m_xDoc->GetNodes()[ pFlyFormat->GetContent().GetContentIdx()
796 ->GetIndex()+1 ]->GetGrfNode();
797
798 if( !sHTMLGrfName.isEmpty() )
799 {
800 pFlyFormat->SetName( sHTMLGrfName );
801
802 // maybe jump to graphic
803 if( JumpToMarks::Graphic == m_eJumpTo && sHTMLGrfName == m_sJmpMark )
804 {
805 m_bChkJumpMark = true;
806 m_eJumpTo = JumpToMarks::NONE;
807 }
808 }
809
810 if (pGrfNd)
811 {
812 if( !sAltNm.isEmpty() )
813 pGrfNd->SetTitle( sAltNm );
814
815 if( bSetTwipSize )
816 pGrfNd->SetTwipSize( aGrfSz );
817
818 pGrfNd->SetChgTwipSize( bChangeFrameSize );
819
820 if( bSetScaleImageMap )
821 pGrfNd->SetScaleImageMap( true );
822 }
823
824 if( m_xAttrTab->pINetFormat )
825 {
826 const SwFormatINetFormat &rINetFormat =
827 static_cast<const SwFormatINetFormat&>(m_xAttrTab->pINetFormat->GetItem());
828
829 SwFormatURL aURL( pFlyFormat->GetURL() );
830
831 aURL.SetURL( rINetFormat.GetValue(), bIsMap );
832 aURL.SetTargetFrameName( rINetFormat.GetTargetFrame() );
833 aURL.SetName( rINetFormat.GetName() );
834 pFlyFormat->SetFormatAttr( aURL );
835
836 {
837 static const SvMacroItemId aEvents[] = {
838 SvMacroItemId::OnMouseOver,
839 SvMacroItemId::OnClick,
840 SvMacroItemId::OnMouseOut };
841
842 for( SvMacroItemId id : aEvents )
843 {
844 const SvxMacro *pMacro = rINetFormat.GetMacro( id );
845 if( nullptr != pMacro )
846 aMacroItem.SetMacro( id, *pMacro );
847 }
848 }
849
850 if ((RndStdIds::FLY_AS_CHAR == pFlyFormat->GetAnchor().GetAnchorId()) &&
851 m_xAttrTab->pINetFormat->GetSttPara() ==
852 m_pPam->GetPoint()->nNode &&
853 m_xAttrTab->pINetFormat->GetSttCnt() ==
854 m_pPam->GetPoint()->nContent.GetIndex() - 1 )
855 {
856 // the attribute was inserted right before the as-character anchored
857 // graphic, therefore we move it
858 m_xAttrTab->pINetFormat->SetStart( *m_pPam->GetPoint() );
859
860 // When the attribute is also an anchor, we'll insert
861 // a bookmark before the graphic, because SwFormatURL
862 // isn't an anchor.
863 if( !rINetFormat.GetName().isEmpty() )
864 {
865 m_pPam->Move( fnMoveBackward );
866 InsertBookmark( rINetFormat.GetName() );
867 m_pPam->Move( fnMoveForward );
868 }
869 }
870
871 }
872
873 if( !aMacroItem.GetMacroTable().empty() )
874 pFlyFormat->SetFormatAttr( aMacroItem );
875
876 // tdf#87083 If the graphic has not been loaded yet, then load it now.
877 // Otherwise it may be loaded during the first paint of the object and it
878 // will be too late to adapt the size of the graphic at that point.
879 if (bRequestGrfNow && pGrfNd)
880 {
881 Size aUpdatedSize = pGrfNd->GetTwipSize(); //trigger a swap-in
882 SAL_WARN_IF(!aUpdatedSize.Width() || !aUpdatedSize.Height(), "sw.html", "html image with no width or height");
883 }
884
885 // maybe create frames and register auto bound frames
886 RegisterFlyFrame( pFlyFormat );
887
888 if( !aId.isEmpty() )
889 InsertBookmark( aId );
890 }
891
892 /* */
893
894 void SwHTMLParser::InsertBodyOptions()
895 {
896 m_xDoc->SetTextFormatColl( *m_pPam,
897 m_pCSS1Parser->GetTextCollFromPool( RES_POOLCOLL_TEXT ) );
898
899 OUString aBackGround, aId, aStyle, aLang, aDir;
900 Color aBGColor, aTextColor, aLinkColor, aVLinkColor;
901 bool bBGColor=false, bTextColor=false;
902 bool bLinkColor=false, bVLinkColor=false;
903
904 ScriptType eDfltScriptType;
905 OUString sDfltScriptType;
906 GetDefaultScriptType( eDfltScriptType, sDfltScriptType );
907
908 const HTMLOptions& rHTMLOptions = GetOptions();
909 for (size_t i = rHTMLOptions.size(); i; )
910 {
911 const HTMLOption& rOption = rHTMLOptions[--i];
912 ScriptType eScriptType2 = eDfltScriptType;
913 OUString aEvent;
914 bool bSetEvent = false;
915
916 switch( rOption.GetToken() )
917 {
918 case HtmlOptionId::ID:
919 aId = rOption.GetString();
920 break;
921 case HtmlOptionId::BACKGROUND:
922 aBackGround = rOption.GetString();
923 break;
924 case HtmlOptionId::BGCOLOR:
925 rOption.GetColor( aBGColor );
926 bBGColor = true;
927 break;
928 case HtmlOptionId::TEXT:
929 rOption.GetColor( aTextColor );
930 bTextColor = true;
931 break;
932 case HtmlOptionId::LINK:
933 rOption.GetColor( aLinkColor );
934 bLinkColor = true;
935 break;
936 case HtmlOptionId::VLINK:
937 rOption.GetColor( aVLinkColor );
938 bVLinkColor = true;
939 break;
940
941 case HtmlOptionId::SDONLOAD:
942 eScriptType2 = STARBASIC;
943 [[fallthrough]];
944 case HtmlOptionId::ONLOAD:
945 aEvent = GlobalEventConfig::GetEventName( GlobalEventId::OPENDOC );
946 bSetEvent = true;
947 break;
948
949 case HtmlOptionId::SDONUNLOAD:
950 eScriptType2 = STARBASIC;
951 [[fallthrough]];
952 case HtmlOptionId::ONUNLOAD:
953 aEvent = GlobalEventConfig::GetEventName( GlobalEventId::PREPARECLOSEDOC );
954 bSetEvent = true;
955 break;
956
957 case HtmlOptionId::SDONFOCUS:
958 eScriptType2 = STARBASIC;
959 [[fallthrough]];
960 case HtmlOptionId::ONFOCUS:
961 aEvent = GlobalEventConfig::GetEventName( GlobalEventId::ACTIVATEDOC );
962 bSetEvent = true;
963 break;
964
965 case HtmlOptionId::SDONBLUR:
966 eScriptType2 = STARBASIC;
967 [[fallthrough]];
968 case HtmlOptionId::ONBLUR:
969 aEvent = GlobalEventConfig::GetEventName( GlobalEventId::DEACTIVATEDOC );
970 bSetEvent = true;
971 break;
972
973 case HtmlOptionId::ONERROR:
974 break;
975
976 case HtmlOptionId::STYLE:
977 aStyle = rOption.GetString();
978 bTextColor = true;
979 break;
980 case HtmlOptionId::LANG:
981 aLang = rOption.GetString();
982 break;
983 case HtmlOptionId::DIR:
984 aDir = rOption.GetString();
985 break;
986 default: break;
987 }
988
989 if( bSetEvent )
990 {
991 const OUString& rEvent = rOption.GetString();
992 if( !rEvent.isEmpty() )
993 InsertBasicDocEvent( aEvent, rEvent, eScriptType2,
994 sDfltScriptType );
995 }
996 }
997
998 if( bTextColor && !m_pCSS1Parser->IsBodyTextSet() )
999 {
1000 // The font colour is set in the default style
1001 m_pCSS1Parser->GetTextCollFromPool( RES_POOLCOLL_STANDARD )
1002 ->SetFormatAttr( SvxColorItem(aTextColor, RES_CHRATR_COLOR) );
1003 m_pCSS1Parser->SetBodyTextSet();
1004 }
1005
1006 // Prepare the items for the page style (background, frame)
1007 // If the BrushItem already has values set, they must remain!
1008 std::shared_ptr<SvxBrushItem> aBrushItem( m_pCSS1Parser->makePageDescBackground() );
1009 bool bSetBrush = false;
1010
1011 if( bBGColor && !m_pCSS1Parser->IsBodyBGColorSet() )
1012 {
1013 // background colour from "BGCOLOR"
1014 OUString aLink;
1015 if( !aBrushItem->GetGraphicLink().isEmpty() )
1016 aLink = aBrushItem->GetGraphicLink();
1017 SvxGraphicPosition ePos = aBrushItem->GetGraphicPos();
1018
1019 aBrushItem->SetColor( aBGColor );
1020
1021 if( !aLink.isEmpty() )
1022 {
1023 aBrushItem->SetGraphicLink( aLink );
1024 aBrushItem->SetGraphicPos( ePos );
1025 }
1026 bSetBrush = true;
1027 m_pCSS1Parser->SetBodyBGColorSet();
1028 }
1029
1030 if( !aBackGround.isEmpty() && !m_pCSS1Parser->IsBodyBackgroundSet() )
1031 {
1032 // background graphic from "BACKGROUND"
1033 aBrushItem->SetGraphicLink( INetURLObject::GetAbsURL( m_sBaseURL, aBackGround ) );
1034 aBrushItem->SetGraphicPos( GPOS_TILED );
1035 bSetBrush = true;
1036 m_pCSS1Parser->SetBodyBackgroundSet();
1037 }
1038
1039 if( !aStyle.isEmpty() || !aDir.isEmpty() )
1040 {
1041 SfxItemSet aItemSet( m_xDoc->GetAttrPool(), m_pCSS1Parser->GetWhichMap() );
1042 SvxCSS1PropertyInfo aPropInfo;
1043 OUString aDummy;
1044 ParseStyleOptions( aStyle, aDummy, aDummy, aItemSet, aPropInfo, nullptr, &aDir );
1045
1046 // Some attributes have to be set on the page style, in fact the ones
1047 // which aren't inherited
1048 m_pCSS1Parser->SetPageDescAttrs( bSetBrush ? aBrushItem.get() : nullptr,
1049 &aItemSet );
1050
1051 const SfxPoolItem *pItem;
1052 static const sal_uInt16 aWhichIds[3] = { RES_CHRATR_FONTSIZE,
1053 RES_CHRATR_CJK_FONTSIZE,
1054 RES_CHRATR_CTL_FONTSIZE };
1055 for(sal_uInt16 i : aWhichIds)
1056 {
1057 if( SfxItemState::SET == aItemSet.GetItemState( i, false,
1058 &pItem ) &&
1059 static_cast <const SvxFontHeightItem * >(pItem)->GetProp() != 100)
1060 {
1061 sal_uInt32 nHeight =
1062 ( m_aFontHeights[2] *
1063 static_cast <const SvxFontHeightItem * >(pItem)->GetProp() ) / 100;
1064 SvxFontHeightItem aNewItem( nHeight, 100, i );
1065 aItemSet.Put( aNewItem );
1066 }
1067 }
1068
1069 // all remaining options can be set on the default style
1070 m_pCSS1Parser->GetTextCollFromPool( RES_POOLCOLL_STANDARD )
1071 ->SetFormatAttr( aItemSet );
1072 }
1073 else if( bSetBrush )
1074 {
1075 m_pCSS1Parser->SetPageDescAttrs( aBrushItem.get() );
1076 }
1077
1078 if( bLinkColor && !m_pCSS1Parser->IsBodyLinkSet() )
1079 {
1080 SwCharFormat *pCharFormat =
1081 m_pCSS1Parser->GetCharFormatFromPool(RES_POOLCHR_INET_NORMAL);
1082 pCharFormat->SetFormatAttr( SvxColorItem(aLinkColor, RES_CHRATR_COLOR) );
1083 m_pCSS1Parser->SetBodyLinkSet();
1084 }
1085 if( bVLinkColor && !m_pCSS1Parser->IsBodyVLinkSet() )
1086 {
1087 SwCharFormat *pCharFormat =
1088 m_pCSS1Parser->GetCharFormatFromPool(RES_POOLCHR_INET_VISIT);
1089 pCharFormat->SetFormatAttr( SvxColorItem(aVLinkColor, RES_CHRATR_COLOR) );
1090 m_pCSS1Parser->SetBodyVLinkSet();
1091 }
1092 if( !aLang.isEmpty() )
1093 {
1094 LanguageType eLang = LanguageTag::convertToLanguageTypeWithFallback( aLang );
1095 if( LANGUAGE_DONTKNOW != eLang )
1096 {
1097 sal_uInt16 nWhich = 0;
1098 switch( SvtLanguageOptions::GetScriptTypeOfLanguage( eLang ) )
1099 {
1100 case SvtScriptType::LATIN:
1101 nWhich = RES_CHRATR_LANGUAGE;
1102 break;
1103 case SvtScriptType::ASIAN:
1104 nWhich = RES_CHRATR_CJK_LANGUAGE;
1105 break;
1106 case SvtScriptType::COMPLEX:
1107 nWhich = RES_CHRATR_CTL_LANGUAGE;
1108 break;
1109 default: break;
1110 }
1111 if( nWhich )
1112 {
1113 SvxLanguageItem aLanguage( eLang, nWhich );
1114 aLanguage.SetWhich( nWhich );
1115 m_xDoc->SetDefault( aLanguage );
1116 }
1117 }
1118 }
1119
1120 if( !aId.isEmpty() )
1121 InsertBookmark( aId );
1122 }
1123
1124 /* */
1125
1126 void SwHTMLParser::NewAnchor()
1127 {
1128 // end previous link if there was one
1129 std::unique_ptr<HTMLAttrContext> xOldCntxt(PopContext(HtmlTokenId::ANCHOR_ON));
1130 if (xOldCntxt)
1131 {
1132 // and maybe end attributes
1133 EndContext(xOldCntxt.get());
1134 }
1135
1136 SvxMacroTableDtor aMacroTable;
1137 OUString sHRef, aName, sTarget;
1138 OUString aId, aStyle, aClass, aLang, aDir;
1139 bool bHasHRef = false, bFixed = false;
1140
1141 ScriptType eDfltScriptType;
1142 OUString sDfltScriptType;
1143 GetDefaultScriptType( eDfltScriptType, sDfltScriptType );
1144
1145 const HTMLOptions& rHTMLOptions = GetOptions();
1146 for (size_t i = rHTMLOptions.size(); i; )
1147 {
1148 SvMacroItemId nEvent = SvMacroItemId::NONE;
1149 ScriptType eScriptType2 = eDfltScriptType;
1150 const HTMLOption& rOption = rHTMLOptions[--i];
1151 switch( rOption.GetToken() )
1152 {
1153 case HtmlOptionId::NAME:
1154 aName = rOption.GetString();
1155 break;
1156
1157 case HtmlOptionId::HREF:
1158 sHRef = rOption.GetString();
1159 bHasHRef = true;
1160 break;
1161 case HtmlOptionId::TARGET:
1162 sTarget = rOption.GetString();
1163 break;
1164
1165 case HtmlOptionId::STYLE:
1166 aStyle = rOption.GetString();
1167 break;
1168 case HtmlOptionId::ID:
1169 aId = rOption.GetString();
1170 break;
1171 case HtmlOptionId::CLASS:
1172 aClass = rOption.GetString();
1173 break;
1174 case HtmlOptionId::SDFIXED:
1175 bFixed = true;
1176 break;
1177 case HtmlOptionId::LANG:
1178 aLang = rOption.GetString();
1179 break;
1180 case HtmlOptionId::DIR:
1181 aDir = rOption.GetString();
1182 break;
1183
1184 case HtmlOptionId::SDONCLICK:
1185 eScriptType2 = STARBASIC;
1186 [[fallthrough]];
1187 case HtmlOptionId::ONCLICK:
1188 nEvent = SvMacroItemId::OnClick;
1189 goto ANCHOR_SETEVENT;
1190
1191 case HtmlOptionId::SDONMOUSEOVER:
1192 eScriptType2 = STARBASIC;
1193 [[fallthrough]];
1194 case HtmlOptionId::ONMOUSEOVER:
1195 nEvent = SvMacroItemId::OnMouseOver;
1196 goto ANCHOR_SETEVENT;
1197
1198 case HtmlOptionId::SDONMOUSEOUT:
1199 eScriptType2 = STARBASIC;
1200 [[fallthrough]];
1201 case HtmlOptionId::ONMOUSEOUT:
1202 nEvent = SvMacroItemId::OnMouseOut;
1203 goto ANCHOR_SETEVENT;
1204 ANCHOR_SETEVENT:
1205 {
1206 OUString sTmp( rOption.GetString() );
1207 if( !sTmp.isEmpty() )
1208 {
1209 sTmp = convertLineEnd(sTmp, GetSystemLineEnd());
1210 OUString sScriptType;
1211 if( EXTENDED_STYPE == eScriptType2 )
1212 sScriptType = sDfltScriptType;
1213 aMacroTable.Insert( nEvent, SvxMacro( sTmp, sScriptType, eScriptType2 ));
1214 }
1215 }
1216 break;
1217 default: break;
1218 }
1219 }
1220
1221 // Jump targets which match our implicit targets
1222 // are rigorously thrown out here.
1223 if( !aName.isEmpty() )
1224 {
1225 OUString sDecoded( INetURLObject::decode( aName,
1226 INetURLObject::DecodeMechanism::Unambiguous ));
1227 sal_Int32 nPos = sDecoded.lastIndexOf( cMarkSeparator );
1228 if( nPos != -1 )
1229 {
1230 OUString sCmp= sDecoded.copy(nPos+1).replaceAll(" ","");
1231 if( !sCmp.isEmpty() )
1232 {
1233 sCmp = sCmp.toAsciiLowerCase();
1234 if( sCmp == "region" ||
1235 sCmp == "frame" ||
1236 sCmp == "graphic" ||
1237 sCmp == "ole" ||
1238 sCmp == "table" ||
1239 sCmp == "outline" ||
1240 sCmp == "text" )
1241 {
1242 aName.clear();
1243 }
1244 }
1245 }
1246 }
1247
1248 // create a new context
1249 std::unique_ptr<HTMLAttrContext> xCntxt(new HTMLAttrContext(HtmlTokenId::ANCHOR_ON));
1250
1251 bool bEnAnchor = false, bFootnoteAnchor = false, bFootnoteEnSymbol = false;
1252 OUString aFootnoteName;
1253 OUString aStrippedClass( aClass );
1254 SwCSS1Parser::GetScriptFromClass( aStrippedClass, false );
1255 if( aStrippedClass.getLength() >=9 && bHasHRef && sHRef.getLength() > 1 &&
1256 ('s' == aStrippedClass[0] || 'S' == aStrippedClass[0]) &&
1257 ('d' == aStrippedClass[1] || 'D' == aStrippedClass[1]) )
1258 {
1259 if( aStrippedClass.equalsIgnoreAsciiCase( OOO_STRING_SVTOOLS_HTML_sdendnote_anc ) )
1260 bEnAnchor = true;
1261 else if( aStrippedClass.equalsIgnoreAsciiCase( OOO_STRING_SVTOOLS_HTML_sdfootnote_anc ) )
1262 bFootnoteAnchor = true;
1263 else if( aStrippedClass.equalsIgnoreAsciiCase( OOO_STRING_SVTOOLS_HTML_sdendnote_sym ) ||
1264 aStrippedClass.equalsIgnoreAsciiCase( OOO_STRING_SVTOOLS_HTML_sdfootnote_sym ) )
1265 bFootnoteEnSymbol = true;
1266 if( bEnAnchor || bFootnoteAnchor || bFootnoteEnSymbol )
1267 {
1268 aFootnoteName = sHRef.copy( 1 );
1269 aClass.clear();
1270 aStrippedClass.clear();
1271 aName.clear();
1272 bHasHRef = false;
1273 }
1274 }
1275
1276 // parse styles
1277 if( HasStyleOptions( aStyle, aId, aStrippedClass, &aLang, &aDir ) )
1278 {
1279 SfxItemSet aItemSet( m_xDoc->GetAttrPool(), m_pCSS1Parser->GetWhichMap() );
1280 SvxCSS1PropertyInfo aPropInfo;
1281
1282 if( ParseStyleOptions( aStyle, aId, aClass, aItemSet, aPropInfo, &aLang, &aDir ) )
1283 {
1284 DoPositioning(aItemSet, aPropInfo, xCntxt.get());
1285 InsertAttrs(aItemSet, aPropInfo, xCntxt.get(), true);
1286 }
1287 }
1288
1289 if( bHasHRef )
1290 {
1291 if( !sHRef.isEmpty() )
1292 {
1293 sHRef = URIHelper::SmartRel2Abs( INetURLObject(m_sBaseURL), sHRef, Link<OUString *, bool>(), false );
1294 }
1295 else
1296 {
1297 // use directory if empty URL
1298 INetURLObject aURLObj( m_aPathToFile );
1299 sHRef = aURLObj.GetPartBeforeLastName();
1300 }
1301
1302 m_pCSS1Parser->SetATagStyles();
1303 SwFormatINetFormat aINetFormat( sHRef, sTarget );
1304 aINetFormat.SetName( aName );
1305
1306 if( !aMacroTable.empty() )
1307 aINetFormat.SetMacroTable( &aMacroTable );
1308
1309 // set the default attribute
1310 InsertAttr(&m_xAttrTab->pINetFormat, aINetFormat, xCntxt.get());
1311 }
1312 else if( !aName.isEmpty() )
1313 {
1314 InsertBookmark( aName );
1315 }
1316
1317 if( bEnAnchor || bFootnoteAnchor )
1318 {
1319 InsertFootEndNote( aFootnoteName, bEnAnchor, bFixed );
1320 m_bInFootEndNoteAnchor = m_bCallNextToken = true;
1321 }
1322 else if( bFootnoteEnSymbol )
1323 {
1324 m_bInFootEndNoteSymbol = m_bCallNextToken = true;
1325 }
1326
1327 // save context
1328 PushContext(xCntxt);
1329 }
1330
1331 void SwHTMLParser::EndAnchor()
1332 {
1333 if( m_bInFootEndNoteAnchor )
1334 {
1335 FinishFootEndNote();
1336 m_bInFootEndNoteAnchor = false;
1337 }
1338 else if( m_bInFootEndNoteSymbol )
1339 {
1340 m_bInFootEndNoteSymbol = false;
1341 }
1342
1343 EndTag( HtmlTokenId::ANCHOR_OFF );
1344 }
1345
1346 /* */
1347
1348 void SwHTMLParser::InsertBookmark( const OUString& rName )
1349 {
1350 HTMLAttr* pTmp = new HTMLAttr( *m_pPam->GetPoint(),
1351 SfxStringItem(RES_FLTR_BOOKMARK, rName), nullptr, std::shared_ptr<HTMLAttrTable>());
1352 m_aSetAttrTab.push_back( pTmp );
1353 }
1354
1355 bool SwHTMLParser::HasCurrentParaBookmarks( bool bIgnoreStack ) const
1356 {
1357 bool bHasMarks = false;
1358 sal_uLong nNodeIdx = m_pPam->GetPoint()->nNode.GetIndex();
1359
1360 // first step: are there still bookmarks in the attribute-stack?
1361 // bookmarks are added to the end of the stack - thus we only have
1362 // to check the last bookmark
1363 if( !bIgnoreStack )
1364 {
1365 for( auto i = m_aSetAttrTab.size(); i; )
1366 {
1367 HTMLAttr* pAttr = m_aSetAttrTab[ --i ];
1368 if( RES_FLTR_BOOKMARK == pAttr->m_pItem->Which() )
1369 {
1370 if( pAttr->GetSttParaIdx() == nNodeIdx )
1371 bHasMarks = true;
1372 break;
1373 }
1374 }
1375 }
1376
1377 if( !bHasMarks )
1378 {
1379 // second step: when we didn't find a bookmark, check if there is one set already
1380 IDocumentMarkAccess* const pMarkAccess = m_xDoc->getIDocumentMarkAccess();
1381 for(IDocumentMarkAccess::const_iterator_t ppMark = pMarkAccess->getAllMarksBegin();
1382 ppMark != pMarkAccess->getAllMarksEnd();
1383 ++ppMark)
1384 {
1385 const ::sw::mark::IMark* pBookmark = *ppMark;
1386
1387 const sal_uLong nBookNdIdx = pBookmark->GetMarkPos().nNode.GetIndex();
1388 if( nBookNdIdx==nNodeIdx )
1389 {
1390 bHasMarks = true;
1391 break;
1392 }
1393 else if( nBookNdIdx > nNodeIdx )
1394 break;
1395 }
1396 }
1397
1398 return bHasMarks;
1399 }
1400
1401 /* */
1402
1403 void SwHTMLParser::StripTrailingPara()
1404 {
1405 bool bSetSmallFont = false;
1406
1407 SwContentNode* pCNd = m_pPam->GetContentNode();
1408 sal_uLong nNodeIdx = m_pPam->GetPoint()->nNode.GetIndex();
1409 if( !m_pPam->GetPoint()->nContent.GetIndex() )
1410 {
1411 if( pCNd && pCNd->StartOfSectionIndex() + 2 <
1412 pCNd->EndOfSectionIndex() && CanRemoveNode(nNodeIdx))
1413 {
1414 const SwFrameFormats& rFrameFormatTable = *m_xDoc->GetSpzFrameFormats();
1415
1416 for( auto pFormat : rFrameFormatTable )
1417 {
1418 SwFormatAnchor const*const pAnchor = &pFormat->GetAnchor();
1419 SwPosition const*const pAPos = pAnchor->GetContentAnchor();
1420 if (pAPos &&
1421 ((RndStdIds::FLY_AT_PARA == pAnchor->GetAnchorId()) ||
1422 (RndStdIds::FLY_AT_CHAR == pAnchor->GetAnchorId())) &&
1423 pAPos->nNode == nNodeIdx )
1424
1425 return; // we can't delete the node
1426 }
1427
1428 SetAttr( false ); // the still open attributes must be
1429 // closed before the node is deleted,
1430 // otherwise the last index is dangling
1431
1432 if( pCNd->Len() && pCNd->IsTextNode() )
1433 {
1434 // fields were inserted into the node, now they have
1435 // to be moved
1436 SwTextNode *pPrvNd = m_xDoc->GetNodes()[nNodeIdx-1]->GetTextNode();
1437 if( pPrvNd )
1438 {
1439 SwIndex aSrc( pCNd, 0 );
1440 pCNd->GetTextNode()->CutText( pPrvNd, aSrc, pCNd->Len() );
1441 }
1442 }
1443
1444 // now we have to move maybe existing bookmarks
1445 IDocumentMarkAccess* const pMarkAccess = m_xDoc->getIDocumentMarkAccess();
1446 for(IDocumentMarkAccess::const_iterator_t ppMark = pMarkAccess->getAllMarksBegin();
1447 ppMark != pMarkAccess->getAllMarksEnd();
1448 ++ppMark)
1449 {
1450 ::sw::mark::IMark* pMark = *ppMark;
1451
1452 sal_uLong nBookNdIdx = pMark->GetMarkPos().nNode.GetIndex();
1453 if(nBookNdIdx==nNodeIdx)
1454 {
1455 SwNodeIndex nNewNdIdx(m_pPam->GetPoint()->nNode);
1456 SwContentNode* pNd = SwNodes::GoPrevious(&nNewNdIdx);
1457 if(!pNd)
1458 {
1459 OSL_ENSURE(false, "Oops, where is my predecessor node?");
1460 return;
1461 }
1462 // #i81002# - refactoring
1463 // Do not directly manipulate member of <SwBookmark>
1464 {
1465 SwPosition aNewPos(*pNd);
1466 aNewPos.nContent.Assign(pNd, pNd->Len());
1467 const SwPaM aPaM(aNewPos);
1468 pMarkAccess->repositionMark(*ppMark, aPaM);
1469 }
1470 }
1471 else if( nBookNdIdx > nNodeIdx )
1472 break;
1473 }
1474
1475 m_pPam->GetPoint()->nContent.Assign( nullptr, 0 );
1476 m_pPam->SetMark();
1477 m_pPam->DeleteMark();
1478 m_xDoc->GetNodes().Delete( m_pPam->GetPoint()->nNode );
1479 m_pPam->Move( fnMoveBackward, GoInNode );
1480 }
1481 else if (pCNd && pCNd->IsTextNode() && m_xTable)
1482 {
1483 // In empty cells we set a small font, so that the cell doesn't
1484 // get taller than the graphic, i.e. stays as low as possible.
1485 bSetSmallFont = true;
1486 }
1487 }
1488 else if( pCNd && pCNd->IsTextNode() && m_xTable &&
1489 pCNd->StartOfSectionIndex()+2 ==
1490 pCNd->EndOfSectionIndex() )
1491 {
1492 // When the cell contains only as-character anchored graphics/frames,
1493 // then we also set a small font.
1494 bSetSmallFont = true;
1495 SwTextNode* pTextNd = pCNd->GetTextNode();
1496
1497 sal_Int32 nPos = m_pPam->GetPoint()->nContent.GetIndex();
1498 while( bSetSmallFont && nPos>0 )
1499 {
1500 --nPos;
1501 bSetSmallFont =
1502 (CH_TXTATR_BREAKWORD == pTextNd->GetText()[nPos]) &&
1503 (nullptr != pTextNd->GetTextAttrForCharAt( nPos, RES_TXTATR_FLYCNT ));
1504 }
1505 }
1506
1507 if( bSetSmallFont )
1508 {
1509 // Added default to CJK and CTL
1510 SvxFontHeightItem aFontHeight( 40, 100, RES_CHRATR_FONTSIZE );
1511 pCNd->SetAttr( aFontHeight );
1512 SvxFontHeightItem aFontHeightCJK( 40, 100, RES_CHRATR_CJK_FONTSIZE );
1513 pCNd->SetAttr( aFontHeightCJK );
1514 SvxFontHeightItem aFontHeightCTL( 40, 100, RES_CHRATR_CTL_FONTSIZE );
1515 pCNd->SetAttr( aFontHeightCTL );
1516 }
1517 }
1518
1519 /* vim:set shiftwidth=4 softtabstop=4 expandtab: */
ScriptType GetScriptType(SvKeyValueIterator *) const
void GetDefaultScriptType(ScriptType &rType, OUString &rTypeStr) const
Definition: htmlgrin.cxx:294
void SetColor(const Color &rColor)
void InsertAttrs(std::deque< std::unique_ptr< HTMLAttr >> rAttrs)
Definition: swhtml.cxx:3425
bool ParseStyleOptions(const OUString &rStyle, const OUString &rId, const OUString &rClass, SfxItemSet &rItemSet, SvxCSS1PropertyInfo &rPropInfo, const OUString *pLang=nullptr, const OUString *pDir=nullptr)
Definition: htmlcss1.cxx:1845
sal_uLong GetIndex() const
Definition: ndindex.hxx:151
LineEnd GetSystemLineEnd()
static OUString GetAbsURL(OUString const &rTheBaseURIRef, OUString const &rTheRelURIRef, EncodeMechanism eEncodeMechanism=EncodeMechanism::WasEncoded, DecodeMechanism eDecodeMechanism=DecodeMechanism::ToIUri, rtl_TextEncoding eCharset=RTL_TEXTENCODING_UTF8)
virtual void SetName(const OUString &rNewName, bool bBroadcast=false) override
Definition: atrfrm.cxx:2464
const sal_uInt8 MAXLEVEL
Definition: swtypes.hxx:95
void NewAnchor()
Definition: htmlgrin.cxx:1126
#define RES_UL_SPACE
Definition: hintids.hxx:197
#define OOO_STRING_SVTOOLS_HTML_VA_top
const SwFormatSurround & GetSurround(bool=true) const
Definition: fmtsrnd.hxx:66
void SetCountedInList(bool bCounted)
Definition: ndtxt.cxx:4223
static SvtScriptType GetScriptTypeOfLanguage(LanguageType nLang)
void SetMap(const ImageMap *pM)
Pointer will be copied.
Definition: atrfrm.cxx:1742
SwPaM * m_pPam
Definition: swhtml.hxx:377
bool isUntrustedReferer(OUString const &referer) const
static OUString StripQueryFromPath(const OUString &rBase, const OUString &rPath)
Strips query and fragment from a URL path if base URL is a file:// one.
Definition: htmlplug.cxx:338
void InsertBodyOptions()
Definition: htmlgrin.cxx:894
Specific frame formats (frames, DrawObjects).
Definition: docary.hxx:201
void InsertAttr(const SfxPoolItem &rItem, bool bInsAtStart)
Definition: swhtml.cxx:3416
const OUString & GetName() const
Definition: fmtinfmt.hxx:80
virtual void repositionMark(::sw::mark::IMark *io_pMark, const SwPaM &rPaM)=0
Moves an existing mark to a new selection and performs needed updates.
OUString m_sJmpMark
Definition: swhtml.hxx:352
virtual const_iterator_t getAllMarksBegin() const =0
returns a STL-like random access iterator to the begin of the sequence of marks.
#define RES_CHRATR_COLOR
Definition: hintids.hxx:71
static bool HasStyleOptions(const OUString &rStyle, const OUString &rId, const OUString &rClass, const OUString *pLang=nullptr, const OUString *pDir=nullptr)
Definition: swhtml.hxx:991
static void SetFrameFormatAttrs(SfxItemSet &rItemSet, HtmlFrameFormatFlags nFlags, SfxItemSet &rFrameItemSet)
Definition: htmlcss1.cxx:2068
PaM is Point and Mark: a selection of the document model.
Definition: pam.hxx:136
std::vector< SwFrameFormat * > m_aMoveFlyFrames
Definition: swhtml.hxx:364
bool Move(SwMoveFnCollection const &fnMove=fnMoveForward, SwGoInDoc fnGo=GoInContent)
Movement of cursor.
Definition: pam.cxx:483
void InsertFootEndNote(const OUString &rName, bool bEndNote, bool bFixed)
Definition: htmlftn.cxx:173
bool empty() const
Style of a layout element.
Definition: frmfmt.hxx:57
void EndTag(HtmlTokenId nToken)
Definition: swhtml.cxx:3558
SwNodeType
Definition: ndtyp.hxx:28
void GetMarginsFromContextWithNumBul(sal_uInt16 &nLeft, sal_uInt16 &nRight, short &nIndent) const
Definition: htmlcss1.cxx:2170
virtual const_iterator_t getAllMarksEnd() const =0
returns a STL-like random access iterator to the end of the sequence of marks.
OUString const m_aPathToFile
Definition: swhtml.hxx:340
sal_uInt16 GetDepth() const
Definition: htmlnum.hxx:73
const Size & GetSizePixel() const
std::deque< sal_Int32 > m_aMoveFlyCnts
Definition: swhtml.hxx:365
const SwFormatAnchor & GetAnchor(bool=true) const
Definition: fmtanchr.hxx:81
Internet visited.
Definition: poolfmt.hxx:122
bool GoInNode(SwPaM &rPam, SwMoveFnCollection const &fnMove)
Definition: pam.cxx:894
const SwPosition * GetPoint() const
Definition: pam.hxx:207
SwIndex & Assign(SwIndexReg *, sal_Int32)
Definition: index.cxx:198
RndStdIds GetAnchorId() const
Definition: fmtanchr.hxx:65
static OUString GetEventName(GlobalEventId nID)
#define OOO_STRING_SVTOOLS_HTML_VA_baseline
const SwPosition * GetContentAnchor() const
Definition: fmtanchr.hxx:67
bool HasCurrentParaBookmarks(bool bIgnoreStack=false) const
Definition: htmlgrin.cxx:1355
const Color & GetColor() const
Text body.
Definition: poolfmt.hxx:251
int i
std::unique_ptr< SwCSS1Parser > m_pCSS1Parser
Definition: swhtml.hxx:372
FlyAnchors.
Definition: fmtanchr.hxx:34
const SwFormatURL & GetURL(bool=true) const
Definition: fmturl.hxx:78
void EndContext(HTMLAttrContext *pContext)
Definition: htmlctxt.cxx:372
#define OOO_STRING_SVTOOLS_HTML_AL_left
SvxGraphicPosition
Internet normal.
Definition: poolfmt.hxx:121
Marks a character position inside a document model node.
Definition: index.hxx:37
std::unique_ptr< HTMLAttrContext > PopContext(HtmlTokenId nToken=HtmlTokenId::NONE)
Definition: htmlcss1.cxx:2105
void StripTrailingPara()
Definition: htmlgrin.cxx:1403
long const nBorder
#define LANGUAGE_DONTKNOW
css::text::WrapTextMode GetSurround() const
Definition: fmtsrnd.hxx:51
bool DoPositioning(SfxItemSet &rItemSet, SvxCSS1PropertyInfo &rPropInfo, HTMLAttrContext *pContext)
Definition: htmlctxt.cxx:468
void EndAnchor()
Definition: htmlgrin.cxx:1331
Marks a node in the document model.
Definition: ndindex.hxx:31
bool m_bInFootEndNoteSymbol
Definition: swhtml.hxx:449
bool HasName() const
MapUnit GetMapUnit() const
void RegisterFlyFrame(SwFrameFormat *pFlyFrame)
Definition: htmlgrin.cxx:280
void SetURL(const OUString &rURL, bool bServerMap)
Definition: atrfrm.cxx:1736
#define OOO_STRING_SVTOOLS_HTML_VA_middle
const sal_Unicode cMarkSeparator
Definition: swtypes.hxx:137
void SetTwipSize(const Size &rSz)
Definition: ndgrf.cxx:626
bool IsScaleImageMap() const
Definition: ndgrf.hxx:107
virtual Size GetTwipSize() const override
Definition: ndgrf.cxx:428
void SetMacro(SvMacroItemId nEvent, const SvxMacro &)
HTMLOptionEnum< sal_Int16 > const aHTMLImgVAlignTable[]
Definition: htmlgrin.cxx:86
bool HasCurrentParaFlys(bool bNoSurroundOnly=false, bool bSurroundOnly=false) const
Definition: swhtml.cxx:4486
sal_uInt32 GetNumber() const
size_t m_nContextStMin
Definition: swhtml.hxx:402
const SwNodeIndex * GetContentIdx() const
Definition: fmtcntnt.hxx:46
Point PixelToLogic(const Point &rDevicePt) const
Size GetPrefSize() const
virtual bool SetFormatAttr(const SfxPoolItem &rAttr)
Definition: format.cxx:460
#define HTML_DFLT_IMG_WIDTH
Definition: swhtml.hxx:62
sal_uInt16 Which() const
for Querying of Writer-functions.
Definition: format.hxx:78
sal_uLong EndOfSectionIndex() const
Definition: node.hxx:677
const SfxPoolItem * Put(const SfxPoolItem &rItem, sal_uInt16 nWhich)
SwTextNode is a paragraph in the document model.
Definition: ndtxt.hxx:79
const SvxMacroTableDtor & GetMacroTable() const
void ConnectImageMaps()
Definition: htmlgrin.cxx:116
void SetInvalidRule(bool bFlag)
Definition: number.cxx:862
virtual bool SetAttr(const SfxPoolItem &)
made virtual
Definition: node.cxx:1471
HTMLAttrs m_aParaAttrs
Definition: swhtml.hxx:361
#define OOO_STRING_SVTOOLS_HTML_AL_right
void SetScaleImageMap(bool b)
Definition: ndgrf.hxx:108
#define SAL_WARN_IF(condition, area, stream)
#define OOO_STRING_SVTOOLS_HTML_sdendnote_sym
#define RES_DRAWFRMFMT
Definition: hintids.hxx:277
#define ERRCODE_NONE
MapMode GetPrefMapMode() const
#define RES_CHRATR_CTL_LANGUAGE
Definition: hintids.hxx:97
unsigned char sal_uInt8
OUString GetPartBeforeLastName() const
const OUString & GetTargetFrame() const
Definition: fmtinfmt.hxx:89
Graphic ImportUnloadedGraphic(SvStream &rIStream, sal_uInt64 sizeLimit=0, Size *pSizeHint=nullptr)
void SetWidthPercent(sal_uInt8 n)
Definition: fmtfsize.hxx:95
SwMoveFnCollection const & fnMoveForward
SwPam::Move()/Find() default argument.
Definition: paminit.cxx:59
::std::vector< HTMLOption > HTMLOptions
void GetULSpaceFromContext(sal_uInt16 &rUpper, sal_uInt16 &rLower) const
Definition: htmlcss1.cxx:2186
bool CanRemoveNode(sal_uLong nNodeIdx) const
Definition: swhtml.cxx:583
void SetTextCollAttrs(HTMLAttrContext *pContext=nullptr)
Definition: swhtml.cxx:4558
sal_Int32 GetIndex() const
Definition: index.hxx:95
SwHTMLNumRuleInfo & GetNumInfo()
Definition: swhtml.hxx:536
bool IsCountedInList() const
Definition: ndtxt.cxx:4238
INetProtocol GetProtocol() const
bool EndAttr(HTMLAttr *pAttr, bool bChkEmpty=true)
Definition: swhtml.cxx:3040
SfxObjectShell * GetPersist() const
Definition: docnew.cxx:634
SwTableNode is derived from SwStartNode.
#define RES_TXTATR_FLYCNT
Definition: hintids.hxx:151
std::shared_ptr< HTMLTable > m_xTable
Definition: swhtml.hxx:382
const SwFormatContent & GetContent(bool=true) const
Definition: fmtcntnt.hxx:55
SwMoveFnCollection const & fnMoveBackward
Definition: paminit.cxx:58
void GetColor(Color &) const
void SetType(RndStdIds nRndId)
Definition: fmtanchr.hxx:71
OString const aName
#define RES_BOX
Definition: hintids.hxx:211
#define HTML_DFLT_IMG_HEIGHT
Definition: swhtml.hxx:63
static GraphicFilter & GetGraphicFilter()
#define OOO_STRING_SVTOOLS_HTML_VA_absbottom
HTMLOptionEnum< sal_Int16 > const aHTMLImgHAlignTable[]
Definition: htmlgrin.cxx:79
void SetDefaultType()
ScriptType
std::shared_ptr< HTMLAttrTable > m_xAttrTab
Definition: swhtml.hxx:362
virtual void SetMark()
Unless this is called, the getter method of Mark will return Point.
Definition: pam.cxx:457
long getHeight() const
SwGrfNode * GetGrfNode()
Definition: ndgrf.hxx:155
const OUString & GetScriptTypeString(SvKeyValueIterator *) const
bool m_bInFootEndNoteAnchor
Definition: swhtml.hxx:448
bool AppendTextNode(SwHTMLAppendMode eMode=AM_NORMAL, bool bUpdateNum=true)
Definition: swhtml.cxx:2146
bool m_bCallNextToken
Definition: swhtml.hxx:428
#define RES_FRMMACRO
Definition: hintids.hxx:213
bool Detect(bool bExtendedInfo=false)
void PushContext(std::unique_ptr< HTMLAttrContext > &rCntxt)
Definition: swhtml.hxx:550
bool m_bChkJumpMark
Definition: swhtml.hxx:442
long getWidth() const
sal_Int32 nPos
rtl::Reference< SwDoc > m_xDoc
Definition: swhtml.hxx:376
bool IsTextNode() const
Definition: node.hxx:636
void setWidth(long nWidth)
SwNumRule * GetNumRule(SwTextFormatColl &rTextFormatColl)
determines the list style, which directly set at the given paragraph style
Definition: fmtcol.cxx:75
#define RES_FRMATR_BEGIN
Definition: hintids.hxx:192
SwNumRule * GetNumRule()
Definition: htmlnum.hxx:69
STARBASIC
SwFrameFormat * GetFlyFormat() const
If node is in a fly return the respective format.
Definition: node.cxx:710
std::unique_ptr< SfxPoolItem > m_pItem
Definition: swhtml.hxx:137
sal_uInt32 GetSttParaIdx() const
Definition: swhtml.hxx:158
std::unique_ptr< SvMemoryStream > getData() const
void SetName(const OUString &rNm)
Definition: fmtinfmt.hxx:84
const Color & GetValue() const
static bool MayBePositioned(const SvxCSS1PropertyInfo &rPropInfo, bool bAutoWidth=false)
Definition: htmlcss1.cxx:1420
SwTextNode * GetTextNode()
Inline methods from Node.hxx.
Definition: ndtxt.hxx:843
static void PixelToTwip(long &nWidth, long &nHeight)
Definition: svxcss1.cxx:867
OUString m_aBulletGrfs[MAXLEVEL]
Definition: swhtml.hxx:351
void SetAnchor(const SwPosition *pPos)
Definition: atrfrm.cxx:1486
void SetLine(const editeng::SvxBorderLine *pNew, SvxBoxItemLine nLine)
SVL_DLLPUBLIC OUString SmartRel2Abs(INetURLObject const &rTheBaseURIRef, OUString const &rTheRelURIRef, Link< OUString *, bool > const &rMaybeFileHdl=Link< OUString *, bool >(), bool bCheckFileExists=true, bool bIgnoreFragment=false, INetURLObject::EncodeMechanism eEncodeMechanism=INetURLObject::EncodeMechanism::WasEncoded, INetURLObject::DecodeMechanism eDecodeMechanism=INetURLObject::DecodeMechanism::ToIUri, rtl_TextEncoding eCharset=RTL_TEXTENCODING_UTF8, FSysStyle eStyle=FSysStyle::Detect)
Base class of the Writer document model elements.
Definition: node.hxx:79
void setHeight(long nHeight)
SfxMedium * GetMedium() const
static OUString decode(OUString const &rText, DecodeMechanism eMechanism, rtl_TextEncoding eCharset=RTL_TEXTENCODING_UTF8)
|
__label__pos
| 0.900027 |
prettified
Prettified error handling for Node.js
npm install prettified
This sample code:
var errors = require('prettified').errors;
try {
throw new Error("Example error");
} catch(err) {
errors.print(err);
}
...will print errors using console.error() like this:
/---------------------------------- Error -----------------------------------\
| Error: Example error
+---------------------------------- stack -----------------------------------+
| at Object.<anonymous> (/home/jhh/git/node-prettified/examples/format.js:3:8)
| at Module._compile (module.js:449:26)
| at Object.Module._extensions..js (module.js:467:10)
| at Module.load (module.js:356:32)
| at Function.Module._load (module.js:312:12)
| at Module.runMain (module.js:492:10)
| at process.startup.processNextTick.process._tickCallback (node.js:244:9)
\----------------------------------------------------------------------------/
errors.catchfail([opts, ]callback) is a wrapper builder that catches exceptions thrown inside a function call.
It returns a function which, when invoked, calls the callback, passing all original arguments through and returning the value untouched.
If an exception is thrown, it will catch it and print it using console.error(), or hand it to a handler specified in opts. Handlers can be functions or Promise A defers (see the q library).
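Conceptually, the wrapper built by catchfail can be sketched like this (an illustrative reimplementation of the pattern, not the library's actual source):

```javascript
// Illustrative sketch of the catchfail wrapper pattern. `handler` may be a
// function or a q-style defer with a reject() method; if omitted, errors
// are printed with console.error().
function catchfail(handler, callback) {
  if (callback === undefined) {
    callback = handler;
    handler = console.error;
  }
  return function () {
    try {
      // Pass all original arguments through and return the value untouched.
      return callback.apply(this, arguments);
    } catch (err) {
      if (typeof handler === 'function') {
        handler(err);
      } else if (handler && typeof handler.reject === 'function') {
        handler.reject(err); // q-style defer
      }
    }
  };
}
```

The real module adds formatted printing on top of this, but the control flow — swap arguments when no handler is given, invoke the callback inside try/catch, route the error to the handler — is the core idea.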
You can simply wrap your existing callback handlers with catchfail like this:
require('fs').exists('test.txt', errors.catchfail(function(exists) {
console.log('test.txt ' + (exists ? 'exists' : 'not found') );
}));
If you like to handle the error you can pass an error handler as a first argument:
function do_error(err) {
errors.print(err);
}
setTimeout(errors.catchfail(do_error, function() {
throw new TypeError("Example error");
}), 200);
You can also use defers from the q library as an error handler:
function test() {
var defer = require('q').defer();
setTimeout(errors.catchfail(defer, function() {
throw new TypeError("Example error");
}), 200);
return defer.promise;
}
test().fail(function(err) {
errors.print(err);
});
You can set the default error type for uncaught errors like this:
errors.setDefaultError(MySystemError);
Issue #6442
closed
Advisory upload fails if checksum type is provided for any of its packages
Added by ttereshc over 3 years ago. Updated over 3 years ago.
Status:
CLOSED - CURRENTRELEASE
Priority:
Normal
Assignee:
Sprint/Milestone:
Start date:
Due date:
Estimated time:
Severity:
2. Medium
Version:
Platform Release:
OS:
Triaged:
Yes
Groomed:
No
Sprint Candidate:
Yes
Tags:
Sprint:
Quarter:
Description
Upload of an advisory raises invalid literal for int() with base 10: 'sha256' if checksum type is specified.
An example of advisory which triggers the error.
{
"id": "my_advisory_id",
"updated_date": "2014-06-10 00:00:00",
"description": "description",
"issued_date": "2014-06-10 00:00:00",
"fromstr": "me",
"status": "final",
"title": "kexec-tools bug fix update",
"summary": "summary",
"version": "1",
"type": "bugfix",
"severity": "",
"solution": "solution",
"release": "",
"rights": "Copyright 2014",
"pushcount": "1",
"pkglist": [
{
"name": "long namet",
"shortname": "short name",
"packages": [
{
"arch": "x86_64",
"epoch": "0",
"filename": "kexec-tools-2.0.4-32.el7_0.1.x86_64.rpm",
"name": "kexec-tools",
"reboot_suggested": false,
"relogin_suggested": false,
"restart_suggested": false,
"release": "32.el7_0.1",
"src": "kexec-tools-2.0.4-32.el7_0.1.src.rpm",
"sum": "8e214681104e4ba73726e0ce11d21b963ec0390fd70458d439ddc72372082034",
"sum_type": "sha256",
"version": "2.0.4"
}
]
}
],
"references": [
{
"href": "https://example.com/",
"id": "",
"title": "my advisory",
"type": "bugzilla"
}
],
"reboot_suggested": false
}
Proposed solution
Currently, sum_type is a TextField which stores the id (int) of the checksum type as it is defined in createrepo_c. The suggestion is:
• change the field type and store an integer value in a PositiveIntegerField with choices, whose values are taken from what is supported in createrepo_c.
• write a migration to convert the field to an integer
• convert the string representing the checksum type to a createrepo_c id during upload
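The string-to-id conversion from the last bullet could be sketched as follows. Note that the numeric values in the mapping below are illustrative placeholders — the real integer ids must be taken from createrepo_c's checksum enum:

```python
# Illustrative sketch of converting a checksum-type string (e.g. "sha256")
# to an integer id during upload. The values here are placeholders; the
# real ids come from createrepo_c's checksum enum.
CHECKSUM_CHOICES = {
    "md5": 1,
    "sha1": 2,
    "sha256": 3,
    "sha512": 4,
}


def checksum_type_to_id(sum_type: str) -> int:
    """Map a checksum-type string to its integer id, or raise ValueError."""
    try:
        return CHECKSUM_CHOICES[sum_type.lower()]
    except KeyError:
        raise ValueError(f"Unsupported checksum type: {sum_type!r}")
```

During upload, each package's `sum_type` string would be passed through a conversion like this before being stored in the integer field, which avoids the `invalid literal for int()` failure.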
Can extension methods be applied to the class?
For example, extend DateTime to include a Tomorrow() method that could be invoked like:
DateTime.Tomorrow();
I know I can use
static DateTime Tomorrow(this Datetime value) { //... }
Or
public static MyClass {
public static Tomorrow() { //... }
}
for a similar result, but how can I extend DateTime so that I could invoke DateTime.Tomorrow?
8 Answers
Accepted answer (42 votes)
You cannot add methods to an existing type, you can only add methods that appear to be a member of the existing type through extension methods. Since this is the case you cannot add static methods to the type itself since extension methods use instances of that type.
There is nothing stopping you from creating your own static helper method like this:
static class DateTimeHelper
{
public static DateTime Tomorrow
{
get { return DateTime.Now.AddDays(1); }
}
}
Which you would use like this:
DateTime tomorrow = DateTimeHelper.Tomorrow;
huh woot? unless it was implemented within 6 months of this and Kumu's answer right there, this looks actually incomplete! – Cawas Jun 22 '12 at 14:28
@Cawas this is not incomplete, Andrew is showing how to do this with a static helper, not with an extension method (since there is no instance). – Nick N. Aug 12 '14 at 12:02
You're right, Nick. I do prefer extension methods though! ;) – Cawas Aug 12 '14 at 22:48
What's about extensionmethod.net/csharp/datetime ? IMHO, better samples for minimize learning curve are real applications with full source code and good patterns – Kiquenet Sep 19 '14 at 11:58
The problem with this code is that it only works on DateTime.Now and not any DateTime object. As a utility, one may want to use it to determine the day after some previous (or future) day. Not to mention DateTime.Now is determined each time you call it... – Storm Kiernan Sep 15 at 13:02
Create Extension Methods: http://msdn.microsoft.com/en-us/library/bb383977.aspx
Ex:
namespace ExtensionMethods
{
public static class MyExtensionMethods
{
public static DateTime Tomorrow(this DateTime date)
{
return date.AddDays(1);
}
}
}
usage:
DateTime.Now.Tomorrow();
or (any object of type DateTime).Tomorrow();
Shuggy's answer also sheds some light on a similar way of solving this. – Cawas Jun 22 '12 at 14:30
Don't forget 'using ExtensionMethods;' at the top of your document for this. – Luke Alderton Jul 3 '13 at 13:19
why can't i do DateTime.Tomorrow()? – lawphotog Jul 17 '14 at 9:11
Hi lawphotog, this extension needs an object, here DateTime is a struct and not an object. – Kumu Jul 18 '14 at 0:36
As mentioned in previous comments (it wasn't clear enough for me apparently), you will NOT be able to use DateTime.Tomorrow() as extension methods only work on INSTANCES of a class and a class struct. To "extend" a static method on a class struc, follow Andrew's answer or Shuggy's answer. – Alex Nov 17 at 19:06
Extension methods are syntactic sugar for making static methods whose first parameter is an instance of type T look as if they were an instance method on T.
As such the benefit is largely lost where you to make 'static extension methods' since they would serve to confuse the reader of the code even more than an extension method (since they appear to be fully qualified but are not actually defined in that class) for no syntactical gain (being able to chain calls in a fluent style within Linq for example).
Since you would have to bring the extensions into scope with a using anyway I would argue that it is simpler and safer to create:
public static class DateTimeUtils
{
public static DateTime Tomorrow { get { ... } }
}
And then use this in your code via:
WriteLine("{0}", DateTimeUtils.Tomorrow)
Awesome answer. +1 for first sentence. – Josh Nov 19 at 23:34
The closest I can get to the answer is by adding an extension method into a System.Type object. Not pretty, but still interesting.
public static class Foo
{
public static void Bar()
{
var now = DateTime.Now;
var tomorrow = typeof(DateTime).Tomorrow();
}
public static DateTime Tomorrow(this System.Type type)
{
if (type == typeof(DateTime)) {
return DateTime.Now.AddDays(1);
} else {
throw new InvalidOperationException();
}
}
}
Otherwise, IMO Andrew and ShuggyCoUk has a better implementation.
There are problems with this approach. Having to type "typeof(...)" is not convenient, and with intellisense you would see extensions of every type. Still, it's an interesting approach that I hadn't thought of, +1. – Meta-Knight Jul 27 '09 at 14:04
@Meta-Knight True, that's why personally I prefer the other's answer. My answer would have the closest syntax to OP question, but it's not the best way to solve this problem. – Adrian Godong Jul 27 '09 at 14:40
Type can be replaced with any other type required. I use it with From and it works perfectly. so I guess this answer is general but correct – katia Mar 2 at 6:40
I would do the same as Kumu
namespace ExtensionMethods
{
public static class MyExtensionMethods
{
public static DateTime Tomorrow(this DateTime date)
{
return date.AddDays(1);
}
}
}
but call it like this: new DateTime().Tomorrow();
I think it makes more sense than DateTime.Now.Tomorrow();
And you missed a chance to write it as a comment on Kumu's answer! :P – Cawas Jun 22 '12 at 14:28
They provide the capability to extend existing types by adding new methods with no modifications necessary to the type. Calling methods from objects of the extended type within an application using instance method syntax is known as "extending" methods. Extension methods are not instance members on the type. The key point to remember is that extension methods, defined as static methods, are in scope only when the namespace is explicitly imported into your application source code via the using directive. Even though extension methods are defined as static methods, they are still called using instance syntax.
Check the full example here http://www.dotnetreaders.com/articles/Extension_methods_in_C-sharp.net,Methods_in_C_-sharp/201
Example:
class Extension
{
static void Main(string[] args)
{
string s = "sudhakar";
Console.WriteLine(s.GetWordCount());
Console.ReadLine();
}
}
public static class MyMathExtension
{
public static int GetWordCount(this System.String mystring)
{
return mystring.Length;
}
}
I was looking for something similar - a list of constraints on classes that provide Extension Methods. Seems tough to find a concise list so here goes:
1. You can't have any private or protected anything - fields, methods, etc.
2. It must be a static class, as in public static class....
3. Only methods can be in the class, and they must all be public static.
4. You can't have conventional static methods - ones that don't include a this argument aren't allowed.
5. All methods must begin:
public static ReturnType MethodName(this ClassName _this, ...)
So the first argument is always the this reference.
There is an implicit problem this creates - if you add methods that require a lock of any sort, you can't really provide it at the class level. Typically you'd provide a private instance-level lock, but it's not possible to add any private fields, leaving you with some very awkward options, like providing it as a public static on some outside class, etc. Gets dicey. Signs the C# language had kind of a bad turn in the design for these.
The workaround is to use your Extension Method class as just a Facade to a regular class, and all the static methods in your Extension class just call the real class, probably using a Singleton.
Unfortunately, you can't do that. I believe it would be useful, though. It is more natural to type:
DateTime.Tomorrow
than:
DateTimeUtil.Tomorrow
With a Util class, you have to check for the existence of a static method in two different classes, instead of one.
FinClip provides enterprises with mini-program ecosystem technology products. Developers can find the relevant FinClip mini-program guides in the FinClip Mini-Program Development Help Center.
# Desktop Integration
Sample integration code
You can get the sample integration code here: https://github.com/finogeeks/finclip-desktop-demo (opens new window)
If GitHub is not accessible from your environment, you can also click here to visit the mirror repository on gitee (opens new window).
# 1. Obtain Credentials
Please note
Integrating the SDK requires first creating an application on the FinClip platform and binding a mini program to it. After obtaining each application's dedicated SDK KEY and SDK SECRET, fill in the corresponding parameters when integrating the SDK. When a mini program is opened, the SDK initializes automatically and verifies that the SDK KEY, SDK SECRET, and BundleID (Application ID) are correct.
You can 【click here】 to see how to obtain the required SDK KEY and SDK SECRET. Please make sure the parameters you fill in when integrating the SDK are correct; otherwise the mini program will fail to open.
# 2. Obtaining and Importing the SDK
The desktop SDK directory structure is as follows:
Finclip.zip
│ FinclipWrapper.(dll | so | dylib) # dynamic link library
│ finclip_api.h # public API header file
│ finclip_const.h # constant definitions
└───finclip/ # main program
# 2.1 Importing the SDK
# 2.1.1 Loading the dynamic link library
The FinClip SDK provides header files and a dynamic link library.
If you develop in C/C++, use the provided files directly.
If your language cannot compile against the header files directly, load the dynamic link library via LoadLibrary / dlopen.
# Taking CMake as an example, add the following two lines to CMakeLists.txt
target_include_directories(TARGET PRIVATE ${path/to/include})
target_link_libraries(TARGET PRIVATE FinClipSDKWrapper)
[DllImport("FinClipSDKWrapper.dll", SetLastError = true)]
var
adll: Thandle;
begin
adll := LoadLibrary('path\to\FinClipSDKWrapper.dll');
...
end.
from ctypes import *
cdll.LoadLibrary("C:/Users/gyt/code/finclipsdk-desktop/build/wrapper/Debug/FinClipSDKWrapper.dll")
libc = CDLL("FinClipSDKWrapper.dll")
# 2.1.2 Reading functions from the dynamic link library
The FinClip SDK Wrapper is a dynamic link library exposing pure C symbols.
For C/C++, simply compile and link against the provided header files.
For other languages, using an API provided by the FinClip SDK usually involves the following steps:
1. Open the finclip_api.h file and find the symbol name of the API you want to use
2. Define a function in your target language according to its declaration
The following takes finclip_start_applet_params as an example.
Its function signature is:
int FINSTDMETHODCALLTYPE finclip_start_applet_params(const char* appstore, const char* appid, void* params);
// Just include the header file directly
#include "finclip_api.h"
finclip_start_applet_params(appstore, appid, params)
[DllImport("FinClipSDKWrapper.dll", SetLastError = true)]
public static extern Int32 finclip_start_applet_params([MarshalAs(UnmanagedType.LPUTF8Str)] string app_store, [MarshalAs(UnmanagedType.LPUTF8Str)] string appid, IntPtr start_params);
type
finclip_start_applet_params_type = function(appstore: PANSIChar;appid: PANSIChar;p: Pointer): Pointer; Cdel;
var
adll: Thandle;
finclip_start_applet_params: finclip_start_applet_params_type;
begin
adll := LoadLibrary('path\to\FinClipSDKWrapper.dll');
@finclip_start_applet_params := GetProcAddress(adll, 'finclip_start_applet_params');
end.
# 3. SDK Initialization
Once the dynamic link library is loaded, you can initialize the SDK. The following examples are all shown in C; convert them to the language you use following the hints above. You can also 【click here】 (opens new window) to view the complete demo in the corresponding language.
# 3.1 Configuring the SDK
FinclipParams *config = finclip_create_params();
finclip_params_set(config, FINCLIP_CONFIG_APPKEY, appkey);
finclip_params_set(config, FINCLIP_CONFIG_SECRET, secret);
finclip_params_set(config, FINCLIP_CONFIG_DOMAIN, domain);
finclip_params_set(config, FINCLIP_CONFIG_EXE_PATH, exe_path);
finclip_init_with_config(appstore, config);
appstore
appstore is the unique identifier of a server configuration.
You can create multiple server configurations and use them with different appstore values.
# 4. Mini-Program Management
# 4.1 Opening a mini program
Once configuration is complete, you can launch a mini program:
finclip_start_applet(app_store, appid); // Open the mini program for the corresponding environment using the configuration in app_store
# 4.2 Closing mini programs
You can close a specific mini program, or close all mini programs:
finclip_close_applet(appid); // Close the specified mini program
finclip_close_all_applet(); // Close all mini programs
# 4.3 Registering custom APIs
You can register custom APIs for mini programs and H5 pages to use directly.
Define the API name in the mini program's FinClipConf.js:
module.exports = {
extApi:[
{
name: 'test', // name of the extension API
sync: false, // whether the API is synchronous
params: { // parameter format of the extension API
}
}
]
}
Implement the custom API:
// Define the custom API handlers
void WebApiExample(const char* event, const char* param, void* input,
int callbackid) {
FinClipParams* res = finclip_create_params();
finclip_params_set(res, "result", "ok");
finclip_callback_success(appid, callbackid, res);
finclip_destory_params(res);
}
void AppApiExample(const char* event, const char* param, void* input,
int callbackid) {
FinClipParams* res = finclip_create_params();
finclip_params_set(res, "result", "ok");
finclip_callback_success(appid, callbackid, res);
finclip_destory_params(res);
}
// Register the custom APIs
finclip_register_api(kWebView, "test", WebApiExample, this);
finclip_register_api(kApplet, "test", AppApiExample, this);
Call the custom API from inside the mini program:
// asynchronous API
wx.test({ success: console.log, error: console.error })
// synchronous API
const result = wx.test()
# 4.4 Listening to mini-program lifecycle events
Mini programs currently define 5 lifecycle events; you can listen to the ones you need.
// Supported callback types
enum LifecycleType {
kLifecycleStarted = 1, // mini program started
kLifecycleClosed = 2, // mini program closed
kLifecycleHide = 3, // mini-program window hidden
kLifecycleShow = 4, // mini-program window shown
kLifecycleDomReady = 5, // first screen of the mini program rendered
};
// Define the lifecycle handler
void LifecycleHandle(LifecycleType type, const char* appid, void* input) {
// Note: threading issues must be handled here, for example:
// 1. Lock any global variables
// 2. If this handler touches the UI, you may need to dispatch to the UI thread, depending on your UI framework
// ...
}
// Register the lifecycle listener
finclip_register_lifecycle(appid, kLifecycleClosed, LifecycleHandle, window)
# 5. Embedded Mode
Embedded mode currently supports Windows only; Mac and Linux support is under development.
# 5.1 Embedding into a window
To embed a mini program into your own window, first obtain the window handle, denoted hwnd. Two APIs are provided for embedding:
finclip_start_applet_embed(appstore, appid, params, hwnd); // Embed into the specified window when launching the mini program
finclip_embed_applet(appid, hwnd); // Embed an already-launched mini program into the specified window
# 5.2 Handling resize events
When your window size changes, listen for the resize event and notify the mini-program process of the new width and height:
finclip_set_position(appid, 0, 0, width, height); // left and top are not supported yet
# 5.3 Close handling
There are two exit scenarios:
1. The host window closes first; in this case call finclip_close_applet to tell the FinClip process to exit, or kill the process some other way.
2. The FinClip process closes first; in this case you need to handle the message sent by FinClip.
// Scenario 1: the host window closes first; notify the mini program to exit in your window-close event
finclip_close_applet(appid); // Notify the mini program to close
// Scenario 2: the mini program closes first; close the host window in the mini program's close lifecycle event
void LifecycleHandle(LifecycleType type, const char* appid, void* input) {
if (type == kLifecycleClosed) {
window.close();
}
}
© 2022 FinClip with ❤
Using PHP/MySQL with Google Maps
This tutorial is intended for developers who are familiar with PHP/MySQL and want to learn how to use Google Maps with a MySQL database. After completing this tutorial, you will have a Google Map based on a database of places. The map will differentiate between two types of places, restaurants and bars, by giving their markers distinguishing icons. An info window with name and address information will display above a marker when clicked.
The tutorial is broken up into the following steps:
+ Creating the table
+ Populating the table
+ Outputting XML with PHP
+ Creating the map
Creating the table
When you create the MySQL table, you want to pay particular attention to the lat and lng attributes. With the current zoom capabilities of Google Maps, you should only need 6 digits of precision after the decimal. To keep the storage space required for our table at a minimum, you can specify that the lat and lng attributes are floats of size (10,6). That will let the fields store 6 digits after the decimal, plus up to 4 digits before the decimal, e.g. -123.456789 degrees. Your table should also have an id attribute to serve as the primary key, and a type attribute to distinguish between restaurants and bars.
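To see why 6 decimal places suffice: one degree of latitude spans roughly 111.32 km, so the sixth decimal place resolves about a tenth of a meter on the ground, far finer than any marker placement needs. A quick check (Python, using the approximate 111.32 km/degree figure):

```python
# One degree of latitude ~ 111.32 km, so the 6th decimal of a FLOAT(10,6)
# coordinate resolves roughly a tenth of a meter on the ground.
METERS_PER_DEGREE_LAT = 111_320  # approximate; varies slightly with latitude
resolution_m = METERS_PER_DEGREE_LAT * 1e-6  # smallest representable step
print(round(resolution_m, 3))  # 0.111
```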
Note: This tutorial uses location data that already have latitude and longitude information needed to plot corresponding markers. If you’re trying to use your own data that don’t yet have that information, use a batch geocoding service to convert the addresses into latitudes/longitudes. Some sites make the mistake of geocoding addresses each time a page loads, but doing so will result in slower page loads and unnecessary repeat geocodes. It’s always better to hardcode the latitude/longitude information when possible. This link contains a good list of geocoders: http://groups.google.com/group/Google-Maps-API/web/resources-non-google-geocoders
If you prefer interacting with your database through the phpMyAdmin interface, here’s a screenshot of the table creation.
If you don’t have access to phpMyAdmin or prefer using SQL commands instead, here’s the SQL statement that creates the table
CREATE TABLE `markers` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`name` VARCHAR( 60 ) NOT NULL ,
`address` VARCHAR( 80 ) NOT NULL ,
`lat` FLOAT( 10, 6 ) NOT NULL ,
`lng` FLOAT( 10, 6 ) NOT NULL ,
`type` VARCHAR( 30 ) NOT NULL
) ENGINE = MYISAM ;
Populating the table
After creating the table, it’s time to populate it with data. Sample data for 10 Seattle places are provided below. In phpMyAdmin, you can use the IMPORT tab to import various file formats, including CSV (comma-separated values). Microsoft Excel and Google Spreadsheets both export to CSV format, so you can easily transfer data from spreadsheets to MySQL tables through exporting/importing CSV files.
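If your data lives in a spreadsheet, the CSV-to-SQL step can itself be scripted. A minimal sketch (Python; the escaping here handles single quotes only, and in a real application you would use parameterized queries rather than string building):

```python
import csv
import io

# Turn CSV rows (as exported from a spreadsheet) into INSERT statements
# matching the markers table defined above.
csv_text = """name,address,lat,lng,type
Pan Africa Market,"1521 1st Ave, Seattle, WA",47.608941,-122.340145,restaurant
"""

def row_to_insert(row):
    esc = lambda s: s.replace("'", "\\'")  # minimal escaping, demo only
    return ("INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) "
            "VALUES ('%s', '%s', '%s', '%s', '%s');"
            % (esc(row["name"]), esc(row["address"]),
               row["lat"], row["lng"], row["type"]))

stmts = [row_to_insert(r) for r in csv.DictReader(io.StringIO(csv_text))]
print(stmts[0])
```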
If you’d rather not use the phpMyAdmin interface, here are the SQL statements that accomplish the same results
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('Pan Africa Market', '1521 1st Ave, Seattle, WA', '47.608941', '-122.340145', 'restaurant');
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('Buddha Thai & Bar', '2222 2nd Ave, Seattle, WA', '47.613591', '-122.344394', 'bar');
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('The Melting Pot', '14 Mercer St, Seattle, WA', '47.624562', '-122.356442', 'restaurant');
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('Ipanema Grill', '1225 1st Ave, Seattle, WA', '47.606366', '-122.337656', 'restaurant');
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('Sake House', '2230 1st Ave, Seattle, WA', '47.612825', '-122.34567', 'bar');
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('Crab Pot', '1301 Alaskan Way, Seattle, WA', '47.605961', '-122.34036', 'restaurant');
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('Mama\'s Mexican Kitchen', '2234 2nd Ave, Seattle, WA', '47.613975', '-122.345467', 'bar');
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('Wingdome', '1416 E Olive Way, Seattle, WA', '47.617215', '-122.326584', 'bar');
INSERT INTO `markers` (`name`, `address`, `lat`, `lng`, `type`) VALUES ('Piroshky Piroshky', '1908 Pike pl, Seattle, WA', '47.610127', '-122.342838', 'restaurant');
Outputting XML with PHP
At this point, you should have a table named markers filled with sample data. You now need to write some PHP statements to export the table data into an XML format that our map can retrieve through asynchronous JavaScript calls. If you’ve never written PHP to connect to a MySQL database, you should visit php.net and read up on mysql_connect, mysql_select_db, mysql_query, and mysql_error.
Note: Some tutorials may suggest actually writing your map page as a PHP file and outputting JavaScript for each marker you want to create, but that technique can be problematic. By using an XML file as an intermediary between our database and our Google Map, it makes for a faster initial page load, a more flexible map application, and easier debugging. You can independently verify the XML output from the database and the JavaScript parsing of the XML. And at any point, you could even decide to eliminate your database entirely and just run the map based on static XML files.
First, you should put your database connection information in a separate file. This is generally a good idea whenever you’re using PHP to access a database, as it keeps your confidential information in a file that you won’t be tempted to share. In the Maps API forum, we’ve occasionally had people accidentally publish their database connection information when they were just trying to debug their XML-outputting code. The file should look like this, but with your own database information filled in (phpsqlajax_dbinfo.php):
<?php
$username="username";
$password="password";
$database="username-databaseName";
?>
Using PHP’s echo to output XML
If you don’t have access to PHP’s dom_xml functions, then you can simply output the XML with the echo function. When using just the echo function, you’ll need to use a helper function (e.g. parseToXML) that will correctly encode a few special characters (&, <, >, ", ') as entities to be XML friendly.
In the PHP, first connect to the database and execute the SELECT * (select all) query on the markers table. Then echo out the parent markers node, and iterate through the query results. For each row in the table (each location), you need to echo out the XML node for that marker, sending the name and address fields through the parseToXML function first in case there are any special entities in them. Finish the script by echoing out the closing markers tag.
Note: If your database contains international characters or you otherwise need to force UTF-8 output, you can use utf8_encode on the outputted data.
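One subtlety of the entity encoding: the ampersand must be replaced before the other characters, otherwise the '&' inside freshly produced entities such as &lt; would itself get re-encoded. The same logic sketched in Python (illustrative, not part of the tutorial's PHP):

```python
# Replace '&' first so the '&' in the entities produced below is untouched.
def parse_to_xml(s):
    for raw, entity in (("&", "&amp;"), ("<", "&lt;"), (">", "&gt;"),
                        ('"', "&quot;"), ("'", "&#39;")):
        s = s.replace(raw, entity)
    return s

print(parse_to_xml("Buddha Thai & Bar"))  # Buddha Thai &amp; Bar
```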
The PHP file that does all this is shown below (phpsqlajax_genxml2.php):
<?php
require("phpsqlajax_dbinfo.php");
function parseToXML($htmlStr)
{
$xmlStr=str_replace('&','&amp;',$htmlStr); // '&' first, so the entities below are not double-encoded
$xmlStr=str_replace('<','&lt;',$xmlStr);
$xmlStr=str_replace('>','&gt;',$xmlStr);
$xmlStr=str_replace('"','&quot;',$xmlStr);
$xmlStr=str_replace("'",'&#39;',$xmlStr);
return $xmlStr;
}
// Opens a connection to a MySQL server
$connection=mysql_connect (localhost, $username, $password);
if (!$connection) {
die('Not connected : ' . mysql_error());
}
// Set the active MySQL database
$db_selected = mysql_select_db($database, $connection);
if (!$db_selected) {
die ('Can\'t use db : ' . mysql_error());
}
// Select all the rows in the markers table
$query = "SELECT * FROM markers WHERE 1";
$result = mysql_query($query);
if (!$result) {
die('Invalid query: ' . mysql_error());
}
header("Content-type: text/xml");
// Start XML file, echo parent node
echo '<markers>';
// Iterate through the rows, printing XML nodes for each
while ($row = @mysql_fetch_assoc($result)){
// ADD TO XML DOCUMENT NODE
echo '<marker ';
echo 'name="' . parseToXML($row['name']) . '" ';
echo 'address="' . parseToXML($row['address']) . '" ';
echo 'lat="' . $row['lat'] . '" ';
echo 'lng="' . $row['lng'] . '" ';
echo 'type="' . $row['type'] . '" ';
echo '/>';
}
// End XML file
echo '</markers>';
?>
Checking that XML output works
Call this PHP script from the browser to make sure it’s producing valid XML. If you suspect there’s a problem with connecting to your database, you may find it easier to debug if you remove the line in the file that sets the header to the text/xml content type, as that usually causes your browser to try to parse XML and may make it difficult to see your debugging messages.
If the script is working correctly, you should see XML output like this
<markers>
<marker name="Pan Africa Market" address="1521 1st Ave, Seattle, WA" lat="47.608940" lng="-122.340141" type="restaurant"/>
<marker name="Buddha Thai &amp; Bar" address="2222 2nd Ave, Seattle, WA" lat="47.613590" lng="-122.344391" type="bar"/>
<marker name="The Melting Pot" address="14 Mercer St, Seattle, WA" lat="47.624561" lng="-122.356445" type="restaurant"/>
<marker name="Ipanema Grill" address="1225 1st Ave, Seattle, WA" lat="47.606365" lng="-122.337654" type="restaurant"/>
<marker name="Sake House" address="2230 1st Ave, Seattle, WA" lat="47.612823" lng="-122.345673" type="bar"/>
<marker name="Crab Pot" address="1301 Alaskan Way, Seattle, WA" lat="47.605961" lng="-122.340363" type="restaurant"/>
<marker name="Mama's Mexican Kitchen" address="2234 2nd Ave, Seattle, WA" lat="47.613976" lng="-122.345467" type="bar"/>
<marker name="Wingdome" address="1416 E Olive Way, Seattle, WA" lat="47.617214" lng="-122.326584" type="bar"/>
<marker name="Piroshky Piroshky" address="1908 Pike pl, Seattle, WA" lat="47.610126" lng="-122.342834" type="restaurant"/>
</markers>
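Beyond eyeballing it in the browser, you can confirm the feed is well-formed with any XML parser. For example, a quick offline check in Python (illustrative, not part of the tutorial's code):

```python
import xml.etree.ElementTree as ET

# Parse a snippet in the same shape as the generator's output and pull out
# the attributes the map code will need.
xml_doc = """<markers>
<marker name="Sake House" address="2230 1st Ave, Seattle, WA"
        lat="47.612823" lng="-122.345673" type="bar"/>
</markers>"""

markers = [(m.get("name"), float(m.get("lat")), float(m.get("lng")), m.get("type"))
           for m in ET.fromstring(xml_doc).findall("marker")]
print(markers[0][0])  # Sake House
```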
Creating the map
Once the XML is working in the browser, it’s time to move on to actually creating the map with JavaScript. If you have never created a Google Map, please try some of the basic examples in the documentation to make sure you understand the basics of creating a Google Map.
Loading the XML file
To load the XML file into our page, you can take advantage of the API function GDownloadURL. GDownloadURL is a wrapper for the XMLHttpRequest that’s used to request an XML file from the server where the HTML page resides. The first parameter to GDownloadURL is the path to your file—it’s usually easiest to have the XML file in the same directory as the HTML so that you can just refer to it by filename. The second parameter to GDownloadURL is the function that’s called when the XML is returned to the JavaScript.
Note: It’s important to know that GDownloadURL is asynchronous—the callback function won’t be called as soon as you invoke GDownloadURL. The bigger your XML file, the longer it may take. Don’t put any code after GDownloadURL that relies on the markers existing already—put it inside the callback function instead.
In the callback function, you need to find all the “marker” elements in the XML, and iterate through them. For each marker element you find, retrieve the name, address, type, and lat/lng attributes and pass them to createMarker, which returns a marker that you can add to the map.
GDownloadUrl("phpsqlajax_genxml.php", function(data) {
var xml = GXml.parse(data);
var markers = xml.documentElement.getElementsByTagName("marker");
for (var i = 0; i < markers.length; i++) {
var name = markers[i].getAttribute("name");
var address = markers[i].getAttribute("address");
var type = markers[i].getAttribute("type");
var point = new GLatLng(parseFloat(markers[i].getAttribute("lat")),
parseFloat(markers[i].getAttribute("lng")));
var marker = createMarker(point, name, address, type);
map.addOverlay(marker);
}
});
Creating custom icons
You can use the GIcon class to define custom icons which can later be assigned to the markers. Start by declaring two GIcon objects—iconBlue and iconRed—and define their properties.
Warning: You may get away with specifying fewer properties than in the example, but by doing so, you run the risk of encountering peculiar errors later.
You then create an associative array which associates each GIcon with one of your type strings: 'restaurant' or 'bar.' This makes the icons easy to reference later when you create markers from the XML.
var iconBlue = new GIcon();
iconBlue.image = 'http://labs.google.com/ridefinder/images/mm_20_blue.png';
iconBlue.shadow = 'http://labs.google.com/ridefinder/images/mm_20_shadow.png';
iconBlue.iconSize = new GSize(12, 20);
iconBlue.shadowSize = new GSize(22, 20);
iconBlue.iconAnchor = new GPoint(6, 20);
iconBlue.infoWindowAnchor = new GPoint(5, 1);
var iconRed = new GIcon();
iconRed.image = 'http://labs.google.com/ridefinder/images/mm_20_red.png';
iconRed.shadow = 'http://labs.google.com/ridefinder/images/mm_20_shadow.png';
iconRed.iconSize = new GSize(12, 20);
iconRed.shadowSize = new GSize(22, 20);
iconRed.iconAnchor = new GPoint(6, 20);
iconRed.infoWindowAnchor = new GPoint(5, 1);
var customIcons = [];
customIcons["restaurant"] = iconBlue;
customIcons["bar"] = iconRed;
Creating markers & info windows
You should have all your marker creation code in a createMarker function. You can retrieve the appropriate GIcon by using the type as the key for the associative array that was globally defined, and pass that into the GMarker constructor. Then, construct the HTML that you want to show up in the info window by concatenating the name, address, and some tags to bold the name.
Tip: Some tutorials instruct you to store HTML-formatted descriptions in your database, but doing so means you then have to deal with escaping HTML entities, and you’ll be bound to that HTML output. By waiting until you’ve retrieved each attribute separately in the JavaScript, you are free to play around with the HTML on the client side and can quickly preview new formatting.
After constructing the HTML string, add an event listener to the marker so that when clicked, an info window is displayed.
function createMarker(point, name, address, type) {
var marker = new GMarker(point, customIcons[type]);
var html = "<b>" + name + "</b> <br/>" + address;
GEvent.addListener(marker, 'click', function() {
marker.openInfoWindowHtml(html);
});
return marker;
}
Putting it all together
Here’s the web page that ties the markers, icons, and XML together. When the page loads, the load function is called. This function sets up the map and then calls GDownloadUrl. Make sure your GDownloadUrl is passing in the file that outputs the XML and that you can preview that XML in the browser.
The full HTML that accomplishes this is shown below (phpsqlajax_map.htm):
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8"/>
<title>Google Maps AJAX + MySQL/PHP Example</title>
<script src="http://maps.google.com/maps?file=api&v=2&key=ABQIAAAAjU0EJWnWPMv7oQ-jjS7dYxTPZYElJSBeBUeMSX5xXgq6lLjHthSAk20WnZ_iuuzhMt60X_ukms-AUg"
type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
var iconBlue = new GIcon();
iconBlue.image = 'http://labs.google.com/ridefinder/images/mm_20_blue.png';
iconBlue.shadow = 'http://labs.google.com/ridefinder/images/mm_20_shadow.png';
iconBlue.iconSize = new GSize(12, 20);
iconBlue.shadowSize = new GSize(22, 20);
iconBlue.iconAnchor = new GPoint(6, 20);
iconBlue.infoWindowAnchor = new GPoint(5, 1);
var iconRed = new GIcon();
iconRed.image = 'http://labs.google.com/ridefinder/images/mm_20_red.png';
iconRed.shadow = 'http://labs.google.com/ridefinder/images/mm_20_shadow.png';
iconRed.iconSize = new GSize(12, 20);
iconRed.shadowSize = new GSize(22, 20);
iconRed.iconAnchor = new GPoint(6, 20);
iconRed.infoWindowAnchor = new GPoint(5, 1);
var customIcons = [];
customIcons["restaurant"] = iconBlue;
customIcons["bar"] = iconRed;
function load() {
if (GBrowserIsCompatible()) {
var map = new GMap2(document.getElementById("map"));
map.addControl(new GSmallMapControl());
map.addControl(new GMapTypeControl());
map.setCenter(new GLatLng(47.614495, -122.341861), 13);
GDownloadUrl("phpsqlajax_genxml.php", function(data) {
var xml = GXml.parse(data);
var markers = xml.documentElement.getElementsByTagName("marker");
for (var i = 0; i < markers.length; i++) {
var name = markers[i].getAttribute("name");
var address = markers[i].getAttribute("address");
var type = markers[i].getAttribute("type");
var point = new GLatLng(parseFloat(markers[i].getAttribute("lat")),
parseFloat(markers[i].getAttribute("lng")));
var marker = createMarker(point, name, address, type);
map.addOverlay(marker);
}
});
}
}
function createMarker(point, name, address, type) {
var marker = new GMarker(point, customIcons[type]);
var html = "<b>" + name + "</b> <br/>" + address;
GEvent.addListener(marker, 'click', function() {
marker.openInfoWindowHtml(html);
});
return marker;
}
//]]>
</script>
</head>
<body onload="load()" onunload="GUnload()">
<div id="map" style="width: 500px; height: 300px"></div>
</body>
</html>
The map should look like this when loaded:
Android Question Nested for loops question
Devv
Active Member
Licensed User
Can someone explain why this code makes z equal 3 instead of 9?
B4X:
Sub Process_Globals
Dim x,y,z As Int
End Sub
Sub Activity_Create(FirstTime As Boolean)
For x = 1 To 3
For x = 0 To 2
z = z +1
Next
Next
Log("z :" & z)
End Sub
BillMeyer
Well-Known Member
Licensed User
The reason you get 3 is that the inner loop keeps re-initializing "x", which interferes with the working of your first loop.
Essentially you have 2 loops both using the same variable "x". So for example, when your first loop reaches 2 and enters the 2nd loop, x is no longer 2 but resets to 0. The inner loop then leaves x at 3, so the outer loop's check fails after a single pass and you break out with z = 3.
All I have done is change the second "x" to "y" (you have "Dim"ed it after all) and this should give the desired z = 9.
B4X:
Sub Process_Globals
Dim x,y,z As Int
End Sub
Sub Activity_Create(FirstTime As Boolean)
For x = 1 To 3
For y = 0 To 2 '<--- Change this from x to y for example and I would have made this 1 to 3 - just easier to read
z = z + 1
Next
Next
Log("z :" & z)
End Sub
This is untested code - so I'm trusting that my coding is OK.
As Mr Simpson would say: "Enjoy..."
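The shared-counter behaviour is easy to reproduce elsewhere too. Python's for loop keeps its own iterator, so reusing the name there would still give 9; emulating B4X's counter with while loops makes the reset visible (both versions below, as an illustration):

```python
# Emulate B4X's shared loop counter: both "For" loops write to the SAME x.
z_shared = 0
x = 1
while x <= 3:          # outer: For x = 1 To 3
    x = 0
    while x <= 2:      # inner: For x = 0 To 2 resets the same x
        z_shared += 1
        x += 1
    x += 1             # inner loop leaves x == 3, so this makes x == 4
print(z_shared)        # 3: the outer loop only completes one pass

# With a distinct inner variable the loops are independent:
z_fixed = 0
for x in range(1, 4):      # For x = 1 To 3
    for y in range(0, 3):  # For y = 0 To 2
        z_fixed += 1
print(z_fixed)             # 9
```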
NVIDIA CUDA™
NVIDIA CUDA C Programming Guide
Version 3.2 10/22/2010
Changes from Version 3.1.1
- Simplified all the code samples that use cuParamSetv() to set a kernel parameter of type CUdeviceptr since CUdeviceptr is now of same size and alignment as void*, so there is no longer any need to go through an intermediate void* variable.
- Added Section 3.2.4.1.4 on 16-bit floating-point textures.
- Added Section 3.2.4.4 on read/write coherency for texture and surface memory.
- Added more details about surface memory access to Section 3.2.4.2.
- Added more details to Section 3.2.6.5.
- Mentioned new stream synchronization function cudaStreamSynchronize() in Section 3.2.6.5.2.
- Mentioned in Sections 3.2.7.2, 3.3.10.2, and 4.3 the new API calls to deal with devices using NVIDIA SLI in AFR mode.
- Added Sections 3.2.9 and 3.3.12 about the call stack.
- Changed the type of the pitch variable in the second code sample of Section 3.3.4 from unsigned int to size_t following the function signature change of cuMemAllocPitch().
- Changed the type of the bytes variable in the last code sample of Section 3.3.4 from unsigned int to size_t following the function signature change of cuModuleGetGlobal().
- Removed cuParamSetTexRef() from Section 3.3.7 as it is no longer necessary.
- Updated Section 5.2.3, Table 5-1, and Section G.4.1 for devices of compute capability 2.1.
- Added GeForce GTX 480M, GeForce GTX 470M, GeForce GTX 460M, GeForce GTX 445M, GeForce GTX 435M, GeForce GTX 425M, GeForce GTX 420M, GeForce GTX 415M, GeForce GTX 460, GeForce GTS 450, GeForce GTX 465, Quadro 2000, Quadro 600, Quadro 4000, Quadro 5000, Quadro 5000M, and Quadro 6000 to Table A-1.
- Fixed sample code in Section B.2.3: array[] was declared as an array of char causing a compiler error (“Unaligned memory accesses not supported”) when casting array to a pointer of higher alignment requirement; declaring array[] as an array of float fixes it.
- Mentioned in Section B.11 that any atomic operation can be implemented based on atomic Compare And Swap.
- Added Section B.15 on the new malloc() and free() device functions.
- Moved the type casting functions to a separate section C.2.4.
- Fixed the maximum height of a 2D texture reference for devices of compute capability 2.x (65535 instead of 65536) in Section G.1.
- Fixed the maximum dimensions for surface references in Section G.1.
- Mentioned the new cudaThreadSetCacheConfig()/cuCtxSetCacheConfig() API calls in Section G.4.1.
- Mentioned in Section G.4.2 that global memory accesses that are cached in L2 only are serviced with 32-byte memory transactions.
Table of Contents
Chapter 1. Introduction ........................................................ 1
  1.1  From Graphics Processing to General-Purpose Parallel Computing ......... 1
  1.2  CUDA™: a General-Purpose Parallel Computing Architecture ............... 3
  1.3  A Scalable Programming Model ........................................... 4
  1.4  Document’s Structure ................................................... 6
Chapter 2. Programming Model ................................................... 7
  2.1  Kernels ................................................................ 7
  2.2  Thread Hierarchy ....................................................... 8
  2.3  Memory Hierarchy ...................................................... 10
  2.4  Heterogeneous Programming ............................................. 11
  2.5  Compute Capability .................................................... 14
Chapter 3. Programming Interface .............................................. 15
  3.1  Compilation with NVCC ................................................. 15
    3.1.1  Compilation Workflow .............................................. 16
    3.1.2  Binary Compatibility .............................................. 16
    3.1.3  PTX Compatibility ................................................. 16
    3.1.4  Application Compatibility ......................................... 17
    3.1.5  C/C++ Compatibility ............................................... 18
    3.1.6  64-Bit Compatibility .............................................. 18
  3.2  CUDA C ................................................................ 18
    3.2.1  Device Memory ..................................................... 19
    3.2.2  Shared Memory ..................................................... 21
    3.2.3  Multiple Devices .................................................. 28
    3.2.4  Texture and Surface Memory ........................................ 29
      3.2.4.1  Texture Memory ................................................ 29
      3.2.4.2  Surface Memory ................................................ 34
      3.2.4.3  CUDA Arrays ................................................... 36
      3.2.4.4  Read/Write Coherency .......................................... 36
    3.2.5  Page-Locked Host Memory ........................................... 36
      3.2.5.1  Portable Memory ............................................... 37
      3.2.5.2  Write-Combining Memory ........................................ 37
      3.2.5.3  Mapped Memory ................................................. 37
    3.2.6  Asynchronous Concurrent Execution ................................. 38
      3.2.6.1  Concurrent Execution between Host and Device .................. 38
      3.2.6.2  Overlap of Data Transfer and Kernel Execution ................. 38
      3.2.6.3  Concurrent Kernel Execution ................................... 38
      3.2.6.4  Concurrent Data Transfers ..................................... 39
      3.2.6.5  Stream ........................................................ 39
      3.2.6.6  Event ......................................................... 41
      3.2.6.7  Synchronous Calls ............................................. 42
    3.2.7  Graphics Interoperability ......................................... 42
      3.2.7.1  OpenGL Interoperability ....................................... 43
      3.2.7.2  Direct3D Interoperability ..................................... 45
    3.2.8  Error Handling .................................................... 51
    3.2.9  Call Stack ........................................................ 52
  3.3  Driver API ............................................................ 52
    3.3.1  Context ........................................................... 54
    3.3.2  Module ............................................................ 55
    3.3.3  Kernel Execution .................................................. 56
    3.3.4  Device Memory ..................................................... 58
    3.3.5  Shared Memory ..................................................... 61
    3.3.6  Multiple Devices .................................................. 62
    3.3.7  Texture and Surface Memory ........................................ 62
      3.3.7.1  Texture Memory ................................................ 62
      3.3.7.2  Surface Memory ................................................ 64
    3.3.8  Page-Locked Host Memory ........................................... 65
    3.3.9  Asynchronous Concurrent Execution ................................. 66
      3.3.9.1  Stream ........................................................ 66
      3.3.9.2  Event Management .............................................. 67
      3.3.9.3  Synchronous Calls ............................................. 67
    3.3.10  Graphics Interoperability ........................................ 67
      3.3.10.1  OpenGL Interoperability ...................................... 68
      3.3.10.2  Direct3D Interoperability .................................... 70
    3.3.11  Error Handling ................................................... 77
    3.3.12  Call Stack ....................................................... 77
  3.4  Interoperability between Runtime and Driver APIs ...................... 77
  3.5  Versioning and Compatibility .......................................... 78
  3.6  Compute Modes ......................................................... 79
  3.7  Mode Switches ......................................................... 79
Chapter 4. Hardware Implementation ............................................ 81
  4.1  SIMT Architecture ..................................................... 81
  4.2  Hardware Multithreading ............................................... 82
  4.3  Multiple Devices ...................................................... 83
Chapter 5. Performance Guidelines ............................................. 85
  5.1  Overall Performance Optimization Strategies ........................... 85
  5.2  Maximize Utilization .................................................. 85
    5.2.1  Application Level ................................................. 85
    5.2.2  Device Level ...................................................... 86
    5.2.3  Multiprocessor Level .............................................. 86
  5.3  Maximize Memory Throughput ............................................ 88
    5.3.1  Data Transfer between Host and Device ............................. 89
    5.3.2  Device Memory Accesses ............................................ 89
      5.3.2.1  Global Memory ................................................. 90
      5.3.2.2  Local Memory .................................................. 91
      5.3.2.3  Shared Memory ................................................. 92
      5.3.2.4  Constant Memory ............................................... 92
      5.3.2.5  Texture and Surface Memory .................................... 93
  5.4  Maximize Instruction Throughput ....................................... 93
    5.4.1  Arithmetic Instructions ........................................... 94
    5.4.2  Control Flow Instructions ......................................... 96
    5.4.3  Synchronization Instruction ....................................... 97
Appendix A. CUDA-Enabled GPUs ................................................. 99
Appendix B. C Language Extensions ............................................ 103
  B.1  Function Type Qualifiers ............................................. 103
    B.1.1  __device__ ....................................................... 103
    B.1.2  __global__ ....................................................... 103
    B.1.3  __host__ ......................................................... 103
    B.1.4  Restrictions ..................................................... 104
      B.1.4.1  Functions Parameters ......................................... 104
      B.1.4.2  Variadic Functions ........................................... 104
      B.1.4.3  Static Variables ............................................. 104
      B.1.4.4  Function Pointers ............................................ 104
      B.1.4.5  Recursion .................................................... 104
  B.2  Variable Type Qualifiers ............................................. 105
    B.2.1  __device__ ....................................................... 105
    B.2.2  __constant__ ..................................................... 105
    B.2.3  __shared__ ....................................................... 105
    B.2.4  Restrictions ..................................................... 106
      B.2.4.1  Storage and Scope ............................................ 106
      B.2.4.2  Assignment ................................................... 106
      B.2.4.3  Automatic Variable ........................................... 106
      B.2.4.4  Pointers ..................................................... 107
    B.2.5  volatile ......................................................... 107
  B.3  Built-in Vector Types ................................................ 108
    B.3.1  char1, uchar1, char2, uchar2, char3, uchar3, char4, uchar4, short1, ushort1, short2, ushort2, short3, ushort3, short4, ushort4, int1, uint1, int2, uint2, int3, uint3, int4, uint4, long1, ulong1, long2, ulong2, long3, ulong3, long4, ulong4, longlong1, ulonglong1, longlong2, ulonglong2, float1, float2, float3, float4, double1, double2 ....... 108
    B.3.2  dim3 ............................................................. 109
  B.4  Built-in Variables ................................................... 109
    B.4.1  gridDim .......................................................... 109
B.4.2
blockIdx .......................................................................................... 109
B.4.3
blockDim ......................................................................................... 109
B.4.4
threadIdx ........................................................................................ 109
B.4.5
warpSize ......................................................................................... 110
B.4.6
Restrictions ..................................................................................... 110
B.5
Memory Fence Functions ......................................................................... 110
CUDA C Programming Guide Version 3.2
vii
B.6
Synchronization Functions ....................................................................... 111
B.7
Mathematical Functions ........................................................................... 112
B.8
Texture Functions ................................................................................... 113
B.8.1
tex1Dfetch() .................................................................................... 113
B.8.2
tex1D() ........................................................................................... 114
B.8.3
tex2D() ........................................................................................... 114
B.8.4
tex3D() ........................................................................................... 114
B.9
Surface Functions ................................................................................... 114
B.9.1
surf1Dread() .................................................................................... 115
B.9.2
surf1Dwrite() ................................................................................... 115
B.9.3
surf2Dread() .................................................................................... 115
B.9.4
surf2Dwrite() ................................................................................... 115
B.10
Time Function ........................................................................................ 115
B.11
Atomic Functions .................................................................................... 116
B.11.1
B.11.1.1
atomicAdd() .............................................................................. 116
B.11.1.2
atomicSub() .............................................................................. 117
B.11.1.3
atomicExch() ............................................................................. 117
B.11.1.4
atomicMin() .............................................................................. 117
B.11.1.5
atomicMax().............................................................................. 117
B.11.1.6
atomicInc() ............................................................................... 117
B.11.1.7
atomicDec() .............................................................................. 118
B.11.1.8
atomicCAS() .............................................................................. 118
B.11.2
viii
Arithmetic Functions ......................................................................... 116
Bitwise Functions ............................................................................. 118
B.11.2.1
atomicAnd() .............................................................................. 118
B.11.2.2
atomicOr() ................................................................................ 118
B.11.2.3
atomicXor()............................................................................... 118
B.12
Warp Vote Functions............................................................................... 119
B.13
Profiler Counter Function ......................................................................... 119
B.14
Formatted Output ................................................................................... 119
B.14.1
Format Specifiers ............................................................................. 120
B.14.2
Limitations ...................................................................................... 120
B.14.3
Associated Host-Side API .................................................................. 121
CUDA C Programming Guide Version 3.2
B.14.4 B.15
Examples ........................................................................................ 121
Dynamic Global Memory Allocation ........................................................... 122
B.15.1
Heap Memory Allocation ................................................................... 123
B.15.2
Interoperability with Host Memory API ............................................... 123
B.15.3
Examples ........................................................................................ 123
B.15.3.1
Per Thread Allocation ................................................................. 123
B.15.3.2
Per Thread Block Allocation ........................................................ 124
B.15.3.3
Allocation Persisting Between Kernel Launches ............................. 125
B.16
Execution Configuration .......................................................................... 126
B.17
Launch Bounds ....................................................................................... 127
Appendix C. Mathematical Functions ........................................................... 129 C.1
Standard Functions ................................................................................. 129
C.1.1
Single-Precision Floating-Point Functions ............................................ 129
C.1.2
Double-Precision Floating-Point Functions .......................................... 132
C.1.3
Integer Functions ............................................................................. 134
C.2
Intrinsic Functions .................................................................................. 134
C.2.1
Single-Precision Floating-Point Functions ............................................ 134
C.2.2
Double-Precision Floating-Point Functions .......................................... 136
C.2.3
Integer Functions ............................................................................. 136
C.2.4
Type Casting Functions..................................................................... 137
Appendix D. C++ Language Constructs ....................................................... 139 D.1
Polymorphism ........................................................................................ 139
D.2
Default Parameters ................................................................................. 140
D.3
Operator Overloading.............................................................................. 140
D.4
Namespaces........................................................................................... 141
D.5
Function Templates ................................................................................ 141
D.6
Classes .................................................................................................. 142
D.6.1
Example 1 Pixel Data Type................................................................ 142
D.6.2
Example 2 Functor Class ................................................................... 143
Appendix E. NVCC Specifics ......................................................................... 145 E.1
__noinline__ and __forceinline__ ............................................................. 145
E.2
#pragma unroll ...................................................................................... 145
E.3
__restrict__ ........................................................................................... 146
CUDA C Programming Guide Version 3.2
ix
Appendix F. Texture Fetching ...................................................................... 149 F.1
Nearest-Point Sampling ........................................................................... 150
F.2
Linear Filtering ....................................................................................... 150
F.3
Table Lookup ......................................................................................... 152
Appendix G. Compute Capabilities ............................................................... 153 G.1
Features and Technical Specifications ....................................................... 154
G.2
Floating-Point Standard ........................................................................... 155
G.3
Compute Capability 1.x ........................................................................... 157
G.3.1
Architecture ..................................................................................... 157
G.3.2
Global Memory ................................................................................ 158
G.3.2.1
Devices of Compute Capability 1.0 and 1.1 .................................. 158
G.3.2.2
Devices of Compute Capability 1.2 and 1.3 .................................. 158
G.3.3
G.4
G.3.3.1
32-Bit Strided Access ................................................................. 159
G.3.3.2
32-Bit Broadcast Access ............................................................. 160
G.3.3.3
8-Bit and 16-Bit Access .............................................................. 160
G.3.3.4
Larger Than 32-Bit Access .......................................................... 160
Compute Capability 2.x ........................................................................... 161
G.4.1
Architecture ..................................................................................... 161
G.4.2
Global Memory ................................................................................ 163
G.4.3
Shared Memory ............................................................................... 165
G.4.3.1
32-Bit Strided Access ................................................................. 165
G.4.3.2
Larger Than 32-Bit Access .......................................................... 165
G.4.4
x
Shared Memory ............................................................................... 159
Constant Memory ............................................................................. 166
CUDA C Programming Guide Version 3.2
List of Figures
Figure 1-1. Floating-Point Operations per Second and Memory Bandwidth for the CPU and GPU 2 Figure 1-2.
The GPU Devotes More Transistors to Data Processing ............................ 3
Figure 1-3. CUDA is Designed to Support Various Languages or Application Programming Interfaces .................................................................................... 4 Figure 1-4.
Automatic Scalability ............................................................................ 5
Figure 2-1.
Grid of Thread Blocks ........................................................................... 9
Figure 2-2.
Memory Hierarchy .............................................................................. 11
Figure 2-3.
Heterogeneous Programming .............................................................. 13
Figure 3-1.
Matrix Multiplication without Shared Memory ........................................ 24
Figure 3-2.
Matrix Multiplication with Shared Memory ............................................ 28
Figure 3-3.
Library Context Management .............................................................. 55
Figure 3-4.
The Driver API is Backward, but Not Forward Compatible ...................... 79
Figure F-1.
Nearest-Point Sampling of a One-Dimensional Texture of Four Texels .. 150
Figure F-2. Linear Filtering of a One-Dimensional Texture of Four Texels in Clamp Addressing Mode........................................................................................... 151 Figure F-3.
One-Dimensional Table Lookup Using Linear Filtering .......................... 152
Figure G-1. Examples of Global Memory Accesses by a Warp, 4-Byte Word per Thread, and Associated Memory Transactions Based on Compute Capability .................. 164 Figure G-2 Examples of Strided Shared Memory Accesses for Devices of Compute Capability 2.x ................................................................................................ 167 Figure G-3 Examples of Irregular and Colliding Shared Memory Accesses for Devices of Compute Capability 2.x .............................................................................. 169
CUDA C Programming Guide Version 3.2
xi
Chapter 1. Introduction
1.1  From Graphics Processing to General-Purpose Parallel Computing

Driven by the insatiable market demand for realtime, high-definition 3D graphics, the programmable Graphics Processing Unit (GPU) has evolved into a highly parallel, multithreaded, manycore processor with tremendous computational horsepower and very high memory bandwidth, as illustrated by Figure 1-1.
Figure 1-1. Floating-Point Operations per Second and Memory Bandwidth for the CPU and GPU
The reason behind the discrepancy in floating-point capability between the CPU and the GPU is that the GPU is specialized for compute-intensive, highly parallel computation – exactly what graphics rendering is about – and therefore designed such that more transistors are devoted to data processing rather than data caching and flow control, as schematically illustrated by Figure 1-2.

Figure 1-2. The GPU Devotes More Transistors to Data Processing

More specifically, the GPU is especially well-suited to address problems that can be expressed as data-parallel computations – the same program is executed on many data elements in parallel – with high arithmetic intensity – the ratio of arithmetic operations to memory operations. Because the same program is executed for each data element, there is a lower requirement for sophisticated flow control, and because it is executed on many data elements and has high arithmetic intensity, the memory access latency can be hidden with calculations instead of big data caches.

Data-parallel processing maps data elements to parallel processing threads. Many applications that process large data sets can use a data-parallel programming model to speed up the computations. In 3D rendering, large sets of pixels and vertices are mapped to parallel threads. Similarly, image and media processing applications such as post-processing of rendered images, video encoding and decoding, image scaling, stereo vision, and pattern recognition can map image blocks and pixels to parallel processing threads. In fact, many algorithms outside the field of image rendering and processing are accelerated by data-parallel processing, from general signal processing or physics simulation to computational finance or computational biology.
1.2  CUDA™: a General-Purpose Parallel Computing Architecture

In November 2006, NVIDIA introduced CUDA™, a general-purpose parallel computing architecture – with a new parallel programming model and instruction set architecture – that leverages the parallel compute engine in NVIDIA GPUs to solve many complex computational problems in a more efficient way than on a CPU.

CUDA comes with a software environment that allows developers to use C as a high-level programming language. As illustrated by Figure 1-3, other languages or application programming interfaces are supported, such as CUDA FORTRAN, OpenCL, and DirectCompute.
Figure 1-3. CUDA is Designed to Support Various Languages or Application Programming Interfaces
1.3  A Scalable Programming Model

The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore's law. The challenge is to develop application software that transparently scales its parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to manycore GPUs with widely varying numbers of cores.

The CUDA parallel programming model is designed to overcome this challenge while maintaining a low learning curve for programmers familiar with standard programming languages such as C.

At its core are three key abstractions – a hierarchy of thread groups, shared memories, and barrier synchronization – that are simply exposed to the programmer as a minimal set of language extensions.

These abstractions provide fine-grained data parallelism and thread parallelism, nested within coarse-grained data parallelism and task parallelism. They guide the programmer to partition the problem into coarse sub-problems that can be solved independently in parallel by blocks of threads, and each sub-problem into finer pieces that can be solved cooperatively in parallel by all threads within the block. This decomposition preserves language expressivity by allowing threads to cooperate when solving each sub-problem, and at the same time enables automatic scalability. Indeed, each block of threads can be scheduled on any of the available processor cores, in any order, concurrently or sequentially, so that a compiled CUDA program can execute on any number of processor cores as illustrated by Figure 1-4, and only the runtime system needs to know the physical processor count.

This scalable programming model allows the CUDA architecture to span a wide market range by simply scaling the number of processors and memory partitions: from the high-performance enthusiast GeForce GPUs and professional Quadro and Tesla computing products to a variety of inexpensive, mainstream GeForce GPUs (see Appendix A for a list of all CUDA-enabled GPUs).
Figure 1-4. Automatic Scalability

A multithreaded program is partitioned into blocks of threads that execute independently from each other, so that a GPU with more cores will automatically execute the program in less time than a GPU with fewer cores.
1.4  Document's Structure

This document is organized into the following chapters:

  Chapter 1 is a general introduction to CUDA.
  Chapter 2 outlines the CUDA programming model.
  Chapter 3 describes the programming interface.
  Chapter 4 describes the hardware implementation.
  Chapter 5 gives some guidance on how to achieve maximum performance.
  Appendix A lists all CUDA-enabled devices.
  Appendix B is a detailed description of all extensions to the C language.
  Appendix C lists the mathematical functions supported in CUDA.
  Appendix D lists the C++ constructs supported in device code.
  Appendix E lists the specific keywords and directives supported by nvcc.
  Appendix F gives more details on texture fetching.
  Appendix G gives the technical specifications of various devices, as well as more architectural details.
Chapter 2. Programming Model
This chapter introduces the main concepts behind the CUDA programming model by outlining how they are exposed in C. An extensive description of CUDA C is given in Section 3.2. Full code for the vector addition example used in this chapter and the next can be found in the vectorAdd SDK code sample.
2.1  Kernels

CUDA C extends C by allowing the programmer to define C functions, called kernels, that, when called, are executed N times in parallel by N different CUDA threads, as opposed to only once like regular C functions.

A kernel is defined using the __global__ declaration specifier and the number of CUDA threads that execute that kernel for a given kernel call is specified using a new <<<...>>> execution configuration syntax (see Appendix B.16). Each thread that executes the kernel is given a unique thread ID that is accessible within the kernel through the built-in threadIdx variable.

As an illustration, the following sample code adds two vectors A and B of size N and stores the result into vector C:

    // Kernel definition
    __global__ void VecAdd(float* A, float* B, float* C)
    {
        int i = threadIdx.x;
        C[i] = A[i] + B[i];
    }

    int main()
    {
        ...
        // Kernel invocation with N threads
        VecAdd<<<1, N>>>(A, B, C);
    }
Here, each of the N threads that execute VecAdd() performs one pair-wise addition.
2.2  Thread Hierarchy

For convenience, threadIdx is a 3-component vector, so that threads can be identified using a one-dimensional, two-dimensional, or three-dimensional thread index, forming a one-dimensional, two-dimensional, or three-dimensional thread block. This provides a natural way to invoke computation across the elements in a domain such as a vector, matrix, or volume.

The index of a thread and its thread ID relate to each other in a straightforward way: for a one-dimensional block, they are the same; for a two-dimensional block of size (Dx, Dy), the thread ID of a thread of index (x, y) is (x + y Dx); for a three-dimensional block of size (Dx, Dy, Dz), the thread ID of a thread of index (x, y, z) is (x + y Dx + z Dx Dy).

As an example, the following code adds two matrices A and B of size NxN and stores the result into matrix C:

    // Kernel definition
    __global__ void MatAdd(float A[N][N], float B[N][N],
                           float C[N][N])
    {
        int i = threadIdx.x;
        int j = threadIdx.y;
        C[i][j] = A[i][j] + B[i][j];
    }

    int main()
    {
        ...
        // Kernel invocation with one block of N * N * 1 threads
        int numBlocks = 1;
        dim3 threadsPerBlock(N, N);
        MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    }
There is a limit to the number of threads per block, since all threads of a block are expected to reside on the same processor core and must share the limited memory resources of that core. On current GPUs, a thread block may contain up to 1024 threads. However, a kernel can be executed by multiple equally-shaped thread blocks, so that the total number of threads is equal to the number of threads per block times the number of blocks. Blocks are organized into a one-dimensional or two-dimensional grid of thread blocks as illustrated by Figure 2-1. The number of thread blocks in a grid is usually dictated by the size of the data being processed or the number of processors in the system, which it can greatly exceed.
Figure 2-1. Grid of Thread Blocks

The number of threads per block and the number of blocks per grid specified in the <<<...>>> syntax can be of type int or dim3. Two-dimensional blocks or grids can be specified as in the example above.

Each block within the grid can be identified by a one-dimensional or two-dimensional index accessible within the kernel through the built-in blockIdx variable. The dimension of the thread block is accessible within the kernel through the built-in blockDim variable. Extending the previous MatAdd() example to handle multiple blocks, the code becomes as follows.

    // Kernel definition
    __global__ void MatAdd(float A[N][N], float B[N][N],
                           float C[N][N])
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        if (i < N && j < N)
            C[i][j] = A[i][j] + B[i][j];
    }

    int main()
    {
        ...
        // Kernel invocation
        dim3 threadsPerBlock(16, 16);
        dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
        MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    }
A thread block size of 16x16 (256 threads), although arbitrary in this case, is a common choice. The grid is created with enough blocks to have one thread per matrix element as before. For simplicity, this example assumes that the number of threads per grid in each dimension is evenly divisible by the number of threads per block in that dimension, although that need not be the case.

Thread blocks are required to execute independently: It must be possible to execute them in any order, in parallel or in series. This independence requirement allows thread blocks to be scheduled in any order across any number of cores as illustrated by Figure 1-4, enabling programmers to write code that scales with the number of cores.

Threads within a block can cooperate by sharing data through some shared memory and by synchronizing their execution to coordinate memory accesses. More precisely, one can specify synchronization points in the kernel by calling the __syncthreads() intrinsic function; __syncthreads() acts as a barrier at which all threads in the block must wait before any is allowed to proceed. Section 3.2.2 gives an example of using shared memory. For efficient cooperation, the shared memory is expected to be a low-latency memory near each processor core (much like an L1 cache) and __syncthreads() is expected to be lightweight.
2.3  Memory Hierarchy

CUDA threads may access data from multiple memory spaces during their execution as illustrated by Figure 2-2. Each thread has private local memory. Each thread block has shared memory visible to all threads of the block and with the same lifetime as the block. All threads have access to the same global memory.

There are also two additional read-only memory spaces accessible by all threads: the constant and texture memory spaces. The global, constant, and texture memory spaces are optimized for different memory usages (see Sections 5.3.2.1, 5.3.2.4, and 5.3.2.5). Texture memory also offers different addressing modes, as well as data filtering, for some specific data formats (see Section 3.2.4).

The global, constant, and texture memory spaces are persistent across kernel launches by the same application.
Figure 2-2. Memory Hierarchy
2.4  Heterogeneous Programming

As illustrated by Figure 2-3, the CUDA programming model assumes that the CUDA threads execute on a physically separate device that operates as a coprocessor to the host running the C program. This is the case, for example, when the kernels execute on a GPU and the rest of the C program executes on a CPU.
The CUDA programming model also assumes that both the host and the device maintain their own separate memory spaces in DRAM, referred to as host memory and device memory, respectively. Therefore, a program manages the global, constant, and texture memory spaces visible to kernels through calls to the CUDA runtime (described in Chapter 3). This includes device memory allocation and deallocation as well as data transfer between host and device memory.
Figure 2-3. Heterogeneous Programming

Serial code executes on the host while parallel code executes on the device.
2.5  Compute Capability

The compute capability of a device is defined by a major revision number and a minor revision number.

Devices with the same major revision number are of the same core architecture. The major revision number of devices based on the Fermi architecture is 2. Prior devices are all of compute capability 1.x (their major revision number is 1).

The minor revision number corresponds to an incremental improvement to the core architecture, possibly including new features.

Appendix A lists all CUDA-enabled devices along with their compute capability. Appendix G gives the technical specifications of each compute capability.
Chapter 3. Programming Interface
Two interfaces are currently supported to write CUDA programs: CUDA C and the CUDA driver API. An application typically uses either one or the other, but it can use both as described in Section 3.4.

CUDA C exposes the CUDA programming model as a minimal set of extensions to the C language. Any source file that contains some of these extensions must be compiled with nvcc as outlined in Section 3.1. These extensions allow programmers to define a kernel as a C function and use some new syntax to specify the grid and block dimension each time the function is called.

The CUDA driver API is a lower-level C API that provides functions to load kernels as modules of CUDA binary or assembly code, to inspect their parameters, and to launch them. Binary and assembly codes are usually obtained by compiling kernels written in C.

CUDA C comes with a runtime API and both the runtime API and the driver API provide functions to allocate and deallocate device memory, transfer data between host memory and device memory, manage systems with multiple devices, etc.

The runtime API is built on top of the CUDA driver API. Initialization, context, and module management are all implicit and resulting code is more concise. In contrast, the CUDA driver API requires more code, is harder to program and debug, but offers a better level of control and is language-independent since it handles binary or assembly code.

Section 3.2 continues the description of CUDA C started in Chapter 2. It also introduces concepts that are common to both CUDA C and the driver API: linear memory, CUDA arrays, shared memory, texture memory, page-locked host memory, device enumeration, asynchronous execution, interoperability with graphics APIs. Section 3.3 assumes knowledge of these concepts and describes how they are exposed by the driver API.
3.1 Compilation with NVCC

Kernels can be written using the CUDA instruction set architecture, called PTX, which is described in the PTX reference manual. It is, however, usually more
effective to use a high-level programming language such as C. In both cases, kernels must be compiled into binary code by nvcc to execute on the device. nvcc is a compiler driver that simplifies the process of compiling C or PTX code: It
provides simple and familiar command line options and executes them by invoking the collection of tools that implement the different compilation stages. This section gives an overview of nvcc workflow and command options. A complete description can be found in the nvcc user manual.
3.1.1 Compilation Workflow

Source files compiled with nvcc can include a mix of host code (i.e. code that executes on the host) and device code (i.e. code that executes on the device). nvcc's basic workflow consists of separating device code from host code and compiling the device code into an assembly form (PTX code) and/or binary form (cubin object). The generated host code is output either as C code that is left to be compiled using another tool or as object code directly by letting nvcc invoke the host compiler during the last compilation stage.

Applications can then:
- Either load and execute the PTX code or cubin object on the device using the CUDA driver API (see Section 3.3) and ignore the generated host code (if any);
- Or link to the generated host code; the generated host code includes the PTX code and/or cubin object as a global initialized data array and a translation of the <<<…>>> syntax introduced in Section 2.1 (and described in more details in Section B.16) into the necessary CUDA C runtime function calls to load and launch each compiled kernel.

Any PTX code loaded by an application at runtime is compiled further to binary code by the device driver. This is called just-in-time compilation. Just-in-time compilation increases application load time, but allows applications to benefit from the latest compiler improvements. It is also the only way for applications to run on devices that did not exist at the time the application was compiled, as detailed in Section 3.1.4.
3.1.2 Binary Compatibility

Binary code is architecture-specific. A cubin object is generated using the compiler option -code that specifies the targeted architecture: for example, compiling with -code=sm_13 produces binary code for devices of compute capability 1.3. Binary compatibility is guaranteed from one minor revision to the next one, but not from one minor revision to the previous one or across major revisions. In other words, a cubin object generated for compute capability X.y is only guaranteed to execute on devices of compute capability X.z where z≥y.
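The compatibility rule above reduces to a two-line check. The following host-side C helper is an illustrative sketch of ours, not a CUDA API:

```c
#include <stdbool.h>

/* A cubin built for compute capability X.y is only guaranteed to run
   on a device of capability X.z when the major revisions match and
   the device's minor revision is at least the one the cubin targets. */
static bool cubin_runs_on(int built_major, int built_minor,
                          int dev_major, int dev_minor)
{
    return built_major == dev_major && dev_minor >= built_minor;
}
```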
3.1.3 PTX Compatibility

Some PTX instructions are only supported on devices of higher compute capabilities. For example, atomic instructions on global memory are only supported
on devices of compute capability 1.1 and above; double-precision instructions are only supported on devices of compute capability 1.3 and above. The -arch compiler option specifies the compute capability that is assumed when compiling C to PTX code. So, code that contains double-precision arithmetic, for example, must be compiled with "-arch=sm_13" (or higher compute capability), otherwise double-precision arithmetic will get demoted to single-precision arithmetic. PTX code produced for some specific compute capability can always be compiled to binary code of greater or equal compute capability.
3.1.4 Application Compatibility

To execute code on devices of specific compute capability, an application must load binary or PTX code that is compatible with this compute capability as described in Sections 3.1.2 and 3.1.3. In particular, to be able to execute code on future architectures with higher compute capability – for which no binary code can be generated yet – an application must load PTX code that will be compiled just-in-time for these devices.

Which PTX and binary code gets embedded in a CUDA C application is controlled by the -arch and -code compiler options or the -gencode compiler option as detailed in the nvcc user manual. For example,

nvcc x.cu
        -gencode arch=compute_10,code=sm_10
        -gencode arch=compute_11,code=\'compute_11,sm_11\'

embeds binary code compatible with compute capability 1.0 (first -gencode option) and PTX and binary code compatible with compute capability 1.1 (second -gencode option).

Host code is generated to automatically select at runtime the most appropriate code to load and execute, which, in the above example, will be:
- 1.0 binary code for devices with compute capability 1.0,
- 1.1 binary code for devices with compute capability 1.1, 1.2, 1.3,
- binary code obtained by compiling 1.1 PTX code for devices with compute capabilities 2.0 and higher.

x.cu can have an optimized code path that uses atomic operations, for example, which are only supported in devices of compute capability 1.1 and higher. The __CUDA_ARCH__ macro can be used to differentiate various code paths based on compute capability. It is only defined for device code. When compiling with "-arch=compute_11" for example, __CUDA_ARCH__ is equal to 110.
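The __CUDA_ARCH__ dispatch can be mimicked in plain C by defining the macro by hand; in real device code nvcc defines it automatically (and leaves it undefined in host code). A hypothetical sketch:

```c
/* Pretend we compiled device code with -arch=compute_11;
   nvcc would then predefine __CUDA_ARCH__ as 110. */
#ifndef __CUDA_ARCH__
#define __CUDA_ARCH__ 110
#endif

static int use_atomic_path(void)
{
#if __CUDA_ARCH__ >= 110
    return 1;   /* optimized path: global memory atomics available */
#else
    return 0;   /* fallback path for compute capability 1.0 */
#endif
}
```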
Applications using the driver API must compile code to separate files and explicitly load and execute the most appropriate file at runtime.

The nvcc user manual lists various shorthands for the -arch, -code, and -gencode compiler options. For example, "-arch=sm_13" is a shorthand for "-arch=compute_13 -code=compute_13,sm_13" (which is the same as "-gencode arch=compute_13,code=\'compute_13,sm_13\'").
3.1.5 C/C++ Compatibility

The front end of the compiler processes CUDA source files according to C++ syntax rules. Full C++ is supported for the host code. However, only a subset of C++ is fully supported for the device code as described in detail in Appendix D. As a consequence of the use of C++ syntax rules, void pointers (e.g., returned by malloc()) cannot be assigned to non-void pointers without a typecast.

nvcc also supports specific keywords and directives detailed in Appendix E.
3.1.6 64-Bit Compatibility

The 64-bit version of nvcc compiles device code in 64-bit mode (i.e. pointers are 64-bit). Device code compiled in 64-bit mode is only supported with host code compiled in 64-bit mode. Similarly, the 32-bit version of nvcc compiles device code in 32-bit mode and device code compiled in 32-bit mode is only supported with host code compiled in 32-bit mode.

The 32-bit version of nvcc can also compile device code in 64-bit mode using the -m64 compiler option. The 64-bit version of nvcc can also compile device code in 32-bit mode using the -m32 compiler option.
3.2 CUDA C

CUDA C provides a simple path for users familiar with the C programming language to easily write programs for execution by the device. It consists of a minimal set of extensions to the C language and a runtime library. The core language extensions have been introduced in Chapter 2. This section continues with an introduction to the runtime. A complete description of all extensions can be found in Appendix B and a complete description of the runtime in the CUDA reference manual.

The runtime is implemented in the cudart dynamic library and all its entry points are prefixed with cuda.

There is no explicit initialization function for the runtime; it initializes the first time a runtime function is called (more specifically, any function other than functions from the device and version management sections of the reference manual). One needs to keep this in mind when timing runtime function calls and when interpreting the error code from the first call into the runtime.

Once the runtime has been initialized in a host thread, any resource (memory, stream, event, etc.) allocated via some runtime function call in the host thread is only valid within the context of the host thread. Therefore, only runtime function calls made by the host thread (memory copies, kernel launches, etc.) can operate on these resources. This is because a CUDA context (see Section 3.3.1) is created under
the hood as part of initialization and made current to the host thread, and it cannot be made current to any other host thread. On systems with multiple devices, kernels are executed on device 0 by default, as detailed in Section 3.2.3.
3.2.1 Device Memory

As mentioned in Section 2.4, the CUDA programming model assumes a system composed of a host and a device, each with their own separate memory. Kernels can only operate out of device memory, so the runtime provides functions to allocate, deallocate, and copy device memory, as well as transfer data between host memory and device memory.

Device memory can be allocated either as linear memory or as CUDA arrays. CUDA arrays are opaque memory layouts optimized for texture fetching. They are described in Section 3.2.4.

Linear memory exists on the device in a 32-bit address space for devices of compute capability 1.x and a 40-bit address space for devices of compute capability 2.x, so separately allocated entities can reference one another via pointers, for example, in a binary tree.

Linear memory is typically allocated using cudaMalloc() and freed using cudaFree(), and data transfer between host memory and device memory is typically done using cudaMemcpy(). In the vector addition code sample of Section 2.1, the vectors need to be copied from host memory to device memory:

// Device code
__global__ void VecAdd(float* A, float* B, float* C, int N)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < N)
        C[i] = A[i] + B[i];
}

// Host code
int main()
{
    int N = ...;
    size_t size = N * sizeof(float);

    // Allocate input vectors h_A and h_B in host memory
    float* h_A = (float*)malloc(size);
    float* h_B = (float*)malloc(size);

    // Initialize input vectors
    ...

    // Allocate vectors in device memory
    float* d_A;
    cudaMalloc(&d_A, size);
    float* d_B;
    cudaMalloc(&d_B, size);
    float* d_C;
    cudaMalloc(&d_C, size);

    // Copy vectors from host memory to device memory
    cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);

    // Invoke kernel
    int threadsPerBlock = 256;
    int blocksPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;
    VecAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, N);

    // Copy result from device memory to host memory
    // h_C contains the result in host memory
    cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A);
    cudaFree(d_B);
    cudaFree(d_C);

    // Free host memory
    ...
}
Linear memory can also be allocated through cudaMallocPitch() and cudaMalloc3D(). These functions are recommended for allocations of 2D or 3D arrays as they make sure that the allocation is appropriately padded to meet the alignment requirements described in Section 5.3.2.1, therefore ensuring best performance when accessing the row addresses or performing copies between 2D arrays and other regions of device memory (using the cudaMemcpy2D() and cudaMemcpy3D() functions). The returned pitch (or stride) must be used to access array elements. The following code sample allocates a width×height 2D array of floating-point values and shows how to loop over the array elements in device code:

// Host code
int width = 64, height = 64;
float* devPtr;
size_t pitch;
cudaMallocPitch(&devPtr, &pitch, width * sizeof(float), height);
MyKernel<<<100, 512>>>(devPtr, pitch, width, height);

// Device code
__global__ void MyKernel(float* devPtr, size_t pitch,
                         int width, int height)
{
    for (int r = 0; r < height; ++r) {
        float* row = (float*)((char*)devPtr + r * pitch);
        for (int c = 0; c < width; ++c) {
            float element = row[c];
        }
    }
}
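Both the pitched addressing used by MyKernel and the usual (N + threadsPerBlock - 1) / threadsPerBlock grid-sizing expression are plain integer arithmetic that can be checked on the host. An illustrative C sketch (the helper names are ours):

```c
#include <stddef.h>

/* Address of element (r, c) in a pitched 2D float allocation,
   exactly the arithmetic MyKernel performs per row. */
static float* pitched_elem(void* base, size_t pitch, int r, int c)
{
    return (float*)((char*)base + (size_t)r * pitch) + c;
}

/* Number of thread blocks needed to cover n elements
   (ceiling division). */
static int blocks_for(int n, int threads_per_block)
{
    return (n + threads_per_block - 1) / threads_per_block;
}
```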
The following code sample allocates a width×height×depth 3D array of floating-point values and shows how to loop over the array elements in device code:
// Host code
int width = 64, height = 64, depth = 64;
cudaExtent extent = make_cudaExtent(width * sizeof(float),
                                    height, depth);
cudaPitchedPtr devPitchedPtr;
cudaMalloc3D(&devPitchedPtr, extent);
MyKernel<<<100, 512>>>(devPitchedPtr, width, height, depth);

// Device code
__global__ void MyKernel(cudaPitchedPtr devPitchedPtr,
                         int width, int height, int depth)
{
    char* devPtr = (char*)devPitchedPtr.ptr;
    size_t pitch = devPitchedPtr.pitch;
    size_t slicePitch = pitch * height;
    for (int z = 0; z < depth; ++z) {
        char* slice = devPtr + z * slicePitch;
        for (int y = 0; y < height; ++y) {
            float* row = (float*)(slice + y * pitch);
            for (int x = 0; x < width; ++x) {
                float element = row[x];
            }
        }
    }
}
The reference manual lists all the various functions used to copy memory between linear memory allocated with cudaMalloc(), linear memory allocated with cudaMallocPitch() or cudaMalloc3D(), CUDA arrays, and memory allocated for variables declared in global or constant memory space.

The following code sample illustrates various ways of accessing global variables via the runtime API:

__constant__ float constData[256];
float data[256];
cudaMemcpyToSymbol(constData, data, sizeof(data));
cudaMemcpyFromSymbol(data, constData, sizeof(data));

__device__ float devData;
float value = 3.14f;
cudaMemcpyToSymbol(devData, &value, sizeof(float));

__device__ float* devPointer;
float* ptr;
cudaMalloc(&ptr, 256 * sizeof(float));
cudaMemcpyToSymbol(devPointer, &ptr, sizeof(ptr));
cudaGetSymbolAddress() is used to retrieve the address pointing to the
memory allocated for a variable declared in global memory space. The size of the allocated memory is obtained through cudaGetSymbolSize().
3.2.2 Shared Memory

As detailed in Section B.2, shared memory is allocated using the __shared__ qualifier.
Shared memory is expected to be much faster than global memory, as mentioned in Section 2.2 and detailed in Section 5.3.2.3. Any opportunity to replace global memory accesses by shared memory accesses should therefore be exploited, as illustrated by the following matrix multiplication example.

The following code sample is a straightforward implementation of matrix multiplication that does not take advantage of shared memory. Each thread reads one row of A and one column of B and computes the corresponding element of C, as illustrated in Figure 3-1. A is therefore read B.width times from global memory and B is read A.height times.

// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.width + col)
typedef struct {
    int width;
    int height;
    float* elements;
} Matrix;

// Thread block size
#define BLOCK_SIZE 16

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    // Load A and B to device memory
    Matrix d_A;
    d_A.width = A.width; d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size,
               cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = B.width; d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size,
               cudaMemcpyHostToDevice);

    // Allocate C in device memory
    Matrix d_C;
    d_C.width = C.width; d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);

    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size,
               cudaMemcpyDeviceToHost);
    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
    // Each thread computes one element of C
    // by accumulating results into Cvalue
    float Cvalue = 0;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    for (int e = 0; e < A.width; ++e)
        Cvalue += A.elements[row * A.width + e]
                * B.elements[e * B.width + col];
    C.elements[row * C.width + col] = Cvalue;
}
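Kernels like MatMulKernel are commonly validated against a host-side reference. The following plain C version is a sketch of our own (not part of the guide's sample) that performs the same row-by-column accumulation:

```c
/* Reference: C = A * B for row-major A (A_height x A_width)
   and B (A_width x B_width). */
static void matmul_ref(const float* A, const float* B, float* C,
                       int A_height, int A_width, int B_width)
{
    for (int row = 0; row < A_height; ++row)
        for (int col = 0; col < B_width; ++col) {
            float v = 0.0f;
            for (int e = 0; e < A_width; ++e)
                v += A[row * A_width + e] * B[e * B_width + col];
            C[row * B_width + col] = v;
        }
}
```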
Figure 3-1. Matrix Multiplication without Shared Memory

The following code sample is an implementation of matrix multiplication that does take advantage of shared memory. In this implementation, each thread block is responsible for computing one square sub-matrix Csub of C and each thread within the block is responsible for computing one element of Csub. As illustrated in Figure 3-2, Csub is equal to the product of two rectangular matrices: the sub-matrix of A of dimension (A.width, block_size) that has the same line indices as Csub, and the sub-matrix of B of dimension (block_size, A.width) that has the same column indices as Csub. In order to fit into the device's resources, these two rectangular matrices are divided into as many square matrices of dimension block_size as necessary and Csub is computed as the sum of the products of these square matrices. Each of these products is performed by first loading the two corresponding square matrices from global memory to shared memory with one thread loading one element of each matrix, and then by having each thread compute one element of the product. Each thread accumulates the result of each of these products into a register and once done writes the result to global memory.
By blocking the computation this way, we take advantage of fast shared memory and save a lot of global memory bandwidth since A is only read (B.width / block_size) times from global memory and B is read (A.height / block_size) times.

The Matrix type from the previous code sample is augmented with a stride field, so that sub-matrices can be efficiently represented with the same type. __device__ functions (see Section B.1.1) are used to get and set elements and build any sub-matrix from a matrix.

// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
    int width;
    int height;
    int stride;
    float* elements;
} Matrix;

// Thread block size
#define BLOCK_SIZE 16

// Get a matrix element
__device__ float GetElement(const Matrix A, int row, int col)
{
    return A.elements[row * A.stride + col];
}

// Set a matrix element
__device__ void SetElement(Matrix A, int row, int col, float value)
{
    A.elements[row * A.stride + col] = value;
}

// Get the BLOCK_SIZExBLOCK_SIZE sub-matrix Asub of A that is
// located col sub-matrices to the right and row sub-matrices down
// from the upper-left corner of A
__device__ Matrix GetSubMatrix(Matrix A, int row, int col)
{
    Matrix Asub;
    Asub.width = BLOCK_SIZE;
    Asub.height = BLOCK_SIZE;
    Asub.stride = A.stride;
    Asub.elements = &A.elements[A.stride * BLOCK_SIZE * row
                                + BLOCK_SIZE * col];
    return Asub;
}

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    // Load A and B to device memory
    Matrix d_A;
    d_A.width = d_A.stride = A.width; d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size,
               cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = d_B.stride = B.width; d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size,
               cudaMemcpyHostToDevice);

    // Allocate C in device memory
    Matrix d_C;
    d_C.width = d_C.stride = C.width; d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);

    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size,
               cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
    // Block row and column
    int blockRow = blockIdx.y;
    int blockCol = blockIdx.x;

    // Each thread block computes one sub-matrix Csub of C
    Matrix Csub = GetSubMatrix(C, blockRow, blockCol);

    // Each thread computes one element of Csub
    // by accumulating results into Cvalue
    float Cvalue = 0;

    // Thread row and column within Csub
    int row = threadIdx.y;
    int col = threadIdx.x;

    // Loop over all the sub-matrices of A and B that are
    // required to compute Csub
    // Multiply each pair of sub-matrices together
    // and accumulate the results
    for (int m = 0; m < (A.width / BLOCK_SIZE); ++m) {
        // Get sub-matrix Asub of A
        Matrix Asub = GetSubMatrix(A, blockRow, m);

        // Get sub-matrix Bsub of B
        Matrix Bsub = GetSubMatrix(B, m, blockCol);

        // Shared memory used to store Asub and Bsub respectively
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load Asub and Bsub from device memory to shared memory
        // Each thread loads one element of each sub-matrix
        As[row][col] = GetElement(Asub, row, col);
        Bs[row][col] = GetElement(Bsub, row, col);

        // Synchronize to make sure the sub-matrices are loaded
        // before starting the computation
        __syncthreads();

        // Multiply Asub and Bsub together
        for (int e = 0; e < BLOCK_SIZE; ++e)
            Cvalue += As[row][e] * Bs[e][col];

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write Csub to device memory
    // Each thread writes one element
    SetElement(Csub, row, col, Cvalue);
}
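The tiling scheme can be prototyped on the CPU before writing the kernel: iterate over square tiles of dimension TILE and accumulate partial products, mirroring what each thread block does. A hypothetical C sketch of ours, assuming the matrix dimension is a multiple of the tile size:

```c
#define TILE 2

/* Blocked C = A * B; all matrices are n x n, row-major,
   with n a multiple of TILE. The (bi, bj) loops play the role
   of the grid of thread blocks; m walks the tiles along A.width. */
static void matmul_blocked(const float* A, const float* B,
                           float* C, int n)
{
    for (int i = 0; i < n * n; ++i) C[i] = 0.0f;
    for (int bi = 0; bi < n; bi += TILE)          /* blockRow */
        for (int bj = 0; bj < n; bj += TILE)      /* blockCol */
            for (int m = 0; m < n; m += TILE)     /* tile index */
                for (int r = bi; r < bi + TILE; ++r)
                    for (int c = bj; c < bj + TILE; ++c)
                        for (int e = m; e < m + TILE; ++e)
                            C[r * n + c] += A[r * n + e] * B[e * n + c];
}
```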
Figure 3-2. Matrix Multiplication with Shared Memory
3.2.3 Multiple Devices

A host system can have multiple devices. These devices can be enumerated, their properties can be queried, and one of them can be selected for kernel executions.

Several host threads can execute device code on the same device, but by design, a host thread can execute device code on only one device at any given time. As a consequence, multiple host threads are required to execute device code on multiple devices. Also, any CUDA resources created through the runtime in one host thread cannot be used by the runtime from another host thread.

The following code sample enumerates all devices in the system and retrieves their properties. It also determines the number of CUDA-enabled devices.

int deviceCount;
cudaGetDeviceCount(&deviceCount);
int device;
for (device = 0; device < deviceCount; ++device) {
    cudaDeviceProp deviceProp;
    cudaGetDeviceProperties(&deviceProp, device);
    if (device == 0) {
        if (deviceProp.major == 9999 && deviceProp.minor == 9999)
            printf("There is no device supporting CUDA.\n");
        else if (deviceCount == 1)
            printf("There is 1 device supporting CUDA\n");
        else
            printf("There are %d devices supporting CUDA\n",
                   deviceCount);
    }
}
By default, the device associated with the host thread is implicitly selected as device 0 as soon as a non-device-management runtime function is called (see Section 3.6 for exceptions). Any other device can be selected by calling cudaSetDevice() first. After a device has been selected, either implicitly or explicitly, any subsequent explicit call to cudaSetDevice() will fail until cudaThreadExit() is called. cudaThreadExit() cleans up all runtime-related resources associated with the calling host thread. Any subsequent API call reinitializes the runtime.
3.2.4 Texture and Surface Memory

CUDA supports a subset of the texturing hardware that the GPU uses for graphics to access texture and surface memory. Reading data from texture or surface memory instead of global memory can have several performance benefits as described in Section 5.3.2.5.
3.2.4.1 Texture Memory

Texture memory is read from kernels using device functions called texture fetches, described in Section B.8. The first parameter of a texture fetch specifies an object called a texture reference.

A texture reference defines which part of texture memory is fetched. As detailed in Section 3.2.4.1.3, it must be bound through runtime functions to some region of memory, called a texture, before it can be used by a kernel. Several distinct texture references might be bound to the same texture or to textures that overlap in memory.

A texture reference has several attributes. One of them is its dimensionality, which specifies whether the texture is addressed as a one-dimensional array using one texture coordinate, a two-dimensional array using two texture coordinates, or a three-dimensional array using three texture coordinates. Elements of the array are called texels, short for "texture elements." Other attributes define the input and output data types of the texture fetch, as well as how the input coordinates are interpreted and what processing should be done.

A texture can be any region of linear memory or a CUDA array (described in Section 3.2.4.3).

Section G.1 lists the maximum texture width, height, and depth depending on the compute capability of the device.
3.2.4.1.1 Texture Reference Declaration

Some of the attributes of a texture reference are immutable and must be known at compile time; they are specified when declaring the texture reference. A texture reference is declared at file scope as a variable of type texture:

texture<Type, Dim, ReadMode> texRef;
where:
- Type specifies the type of data that is returned when fetching the texture; Type is restricted to the basic integer and single-precision floating-point types and any of the 1-, 2-, and 4-component vector types defined in Section B.3.1;
- Dim specifies the dimensionality of the texture reference and is equal to 1, 2, or 3; Dim is an optional argument which defaults to 1;
- ReadMode is equal to cudaReadModeNormalizedFloat or cudaReadModeElementType; if it is cudaReadModeNormalizedFloat and Type is a 16-bit or 8-bit integer type, the value is actually returned as floating-point type and the full range of the integer type is mapped to [0.0, 1.0] for unsigned integer type and [-1.0, 1.0] for signed integer type; for example, an unsigned 8-bit texture element with the value 0xff reads as 1; if it is cudaReadModeElementType, no conversion is performed; ReadMode is an optional argument which defaults to cudaReadModeElementType.

A texture reference can only be declared as a static global variable and cannot be passed as an argument to a function.
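The cudaReadModeNormalizedFloat mapping for unsigned integer types is a plain rescaling by the type's maximum value; a hypothetical host-side C illustration:

```c
/* Map an unsigned 8-bit texel to [0.0, 1.0], as
   cudaReadModeNormalizedFloat does for unsigned integer types:
   0x00 reads as 0.0 and 0xff reads as 1.0. */
static float normalize_u8(unsigned char v)
{
    return (float)v / 255.0f;
}
```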
3.2.4.1.2 Runtime Texture Reference Attributes

The other attributes of a texture reference are mutable and can be changed at runtime through the host runtime. They specify whether texture coordinates are normalized or not, the addressing mode, and texture filtering, as detailed below.

By default, textures are referenced using floating-point coordinates in the range [0, N) where N is the size of the texture in the dimension corresponding to the coordinate. For example, a texture that is 64×32 in size will be referenced with coordinates in the range [0, 63] and [0, 31] for the x and y dimensions, respectively. Normalized texture coordinates cause the coordinates to be specified in the range [0.0, 1.0) instead of [0, N), so the same 64×32 texture would be addressed by normalized coordinates in the range [0, 1) in both the x and y dimensions. Normalized texture coordinates are a natural fit for some applications' requirements, if it is preferable for the texture coordinates to be independent of the texture size.

The addressing mode defines what happens when texture coordinates are out of range. When using unnormalized texture coordinates, texture coordinates outside the range [0, N) are clamped: values below 0 are set to 0 and values greater than or equal to N are set to N-1. Clamping is also the default addressing mode when using normalized texture coordinates: values below 0.0 or above 1.0 are clamped to the range [0.0, 1.0). For normalized coordinates, the "wrap" addressing mode also may be specified. Wrap addressing is usually used when the texture contains a periodic signal. It uses only the fractional part of the texture coordinate; for example, 1.25 is treated the same as 0.25 and -1.25 is treated the same as 0.75.

Linear texture filtering may be done only for textures that are configured to return floating-point data. It performs low-precision interpolation between neighboring texels. When enabled, the texels surrounding a texture fetch location are read and
the return value of the texture fetch is interpolated based on where the texture coordinates fell between the texels. Simple linear interpolation is performed for one-dimensional textures and bilinear interpolation is performed for two-dimensional textures. Appendix F gives more details on texture fetching.
3.2.4.1.3 Texture Binding

As explained in the reference manual, the runtime API has a low-level C-style interface and a high-level C++-style interface. The texture type is defined in the high-level API as a structure publicly derived from the textureReference type defined in the low-level API as such:

struct textureReference {
    int                          normalized;
    enum cudaTextureFilterMode   filterMode;
    enum cudaTextureAddressMode  addressMode[3];
    struct cudaChannelFormatDesc channelDesc;
}
- normalized specifies whether texture coordinates are normalized or not; if it is non-zero, all elements in the texture are addressed with texture coordinates in the range [0,1] rather than in the range [0,width-1], [0,height-1], or [0,depth-1] where width, height, and depth are the texture sizes;
- filterMode specifies the filtering mode, that is how the value returned when fetching the texture is computed based on the input texture coordinates; filterMode is equal to cudaFilterModePoint or cudaFilterModeLinear; if it is cudaFilterModePoint, the returned value is the texel whose texture coordinates are the closest to the input texture coordinates; if it is cudaFilterModeLinear, the returned value is the linear interpolation of the two (for a one-dimensional texture), four (for a two-dimensional texture), or eight (for a three-dimensional texture) texels whose texture coordinates are the closest to the input texture coordinates; cudaFilterModeLinear is only valid for returned values of floating-point type;
- addressMode specifies the addressing mode, that is how out-of-range texture coordinates are handled; addressMode is an array of size three whose first, second, and third elements specify the addressing mode for the first, second, and third texture coordinates, respectively; the addressing mode is equal to either cudaAddressModeClamp, in which case out-of-range texture coordinates are clamped to the valid range, or cudaAddressModeWrap, in which case out-of-range texture coordinates are wrapped to the valid range; cudaAddressModeWrap is only supported for normalized texture coordinates;
- channelDesc describes the format of the value that is returned when fetching the texture; channelDesc is of the following type:

struct cudaChannelFormatDesc {
    int x, y, z, w;
    enum cudaChannelFormatKind f;
};
where x, y, z, and w are equal to the number of bits of each component of the returned value and f is:
CUDA C Programming Guide Version 3.2
Chapter 3. Programming Interface
• cudaChannelFormatKindSigned if these components are of signed integer type,
• cudaChannelFormatKindUnsigned if they are of unsigned integer type,
• cudaChannelFormatKindFloat if they are of floating point type.

normalized, addressMode, and filterMode may be directly modified in host code.

Before a kernel can use a texture reference to read from texture memory, the texture reference must be bound to a texture using cudaBindTexture() or cudaBindTextureToArray(). cudaUnbindTexture() is used to unbind a texture reference.

The following code samples bind a texture reference to linear memory pointed to by devPtr:
• Using the low-level API:
    texture<float, 2, cudaReadModeElementType> texRef;
    textureReference* texRefPtr;
    cudaGetTextureReference(&texRefPtr, "texRef");
    cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
    cudaBindTexture2D(0, texRefPtr, devPtr, &channelDesc,
                      width, height, pitch);
• Using the high-level API:
    texture<float, 2, cudaReadModeElementType> texRef;
    cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
    cudaBindTexture2D(0, texRef, devPtr, &channelDesc,
                      width, height, pitch);

The following code samples bind a texture reference to a CUDA array cuArray:
• Using the low-level API:
    texture<float, 2, cudaReadModeElementType> texRef;
    textureReference* texRefPtr;
    cudaGetTextureReference(&texRefPtr, "texRef");
    cudaChannelFormatDesc channelDesc;
    cudaGetChannelDesc(&channelDesc, cuArray);
    cudaBindTextureToArray(texRefPtr, cuArray, &channelDesc);
• Using the high-level API:
    texture<float, 2, cudaReadModeElementType> texRef;
    cudaBindTextureToArray(texRef, cuArray);
The format specified when binding a texture to a texture reference must match the parameters specified when declaring the texture reference; otherwise, the results of texture fetches are undefined.

The following code sample applies some simple transformation kernel to a texture:

// 2D float texture
texture<float, 2, cudaReadModeElementType> texRef;

// Simple transformation kernel
__global__ void transformKernel(float* output, int width, int height,
                                float theta)
{
    // Calculate normalized texture coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    float u = x / (float)width;
    float v = y / (float)height;

    // Transform coordinates
    u -= 0.5f;
    v -= 0.5f;
    float tu = u * cosf(theta) - v * sinf(theta) + 0.5f;
    float tv = v * cosf(theta) + u * sinf(theta) + 0.5f;

    // Read from texture and write to global memory
    output[y * width + x] = tex2D(texRef, tu, tv);
}

// Host code
int main()
{
    // Allocate CUDA array in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(32, 0, 0, 0, cudaChannelFormatKindFloat);
    cudaArray* cuArray;
    cudaMallocArray(&cuArray, &channelDesc, width, height);

    // Copy to device memory some data located at address h_data
    // in host memory
    cudaMemcpyToArray(cuArray, 0, 0, h_data, size, cudaMemcpyHostToDevice);

    // Set texture parameters
    texRef.addressMode[0] = cudaAddressModeWrap;
    texRef.addressMode[1] = cudaAddressModeWrap;
    texRef.filterMode     = cudaFilterModeLinear;
    texRef.normalized     = true;

    // Bind the array to the texture reference
    cudaBindTextureToArray(texRef, cuArray, channelDesc);

    // Allocate result of transformation in device memory
    float* output;
    cudaMalloc(&output, width * height * sizeof(float));

    // Invoke kernel
    dim3 dimBlock(16, 16);
    dim3 dimGrid((width  + dimBlock.x - 1) / dimBlock.x,
                 (height + dimBlock.y - 1) / dimBlock.y);
    transformKernel<<<dimGrid, dimBlock>>>(output, width, height, angle);

    // Free device memory
    cudaFreeArray(cuArray);
    cudaFree(output);
}
3.2.4.1.4 16-Bit Floating-Point Textures

The 16-bit floating-point or half format supported by CUDA arrays is the same as the IEEE 754-2008 binary16 format.

CUDA C does not support a matching data type, but provides intrinsic functions to convert to and from the 32-bit floating-point format via the unsigned short type: __float2half(float) and __half2float(unsigned short). These functions are only supported in device code. Equivalent functions for the host code can be found in the OpenEXR library, for example.

16-bit floating-point components are promoted to 32-bit float during texture fetching before any filtering is performed.

A channel description for the 16-bit floating-point format can be created by calling one of the cudaCreateChannelDescHalf*() functions.
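As a host-side illustration of what __half2float() computes, the following sketch decodes a binary16 value (1 sign bit, 5 exponent bits with bias 15, 10 fraction bits) into a 32-bit float; the function name is illustrative and not part of the CUDA API:

```c
#include <stdint.h>
#include <math.h>

/* Sketch of a host-side equivalent of __half2float(): decodes an
   IEEE 754-2008 binary16 value stored in an unsigned short.
   Handles normals, subnormals, zeros, infinities, and NaNs. */
float half_to_float(unsigned short h) {
    uint32_t sign = (h >> 15) & 0x1;
    uint32_t exp  = (h >> 10) & 0x1F;
    uint32_t frac = h & 0x3FF;
    float s = sign ? -1.0f : 1.0f;

    if (exp == 0)   /* zero or subnormal: value = sign * frac * 2^-24 */
        return s * (float)frac * ldexpf(1.0f, -24);
    if (exp == 31)  /* infinity or NaN */
        return frac ? NAN : s * INFINITY;
    /* normal: value = sign * (1 + frac/1024) * 2^(exp - 15) */
    return s * (1.0f + (float)frac / 1024.0f) * ldexpf(1.0f, (int)exp - 15);
}
```

For example, the bit pattern 0x3C00 (exponent 15, zero fraction) decodes to 1.0f, and 0xC000 to -2.0f.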
3.2.4.2 Surface Memory

A CUDA array (described in Section 3.2.4.3), created with the cudaArraySurfaceLoadStore flag, can be read and written via a surface reference using the functions described in Section B.9.

Section G.1 lists the maximum surface width, height, and depth depending on the compute capability of the device.
3.2.4.2.1 Surface Reference Declaration

A surface reference is declared at file scope as a variable of type surface:

surface<void, Dim> surfRef;

where Dim specifies the dimensionality of the surface reference and is equal to 1 or 2; Dim is an optional argument which defaults to 1.

A surface reference can only be declared as a static global variable and cannot be passed as an argument to a function.
3.2.4.2.2 Surface Binding

Before a kernel can use a surface reference to access a CUDA array, the surface reference must be bound to the CUDA array using cudaBindSurfaceToArray().

The following code samples bind a surface reference to a CUDA array cuArray:
• Using the low-level API:
    surface<void, 2> surfRef;
    surfaceReference* surfRefPtr;
    cudaGetSurfaceReference(&surfRefPtr, "surfRef");
    cudaChannelFormatDesc channelDesc;
    cudaGetChannelDesc(&channelDesc, cuArray);
    cudaBindSurfaceToArray(surfRefPtr, cuArray, &channelDesc);
• Using the high-level API:
    surface<void, 2> surfRef;
    cudaBindSurfaceToArray(surfRef, cuArray);
A CUDA array must be read and written using surface functions of matching dimensionality and type and via a surface reference of matching dimensionality; otherwise, the results of reading and writing the CUDA array are undefined.
Unlike texture memory, surface memory uses byte addressing. This means that the x-coordinate used to access a texture element via texture functions needs to be multiplied by the byte size of the element to access the same element via a surface function. For example, the element at texture coordinate x of a one-dimensional floating-point CUDA array bound to a texture reference texRef and a surface reference surfRef is read using tex1D(texRef, x) via texRef, but surf1Dread(surfRef, 4*x) via surfRef. Similarly, the element at texture coordinates x and y of a two-dimensional floating-point CUDA array bound to a texture reference texRef and a surface reference surfRef is accessed using tex2D(texRef, x, y) via texRef, but surf2Dread(surfRef, 4*x, y) via surfRef (the byte offset of the y-coordinate is internally calculated from the underlying line pitch of the CUDA array).

The following code sample applies a simple copy kernel to a surface:

// 2D surfaces
surface<void, 2> inputSurfRef;
surface<void, 2> outputSurfRef;

// Simple copy kernel
__global__ void copyKernel(int width, int height)
{
    // Calculate surface coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        uchar4 data;
        // Read from input surface
        surf2Dread(&data, inputSurfRef, x * 4, y);
        // Write to output surface
        surf2Dwrite(data, outputSurfRef, x * 4, y);
    }
}

// Host code
int main()
{
    // Allocate CUDA arrays in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsigned);
    cudaArray* cuInputArray;
    cudaMallocArray(&cuInputArray, &channelDesc, width, height,
                    cudaArraySurfaceLoadStore);
    cudaArray* cuOutputArray;
    cudaMallocArray(&cuOutputArray, &channelDesc, width, height,
                    cudaArraySurfaceLoadStore);

    // Copy to device memory some data located at address h_data
    // in host memory
    cudaMemcpyToArray(cuInputArray, 0, 0, h_data, size,
                      cudaMemcpyHostToDevice);

    // Bind the arrays to the surface references
    cudaBindSurfaceToArray(inputSurfRef, cuInputArray);
    cudaBindSurfaceToArray(outputSurfRef, cuOutputArray);
    // Invoke kernel
    dim3 dimBlock(16, 16);
    dim3 dimGrid((width  + dimBlock.x - 1) / dimBlock.x,
                 (height + dimBlock.y - 1) / dimBlock.y);
    copyKernel<<<dimGrid, dimBlock>>>(width, height);

    // Free device memory
    cudaFreeArray(cuInputArray);
    cudaFreeArray(cuOutputArray);
}
3.2.4.3 CUDA Arrays

CUDA arrays are opaque memory layouts optimized for texture fetching. They are one-dimensional, two-dimensional, or three-dimensional and composed of elements, each of which has 1, 2 or 4 components that may be signed or unsigned 8-, 16-, or 32-bit integers, 16-bit floats, or 32-bit floats. CUDA arrays are only readable by kernels through texture fetching and may only be bound to texture references with the same number of packed components.
3.2.4.4 Read/Write Coherency

The texture and surface memory is cached (see Section 5.3.2.5) and within the same kernel call, the cache is not kept coherent with respect to global memory writes and surface memory writes, so any texture fetch or surface read to an address that has been written to via a global write or a surface write in the same kernel call returns undefined data. In other words, a thread can safely read some texture or surface memory location only if this memory location has been updated by a previous kernel call or memory copy, but not if it has been previously updated by the same thread or another thread from the same kernel call.
3.2.5 Page-Locked Host Memory

The runtime also provides functions to allocate and free page-locked (also known as pinned) host memory, as opposed to regular pageable host memory allocated by malloc(): cudaHostAlloc() and cudaFreeHost().

Using page-locked host memory has several benefits:
• Copies between page-locked host memory and device memory can be performed concurrently with kernel execution for some devices as mentioned in Section 3.2.6;
• On some devices, page-locked host memory can be mapped into the address space of the device, eliminating the need to copy it to or from device memory as detailed in Section 3.2.5.3;
• On systems with a front-side bus, bandwidth between host memory and device memory is higher if host memory is allocated as page-locked and even higher if in addition it is allocated as write-combining as described in Section 3.2.5.2.

Page-locked host memory is a scarce resource however, so allocations in page-locked memory will start failing long before allocations in pageable memory. In addition, by reducing the amount of physical memory available to the operating
system for paging, allocating too much page-locked memory reduces overall system performance.

The simple zero-copy SDK sample comes with a detailed document on the page-locked memory APIs.
3.2.5.1 Portable Memory

A block of page-locked memory can be used by any host threads, but by default, the benefits of using page-locked memory described above are only available for the thread that allocates it. To make these advantages available to all threads, it needs to be allocated by passing flag cudaHostAllocPortable to cudaHostAlloc().
3.2.5.2 Write-Combining Memory

By default page-locked host memory is allocated as cacheable. It can optionally be allocated as write-combining instead by passing flag cudaHostAllocWriteCombined to cudaHostAlloc(). Write-combining memory frees up L1 and L2 cache resources, making more cache available to the rest of the application. In addition, write-combining memory is not snooped during transfers across the PCI Express bus, which can improve transfer performance by up to 40%.

Reading from write-combining memory from the host is prohibitively slow, so write-combining memory should in general be used for memory that the host only writes to.
3.2.5.3 Mapped Memory

On devices of compute capability greater than 1.0, a block of page-locked host memory can also be mapped into the address space of the device by passing flag cudaHostAllocMapped to cudaHostAlloc(). Such a block has therefore two addresses: one in host memory and one in device memory. The host memory pointer is returned by cudaHostAlloc() and the device memory pointer can be retrieved using cudaHostGetDevicePointer() and then used to access the block from within a kernel.

Accessing host memory directly from within a kernel has several advantages:
• There is no need to allocate a block in device memory and copy data between this block and the block in host memory; data transfers are implicitly performed as needed by the kernel;
• There is no need to use streams (see Section 3.2.6.4) to overlap data transfers with kernel execution; the kernel-originated data transfers automatically overlap with kernel execution.

Since mapped page-locked memory is shared between host and device however, the application must synchronize memory accesses using streams or events (see Section 3.2.6) to avoid any potential read-after-write, write-after-read, or write-after-write hazards.

A block of page-locked host memory can be allocated as both mapped and portable (see Section 3.2.5.1), in which case each host thread that needs to map the block to its device address space must call cudaHostGetDevicePointer() to retrieve a device pointer, as device pointers will generally differ from one host thread to the other.
To be able to retrieve the device pointer to any mapped page-locked memory within a given host thread, page-locked memory mapping must be enabled by calling cudaSetDeviceFlags() with the cudaDeviceMapHost flag before any other CUDA calls is performed by the thread. Otherwise, cudaHostGetDevicePointer() will return an error.

cudaHostGetDevicePointer() also returns an error if the device does not support mapped page-locked host memory. Applications may query whether a device supports mapped page-locked host memory or not by calling cudaGetDeviceProperties() and checking the canMapHostMemory property.

Note that atomic functions (Section B.11) operating on mapped page-locked memory are not atomic from the point of view of the host or other devices.
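Putting the steps above together, a sketch of the zero-copy pattern (MyKernel, N, blocks, and threads are placeholders, not part of the API):

```cuda
// Sketch only: map page-locked host memory into the device address space.
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
if (prop.canMapHostMemory) {
    // Must precede the thread's other CUDA calls
    cudaSetDeviceFlags(cudaDeviceMapHost);

    float* h_data;
    cudaHostAlloc((void**)&h_data, N * sizeof(float), cudaHostAllocMapped);

    // Device-side alias of the same page-locked block
    float* d_data;
    cudaHostGetDevicePointer((void**)&d_data, h_data, 0);

    MyKernel<<<blocks, threads>>>(d_data); // reads/writes host memory directly
    cudaThreadSynchronize();               // avoid read-after-write hazards
    cudaFreeHost(h_data);
}
```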
3.2.6 Asynchronous Concurrent Execution
3.2.6.1 Concurrent Execution between Host and Device

In order to facilitate concurrent execution between host and device, some function calls are asynchronous: Control is returned to the host thread before the device has completed the requested task. These are:
• Kernel launches;
• Device ↔ device memory copies;
• Host ↔ device memory copies of a memory block of 64 KB or less;
• Memory copies performed by functions that are suffixed with Async;
• Memory set function calls.

Programmers can globally disable asynchronous kernel launches for all CUDA applications running on a system by setting the CUDA_LAUNCH_BLOCKING environment variable to 1. This feature is provided for debugging purposes only and should never be used as a way to make production software run reliably.

When an application is run via a CUDA debugger or profiler (cuda-gdb, CUDA Visual Profiler, Parallel Nsight), all launches are synchronous.
3.2.6.2 Overlap of Data Transfer and Kernel Execution

Some devices of compute capability 1.1 and higher can perform copies between page-locked host memory and device memory concurrently with kernel execution. Applications may query this capability by calling cudaGetDeviceProperties() and checking the deviceOverlap property. This capability is currently supported only for memory copies that do not involve CUDA arrays or 2D arrays allocated through cudaMallocPitch() (see Section 3.2.1).
3.2.6.3 Concurrent Kernel Execution

Some devices of compute capability 2.x can execute multiple kernels concurrently. Applications may query this capability by calling cudaGetDeviceProperties() and checking the concurrentKernels property. The maximum number of kernel launches that a device can execute concurrently is sixteen.
A kernel from one CUDA context cannot execute concurrently with a kernel from another CUDA context. Kernels that use many textures or a large amount of local memory are less likely to execute concurrently with other kernels.
3.2.6.4 Concurrent Data Transfers

Some devices of compute capability 2.x can perform a copy from page-locked host memory to device memory concurrently with a copy from device memory to page-locked host memory.
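The capability properties mentioned above can be queried in one place; a minimal sketch (device 0 is assumed):

```cuda
// Sketch only: check the concurrency capability flags for device 0.
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
if (prop.deviceOverlap) {
    // copies between page-locked host memory and device memory
    // can overlap kernel execution
}
if (prop.concurrentKernels) {
    // kernels launched in different streams may run concurrently
}
```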
3.2.6.5 Stream

Applications manage concurrency through streams. A stream is a sequence of commands that execute in order. Different streams, on the other hand, may execute their commands out of order with respect to one another or concurrently; this behavior is not guaranteed and should therefore not be relied upon for correctness (e.g. inter-kernel communication is undefined).
3.2.6.5.1 Creation and Destruction

A stream is defined by creating a stream object and specifying it as the stream parameter to a sequence of kernel launches and host ↔ device memory copies. The following code sample creates two streams and allocates an array hostPtr of float in page-locked memory.

cudaStream_t stream[2];
for (int i = 0; i < 2; ++i)
    cudaStreamCreate(&stream[i]);
float* hostPtr;
cudaMallocHost(&hostPtr, 2 * size);
Each of these streams is defined by the following code sample as a sequence of one memory copy from host to device, one kernel launch, and one memory copy from device to host:

for (int i = 0; i < 2; ++i) {
    cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>
            (outputDevPtr + i * size, inputDevPtr + i * size, size);
    cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
}
Each stream copies its portion of input array hostPtr to array inputDevPtr in device memory, processes inputDevPtr on the device by calling MyKernel(), and copies the result outputDevPtr back to the same portion of hostPtr. Section 3.2.6.5.4 describes how the streams overlap in this example depending on the capability of the device. Note that hostPtr must point to page-locked host memory for any overlap to occur.

Streams are released by calling cudaStreamDestroy().

for (int i = 0; i < 2; ++i)
    cudaStreamDestroy(stream[i]);
cudaStreamDestroy() waits for all preceding commands in the given stream to complete before destroying the stream and returning control to the host thread.
3.2.6.5.2 Explicit Synchronization

There are various ways to explicitly synchronize streams with each other.

cudaThreadSynchronize() waits until all preceding commands in all streams have completed.

cudaStreamSynchronize() takes a stream as a parameter and waits until all preceding commands in the given stream have completed. It can be used to synchronize the host with a specific stream, allowing other streams to continue executing on the device.

cudaStreamWaitEvent() takes a stream and an event as parameters (see Section 3.2.6.6 for a description of events) and makes all the commands added to the given stream after the call to cudaStreamWaitEvent() delay their execution until the given event has completed. The stream can be 0, in which case all the commands added to any stream after the call to cudaStreamWaitEvent() wait on the event.

cudaStreamQuery() provides applications with a way to know if all preceding commands in a stream have completed.

To avoid unnecessary slowdowns, all these synchronization functions are usually best used for timing purposes or to isolate a launch or memory copy that is failing.
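As a sketch of cross-stream ordering with cudaStreamWaitEvent(), assuming the two streams were created as in Section 3.2.6.5.1 (the kernel and buffer names are illustrative):

```cuda
// Sketch only: make stream[1] wait for work queued in stream[0]
// without blocking the host thread.
cudaEvent_t ev;
cudaEventCreate(&ev);

ProducerKernel<<<grid, block, 0, stream[0]>>>(d_buf);
cudaEventRecord(ev, stream[0]);         // marks completion of the producer
cudaStreamWaitEvent(stream[1], ev, 0);  // later commands in stream[1] wait
ConsumerKernel<<<grid, block, 0, stream[1]>>>(d_buf);

cudaEventDestroy(ev);
```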
3.2.6.5.3 Implicit Synchronization

Two commands from different streams cannot run concurrently if either one of the following operations is issued in-between them by the host thread:
• a page-locked host memory allocation,
• a device memory allocation,
• a device memory set,
• a device ↔ device memory copy,
• any CUDA command to stream 0 (including kernel launches and host ↔ device memory copies that do not specify any stream parameter),
• a switch between the L1/shared memory configurations described in Section G.4.1.

For devices that support concurrent kernel execution, any operation that requires a dependency check to see if a streamed kernel launch is complete:
• Can start executing only when all thread blocks of all prior kernel launches from any stream in the CUDA context have started executing;
• Blocks all later kernel launches from any stream in the CUDA context until the kernel launch being checked is complete.

Operations that require a dependency check include any other commands within the same stream as the launch being checked and any call to cudaStreamQuery() on that stream. Therefore, applications should follow these guidelines to improve their potential for concurrent kernel execution:
• All independent operations should be issued before dependent operations,
• Synchronization of any kind should be delayed as long as possible.
3.2.6.5.4 Overlapping Behavior

The amount of execution overlap between two streams depends on the order in which the commands are issued to each stream and whether or not the device supports overlap of data transfer and kernel execution (Section 3.2.6.2), concurrent kernel execution (Section 3.2.6.3), and/or concurrent data transfers (Section 3.2.6.4).

For example, on devices that do not support concurrent data transfers, the two streams of the code sample of Section 3.2.6.5.1 do not overlap at all because the memory copy from host to device is issued to stream 1 after the memory copy from device to host is issued to stream 0. If the code is rewritten the following way (and assuming the device supports overlap of data transfer and kernel execution)

for (int i = 0; i < 2; ++i)
    cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
for (int i = 0; i < 2; ++i)
    MyKernel<<<100, 512, 0, stream[i]>>>
            (outputDevPtr + i * size, inputDevPtr + i * size, size);
for (int i = 0; i < 2; ++i)
    cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
then the memory copy from host to device issued to stream 1 overlaps with the kernel launch issued to stream 0. On devices that do support concurrent data transfers, the two streams of the code sample of Section 3.2.6.5.1 do overlap: The memory copy from host to device issued to stream 1 overlaps with the memory copy from device to host issued to stream 0 and even with the kernel launch issued to stream 0 (assuming the device supports overlap of data transfer and kernel execution). However, the kernel executions cannot possibly overlap because the kernel launch is issued to stream 1 after the memory copy from device to host is issued to stream 0, so it is blocked until the kernel launch issued to stream 0 is complete as per Section 3.2.6.5.3. If the code is rewritten as above, the kernel executions overlap (assuming the device supports concurrent kernel execution) since the kernel launch is issued to stream 1 before the memory copy from device to host is issued to stream 0. In that case however, the memory copy from device to host issued to stream 0 only overlaps with the last thread blocks of the kernel launch issued to stream 1 as per Section 3.2.6.5.3, which can represent a small portion of the total execution time of the kernel.
3.2.6.6 Event

The runtime also provides a way to closely monitor the device's progress, as well as perform accurate timing, by letting the application asynchronously record events at any point in the program and query when these events are completed. An event has completed when all tasks, or optionally, all commands in a given stream, preceding the event have completed. Events in stream zero are completed after all preceding tasks and commands in all streams are completed.

The following code sample creates two events:

cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
These events can be used to time the code sample of the previous section the following way:

cudaEventRecord(start, 0);
for (int i = 0; i < 2; ++i) {
    cudaMemcpyAsync(inputDev + i * size, inputHost + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>
            (outputDev + i * size, inputDev + i * size, size);
    cudaMemcpyAsync(outputHost + i * size, outputDev + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float elapsedTime;
cudaEventElapsedTime(&elapsedTime, start, stop);
They are destroyed this way:

cudaEventDestroy(start);
cudaEventDestroy(stop);
3.2.6.7 Synchronous Calls

When a synchronous function is called, control is not returned to the host thread before the device has completed the requested task. Whether the host thread will then yield, block, or spin can be specified by calling cudaSetDeviceFlags() with some specific flags (see reference manual for details) before any other CUDA calls is performed by the host thread.
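For example, a host thread that should sleep rather than busy-wait while the device works can request blocking synchronization; a sketch (cudaDeviceBlockingSync is assumed to be the flag name in this runtime version, and the call must precede any other CUDA call by the thread):

```cuda
// Sketch only: the host thread blocks on a synchronization primitive
// instead of spinning while waiting for the device to finish.
cudaSetDeviceFlags(cudaDeviceBlockingSync);
// ... subsequent synchronous calls (e.g. cudaThreadSynchronize())
// block rather than spin ...
```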
3.2.7 Graphics Interoperability

Some resources from OpenGL and Direct3D may be mapped into the address space of CUDA, either to enable CUDA to read data written by OpenGL or Direct3D, or to enable CUDA to write data for consumption by OpenGL or Direct3D.

A resource must be registered to CUDA before it can be mapped using the functions mentioned in Sections 3.2.7.1 and 3.2.7.2. These functions return a pointer to a CUDA graphics resource of type struct cudaGraphicsResource. Registering a resource is potentially high-overhead and therefore typically called only once per resource. A CUDA graphics resource is unregistered using cudaGraphicsUnregisterResource().

Once a resource is registered to CUDA, it can be mapped and unmapped as many times as necessary using cudaGraphicsMapResources() and cudaGraphicsUnmapResources(). cudaGraphicsResourceSetMapFlags() can be called to specify usage hints (write-only, read-only) that the CUDA driver can use to optimize resource management.

A mapped resource can be read from or written to by kernels using the device memory address returned by cudaGraphicsResourceGetMappedPointer() for buffers and cudaGraphicsSubResourceGetMappedArray() for CUDA arrays.
Accessing a resource through OpenGL or Direct3D while it is mapped to CUDA produces undefined results. Sections 3.2.7.1 and 3.2.7.2 give specifics for each graphics API and some code samples.
3.2.7.1 OpenGL Interoperability

Interoperability with OpenGL requires that the CUDA device be specified by cudaGLSetGLDevice() before any other runtime calls. Note that cudaSetDevice() and cudaGLSetGLDevice() are mutually exclusive.

The OpenGL resources that may be mapped into the address space of CUDA are OpenGL buffer, texture, and renderbuffer objects.

A buffer object is registered using cudaGraphicsGLRegisterBuffer(). In CUDA, it appears as a device pointer and can therefore be read and written by kernels or via cudaMemcpy() calls.

A texture or renderbuffer object is registered using cudaGraphicsGLRegisterImage(). In CUDA, it appears as a CUDA array and can therefore be bound to a texture reference and be read and written by kernels or via cudaMemcpy2D() calls. cudaGraphicsGLRegisterImage() supports all texture formats with 1, 2, or 4 components and an internal type of float (e.g. GL_RGBA_FLOAT32) and unnormalized integer (e.g. GL_RGBA8UI). It does not currently support normalized integer formats (e.g. GL_RGBA8). Please note that since GL_RGBA8UI is an OpenGL 3.0 texture format, it can only be written by shaders, not the fixed function pipeline.

The following code sample uses a kernel to dynamically modify a 2D width x height grid of vertices stored in a vertex buffer object:

GLuint positionsVBO;
struct cudaGraphicsResource* positionsVBO_CUDA;

int main()
{
    // Explicitly set device
    cudaGLSetGLDevice(0);

    // Initialize OpenGL and GLUT
    ...
    glutDisplayFunc(display);

    // Create buffer object and register it with CUDA
    glGenBuffers(1, &positionsVBO);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    unsigned int size = width * height * 4 * sizeof(float);
    glBufferData(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    cudaGraphicsGLRegisterBuffer(&positionsVBO_CUDA, positionsVBO,
                                 cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    glutMainLoop();
}
void display()
{
    // Map buffer object for writing from CUDA
    float4* positions;
    cudaGraphicsMapResources(1, &positionsVBO_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions, &num_bytes,
                                         positionsVBO_CUDA);

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time, width, height);

    // Unmap buffer object
    cudaGraphicsUnmapResources(1, &positionsVBO_CUDA, 0);

    // Render from buffer object
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, width * height);
    glDisableClientState(GL_VERTEX_ARRAY);

    // Swap buffers
    glutSwapBuffers();
    glutPostRedisplay();
}

void deleteVBO()
{
    cudaGraphicsUnregisterResource(positionsVBO_CUDA);
    glDeleteBuffers(1, &positionsVBO);
}

__global__ void createVertices(float4* positions, float time,
                               unsigned int width, unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;

    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;

    // Write positions
    positions[y * width + x] = make_float4(u, w, v, 1.0f);
}
On Windows and for Quadro GPUs, cudaWGLGetDevice() can be used to retrieve the CUDA device associated to the handle returned by wglEnumGpusNV(). Quadro GPUs offer higher performance OpenGL interoperability than GeForce and Tesla GPUs in a multi-GPU configuration where OpenGL rendering is performed on the Quadro GPU and CUDA computations are performed on other GPUs in the system.
3.2.7.2 Direct3D Interoperability

Direct3D interoperability is supported for Direct3D 9, Direct3D 10, and Direct3D 11.

A CUDA context may interoperate with only one Direct3D device at a time and the CUDA context and Direct3D device must be created on the same GPU. Moreover, the Direct3D device must be created with the D3DCREATE_HARDWARE_VERTEXPROCESSING flag.

Interoperability with Direct3D requires that the Direct3D device be specified by cudaD3D9SetDirect3DDevice(), cudaD3D10SetDirect3DDevice(), or cudaD3D11SetDirect3DDevice(), before any other runtime calls. cudaD3D9GetDevice(), cudaD3D10GetDevice(), and cudaD3D11GetDevice() can be used to retrieve the CUDA device associated to some adapter.

A set of calls is also available to allow the creation of CUDA devices with interoperability with Direct3D devices that use NVIDIA SLI in AFR (Alternate Frame Rendering) mode: cudaD3D[9|10|11]GetDevices(). A call to cuD3D[9|10|11]GetDevices() can be used to obtain a list of CUDA device handles that can be passed as the (optional) last parameter to cudaD3D[9|10|11]SetDirect3DDevice().

The application has the choice to either create multiple CPU threads, each using a different CUDA context, or a single CPU thread using multiple CUDA contexts. Each of these CUDA contexts would be created using one of the CUDA device handles returned by cudaD3D[9|10|11]GetDevices(). If using a single CPU thread, the application relies on the interoperability between CUDA driver and runtime APIs (Section 3.4), which allows it to call cuCtxPushCurrent() and cuCtxPopCurrent() to change the CUDA context active at a given time. See Section 4.3 for general recommendations related to interoperability between Direct3D devices using SLI and CUDA contexts.

The Direct3D resources that may be mapped into the address space of CUDA are Direct3D buffers, textures, and surfaces.
These resources are registered using cudaGraphicsD3D9RegisterResource(), cudaGraphicsD3D10RegisterResource(), and cudaGraphicsD3D11RegisterResource(). The following code sample uses a kernel to dynamically modify a 2D width x height grid of vertices stored in a vertex buffer object.
Direct3D 9 Version:

    IDirect3D9* D3D;
CUDA C Programming Guide Version 3.2
Chapter 3. Programming Interface
    IDirect3DDevice9* device;

    struct CUSTOMVERTEX {
        FLOAT x, y, z;
        DWORD color;
    };

    IDirect3DVertexBuffer9* positionsVB;
    struct cudaGraphicsResource* positionsVB_CUDA;

    int main()
    {
        // Initialize Direct3D
        D3D = Direct3DCreate9(D3D_SDK_VERSION);

        // Get a CUDA-enabled adapter
        unsigned int adapter = 0;
        for (; adapter < D3D->GetAdapterCount(); adapter++) {
            D3DADAPTER_IDENTIFIER9 adapterId;
            D3D->GetAdapterIdentifier(adapter, 0, &adapterId);
            int dev;
            if (cudaD3D9GetDevice(&dev, adapterId.DeviceName)
                == cudaSuccess)
                break;
        }

        // Create device
        ...
        D3D->CreateDevice(adapter, D3DDEVTYPE_HAL, hWnd,
                          D3DCREATE_HARDWARE_VERTEXPROCESSING,
                          &params, &device);

        // Register device with CUDA
        cudaD3D9SetDirect3DDevice(device);

        // Create vertex buffer and register it with CUDA
        unsigned int size = width * height * sizeof(CUSTOMVERTEX);
        device->CreateVertexBuffer(size, 0, D3DFVF_CUSTOMVERTEX,
                                   D3DPOOL_DEFAULT, &positionsVB, 0);
        cudaGraphicsD3D9RegisterResource(&positionsVB_CUDA,
                                         positionsVB,
                                         cudaGraphicsRegisterFlagsNone);
        cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                        cudaGraphicsMapFlagsWriteDiscard);

        // Launch rendering loop
        while (...) {
            ...
            Render();
            ...
        }
    }

    void Render()
    {
        // Map vertex buffer for writing from CUDA
        float4* positions;
        cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
        size_t num_bytes;
        cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                             &num_bytes,
                                             positionsVB_CUDA);

        // Execute kernel
        dim3 dimBlock(16, 16, 1);
        dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
        createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                              width, height);

        // Unmap vertex buffer
        cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

        // Draw and present
        ...
    }

    void releaseVB()
    {
        cudaGraphicsUnregisterResource(positionsVB_CUDA);
        positionsVB->Release();
    }

    __global__ void createVertices(float4* positions, float time,
                                   unsigned int width,
                                   unsigned int height)
    {
        unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
        unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

        // Calculate uv coordinates
        float u = x / (float)width;
        float v = y / (float)height;
        u = u * 2.0f - 1.0f;
        v = v * 2.0f - 1.0f;

        // Calculate simple sine wave pattern
        float freq = 4.0f;
        float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;

        // Write positions
        positions[y * width + x] =
            make_float4(u, w, v, __int_as_float(0xff00ff00));
    }
Direct3D 10 Version:

    ID3D10Device* device;

    struct CUSTOMVERTEX {
        FLOAT x, y, z;
        DWORD color;
    };

    ID3D10Buffer* positionsVB;
    struct cudaGraphicsResource* positionsVB_CUDA;

    int main()
    {
        // Get a CUDA-enabled adapter
        IDXGIFactory* factory;
        CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
        IDXGIAdapter* adapter = 0;
        for (unsigned int i = 0; !adapter; ++i) {
            if (FAILED(factory->EnumAdapters(i, &adapter)))
                break;
            int dev;
            if (cudaD3D10GetDevice(&dev, adapter) == cudaSuccess)
                break;
            adapter->Release();
        }
        factory->Release();

        // Create swap chain and device
        ...
        D3D10CreateDeviceAndSwapChain(adapter,
                                      D3D10_DRIVER_TYPE_HARDWARE, 0,
                                      D3D10_CREATE_DEVICE_DEBUG,
                                      D3D10_SDK_VERSION,
                                      &swapChainDesc, &swapChain,
                                      &device);
        adapter->Release();

        // Register device with CUDA
        cudaD3D10SetDirect3DDevice(device);

        // Create vertex buffer and register it with CUDA
        unsigned int size = width * height * sizeof(CUSTOMVERTEX);
        D3D10_BUFFER_DESC bufferDesc;
        bufferDesc.Usage          = D3D10_USAGE_DEFAULT;
        bufferDesc.ByteWidth      = size;
        bufferDesc.BindFlags      = D3D10_BIND_VERTEX_BUFFER;
        bufferDesc.CPUAccessFlags = 0;
        bufferDesc.MiscFlags      = 0;
        device->CreateBuffer(&bufferDesc, 0, &positionsVB);
        cudaGraphicsD3D10RegisterResource(&positionsVB_CUDA,
                                          positionsVB,
                                          cudaGraphicsRegisterFlagsNone);
        cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                        cudaGraphicsMapFlagsWriteDiscard);

        // Launch rendering loop
        while (...) {
            ...
            Render();
            ...
        }
    }

    void Render()
    {
        // Map vertex buffer for writing from CUDA
        float4* positions;
        cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
        size_t num_bytes;
        cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                             &num_bytes,
                                             positionsVB_CUDA);

        // Execute kernel
        dim3 dimBlock(16, 16, 1);
        dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
        createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                              width, height);

        // Unmap vertex buffer
        cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

        // Draw and present
        ...
    }

    void releaseVB()
    {
        cudaGraphicsUnregisterResource(positionsVB_CUDA);
        positionsVB->Release();
    }

    __global__ void createVertices(float4* positions, float time,
                                   unsigned int width,
                                   unsigned int height)
    {
        unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
        unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

        // Calculate uv coordinates
        float u = x / (float)width;
        float v = y / (float)height;
        u = u * 2.0f - 1.0f;
        v = v * 2.0f - 1.0f;

        // Calculate simple sine wave pattern
        float freq = 4.0f;
        float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;

        // Write positions
        positions[y * width + x] =
            make_float4(u, w, v, __int_as_float(0xff00ff00));
    }
Direct3D 11 Version:

    ID3D11Device* device;

    struct CUSTOMVERTEX {
        FLOAT x, y, z;
        DWORD color;
    };

    ID3D11Buffer* positionsVB;
    struct cudaGraphicsResource* positionsVB_CUDA;

    int main()
    {
        // Get a CUDA-enabled adapter
        IDXGIFactory* factory;
        CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
        IDXGIAdapter* adapter = 0;
        for (unsigned int i = 0; !adapter; ++i) {
            if (FAILED(factory->EnumAdapters(i, &adapter)))
                break;
            int dev;
            if (cudaD3D11GetDevice(&dev, adapter) == cudaSuccess)
                break;
            adapter->Release();
        }
        factory->Release();

        // Create swap chain and device
        ...
        sFnPtr_D3D11CreateDeviceAndSwapChain(adapter,
                                             D3D11_DRIVER_TYPE_HARDWARE,
                                             0,
                                             D3D11_CREATE_DEVICE_DEBUG,
                                             featureLevels, 3,
                                             D3D11_SDK_VERSION,
                                             &swapChainDesc, &swapChain,
                                             &device, &featureLevel,
                                             &deviceContext);
        adapter->Release();

        // Register device with CUDA
        cudaD3D11SetDirect3DDevice(device);

        // Create vertex buffer and register it with CUDA
        unsigned int size = width * height * sizeof(CUSTOMVERTEX);
        D3D11_BUFFER_DESC bufferDesc;
        bufferDesc.Usage          = D3D11_USAGE_DEFAULT;
        bufferDesc.ByteWidth      = size;
        bufferDesc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
        bufferDesc.CPUAccessFlags = 0;
        bufferDesc.MiscFlags      = 0;
        device->CreateBuffer(&bufferDesc, 0, &positionsVB);
        cudaGraphicsD3D11RegisterResource(&positionsVB_CUDA,
                                          positionsVB,
                                          cudaGraphicsRegisterFlagsNone);
        cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                        cudaGraphicsMapFlagsWriteDiscard);

        // Launch rendering loop
        while (...) {
            ...
            Render();
            ...
        }
    }

    void Render()
    {
        // Map vertex buffer for writing from CUDA
        float4* positions;
        cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
        size_t num_bytes;
        cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                             &num_bytes,
                                             positionsVB_CUDA);

        // Execute kernel
        dim3 dimBlock(16, 16, 1);
        dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
        createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                              width, height);

        // Unmap vertex buffer
        cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

        // Draw and present
        ...
    }

    void releaseVB()
    {
        cudaGraphicsUnregisterResource(positionsVB_CUDA);
        positionsVB->Release();
    }

    __global__ void createVertices(float4* positions, float time,
                                   unsigned int width,
                                   unsigned int height)
    {
        unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
        unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

        // Calculate uv coordinates
        float u = x / (float)width;
        float v = y / (float)height;
        u = u * 2.0f - 1.0f;
        v = v * 2.0f - 1.0f;

        // Calculate simple sine wave pattern
        float freq = 4.0f;
        float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;

        // Write positions
        positions[y * width + x] =
            make_float4(u, w, v, __int_as_float(0xff00ff00));
    }
3.2.8 Error Handling

All runtime functions return an error code, but for an asynchronous function (see Section 3.2.6) this error code cannot report any of the asynchronous errors that occur on the device, since the function returns before the device has completed the task. The error code only reports errors that occur on the host prior to executing the task, typically related to parameter validation. If an asynchronous error occurs, it will be reported by some subsequent, unrelated runtime function call.
The only way to check for asynchronous errors just after some asynchronous function call is therefore to synchronize just after the call, by calling cudaThreadSynchronize() (or by using any other synchronization mechanisms described in Section 3.2.6) and checking the error code returned by cudaThreadSynchronize().

The runtime maintains an error variable for each host thread that is initialized to cudaSuccess and is overwritten by the error code every time an error occurs (be it a parameter validation error or an asynchronous error). cudaPeekAtLastError() returns this variable. cudaGetLastError() returns this variable and resets it to cudaSuccess.

Kernel launches do not return any error code, so cudaPeekAtLastError() or cudaGetLastError() must be called just after the kernel launch to retrieve any pre-launch errors. To ensure that any error returned by cudaPeekAtLastError() or cudaGetLastError() does not originate from calls prior to the kernel launch, one has to make sure that the runtime error variable is set to cudaSuccess just before the kernel launch, for example by calling cudaGetLastError() just before the kernel launch. Kernel launches are asynchronous, so to check for asynchronous errors, the application must synchronize in between the kernel launch and the call to cudaPeekAtLastError() or cudaGetLastError().

Note that cudaErrorNotReady, which may be returned by cudaStreamQuery() and cudaEventQuery(), is not considered an error and is therefore not reported by cudaPeekAtLastError() or cudaGetLastError().
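As an illustrative sketch (not from the guide; the checkCuda macro and SomeKernel are hypothetical names), the checking pattern described above is commonly wrapped in a small helper:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative helper: print the error string and abort on any
// CUDA runtime error.
#define checkCuda(call)                                           \
    do {                                                          \
        cudaError_t err = (call);                                 \
        if (err != cudaSuccess) {                                 \
            fprintf(stderr, "CUDA error %s at %s:%d\n",           \
                    cudaGetErrorString(err), __FILE__, __LINE__); \
            exit(1);                                              \
        }                                                         \
    } while (0)

__global__ void SomeKernel() {}

int main()
{
    checkCuda(cudaGetLastError());      // reset the error variable
    SomeKernel<<<1, 256>>>();
    checkCuda(cudaGetLastError());      // pre-launch errors (e.g. bad configuration)
    checkCuda(cudaThreadSynchronize()); // asynchronous errors surface here
    return 0;
}
```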
3.2.9 Call Stack

On devices of compute capability 2.x, the size of the call stack can be queried using cudaThreadGetLimit() and set using cudaThreadSetLimit(). When the call stack overflows, the kernel call fails with a stack overflow error if the application is run via a CUDA debugger (cuda-gdb, Parallel Nsight), or with an unspecified launch error otherwise.
3.3 Driver API

The driver API is a handle-based, imperative API: most objects are referenced by opaque handles that may be specified to functions to manipulate the objects. The objects available in the driver API are summarized in Table 3-1.
Table 3-1. Objects Available in the CUDA Driver API

    Object             Handle       Description
    -----------------  -----------  ---------------------------------------
    Device             CUdevice     CUDA-enabled device
    Context            CUcontext    Roughly equivalent to a CPU process
    Module             CUmodule     Roughly equivalent to a dynamic library
    Function           CUfunction   Kernel
    Heap memory        CUdeviceptr  Pointer to device memory
    CUDA array         CUarray      Opaque container for one-dimensional or
                                    two-dimensional data on the device,
                                    readable via texture or surface
                                    references
    Texture reference  CUtexref     Object that describes how to interpret
                                    texture memory data
    Surface reference  CUsurfref    Object that describes how to read or
                                    write CUDA arrays
The driver API is implemented in the nvcuda dynamic library and all its entry points are prefixed with cu.

The driver API must be initialized with cuInit() before any function from the driver API is called. A CUDA context must then be created that is attached to a specific device and made current to the calling host thread as detailed in Section 3.3.1.

Within a CUDA context, kernels are explicitly loaded as PTX or binary objects by the host code as described in Section 3.3.2. Kernels written in C must therefore be compiled separately into PTX or binary objects. Kernels are launched using API entry points as described in Section 3.3.3.

Any application that wants to run on future device architectures must load PTX, not binary code. This is because binary code is architecture-specific and therefore incompatible with future architectures, whereas PTX code is compiled to binary code at load time by the driver.

Here is the host code of the sample from Section 2.1 written using the driver API:

    int main()
    {
        int N = ...;
        size_t size = N * sizeof(float);

        // Allocate input vectors h_A and h_B in host memory
        float* h_A = (float*)malloc(size);
        float* h_B = (float*)malloc(size);

        // Initialize input vectors
        ...

        // Initialize
        cuInit(0);

        // Get number of devices supporting CUDA
        int deviceCount = 0;
        cuDeviceGetCount(&deviceCount);
        if (deviceCount == 0) {
            printf("There is no device supporting CUDA.\n");
            exit(0);
        }

        // Get handle for device 0
        CUdevice cuDevice;
        cuDeviceGet(&cuDevice, 0);

        // Create context
        CUcontext cuContext;
        cuCtxCreate(&cuContext, 0, cuDevice);
        // Create module from binary file
        CUmodule cuModule;
        cuModuleLoad(&cuModule, "VecAdd.ptx");

        // Allocate vectors in device memory
        CUdeviceptr d_A;
        cuMemAlloc(&d_A, size);
        CUdeviceptr d_B;
        cuMemAlloc(&d_B, size);
        CUdeviceptr d_C;
        cuMemAlloc(&d_C, size);

        // Copy vectors from host memory to device memory
        cuMemcpyHtoD(d_A, h_A, size);
        cuMemcpyHtoD(d_B, h_B, size);

        // Get function handle from module
        CUfunction vecAdd;
        cuModuleGetFunction(&vecAdd, cuModule, "VecAdd");

        // Invoke kernel
    #define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
        int offset = 0;
        ALIGN_UP(offset, __alignof(d_A));
        cuParamSetv(vecAdd, offset, &d_A, sizeof(d_A));
        offset += sizeof(d_A);
        ALIGN_UP(offset, __alignof(d_B));
        cuParamSetv(vecAdd, offset, &d_B, sizeof(d_B));
        offset += sizeof(d_B);
        ALIGN_UP(offset, __alignof(d_C));
        cuParamSetv(vecAdd, offset, &d_C, sizeof(d_C));
        offset += sizeof(d_C);
        ALIGN_UP(offset, __alignof(N));
        cuParamSeti(vecAdd, offset, N);
        offset += sizeof(N);
        cuParamSetSize(vecAdd, offset);

        int threadsPerBlock = 256;
        int blocksPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;
        cuFuncSetBlockShape(vecAdd, threadsPerBlock, 1, 1);
        cuLaunchGrid(vecAdd, blocksPerGrid, 1);
        ...
    }
Full code can be found in the vectorAddDrv SDK code sample.
3.3.1 Context

A CUDA context is analogous to a CPU process. All resources and actions performed within the driver API are encapsulated inside a CUDA context, and the system automatically cleans up these resources when the context is destroyed. Besides objects such as modules and texture or surface references, each context has
its own distinct 32-bit address space. As a result, CUdeviceptr values from different contexts reference different memory locations.

A host thread may have only one device context current at a time. When a context is created with cuCtxCreate(), it is made current to the calling host thread. CUDA functions that operate in a context (most functions that do not involve device enumeration or context management) will return CUDA_ERROR_INVALID_CONTEXT if a valid context is not current to the thread.

Each host thread has a stack of current contexts. cuCtxCreate() pushes the new context onto the top of the stack. cuCtxPopCurrent() may be called to detach the context from the host thread. The context is then "floating" and may be pushed as the current context for any host thread. cuCtxPopCurrent() also restores the previous current context, if any.

A usage count is also maintained for each context. cuCtxCreate() creates a context with a usage count of 1. cuCtxAttach() increments the usage count and cuCtxDetach() decrements it. A context is destroyed when the usage count goes to 0 when calling cuCtxDetach() or cuCtxDestroy().

Usage count facilitates interoperability between third-party authored code operating in the same context. For example, if three libraries are loaded to use the same context, each library would call cuCtxAttach() to increment the usage count and cuCtxDetach() to decrement the usage count when the library is done using the context. For most libraries, it is expected that the application will have created a context before loading or initializing the library; that way, the application can create the context using its own heuristics, and the library simply operates on the context handed to it. Libraries that wish to create their own contexts, unbeknownst to their API clients who may or may not have created contexts of their own, would use cuCtxPushCurrent() and cuCtxPopCurrent() as illustrated in Figure 3-3.

Figure 3-3. Library Context Management: library initialization calls cuCtxCreate(), initializes the context, then calls cuCtxPopCurrent(); each subsequent library call does cuCtxPushCurrent(), uses the context, and calls cuCtxPopCurrent() again.
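As an illustrative sketch (the lib* function names are hypothetical, not from the guide), a library following the Figure 3-3 pattern might look like:

```cuda
#include <cuda.h>

static CUcontext libCtx;

void libInit(CUdevice dev)
{
    cuCtxCreate(&libCtx, 0, dev); // becomes current; usage count is 1
    // ... allocate per-context resources (modules, memory) here ...
    cuCtxPopCurrent(NULL);        // detach: caller's context is restored
}

void libCall(void)
{
    cuCtxPushCurrent(libCtx);     // make the library context current
    // ... launch kernels, copy memory ...
    cuCtxPopCurrent(NULL);        // restore whatever was current before
}

void libShutdown(void)
{
    cuCtxDestroy(libCtx);
}
```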
3.3.2 Module

Modules are dynamically loadable packages of device code and data, akin to DLLs in Windows, that are output by nvcc (see Section 3.1). The names for all symbols, including functions, global variables, and texture or surface references, are
maintained at module scope so that modules written by independent third parties may interoperate in the same CUDA context.

This code sample loads a module and retrieves a handle to some kernel:

    CUmodule cuModule;
    cuModuleLoad(&cuModule, "myModule.ptx");
    CUfunction myKernel;
    cuModuleGetFunction(&myKernel, cuModule, "MyKernel");
This code sample compiles and loads a new module from PTX code and parses compilation errors:

    #define ERROR_BUFFER_SIZE 100
    CUmodule cuModule;
    CUjit_option options[3];
    void* values[3];
    char* PTXCode = "some PTX code";
    options[0] = CU_JIT_ERROR_LOG_BUFFER;
    values[0]  = (void*)malloc(ERROR_BUFFER_SIZE);
    options[1] = CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES;
    values[1]  = (void*)ERROR_BUFFER_SIZE;
    options[2] = CU_JIT_TARGET_FROM_CUCONTEXT;
    values[2]  = 0;
    cuModuleLoadDataEx(&cuModule, PTXCode, 3, options, values);
    for (int i = 0; i < (int)(size_t)values[1]; ++i) {
        // Parse error string here
    }
3.3.3 Kernel Execution

cuFuncSetBlockShape() sets the number of threads per block for a given function, and how their thread IDs are assigned. cuFuncSetSharedSize() sets the size of shared memory for the function.

The cuParam*() family of functions is used to specify the parameters that will be provided to the kernel the next time cuLaunchGrid() or cuLaunch() is invoked to launch the kernel. The second argument of each of the cuParam*() functions specifies the offset of the parameter in the parameter stack. This offset must match the alignment requirement for the parameter type in device code.

Alignment requirements in device code for the built-in vector types are listed in Table B-1. For all other basic types, the alignment requirement in device code matches the alignment requirement in host code and can therefore be obtained using __alignof(). The only exception is when the host compiler aligns double and long long (and long on a 64-bit system) on a one-word boundary instead of a two-word boundary (for example, using gcc's compilation flag -mno-align-double), since in device code these types are always aligned on a two-word boundary. CUdeviceptr is an integer, but represents a pointer, so its alignment requirement is __alignof(void*).
The following code sample uses a macro to adjust the offset of each parameter to meet its alignment requirement.
    #define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)

    int offset = 0;

    int i;
    ALIGN_UP(offset, __alignof(i));
    cuParamSeti(cuFunction, offset, i);
    offset += sizeof(i);

    float4 f4;
    ALIGN_UP(offset, 16); // float4's alignment is 16
    cuParamSetv(cuFunction, offset, &f4, sizeof(f4));
    offset += sizeof(f4);

    char c;
    ALIGN_UP(offset, __alignof(c));
    cuParamSeti(cuFunction, offset, c);
    offset += sizeof(c);

    float f;
    ALIGN_UP(offset, __alignof(f));
    cuParamSetf(cuFunction, offset, f);
    offset += sizeof(f);

    CUdeviceptr dptr;
    ALIGN_UP(offset, __alignof(dptr));
    cuParamSetv(cuFunction, offset, &dptr, sizeof(dptr));
    offset += sizeof(dptr);

    float2 f2;
    ALIGN_UP(offset, 8); // float2's alignment is 8
    cuParamSetv(cuFunction, offset, &f2, sizeof(f2));
    offset += sizeof(f2);

    cuParamSetSize(cuFunction, offset);
    cuFuncSetBlockShape(cuFunction, blockWidth, blockHeight, 1);
    cuLaunchGrid(cuFunction, gridWidth, gridHeight);
The alignment requirement of a structure is equal to the maximum of the alignment requirements of its fields. The alignment requirement of a structure that contains built-in vector types, CUdeviceptr, or non-aligned double and long long might therefore differ between device code and host code. Such a structure might also be padded differently. The following structure, for example, is not padded at all in host code, but it is padded in device code with 12 bytes after field f since the alignment requirement for field f4 is 16.

    typedef struct {
        float f;
        float4 f4;
    } myStruct;
Any parameter of type myStruct must therefore be passed using separate calls to cuParam*(), such as:

    myStruct s;
    int offset = 0;
    cuParamSetv(cuFunction, offset, &s.f, sizeof(s.f));
    offset += sizeof(s.f);

    ALIGN_UP(offset, 16); // float4's alignment is 16
    cuParamSetv(cuFunction, offset, &s.f4, sizeof(s.f4));
    offset += sizeof(s.f4);
3.3.4 Device Memory

Linear memory is allocated using cuMemAlloc() or cuMemAllocPitch() and freed using cuMemFree(). Here is the host code of the sample from Section 3.2.1 written using the driver API:

    // Host code
    int main()
    {
        // Initialize
        if (cuInit(0) != CUDA_SUCCESS)
            exit(0);

        // Get number of devices supporting CUDA
        int deviceCount = 0;
        cuDeviceGetCount(&deviceCount);
        if (deviceCount == 0) {
            printf("There is no device supporting CUDA.\n");
            exit(0);
        }

        // Get handle for device 0
        CUdevice cuDevice = 0;
        cuDeviceGet(&cuDevice, 0);

        // Create context
        CUcontext cuContext;
        cuCtxCreate(&cuContext, 0, cuDevice);

        // Create module from binary file
        CUmodule cuModule;
        cuModuleLoad(&cuModule, "VecAdd.ptx");

        // Get function handle from module
        CUfunction vecAdd;
        cuModuleGetFunction(&vecAdd, cuModule, "VecAdd");

        // Allocate vectors in device memory
        size_t size = N * sizeof(float);
        CUdeviceptr d_A;
        cuMemAlloc(&d_A, size);
        CUdeviceptr d_B;
        cuMemAlloc(&d_B, size);
        CUdeviceptr d_C;
        cuMemAlloc(&d_C, size);

        // Copy vectors from host memory to device memory
        // h_A and h_B are input vectors stored in host memory
        cuMemcpyHtoD(d_A, h_A, size);
        cuMemcpyHtoD(d_B, h_B, size);

        // Invoke kernel
    #define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
        int offset = 0;
        ALIGN_UP(offset, __alignof(d_A));
        cuParamSetv(vecAdd, offset, &d_A, sizeof(d_A));
        offset += sizeof(d_A);
        ALIGN_UP(offset, __alignof(d_B));
        cuParamSetv(vecAdd, offset, &d_B, sizeof(d_B));
        offset += sizeof(d_B);
        ALIGN_UP(offset, __alignof(d_C));
        cuParamSetv(vecAdd, offset, &d_C, sizeof(d_C));
        offset += sizeof(d_C);
        cuParamSetSize(vecAdd, offset);

        int threadsPerBlock = 256;
        int blocksPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;
        cuFuncSetBlockShape(vecAdd, threadsPerBlock, 1, 1);
        cuLaunchGrid(vecAdd, blocksPerGrid, 1);

        // Copy result from device memory to host memory
        // h_C contains the result in host memory
        cuMemcpyDtoH(h_C, d_C, size);

        // Free device memory
        cuMemFree(d_A);
        cuMemFree(d_B);
        cuMemFree(d_C);
    }
Linear memory can also be allocated through cuMemAllocPitch(). This function is recommended for allocations of 2D arrays as it makes sure that the allocation is appropriately padded to meet the alignment requirements described in Section 5.3.2.1, therefore ensuring best performance when accessing the row addresses or performing copies between 2D arrays and other regions of device memory (using cuMemcpy2D()). The returned pitch (or stride) must be used to access array elements. The following code sample allocates a width×height 2D array of floating-point values and shows how to loop over the array elements in device code:

    // Host code (assuming cuModule has been loaded)
    CUdeviceptr devPtr;
    size_t pitch;
    cuMemAllocPitch(&devPtr, &pitch,
                    width * sizeof(float), height, 4);
    CUfunction myKernel;
    cuModuleGetFunction(&myKernel, cuModule, "MyKernel");
    cuParamSetv(myKernel, 0, &devPtr, sizeof(devPtr));
    cuParamSetSize(myKernel, sizeof(devPtr));
    cuFuncSetBlockShape(myKernel, 512, 1, 1);
    cuLaunchGrid(myKernel, 100, 1);

    // Device code
    __global__ void MyKernel(float* devPtr)
    {
        for (int r = 0; r < height; ++r) {
            float* row = (float*)((char*)devPtr + r * pitch);
            for (int c = 0; c < width; ++c) {
                float element = row[c];
            }
        }
    }
The following code sample allocates a width×height CUDA array of one 32-bit floating-point component:

    CUDA_ARRAY_DESCRIPTOR desc;
    desc.Format = CU_AD_FORMAT_FLOAT;
    desc.NumChannels = 1;
    desc.Width = width;
    desc.Height = height;
    CUarray cuArray;
    cuArrayCreate(&cuArray, &desc);
The reference manual lists all the various functions used to copy memory between linear memory allocated with cuMemAlloc(), linear memory allocated with cuMemAllocPitch(), and CUDA arrays. The following code sample copies the 2D array to the CUDA array allocated in the previous code samples:

    CUDA_MEMCPY2D copyParam;
    memset(&copyParam, 0, sizeof(copyParam));
    copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
    copyParam.dstArray = cuArray;
    copyParam.srcMemoryType = CU_MEMORYTYPE_DEVICE;
    copyParam.srcDevice = devPtr;
    copyParam.srcPitch = pitch;
    copyParam.WidthInBytes = width * sizeof(float);
    copyParam.Height = height;
    cuMemcpy2D(&copyParam);
The following code sample illustrates various ways of accessing global variables via the driver API:

    CUdeviceptr devPtr;
    size_t bytes;

    __constant__ float constData[256];
    float data[256];
    cuModuleGetGlobal(&devPtr, &bytes, cuModule, "constData");
    cuMemcpyHtoD(devPtr, data, bytes);
    cuMemcpyDtoH(data, devPtr, bytes);

    __device__ float devData;
    float value = 3.14f;
    cuModuleGetGlobal(&devPtr, &bytes, cuModule, "devData");
    cuMemcpyHtoD(devPtr, &value, sizeof(float));

    __device__ float* devPointer;
    CUdeviceptr ptr;
    cuMemAlloc(&ptr, 256 * sizeof(float));
    cuModuleGetGlobal(&devPtr, &bytes, cuModule, "devPointer");
    cuMemcpyHtoD(devPtr, &ptr, sizeof(ptr));
3.3.5 Shared Memory

The following code sample is the driver version of the host code of the sample from Section 3.2.2. In this sample, shared memory is statically allocated within the kernel as opposed to allocated at runtime through cuFuncSetSharedSize().

    // Matrices are stored in row-major order:
    // M(row, col) = *(M.elements + row * M.stride + col)
    typedef struct {
        int width;
        int height;
        int stride;
        float* elements;
    } Matrix;

    // Matrix multiplication - Host code
    // Matrix dimensions are assumed to be multiples of BLOCK_SIZE
    void MatMul(const Matrix A, const Matrix B, Matrix C)
    {
        CUdeviceptr elements;

        // Load A and B to device memory
        Matrix d_A;
        d_A.width = d_A.stride = A.width;
        d_A.height = A.height;
        size_t size = A.width * A.height * sizeof(float);
        cuMemAlloc(&elements, size);
        cuMemcpyHtoD(elements, A.elements, size);
        d_A.elements = (float*)elements;

        Matrix d_B;
        d_B.width = d_B.stride = B.width;
        d_B.height = B.height;
        size = B.width * B.height * sizeof(float);
        cuMemAlloc(&elements, size);
        cuMemcpyHtoD(elements, B.elements, size);
        d_B.elements = (float*)elements;

        // Allocate C in device memory
        Matrix d_C;
        d_C.width = d_C.stride = C.width;
        d_C.height = C.height;
        size = C.width * C.height * sizeof(float);
        cuMemAlloc(&elements, size);
        d_C.elements = (float*)elements;

        // Invoke kernel (assuming cuModule has been loaded)
        CUfunction matMulKernel;
        cuModuleGetFunction(&matMulKernel, cuModule, "MatMulKernel");
        int offset = 0;
        cuParamSetv(matMulKernel, offset, &d_A, sizeof(d_A));
        offset += sizeof(d_A);
        cuParamSetv(matMulKernel, offset, &d_B, sizeof(d_B));
        offset += sizeof(d_B);
        cuParamSetv(matMulKernel, offset, &d_C, sizeof(d_C));
        offset += sizeof(d_C);
        cuParamSetSize(matMulKernel, offset);
        cuFuncSetBlockShape(matMulKernel, BLOCK_SIZE, BLOCK_SIZE, 1);
        cuLaunchGrid(matMulKernel,
                     B.width / BLOCK_SIZE, A.height / BLOCK_SIZE);

        // Read C from device memory
        cuMemcpyDtoH(C.elements, (CUdeviceptr)d_C.elements, size);

        // Free device memory
        cuMemFree((CUdeviceptr)d_A.elements);
        cuMemFree((CUdeviceptr)d_B.elements);
        cuMemFree((CUdeviceptr)d_C.elements);
    }
3.3.6 Multiple Devices

cuDeviceGetCount() and cuDeviceGet() provide a way to enumerate the devices present in the system; other functions (described in the reference manual) retrieve their properties:

    int deviceCount;
    cuDeviceGetCount(&deviceCount);
    for (int device = 0; device < deviceCount; ++device) {
        CUdevice cuDevice;
        cuDeviceGet(&cuDevice, device);
        int major, minor;
        cuDeviceComputeCapability(&major, &minor, cuDevice);
    }
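As a sketch building on this loop (not from the guide), an application might use the enumerated properties to pick the device with the highest compute capability:

```cuda
#include <cuda.h>
#include <stdio.h>

int main()
{
    cuInit(0);
    int deviceCount = 0;
    cuDeviceGetCount(&deviceCount);

    CUdevice best = 0;
    int bestMajor = 0, bestMinor = 0;
    for (int device = 0; device < deviceCount; ++device) {
        CUdevice cuDevice;
        cuDeviceGet(&cuDevice, device);
        int major, minor;
        cuDeviceComputeCapability(&major, &minor, cuDevice);
        // Keep the device with the highest (major, minor) pair.
        if (major > bestMajor ||
            (major == bestMajor && minor > bestMinor)) {
            best = cuDevice;
            bestMajor = major;
            bestMinor = minor;
        }
    }
    printf("best device: compute capability %d.%d\n",
           bestMajor, bestMinor);
    return 0;
}
```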
3.3.7 Texture and Surface Memory

3.3.7.1 Texture Memory

Texture binding is done using cuTexRefSetAddress() for linear memory and cuTexRefSetArray() for CUDA arrays. If a module cuModule contains some texture reference texRef defined as

    texture<float, 2, cudaReadModeElementType> texRef;
the following code sample retrieves texRef's handle:

    CUtexref cuTexRef;
    cuModuleGetTexRef(&cuTexRef, cuModule, "texRef");
The following code sample binds texRef to some linear memory pointed to by devPtr:

    CUDA_ARRAY_DESCRIPTOR desc;
    cuTexRefSetAddress2D(cuTexRef, &desc, devPtr, pitch);
The following code sample binds texRef to a CUDA array cuArray:

    cuTexRefSetArray(cuTexRef, cuArray, CU_TRSA_OVERRIDE_FORMAT);
The reference manual lists various functions used to set address mode, filter mode, format, and other flags for some texture reference. The format specified when binding a texture to a texture reference must match the parameters specified when declaring the texture reference; otherwise, the results of texture fetches are undefined.
The following code sample is the driver version of the host code of the sample from Section 3.2.4.1.3.

    // Host code
    int main()
    {
        // Allocate CUDA array in device memory
        CUarray cuArray;
        CUDA_ARRAY_DESCRIPTOR desc;
        desc.Format = CU_AD_FORMAT_FLOAT;
        desc.NumChannels = 1;
        desc.Width = width;
        desc.Height = height;
        cuArrayCreate(&cuArray, &desc);

        // Copy to device memory some data located at address h_data
        // in host memory
        CUDA_MEMCPY2D copyParam;
        memset(&copyParam, 0, sizeof(copyParam));
        copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
        copyParam.dstArray = cuArray;
        copyParam.srcMemoryType = CU_MEMORYTYPE_HOST;
        copyParam.srcHost = h_data;
        copyParam.srcPitch = width * sizeof(float);
        copyParam.WidthInBytes = copyParam.srcPitch;
        copyParam.Height = height;
        cuMemcpy2D(&copyParam);

        // Set texture parameters
        CUtexref texRef;
        cuModuleGetTexRef(&texRef, cuModule, "texRef");
        cuTexRefSetAddressMode(texRef, 0, CU_TR_ADDRESS_MODE_WRAP);
        cuTexRefSetAddressMode(texRef, 1, CU_TR_ADDRESS_MODE_WRAP);
        cuTexRefSetFilterMode(texRef, CU_TR_FILTER_MODE_LINEAR);
        cuTexRefSetFlags(texRef, CU_TRSF_NORMALIZED_COORDINATES);
        cuTexRefSetFormat(texRef, CU_AD_FORMAT_FLOAT, 1);

        // Bind the array to the texture reference
        cuTexRefSetArray(texRef, cuArray, CU_TRSA_OVERRIDE_FORMAT);

        // Allocate result of transformation in device memory
        CUdeviceptr output;
        cuMemAlloc(&output, width * height * sizeof(float));

        // Invoke kernel (assuming cuModule has been loaded)
        CUfunction transformKernel;
        cuModuleGetFunction(&transformKernel, cuModule,
                            "transformKernel");
    #define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
        int offset = 0;
        ALIGN_UP(offset, __alignof(output));
        cuParamSetv(transformKernel, offset, &output, sizeof(output));
        offset += sizeof(output);
        ALIGN_UP(offset, __alignof(width));
        cuParamSeti(transformKernel, offset, width);
        offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(transformKernel, offset, height);
    offset += sizeof(height);
    ALIGN_UP(offset, __alignof(angle));
    cuParamSetf(transformKernel, offset, angle);
    offset += sizeof(angle);
    cuParamSetSize(transformKernel, offset);

    cuFuncSetBlockShape(transformKernel, 16, 16, 1);
    cuLaunchGrid(transformKernel,
                 (width + 16 - 1) / 16, (height + 16 - 1) / 16);

    // Free device memory
    cuArrayDestroy(cuArray);
    cuMemFree(output);
}
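The ALIGN_UP macro used throughout these samples rounds a byte offset up to the next multiple of a power-of-two alignment, so that each kernel parameter starts at an offset compatible with its natural alignment. A minimal host-only sketch of that arithmetic (the helper names are illustrative, not part of the CUDA API):

```c
#include <stddef.h>

/* Same bit trick as the ALIGN_UP macro in the samples above: round
 * `offset` up to the next multiple of `alignment`, where `alignment`
 * must be a power of two. */
static size_t align_up(size_t offset, size_t alignment)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}

/* Mimics the parameter-packing loop: returns the final parameter
 * block size after appending n fields with the given sizes and
 * alignments, exactly as the cuParamSet*() sequences above do. */
static size_t pack_params(const size_t *sizes, const size_t *aligns, int n)
{
    size_t offset = 0;
    for (int i = 0; i < n; ++i) {
        offset = align_up(offset, aligns[i]); /* ALIGN_UP step    */
        offset += sizes[i];                   /* cuParamSet* step */
    }
    return offset;
}
```

For instance, packing an 8-byte pointer followed by two 4-byte ints yields a 16-byte parameter block, which is the value the samples pass to cuParamSetSize().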
3.3.7.2
Surface Memory
Surface binding is done using cuSurfRefSetArray() for CUDA arrays. If a module cuModule contains some surface reference surfRef defined as

surface<void, 2> surfRef;

the following code sample retrieves surfRef's handle:

CUsurfref cuSurfRef;
cuModuleGetSurfRef(&cuSurfRef, cuModule, "surfRef");
The following code sample binds surfRef to a CUDA array cuArray:

cuSurfRefSetArray(cuSurfRef, cuArray, CU_SRSA_USE_ARRAY_FORMAT);
The following code sample is the driver version of the host code of the sample from Section 3.2.4.1.4.

// Host code
int main()
{
    // Allocate CUDA arrays in device memory
    CUDA_ARRAY_DESCRIPTOR desc;
    desc.Format = CU_AD_FORMAT_UNSIGNED_INT8;
    desc.NumChannels = 4;
    desc.Width = width;
    desc.Height = height;
    CUarray cuInputArray;
    cuArrayCreate(&cuInputArray, &desc);
    CUarray cuOutputArray;
    cuArrayCreate(&cuOutputArray, &desc);

    // Copy to device memory some data located at address h_data
    // in host memory
    CUDA_MEMCPY2D copyParam;
    memset(&copyParam, 0, sizeof(copyParam));
    copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
    copyParam.dstArray = cuInputArray;
    copyParam.srcMemoryType = CU_MEMORYTYPE_HOST;
    copyParam.srcHost = h_data;
    copyParam.srcPitch = width * sizeof(float);
    copyParam.WidthInBytes = copyParam.srcPitch;
    copyParam.Height = height;
    cuMemcpy2D(&copyParam);

    // Bind the arrays to the surface references
    cuSurfRefSetArray(inputSurfRef, cuInputArray,
                      CU_SRSA_USE_ARRAY_FORMAT);
    cuSurfRefSetArray(outputSurfRef, cuOutputArray,
                      CU_SRSA_USE_ARRAY_FORMAT);

    // Invoke kernel (assuming cuModule has been loaded)
    CUfunction copyKernel;
    cuModuleGetFunction(&copyKernel, cuModule, "copyKernel");
#define ALIGN_UP(offset, alignment) \
    (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(copyKernel, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(copyKernel, offset, height);
    offset += sizeof(height);
    cuParamSetSize(copyKernel, offset);

    cuFuncSetBlockShape(copyKernel, 16, 16, 1);
    cuLaunchGrid(copyKernel,
                 (width + 16 - 1) / 16, (height + 16 - 1) / 16);

    // Free device memory
    cuArrayDestroy(cuInputArray);
    cuArrayDestroy(cuOutputArray);
}
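The launch configurations in these samples all size the grid with the same round-up integer division, so that enough blocks are launched to cover every element even when the extent is not a multiple of the block dimension. A minimal sketch (the function name is illustrative):

```c
/* Round-up integer division used throughout the samples to size the
 * grid: the smallest number of blocks of `block` threads that covers
 * `n` elements, i.e. (n + block - 1) / block. */
static int grid_dim(int n, int block)
{
    return (n + block - 1) / block;
}
```

For a 16-wide block, a width of 1024 needs exactly 64 blocks, while a width of 1025 needs 65, the last block being only partially filled.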
3.3.8
Page-Locked Host Memory
Page-locked host memory can be allocated using cuMemHostAlloc() with optional mutually non-exclusive flags:
- CU_MEMHOSTALLOC_PORTABLE to allocate memory that is portable across CUDA contexts (see Section 3.2.5.1);
- CU_MEMHOSTALLOC_WRITECOMBINED to allocate memory as write-combining (see Section 3.2.5.2);
- CU_MEMHOSTALLOC_DEVICEMAP to allocate mapped page-locked memory (see Section 3.2.5.3).
Page-locked host memory is freed using cuMemFreeHost().
Page-locked memory mapping is enabled for a CUDA context by creating the context with the CU_CTX_MAP_HOST flag, and device pointers to mapped page-locked memory are retrieved using cuMemHostGetDevicePointer(). Applications may query whether a device supports mapped page-locked host memory or not by checking the CU_DEVICE_ATTRIBUTE_CAN_MAP_HOST_MEMORY attribute using cuDeviceGetAttribute().
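"Mutually non-exclusive" means the CU_MEMHOSTALLOC_* flags are independent bits that can be OR-ed together in a single cuMemHostAlloc() call. A host-only sketch of that flag handling, using stand-in values purely for illustration (the real constants are defined in cuda.h):

```c
/* Stand-in flag values for illustration only; use the real
 * CU_MEMHOSTALLOC_* constants from cuda.h in actual code. */
#define MEMHOSTALLOC_PORTABLE      0x01u
#define MEMHOSTALLOC_DEVICEMAP     0x02u
#define MEMHOSTALLOC_WRITECOMBINED 0x04u

/* Each flag occupies its own bit, so any combination is valid and
 * each property can be tested independently. */
static int has_flag(unsigned int flags, unsigned int f)
{
    return (flags & f) != 0;
}
```

For example, `MEMHOSTALLOC_PORTABLE | MEMHOSTALLOC_DEVICEMAP` requests memory that is both portable across contexts and mapped into the device address space, without write-combining.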
3.3.9
Asynchronous Concurrent Execution
Applications may query if a device can perform copies between page-locked host memory and device memory concurrently with kernel execution by checking the CU_DEVICE_ATTRIBUTE_GPU_OVERLAP attribute using cuDeviceGetAttribute().
Applications may query if a device supports multiple kernels running concurrently by checking the CU_DEVICE_ATTRIBUTE_CONCURRENT_KERNELS attribute using cuDeviceGetAttribute().
3.3.9.1
Stream
The driver API provides functions similar to the runtime API to manage streams. The following code sample is the driver version of the code sample from Section 3.2.6.4.

CUstream stream[2];
for (int i = 0; i < 2; ++i)
    cuStreamCreate(&stream[i], 0);
float* hostPtr;
cuMemAllocHost((void**)&hostPtr, 2 * size);
for (int i = 0; i < 2; ++i)
    cuMemcpyHtoDAsync(inputDevPtr + i * size, hostPtr + i * size,
                      size, stream[i]);
for (int i = 0; i < 2; ++i) {
#define ALIGN_UP(offset, alignment) \
    (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(outputDevPtr));
    cuParamSetv(cuFunction, offset, &outputDevPtr, sizeof(outputDevPtr));
    offset += sizeof(outputDevPtr);
    ALIGN_UP(offset, __alignof(inputDevPtr));
    cuParamSetv(cuFunction, offset, &inputDevPtr, sizeof(inputDevPtr));
    offset += sizeof(inputDevPtr);
    ALIGN_UP(offset, __alignof(size));
    cuParamSeti(cuFunction, offset, size);
    offset += sizeof(int);
    cuParamSetSize(cuFunction, offset);
    cuFuncSetBlockShape(cuFunction, 512, 1, 1);
    cuLaunchGridAsync(cuFunction, 100, 1, stream[i]);
}
for (int i = 0; i < 2; ++i)
    cuMemcpyDtoHAsync(hostPtr + i * size, outputDevPtr + i * size,
                      size, stream[i]);
cuCtxSynchronize();
for (int i = 0; i < 2; ++i)
    cuStreamDestroy(stream[i]);
3.3.9.2
Event Management
The driver API provides functions similar to the runtime API to manage events. The following code sample is the driver version of the code sample from Section 3.2.6.6.

CUevent start, stop;
cuEventCreate(&start, CU_EVENT_DEFAULT);
cuEventCreate(&stop, CU_EVENT_DEFAULT);
cuEventRecord(start, 0);
for (int i = 0; i < 2; ++i)
    cuMemcpyHtoDAsync(inputDevPtr + i * size, hostPtr + i * size,
                      size, stream[i]);
for (int i = 0; i < 2; ++i) {
#define ALIGN_UP(offset, alignment) \
    (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(outputDevPtr));
    cuParamSetv(cuFunction, offset, &outputDevPtr, sizeof(outputDevPtr));
    offset += sizeof(outputDevPtr);
    ALIGN_UP(offset, __alignof(inputDevPtr));
    cuParamSetv(cuFunction, offset, &inputDevPtr, sizeof(inputDevPtr));
    offset += sizeof(inputDevPtr);
    ALIGN_UP(offset, __alignof(size));
    cuParamSeti(cuFunction, offset, size);
    offset += sizeof(size);
    cuParamSetSize(cuFunction, offset);
    cuFuncSetBlockShape(cuFunction, 512, 1, 1);
    cuLaunchGridAsync(cuFunction, 100, 1, stream[i]);
}
for (int i = 0; i < 2; ++i)
    cuMemcpyDtoHAsync(hostPtr + i * size, outputDevPtr + i * size,
                      size, stream[i]);
cuEventRecord(stop, 0);
cuEventSynchronize(stop);
float elapsedTime;
cuEventElapsedTime(&elapsedTime, start, stop);
They are destroyed this way:

cuEventDestroy(start);
cuEventDestroy(stop);
3.3.9.3
Synchronous Calls
Whether the host thread will yield, block, or spin on a synchronous function call can be specified by calling cuCtxCreate() with some specific flags as described in the reference manual.
3.3.10
Graphics Interoperability
The driver API provides functions similar to the runtime API to manage graphics interoperability.
A resource must be registered to CUDA before it can be mapped using the functions mentioned in Sections 3.3.10.1 and 3.3.10.2. These functions return a CUDA graphics resource of type CUgraphicsResource. Registering a resource is potentially high-overhead and therefore typically called only once per resource. A CUDA graphics resource is unregistered using cuGraphicsUnregisterResource().
Once a resource is registered to CUDA, it can be mapped and unmapped as many times as necessary using cuGraphicsMapResources() and cuGraphicsUnmapResources(). cuGraphicsResourceSetMapFlags() can be called to specify usage hints (write-only, read-only) that the CUDA driver can use to optimize resource management.
A mapped resource can be read from or written to by kernels using the device memory address returned by cuGraphicsResourceGetMappedPointer() for buffers and cuGraphicsSubResourceGetMappedArray() for CUDA arrays. Accessing a resource through OpenGL or Direct3D while it is mapped to CUDA produces undefined results.
Sections 3.3.10.1 and 3.3.10.2 give specifics for each graphics API and some code samples.
3.3.10.1
OpenGL Interoperability
Interoperability with OpenGL requires that the CUDA context be specifically created using cuGLCtxCreate() instead of cuCtxCreate().
The OpenGL resources that may be mapped into the address space of CUDA are OpenGL buffer, texture, and renderbuffer objects. A buffer object is registered using cuGraphicsGLRegisterBuffer(). A texture or renderbuffer object is registered using cuGraphicsGLRegisterImage(). The same restrictions described in Section 3.2.7.1 apply.
The following code sample is the driver version of the code sample from Section 3.2.7.1.

CUfunction createVertices;
GLuint positionsVBO;
CUgraphicsResource positionsVBO_CUDA;

int main()
{
    // Initialize driver API
    ...
    // Get handle for device 0
    CUdevice cuDevice = 0;
    cuDeviceGet(&cuDevice, 0);

    // Create context
    CUcontext cuContext;
    cuGLCtxCreate(&cuContext, 0, cuDevice);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "createVertices.ptx");
    // Get function handle from module
    cuModuleGetFunction(&createVertices, cuModule, "createVertices");

    // Initialize OpenGL and GLUT
    ...
    glutDisplayFunc(display);

    // Create buffer object and register it with CUDA
    glGenBuffers(1, &positionsVBO);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    unsigned int size = width * height * 4 * sizeof(float);
    glBufferData(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    cuGraphicsGLRegisterBuffer(&positionsVBO_CUDA, positionsVBO,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);

    // Launch rendering loop
    glutMainLoop();
}

void display()
{
    // Map OpenGL buffer object for writing from CUDA
    CUdeviceptr positions;
    cuGraphicsMapResources(1, &positionsVBO_CUDA, 0);
    size_t num_bytes;
    cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
                                       positionsVBO_CUDA);

    // Execute kernel
#define ALIGN_UP(offset, alignment) \
    (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(positions));
    cuParamSetv(createVertices, offset, &positions, sizeof(positions));
    offset += sizeof(positions);
    ALIGN_UP(offset, __alignof(time));
    cuParamSetf(createVertices, offset, time);
    offset += sizeof(time);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(createVertices, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(createVertices, offset, height);
    offset += sizeof(height);
    cuParamSetSize(createVertices, offset);

    int threadsPerBlock = 16;
    cuFuncSetBlockShape(createVertices,
                        threadsPerBlock, threadsPerBlock, 1);
    cuLaunchGrid(createVertices,
                 width / threadsPerBlock, height / threadsPerBlock);

    // Unmap buffer object
    cuGraphicsUnmapResources(1, &positionsVBO_CUDA, 0);

    // Render from buffer object
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, width * height);
    glDisableClientState(GL_VERTEX_ARRAY);

    // Swap buffers
    glutSwapBuffers();
    glutPostRedisplay();
}

void deleteVBO()
{
    cuGraphicsUnregisterResource(positionsVBO_CUDA);
    glDeleteBuffers(1, &positionsVBO);
}
On Windows and for Quadro GPUs, cuWGLGetDevice() can be used to retrieve the CUDA device associated with the handle returned by wglEnumGpusNV().
3.3.10.2
Direct3D Interoperability
Interoperability with Direct3D requires that the Direct3D device be specified when the CUDA context is created. This is done by creating the CUDA context using cuD3D9CtxCreate() or cuD3D9CtxCreateOnDevice() (resp. cuD3D10CtxCreate() or cuD3D10CtxCreateOnDevice() and cuD3D11CtxCreate() or cuD3D11CtxCreateOnDevice()) instead of cuCtxCreate().
Two sets of calls are also available to allow the creation of CUDA devices with interoperability with Direct3D devices that use NVIDIA SLI in AFR (Alternate Frame Rendering) mode. These two new sets of calls are cuD3D[9|10|11]CtxCreateOnDevice() and cuD3D[9|10|11]GetDevices(). A call to cuD3D[9|10|11]GetDevices() should be used to obtain a list of CUDA device handles that can be passed as the last parameter to cuD3D[9|10|11]CtxCreateOnDevice().
Applications that intend to support interoperability between Direct3D devices in SLI configurations and CUDA should be written to only use these calls instead of the cuD3D[9|10|11]CtxCreate() calls. In addition, they can call cuCtxPushCurrent() and cuCtxPopCurrent() to change the CUDA context active at a given time. See Section 4.3 for general recommendations related to interoperability between Direct3D devices using SLI and CUDA contexts.
The Direct3D resources that may be mapped into the address space of CUDA are Direct3D buffers, textures, and surfaces. These resources are registered using cuGraphicsD3D9RegisterResource(), cuGraphicsD3D10RegisterResource(), and cuGraphicsD3D11RegisterResource().
The following code sample is the driver version of the host code of the sample from Section 3.2.7.2.
Direct3D 9 Version:

IDirect3D9* D3D;
IDirect3DDevice9* device;
struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};
IDirect3DVertexBuffer9* positionsVB;
CUgraphicsResource positionsVB_CUDA;

int main()
{
    // Initialize Direct3D
    D3D = Direct3DCreate9(D3D_SDK_VERSION);

    // Get a CUDA-enabled adapter
    unsigned int adapter = 0;
    for (; adapter < D3D->GetAdapterCount(); adapter++) {
        D3DADAPTER_IDENTIFIER9 adapterId;
        D3D->GetAdapterIdentifier(adapter, 0, &adapterId);
        int dev;
        if (cuD3D9GetDevice(&dev, adapterId.DeviceName) == CUDA_SUCCESS)
            break;
    }

    // Create device
    ...
    D3D->CreateDevice(adapter, D3DDEVTYPE_HAL, hWnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING,
                      &params, &device);

    // Initialize driver API
    ...
    // Create context
    CUdevice cuDevice;
    CUcontext cuContext;
    cuD3D9CtxCreate(&cuContext, &cuDevice, 0, device);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "createVertices.ptx");

    // Get function handle from module
    cuModuleGetFunction(&createVertices, cuModule, "createVertices");

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    device->CreateVertexBuffer(size, 0, D3DFVF_CUSTOMVERTEX,
                               D3DPOOL_DEFAULT, &positionsVB, 0);
    cuGraphicsD3D9RegisterResource(&positionsVB_CUDA,
                                   positionsVB,
                                   CU_GRAPHICS_REGISTER_FLAGS_NONE);
    cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                  CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
}

void Render()
{
    // Map vertex buffer for writing from CUDA
    CUdeviceptr positions;
    cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
                                       positionsVB_CUDA);

    // Execute kernel
#define ALIGN_UP(offset, alignment) \
    (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(positions));
    cuParamSetv(createVertices, offset, &positions, sizeof(positions));
    offset += sizeof(positions);
    ALIGN_UP(offset, __alignof(time));
    cuParamSetf(createVertices, offset, time);
    offset += sizeof(time);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(createVertices, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(createVertices, offset, height);
    offset += sizeof(height);
    cuParamSetSize(createVertices, offset);

    int threadsPerBlock = 16;
    cuFuncSetBlockShape(createVertices,
                        threadsPerBlock, threadsPerBlock, 1);
    cuLaunchGrid(createVertices,
                 width / threadsPerBlock, height / threadsPerBlock);

    // Unmap vertex buffer
    cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cuGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}
Direct3D 10 Version:

ID3D10Device* device;
struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};
ID3D10Buffer* positionsVB;
CUgraphicsResource positionsVB_CUDA;

int main()
{
    // Get a CUDA-enabled adapter
    IDXGIFactory* factory;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
    IDXGIAdapter* adapter = 0;
    for (unsigned int i = 0; !adapter; ++i) {
        if (FAILED(factory->EnumAdapters(i, &adapter)))
            break;
        int dev;
        if (cuD3D10GetDevice(&dev, adapter) == CUDA_SUCCESS)
            break;
        adapter->Release();
    }
    factory->Release();

    // Create swap chain and device
    ...
    D3D10CreateDeviceAndSwapChain(adapter,
                                  D3D10_DRIVER_TYPE_HARDWARE, 0,
                                  D3D10_CREATE_DEVICE_DEBUG,
                                  D3D10_SDK_VERSION,
                                  &swapChainDesc, &swapChain, &device);
    adapter->Release();

    // Initialize driver API
    ...
    // Create context
    CUdevice cuDevice;
    CUcontext cuContext;
    cuD3D10CtxCreate(&cuContext, &cuDevice, 0, device);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "createVertices.ptx");

    // Get function handle from module
    cuModuleGetFunction(&createVertices, cuModule, "createVertices");

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    D3D10_BUFFER_DESC bufferDesc;
    bufferDesc.Usage = D3D10_USAGE_DEFAULT;
    bufferDesc.ByteWidth = size;
    bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags = 0;
    device->CreateBuffer(&bufferDesc, 0, &positionsVB);
    cuGraphicsD3D10RegisterResource(&positionsVB_CUDA, positionsVB,
                                    CU_GRAPHICS_REGISTER_FLAGS_NONE);
    cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                  CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
}

void Render()
{
    // Map vertex buffer for writing from CUDA
    CUdeviceptr positions;
    cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
                                       positionsVB_CUDA);

    // Execute kernel
#define ALIGN_UP(offset, alignment) \
    (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(positions));
    cuParamSetv(createVertices, offset, &positions, sizeof(positions));
    offset += sizeof(positions);
    ALIGN_UP(offset, __alignof(time));
    cuParamSetf(createVertices, offset, time);
    offset += sizeof(time);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(createVertices, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(createVertices, offset, height);
    offset += sizeof(height);
    cuParamSetSize(createVertices, offset);

    int threadsPerBlock = 16;
    cuFuncSetBlockShape(createVertices,
                        threadsPerBlock, threadsPerBlock, 1);
    cuLaunchGrid(createVertices,
                 width / threadsPerBlock, height / threadsPerBlock);

    // Unmap vertex buffer
    cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);
    // Draw and present
    ...
}

void releaseVB()
{
    cuGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}
Direct3D 11 Version:

ID3D11Device* device;
struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};
ID3D11Buffer* positionsVB;
CUgraphicsResource positionsVB_CUDA;

int main()
{
    // Get a CUDA-enabled adapter
    IDXGIFactory* factory;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
    IDXGIAdapter* adapter = 0;
    for (unsigned int i = 0; !adapter; ++i) {
        if (FAILED(factory->EnumAdapters(i, &adapter)))
            break;
        int dev;
        if (cuD3D11GetDevice(&dev, adapter) == CUDA_SUCCESS)
            break;
        adapter->Release();
    }
    factory->Release();

    // Create swap chain and device
    ...
    sFnPtr_D3D11CreateDeviceAndSwapChain(adapter,
                                         D3D_DRIVER_TYPE_HARDWARE, 0,
                                         D3D11_CREATE_DEVICE_DEBUG,
                                         featureLevels, 3,
                                         D3D11_SDK_VERSION,
                                         &swapChainDesc, &swapChain,
                                         &device, &featureLevel,
                                         &deviceContext);
    adapter->Release();

    // Initialize driver API
    ...
    // Create context
    CUdevice cuDevice;
    CUcontext cuContext;
    cuD3D11CtxCreate(&cuContext, &cuDevice, 0, device);
    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "createVertices.ptx");

    // Get function handle from module
    cuModuleGetFunction(&createVertices, cuModule, "createVertices");

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    D3D11_BUFFER_DESC bufferDesc;
    bufferDesc.Usage = D3D11_USAGE_DEFAULT;
    bufferDesc.ByteWidth = size;
    bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags = 0;
    device->CreateBuffer(&bufferDesc, 0, &positionsVB);
    cuGraphicsD3D11RegisterResource(&positionsVB_CUDA, positionsVB,
                                    CU_GRAPHICS_REGISTER_FLAGS_NONE);
    cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                  CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
}

void Render()
{
    // Map vertex buffer for writing from CUDA
    CUdeviceptr positions;
    cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
                                       positionsVB_CUDA);

    // Execute kernel
#define ALIGN_UP(offset, alignment) \
    (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(positions));
    cuParamSetv(createVertices, offset, &positions, sizeof(positions));
    offset += sizeof(positions);
    ALIGN_UP(offset, __alignof(time));
    cuParamSetf(createVertices, offset, time);
    offset += sizeof(time);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(createVertices, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(createVertices, offset, height);
    offset += sizeof(height);
    cuParamSetSize(createVertices, offset);

    int threadsPerBlock = 16;
    cuFuncSetBlockShape(createVertices,
                        threadsPerBlock, threadsPerBlock, 1);
    cuLaunchGrid(createVertices,
                 width / threadsPerBlock, height / threadsPerBlock);

    // Unmap vertex buffer
    cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cuGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}
3.3.11
Error Handling
All driver functions return an error code, but for an asynchronous function (see Section 3.2.6), this error code cannot possibly report any of the asynchronous errors that could occur on the device, since the function returns before the device has completed the task. The error code only reports errors that occur on the host prior to executing the task, typically related to parameter validation; if an asynchronous error occurs, it will be reported by some subsequent unrelated runtime function call.
The only way to check for asynchronous errors just after some asynchronous function call is therefore to synchronize just after the call by calling cuCtxSynchronize() (or by using any other synchronization mechanisms described in Section 3.3.9) and checking the error code returned by cuCtxSynchronize().
3.3.12
Call Stack
On devices of compute capability 2.x, the size of the call stack can be queried using cuCtxGetLimit() and set using cuCtxSetLimit().
3.4
Interoperability between Runtime and Driver APIs
An application can mix runtime API code with driver API code.
If a context is created and made current via the driver API, subsequent runtime calls will pick up this context instead of creating a new one.
If the runtime is initialized (implicitly as mentioned in Section 3.2), cuCtxAttach() can be used to retrieve the context created during initialization. This context can be used by subsequent driver API calls.
Device memory can be allocated and freed using either API. CUdeviceptr can be cast to regular pointers and vice-versa:

CUdeviceptr devPtr;
float* d_data;

// Allocation using driver API
cuMemAlloc(&devPtr, size);
d_data = (float*)devPtr;

// Allocation using runtime API
cudaMalloc(&d_data, size);
devPtr = (CUdeviceptr)d_data;
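The casts above are legal because CUdeviceptr is an integer type wide enough to hold a device address, so an address survives a round trip through it unchanged. A host-only sketch of the same round trip, using a stand-in typedef purely for illustration (the real typedef lives in cuda.h):

```c
#include <stdint.h>

/* Stand-in for the cuda.h CUdeviceptr typedef, for illustration only:
 * an unsigned integer wide enough to hold a pointer. */
typedef uintptr_t deviceptr_t;

/* Cast an address to the integer handle and back, mirroring the
 * driver/runtime interoperability snippet above. */
static float* roundtrip(float* p)
{
    deviceptr_t devPtr = (deviceptr_t)p; /* devPtr = (CUdeviceptr)d_data; */
    return (float*)devPtr;               /* d_data = (float*)devPtr;      */
}
```

Because both APIs ultimately traffic in the same addresses, a buffer allocated with cuMemAlloc() can be handed to a runtime-API library and vice versa, which is what makes mixing CUFFT or CUBLAS with driver API code possible.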
In particular, this means that applications written using the driver API can invoke libraries written using the runtime API (such as CUFFT, CUBLAS, …). All functions from the device and version management sections of the reference manual can be used interchangeably.
3.5
Versioning and Compatibility
There are two version numbers that developers should care about when developing a CUDA application: the compute capability that describes the general specifications and features of the compute device (see Section 2.5) and the version of the CUDA driver API that describes the features supported by the driver API and runtime.
The version of the driver API is defined in the driver header file as CUDA_VERSION. It allows developers to check whether their application requires a newer driver than the one currently installed. This is important, because the driver API is backward compatible, meaning that applications, plug-ins, and libraries (including the C runtime) compiled against a particular version of the driver API will continue to work on subsequent driver releases as illustrated in Figure 3-4. The driver API is not forward compatible, which means that applications, plug-ins, and libraries (including the C runtime) compiled against a particular version of the driver API will not work on previous versions of the driver.
It is important to note that mixing and matching versions is not supported; specifically:
- All applications, plug-ins, and libraries on a system must use the same version of the CUDA driver API, since only one version of the CUDA driver can be installed on a system.
- All plug-ins and libraries used by an application must use the same version of the runtime.
- All plug-ins and libraries used by an application must use the same version of any libraries that use the runtime (such as CUFFT, CUBLAS, ...).
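CUDA_VERSION packs the driver API version into a single integer (3020 for the 3.2 driver API this guide describes), which makes a compile-time minimum-version guard straightforward. A sketch, using a stand-in macro so it can be shown outside of cuda.h:

```c
/* Stand-in for CUDA_VERSION from cuda.h; 3020 corresponds to the 3.2
 * driver API (the encoding is assumed to be major*1000 + minor*10). */
#define MY_CUDA_VERSION 3020

/* Compile-time guard: refuse to build against too old a driver API. */
#if MY_CUDA_VERSION < 3000
#error "this application requires the CUDA 3.0 driver API or newer"
#endif

/* Decode the packed version number back into its components. */
static int version_major(int v) { return v / 1000; }
static int version_minor(int v) { return (v % 1000) / 10; }
```

With the real CUDA_VERSION in place of the stand-in, such a guard fails the build early instead of producing a link- or run-time failure against an older driver.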
[Figure 3-4 shows apps, libs, and plug-ins compiled against the 1.0 driver API remaining compatible with the 1.1 and 2.0 drivers, while components compiled against a newer driver API are incompatible with older drivers.]

Figure 3-4. The Driver API is Backward, but Not Forward Compatible
3.6
Compute Modes
On Tesla solutions running Linux, one can set any device in a system in one of the three following modes using NVIDIA's System Management Interface (nvidia-smi), which is a tool distributed as part of the Linux driver:
- Default compute mode: Multiple host threads can use the device (by calling cudaSetDevice() on this device, when using the runtime API, or by making current a context associated to the device, when using the driver API) at the same time.
- Exclusive compute mode: Only one host thread can use the device at any given time.
- Prohibited compute mode: No host thread can use the device.
This means, in particular, that a host thread using the runtime API without explicitly calling cudaSetDevice() might be associated with a device other than device 0 if device 0 turns out to be in prohibited compute mode or in exclusive compute mode and used by another host thread. cudaSetValidDevices() can be used to set a device from a prioritized list of devices.
Applications may query the compute mode of a device by calling cudaGetDeviceProperties() and checking the computeMode property or checking the CU_DEVICE_COMPUTE_MODE attribute using cuDeviceGetAttribute().
3.7
Mode Switches
GPUs dedicate some DRAM memory to the so-called primary surface, which is used to refresh the display device whose output is viewed by the user. When users initiate
a mode switch of the display by changing the resolution or bit depth of the display (using NVIDIA control panel or the Display control panel on Windows), the amount of memory needed for the primary surface changes. For example, if the user changes the display resolution from 1280x1024x32-bit to 1600x1200x32-bit, the system must dedicate 7.68 MB to the primary surface rather than 5.24 MB. (Full-screen graphics applications running with anti-aliasing enabled may require much more display memory for the primary surface.) On Windows, other events that may initiate display mode switches include launching a full-screen DirectX application, hitting Alt+Tab to task switch away from a full-screen DirectX application, or hitting Ctrl+Alt+Del to lock the computer.
If a mode switch increases the amount of memory needed for the primary surface, the system may have to cannibalize memory allocations dedicated to CUDA applications. Therefore, a mode switch causes any call to the CUDA runtime to fail and return an invalid context error.
Chapter 4. Hardware Implementation
The CUDA architecture is built around a scalable array of multithreaded Streaming Multiprocessors (SMs). When a CUDA program on the host CPU invokes a kernel grid, the blocks of the grid are enumerated and distributed to multiprocessors with available execution capacity. The threads of a thread block execute concurrently on one multiprocessor, and multiple thread blocks can execute concurrently on one multiprocessor. As thread blocks terminate, new blocks are launched on the vacated multiprocessors.
A multiprocessor is designed to execute hundreds of threads concurrently. To manage such a large number of threads, it employs a unique architecture called SIMT (Single-Instruction, Multiple-Thread) that is described in Section 4.1. To maximize utilization of its functional units, it leverages thread-level parallelism by using hardware multithreading as detailed in Section 4.2, more so than instruction-level parallelism within a single thread (instructions are pipelined, but unlike CPU cores they are executed in order and there is no branch prediction and no speculative execution).
Sections 4.1 and 4.2 describe the architecture features of the streaming multiprocessor that are common to all devices. Sections G.3.1 and G.4.1 provide the specifics for devices of compute capabilities 1.x and 2.x, respectively.
4.1
SIMT Architecture
The multiprocessor creates, manages, schedules, and executes threads in groups of 32 parallel threads called warps. Individual threads composing a warp start together at the same program address, but they have their own instruction address counter and register state and are therefore free to branch and execute independently. The term warp originates from weaving, the first parallel thread technology. A half-warp is either the first or second half of a warp. A quarter-warp is either the first, second, third, or fourth quarter of a warp.
When a multiprocessor is given one or more thread blocks to execute, it partitions them into warps that get scheduled by a warp scheduler for execution. The way a block is partitioned into warps is always the same; each warp contains threads of consecutive, increasing thread IDs with the first warp containing thread 0. Section 2.2 describes how thread IDs relate to thread indices in the block.
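Since the partitioning of a block into warps is deterministic and consecutive, a thread's warp and its position (lane) within that warp follow directly from its linear thread ID. A minimal sketch of that mapping (the helper names are illustrative):

```c
#define WARP_SIZE 32 /* warp size on the devices covered by this guide */

/* Index of the warp containing the thread with the given linear ID:
 * threads 0..31 form warp 0, threads 32..63 form warp 1, and so on. */
static int warp_index(int threadId) { return threadId / WARP_SIZE; }

/* Position (lane) of the thread within its warp. */
static int lane_index(int threadId) { return threadId % WARP_SIZE; }
```

This mapping is what makes divergence a per-warp concern: a branch that splits threads 0..15 from threads 16..31 serializes warp 0, while a branch that splits whole warps (e.g. threads 0..31 from 32..63) costs nothing extra.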
A warp executes one common instruction at a time, so full efficiency is realized when all 32 threads of a warp agree on their execution path. If threads of a warp diverge via a data-dependent conditional branch, the warp serially executes each branch path taken, disabling threads that are not on that path, and when all paths complete, the threads converge back to the same execution path. Branch divergence occurs only within a warp; different warps execute independently regardless of whether they are executing common or disjoint code paths.
The SIMT architecture is akin to SIMD (Single Instruction, Multiple Data) vector organizations in that a single instruction controls multiple processing elements. A key difference is that SIMD vector organizations expose the SIMD width to the software, whereas SIMT instructions specify the execution and branching behavior of a single thread. In contrast with SIMD vector machines, SIMT enables programmers to write thread-level parallel code for independent, scalar threads, as well as data-parallel code for coordinated threads. For the purposes of correctness, the programmer can essentially ignore the SIMT behavior; however, substantial performance improvements can be realized by taking care that the code seldom requires threads in a warp to diverge. In practice, this is analogous to the role of cache lines in traditional code: cache line size can be safely ignored when designing for correctness but must be considered in the code structure when designing for peak performance. Vector architectures, on the other hand, require the software to coalesce loads into vectors and manage divergence manually.
If a non-atomic instruction executed by a warp writes to the same location in global or shared memory for more than one of the threads of the warp, the number of serialized writes that occur to that location varies depending on the compute capability of the device (see Sections G.3.2, G.3.3, G.4.2, and G.4.3) and which thread performs the final write is undefined. If an atomic instruction (see Section B.11) executed by a warp reads, modifies, and writes to the same location in global memory for more than one of the threads of the warp, each read, modify, write to that location occurs and they are all serialized, but the order in which they occur is undefined.
4.2 Hardware Multithreading

The execution context (program counters, registers, etc.) for each warp processed by a multiprocessor is maintained on-chip during the entire lifetime of the warp. Switching from one execution context to another therefore has no cost, and at every instruction issue time, a warp scheduler selects a warp that has threads ready to execute its next instruction (active threads) and issues the instruction to those threads.

In particular, each multiprocessor has a set of 32-bit registers that are partitioned among the warps, and a parallel data cache or shared memory that is partitioned among the thread blocks.

The number of blocks and warps that can reside and be processed together on the multiprocessor for a given kernel depends on the amount of registers and shared memory used by the kernel and the amount of registers and shared memory available on the multiprocessor. There are also a maximum number of resident blocks and a maximum number of resident warps per multiprocessor. These limits as well as the amount of registers and shared memory available on the multiprocessor are a function of the compute capability of the device and are given in Appendix G. If there are not enough registers or shared memory available per multiprocessor to process at least one block, the kernel will fail to launch.

The total number of warps Wblock in a block is as follows:

    Wblock = ceil(T / Wsize, 1)

where T is the number of threads per block, Wsize is the warp size, which is equal to 32, and ceil(x, y) is equal to x rounded up to the nearest multiple of y.

The total number of registers Rblock allocated for a block is as follows:
For devices of compute capability 1.x:

    Rblock = ceil(ceil(Wblock, GW) * Wsize * Rk, GT)

For devices of compute capability 2.x:

    Rblock = ceil(Rk * Wsize, GT) * Wblock

where GW is the warp allocation granularity, equal to 2 (compute capability 1.x only), Rk is the number of registers used by the kernel, and GT is the thread allocation granularity, equal to 256 for devices of compute capability 1.0 and 1.1, 512 for devices of compute capability 1.2 and 1.3, and 64 for devices of compute capability 2.x.

The total amount of shared memory Sblock in bytes allocated for a block is as follows:
    Sblock = ceil(Sk, GS)

where Sk is the amount of shared memory used by the kernel in bytes, and GS is the shared memory allocation granularity, which is equal to 512 for devices of compute capability 1.x and 128 for devices of compute capability 2.x.
4.3 Multiple Devices

In a system with multiple GPUs, all CUDA-enabled GPUs are accessible via the CUDA driver and runtime as separate devices. There are however special considerations as described below when the system is in SLI mode.

First, an allocation in one CUDA device on one GPU will consume memory on other GPUs that are part of the SLI configuration of the Direct3D device. Because of this, allocations may fail earlier than otherwise expected.

Second, applications have to create multiple CUDA contexts, one for each GPU in the SLI configuration, and deal with the fact that a different GPU is used for rendering by the Direct3D device at every frame. The application can use the cuD3D[9|10|11]GetDevices() set of calls to identify the CUDA device handle(s) for the GPU(s) that are performing the rendering in the current and next frame. Given this information the application will typically map Direct3D resources to the CUDA context corresponding to the CUDA device returned by cuD3D[9|10|11]GetDevices() when the deviceList parameter is set to CU_D3D10_DEVICE_LIST_CURRENT_FRAME. See Sections 3.2.7.2 and 3.3.10.2 for details on how to use CUDA-Direct3D interoperability.
Chapter 5. Performance Guidelines
5.1 Overall Performance Optimization Strategies

Performance optimization revolves around three basic strategies:

- Maximize parallel execution to achieve maximum utilization;
- Optimize memory usage to achieve maximum memory throughput;
- Optimize instruction usage to achieve maximum instruction throughput.

Which strategies will yield the best performance gain for a particular portion of an application depends on the performance limiters for that portion; optimizing instruction usage of a kernel that is mostly limited by memory accesses will not yield any significant performance gain, for example. Optimization efforts should therefore be constantly directed by measuring and monitoring the performance limiters, for example using the CUDA profiler. Also, comparing the floating-point operation throughput or memory throughput – whichever makes more sense – of a particular kernel to the corresponding peak theoretical throughput of the device indicates how much room for improvement there is for the kernel.
5.2 Maximize Utilization

To maximize utilization the application should be structured in a way that it exposes as much parallelism as possible and efficiently maps this parallelism to the various components of the system to keep them busy most of the time.
5.2.1 Application Level

At a high level, the application should maximize parallel execution between the host, the devices, and the bus connecting the host to the devices, by using asynchronous function calls and streams as described in Section 3.2.6. It should assign to each processor the type of work it does best: serial workloads to the host; parallel workloads to the devices.

For the parallel workloads, at points in the algorithm where parallelism is broken because some threads need to synchronize in order to share data with each other, there are two cases: either these threads belong to the same block, in which case they should use __syncthreads() and share data through shared memory within the same kernel invocation, or they belong to different blocks, in which case they must share data through global memory using two separate kernel invocations, one for writing to and one for reading from global memory. The second case is much less optimal since it adds the overhead of extra kernel invocations and global memory traffic. Its occurrence should therefore be minimized by mapping the algorithm to the CUDA programming model in such a way that the computations that require inter-thread communication are performed within a single thread block as much as possible.
5.2.2 Device Level

At a lower level, the application should maximize parallel execution between the multiprocessors of a device. For devices of compute capability 1.x, only one kernel can execute on a device at one time, so the kernel should be launched with at least as many thread blocks as there are multiprocessors in the device. For devices of compute capability 2.x, multiple kernels can execute concurrently on a device, so maximum utilization can also be achieved by using streams to enable enough kernels to execute concurrently as described in Section 3.2.6.
5.2.3 Multiprocessor Level

At an even lower level, the application should maximize parallel execution between the various functional units within a multiprocessor. As described in Section 4.2, a GPU multiprocessor relies on thread-level parallelism to maximize utilization of its functional units. Utilization is therefore directly linked to the number of resident warps. At every instruction issue time, a warp scheduler selects a warp that is ready to execute its next instruction, if any, and issues the instruction to the active threads of the warp. The number of clock cycles it takes for a warp to be ready to execute its next instruction is called the latency, and full utilization is achieved when all warp schedulers always have some instruction to issue for some warp at every clock cycle during that latency period, or in other words, when latency is completely “hidden”. The number of instructions required to hide a latency of L clock cycles depends on the respective throughputs of these instructions (see Section 5.4.1 for the throughputs of various arithmetic instructions); assuming maximum throughput for all instructions, it is:

- L/4 (rounded up to nearest integer) for devices of compute capability 1.x, since a multiprocessor issues one instruction per warp over 4 clock cycles, as mentioned in Section G.3.1,
- L (rounded up to nearest integer) for devices of compute capability 2.0, since a multiprocessor issues one instruction per warp over 2 clock cycles for 2 warps at a time, as mentioned in Section G.4.1,
- 2L (rounded up to nearest integer) for devices of compute capability 2.1, since a multiprocessor issues a pair of instructions per warp over 2 clock cycles for 2 warps at a time, as mentioned in Section G.4.1.
For devices of compute capability 2.0, the two instructions issued every other cycle are for two different warps. For devices of compute capability 2.1, the four instructions issued every other cycle are two pairs for two different warps, each pair being for the same warp.

The most common reason a warp is not ready to execute its next instruction is that the instruction's input operands are not yet available. If all input operands are registers, latency is caused by register dependencies, i.e. some of the input operands are written by some previous instruction(s) whose execution has not completed yet. In the case of a back-to-back register dependency (i.e. some input operand is written by the previous instruction), the latency is equal to the execution time of the previous instruction and the warp scheduler must schedule instructions for different warps during that time. Execution time varies depending on the instruction, but it is typically about 22 clock cycles, which translates to 6 warps for devices of compute capability 1.x and 22 warps for devices of compute capability 2.x.

If some input operand resides in off-chip memory, the latency is much higher: 400 to 800 clock cycles. The number of warps required to keep the warp schedulers busy during such high latency periods depends on the kernel code; in general, more warps are required if the ratio of the number of instructions with no off-chip memory operands (i.e. arithmetic instructions most of the time) to the number of instructions with off-chip memory operands is low (this ratio is commonly called the arithmetic intensity of the program). If this ratio is 15, for example, then to hide latencies of about 600 clock cycles, about 10 warps are required for devices of compute capability 1.x and about 40 for devices of compute capability 2.x.

Another reason a warp is not ready to execute its next instruction is that it is waiting at some memory fence (Section B.5) or synchronization point (Section B.6).
A synchronization point can force the multiprocessor to idle as more and more warps wait for other warps in the same block to complete execution of instructions prior to the synchronization point. Having multiple resident blocks per multiprocessor can help reduce idling in this case, as warps from different blocks do not need to wait for each other at synchronization points.

The number of blocks and warps residing on each multiprocessor for a given kernel call depends on the execution configuration of the call (Section B.16), the memory resources of the multiprocessor, and the resource requirements of the kernel as described in Section 4.2. To assist programmers in choosing thread block size based on register and shared memory requirements, the CUDA Software Development Kit provides a spreadsheet, called the CUDA Occupancy Calculator, where occupancy is defined as the ratio of the number of resident warps to the maximum number of resident warps (given in Appendix G for various compute capabilities).

Register, local, shared, and constant memory usages are reported by the compiler when compiling with the --ptxas-options=-v option. The total amount of shared memory required for a block is equal to the sum of the amount of statically allocated shared memory, the amount of dynamically allocated shared memory, and, for devices of compute capability 1.x, the amount of shared memory used to pass the kernel's arguments (see Section B.1.4).

The number of registers used by a kernel can have a significant impact on the number of resident warps. For example, for devices of compute capability 1.2, if a kernel uses 16 registers and each block has 512 threads and requires very little shared memory, then two blocks (i.e. 32 warps) can reside on the multiprocessor since they require 2x512x16 registers, which exactly matches the number of registers available on the multiprocessor. But as soon as the kernel uses one more register, only one block (i.e. 16 warps) can be resident since two blocks would require 2x512x17 registers, which are more registers than are available on the multiprocessor. Therefore, the compiler attempts to minimize register usage while keeping register spilling (see Section 5.3.2.2) and the number of instructions to a minimum. Register usage can be controlled using the -maxrregcount compiler option or launch bounds as described in Section B.17.

Each double variable (on devices that support native double precision, i.e. devices of compute capability 1.2 and higher) and each long long variable uses two registers. However, devices of compute capability 1.2 and higher have at least twice as many registers per multiprocessor as devices with lower compute capability.

The effect of execution configuration on performance for a given kernel call generally depends on the kernel code. Experimentation is therefore recommended. Applications can also parameterize execution configurations based on register file size and shared memory size, which depends on the compute capability of the device, as well as on the number of multiprocessors and memory bandwidth of the device, all of which can be queried using the runtime or driver API (see reference manual).

The number of threads per block should be chosen as a multiple of the warp size to avoid wasting computing resources with under-populated warps as much as possible.
5.3 Maximize Memory Throughput

The first step in maximizing overall memory throughput for the application is to minimize data transfers with low bandwidth. That means minimizing data transfers between the host and the device, as detailed in Section 5.3.1, since these have much lower bandwidth than data transfers between global memory and the device.

That also means minimizing data transfers between global memory and the device by maximizing use of on-chip memory: shared memory and caches (i.e. L1/L2 caches available on devices of compute capability 2.x, texture cache and constant cache available on all devices).

Shared memory is equivalent to a user-managed cache: the application explicitly allocates and accesses it. As illustrated in Section 3.2.2, a typical programming pattern is to stage data coming from device memory into shared memory; in other words, to have each thread of a block:

- Load data from device memory to shared memory,
- Synchronize with all the other threads of the block so that each thread can safely read shared memory locations that were populated by different threads,
- Process the data in shared memory,
- Synchronize again if necessary to make sure that shared memory has been updated with the results,
- Write the results back to device memory.

For some applications (e.g. for which global memory accesses are data-dependent), a traditional hardware-managed cache is more appropriate to exploit data locality. As mentioned in Section G.4.1, for devices of compute capability 2.x, the same on-chip memory is used for both L1 and shared memory, and how much of it is dedicated to L1 versus shared memory is configurable for each kernel call.
The throughput of memory accesses by a kernel can vary by an order of magnitude depending on access pattern for each type of memory. The next step in maximizing memory throughput is therefore to organize memory accesses as optimally as possible based on the optimal memory access patterns described in Sections 5.3.2.1, 5.3.2.3, 5.3.2.4, and 5.3.2.5. This optimization is especially important for global memory accesses as global memory bandwidth is low, so non-optimal global memory accesses have a higher impact on performance.
5.3.1 Data Transfer between Host and Device

Applications should strive to minimize data transfer between the host and the device. One way to accomplish this is to move more code from the host to the device, even if that means running kernels with low parallelism computations. Intermediate data structures may be created in device memory, operated on by the device, and destroyed without ever being mapped by the host or copied to host memory. Also, because of the overhead associated with each transfer, batching many small transfers into a single large transfer always performs better than making each transfer separately. On systems with a front-side bus, higher performance for data transfers between host and device is achieved by using page-locked host memory as described in Section 3.2.4.1.4.

In addition, when using mapped page-locked memory (Section 3.2.5.3), there is no need to allocate any device memory and explicitly copy data between device and host memory. Data transfers are implicitly performed each time the kernel accesses the mapped memory. For maximum performance, these memory accesses must be coalesced as with accesses to global memory (see Section 5.3.2.1). Assuming that they are and that the mapped memory is read or written only once, using mapped page-locked memory instead of explicit copies between device and host memory can be a win for performance.

On integrated systems where device memory and host memory are physically the same, any copy between host and device memory is superfluous and mapped page-locked memory should be used instead. Applications may query whether a device is integrated or not by calling cudaGetDeviceProperties() and checking the integrated property or checking the CU_DEVICE_ATTRIBUTE_INTEGRATED attribute using cuDeviceGetAttribute().
5.3.2 Device Memory Accesses

An instruction that accesses addressable memory (i.e. global, local, shared, constant, or texture memory) might need to be re-issued multiple times depending on the distribution of the memory addresses across the threads within the warp. How the distribution affects the instruction throughput this way is specific to each type of memory and described in the following sections. For example, for global memory, as a general rule, the more scattered the addresses are, the more reduced the throughput is.
5.3.2.1 Global Memory

Global memory resides in device memory and device memory is accessed via 32-, 64-, or 128-byte memory transactions. These memory transactions must be naturally aligned: only the 32-, 64-, or 128-byte segments of device memory that are aligned to their size (i.e. whose first address is a multiple of their size) can be read or written by memory transactions.

When a warp executes an instruction that accesses global memory, it coalesces the memory accesses of the threads within the warp into one or more of these memory transactions depending on the size of the word accessed by each thread and the distribution of the memory addresses across the threads. In general, the more transactions are necessary, the more unused words are transferred in addition to the words accessed by the threads, reducing the instruction throughput accordingly. For example, if a 32-byte memory transaction is generated for each thread's 4-byte access, throughput is divided by 8.

How many transactions are necessary and how throughput is ultimately affected varies with the compute capability of the device. For devices of compute capability 1.0 and 1.1, the requirements on the distribution of the addresses across the threads to get any coalescing at all are very strict. They are much more relaxed for devices of higher compute capabilities. For devices of compute capability 2.x, the memory transactions are cached, so data locality is exploited to reduce impact on throughput. Sections G.3.2 and G.4.2 give more details on how global memory accesses are handled for various compute capabilities.

To maximize global memory throughput, it is therefore important to maximize coalescing by:

- Following the most optimal access patterns based on Sections G.3.2 and G.4.2,
- Using data types that meet the size and alignment requirement detailed in Section 5.3.2.1.1,
- Padding data in some cases, for example, when accessing a two-dimensional array as described in Section 5.3.2.1.2.
5.3.2.1.1 Size and Alignment Requirement

Global memory instructions support reading or writing words of size equal to 1, 2, 4, 8, or 16 bytes. Any access (via a variable or a pointer) to data residing in global memory compiles to a single global memory instruction if and only if the size of the data type is 1, 2, 4, 8, or 16 bytes and the data is naturally aligned (i.e. its address is a multiple of that size). If this size and alignment requirement is not fulfilled, the access compiles to multiple instructions with interleaved access patterns that prevent these instructions from fully coalescing. It is therefore recommended to use types that meet this requirement for data that resides in global memory.
The alignment requirement is automatically fulfilled for the built-in types of Section B.3.1 like float2 or float4. For structures, the size and alignment requirements can be enforced by the compiler using the alignment specifiers __align__(8) or __align__(16), such as

    struct __align__(8) {
        float x;
        float y;
    };

or

    struct __align__(16) {
        float x;
        float y;
        float z;
    };
Any address of a variable residing in global memory or returned by one of the memory allocation routines from the driver or runtime API is always aligned to at least 256 bytes.

Reading non-naturally aligned 8-byte or 16-byte words produces incorrect results (off by a few words), so special care must be taken to maintain alignment of the starting address of any value or array of values of these types. A typical case where this might be easily overlooked is when using some custom global memory allocation scheme, whereby the allocations of multiple arrays (with multiple calls to cudaMalloc() or cuMemAlloc()) is replaced by the allocation of a single large block of memory partitioned into multiple arrays, in which case the starting address of each array is offset from the block's starting address.
5.3.2.1.2 Two-Dimensional Arrays

A common global memory access pattern is when each thread of index (tx,ty) uses the following address to access one element of a 2D array of width width, located at address BaseAddress of type type* (where type meets the requirement described in Section 5.3.2.1.1):

    BaseAddress + width * ty + tx
For these accesses to be fully coalesced, both the width of the thread block and the width of the array must be a multiple of the warp size (or only half the warp size for devices of compute capability 1.x). In particular, this means that an array whose width is not a multiple of this size will be accessed much more efficiently if it is actually allocated with a width rounded up to the closest multiple of this size and its rows padded accordingly. The cudaMallocPitch() and cuMemAllocPitch() functions and associated memory copy functions described in the reference manual enable programmers to write non-hardware-dependent code to allocate arrays that conform to these constraints.
5.3.2.2 Local Memory

Local memory accesses only occur for some automatic variables as mentioned in Section B.2.4. Automatic variables that the compiler is likely to place in local memory are:

- Arrays for which it cannot determine that they are indexed with constant quantities,
- Large structures or arrays that would consume too much register space,
- Any variable if the kernel uses more registers than available (this is also known as register spilling).

Inspection of the PTX assembly code (obtained by compiling with the -ptx or -keep option) will tell if a variable has been placed in local memory during the first compilation phases, as it will be declared using the .local mnemonic and accessed using the ld.local and st.local mnemonics. Even if it has not, subsequent compilation phases might still decide otherwise though if they find it consumes too much register space for the targeted architecture: inspection of the cubin object using cuobjdump will tell if this is the case. Also, the compiler reports total local memory usage per kernel (lmem) when compiling with the --ptxas-options=-v option. Note that some mathematical functions have implementation paths that might access local memory.
The local memory space resides in device memory, so local memory accesses have the same high latency and low bandwidth as global memory accesses and are subject to the same requirements for memory coalescing as described in Section 5.3.2.1. Local memory is however organized such that consecutive 32-bit words are accessed by consecutive thread IDs. Accesses are therefore fully coalesced as long as all threads in a warp access the same relative address (e.g. same index in an array variable, same member in a structure variable). On devices of compute capability 2.x, local memory accesses are always cached in L1 and L2 in the same way as global memory accesses (see Section G.4.2).
5.3.2.3 Shared Memory

Because it is on-chip, the shared memory space is much faster than the local and global memory spaces. In fact, for all threads of a warp, accessing shared memory is fast as long as there are no bank conflicts between the threads, as detailed below.

To achieve high bandwidth, shared memory is divided into equally-sized memory modules, called banks, which can be accessed simultaneously. Any memory read or write request made of n addresses that fall in n distinct memory banks can therefore be serviced simultaneously, yielding an overall bandwidth that is n times as high as the bandwidth of a single module. However, if two addresses of a memory request fall in the same memory bank, there is a bank conflict and the access has to be serialized. The hardware splits a memory request with bank conflicts into as many separate conflict-free requests as necessary, decreasing throughput by a factor equal to the number of separate memory requests. If the number of separate memory requests is n, the initial memory request is said to cause n-way bank conflicts.

To get maximum performance, it is therefore important to understand how memory addresses map to memory banks in order to schedule the memory requests so as to minimize bank conflicts. This is described in Sections G.3.3 and G.4.3 for devices of compute capability 1.x and 2.x, respectively.
5.3.2.4 Constant Memory

The constant memory space resides in device memory and is cached in the constant cache mentioned in Sections G.3.1 and G.4.1.

For devices of compute capability 1.x, a constant memory request for a warp is first split into two requests, one for each half-warp, that are issued independently. A request is then split into as many separate requests as there are different memory addresses in the initial request, decreasing throughput by a factor equal to the number of separate requests. The resulting requests are then serviced at the throughput of the constant cache in case of a cache hit, or at the throughput of device memory otherwise.
5.3.2.5 Texture and Surface Memory

The texture and surface memory spaces reside in device memory and are cached in texture cache, so a texture fetch or surface read costs one memory read from device memory only on a cache miss; otherwise it just costs one read from texture cache. The texture cache is optimized for 2D spatial locality, so threads of the same warp that read texture or surface addresses that are close together in 2D will achieve best performance. Also, it is designed for streaming fetches with a constant latency; a cache hit reduces DRAM bandwidth demand but not fetch latency.

Reading device memory through texture or surface fetching presents some benefits that can make it an advantageous alternative to reading device memory from global or constant memory:

- If the memory reads do not follow the access patterns that global or constant memory reads must respect to get good performance (see Sections 5.3.2.1 and 5.3.2.4), higher bandwidth can be achieved provided that there is locality in the texture fetches or surface reads (this is less likely for devices of compute capability 2.x given that global memory reads are cached on these devices);
- Addressing calculations are performed outside the kernel by dedicated units;
- Packed data may be broadcast to separate variables in a single operation;
- 8-bit and 16-bit integer input data may be optionally converted to 32-bit floating-point values in the range [0.0, 1.0] or [-1.0, 1.0] (see Section 3.2.4.1.1).
5.4 Maximize Instruction Throughput

To maximize instruction throughput the application should:

- Minimize the use of arithmetic instructions with low throughput; this includes trading precision for speed when it does not affect the end result, such as using intrinsic instead of regular functions (intrinsic functions are listed in Section C.2), single-precision instead of double-precision, or flushing denormalized numbers to zero;
- Minimize divergent warps caused by control flow instructions as detailed in Section 5.4.2;
- Reduce the number of instructions, for example, by optimizing out synchronization points whenever possible as described in Section 5.4.3 or by using restricted pointers as described in Section E.3.

In this section, throughputs are given in number of operations per clock cycle per multiprocessor. For a warp size of 32, one instruction results in 32 operations. Therefore, if T is the number of operations per clock cycle, the instruction throughput is one instruction every 32/T clock cycles.
All throughputs are for one multiprocessor. They must be multiplied by the number of multiprocessors in the device to get throughput for the whole device.
5.4.1 Arithmetic Instructions

Table 5-1 gives the throughputs of the arithmetic instructions that are natively supported in hardware for devices of various compute capabilities.
Table 5-1. Throughput of Native Arithmetic Instructions (operations per clock cycle per multiprocessor), listed as compute capability 1.x / 2.0 / 2.1:

- 32-bit floating-point add, multiply, multiply-add: 8 / 32 / 48
- 64-bit floating-point add, multiply, multiply-add: 1 / 16 / 4
- 32-bit integer add, logical operation: 8 / 32 / 48
- 32-bit integer shift, compare: 8 / 16 / 16
- 32-bit integer multiply, multiply-add, sum of absolute difference: multiple instructions / 16 / 16
- 24-bit integer multiply (__[u]mul24): 8 / multiple instructions / multiple instructions
- 32-bit floating-point reciprocal, reciprocal square root, base-2 logarithm (__log2f), base-2 exponential (exp2f), sine (__sinf), cosine (__cosf): 2 / 4 / 8
- Type conversions: 8 / 16 / 16
Other instructions and functions are implemented on top of the native instructions. The implementation may be different for devices of compute capability 1.x and devices of compute capability 2.x, and the number of native instructions after compilation may fluctuate with every compiler version. For complicated functions, there can be multiple code paths depending on input. cuobjdump can be used to inspect a particular implementation in a cubin object. The implementation of some functions is readily available in the CUDA header files (math_functions.h, device_functions.h, …). In general, code compiled with -ftz=true (denormalized numbers are flushed to zero) tends to have higher performance than code compiled with -ftz=false. Similarly, code compiled with -prec-div=false (less precise division) tends to have higher performance than code compiled with -prec-div=true, and
code compiled with -prec-sqrt=false (less precise square root) tends to have higher performance than code compiled with -prec-sqrt=true. The nvcc user manual describes these compilation flags in more details.
Single-Precision Floating-Point Addition and Multiplication Intrinsics __fadd_r[d,u], __fmul_r[d,u], and __fmaf_r[n,z,d,u] (see
Section C.2.1) compile to tens of instructions for devices of compute capability 1.x, but map to a single native instruction for devices of compute capability 2.x.
Single-Precision Floating-Point Division __fdividef(x, y) (see Section C.2.1) provides faster single-precision floating-
point division than the division operator.
Single-Precision Floating-Point Reciprocal Square Root
To preserve IEEE-754 semantics the compiler can optimize 1.0/sqrtf() into rsqrtf() only when both reciprocal and square root are approximate (i.e. with -prec-div=false and -prec-sqrt=false). It is therefore recommended to invoke rsqrtf() directly where desired.
Single-Precision Floating-Point Square Root Single-precision floating-point square root is implemented as a reciprocal square root followed by a reciprocal instead of a reciprocal square root followed by a multiplication so that it gives correct results for 0 and infinity. Therefore, its throughput is 1 operation per clock cycle for devices of compute capability 1.x and 2 operations per clock cycle for devices of compute capability 2.x.
Sine and Cosine
sinf(x), cosf(x), tanf(x), sincosf(x), and corresponding double-precision instructions are much more expensive and even more so if the argument x is large in magnitude. More precisely, the argument reduction code (see math_functions.h for implementation) comprises two code paths referred to as the fast path and the slow path, respectively. The fast path is used for arguments sufficiently small in magnitude and essentially consists of a few multiply-add operations. The slow path is used for arguments large in magnitude and consists of lengthy computations required to achieve correct results over the entire argument range. At present, the argument reduction code for the trigonometric functions selects the fast path for arguments whose magnitude is less than 48039.0f for the single-precision functions, and less than 2147483648.0 for the double-precision functions. As the slow path requires more registers than the fast path, an attempt has been made to reduce register pressure in the slow path by storing some intermediate variables in local memory, which may affect performance because of local memory high latency and bandwidth (see Section 5.3.2.2). At present, 28 bytes of local memory are used by single-precision functions, and 44 bytes are used by double-precision functions. However, the exact amount is subject to change.
Due to the lengthy computations and use of local memory in the slow path, the throughput of these trigonometric functions is lower by one order of magnitude when the slow path reduction is required as opposed to the fast path reduction.
Integer Arithmetic On devices of compute capability 1.x, 32-bit integer multiplication is implemented using multiple instructions as it is not natively supported. 24-bit integer multiplication is natively supported however via the __[u]mul24 intrinsic (see Section C.2.3). Using __[u]mul24 instead of the 32-bit multiplication operator whenever possible usually improves performance for instruction bound kernels. It can have the opposite effect however in cases where the use of __[u]mul24 inhibits compiler optimizations. On devices of compute capability 2.x, 32-bit integer multiplication is natively supported, but 24-bit integer multiplication is not. __[u]mul24 is therefore implemented using multiple instructions and should not be used. Integer division and modulo operation are costly: tens of instructions on devices of compute capability 1.x, below 20 instructions on devices of compute capability 2.x. They can be replaced with bitwise operations in some cases: If n is a power of 2, (i/n) is equivalent to (i>>log2(n)) and (i%n) is equivalent to (i&(n-1)); the compiler will perform these conversions if n is literal. __brev, __brevll, __popc, and __popcll (see Section C.2.3) compile to tens of instructions for devices of compute capability 1.x, but __brev and __popc map to a single instruction for devices of compute capability 2.x and __brevll and __popcll to just a few. __clz, __clzll, __ffs, and __ffsll (see Section C.2.3) compile to fewer
instructions for devices of compute capability 2.x than for devices of compute capability 1.x.
Type Conversion
Sometimes, the compiler must insert conversion instructions, introducing additional execution cycles. This is the case for:
- Functions operating on variables of type char or short whose operands generally need to be converted to int,
- Double-precision floating-point constants (i.e. those constants defined without any type suffix) used as input to single-precision floating-point computations (as mandated by C/C++ standards).
This last case can be avoided by using single-precision floating-point constants, defined with an f suffix such as 3.141592653589793f, 1.0f, 0.5f.
5.4.2
Control Flow Instructions
Any flow control instruction (if, switch, do, for, while) can significantly impact the effective instruction throughput by causing threads of the same warp to diverge (i.e. to follow different execution paths). If this happens, the different execution paths have to be serialized, increasing the total number of instructions executed for this warp. When all the different execution paths have completed, the threads converge back to the same execution path.
To obtain best performance in cases where the control flow depends on the thread ID, the controlling condition should be written so as to minimize the number of divergent warps. This is possible because the distribution of the warps across the block is deterministic as mentioned in Section 4.1. A trivial example is when the controlling condition only depends on (threadIdx / warpSize) where warpSize is the warp size. In this case, no warp diverges since the controlling condition is perfectly aligned with the warps.

Sometimes, the compiler may unroll loops or it may optimize out if or switch statements by using branch predication instead, as detailed below. In these cases, no warp can ever diverge. The programmer can also control loop unrolling using the #pragma unroll directive (see Section E.2).

When using branch predication none of the instructions whose execution depends on the controlling condition gets skipped. Instead, each of them is associated with a per-thread condition code or predicate that is set to true or false based on the controlling condition and although each of these instructions gets scheduled for execution, only the instructions with a true predicate are actually executed. Instructions with a false predicate do not write results, and also do not evaluate addresses or read operands.

The compiler replaces a branch instruction with predicated instructions only if the number of instructions controlled by the branch condition is less than or equal to a certain threshold: if the compiler determines that the condition is likely to produce many divergent warps, this threshold is 7, otherwise it is 4.
5.4.3
Synchronization Instruction Throughput for __syncthreads() is 8 operations per clock cycle for devices of compute capability 1.x and 16 operations per clock cycle for devices of compute capability 2.x. Note that __syncthreads() can impact performance by forcing the multiprocessor to idle as detailed in Section 5.2.3. Because a warp executes one common instruction at a time, threads within a warp are implicitly synchronized and this can sometimes be used to omit __syncthreads() for better performance. In the following code sample, for example, both calls to __syncthreads() are required to get the expected result (i.e. result[i] = 2 * myArray[i] for i > 0). Without synchronization, any of the two references to myArray[tid] could return either 2 or the value initially stored in myArray, depending on whether the memory read occurs before or after the memory write from myArray[tid + 1] = 2. // myArray is an array of integers located in global or shared // memory __global__ void MyKernel(int* result) { int tid = threadIdx.x; ... int ref1 = myArray[tid]; __syncthreads(); myArray[tid + 1] = 2; __syncthreads();
int ref2 = myArray[tid];
  result[tid] = ref1 * ref2;
  ...
}
However, in the following slightly modified code sample, threads are guaranteed to belong to the same warp, so that there is no need for any __syncthreads(). // myArray is an array of integers located in global or shared // memory __global__ void MyKernel(int* result) { int tid = threadIdx.x; ... if (tid < warpSize) { int ref1 = myArray[tid]; myArray[tid + 1] = 2; int ref2 = myArray[tid]; result[tid] = ref1 * ref2; } ... }
Simply removing the __syncthreads() is not enough however; myArray must also be declared as volatile as described in Section B.2.5.
Appendix A. CUDA-Enabled GPUs
Table A-1 lists all CUDA-enabled devices with their compute capability, number of multiprocessors, and number of CUDA cores. These, as well as the clock frequency and the total amount of device memory, can be queried using the runtime or driver API (see reference manual).
Table A-1. CUDA-Enabled Devices with Compute Capability, Number of Multiprocessors, and Number of CUDA Cores
(Device | Compute Capability | Multiprocessors | CUDA Cores)

GeForce GTX 460 | 2.1 | 7 | 336
GeForce GTX 470M | 2.1 | 6 | 288
GeForce GTS 450, GTX 460M | 2.1 | 4 | 192
GeForce GT 445M | 2.1 | 3 | 144
GeForce GT 435M, GT 425M, GT 420M | 2.1 | 2 | 96
GeForce GT 415M | 2.1 | 1 | 48
GeForce GTX 480 | 2.0 | 15 | 480
GeForce GTX 470 | 2.0 | 14 | 448
GeForce GTX 465, GTX 480M | 2.0 | 11 | 352
GeForce GTX 295 | 1.3 | 2x30 | 2x240
GeForce GTX 285, GTX 280, GTX 275 | 1.3 | 30 | 240
GeForce GTX 260 | 1.3 | 24 | 192
GeForce 9800 GX2 | 1.1 | 2x16 | 2x128
GeForce GTS 250, GTS 150, 9800 GTX, 9800 GTX+, 8800 GTS 512, GTX 285M, GTX 280M | 1.1 | 16 | 128
GeForce 8800 Ultra, 8800 GTX | 1.0 | 16 | 128
GeForce 9800 GT, 8800 GT, GTX 260M, 9800M GTX | 1.1 | 14 | 112
GeForce GT 240, GTS 360M, GTS 350M | 1.2 | 12 | 96
GeForce GT 130, 9600 GSO, 8800 GS, 8800M GTX, GTS 260M, GTS 250M, 9800M GT | 1.1 | 12 | 96
GeForce 8800 GTS | 1.0 | 12 | 96
GeForce GT 335M | 1.2 | 9 | 72
GeForce 9600 GT, 8800M GTS, 9800M GTS | 1.1 | 8 | 64
GeForce GT 220, GT 330M, GT 325M, GT 240M | 1.2 | 6 | 48
GeForce 9700M GT, GT 230M | 1.1 | 6 | 48
GeForce GT 120, 9500 GT, 8600 GTS, 8600 GT, 9700M GT, 9650M GS, 9600M GT, 9600M GS, 9500M GS, 8700M GT, 8600M GT, 8600M GS | 1.1 | 4 | 32
GeForce 210, 310M, 305M | 1.2 | 2 | 16
GeForce G100, 8500 GT, 8400 GS, 8400M GT, 9500M G, 9300M G, 8400M GS, 9400 mGPU, 9300 mGPU, 8300 mGPU, 8200 mGPU, 8100 mGPU, G210M, G110M | 1.1 | 2 | 16
GeForce 9300M GS, 9200M GS, 9100M G, 8400M G, G105M | 1.1 | 1 | 8
Tesla C2050 | 2.0 | 14 | 448
Tesla S1070 | 1.3 | 4x30 | 4x240
Tesla C1060 | 1.3 | 30 | 240
Tesla S870 | 1.0 | 4x16 | 4x128
Tesla D870 | 1.0 | 2x16 | 2x128
Tesla C870 | 1.0 | 16 | 128
Quadro 2000 | 2.1 | 4 | 192
Quadro 600 | 2.1 | 2 | 96
Quadro 6000 | 2.0 | 14 | 448
Quadro 5000 | 2.0 | 11 | 352
Quadro 5000M | 2.0 | 10 | 320
Quadro 4000 | 2.0 | 8 | 256
Quadro Plex 2200 D2 | 1.3 | 2x30 | 2x240
Quadro Plex 2100 D4 | 1.1 | 4x14 | 4x112
Quadro Plex 2100 Model S4 | 1.0 | 4x16 | 4x128
Quadro Plex 1000 Model IV | 1.0 | 2x16 | 2x128
Quadro FX 5800 | 1.3 | 30 | 240
Quadro FX 4800 | 1.3 | 24 | 192
Quadro FX 4700 X2 | 1.1 | 2x14 | 2x112
Quadro FX 3700M, FX 3800M | 1.1 | 16 | 128
Quadro FX 5600 | 1.0 | 16 | 128
Quadro FX 3700 | 1.1 | 14 | 112
Quadro FX 2800M | 1.1 | 12 | 96
Quadro FX 4600 | 1.0 | 12 | 96
Quadro FX 1800M | 1.2 | 9 | 72
Quadro FX 3600M | 1.1 | 8 | 64
Quadro FX 880M, NVS 5100M | 1.2 | 6 | 48
Quadro FX 2700M | 1.1 | 6 | 48
Quadro FX 1700, FX 570, NVS 320M, FX 1700M, FX 1600M, FX 770M, FX 570M | 1.1 | 4 | 32
Quadro FX 380 LP, FX 380M, NVS 3100M, NVS 2100M | 1.2 | 2 | 16
Quadro FX 370, NVS 290, NVS 160M, NVS 150M, NVS 140M, NVS 135M, FX 360M | 1.1 | 2 | 16
Quadro FX 370M, NVS 130M | 1.1 | 1 | 8
Appendix B. C Language Extensions
B.1
Function Type Qualifiers Function type qualifiers specify whether a function executes on the host or on the device and whether it is callable from the host or from the device.
B.1.1
__device__ The __device__ qualifier declares a function that is: Executed on the device Callable from the device only. In device code compiled for devices of compute capability 1.x, a __device__ function is always inlined by default. The __noinline__ function qualifier however can be used as a hint for the compiler not to inline the function if possible (see Section E.1).
B.1.2
__global__ The __global__ qualifier declares a function as being a kernel. Such a function is: Executed on the device, Callable from the host only. __global__ functions must have void return type.
Any call to a __global__ function must specify its execution configuration as described in Section B.16. A call to a __global__ function is asynchronous, meaning it returns before the device has completed its execution.
B.1.3
__host__ The __host__ qualifier declares a function that is:
Executed on the host, Callable from the host only. It is equivalent to declare a function with only the __host__ qualifier or to declare it without any of the __host__, __device__, or __global__ qualifiers; in either case the function is compiled for the host only.
The __global__ and __host__ qualifiers cannot be used together. The __device__ and __host__ qualifiers can be used together however, in which case the function is compiled for both the host and the device. The __CUDA_ARCH__ macro introduced in Section 3.1.4 can be used to differentiate code paths between host and device: __host__ __device__ void func() { #if __CUDA_ARCH__ == 100 // Device code path for compute capability 1.0 #elif __CUDA_ARCH__ == 200 // Device code path for compute capability 2.0 #elif !defined(__CUDA_ARCH__) // Host code path #endif }
B.1.4
Restrictions
B.1.4.1
Function Parameters __global__ function parameters are passed to the device:
via shared memory and are limited to 256 bytes on devices of compute capability 1.x, via constant memory and are limited to 4 KB on devices of compute capability 2.x.
B.1.4.2
Variadic Functions __device__ and __global__ functions cannot have a variable number of
arguments.
B.1.4.3
Static Variables __device__ and __global__ functions cannot declare static variables inside
their body.
B.1.4.4
Function Pointers Function pointers to __global__ functions are supported, but function pointers to __device__ functions are only supported in device code compiled for devices of compute capability 2.x. It is not allowed to take the address of a __device__ function in host code.
B.1.4.5
Recursion __global__ functions do not support recursion.
__device__ functions only support recursion in device code compiled for devices
of compute capability 2.x.
B.2
Variable Type Qualifiers Variable type qualifiers specify the memory location on the device of a variable.
B.2.1
__device__ The __device__ qualifier declares a variable that resides on the device. At most one of the other type qualifiers defined in the next three sections may be used together with __device__ to further specify which memory space the variable belongs to. If none of them is present, the variable: Resides in global memory space, Has the lifetime of an application, Is accessible from all the threads within the grid and from the host through the runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() / cudaMemcpyToSymbol() / cudaMemcpyFromSymbol() for the runtime API and cuModuleGetGlobal() for the driver API).
B.2.2
__constant__ The __constant__ qualifier, optionally used together with __device__, declares a variable that: Resides in constant memory space, Has the lifetime of an application, Is accessible from all the threads within the grid and from the host through the runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() / cudaMemcpyToSymbol() / cudaMemcpyFromSymbol() for the runtime API and cuModuleGetGlobal() for the driver API).
B.2.3
__shared__ The __shared__ qualifier, optionally used together with __device__, declares a variable that: Resides in the shared memory space of a thread block, Has the lifetime of the block, Is only accessible from all the threads within the block. When declaring a variable in shared memory as an external array such as
extern __shared__ float shared[];
the size of the array is determined at launch time (see Section B.16). All variables declared in this fashion, start at the same address in memory, so that the layout of
the variables in the array must be explicitly managed through offsets. For example, if one wants the equivalent of short array0[128]; float array1[64]; int array2[256];
in dynamically allocated shared memory, one could declare and initialize the arrays the following way: extern __shared__ float array[]; __device__ void func() // __device__ or __global__ function { short* array0 = (short*)array; float* array1 = (float*)&array0[128]; int* array2 = (int*)&array1[64]; }
Note that pointers need to be aligned to the type they point to, so the following code, for example, does not work since array1 is not aligned to 4 bytes. extern __shared__ float array[]; __device__ void func() // __device__ or __global__ function { short* array0 = (short*)array; float* array1 = (float*)&array0[127]; }
Alignment requirements for the built-in vector types are listed in Table B-1.
B.2.4
Restrictions The __device__, __shared__ and __constant__ qualifiers are not allowed on struct and union members, on formal parameters and on local variables within a function that executes on the host.
B.2.4.1
Storage and Scope __shared__ and __constant__ variables have implied static storage. __device__, __shared__ and __constant__ variables cannot be defined as external using the extern keyword. The only exception is for dynamically allocated __shared__ variables as described in Section B.2.3. __device__ and __constant__ variables are only allowed at file scope.
B.2.4.2
Assignment __constant__ variables cannot be assigned to from the device, only from the
host through host runtime functions (Sections 3.2.1 and 3.3.4). __shared__ variables cannot have an initialization as part of their declaration.
B.2.4.3
Automatic Variable An automatic variable declared in device code without any of the __device__, __shared__ and __constant__ qualifiers generally resides in a register. However in some cases the compiler might choose to place it in local memory, which can have adverse performance consequences as detailed in Section 5.3.2.2.
B.2.4.4
Pointers For devices of compute capability 1.x, pointers in code that is executed on the device are supported as long as the compiler is able to resolve whether they point to either the shared memory space or the global memory space, otherwise they are restricted to only point to memory allocated or declared in the global memory space. For devices of compute capability 2.x, pointers are supported without any restriction. Dereferencing a pointer either to global or shared memory in code that is executed on the host or to host memory in code that is executed on the device results in an undefined behavior, most often in a segmentation fault and application termination. The address obtained by taking the address of a __device__, __shared__ or __constant__ variable can only be used in device code. The address of a __device__ or __constant__ variable obtained through cudaGetSymbolAddress() as described in Section 3.3.4 can only be used in host code.
B.2.5
volatile Only after the execution of a __threadfence_block(), __threadfence(), or __syncthreads() (Sections B.5 and B.6) are prior writes to global or shared memory guaranteed to be visible by other threads. As long as this requirement is met, the compiler is free to optimize reads and writes to global or shared memory. For example, in the code sample below, the first reference to myArray[tid] compiles into a global or shared memory read instruction, but the second reference does not as the compiler simply reuses the result of the first read. // myArray is an array of non-zero integers // located in global or shared memory __global__ void MyKernel(int* result) { int tid = threadIdx.x; int ref1 = myArray[tid] * 1; myArray[tid + 1] = 2; int ref2 = myArray[tid] * 1; result[tid] = ref1 * ref2; }
Therefore, ref2 cannot possibly be equal to 2 in thread tid as a result of thread tid-1 overwriting myArray[tid] by 2. This behavior can be changed using the volatile keyword: If a variable located in global or shared memory is declared as volatile, the compiler assumes that its value can be changed at any time by another thread and therefore any reference to this variable compiles to an actual memory read instruction. Note that even if myArray is declared as volatile in the code sample above, there is no guarantee, in general, that ref2 will be equal to 2 in thread tid since thread tid might read myArray[tid] into ref2 before thread tid-1 overwrites its value by 2. Synchronization is required as mentioned in Section 5.4.3.
B.3
Built-in Vector Types
B.3.1
char1, uchar1, char2, uchar2, char3, uchar3, char4, uchar4, short1, ushort1, short2, ushort2, short3, ushort3, short4, ushort4, int1, uint1, int2, uint2, int3, uint3, int4, uint4, long1, ulong1, long2, ulong2, long3, ulong3, long4, ulong4, longlong1, ulonglong1, longlong2, ulonglong2, float1, float2, float3, float4, double1, double2 These are vector types derived from the basic integer and floating-point types. They are structures and the 1st, 2nd, 3rd, and 4th components are accessible through the fields x, y, z, and w, respectively. They all come with a constructor function of the form make_<type name>; for example, int2 make_int2(int x, int y);
which creates a vector of type int2 with value (x, y). In host code, the alignment requirement of a vector type is equal to the alignment requirement of its base type. This is not always the case in device code as detailed in Table B-1.
Table B-1. Alignment Requirements in Device Code
(Type | Alignment)

char1, uchar1 | 1
char2, uchar2 | 2
char3, uchar3 | 1
char4, uchar4 | 4
short1, ushort1 | 2
short2, ushort2 | 4
short3, ushort3 | 2
short4, ushort4 | 8
int1, uint1 | 4
int2, uint2 | 8
int3, uint3 | 4
int4, uint4 | 16
long1, ulong1 | 4 if sizeof(long) is equal to sizeof(int), 8 otherwise
long2, ulong2 | 8 if sizeof(long) is equal to sizeof(int), 16 otherwise
long3, ulong3 | 4 if sizeof(long) is equal to sizeof(int), 8 otherwise
long4, ulong4 | 16
longlong1, ulonglong1 | 8
longlong2, ulonglong2 | 16
float1 | 4
float2 | 8
float3 | 4
float4 | 16
double1 | 8
double2 | 16

B.3.2
dim3 This type is an integer vector type based on uint3 that is used to specify dimensions. When defining a variable of type dim3, any component left unspecified is initialized to 1.
B.4
Built-in Variables Built-in variables specify the grid and block dimensions and the block and thread indices. They are only valid within functions that are executed on the device.
B.4.1
gridDim This variable is of type dim3 (see Section B.3.2) and contains the dimensions of the grid.
B.4.2
blockIdx This variable is of type uint3 (see Section B.3.1) and contains the block index within the grid.
B.4.3
blockDim This variable is of type dim3 (see Section B.3.2) and contains the dimensions of the block.
B.4.4
threadIdx This variable is of type uint3 (see Section B.3.1) and contains the thread index within the block.
B.4.5
warpSize This variable is of type int and contains the warp size in threads (see Section 4.1 for the definition of a warp).
B.4.6
Restrictions It is not allowed to take the address of any of the built-in variables. It is not allowed to assign values to any of the built-in variables.
B.5
Memory Fence Functions void __threadfence_block();
waits until all global and shared memory accesses made by the calling thread prior to __threadfence_block() are visible to all threads in the thread block. void __threadfence();
waits until all global and shared memory accesses made by the calling thread prior to __threadfence() are visible to: All threads in the thread block for shared memory accesses, All threads in the device for global memory accesses.
void __threadfence_system();
waits until all global and shared memory accesses made by the calling thread prior to __threadfence_system() are visible to: All threads in the thread block for shared memory accesses, All threads in the device for global memory accesses, Host threads for page-locked host memory accesses (see Section 3.2.5.3). __threadfence_system() is only supported by devices of compute capability 2.x.
In general, when a thread issues a series of writes to memory in a particular order, other threads may see the effects of these memory writes in a different order. __threadfence_block(), __threadfence(), and __threadfence_system() can be used to enforce some ordering. One use case is when threads consume some data produced by other threads as illustrated by the following code sample of a kernel that computes the sum of an array of N numbers in one call. Each block first sums a subset of the array and stores the result in global memory. When all blocks are done, the last block done reads each of these partial sums from global memory and sums them to obtain the final result. In order to determine which block is finished last, each block atomically increments a counter to signal that it is done with computing and storing its partial sum (see Section B.11 about atomic functions). The last block is the one that receives the counter value equal to gridDim.x-1. If no fence is placed between storing the partial sum and incrementing the counter, the counter might increment before the partial sum is stored and therefore, might reach gridDim.x-1 and let
the last block start reading partial sums before they have been actually updated in memory. __device__ unsigned int count = 0; __shared__ bool isLastBlockDone; __global__ void sum(const float* array, unsigned int N, float* result) { // Each block sums a subset of the input array float partialSum = calculatePartialSum(array, N); if (threadIdx.x == 0) { // Thread 0 of each block stores the partial sum // to global memory result[blockIdx.x] = partialSum; // Thread 0 makes sure its result is visible to // all other threads __threadfence(); // Thread 0 of each block signals that it is done unsigned int value = atomicInc(&count, gridDim.x); // Thread 0 of each block determines if its block is // the last block to be done isLastBlockDone = (value == (gridDim.x - 1)); } // Synchronize to make sure that each thread reads // the correct value of isLastBlockDone __syncthreads(); if (isLastBlockDone) { // The last block sums the partial sums // stored in result[0 .. gridDim.x-1] float totalSum = calculateTotalSum(result); if (threadIdx.x == 0) { // Thread 0 of last block stores total sum // to global memory and resets count so that // next kernel call works properly result[0] = totalSum; count = 0; } } }
B.6
Synchronization Functions void __syncthreads();
waits until all threads in the thread block have reached this point and all global and shared memory accesses made by these threads prior to __syncthreads() are visible to all threads in the block. __syncthreads() is used to coordinate communication between the threads of
the same block. When some threads within a block access the same addresses in shared or global memory, there are potential read-after-write, write-after-read, or write-after-write hazards for some of these memory accesses. These data hazards can be avoided by synchronizing threads in-between these accesses. __syncthreads() is allowed in conditional code but only if the conditional
evaluates identically across the entire thread block, otherwise the code execution is likely to hang or produce unintended side effects. Devices of compute capability 2.x support three variations of __syncthreads() described below. int __syncthreads_count(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads of the block and returns the number of threads for which predicate evaluates to non-zero. int __syncthreads_and(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads of the block and returns non-zero if and only if predicate evaluates to non-zero for all of them. int __syncthreads_or(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads of the block and returns non-zero if and only if predicate evaluates to non-zero for any of them.
B.7
Mathematical Functions
Section C.1 contains a comprehensive list of the C/C++ standard library mathematical functions that are currently supported in device code, along with their respective error bounds. When executed in host code, a given function uses the C runtime implementation if available. For some of the functions of Section C.1, a less accurate, but faster version exists in the device runtime component; it has the same name prefixed with __ (such as __sinf(x)). These intrinsic functions are listed in Section C.2, along with their respective error bounds. The compiler has an option (-use_fast_math) that forces each function in Table B-2 to compile to its intrinsic counterpart. In addition to reducing the accuracy of the affected functions, it may also cause some differences in special case handling. A more robust approach is to selectively replace mathematical function calls by calls to intrinsic functions only where it is merited by the performance gains and where changed properties such as reduced accuracy and different special case handling can be tolerated.
Table B-2. Functions Affected by -use_fast_math

Operator/Function       Device Function
x/y                     __fdividef(x,y)
sinf(x)                 __sinf(x)
cosf(x)                 __cosf(x)
tanf(x)                 __tanf(x)
sincosf(x,sptr,cptr)    __sincosf(x,sptr,cptr)
logf(x)                 __logf(x)
log2f(x)                __log2f(x)
log10f(x)               __log10f(x)
expf(x)                 __expf(x)
exp10f(x)               __exp10f(x)
powf(x,y)               __powf(x,y)

CUDA C Programming Guide Version 3.2

B.8
Texture Functions For texture functions, a combination of the texture reference's immutable (i.e. compile-time) and mutable (i.e. runtime) attributes determine how the texture coordinates are interpreted, what processing occurs during the texture fetch, and the return value delivered by the texture fetch. Immutable attributes are described in Section 3.2.4.1.1. Mutable attributes are described in Section 3.2.4.1.2. Texture fetching is described in Appendix F.
B.8.1
tex1Dfetch()
template<class Type>
Type tex1Dfetch(
    texture<Type, 1, cudaReadModeElementType> texRef,
    int x);
float tex1Dfetch(
    texture<unsigned char, 1, cudaReadModeNormalizedFloat> texRef,
    int x);
float tex1Dfetch(
    texture<signed char, 1, cudaReadModeNormalizedFloat> texRef,
    int x);
float tex1Dfetch(
    texture<unsigned short, 1, cudaReadModeNormalizedFloat> texRef,
    int x);
float tex1Dfetch(
    texture<signed short, 1, cudaReadModeNormalizedFloat> texRef,
    int x);
fetch the region of linear memory bound to texture reference texRef using integer texture coordinate x. No texture filtering and addressing modes are supported. For integer types, these functions may optionally promote the integer to single-precision floating point.
Besides the functions shown above, 2- and 4-tuples are supported; for example:
float4 tex1Dfetch(
    texture<uchar4, 1, cudaReadModeNormalizedFloat> texRef,
    int x);
fetches the region of linear memory bound to texture reference texRef using texture coordinate x.
B.8.2
tex1D() template<class Type, enum cudaTextureReadMode readMode> Type tex1D(texture<Type, 1, readMode> texRef, float x);
fetches the CUDA array bound to texture reference texRef using texture coordinate x.
B.8.3
tex2D() template<class Type, enum cudaTextureReadMode readMode> Type tex2D(texture<Type, 2, readMode> texRef, float x, float y);
fetches the CUDA array or the region of linear memory bound to texture reference texRef using texture coordinates x and y.
B.8.4
tex3D() template<class Type, enum cudaTextureReadMode readMode> Type tex3D(texture<Type, 3, readMode> texRef, float x, float y, float z);
fetches the CUDA array bound to texture reference texRef using texture coordinates x, y, and z.
B.9
Surface Functions Surface functions are only supported by devices of compute capability 2.0 and higher. Surface reference declaration is described in Section 3.2.4.2.1 and surface binding in Section 3.2.4.2.2. In the sections below, boundaryMode specifies the boundary mode, that is how out-of-range surface coordinates are handled; it is equal to either cudaBoundaryModeClamp, in which case out-of-range coordinates are clamped to the valid range, or cudaBoundaryModeZero, in which case out-of-range reads return zero and out-of-range writes are ignored, or cudaBoundaryModeTrap, in which case out-of-range accesses cause the kernel execution to fail.
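The three boundary modes can be illustrated with a small host-side sketch (plain C++, not a CUDA API; the function names are ours) of how an out-of-range 1D coordinate against an array of size elements is treated:

```cpp
// Host-side sketch of the boundary modes for a 1D surface of 'size' elements.
int clamp_coord(int x, int size) {
    // cudaBoundaryModeClamp: out-of-range coordinates clamp to the valid range
    if (x < 0) return 0;
    if (x >= size) return size - 1;
    return x;
}

int read_zero_mode(const int* data, int x, int size) {
    // cudaBoundaryModeZero: out-of-range reads return zero
    if (x < 0 || x >= size) return 0;
    return data[x];
}
// cudaBoundaryModeTrap has no host analogue worth sketching: an out-of-range
// access simply causes the kernel execution to fail.
```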
B.9.1
surf1Dread() template<class Type> Type surf1Dread(surface<void, 1> surfRef, int x, boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to surface reference surfRef using coordinate x.
B.9.2
surf1Dwrite() template<class Type> void surf1Dwrite(Type data, surface<void, 1> surfRef, int x, boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to surface reference surfRef at coordinate x.
B.9.3
surf2Dread() template<class Type> Type surf2Dread(surface<void, 2> surfRef, int x, int y, boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to surface reference surfRef using coordinates x and y.
B.9.4
surf2Dwrite() template<class Type> void surf2Dwrite(Type data, surface<void, 2> surfRef, int x, int y, boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to surface reference surfRef at coordinate x and y.
B.10
Time Function clock_t clock();
when executed in device code, returns the value of a per-multiprocessor counter that is incremented every clock cycle. Sampling this counter at the beginning and at the end of a kernel, taking the difference of the two samples, and recording the result per thread provides a measure for each thread of the number of clock cycles taken by the device to completely execute the thread, but not of the number of clock cycles the device actually spent executing thread instructions. The former number is greater than the latter since threads are time sliced.
B.11
Atomic Functions An atomic function performs a read-modify-write atomic operation on one 32-bit or 64-bit word residing in global or shared memory. For example, atomicAdd() reads a 32-bit word at some address in global or shared memory, adds a number to it, and writes the result back to the same address. The operation is atomic in the sense that it is guaranteed to be performed without interference from other threads. In other words, no other thread can access this address until the operation is complete. Atomic functions can only be used in device functions and are only available for devices of compute capability 1.1 and above. Atomic functions operating on shared memory and atomic functions operating on 64-bit words are only available for devices of compute capability 1.2 and above. Atomic functions operating on 64-bit words in shared memory are only available for devices of compute capability 2.x and higher. Atomic functions operating on mapped page-locked memory (Section 3.2.5.3) are not atomic from the point of view of the host or other devices. Atomic operations only work with signed and unsigned integers with the exception of atomicAdd() for devices of compute capability 2.x and atomicExch() for all devices, that also work for single-precision floating-point numbers. Note however that any atomic operation can be implemented based on atomicCAS() (Compare And Swap). For example, atomicAdd() for double-precision floating-point numbers can be implemented as follows:

__device__ double atomicAdd(double* address, double val)
{
    double old = *address, assumed;
    do {
        assumed = old;
        old = __longlong_as_double(
            atomicCAS((unsigned long long int*)address,
                      __double_as_longlong(assumed),
                      __double_as_longlong(val + assumed)));
    } while (assumed != old);
    return old;
}
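The same retry pattern can be exercised on the host, with std::atomic standing in for CUDA's 64-bit atomicCAS (a sketch only, not device code; memcpy plays the role of __double_as_longlong()/__longlong_as_double()):

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>

// Host-side analogue of the atomicCAS loop above; compare_exchange_weak
// plays the role of atomicCAS, swapping in the new bits only if *address
// still holds the value the addition was based on.
double atomic_add_double(std::atomic<uint64_t>* address, double val) {
    auto to_bits = [](double d) { uint64_t b; std::memcpy(&b, &d, sizeof b); return b; };
    auto to_dbl  = [](uint64_t b) { double d; std::memcpy(&d, &b, sizeof d); return d; };

    uint64_t expected = address->load();
    // On failure compare_exchange_weak reloads 'expected', so the loop
    // retries with the freshly observed value -- the assumed/old dance above.
    while (!address->compare_exchange_weak(
               expected, to_bits(to_dbl(expected) + val))) {
    }
    return to_dbl(expected);  // like atomicAdd(), returns the old value
}
```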
B.11.1
Arithmetic Functions
B.11.1.1
atomicAdd() int atomicAdd(int* address, int val); unsigned int atomicAdd(unsigned int* address, unsigned int val); unsigned long long int atomicAdd(unsigned long long int* address, unsigned long long int val); float atomicAdd(float* address, float val);
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, computes (old + val), and stores the result back to memory at
the same address. These three operations are performed in one atomic transaction. The function returns old. The floating-point version of atomicAdd() is only supported by devices of compute capability 2.x.
B.11.1.2
atomicSub() int atomicSub(int* address, int val); unsigned int atomicSub(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (old - val), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.
B.11.1.3
atomicExch() int atomicExch(int* address, int val); unsigned int atomicExch(unsigned int* address, unsigned int val); unsigned long long int atomicExch(unsigned long long int* address, unsigned long long int val); float atomicExch(float* address, float val);
reads the 32-bit or 64-bit word old located at the address address in global or shared memory and stores val back to memory at the same address. These two operations are performed in one atomic transaction. The function returns old.
B.11.1.4
atomicMin() int atomicMin(int* address, int val); unsigned int atomicMin(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes the minimum of old and val, and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.
B.11.1.5
atomicMax() int atomicMax(int* address, int val); unsigned int atomicMax(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes the maximum of old and val, and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.
B.11.1.6
atomicInc() unsigned int atomicInc(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes ((old >= val) ? 0 : (old+1)), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.
B.11.1.7
atomicDec() unsigned int atomicDec(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (((old == 0) | (old > val)) ? val : (old-1)), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.
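The wrapping rules of atomicInc() and atomicDec() are easy to misread; written out as plain functions of the old value and the operand (host-side sketch, names ours):

```cpp
// Update rule applied by atomicInc(): counts 0, 1, ..., val, then wraps to 0.
unsigned int inc_rule(unsigned int old, unsigned int val) {
    return (old >= val) ? 0 : (old + 1);
}

// Update rule applied by atomicDec(): counts val, val-1, ..., 0, then wraps
// back to val (also resets to val if old exceeds val).
unsigned int dec_rule(unsigned int old, unsigned int val) {
    return ((old == 0) || (old > val)) ? val : (old - 1);
}
```

This makes the pair useful as a shared ring-buffer index that cycles through 0..val.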
B.11.1.8
atomicCAS() int atomicCAS(int* address, int compare, int val); unsigned int atomicCAS(unsigned int* address, unsigned int compare, unsigned int val); unsigned long long int atomicCAS(unsigned long long int* address, unsigned long long int compare, unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, computes (old == compare ? val : old), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old (Compare And Swap).
B.11.2
Bitwise Functions
B.11.2.1
atomicAnd() int atomicAnd(int* address, int val); unsigned int atomicAnd(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (old & val), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.
B.11.2.2
atomicOr() int atomicOr(int* address, int val); unsigned int atomicOr(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (old | val), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.
B.11.2.3
atomicXor() int atomicXor(int* address, int val); unsigned int atomicXor(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (old ^ val), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.
B.12
Warp Vote Functions Warp vote functions are only supported by devices of compute capability 1.2 and higher (see Section 4.1 for the definition of a warp).
int __all(int predicate);
evaluates predicate for all threads of the warp and returns non-zero if and only if predicate evaluates to non-zero for all of them.
int __any(int predicate);
evaluates predicate for all threads of the warp and returns non-zero if and only if predicate evaluates to non-zero for any of them.
unsigned int __ballot(int predicate);
evaluates predicate for all threads of the warp and returns an integer whose Nth bit is set if and only if predicate evaluates to non-zero for the Nth thread of the warp. This function is only supported by devices of compute capability 2.x.
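A host-side sketch of the three vote operations over a warp's predicate values (plain C++, names ours; preds[n] stands in for thread n's predicate):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

int warp_all_sketch(const std::vector<int>& preds) {
    for (int p : preds)
        if (p == 0) return 0;   // __all(): non-zero iff every predicate is non-zero
    return 1;
}

int warp_any_sketch(const std::vector<int>& preds) {
    for (int p : preds)
        if (p != 0) return 1;   // __any(): non-zero iff any predicate is non-zero
    return 0;
}

uint32_t warp_ballot_sketch(const std::vector<int>& preds) {
    uint32_t mask = 0;
    for (std::size_t n = 0; n < preds.size(); ++n)
        if (preds[n] != 0)
            mask |= (1u << n);  // __ballot(): bit N set iff thread N voted non-zero
    return mask;
}
```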
B.13
Profiler Counter Function Each multiprocessor has a set of sixteen hardware counters that an application can increment with a single instruction by calling the __prof_trigger() function. void __prof_trigger(int counter);
increments by one per warp the per-multiprocessor hardware counter of index counter. Counters 8 to 15 are reserved and should not be used by applications. The value of counters 0, 1, …, 7 for the first multiprocessor can be obtained via the CUDA profiler by listing prof_trigger_00, prof_trigger_01, …, prof_trigger_07, etc. in the profiler.conf file (see the profiler manual for more details). All counters are reset before each kernel call (note that when an application is run via a CUDA debugger or profiler (cuda-gdb, CUDA Visual Profiler, Parallel Nsight), all launches are synchronous).
B.14
Formatted Output Formatted output is only supported by devices of compute capability 2.x. int printf(const char *format[, arg, ...]);
prints formatted output from a kernel to a host-side output stream. The in-kernel printf() function behaves in a similar way to the standard C-library printf() function, and the user is referred to the host system's manual pages for a complete description of printf() behavior. In essence, the string passed in as format is output to a stream on the host, with substitutions made from the
argument list wherever a format specifier is encountered. Supported format specifiers are listed below. The printf() command is executed as any other device-side function: per-thread, and in the context of the calling thread. From a multi-threaded kernel, this means that a straightforward call to printf() will be executed by every thread, using that thread's data as specified. Multiple versions of the output string will then appear at the host stream, once for each thread which encountered the printf(). It is up to the programmer to limit the output to a single thread if only a single output string is desired (see Section B.14.4 for an illustrative example). Unlike the C-standard printf(), which returns the number of characters printed, CUDA's printf() returns the number of arguments parsed. If no arguments follow the format string, 0 is returned. If the format string is NULL, -1 is returned. If an internal error occurs, -2 is returned.
B.14.1
Format Specifiers As for standard printf(), format specifiers take the form:
%[flags][width][.precision][size]type
The following fields are supported (see widely-available documentation for a complete description of all behaviors):
Flags: '#' ' ' '0' '+' '-'
Width: '*' '0-9'
Precision: '0-9'
Size: 'h' 'l' 'll'
Type: '%cdiouxXpeEfgGaAs'
Note that CUDA's printf() will accept any combination of flag, width, precision, size and type, whether or not overall they form a valid format specifier. In other words, "%hd" will be accepted and printf will expect a double-precision variable in the corresponding location in the argument list.
B.14.2
Limitations Final formatting of the printf() output takes place on the host system. This means that the format string must be understood by the host-system's compiler and C library. Every effort has been made to ensure that the format specifiers supported by CUDA's printf function form a universal subset from the most common host compilers, but exact behavior will be host-O/S-dependent. As described in Section B.14.1, printf() will accept all combinations of valid flags and types. This is because it cannot determine what will and will not be valid on the host system where the final output is formatted. The effect of this is that output may be undefined if the program emits a format string which contains invalid combinations. The output buffer for printf() is set to a fixed size before kernel launch (see below). This buffer is circular, and is flushed at any host-side synchronisation point
and when the context is explicitly destroyed; if more output is produced during kernel execution than can fit in the buffer, older output is overwritten. The printf() command can accept at most 32 arguments in addition to the format string. Additional arguments beyond this will be ignored, and the format specifier output as-is. Owing to the differing size of the long type on 64-bit platforms (four bytes on 64-bit Windows, eight bytes on other 64-bit platforms), a kernel which is compiled on a non-Windows 64-bit machine but then run on a win64 machine will see corrupted output for all format strings which include "%ld". It is recommended that the compilation platform matches the execution platform to ensure safety. The output buffer for printf() is not flushed automatically to the output stream, but instead is flushed only when one of these actions is performed:
Kernel launch via <<<>>> or cuLaunch(),
Synchronization via cudaThreadSynchronize(), cuCtxSynchronize(), cudaStreamSynchronize(), or cuStreamSynchronize(),
Module loading/unloading via cuModuleLoad() or cuModuleUnload(),
Context destruction via cudaThreadExit() or cuCtxDestroy().
Note that the buffer is not flushed automatically when the program exits. The user must call cudaThreadExit() or cuCtxDestroy() explicitly, as shown in the examples below.
B.14.3
Associated Host-Side API The following API functions get and set the size of the buffer used to transfer the printf() arguments and internal metadata to the host (default is 1 megabyte):
Driver API: cuCtxGetLimit(size_t* size, CU_LIMIT_PRINTF_FIFO_SIZE) cuCtxSetLimit(CU_LIMIT_PRINTF_FIFO_SIZE, size_t size)
Runtime API: cudaThreadGetLimit(size_t* size,cudaLimitPrintfFifoSize) cudaThreadSetLimit(cudaLimitPrintfFifoSize, size_t size)
B.14.4
Examples The following code sample:

__global__ void helloCUDA(float f)
{
    printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

void main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaThreadExit();
}
will output:
Hello thread 0, f=1.2345
Hello thread 1, f=1.2345
Hello thread 2, f=1.2345
Hello thread 3, f=1.2345
Hello thread 4, f=1.2345
Notice how each thread encounters the printf() command, so there are as many lines of output as there were threads launched in the grid. As expected, global values (i.e. float f) are common between all threads, and local values (i.e. threadIdx.x) are distinct per-thread. The following code sample:

__global__ void helloCUDA(float f)
{
    if (threadIdx.x == 0)
        printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

void main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaThreadExit();
}
will output: Hello thread 0, f=1.2345
Self-evidently, the if() statement limits which threads will call printf, so that only a single line of output is seen.
B.15
Dynamic Global Memory Allocation
void* malloc(size_t size);
void free(void* ptr);
allocate and free memory dynamically from a fixed-size heap in global memory. The CUDA in-kernel malloc() function allocates at least size bytes from the device heap and returns a pointer to the allocated memory or NULL if insufficient memory exists to fulfill the request. The returned pointer is guaranteed to be aligned to a 16-byte boundary. The CUDA in-kernel free() function deallocates the memory pointed to by ptr, which must have been returned by a previous call to malloc(). If ptr is NULL, the call to free() is ignored. Repeated calls to free() with the same ptr have undefined behavior. The memory allocated by a given CUDA thread via malloc() remains allocated for the lifetime of the CUDA context, or until it is explicitly released by a call to free(). It can be used by any other CUDA threads even from subsequent kernel launches. Any CUDA thread may free memory allocated by another thread, but care should be taken to ensure that the same pointer is not freed more than once.
B.15.1
Heap Memory Allocation The device memory heap has a fixed size that must be specified before any program using malloc() or free() is loaded into the context. A default heap of eight megabytes is allocated if any program uses malloc() without explicitly specifying the heap size. The following API functions get and set the heap size:
Driver API: cuCtxGetLimit(size_t* size, CU_LIMIT_MALLOC_HEAP_SIZE) cuCtxSetLimit(CU_LIMIT_MALLOC_HEAP_SIZE, size_t size)
Runtime API: cudaThreadGetLimit(size_t* size, cudaLimitMallocHeapSize) cudaThreadSetLimit(cudaLimitMallocHeapSize, size_t size)
The heap size granted will be at least size bytes. cuCtxGetLimit() and cudaThreadGetLimit() return the currently requested heap size. The actual memory allocation for the heap occurs when a module is loaded into the context, either explicitly via the CUDA driver API (see Section 3.3.2), or implicitly via the CUDA runtime API (see Section 3.2). If the memory allocation fails, the module load will generate a CUDA_ERROR_SHARED_OBJECT_INIT_FAILED error. Heap size cannot be changed once a module load has occurred and it does not resize dynamically according to need. Memory reserved for the device heap is in addition to memory allocated through host-side CUDA API calls such as cudaMalloc().
B.15.2
Interoperability with Host Memory API Memory allocated via malloc() cannot be freed using the runtime or driver API (i.e. by calling any of the free memory functions from Sections 3.2.1 and 3.3.4). Similarly, memory allocated via the runtime or driver API (i.e. by calling any of the memory allocation functions from Sections 3.2.1 and 3.3.4) cannot be freed via free(). Memory allocated via malloc() can be copied using the runtime or driver API (i.e. by calling any of the copy memory functions from Sections 3.2.1 and 3.3.4).
B.15.3
Examples
B.15.3.1
Per Thread Allocation The following code sample:

__global__ void mallocTest()
{
    char* ptr = (char*)malloc(123);
    printf("Thread %d got pointer: %p\n", threadIdx.x, ptr);
    free(ptr);
}

void main()
{
    // Set a heap size of 128 megabytes. Note that this must
    // be done before any kernel is launched.
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
    mallocTest<<<1, 5>>>();
    cudaThreadSynchronize();
}
will output:
Thread 0 got pointer: 00057020
Thread 1 got pointer: 0005708c
Thread 2 got pointer: 000570f8
Thread 3 got pointer: 00057164
Thread 4 got pointer: 000571d0
Notice how each thread encounters the malloc() command and so receives its own allocation. (Exact pointer values will vary: these are illustrative.)
B.15.3.2
Per Thread Block Allocation

__global__ void mallocTest()
{
    __shared__ int* data;

    // The first thread in the block does the allocation
    // and then shares the pointer with all other threads
    // through shared memory, so that access can easily be
    // coalesced. 64 bytes per thread are allocated.
    if (threadIdx.x == 0)
        data = (int*)malloc(blockDim.x * 64);
    __syncthreads();

    // Check for failure
    if (data == NULL)
        return;

    // Threads index into the memory, ensuring coalescence
    int* ptr = data;
    for (int i = 0; i < 64; ++i)
        ptr[i * blockDim.x + threadIdx.x] = threadIdx.x;

    // Ensure all threads complete before freeing
    __syncthreads();

    // Only one thread may free the memory!
    if (threadIdx.x == 0)
        free(data);
}

void main()
{
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
    mallocTest<<<10, 128>>>();
    cudaThreadSynchronize();
}
B.15.3.3
Allocation Persisting Between Kernel Launches

#define NUM_BLOCKS 20
__device__ int* dataptr[NUM_BLOCKS]; // Per-block pointer

__global__ void allocmem()
{
    // Only the first thread in the block does the allocation
    // since we want only one allocation per block.
    if (threadIdx.x == 0)
        dataptr[blockIdx.x] = (int*)malloc(blockDim.x * 4);
    __syncthreads();

    // Check for failure
    if (dataptr[blockIdx.x] == NULL)
        return;

    // Zero the data with all threads in parallel
    dataptr[blockIdx.x][threadIdx.x] = 0;
}

// Simple example: store thread ID into each element
__global__ void usemem()
{
    int* ptr = dataptr[blockIdx.x];
    if (ptr != NULL)
        ptr[threadIdx.x] += threadIdx.x;
}

// Print the content of the buffer before freeing it
__global__ void freemem()
{
    int* ptr = dataptr[blockIdx.x];
    if (ptr != NULL)
        printf("Block %d, Thread %d: final value = %d\n",
               blockIdx.x, threadIdx.x, ptr[threadIdx.x]);

    // Only free from one thread!
    if (threadIdx.x == 0)
        free(ptr);
}

void main()
{
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);

    // Allocate memory
    allocmem<<< NUM_BLOCKS, 10 >>>();

    // Use memory
    usemem<<< NUM_BLOCKS, 10 >>>();
    usemem<<< NUM_BLOCKS, 10 >>>();
    usemem<<< NUM_BLOCKS, 10 >>>();

    // Free memory
    freemem<<< NUM_BLOCKS, 10 >>>();
    cudaThreadSynchronize();
}
B.16
Execution Configuration Any call to a __global__ function must specify the execution configuration for that call. The execution configuration defines the dimension of the grid and blocks that will be used to execute the function on the device, as well as the associated stream (see Section 3.3.9.1 for a description of streams). When using the driver API, the execution configuration is specified through a series of driver function calls as detailed in Section 3.3.3. When using the runtime API (Section 3.2), the execution configuration is specified by inserting an expression of the form <<< Dg, Db, Ns, S >>> between the function name and the parenthesized argument list, where:
Dg is of type dim3 (see Section B.3.2) and specifies the dimension and size of the grid, such that Dg.x * Dg.y equals the number of blocks being launched; Dg.z must be equal to 1;
Db is of type dim3 (see Section B.3.2) and specifies the dimension and size of each block, such that Db.x * Db.y * Db.z equals the number of threads per block;
Ns is of type size_t and specifies the number of bytes in shared memory that is dynamically allocated per block for this call in addition to the statically allocated memory; this dynamically allocated memory is used by any of the variables declared as an external array as mentioned in Section B.2.3; Ns is an optional argument which defaults to 0;
S is of type cudaStream_t and specifies the associated stream; S is an optional argument which defaults to 0.
As an example, a function declared as
__global__ void Func(float* parameter);
must be called like this: Func<<< Dg, Db, Ns >>>(parameter);
The arguments to the execution configuration are evaluated before the actual function arguments and, like the function arguments, are currently passed via shared memory to the device. The function call will fail if Dg or Db are greater than the maximum sizes allowed for the device as specified in Appendix G, or if Ns is greater than the maximum amount of shared memory available on the device, minus the amount of shared memory required for static allocation, function arguments (for devices of compute capability 1.x), and execution configuration.
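A detail the section leaves to the caller is how Dg is usually chosen: to cover N data elements with Db threads per block, the block count is the ceiling of N/Db. A small host-side helper (ours, not a CUDA API) makes the arithmetic explicit:

```cpp
// Number of blocks (Dg) needed so that Dg * threads_per_block covers n
// elements; plain ceiling division.
int blocks_for(int n, int threads_per_block) {
    return (n + threads_per_block - 1) / threads_per_block;
}
```

A launch such as MyKernel<<<blocks_for(N, 256), 256>>>(...) then starts enough blocks, with each thread guarding against the overhang (e.g. if (i < N)).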
B.17
Launch Bounds As discussed in detail in Section 5.2.3, the fewer registers a kernel uses, the more threads and thread blocks are likely to reside on a multiprocessor, which can improve performance. Therefore, the compiler uses heuristics to minimize register usage while keeping register spilling (see Section 5.3.2.2) and instruction count to a minimum. An application can optionally aid these heuristics by providing additional information to the compiler in the form of launch bounds that are specified using the __launch_bounds__() qualifier in the definition of a __global__ function:

__global__ void
__launch_bounds__(maxThreadsPerBlock, minBlocksPerMultiprocessor)
MyKernel(...)
{
    ...
}
maxThreadsPerBlock specifies the maximum number of threads per block with which the application will ever launch MyKernel(); it compiles to the .maxntid PTX directive;
minBlocksPerMultiprocessor is optional and specifies the desired
minimum number of resident blocks per multiprocessor; it compiles to the .minnctapersm PTX directive.
If launch bounds are specified, the compiler first derives from them the upper limit L on the number of registers the kernel should use to ensure that minBlocksPerMultiprocessor blocks (or a single block if minBlocksPerMultiprocessor is not specified) of maxThreadsPerBlock threads can reside on the multiprocessor (see Section 4.2 for the relationship between the number of registers used by a kernel and the number of registers allocated per block). The compiler then optimizes register usage in the following way:
If the initial register usage is higher than L, the compiler reduces it further until it becomes less than or equal to L, usually at the expense of more local memory usage and/or a higher number of instructions;
If the initial register usage is lower than L:
If maxThreadsPerBlock is specified and minBlocksPerMultiprocessor is not, the compiler uses maxThreadsPerBlock to determine the register usage thresholds for the transitions between n and n+1 resident blocks (i.e. when using one less register makes room for an additional resident block as in the example of Section 5.2.3) and then applies similar heuristics as when no launch bounds are specified;
If both minBlocksPerMultiprocessor and maxThreadsPerBlock are specified, the compiler may increase register usage as high as L to reduce the number of instructions and better hide single thread instruction latency.
A kernel will fail to launch if it is executed with more threads per block than its launch bound maxThreadsPerBlock.
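The derivation of L can be sketched numerically. This is a deliberate simplification (names and numbers are ours): the real compiler also accounts for register allocation granularity per Section 4.2, which this ignores.

```cpp
// Simplified sketch of the limit L described above: the most registers per
// thread that still lets 'min_blocks' blocks of 'max_threads' threads fit
// in a register file of 'regs_per_sm' registers.
int register_limit(int regs_per_sm, int max_threads, int min_blocks) {
    if (min_blocks <= 0)
        min_blocks = 1;  // minBlocksPerMultiprocessor unspecified => one block
    return regs_per_sm / (max_threads * min_blocks);
}
```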
Optimal launch bounds for a given kernel will usually differ across major architecture revisions. The sample code below shows how this is typically handled in device code using the __CUDA_ARCH__ macro introduced in Section 3.1.4.

#define THREADS_PER_BLOCK          256
#if __CUDA_ARCH__ >= 200
    #define MY_KERNEL_MAX_THREADS  (2 * THREADS_PER_BLOCK)
    #define MY_KERNEL_MIN_BLOCKS   3
#else
    #define MY_KERNEL_MAX_THREADS  THREADS_PER_BLOCK
    #define MY_KERNEL_MIN_BLOCKS   2
#endif
// Device code
__global__ void
__launch_bounds__(MY_KERNEL_MAX_THREADS, MY_KERNEL_MIN_BLOCKS)
MyKernel(...)
{
    ...
}
In the common case where MyKernel is invoked with the maximum number of threads per block (specified as the first parameter of __launch_bounds__()), it is tempting to use MY_KERNEL_MAX_THREADS as the number of threads per block in the execution configuration: // Host code MyKernel<<<blocksPerGrid, MY_KERNEL_MAX_THREADS>>>(...);
This will not work however since __CUDA_ARCH__ is undefined in host code as mentioned in Section 3.1.4, so MyKernel will launch with 256 threads per block even when __CUDA_ARCH__ is greater than or equal to 200. Instead the number of threads per block should be determined:
Either at compile time using a macro that does not depend on __CUDA_ARCH__, for example // Host code MyKernel<<<blocksPerGrid, THREADS_PER_BLOCK>>>(...);
Or at runtime based on the compute capability // Host code cudaGetDeviceProperties(&deviceProp, device); int threadsPerBlock = (deviceProp.major >= 2 ? 2 * THREADS_PER_BLOCK : THREADS_PER_BLOCK); MyKernel<<<blocksPerGrid, threadsPerBlock>>>(...);
Register usage is reported by the --ptxas-options=-v compiler option. The number of resident blocks can be derived from the occupancy reported by the CUDA profiler (see Section 5.2.3 for a definition of occupancy). Register usage can also be controlled for all __global__ functions in a file using the -maxrregcount compiler option. The value of -maxrregcount is ignored for functions with launch bounds.
CUDA C Programming Guide Version 3.2
Appendix C. Mathematical Functions
Functions from Section C.1 can be used in both host and device code whereas functions from Section C.2 can only be used in device code.

Note that floating-point functions are overloaded, so that in general, there are three prototypes for a given function <func-name>:
(1) double <func-name>(double), e.g. double log(double)
(2) float <func-name>(float), e.g. float log(float)
(3) float <func-name>f(float), e.g. float logf(float)

This means, in particular, that passing a float argument always results in a float result (variants (2) and (3) above).
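This overload scheme matches standard C++ <cmath>, so the behavior can be checked on the host with no CUDA toolchain. The helper name below is an assumption introduced for illustration:

```cpp
#include <cmath>
#include <type_traits>

// Returns true when the <cmath> overload set behaves as described above:
// a float argument selects the float-returning overload, and a double
// argument the double-returning one.
bool floatOverloadSelected() {
    return std::is_same<decltype(std::log(1.0f)), float>::value
        && std::is_same<decltype(std::log(1.0)), double>::value;
}
```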
C.1 Standard Functions

This section lists all the mathematical standard library functions supported in device code. It also specifies the error bounds of each function when executed on the device. These error bounds also apply when the function is executed on the host in the case where the host does not supply the function. They are generated from extensive but not exhaustive tests, so they are not guaranteed bounds.
C.1.1 Single-Precision Floating-Point Functions

Addition and multiplication are IEEE-compliant, so have a maximum error of 0.5 ulp. However, on the device, the compiler often combines them into a single multiply-add instruction (FMAD) and for devices of compute capability 1.x, FMAD truncates the intermediate result of the multiplication as mentioned in Section G.2. This combination can be avoided by using the __fadd_rn() and __fmul_rn() intrinsic functions (see Section C.2).

The recommended way to round a single-precision floating-point operand to an integer, with the result being a single-precision floating-point number, is rintf(), not roundf(). The reason is that roundf() maps to an 8-instruction sequence on the device, whereas rintf() maps to a single instruction. truncf(), ceilf(), and floorf() each map to a single instruction as well.
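The semantic difference between the two rounding functions can be seen with host <cmath> as well: under the default round-to-nearest-even mode, rint() rounds halfway cases to the even neighbor, while round() rounds them away from zero. A host-side illustration (wrapper names are ours):

```cpp
#include <cmath>

// Host-side illustration of the rintf/roundf difference: under the
// default round-to-nearest-even mode, rint(2.5f) gives 2.0f while
// round(2.5f) gives 3.0f (halfway cases away from zero).
float rintHalf(float x)  { return std::rint(x); }
float roundHalf(float x) { return std::round(x); }
```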
Table C-1. Mathematical Standard Library Functions with Maximum ULP Error

The maximum error is stated as the absolute value of the difference in ulps between a correctly rounded single-precision result and the result returned by the CUDA library function.

Function               Maximum ulp error
x+y                    0 (IEEE-754 round-to-nearest-even)
                       (except for devices of compute capability 1.x when
                       addition is merged into an FMAD)
x*y                    0 (IEEE-754 round-to-nearest-even)
                       (except for devices of compute capability 1.x when
                       multiplication is merged into an FMAD)
x/y                    0 for compute capability >= 2 when compiled with
                       -prec-div=true; 2 (full range) otherwise
1/x                    0 for compute capability >= 2 when compiled with
                       -prec-div=true; 1 (full range) otherwise
rsqrtf(x)              2 (full range)
1/sqrtf(x)             (Applies to 1/sqrtf(x) only when it is converted to
                       rsqrtf(x) by the compiler.)
sqrtf(x)               0 for compute capability >= 2 when compiled with
                       -prec-sqrt=true; 3 (full range) otherwise
cbrtf(x)               1 (full range)
rcbrtf(x)              2 (full range)
hypotf(x,y)            3 (full range)
expf(x)                2 (full range)
exp2f(x)               2 (full range)
exp10f(x)              2 (full range)
expm1f(x)              1 (full range)
logf(x)                1 (full range)
log2f(x)               3 (full range)
log10f(x)              3 (full range)
log1pf(x)              2 (full range)
sinf(x)                2 (full range)
cosf(x)                2 (full range)
tanf(x)                4 (full range)
sincosf(x,sptr,cptr)   2 (full range)
sinpif(x)              2 (full range)
asinf(x)               4 (full range)
acosf(x)               3 (full range)
atanf(x)               2 (full range)
atan2f(y,x)            3 (full range)
sinhf(x)               3 (full range)
coshf(x)               2 (full range)
tanhf(x)               2 (full range)
asinhf(x)              3 (full range)
acoshf(x)              4 (full range)
atanhf(x)              3 (full range)
powf(x,y)              8 (full range)
erff(x)                3 (full range)
erfcf(x)               6 (full range)
erfinvf(x)             3 (full range)
erfcinvf(x)            7 (full range)
lgammaf(x)             6 (outside interval -10.001 ... -2.264; larger inside)
tgammaf(x)             11 (full range)
fmaf(x,y,z)            0 (full range)
frexpf(x,exp)          0 (full range)
ldexpf(x,exp)          0 (full range)
scalbnf(x,n)           0 (full range)
scalblnf(x,l)          0 (full range)
logbf(x)               0 (full range)
ilogbf(x)              0 (full range)
fmodf(x,y)             0 (full range)
remainderf(x,y)        0 (full range)
remquof(x,y,iptr)      0 (full range)
modff(x,iptr)          0 (full range)
fdimf(x,y)             0 (full range)
truncf(x)              0 (full range)
roundf(x)              0 (full range)
rintf(x)               0 (full range)
nearbyintf(x)          0 (full range)
ceilf(x)               0 (full range)
floorf(x)              0 (full range)
lrintf(x)              0 (full range)
lroundf(x)             0 (full range)
llrintf(x)             0 (full range)
llroundf(x)            0 (full range)
signbit(x)             N/A
isinf(x)               N/A
isnan(x)               N/A
isfinite(x)            N/A
copysignf(x,y)         N/A
fminf(x,y)             N/A
fmaxf(x,y)             N/A
fabsf(x)               N/A
nanf(cptr)             N/A
nextafterf(x,y)        N/A

C.1.2 Double-Precision Floating-Point Functions

The errors listed below only apply when compiling for devices with native double-precision support. When compiling for devices without such support, such as devices of compute capability 1.2 and lower, the double type gets demoted to float by default and the double-precision math functions are mapped to their single-precision equivalents.

The recommended way to round a double-precision floating-point operand to an integer, with the result being a double-precision floating-point number, is rint(), not round(). The reason is that round() maps to an 8-instruction sequence on the device, whereas rint() maps to a single instruction. trunc(), ceil(), and floor() each map to a single instruction as well.
Table C-2. Mathematical Standard Library Functions with Maximum ULP Error

The maximum error is stated as the absolute value of the difference in ulps between a correctly rounded double-precision result and the result returned by the CUDA library function.

Function               Maximum ulp error
x+y                    0 (IEEE-754 round-to-nearest-even)
x*y                    0 (IEEE-754 round-to-nearest-even)
x/y                    0 (IEEE-754 round-to-nearest-even)
1/x                    0 (IEEE-754 round-to-nearest-even)
sqrt(x)                0 (IEEE-754 round-to-nearest-even)
rsqrt(x)               1 (full range)
cbrt(x)                1 (full range)
rcbrt(x)               1 (full range)
hypot(x,y)             2 (full range)
exp(x)                 1 (full range)
exp2(x)                1 (full range)
exp10(x)               1 (full range)
expm1(x)               1 (full range)
log(x)                 1 (full range)
log2(x)                1 (full range)
log10(x)               1 (full range)
log1p(x)               1 (full range)
sin(x)                 2 (full range)
cos(x)                 2 (full range)
tan(x)                 2 (full range)
sincos(x,sptr,cptr)    2 (full range)
sinpi(x)               2 (full range)
asin(x)                2 (full range)
acos(x)                2 (full range)
atan(x)                2 (full range)
atan2(y,x)             2 (full range)
sinh(x)                1 (full range)
cosh(x)                1 (full range)
tanh(x)                1 (full range)
asinh(x)               2 (full range)
acosh(x)               2 (full range)
atanh(x)               2 (full range)
pow(x,y)               2 (full range)
erf(x)                 2 (full range)
erfc(x)                5 (full range)
erfinv(x)              8 (full range)
erfcinv(x)             8 (full range)
lgamma(x)              4 (outside interval -11.0001 ... -2.2637; larger inside)
tgamma(x)              8 (full range)
fma(x,y,z)             0 (IEEE-754 round-to-nearest-even)
frexp(x,exp)           0 (full range)
ldexp(x,exp)           0 (full range)
scalbn(x,n)            0 (full range)
scalbln(x,l)           0 (full range)
logb(x)                0 (full range)
ilogb(x)               0 (full range)
fmod(x,y)              0 (full range)
remainder(x,y)         0 (full range)
remquo(x,y,iptr)       0 (full range)
modf(x,iptr)           0 (full range)
fdim(x,y)              0 (full range)
trunc(x)               0 (full range)
round(x)               0 (full range)
rint(x)                0 (full range)
nearbyint(x)           0 (full range)
ceil(x)                0 (full range)
floor(x)               0 (full range)
lrint(x)               0 (full range)
lround(x)              0 (full range)
llrint(x)              0 (full range)
llround(x)             0 (full range)
signbit(x)             N/A
isinf(x)               N/A
isnan(x)               N/A
isfinite(x)            N/A
copysign(x,y)          N/A
fmin(x,y)              N/A
fmax(x,y)              N/A
fabs(x)                N/A
nan(cptr)              N/A
nextafter(x,y)         N/A

C.1.3 Integer Functions

Integer min(x,y) and max(x,y) are supported and map to a single instruction on the device.
C.2 Intrinsic Functions

This section lists the intrinsic functions that are only supported in device code. Among these functions are the less accurate, but faster versions of some of the functions of Section C.1; they have the same name prefixed with __ (such as __sinf(x)).

Functions suffixed with _rn operate using the round-to-nearest-even rounding mode.
Functions suffixed with _rz operate using the round-towards-zero rounding mode.
Functions suffixed with _ru operate using the round-up (to positive infinity) rounding mode.
Functions suffixed with _rd operate using the round-down (to negative infinity) rounding mode.
C.2.1 Single-Precision Floating-Point Functions

__fadd_rn() and __fmul_rn() map to addition and multiplication operations that the compiler never merges into FMADs. By contrast, additions and multiplications generated from the '*' and '+' operators will frequently be combined into FMADs.

The accuracy of floating-point division varies depending on the compute capability of the device and whether the code is compiled with -prec-div=false or -prec-div=true. For devices of compute capability 1.x, or for devices of compute capability 2.x when the code is compiled with -prec-div=false, both the regular division "/" operator and __fdividef(x,y) have the same accuracy, but for 2^126 < y < 2^128, __fdividef(x,y) delivers a result of zero, whereas the "/" operator delivers the correct result to within the accuracy stated in Table C-3. Also, for 2^126 < y < 2^128, if x is infinity, __fdividef(x,y) delivers a NaN (as a result of multiplying infinity by zero), while the "/" operator returns infinity. For devices of compute capability 2.x, when the code is compiled with -prec-div=true, the "/" operator is IEEE-compliant as mentioned in Section C.1.1.

__saturate(x) returns 0 if x is less than 0, 1 if x is more than 1, and x otherwise.

__float2ll_[rn,rz,ru,rd](x) (respectively __float2ull_[rn,rz,ru,rd](x)) converts the single-precision floating-point parameter x to a 64-bit signed (respectively unsigned) integer with the specified IEEE-754 rounding mode.
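The behavior of __saturate() described above is easy to state as a host-side reference implementation. The function name below is ours, not the device intrinsic:

```cpp
// Host reference for the device intrinsic __saturate(x):
// clamp the input to the interval [0, 1].
float saturate_ref(float x) {
    if (x < 0.0f) return 0.0f;
    if (x > 1.0f) return 1.0f;
    return x;
}
```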
Table C-3. Single-Precision Floating-Point Intrinsic Functions Supported by the CUDA Runtime Library with Respective Error Bounds

Function                     Error bounds
__fadd_[rn,rz,ru,rd](x,y)    IEEE-compliant.
__fmul_[rn,rz,ru,rd](x,y)    IEEE-compliant.
__fmaf_[rn,rz,ru,rd](x,y,z)  IEEE-compliant.
__frcp_[rn,rz,ru,rd](x)      IEEE-compliant.
__fsqrt_[rn,rz,ru,rd](x)     IEEE-compliant.
__fdiv_[rn,rz,ru,rd](x,y)    IEEE-compliant.
__fdividef(x,y)              For y in [2^-126, 2^126], the maximum ulp
                             error is 2.
__expf(x)                    The maximum ulp error is
                             2 + floor(abs(1.16 * x)).
__exp10f(x)                  The maximum ulp error is
                             2 + floor(abs(2.95 * x)).
__logf(x)                    For x in [0.5, 2], the maximum absolute error
                             is 2^-21.41, otherwise the maximum ulp error
                             is 3.
__log2f(x)                   For x in [0.5, 2], the maximum absolute error
                             is 2^-22, otherwise the maximum ulp error is 2.
__log10f(x)                  For x in [0.5, 2], the maximum absolute error
                             is 2^-24, otherwise the maximum ulp error is 3.
__sinf(x)                    For x in [-pi, pi], the maximum absolute error
                             is 2^-21.41, and larger otherwise.
__cosf(x)                    For x in [-pi, pi], the maximum absolute error
                             is 2^-21.19, and larger otherwise.
__sincosf(x,sptr,cptr)       Same as sinf(x) and cosf(x).
__tanf(x)                    Derived from its implementation as
                             __sinf(x) * (1 / __cosf(x)).
__powf(x,y)                  Derived from its implementation as
                             exp2f(y * __log2f(x)).
__saturate(x)                N/A
C.2.2 Double-Precision Floating-Point Functions

__dadd_rn() and __dmul_rn() map to addition and multiplication operations that the compiler never merges into FMADs. By contrast, additions and multiplications generated from the '*' and '+' operators will frequently be combined into FMADs.
Table C-4. Double-Precision Floating-Point Intrinsic Functions Supported by the CUDA Runtime Library with Respective Error Bounds

Function                     Error bounds
__dadd_[rn,rz,ru,rd](x,y)    IEEE-compliant.
__dmul_[rn,rz,ru,rd](x,y)    IEEE-compliant.
__fma_[rn,rz,ru,rd](x,y,z)   IEEE-compliant.
__ddiv_[rn,rz,ru,rd](x,y)    IEEE-compliant. Requires compute capability >= 2.
__drcp_[rn,rz,ru,rd](x)      IEEE-compliant. Requires compute capability >= 2.
__dsqrt_[rn,rz,ru,rd](x)     IEEE-compliant. Requires compute capability >= 2.

C.2.3 Integer Functions

__[u]mul24(x,y) computes the product of the 24 least significant bits of the integer parameters x and y and delivers the 32 least significant bits of the result. The 8 most significant bits of x or y are ignored.

__[u]mulhi(x,y) computes the product of the integer parameters x and y and delivers the 32 most significant bits of the 64-bit result.

__[u]mul64hi(x,y) computes the product of the 64-bit integer parameters x and y and delivers the 64 most significant bits of the 128-bit result.

__[u]sad(x,y,z) (Sum of Absolute Difference) returns the sum of integer parameter z and the absolute value of the difference between integer parameters x and y.

__clz(x) returns the number, between 0 and 32 inclusive, of consecutive zero bits starting at the most significant bit (i.e. bit 31) of integer parameter x.

__clzll(x) returns the number, between 0 and 64 inclusive, of consecutive zero bits starting at the most significant bit (i.e. bit 63) of 64-bit integer parameter x.

__ffs(x) returns the position of the first (least significant) bit set in integer parameter x. The least significant bit is position 1. If x is 0, __ffs() returns 0. Note that this is identical to the Linux function ffs.

__ffsll(x) returns the position of the first (least significant) bit set in 64-bit integer parameter x. The least significant bit is position 1. If x is 0, __ffsll() returns 0. Note that this is identical to the Linux function ffsll.

__popc(x) returns the number of bits that are set to 1 in the binary representation of 32-bit integer parameter x.

__popcll(x) returns the number of bits that are set to 1 in the binary representation of 64-bit integer parameter x.

__brev(x) reverses the bits of 32-bit unsigned integer parameter x, i.e. bit N of the result corresponds to bit 31-N of x.

__brevll(x) reverses the bits of 64-bit unsigned long long parameter x, i.e. bit N of the result corresponds to bit 63-N of x.

__byte_perm(x,y,s) returns, as a 32-bit integer r, four bytes from eight input bytes provided in the two input integers x and y. The input bytes are indexed as follows:

input[0] = x<0:7>
input[1] = x<8:15>
input[2] = x<16:23>
input[3] = x<24:31>
input[4] = y<0:7>
input[5] = y<8:15>
input[6] = y<16:23>
input[7] = y<24:31>

The selector indices are stored in 4-bit nibbles (with the upper 16 bits of the selector not being used):

selector[0] = s<0:3>
selector[1] = s<4:7>
selector[2] = s<8:11>
selector[3] = s<12:15>

The returned value r is computed to be:

result[n] := input[selector[n]]

where result[n] is the nth byte of r.
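The byte-selection rule above can be sketched as a host-side reference implementation. This is an illustration of the indexing scheme only (the function name is ours, and it ignores any mode bits beyond the 3-bit byte index in each nibble):

```cpp
#include <cstdint>

// Host reference for the selection rule of __byte_perm(x, y, s):
// build each result byte by indexing the eight input bytes
// (x supplies bytes 0-3, y supplies bytes 4-7) with the
// corresponding 4-bit nibble of the selector s.
uint32_t byte_perm_ref(uint32_t x, uint32_t y, uint32_t s) {
    uint8_t input[8];
    for (int i = 0; i < 4; ++i) {
        input[i]     = (x >> (8 * i)) & 0xFF;
        input[i + 4] = (y >> (8 * i)) & 0xFF;
    }
    uint32_t r = 0;
    for (int n = 0; n < 4; ++n) {
        uint32_t sel = (s >> (4 * n)) & 0x7;  // 3 bits pick one of 8 bytes
        r |= uint32_t(input[sel]) << (8 * n);
    }
    return r;
}
```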
C.2.4 Type Casting Functions

There are two categories of type casting functions: the type conversion functions (Table C-5) and the type reinterpretation functions (Table C-6).

A type reinterpretation function does not change the binary representation of its input value. For example, __int_as_float(0xC0000000) is equal to -2.0f, and __float_as_int(1.0f) is equal to 0x3f800000.

A type conversion function may change the binary representation of its input value. For example, __int2float_rn(0xC0000000) is equal to -1073741824.0f, and __float2int_rn(1.0f) is equal to 1.
Table C-5. Type Conversion Functions

__float2int_[rn,rz,ru,rd](x)
__float2uint_[rn,rz,ru,rd](x)
__int2float_[rn,rz,ru,rd](x)
__uint2float_[rn,rz,ru,rd](x)
__float2ll_[rn,rz,ru,rd](x)
__float2ull_[rn,rz,ru,rd](x)
__ll2float_[rn,rz,ru,rd](x)
__ull2float_[rn,rz,ru,rd](x)
__float2half_rn(x)
__half2float(x)
__double2float_[rn,rz,ru,rd](x)
__double2int_[rn,rz,ru,rd](x)
__double2uint_[rn,rz,ru,rd](x)
__double2ll_[rn,rz,ru,rd](x)
__double2ull_[rn,rz,ru,rd](x)
__int2double_rn(x)
__uint2double_rn(x)
__ll2double_[rn,rz,ru,rd](x)
__ull2double_[rn,rz,ru,rd](x)
Table C-6. Type Reinterpretation Functions

__int_as_float(x)
__float_as_int(x)
__double_as_longlong(x)
__longlong_as_double(x)
__double2hiint(x)
__double2loint(x)
__hiloint2double(hi, lo)
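The reinterpretation category can be sketched on the host with memcpy standing in for __float_as_int / __int_as_float (the _ref names are ours). The examples from the text become checkable values:

```cpp
#include <cstdint>
#include <cstring>

// Host sketches of type reinterpretation: the bit pattern is preserved,
// only the type changes (memcpy avoids strict-aliasing issues).
int32_t float_as_int_ref(float f) {
    int32_t i;
    std::memcpy(&i, &f, sizeof i);
    return i;
}

float int_as_float_ref(int32_t i) {
    float f;
    std::memcpy(&f, &i, sizeof f);
    return f;
}
```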
Appendix D. C++ Language Constructs
CUDA supports the following C++ language constructs for device code:

Polymorphism
Default Parameters
Operator Overloading
Namespaces
Function Templates
Classes (for devices of compute capability 2.x)

These C++ constructs are implemented as specified in "The C++ Programming Language" reference. It is valid to use any of these constructs in .cu CUDA files for host, device, and kernel (__global__) functions. Any restrictions detailed in previous parts of this programming guide, like the lack of support for recursion, still apply.
The following subsections provide examples of the various constructs.
D.1 Polymorphism

Generally, polymorphism is the ability to define functions or operators that behave differently in different contexts. This is also referred to as function (and operator, see below) overloading.

In practical terms, this means that it is permissible to define two different functions within the same scope (namespace) as long as they have a distinguishable function signature. That means that the two functions either consume a different number of parameters or parameters of different types. When one of the multiple functions gets invoked, the compiler resolves to the function's implementation that matches the function signature.

Because of implicit typecasting, a compiler may encounter multiple potential matches for a function invocation and in that case the matching rules as described in the C++ Language Standard apply. In practice this means that the compiler will pick the closest match in case of multiple potential matches.
Example: The following is valid CUDA code:

__device__ void f(float x)
{
    // do something with x
}
__device__ void f(int i)
{
    // do something with i
}
__device__ void f(double x, double y)
{
    // do something with x and y
}
D.2 Default Parameters

With support for polymorphism as described in the previous subsection and the function signature matching rules in place, it becomes possible to provide support for default values for function parameters.
Example:

__device__ void f(float x = 0.0f)
{
    // do something with x
}

Kernel or other device functions can now invoke this version of f in one of two ways:

f();
// or
float x = /* some value */;
f(x);
Default parameters can only be given for the last n parameters of a function.
D.3 Operator Overloading

Operator overloading allows programmers to define operators for new data types. Examples of overloadable operators in C++ are: +, -, *, /, +=, &, [], etc.
Example: The following is valid CUDA code, implementing the + operation between two uchar4 vectors:

__device__ uchar4 operator+ (const uchar4 & a, const uchar4 & b)
{
    uchar4 r;
    r.x = a.x + b.x;
    ...
    return r;
}

This new operator can now be used like this:

uchar4 a, b, c;
a = b = /* some initial value */;
c = a + b;
D.4 Namespaces

Namespaces in C++ allow for the creation of a hierarchy of scopes of visibility. All the symbols inside a namespace can be used within this namespace without additional syntax. Namespaces can be used to solve the problem of name-clashes (two different symbols using identical names), which commonly occurs when using multiple function libraries from different sources.
Example: The following code defines two functions "f()" in two separate namespaces ("nvidia" and "other"):

namespace nvidia {
    __device__ void f(float x) { /* do something with x */ }
}
namespace other {
    __device__ void f(float x) { /* do something with x */ }
}

The functions can now be used anywhere via fully qualified names:

nvidia::f(0.5f);

All the symbols in a namespace can be imported into another namespace (scope) like this:

using namespace nvidia;
f(0.5f);
D.5 Function Templates

Function templates are a form of meta-programming that allows writing a generic function in a data-type independent fashion. CUDA supports function templates to the full extent of the C++ standard, including the following concepts:

Implicit template parameter deduction.
Explicit instantiation.
Template specialization.
Example:

template <typename T>
__device__ bool f(T x)
{
    return /* some clever code that turns x into a bool here */
}

This function will convert x of any data type to a bool as long as the code in the function's body can be compiled for the actual type (T) of the variable x. f() can be invoked in two ways:

int x = 1;
bool result = f(x);

This first type of invocation relies on the compiler's ability to implicitly deduce the correct function type for T. In this case the compiler would deduce T to be int and instantiate f<int>(x).

The second type of invoking the template function is via explicit instantiation like this:

bool result = f<double>(0.5);

Function templates may be specialized:

template <typename T>
__device__ bool f(T x)
{
    return false;
}

template <>
__device__ bool f<int>(int x)
{
    return true;
}

In this case the implementation for T representing the int type is specialized to return true; all other types will be caught by the more general template and return false.

The complete set of matching rules (for implicitly deducing template parameters) and matching polymorphous functions apply as specified in the C++ standard.
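The same deduction/specialization pattern compiles as ordinary host C++ once the __device__ qualifier is dropped; the sketch below (our own function name) can be checked without a CUDA toolchain:

```cpp
// Host-side version of the specialization pattern: the general template
// returns false, the explicit int specialization returns true.
template <typename T>
bool is_int_like(T) { return false; }

template <>
bool is_int_like<int>(int) { return true; }
```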
D.6 Classes

Code compiled for devices with compute capability 2.x and higher may make use of C++ classes, as long as none of the member functions are virtual (this restriction will be removed in some future release).

There are two common use cases for classes without virtual member functions:

Small-data aggregations, e.g. data types like pixels (r, g, b, a), 2D and 3D points, vectors, etc.

Functor classes. The use of functors is necessitated by the fact that device-function pointers are not supported and thus it is not possible to pass functions as template parameters. A workaround for this restriction is the use of functor classes (see code sample below).
D.6.1 Example 1: Pixel Data Type

The following is an example of a data type for RGBA pixels with 8 bit per channel depth:

class PixelRGBA {
public:
    __device__ PixelRGBA(): r_(0), g_(0), b_(0), a_(0) { ; }
    __device__ PixelRGBA(unsigned char r, unsigned char g,
                         unsigned char b, unsigned char a = 255):
        r_(r), g_(g), b_(b), a_(a) { ; }
    // other methods and operators left out for sake of brevity
private:
    unsigned char r_, g_, b_, a_;
    friend PixelRGBA operator+(const PixelRGBA &, const PixelRGBA &);
};

__device__ PixelRGBA operator+(const PixelRGBA & p1, const PixelRGBA & p2)
{
    return PixelRGBA(p1.r_ + p2.r_, p1.g_ + p2.g_,
                     p1.b_ + p2.b_, p1.a_ + p2.a_);
}

Other device code can now make use of this new data type as one would expect:

PixelRGBA p1, p2;
// [...] initialization of p1 and p2 here
PixelRGBA p3 = p1 + p2;
D.6.2 Example 2: Functor Class

The following example shows how functors may be used as function template parameters to implement a set of vector arithmetic operations. Here are two functors for float addition and subtraction:

class Add {
public:
    __device__ float operator() (float a, float b) const
    {
        return a + b;
    }
};

class Sub {
public:
    __device__ float operator() (float a, float b) const
    {
        return a - b;
    }
};

The following templatized kernel makes use of functors like the ones above in order to implement operations on vectors of floats:

// Device code
template<class O>
__global__ void
VectorOperation(const float * A, const float * B, float * C,
                unsigned int N, O op)
{
    unsigned int iElement = blockDim.x * blockIdx.x + threadIdx.x;
    if (iElement < N)
        C[iElement] = op(A[iElement], B[iElement]);
}

The VectorOperation kernel may now be launched like this in order to get a vector addition:

// Host code
VectorOperation<<<blocks, threads>>>(v1, v2, v3, N, Add());
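The functor pattern itself is plain C++ and can be checked on the host by replacing the grid with a serial loop (a sketch of the pattern, not the kernel launch):

```cpp
// Host-side analog of the functor pattern above: the operation is passed
// as a template parameter, which is the workaround for the lack of
// device-function pointers.
class Add {
public:
    float operator() (float a, float b) const { return a + b; }
};

class Sub {
public:
    float operator() (float a, float b) const { return a - b; }
};

template <class O>
void vectorOperation(const float* A, const float* B, float* C,
                     unsigned int N, O op)
{
    for (unsigned int i = 0; i < N; ++i)  // serial stand-in for the grid
        C[i] = op(A[i], B[i]);
}
```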
Appendix E. NVCC Specifics
E.1 __noinline__ and __forceinline__

When compiling code for devices of compute capability 1.x, a __device__ function is always inlined by default. When compiling code for devices of compute capability 2.x, a __device__ function is only inlined when deemed appropriate by the compiler.

The __noinline__ function qualifier can be used as a hint for the compiler not to inline the function if possible. The function body must still be in the same file where it is called. For devices of compute capability 1.x, the compiler will not honor the __noinline__ qualifier for functions with pointer parameters and for functions with large parameter lists. For devices of compute capability 2.x, the compiler will always honor the __noinline__ qualifier.

The __forceinline__ function qualifier can be used to force the compiler to inline the function.
E.2 #pragma unroll

By default, the compiler unrolls small loops with a known trip count. The #pragma unroll directive however can be used to control unrolling of any given loop. It must be placed immediately before the loop and only applies to that loop. It is optionally followed by a number that specifies how many times the loop must be unrolled.

For example, in this code sample:

#pragma unroll 5
for (int i = 0; i < n; ++i)

the loop will be unrolled 5 times. The compiler will also insert code to ensure correctness (in the example above, to ensure that there will only be n iterations if n is less than 5). It is up to the programmer to make sure that the specified unroll number gives the best performance.

#pragma unroll 1 will prevent the compiler from ever unrolling a loop.
If no number is specified after #pragma unroll, the loop is completely unrolled if its trip count is constant, otherwise it is not unrolled at all.
E.3 __restrict__

nvcc supports restricted pointers via the __restrict__ keyword.
Restricted pointers were introduced in C99 to alleviate the aliasing problem that exists in C-type languages, and which inhibits all kinds of optimization from code reordering to common sub-expression elimination.

Here is an example subject to the aliasing issue, where use of restricted pointers can help the compiler to reduce the number of instructions:

void foo(const float* a, const float* b, float* c)
{
    c[0] = a[0] * b[0];
    c[1] = a[0] * b[0];
    c[2] = a[0] * b[0] * a[1];
    c[3] = a[0] * a[1];
    c[4] = a[0] * b[0];
    c[5] = b[0];
    ...
}
In C-type languages, the pointers a, b, and c may be aliased, so any write through c could modify elements of a or b. This means that to guarantee functional correctness, the compiler cannot load a[0] and b[0] into registers, multiply them, and store the result to both c[0] and c[1], because the results would differ from the abstract execution model if, say, a[0] is really the same location as c[0]. So the compiler cannot take advantage of the common sub-expression. Likewise, the compiler cannot just reorder the computation of c[4] into the proximity of the computation of c[0] and c[1] because the preceding write to c[3] could change the inputs to the computation of c[4].

By making a, b, and c restricted pointers, the programmer asserts to the compiler that the pointers are in fact not aliased, which in this case means writes through c would never overwrite elements of a or b. This changes the function prototype as follows:

void foo(const float* __restrict__ a,
         const float* __restrict__ b,
         float* __restrict__ c);
Note that all pointer arguments need to be made restricted for the compiler optimizer to derive any benefit. With the __restrict__ keywords added, the compiler can now reorder and do common sub-expression elimination at will, while retaining functionality identical with the abstract execution model:

void foo(const float* __restrict__ a,
         const float* __restrict__ b,
         float* __restrict__ c)
{
    float t0 = a[0];
    float t1 = b[0];
    float t2 = t0 * t1;
    float t3 = a[1];
    c[0] = t2;
    c[1] = t2;
    c[4] = t2;
    c[2] = t2 * t3;
    c[3] = t0 * t3;
    c[5] = t1;
    ...
}
The effects here are a reduced number of memory accesses and reduced number of computations. This is balanced by an increase in register pressure due to "cached" loads and common sub-expressions. Since register pressure is a critical issue in many CUDA codes, use of restricted pointers can have negative performance impact on CUDA code, due to reduced occupancy.
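The hazard that __restrict__ rules out can be demonstrated directly on the host: if c aliases a, the "cached" variant (which loads a[0] and b[0] once) produces a different result than the naive variant, which re-reads a[0] after writing through c. This is a sketch of the aliasing problem only, with our own function names:

```cpp
// Without restrict, the compiler may not hoist a[0]*b[0], because when
// c aliases a the second statement observes the write to c[0].
void foo_naive(const float* a, const float* b, float* c)
{
    c[0] = a[0] * b[0];
    c[1] = a[0] * b[0];   // re-reads a[0]: sees c[0]'s new value if aliased
}

// The transformation restrict would permit: load once, store twice.
// Legal only under the no-aliasing assertion.
void foo_cached(const float* a, const float* b, float* c)
{
    float t = a[0] * b[0];
    c[0] = t;
    c[1] = t;
}
```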
Appendix F. Texture Fetching
This appendix gives the formula used to compute the value returned by the texture functions of Section B.8 depending on the various attributes of the texture reference (see Section 3.2.4).

The texture bound to the texture reference is represented as an array T of N texels for a one-dimensional texture, N × M texels for a two-dimensional texture, or N × M × L texels for a three-dimensional texture. It is fetched using texture coordinates x, y, and z.

A texture coordinate must fall within T's valid addressing range before it can be used to address T. The addressing mode specifies how an out-of-range texture coordinate x is remapped to the valid range. If x is non-normalized, only the clamp addressing mode is supported and x is replaced by 0 if x < 0 and N − 1 if N ≤ x. If x is normalized:

In clamp addressing mode, x is replaced by 0 if x < 0 and 1 − 1/N if 1 ≤ x.

In wrap addressing mode, x is replaced by frac(x), where frac(x) = x − floor(x) and floor(x) is the largest integer not greater than x.

In the remainder of this appendix, x, y, and z are the non-normalized texture coordinates remapped to T's valid addressing range. x, y, and z are derived from the normalized texture coordinates x̂, ŷ, and ẑ as such: x = N·x̂, y = M·ŷ, and z = L·ẑ.
F.1 Nearest-Point Sampling

In this filtering mode, the value returned by the texture fetch is
tex(x) = T[i] for a one-dimensional texture,
tex(x,y) = T[i,j] for a two-dimensional texture,
tex(x,y,z) = T[i,j,k] for a three-dimensional texture,

where i = floor(x), j = floor(y), and k = floor(z).

Figure F-1 illustrates nearest-point sampling for a one-dimensional texture with N = 4.

For integer textures, the value returned by the texture fetch can be optionally remapped to [0.0, 1.0] (see Section 3.2.4.1.1).

Figure F-1. Nearest-Point Sampling of a One-Dimensional Texture of Four Texels
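The one-dimensional case above can be sketched as a host reference implementation (our own function name; clamp addressing, matching the rules from the start of the appendix):

```cpp
#include <cmath>

// Host sketch of nearest-point sampling for a 1-D texture T of N texels,
// with non-normalized coordinate x clamped to the valid range.
float tex_nearest(const float* T, int N, float x)
{
    int i = (int)std::floor(x);
    if (i < 0) i = 0;          // clamp addressing mode
    if (i > N - 1) i = N - 1;
    return T[i];
}
```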
F.2 Linear Filtering

In this filtering mode, which is only available for floating-point textures, the value returned by the texture fetch is

tex(x) = (1 − α)T[i] + αT[i+1] for a one-dimensional texture,

tex(x,y) = (1 − α)(1 − β)T[i,j] + α(1 − β)T[i+1,j] + (1 − α)βT[i,j+1] + αβT[i+1,j+1] for a two-dimensional texture,

tex(x,y,z) =
  (1 − α)(1 − β)(1 − γ)T[i,j,k] + α(1 − β)(1 − γ)T[i+1,j,k] +
  (1 − α)β(1 − γ)T[i,j+1,k] + αβ(1 − γ)T[i+1,j+1,k] +
  (1 − α)(1 − β)γT[i,j,k+1] + α(1 − β)γT[i+1,j,k+1] +
  (1 − α)βγT[i,j+1,k+1] + αβγT[i+1,j+1,k+1]

for a three-dimensional texture, where:

i = floor(x_B), α = frac(x_B), x_B = x − 0.5,
j = floor(y_B), β = frac(y_B), y_B = y − 0.5,
k = floor(z_B), γ = frac(z_B), z_B = z − 0.5.

α, β, and γ are stored in 9-bit fixed point format with 8 bits of fractional value (so 1.0 is exactly represented).

Figure F-2 illustrates linear filtering of a one-dimensional texture with N = 4.

Figure F-2. Linear Filtering of a One-Dimensional Texture of Four Texels in Clamp Addressing Mode
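The one-dimensional formula, including the x_B = x − 0.5 shift, can be sketched on the host (our own function name; clamp addressing, and without the 9-bit quantization of the fractional weight):

```cpp
#include <cmath>

// Host sketch of 1-D linear filtering per the formulas above:
// x_B = x - 0.5, i = floor(x_B), alpha = frac(x_B), with the two
// texel indices clamped to [0, N-1].
float tex_linear(const float* T, int N, float x)
{
    float xB = x - 0.5f;
    int   i  = (int)std::floor(xB);
    float alpha = xB - std::floor(xB);
    int i0 = i < 0 ? 0 : (i > N - 1 ? N - 1 : i);
    int i1 = i + 1 < 0 ? 0 : (i + 1 > N - 1 ? N - 1 : i + 1);
    return (1.0f - alpha) * T[i0] + alpha * T[i1];
}
```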
F.3
Table Lookup A table lookup TL(x) where TL( x) tex (
x
spans the interval [0, R] can be implemented as
N 1 x 0.5) in order to ensure that TL(0) T [0] and TL( R) T [ N 1] . R
Figure F-3 illustrates the use of texture filtering to implement a table lookup with R = 4 or R = 1 from a one-dimensional texture with N = 4.

[Figure: TL(x) plotted over texels T[0]..T[3]; x spans 0..4 for R = 4 (ticks at 0, 4/3, 8/3, 4) or 0..1 for R = 1 (ticks at 0, 1/3, 2/3, 1).]
Figure F-3. One-Dimensional Table Lookup Using Linear Filtering
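The remapping can be checked numerically with a small host model (illustrative; it embeds a minimal clamped linear filter so the sketch is self-contained):

```c
/* Table lookup TL(x) over [0, R]: TL(x) = tex((N - 1)/R * x + 0.5),
   so that TL(0) = T[0] and TL(R) = T[N - 1]. */
static float lerp_fetch(const float *T, int N, float x)
{
    float xb = x - 0.5f;
    int i = (int)xb;
    if (xb < 0.0f && (float)i != xb) i--;      /* floor for negative xb */
    float a = xb - (float)i;                   /* frac(x_B) */
    int i0 = i < 0 ? 0 : (i > N - 1 ? N - 1 : i);
    int i1 = i + 1 < 0 ? 0 : (i + 1 > N - 1 ? N - 1 : i + 1);
    return (1.0f - a) * T[i0] + a * T[i1];
}

float table_lookup(const float *T, int N, float R, float x)
{
    return lerp_fetch(T, N, (float)(N - 1) / R * x + 0.5f);
}
```

With N = 4 and R = 4, TL(0) maps to texture coordinate 0.5 (the center of T[0]) and TL(4) maps to 3.5 (the center of T[3]), as the formula promises.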
Appendix G. Compute Capabilities
The general specifications and features of a compute device depend on its compute capability (see Section 2.5). Section G.1 gives the features and technical specifications associated with each compute capability. Section G.2 reviews the compliance with the IEEE floating-point standard. Sections G.3 and G.4 give more details on the architecture of devices of compute capability 1.x and 2.x, respectively.
G.1 Features and Technical Specifications

Feature support per compute capability (unlisted features are supported for all compute capabilities):
- Integer atomic functions operating on 32-bit words in global memory (Section B.11): No for 1.0; Yes for 1.1 and above
- Integer atomic functions operating on 64-bit words in global memory (Section B.11): No for 1.0 and 1.1; Yes for 1.2 and above
- Integer atomic functions operating on 32-bit words in shared memory (Section B.11): No for 1.0 and 1.1; Yes for 1.2 and above
- Warp vote functions (Section B.12): No for 1.0 and 1.1; Yes for 1.2 and above
- Double-precision floating-point numbers: No for 1.0 through 1.2; Yes for 1.3 and above
- Floating-point atomic addition operating on 32-bit words in global and shared memory (Section B.11): No for 1.x; Yes for 2.x
- __ballot() (Section B.12): No for 1.x; Yes for 2.x
- __threadfence_system() (Section B.5): No for 1.x; Yes for 2.x
- __syncthreads_count(), __syncthreads_and(), __syncthreads_or() (Section B.6): No for 1.x; Yes for 2.x
- Surface functions (Section B.9): No for 1.x; Yes for 2.x

Technical specifications per compute capability:
- Maximum x- or y-dimension of a grid of thread blocks: 65535
- Maximum number of threads per block: 512 for 1.x; 1024 for 2.x
- Maximum x- or y-dimension of a block: 512 for 1.x; 1024 for 2.x
- Maximum z-dimension of a block: 64
- Warp size: 32
- Maximum number of resident blocks per multiprocessor: 8
- Maximum number of resident warps per multiprocessor: 24 for 1.0 and 1.1; 32 for 1.2 and 1.3; 48 for 2.x
- Maximum number of resident threads per multiprocessor: 768 for 1.0 and 1.1; 1024 for 1.2 and 1.3; 1536 for 2.x
- Number of 32-bit registers per multiprocessor: 8 K for 1.0 and 1.1; 16 K for 1.2 and 1.3; 32 K for 2.x
- Maximum amount of shared memory per multiprocessor: 16 KB for 1.x; 48 KB for 2.x
- Number of shared memory banks: 16 for 1.x; 32 for 2.x
- Amount of local memory per thread: 16 KB for 1.x; 512 KB for 2.x
- Constant memory size: 64 KB
- Cache working set per multiprocessor for constant memory: 8 KB
- Cache working set per multiprocessor for texture memory: device dependent, between 6 KB and 8 KB
- Maximum width for a 1D texture reference bound to a CUDA array: 8192 for 1.x; 32768 for 2.x
- Maximum width for a 1D texture reference bound to linear memory: 2^27
- Maximum width and height for a 2D texture reference bound to linear memory or to a CUDA array: 65536 x 32768 for 1.x; 65536 x 65535 for 2.x
- Maximum width, height, and depth for a 3D texture reference bound to linear memory or a CUDA array: 2048 x 2048 x 2048
- Maximum number of textures that can be bound to a kernel: 128
- Maximum width for a 1D surface reference bound to a CUDA array: N/A for 1.x; 8192 for 2.x
- Maximum width and height for a 2D surface reference bound to a CUDA array: N/A for 1.x; 8192 x 8192 for 2.x
- Maximum number of surfaces that can be bound to a kernel: N/A for 1.x; 8 for 2.x
- Maximum number of instructions per kernel: 2 million
G.2 Floating-Point Standard

All compute devices follow the IEEE 754-2008 standard for binary floating-point arithmetic with the following deviations:
- There is no dynamically configurable rounding mode; however, most of the operations support multiple IEEE rounding modes, exposed via device intrinsics;
- There is no mechanism for detecting that a floating-point exception has occurred, and all operations behave as if the IEEE-754 exceptions are always masked, delivering the masked response as defined by IEEE-754 if there is an exceptional event; for the same reason, while SNaN encodings are supported, they are not signaling and are handled as quiet;
- The result of a single-precision floating-point operation involving one or more input NaNs is the quiet NaN of bit pattern 0x7fffffff;
- Double-precision floating-point absolute value and negation are not compliant with IEEE-754 with respect to NaNs; these are passed through unchanged;
- For single-precision floating-point numbers on devices of compute capability 1.x:
  - Denormalized numbers are not supported; floating-point arithmetic and comparison instructions convert denormalized operands to zero prior to the floating-point operation;
  - Underflowed results are flushed to zero;
  - Some instructions are not IEEE-compliant:
    - Addition and multiplication are often combined into a single multiply-add instruction (FMAD), which truncates (i.e. without rounding) the intermediate mantissa of the multiplication;
    - Division is implemented via the reciprocal in a non-standard-compliant way;
    - Square root is implemented via the reciprocal square root in a non-standard-compliant way;
    - For addition and multiplication, only round-to-nearest-even and round-towards-zero are supported via static rounding modes; directed rounding towards +/- infinity is not supported;
To mitigate the impact of these restrictions, IEEE-compliant software (and therefore slower) implementations are provided through the following intrinsics (c.f. Section C.2.1):
- __fmaf_r{n,z,u,d}(float, float, float): single-precision fused multiply-add with IEEE rounding modes,
- __frcp_r[n,z,u,d](float): single-precision reciprocal with IEEE rounding modes,
- __fdiv_r[n,z,u,d](float, float): single-precision division with IEEE rounding modes,
- __fsqrt_r[n,z,u,d](float): single-precision square root with IEEE rounding modes,
- __fadd_r[u,d](float, float): single-precision addition with IEEE directed rounding,
- __fmul_r[u,d](float, float): single-precision multiplication with IEEE directed rounding;
For double-precision floating-point numbers on devices of compute capability 1.x:
- Round-to-nearest-even is the only supported IEEE rounding mode for reciprocal, division, and square root.

When compiling for devices without native double-precision floating-point support, i.e. devices of compute capability 1.2 and lower, each double variable is converted to single-precision floating-point format (but retains its size of 64 bits) and double-precision floating-point arithmetic gets demoted to single-precision floating-point arithmetic.
For devices of compute capability 2.x, code must be compiled with -ftz=false, -prec-div=true, and -prec-sqrt=true to ensure IEEE compliance (this is the default setting; see the nvcc user manual for description of these compilation flags); code compiled with -ftz=true, -prec-div=false, and -prec-sqrt=false comes closest to the code generated for devices of compute capability 1.x.

Addition and multiplication are often combined into a single multiply-add instruction: FMAD for single precision on devices of compute capability 1.x, FFMA for single precision on devices of compute capability 2.x. As mentioned above, FMAD truncates the mantissa prior to using it in the addition. FFMA, on the other hand, is an IEEE-754(2008) compliant fused multiply-add instruction, so the full-width product is used in the addition and a single rounding occurs during generation of the final result. While FFMA in general has superior numerical properties compared to FMAD, the switch from FMAD to FFMA can cause slight changes in numeric results and can in rare circumstances lead to slightly larger error in final results.
In accordance with the IEEE-754R standard, if one of the input parameters to fminf(), fmin(), fmaxf(), or fmax() is NaN, but not the other, the result is the non-NaN parameter.

The conversion of a floating-point value to an integer value in the case where the floating-point value falls outside the range of the integer format is left undefined by IEEE-754. For compute devices, the behavior is to clamp to the end of the supported range. This is unlike the x86 architecture behavior.
G.3 Compute Capability 1.x

G.3.1 Architecture

For devices of compute capability 1.x, a multiprocessor consists of:
- 8 CUDA cores for integer and single-precision floating-point arithmetic operations,
- 1 double-precision floating-point unit for double-precision floating-point arithmetic operations,
- 2 special function units for single-precision floating-point transcendental functions (these units can also handle single-precision floating-point multiplications),
- 1 warp scheduler.
To execute an instruction for all threads of a warp, the warp scheduler must therefore issue the instruction over:
- 4 clock cycles for an integer or single-precision floating-point arithmetic instruction,
- 32 clock cycles for a double-precision floating-point arithmetic instruction,
- 16 clock cycles for a single-precision floating-point transcendental instruction.

A multiprocessor also has a read-only constant cache that is shared by all functional units and speeds up reads from the constant memory space, which resides in device memory.
Multiprocessors are grouped into Texture Processor Clusters (TPCs). The number of multiprocessors per TPC is:
- 2 for devices of compute capabilities 1.0 and 1.1,
- 3 for devices of compute capabilities 1.2 and 1.3.

Each TPC has a read-only texture cache that is shared by all multiprocessors and speeds up reads from the texture memory space, which resides in device memory. Each multiprocessor accesses the texture cache via a texture unit that implements the various addressing modes and data filtering mentioned in Section 3.2.4.
The local and global memory spaces reside in device memory and are not cached.
G.3.2 Global Memory

A global memory request for a warp is split into two memory requests, one for each half-warp, that are issued independently. Sections G.3.2.1 and G.3.2.2 describe how the memory accesses of threads within a half-warp are coalesced into one or more memory transactions depending on the compute capability of the device. Figure G-1 shows some examples of global memory accesses and corresponding memory transactions based on compute capability. The resulting memory transactions are serviced at the throughput of device memory.
G.3.2.1 Devices of Compute Capability 1.0 and 1.1

To coalesce, the memory request for a half-warp must satisfy the following conditions:
- The size of the words accessed by the threads must be 4, 8, or 16 bytes;
- If this size is:
  - 4, all 16 words must lie in the same 64-byte segment,
  - 8, all 16 words must lie in the same 128-byte segment,
  - 16, the first 8 words must lie in the same 128-byte segment and the last 8 words in the following 128-byte segment;
- Threads must access the words in sequence: The kth thread in the half-warp must access the kth word.

If the half-warp meets these requirements, a 64-byte memory transaction, a 128-byte memory transaction, or two 128-byte memory transactions are issued if the size of the words accessed by the threads is 4, 8, or 16, respectively. Coalescing is achieved even if the warp is divergent, i.e. there are some inactive threads that do not actually access memory.
If the half-warp does not meet these requirements, 16 separate 32-byte memory transactions are issued.
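As an illustrative check (not NVIDIA code), the 4-byte-word conditions can be tested on the host; the function name and the simplification that all 16 threads are active are my own:

```c
/* Check whether 16 active threads' 4-byte accesses coalesce on a
   compute-1.0/1.1 device: thread k must access word k of a single
   64-byte-aligned segment.  Returns 1 if one 64-byte transaction is
   issued, 0 if 16 separate 32-byte transactions are issued instead. */
int coalesces_4byte(const unsigned addr[16])
{
    if (addr[0] % 64 != 0) return 0;        /* segment must be aligned */
    for (int t = 0; t < 16; t++)
        if (addr[t] != addr[0] + 4u * (unsigned)t) return 0;
    return 1;
}
```

A sequential access starting at a 64-byte-aligned base passes; shifting the base by even 4 bytes fails the check and falls back to 16 separate transactions.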
G.3.2.2 Devices of Compute Capability 1.2 and 1.3

Threads can access any words in any order, including the same words, and a single memory transaction for each segment addressed by the half-warp is issued. This is in contrast with devices of compute capabilities 1.0 and 1.1 where threads need to access words in sequence and coalescing only happens if the half-warp addresses a single segment.
More precisely, the following protocol is used to determine the memory transactions necessary to service all threads in a half-warp:
- Find the memory segment that contains the address requested by the lowest numbered active thread. The segment size depends on the size of the words accessed by the threads: 32 bytes for 1-byte words, 64 bytes for 2-byte words, 128 bytes for 4-, 8-, and 16-byte words.
- Find all other active threads whose requested address lies in the same segment.
- Reduce the transaction size, if possible:
  - If the transaction size is 128 bytes and only the lower or upper half is used, reduce the transaction size to 64 bytes;
  - If the transaction size is 64 bytes (originally or after reduction from 128 bytes) and only the lower or upper half is used, reduce the transaction size to 32 bytes.
- Carry out the transaction and mark the serviced threads as inactive.
- Repeat until all threads in the half-warp are serviced.
G.3.3 Shared Memory

Shared memory has 16 banks that are organized such that successive 32-bit words are assigned to successive banks, i.e. interleaved. Each bank has a bandwidth of 32 bits per two clock cycles.

A shared memory request for a warp is split into two memory requests, one for each half-warp, that are issued independently. As a consequence, there can be no bank conflict between a thread belonging to the first half of a warp and a thread belonging to the second half of the same warp.

If a non-atomic instruction executed by a warp writes to the same location in shared memory for more than one of the threads of the warp, only one thread per half-warp performs a write and which thread performs the final write is undefined.
G.3.3.1 32-Bit Strided Access

A common access pattern is for each thread to access a 32-bit word from an array indexed by the thread ID tid and with some stride s:

    __shared__ float shared[32];
    float data = shared[BaseIndex + s * tid];
In this case, threads tid and tid+n access the same bank whenever s*n is a multiple of the number of banks (i.e. 16) or, equivalently, whenever n is a multiple of 16/d where d is the greatest common divisor of 16 and s. As a consequence, there will be no bank conflict only if half the warp size (i.e. 16) is less than or equal to 16/d, that is, only if d is equal to 1, i.e. s is odd. Figure G-2 shows some examples of strided access for devices of compute capability 2.x. The same examples apply for devices of compute capability 1.x, but with 16 banks instead of 32.
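This arithmetic is easy to verify directly (an illustrative helper, not part of the CUDA toolkit): the degree of conflict is d = gcd(banks, s), and d == 1 means conflict-free, which for 16 banks is exactly the odd strides:

```c
/* d = gcd(banks, s): the number of threads of a half-warp (16 banks)
   or warp (32 banks) that land in the same bank for stride s, i.e. the
   "ways" of the bank conflict; d == 1 means conflict-free. */
int bank_conflict_ways(int banks, int s)
{
    int a = banks;
    int b = s % banks;
    while (b != 0) {            /* Euclid's algorithm */
        int r = a % b;
        a = b;
        b = r;
    }
    return a;
}
```

For example, stride 3 is conflict-free on 16 banks while stride 2 causes 2-way conflicts and stride 8 causes 8-way conflicts.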
G.3.3.2 32-Bit Broadcast Access

Shared memory features a broadcast mechanism whereby a 32-bit word can be read and broadcast to several threads simultaneously when servicing one memory read request. This reduces the number of bank conflicts when several threads read from an address within the same 32-bit word. More precisely, a memory read request made of several addresses is serviced in several steps over time by servicing one conflict-free subset of these addresses per step until all addresses have been serviced; at each step, the subset is built from the remaining addresses that have yet to be serviced using the following procedure:
- Select one of the words pointed to by the remaining addresses as the broadcast word;
- Include in the subset:
  - All addresses that are within the broadcast word,
  - One address for each bank (other than the broadcasting bank) pointed to by the remaining addresses.

Which word is selected as the broadcast word and which address is picked up for each bank at each cycle are unspecified.
A common conflict-free case is when all threads of a half-warp read from an address within the same 32-bit word. Figure G-3 shows some examples of memory read accesses that involve the broadcast mechanism. The same examples apply for devices of compute capability 1.x, but with 16 banks instead of 32.
G.3.3.3 8-Bit and 16-Bit Access

8-bit and 16-bit accesses typically generate bank conflicts. For example, there are bank conflicts if an array of char is accessed the following way:

    __shared__ char shared[32];
    char data = shared[BaseIndex + tid];

because shared[0], shared[1], shared[2], and shared[3], for example, belong to the same bank. There are no bank conflicts, however, if the same array is accessed the following way:

    char data = shared[BaseIndex + 4 * tid];
G.3.3.4 Larger Than 32-Bit Access

Accesses that are larger than 32-bit per thread are split into 32-bit accesses that typically generate bank conflicts. For example, there are 2-way bank conflicts for arrays of doubles accessed as follows:

    __shared__ double shared[32];
    double data = shared[BaseIndex + tid];

as the memory request is compiled into two separate 32-bit requests with a stride of two. One way to avoid bank conflicts in this case is to split the double operands like in the following sample code:

    __shared__ int shared_lo[32];
    __shared__ int shared_hi[32];
    double dataIn;
    shared_lo[BaseIndex + tid] = __double2loint(dataIn);
    shared_hi[BaseIndex + tid] = __double2hiint(dataIn);
    double dataOut = __hiloint2double(shared_hi[BaseIndex + tid],
                                      shared_lo[BaseIndex + tid]);
This might not always improve performance, however, and does perform worse on devices of compute capability 2.x.

The same applies to structure assignments. The following code, for example:

    __shared__ struct type shared[32];
    struct type data = shared[BaseIndex + tid];
results in:
- Three separate reads without bank conflicts if type is defined as

      struct type {
          float x, y, z;
      };

  since each member is accessed with an odd stride of three 32-bit words;
- Two separate reads with bank conflicts if type is defined as

      struct type {
          float x, y;
      };

  since each member is accessed with an even stride of two 32-bit words.
G.4 Compute Capability 2.x

G.4.1 Architecture

For devices of compute capability 2.x, a multiprocessor consists of:
- For devices of compute capability 2.0:
  - 32 CUDA cores for integer and floating-point arithmetic operations,
  - 4 special function units for single-precision floating-point transcendental functions,
- For devices of compute capability 2.1:
  - 48 CUDA cores for integer and floating-point arithmetic operations,
  - 8 special function units for single-precision floating-point transcendental functions,
- 2 warp schedulers.

At every instruction issue time, each scheduler issues:
- One instruction for devices of compute capability 2.0,
- Two instructions for devices of compute capability 2.1,
for some warp that is ready to execute, if any. The first scheduler is in charge of the warps with an odd ID and the second scheduler is in charge of the warps with an even ID. Note that when a scheduler issues a double-precision floating-point instruction, the other scheduler cannot issue any instruction.

A warp scheduler can issue an instruction to only half of the CUDA cores. To execute an instruction for all threads of a warp, a warp scheduler must therefore issue the instruction over two clock cycles for an integer or floating-point arithmetic instruction.

A multiprocessor also has a read-only uniform cache that is shared by all functional units and speeds up reads from the constant memory space, which resides in device memory.

There is an L1 cache for each multiprocessor and an L2 cache shared by all multiprocessors, both of which are used to cache accesses to local or global memory, including temporary register spills. The cache behavior (e.g. whether reads are cached in both L1 and L2 or in L2 only) can be partially configured on a per-access basis using modifiers to the load or store instruction.

The same on-chip memory is used for both L1 and shared memory: It can be configured as 48 KB of shared memory and 16 KB of L1 cache or as 16 KB of shared memory and 48 KB of L1 cache, using cudaFuncSetCacheConfig()/cuFuncSetCacheConfig():

    // Device code
    __global__ void MyKernel()
    {
        ...
    }

    // Host code

    // Runtime API
    // cudaFuncCachePreferShared: shared memory is 48 KB
    // cudaFuncCachePreferL1: shared memory is 16 KB
    // cudaFuncCachePreferNone: no preference
    cudaFuncSetCacheConfig(MyKernel, cudaFuncCachePreferShared);

    // Driver API
    // CU_FUNC_CACHE_PREFER_SHARED: shared memory is 48 KB
    // CU_FUNC_CACHE_PREFER_L1: shared memory is 16 KB
    // CU_FUNC_CACHE_PREFER_NONE: no preference
    CUfunction myKernel;
    cuFuncSetCacheConfig(myKernel, CU_FUNC_CACHE_PREFER_SHARED);

The default cache configuration is "prefer none," meaning "no preference." If a kernel is configured to have no preference, then it will default to the preference of the current thread/context, which is set using cudaThreadSetCacheConfig()/cuCtxSetCacheConfig() (see the reference manual for details). If the current thread/context also has no preference (which is again the default setting), then whichever cache configuration was most recently used for any kernel will be the one that is used, unless a different cache configuration is required to launch the kernel (e.g., due to shared memory requirements). The initial configuration is 48 KB of shared memory and 16 KB of L1 cache.

Multiprocessors are grouped into Graphics Processor Clusters (GPCs). A GPC includes four multiprocessors.
Each multiprocessor has a read-only texture cache to speed up reads from the texture memory space, which resides in device memory. It accesses the texture cache via a texture unit that implements the various addressing modes and data filtering mentioned in Section 3.2.4.
G.4.2 Global Memory

Global memory accesses are cached. Using the -dlcm compilation flag, they can be configured at compile time to be cached in both L1 and L2 (-Xptxas -dlcm=ca) (this is the default setting) or in L2 only (-Xptxas -dlcm=cg).

A cache line is 128 bytes and maps to a 128-byte aligned segment in device memory. Memory accesses that are cached in both L1 and L2 are serviced with 128-byte memory transactions, whereas memory accesses that are cached in L2 only are serviced with 32-byte memory transactions. Caching in L2 only can therefore reduce over-fetch, for example, in the case of scattered memory accesses.

If the size of the words accessed by each thread is more than 4 bytes, a memory request by a warp is first split into separate 128-byte memory requests that are issued independently:
- Two memory requests, one for each half-warp, if the size is 8 bytes,
- Four memory requests, one for each quarter-warp, if the size is 16 bytes.

Each memory request is then broken down into cache line requests that are issued independently. A cache line request is serviced at the throughput of L1 or L2 cache in case of a cache hit, or at the throughput of device memory, otherwise.
Note that threads can access any words in any order, including the same words. If a non-atomic instruction executed by a warp writes to the same location in global memory for more than one of the threads of the warp, only one thread performs a write and which thread does it is undefined.
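For intuition, the number of cache-line requests generated by a warp of 4-byte accesses is just the number of distinct 128-byte lines touched (illustrative host code, not NVIDIA's implementation):

```c
/* Count the distinct 128-byte cache lines touched by 32 4-byte accesses.
   With caching in L1 and L2, each distinct line is one 128-byte
   transaction; with L2 only, each would be serviced in 32-byte pieces. */
int cache_lines_touched(const unsigned addr[32])
{
    unsigned lines[32];
    int n = 0;
    for (int t = 0; t < 32; t++) {
        unsigned line = addr[t] / 128;
        int seen = 0;
        for (int i = 0; i < n; i++)
            if (lines[i] == line) { seen = 1; break; }
        if (!seen)
            lines[n++] = line;
    }
    return n;
}
```

A warp reading 32 consecutive floats from a 128-byte-aligned base touches one line; shifting the base by 4 bytes makes the same warp touch two lines.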
[Figure G-1 content, reconstructed as text. Each example is a warp accessing one 4-byte word per thread in the address range around 128-256:
- Aligned and sequential: 1.0 and 1.1 (uncached): 1 x 64B at 128 + 1 x 64B at 192; 1.2 and 1.3 (uncached): 1 x 64B at 128 + 1 x 64B at 192; 2.0 (cached): 1 x 128B at 128.
- Aligned and non-sequential: 1.0 and 1.1 (uncached): 8 x 32B at 128, 8 x 32B at 160, 8 x 32B at 192, 8 x 32B at 224; 1.2 and 1.3 (uncached): 1 x 64B at 128 + 1 x 64B at 192; 2.0 (cached): 1 x 128B at 128.
- Misaligned and sequential: 1.0 and 1.1 (uncached): 7 x 32B at 128, 8 x 32B at 160, 8 x 32B at 192, 8 x 32B at 224, 1 x 32B at 256; 1.2 and 1.3 (uncached): 1 x 128B at 128 + 1 x 64B at 192 + 1 x 32B at 256; 2.0 (cached): 1 x 128B at 128 + 1 x 128B at 256.]
Figure G-1. Examples of Global Memory Accesses by a Warp, 4-Byte Word per Thread, and Associated Memory Transactions Based on Compute Capability
G.4.3 Shared Memory

Shared memory has 32 banks that are organized such that successive 32-bit words are assigned to successive banks, i.e. interleaved. Each bank has a bandwidth of 32 bits per two clock cycles. Therefore, unlike for devices of lower compute capability, there may be bank conflicts between a thread belonging to the first half of a warp and a thread belonging to the second half of the same warp.

A bank conflict only occurs if two or more threads access any bytes within different 32-bit words belonging to the same bank. If two or more threads access any bytes within the same 32-bit word, there is no bank conflict between these threads: For read accesses, the word is broadcast to the requesting threads (unlike for devices of compute capability 1.x, multiple words can be broadcast in a single transaction); for write accesses, each byte is written by only one of the threads (which thread performs the write is undefined).

This means, in particular, that unlike for devices of compute capability 1.x, there are no bank conflicts if an array of char is accessed as follows, for example:

    __shared__ char shared[32];
    char data = shared[BaseIndex + tid];
G.4.3.1 32-Bit Strided Access

A common access pattern is for each thread to access a 32-bit word from an array indexed by the thread ID tid and with some stride s:

    __shared__ float shared[32];
    float data = shared[BaseIndex + s * tid];

In this case, threads tid and tid+n access the same bank whenever s*n is a multiple of the number of banks (i.e. 32) or, equivalently, whenever n is a multiple of 32/d where d is the greatest common divisor of 32 and s. As a consequence, there will be no bank conflict only if the warp size (i.e. 32) is less than or equal to 32/d, that is, only if d is equal to 1, i.e. s is odd. Figure G-2 shows some examples of strided access.
G.4.3.2 Larger Than 32-Bit Access

64-bit and 128-bit accesses are specifically handled to minimize bank conflicts as described below. Other accesses larger than 32-bit are split into 32-bit, 64-bit, or 128-bit accesses. The following code, for example:

    struct type {
        float x, y, z;
    };
    __shared__ struct type shared[32];
    struct type data = shared[BaseIndex + tid];

results in three separate 32-bit reads without bank conflicts since each member is accessed with a stride of three 32-bit words.
64-Bit Accesses

For 64-bit accesses, a bank conflict only occurs if two or more threads in either of the half-warps access different addresses belonging to the same bank.

Unlike for devices of compute capability 1.x, there are no bank conflicts for arrays of doubles accessed as follows, for example:

    __shared__ double shared[32];
    double data = shared[BaseIndex + tid];
128-Bit Accesses

The majority of 128-bit accesses will cause 2-way bank conflicts, even if no two threads in a quarter-warp access different addresses belonging to the same bank. Therefore, to determine the ways of bank conflicts, one must add 1 to the maximum number of threads in a quarter-warp that access different addresses belonging to the same bank.
G.4.4 Constant Memory

In addition to the constant memory space supported by devices of all compute capabilities (where __constant__ variables reside), devices of compute capability 2.x support the LDU (LoaD Uniform) instruction that the compiler uses to load any variable that is:
- pointing to global memory,
- read-only in the kernel (the programmer can enforce this using the const keyword),
- not dependent on thread ID.
Left: Linear addressing with a stride of one 32-bit word (no bank conflict). Middle: Linear addressing with a stride of two 32-bit words (2-way bank conflicts). Right: Linear addressing with a stride of three 32-bit words (no bank conflict).
Figure G-2 Examples of Strided Shared Memory Accesses for Devices of Compute Capability 2.x
Left: Conflict-free access via random permutation. Middle: Conflict-free access since threads 3, 4, 6, 7, and 9 access the same word within bank 5. Right: Conflict-free broadcast access (all threads access the same word).
Figure G-3 Examples of Irregular and Colliding Shared Memory Accesses for Devices of Compute Capability 2.x
Notice

ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.

Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent or patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all information previously supplied. NVIDIA Corporation products are not authorized for use as critical components in life support devices or systems without express written approval of NVIDIA Corporation.

Trademarks

NVIDIA, the NVIDIA logo, GeForce, Tesla, and Quadro are trademarks or registered trademarks of NVIDIA Corporation. Other company and product names may be trademarks of the respective companies with which they are associated. OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.

Copyright

© 2006-2010 NVIDIA Corporation. All rights reserved.

This work incorporates portions of an earlier work: Scalable Parallel Programming with CUDA, in ACM Queue, VOL 6, No. 2 (March/April 2008), © ACM, 2008. http://mags.acm.org/queue/20080304/?u1=texterity
NVIDIA Corporation 2701 San Tomas Expressway Santa Clara, CA 95050 www.nvidia.com
I have three tables
//1
CREATE TABLE `client_domain` (
`client_id` int(10) unsigned NOT NULL,
`domain_id` int(10) unsigned NOT NULL,
PRIMARY KEY (`client_id`,`domain_id`),
KEY `FK_client_domains_domain` (`domain_id`),
CONSTRAINT `FK_client_domain` FOREIGN KEY (`domain_id`) REFERENCES `domain` (`id`) ON DELETE CASCADE,
CONSTRAINT `FK_client_domains_client` FOREIGN KEY (`client_id`) REFERENCES `client` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
//2
CREATE TABLE `client` (
`id` int(10) unsigned NOT NULL,
`name` varchar(50) NOT NULL,
`notes` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
//3
CREATE TABLE `domain` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`domain_name` varchar(50) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `IX_domain` (`domain_name`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8;
All works fine, but when I try to delete a record from the client_domain table using:
$del = new ClientDom(array('db' => $this->_adapter));
$where[] = $del->getAdapter()->quoteInto('client_id = ?', $client);
$where[] = $del->getAdapter()->quoteInto('domain_id = ?', $domain);
$result = $del->delete($where)->toArray();
I delete the record, but with an error:
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'client_id' in 'where clause'...
What is wrong? The same thing happens if I fetchAll($where), but insert works fine.
1 Answer
Resolved via:

$del->delete(array(
    'client_id = ?' => $client,
    'domain_id = ?' => $domain
));

Don't know why, but via $where it doesn't work... If someone knows why, please write it here :)
|
__label__pos
| 0.999965 |
You asked: How do you find the smallest digit of a number in Java?
How do you find the smallest digit of a number?
Take a number n as the input. An integer function smallest_digit(int n) takes ‘n’ as the input and returns the smallest digit in the given number. Now initialize min as the last digit of the given number.
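The steps above can be sketched in Java as follows (the class and method names here are made up for illustration; the loop peels off digits with % 10 and keeps the minimum):

```java
public class SmallestDigit {
    // Returns the smallest decimal digit of n, assuming n > 0.
    static int smallestDigit(int n) {
        int min = n % 10;          // initialize min as the last digit
        while (n > 0) {
            int digit = n % 10;    // current last digit
            if (digit < min) {
                min = digit;
            }
            n /= 10;               // drop the last digit
        }
        return min;
    }

    public static void main(String[] args) {
        System.out.println(smallestDigit(2734)); // prints 2
    }
}
```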
How do you find the smallest number in Java?
Let’s see another example to get the smallest element or number in java array using Arrays.
import java.util.*;
public class SmallestInArrayExample1 {
    public static int getSmallest(int[] a, int total) {
        Arrays.sort(a);
        return a[0];
    }
    public static void main(String args[]) {
        int a[] = {1, 2, 5, 6, 3, 2};
        System.out.println("Smallest: " + getSmallest(a, a.length));
    }
}
How do you find the largest digit of a number in Java?
public static int maximum(int max) {
    int num = 0;
    while (num != 0) {             // true upon entry, false for second iteration
        int rightDigit = num % 10; // will result in 0
        num /= 10;                 // will result in 0
        if (rightDigit > max)      // 0 is NOT > max
            rightDigit = max;      // not executed
    }
    return max;                    // return original number entered???
}
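The snippet above never enters its loop, because num starts at 0. A working version of the idea (a sketch with a made-up class name, not the original poster's code) loops over the digits of the input instead:

```java
public class LargestDigit {
    // Returns the largest decimal digit of n, assuming n > 0.
    static int largestDigit(int n) {
        int max = 0;
        while (n > 0) {
            int rightDigit = n % 10; // current last digit
            if (rightDigit > max) {
                max = rightDigit;
            }
            n /= 10;                 // drop the last digit
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(largestDigit(2794)); // prints 9
    }
}
```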
Which is smallest digit?
The smallest one-digit number is 1 (one) and greatest one-digit number is 9. All the digits become numbers when used as a number. (ii) There are 90 numbers of two digits. The smallest two-digit number is 10 and greatest two-digit number is 99.
What is the highest number?
Googol. It is a large number, unimaginably large. It is easy to write in exponential format: 10^100, an extremely compact method, to easily represent the largest numbers (and also the smallest numbers).
What is largest digit number?
Digits are the single symbols used to represent numbers in Maths. In mathematics, these digits are said to be numerical digits or sometimes simply numbers. … The smallest one-digit number is 1 and the largest one-digit number is 9.
How do you find the smallest number with 3 numbers?
Program Explanation
Get three inputs num1, num2 and num3 from the user using scanf statements. Check whether num1 is smaller than num2 and num3 using an if statement; if true, print that num1 is smallest using a printf statement. Otherwise num2 or num3 is smallest, so check whether num2 is smaller than num3 using an else-if statement.
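The explanation above is phrased in C terms (scanf/printf); the same chain of comparisons in Java looks like this sketch (class and method names are made up):

```java
public class SmallestOfThree {
    // Returns the smallest of three values using chained comparisons.
    static int smallest(int num1, int num2, int num3) {
        if (num1 < num2 && num1 < num3) {
            return num1;            // num1 is smallest
        } else if (num2 < num3) {
            return num2;            // num2 is smallest
        } else {
            return num3;            // num3 is smallest
        }
    }

    public static void main(String[] args) {
        System.out.println(smallest(7, 3, 9)); // prints 3
    }
}
```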
How do you find the minimum of 3 numbers in Java?
Write a method minimum3 that returns the smallest of three floating-point numbers. Use the Math.min method to implement minimum3. Incorporate the method into an application that reads three values from the user, determines the smallest value and displays the result.
How do you find the minimum of two numbers in Java?
Math.min() is a built-in method in Java that returns the minimum (lowest) of its two arguments.

Example 1:

public class MinExample1 {
    public static void main(String args[]) {
        int x = 20;
        int y = 50;
        // print the minimum of the two numbers
        System.out.println(Math.min(x, y));
    }
}
How do you find the second largest digit in a number in Java?
Find 2nd Largest Number in Array using Arrays

import java.util.Arrays;
public class SecondLargestInArrayExample1 {
    public static int getSecondLargest(int[] a, int total) {
        Arrays.sort(a);
        return a[total - 2];
    }
    public static void main(String args[]) {
        int a[] = {1, 2, 5, 6, 3, 2};
        System.out.println("Second Largest: " + getSecondLargest(a, a.length));
    }
}
What is the largest 3 digit number?
Answer: The smallest 3-digit number is 100 and the largest 3-digit number is 999.
How do you find the greatest digit number?
To get the greatest number, we arrange the digits in descending order. 8 > 7 > 5 > 2. The greatest number using the digits 7 5 2 8 is 8752. To get the smallest number, we arrange the digits in ascending order.
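This digit-arranging rule can be sketched in Java (helper names are made up) by sorting the digit characters:

```java
import java.util.Arrays;

public class DigitArrangement {
    // Sorts the digits of n in ascending order (smallest arrangement).
    static String smallestArrangement(int n) {
        char[] digits = String.valueOf(n).toCharArray();
        Arrays.sort(digits);                 // ascending digit order
        return new String(digits);
    }

    // Sorts the digits of n in descending order (greatest arrangement).
    static String greatestArrangement(int n) {
        char[] digits = String.valueOf(n).toCharArray();
        Arrays.sort(digits);
        return new StringBuilder(new String(digits)).reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(greatestArrangement(7528)); // prints 8752
        System.out.println(smallestArrangement(7528)); // prints 2578
    }
}
```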
What is the 6 digit smallest number?
On adding one to the largest five-digit number, we get 100000, which is the smallest six-digit number.
Is 0 a digit number?
0 (zero) is a number, and the numerical digit used to represent that number in numerals. It fulfills a central role in mathematics as the additive identity of the integers, real numbers, and many other algebraic structures. As a digit, 0 is used as a placeholder in place value systems.
What is greatest and smallest number?
Thus, the greatest number is 8741. To get the smallest number, the smallest digit 1 is placed at thousands-place, next greater digit 4 at hundred’s place, still greater digit 7 at ten’s place and greatest digit 8 at one’s or units place. Thus, the smallest number is 1478.
|
__label__pos
| 1 |
Java Puzzle Game
2014-07-01 · 10,549 views
## 效果图  ## 准备工作 准备2张500X500像素的图片,命名分别为0.jpg与1.jpg,放在代码根目录 ## 代码 ```java import java.awt.Choice; import java.awt.Image; import java.awt.Toolkit; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.awt.image.CropImageFilter; import java.awt.image.FilteredImageSource; import java.awt.image.ImageFilter; import java.util.Random; import javax.swing.Icon; import javax.swing.ImageIcon; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JOptionPane; import javax.swing.JPanel; public class PintuGame { public static void main(String args[]) { new PintuFrame().StartFrame(); } } class PintuFrame extends JFrame { private static final long serialVersionUID = 1L; // 等级设置 private static int level = 3; // 图片索引 private static int index = 0; // 图片数量 private static int picCount = 2; // 开始时间 private long startTime; // 初始化小方块 private JButton[] buttons; // 初始化空方块 private JPanel emptyPanel = new JPanel(); // 初始化监听类 private PintuListener listener = new PintuListener(); // 初始化Panel private JPanel panel = new JPanel(null); // 图片预览 private JLabel label; private String[] imgpath = new String[picCount]; // 选图时的图片路径 String path; public PintuFrame() { for (int i = 0; i < picCount; i++) { imgpath[i] = i + ".jpg"; System.out.println(imgpath[i]); } path = imgpath[index]; } /** * 开始窗体加载 */ public void StartFrame() { panel.removeAll(); JButton start = new JButton("开始");// 开始按钮 JButton left = new JButton("<"); JButton right = new JButton(">"); JLabel selLevel = new JLabel("LV:"); label = new JLabel(getIcon());// 根据图标设置标签 final Choice choice = new Choice();// 创建选择器 choice.add("--初级--");// 添加列表项 choice.add("--中级--"); choice.add("--高级--"); selLevel.setBounds(5, 0, 20, 20);// 设置坐标 choice.setBounds(28, 0, 65, 20); start.setBounds(93, 0, 85, 20); left.setBounds(178, 0, 61, 20); right.setBounds(239, 0, 61, 20); label.setBounds(0, 22, 300, 300);// 设置标签的方位 panel.add(selLevel); panel.add(choice); panel.add(start); panel.add(left); 
panel.add(right); panel.add(label); panel.repaint(); add(panel); setTitle("拼图游戏"); setBounds(450, 130, 300, 322); setResizable(false); // 添加关闭按钮 this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setVisible(true); // 监听等级选择 start.addMouseListener(new MouseAdapter() { @Override public void mousePressed(MouseEvent e) { level = choice.getSelectedIndex() + 3; launchFrame(); } }); // 监听选图按钮 <- left.addMouseListener(new MouseAdapter() { @Override public void mousePressed(MouseEvent e) { if (index == 0) { index = picCount - 1; path = imgpath[index]; } else { path = imgpath[--index]; } panel.remove(label); label = new JLabel(getIcon()); label.setBounds(0, 22, 300, 300); panel.add(label); panel.repaint(); } }); // 监听选图按钮 -> right.addMouseListener(new MouseAdapter() { @Override public void mousePressed(MouseEvent e) { if (index == picCount - 1) { index = 0; path = imgpath[index]; } else { path = imgpath[++index]; } panel.remove(label); label = new JLabel(getIcon()); label.setBounds(0, 22, 300, 300); panel.add(label); panel.repaint(); } }); } /** * 拼图窗体加载 */ public void launchFrame() { startTime = System.currentTimeMillis(); panel.removeAll(); buttons = new JButton[level * level]; // 设置图标组 Icon[] icon = new PintuFrame().creatIcon(path); // 小方块索引 int index = 0; // 小方块坐标 int x = 0, y = 0; // 设置小方块位置,图标,监听 for (int i = 0; i < level; i++) { for (int j = 0; j < level; j++) { // 添加图标 buttons[index] = new JButton(icon[index]); // 添加监听 buttons[index].addMouseListener(listener); // 设置位置 buttons[index].setBounds(x, y, 100, 100); // 添加到panel panel.add(buttons[index++]); x += 100; } y += 100; x = 0; } // 移除最后一个小方块 panel.remove(buttons[(level * level) - 1]); // 设置空方块位置 emptyPanel.setBounds((level - 1) * 100, (level - 1) * 100, 100, 100); // 添加空方块 panel.add(emptyPanel); panel.repaint(); add(panel); setResizable(false); setTitle("拼图游戏"); // 设置大小 setBounds(450, 130, level * 100, level * 100 + 30); // 打乱方格顺序 breakRank(); // 添加关闭按钮 setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } // 选图界面图像 
public Icon getIcon() { ImageIcon bi = new ImageIcon(getClass().getClassLoader().getResource(path)); // 缩放大小并显示到窗体 Image image = bi.getImage().getScaledInstance(300, 300, Image.SCALE_REPLICATE); return new ImageIcon(image); } // 打乱方格 public void breakRank() { Random r = new Random(); int x = 0, y = 0, emptyDir_X = 0, emptyDir_Y = 0; // 模拟随即点击1000次,打乱方格 for (int i = 0; i < 1000; i++) { int rid = r.nextInt(level * level - 1); // 获得该方格按钮的横坐标 x = buttons[rid].getBounds().x; // 获得该方格按钮的纵坐标 y = buttons[rid].getBounds().y; // 得到空方格的横坐标 emptyDir_X = emptyPanel.getBounds().x; // 得到空方格的纵坐标 emptyDir_Y = emptyPanel.getBounds().y; move(x, y, emptyDir_X, emptyDir_Y, buttons[rid]); } } // 移动方格 public void move(int x, int y, int emptyDir_X, int emptyDir_Y, JButton button) { // 进行比较果满足条件则交换 if (x == emptyDir_X && y - emptyDir_Y == 100) { button.setLocation(button.getBounds().x, button.getBounds().y - 100); } else if (x == emptyDir_X && y - emptyDir_Y == -100) { button.setLocation(button.getBounds().x, button.getBounds().y + 100); } else if (x - emptyDir_X == 100 & y == emptyDir_Y) { button.setLocation(button.getBounds().x - 100, button.getBounds().y); } else if (x - emptyDir_X == -100 && y == emptyDir_Y) { button.setLocation(button.getBounds().x + 100, button.getBounds().y); } else return; // 重新设置空方格的位置 emptyPanel.setLocation(x, y); } // 判断是否拼凑成功 public boolean isFinish() { for (int i = 0; i < (level * level) - 1; i++) { int x = buttons[i].getBounds().x; int y = buttons[i].getBounds().y; // 根据坐标位置判断是否拼凑成功 0+0 0+1 .. 
if (y / 100 * level + x / 100 != i) return false; } return true; } // 事件监听类 public class PintuListener extends MouseAdapter { @Override public void mousePressed(MouseEvent e) { JButton button = (JButton) e.getSource();// 获得鼠标按的方格按钮 int x = button.getBounds().x;// 获得该方格按钮的横坐标 int y = button.getBounds().y;// 获得该方格按钮的纵坐标 int nullDir_X = emptyPanel.getBounds().x;// 得到空方格的横坐标 int nullDir_Y = emptyPanel.getBounds().y;// 得到空方格的纵坐标 move(x, y, nullDir_X, nullDir_Y, button); if (isFinish()) {// 进行是否完成的判断 panel.remove(emptyPanel);// 移除最后一个小方块 panel.add(buttons[(level * level) - 1]);// 移除最后一个小方块 JOptionPane.showMessageDialog(null, "恭喜你,完成拼图\r\n用时为:" + (System.currentTimeMillis() - startTime) / 1000 + "S"); for (int i = 0; i < picCount; i++) {// 循环撤消鼠标事件 buttons[i].removeMouseListener(listener); } StartFrame(); } repaint(); } } // 创建方格图标组 public Icon[] creatIcon(String srcImageFile) { ImageIcon bi = new ImageIcon(this.getClass().getClassLoader().getResource(srcImageFile)); // 读取源图像 Image image = bi.getImage(); int index = 0; int x = 0, y = 0; Icon[] icon = new Icon[level * level];// 根据窗体大小创建图标数量 for (int i = 0; i < level; i++) { for (int j = 0; j < level; j++) { // 从原图像上获取一个方形位置 ImageFilter cropFilter = new CropImageFilter(x, y, 100, 100); // 截取方形图像 Image img = Toolkit.getDefaultToolkit() .createImage(new FilteredImageSource(image.getSource(), cropFilter)); icon[index++] = new ImageIcon(img); x += 100; } y += 100; x = 0; } return icon; } } ```
|
__label__pos
| 0.949281 |
Start
2020-04-06 05:15 AKDT
Kattis Set 12
End
2020-04-13 01:30 AKDT
Problem F
Unusual Darts
In the game of Unusual Darts, Alice throws seven darts onto a $2$-foot by $2$-foot board, and then Bob may or may not throw three darts.
Alice’s seven darts define a polygon by the order in which they are thrown, with the perimeter of the polygon connecting Dart $1$ to Dart $2$ to Dart $3$ to Dart $4$ to Dart $5$ to Dart $6$ to Dart $7$, and back to Dart $1$. If the polygon so defined is not simple (meaning it intersects itself) then Alice loses. If the polygon is simple, then Bob throws three darts. He is not a very good player, so although his darts always land on the board, they land randomly on the dart board following a uniform distribution. If all three of these darts land within the interior of the polygon, Bob wins, otherwise Alice wins.
For this problem you are given the locations of Alice’s darts (which form a simple polygon) and the probability that Bob wins. Your job is to determine the order in which Alice threw her darts.
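The key quantity is geometric: each of Bob's three darts independently lands inside the polygon with probability area/4 (the board is 2×2), so Bob wins with probability (area/4)^3. A minimal Java sketch (class and method names are made up) computes this for one candidate ordering via the shoelace formula; a full solver could then search permutations of the seven darts for an ordering whose probability matches p:

```java
public class DartsProb {
    // Shoelace formula for the area of a simple polygon.
    static double area(double[] xs, double[] ys) {
        double s = 0;
        int n = xs.length;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;
            s += xs[i] * ys[j] - xs[j] * ys[i];
        }
        return Math.abs(s) / 2.0;
    }

    // Three independent uniform darts on a 2x2 board each land inside
    // the polygon with probability area/4, so Bob wins with (area/4)^3.
    static double winProb(double[] xs, double[] ys) {
        double p = area(xs, ys) / 4.0;
        return p * p * p;
    }

    public static void main(String[] args) {
        // A right triangle with legs of length 2: area 2, win prob (1/2)^3.
        double[] xs = {0, 2, 0};
        double[] ys = {0, 0, 2};
        System.out.println(winProb(xs, ys)); // prints 0.125
    }
}
```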
Input
The first line of input contains an integer $N$ ($1\leq N\leq 1\, 000$), indicating the number of Darts games that follow. Each game description has $8$ lines. Lines $1$ through $7$ each have a pair of real numbers with $3$ digits after the decimal point. These indicate the $x$ and $y$ coordinates of Alice’s seven darts ($x_1~ y_1$ to $x_7~ y_7$), which are all at distinct locations. All coordinates are given in feet, in the range ($0 \le x_ i, y_ i \le 2$). The $8^\textrm {th}$ line contains a real number $p$ with $5$ digits after the decimal point, giving the probability that Bob wins. In all test cases, Alice’s darts do form a simple polygon, but not necessarily in the order given.
Output
For each Darts game, output the order in which the darts could have been thrown, relative to the order they were given in the input, so that Bob wins with probability $p$. If several answers are possible, give the one that is lexicographically least. Any ordering that would give Bob a probability of winning within $10^{-5}$ of the given value of $p$ is considered a valid ordering.
Sample Input 1 Sample Output 1
3
0.000 0.000
0.000 2.000
1.000 1.800
1.000 0.200
1.800 1.000
2.000 0.000
2.000 2.000
0.61413
0.000 0.000
0.000 2.000
1.000 1.800
1.000 0.200
1.800 1.000
2.000 0.000
2.000 2.000
0.12500
0.000 0.000
0.000 1.900
0.400 2.000
1.700 0.000
1.800 2.000
2.000 0.200
2.000 0.600
0.86416
1 2 3 7 5 6 4
1 4 3 2 7 5 6
1 2 3 5 7 6 4
|
__label__pos
| 0.855727 |
Makefile.ds
author Sam Lantinga <[email protected]>
Sat, 06 Aug 2011 01:21:24 -0400
changeset 5604 e2ad06c52c65
parent 5536 05af1b9ff46d
child 6251 3e8c673cad58
permissions -rw-r--r--
Updated configure for new changes in configure.in
#---------------------------------------------------------------------------------
.SUFFIXES:
#---------------------------------------------------------------------------------

ifeq ($(strip $(DEVKITARM)),)
$(error "Please set DEVKITARM in your environment. export DEVKITARM=<path to>devkitARM")
endif

include $(DEVKITARM)/ds_rules

#---------------------------------------------------------------------------------
# TARGET is the name of the output
# BUILD is the directory where object files & intermediate files will be placed
# SOURCES is a list of directories containing source code
# DATA is a list of directories containing data files
# INCLUDES is a list of directories containing header files
#---------------------------------------------------------------------------------
TARGET := $(shell basename $(CURDIR))
BUILD := src
SOURCES := src
DATA := data
INCLUDES := include

#---------------------------------------------------------------------------------
# options for code generation
#---------------------------------------------------------------------------------
ARCH := -mthumb -mthumb-interwork \
	-D__NDS__ -DENABLE_NDS -DNO_SIGNAL_H -DDISABLE_THREADS -DPACKAGE=\"SDL\" \
	-DVERSION=\"1.3\" -DHAVE_ALLOCA_H=1 -DHAVE_ALLOCA=1

CFLAGS := -g -Wall -O2\
	-march=armv5te -mtune=arm946e-s \
	-fomit-frame-pointer -ffast-math \
	$(ARCH)

CFLAGS += $(INCLUDE) -DARM9
CXXFLAGS := $(CFLAGS) -fno-rtti -fno-exceptions

ASFLAGS := -g $(ARCH) -march=armv5te -mtune=arm946e-s
LDFLAGS = -specs=ds_arm9.specs -g $(ARCH) -Wl,-Map,$(notdir $*.map)

# Set to 0 to use a frame buffer, or 1 to use the hardware
# renderer. Alas, both cannot be used at the same time for lack of
# display/texture memory.
USE_HW_RENDERER := 1

ifeq ($(USE_HW_RENDERER),1)
CFLAGS += -DUSE_HW_RENDERER
else
endif

#---------------------------------------------------------------------------------
# list of directories containing libraries, this must be the top level containing
# include and lib
#---------------------------------------------------------------------------------
LIBDIRS := $(LIBNDS) $(PORTLIBS)

#---------------------------------------------------------------------------------
# no real need to edit anything past this point unless you need to add additional
# rules for different file extensions
#---------------------------------------------------------------------------------
ifneq ($(BUILD),$(notdir $(CURDIR)))
#---------------------------------------------------------------------------------

export OUTPUT := $(CURDIR)/lib/lib$(TARGET).a

export VPATH := $(foreach dir,$(SOURCES),$(CURDIR)/$(dir)) \
	$(foreach dir,$(DATA),$(CURDIR)/$(dir))

export DEPSDIR := $(CURDIR)/$(BUILD)

CFILES := \
	SDL.c \
	SDL_assert.c \
	SDL_compat.c \
	SDL_error.c \
	SDL_fatal.c \
	SDL_hints.c \
	SDL_log.c \
	atomic/SDL_atomic.c \
	atomic/SDL_spinlock.arm.c \
	audio/SDL_audio.c \
	audio/SDL_audiocvt.c \
	audio/SDL_audiodev.c \
	audio/SDL_audiotypecvt.c \
	audio/SDL_mixer.c \
	audio/SDL_wave.c \
	audio/nds/SDL_ndsaudio.c \
	cpuinfo/SDL_cpuinfo.c \
	events/SDL_events.c \
	events/SDL_keyboard.c \
	events/SDL_mouse.c \
	events/SDL_quit.c \
	events/SDL_touch.c \
	events/SDL_windowevents.c \
	events/nds/SDL_ndsgesture.c \
	file/SDL_rwops.c \
	haptic/SDL_haptic.c \
	haptic/nds/SDL_syshaptic.c \
	joystick/SDL_joystick.c \
	joystick/nds/SDL_sysjoystick.c \
	power/SDL_power.c \
	power/nds/SDL_syspower.c \
	render/SDL_render.c \
	render/SDL_yuv_sw.c \
	render/nds/SDL_ndsrender.c \
	render/software/SDL_blendfillrect.c \
	render/software/SDL_blendline.c \
	render/software/SDL_blendpoint.c \
	render/software/SDL_drawline.c \
	render/software/SDL_drawpoint.c \
	render/software/SDL_render_sw.c \
	stdlib/SDL_getenv.c \
	stdlib/SDL_iconv.c \
	stdlib/SDL_malloc.c \
	stdlib/SDL_qsort.c \
	stdlib/SDL_stdlib.c \
	stdlib/SDL_string.c \
	thread/SDL_thread.c \
	thread/nds/SDL_syscond.c \
	thread/nds/SDL_sysmutex.c \
	thread/nds/SDL_syssem.c \
	thread/nds/SDL_systhread.c \
	timer/SDL_timer.c \
	timer/nds/SDL_systimer.c \
	video/SDL_RLEaccel.c \
	video/SDL_blit.c \
	video/SDL_blit_0.c \
	video/SDL_blit_1.c \
	video/SDL_blit_A.c \
	video/SDL_blit_N.c \
	video/SDL_blit_auto.c \
	video/SDL_blit_copy.c \
	video/SDL_blit_slow.c \
	video/SDL_bmp.c \
	video/SDL_clipboard.c \
	video/SDL_fillrect.c \
	video/SDL_pixels.c \
	video/SDL_rect.c \
	video/SDL_stretch.c \
	video/SDL_surface.c \
	video/SDL_video.c \
	video/nds/SDL_ndsevents.c \
	video/nds/SDL_ndsvideo.c \
	video/nds/SDL_ndswindow.c


#CPPFILES := $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.cpp)))
#SFILES := $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.s)))
#BINFILES := $(foreach dir,$(DATA),$(notdir $(wildcard $(dir)/*.*)))

#---------------------------------------------------------------------------------
# use CXX for linking C++ projects, CC for standard C
#---------------------------------------------------------------------------------
ifeq ($(strip $(CPPFILES)),)
#---------------------------------------------------------------------------------
export LD := $(CC)
#---------------------------------------------------------------------------------
else
#---------------------------------------------------------------------------------
export LD := $(CXX)
#---------------------------------------------------------------------------------
endif
#---------------------------------------------------------------------------------

export OFILES := $(addsuffix .o,$(BINFILES)) \
	$(CPPFILES:.cpp=.o) $(CFILES:.c=.o) $(SFILES:.s=.o)

export INCLUDE := $(foreach dir,$(INCLUDES),-I$(CURDIR)/$(dir)) \
	$(foreach dir,$(LIBDIRS),-I$(dir)/include) \
	-I$(CURDIR)/$(BUILD) \
	-I$(PORTLIBS)/include/SDL

.PHONY: $(BUILD) clean all

#---------------------------------------------------------------------------------
all: arm_only $(BUILD) install nds_test

lib:
	@[ -d $@ ] || mkdir -p $@

$(BUILD): lib
	@[ -d $@ ] || mkdir -p $@
	@$(MAKE) --no-print-directory -C $(BUILD) -f $(CURDIR)/Makefile.ds -s

install: $(BUILD)
	@mkdir -p $(PORTLIBS)/include/SDL/
	@rsync -a $(OUTPUT) $(PORTLIBS)/lib/
	@rsync -a include/*.h $(PORTLIBS)/include/SDL/

nds_test:
	$(MAKE) -C test/nds-test-progs -s

tags:
	cd $(SOURCES); etags $(CFILES)

# This file must be compiled with the ARM instruction set, not
# thumb. Use devkitpro way of doing things.
arm_only: src/atomic/SDL_spinlock.arm.c
src/atomic/SDL_spinlock.arm.c: src/atomic/SDL_spinlock.c
	@cp $< $@

#---------------------------------------------------------------------------------
clean:
	@echo clean ...
	@cd src; rm -fr $(OFILES) $(OFILES:.o=.d) lib
	@rm -f $(OUTPUT)
	@make -C test/nds-test-progs -s clean

#---------------------------------------------------------------------------------
else

DEPENDS := $(OFILES:.o=.d)

#---------------------------------------------------------------------------------
# main targets
#---------------------------------------------------------------------------------
$(OUTPUT) : $(OFILES)

#---------------------------------------------------------------------------------
%.bin.o : %.bin
#---------------------------------------------------------------------------------
	@echo $(notdir $<)
	@$(bin2o)


-include $(DEPENDS)

#---------------------------------------------------------------------------------------
endif
#---------------------------------------------------------------------------------------
|
__label__pos
| 0.759349 |
Basic usage of GestureDetector
Android provides a set of APIs for gesture detection, including listeners for gesture events, APIs for gesture recognition, and so on.
OnGestureListener
Gesture detection mainly covers the following event types: press (Down), fling (Fling), long press (LongPress), scroll (Scroll), press feedback (ShowPress), and single-tap release (SingleTapUp).
//Triggered when a finger is pressed down
override fun onDown(e: MotionEvent?): Boolean {
    Log.i(TAG, "onDown triggered when a finger is pressed down x " + e?.x + " y " + e?.y)
    return false
}
//Triggered when the finger is pressed on the screen but has not yet moved or been released
override fun onShowPress(e: MotionEvent?) {
    Log.i(TAG, "onShowPress triggered when the finger is pressed but has not moved or been released x " + e?.x + " y " + e?.y)
}
//Triggered when the finger taps the screen lightly
override fun onSingleTapUp(e: MotionEvent?): Boolean {
    Log.i(TAG, "onSingleTapUp triggered when the finger taps the screen lightly x " + e?.x + " y " + e?.y)
    return false
}
//Triggered while the finger scrolls on the screen
override fun onScroll(
    e1: MotionEvent?,
    e2: MotionEvent?,
    distanceX: Float,
    distanceY: Float
): Boolean {
    Log.i(TAG, "onScroll triggered while the finger scrolls on the screen")
    return false
}
//Triggered when the finger long-presses the screen
override fun onLongPress(e: MotionEvent?) {
    Log.i(TAG, "onLongPress triggered when the finger long-presses the screen x " + e?.x + " y " + e?.y)
}
//Triggered when the finger flings across the touch screen; velocityX is the horizontal velocity, velocityY the vertical velocity
override fun onFling(
    e1: MotionEvent?,
    e2: MotionEvent?,
    velocityX: Float,
    velocityY: Float
): Boolean {
    Log.i(TAG, "onFling triggered when the finger flings across the screen; velocityX horizontal, velocityY vertical")
    return false
}
OnDoubleTapListener
Double-tap handling has three callback types: double tap (DoubleTap), single tap confirmed (SingleTapConfirmed), and double-tap event (DoubleTapEvent).
//Single-tap event
override fun onSingleTapConfirmed(e: MotionEvent?): Boolean {
    Log.i(TAG, "onSingleTapConfirmed single-tap event x " + e?.x + " y " + e?.y)
    return false
}
//Double-tap event
override fun onDoubleTap(e: MotionEvent?): Boolean {
    Log.i(TAG, "onDoubleTap double-tap event x " + e?.x + " y " + e?.y)
    return false
}
//Actions occurring during the double-tap interval
override fun onDoubleTapEvent(e: MotionEvent?): Boolean {
    Log.i(TAG, "onDoubleTapEvent actions during the double-tap interval x " + e?.x + " y " + e?.y)
    return false
}
OnContextClickListener
Used to detect presses of buttons on external devices, such as the button on a Bluetooth stylus.
References

Advanced Android Custom Views — Gesture Detection (GestureDetector)
|
__label__pos
| 0.786681 |
google pie chart colors
The following values are Returns the screen x-coordinate of position relative to the chart's container. This example shows two sets of x and two sets However, when Default: '#666' backgroundColor.strokeWidth : The border width, in pixels. chart: The chart margins include the axis labels and the legend If you want to add a generic label to describe a whole Keep colors consistent throughout your article/report. Enter any data, customize the chart's colors, fonts and other details, then download it or easily share it with a shortened url | Meta-Chart.com ! Handling Events, and for the x-axis and for the t-axis. Gradient fills are fades from a one color to another color. the chco parameter. margin on the right side is set to the width of the chart legend, and is 3:|Jan|Feb|Mar An array of strings, where each element is an HTML color string, for example: colors:['red','#004411']. Extended description. If you have 4 elements A, B, C, and D, where A is green, B is red, C is blue, and D is yellow, then you will have to check your data before you draw the chart to see what elements are present and what order they are in. A simple number is a value in pixels; a number followed by % is a Below, we assign progressively larger offsets to slices 4 10, 35, and 75. To place these labels in specific locations along the axis, use the chxp parameter. and one x-axis (x). use default values for a slice, specify an empty object (i.e., {}). the chxr parameter. If true, displays a three-dimensional chart. On the left pie chart, you can see that there are four main hues used and four tints of each hue. If you do to the lower and upper values of your data format range, respectively. legend. chf= chs=220x100. chs=220x100, chl=May|June|July|August| When an object is used, the following properties can Example: chartArea:{left:20,top:0,width:'50%',height:'75%'}. 
RIFT J. Head, Ed.
Internet-Draft T. Przygienda
Intended status: Standards Track W. Lin
Expires: 8 September 2022 Juniper Networks
7 March 2022
RIFT Auto-EVPN
draft-ietf-rift-auto-evpn-02
Abstract
This document specifies procedures that allow an EVPN overlay to be
fully and automatically provisioned when using RIFT as underlay by
leveraging RIFT's no-touch ZTP architecture.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on 8 September 2022.
Copyright Notice
Copyright (c) 2022 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents (https://trustee.ietf.org/
license-info) in effect on the date of publication of this document.
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document. Code Components
extracted from this document must include Revised BSD License text as
described in Section 4.e of the Trust Legal Provisions and are
provided without warranty as described in the Revised BSD License.
Head, et al. Expires 8 September 2022 [Page 1]
Internet-Draft RIFT Auto-EVPN March 2022
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1. Requirements Language . . . . . . . . . . . . . . . . . . 3
2. Design Considerations . . . . . . . . . . . . . . . . . . . . 3
3. System ID . . . . . . . . . . . . . . . . . . . . . . . . . . 4
4. Fabric ID . . . . . . . . . . . . . . . . . . . . . . . . . . 4
5. Auto-EVPN Device Roles . . . . . . . . . . . . . . . . . . . 5
5.1. All Participating Nodes . . . . . . . . . . . . . . . . . 5
5.2. ToF Nodes as Route Reflectors . . . . . . . . . . . . . . 5
5.2.1. Data Center Interconnect Gateway Functions . . . . . 6
5.3. Leaf Nodes . . . . . . . . . . . . . . . . . . . . . . . 6
6. Auto-EVPN Variable Derivation . . . . . . . . . . . . . . . . 8
6.1. Auto-EVPN Version . . . . . . . . . . . . . . . . . . . . 8
6.2. MAC-VRF ID . . . . . . . . . . . . . . . . . . . . . . . 8
6.3. Loopback Address . . . . . . . . . . . . . . . . . . . . 8
6.3.1. Leaf Nodes as Gateways . . . . . . . . . . . . . . . 9
6.3.2. ToF Nodes as Route Reflectors . . . . . . . . . . . . 9
6.3.2.1. Single Plane Route Reflector Election
Procedures . . . . . . . . . . . . . . . . . . . . 9
6.3.2.2. Multiplane Route Reflector Election Procedures . 11
6.4. Autonomous System Number . . . . . . . . . . . . . . . . 11
6.5. Router ID . . . . . . . . . . . . . . . . . . . . . . . . 11
6.6. Cluster ID . . . . . . . . . . . . . . . . . . . . . . . 11
6.7. Route Target . . . . . . . . . . . . . . . . . . . . . . 12
6.8. Route Distinguisher . . . . . . . . . . . . . . . . . . . 12
6.9. EVPN MAC-VRF Services . . . . . . . . . . . . . . . . . . 12
6.9.1. Untagged Traffic in Multiple Fabrics . . . . . . . . 13
6.9.1.1. VLAN . . . . . . . . . . . . . . . . . . . . . . 13
6.9.1.2. VNI . . . . . . . . . . . . . . . . . . . . . . . 13
6.9.1.3. MAC Address . . . . . . . . . . . . . . . . . . . 13
6.9.1.4. IPv6 IRB Gateway Address . . . . . . . . . . . . 13
6.9.1.5. IPv4 IRB Gateway Address . . . . . . . . . . . . 13
6.9.2. Tagged Traffic in Multiple Fabrics . . . . . . . . . 14
6.9.2.1. VLAN . . . . . . . . . . . . . . . . . . . . . . 14
6.9.2.2. VNI . . . . . . . . . . . . . . . . . . . . . . . 14
6.9.2.3. MAC Address . . . . . . . . . . . . . . . . . . . 14
6.9.2.4. IPv6 IRB Gateway Address . . . . . . . . . . . . 14
6.9.2.5. IPv4 IRB Gateway Address . . . . . . . . . . . . 15
6.9.3. Tagged Traffic in a Single Fabric . . . . . . . . . . 15
6.9.3.1. VLAN . . . . . . . . . . . . . . . . . . . . . . 15
6.9.3.2. VNI . . . . . . . . . . . . . . . . . . . . . . . 15
6.9.3.3. MAC Address . . . . . . . . . . . . . . . . . . . 15
6.9.3.4. IPv6 IRB Gateway Address . . . . . . . . . . . . 16
6.9.3.5. IPv4 IRB Gateway Address . . . . . . . . . . . . 16
6.9.4. Traffic Routed to External Destinations . . . . . . . 16
6.9.4.1. Route Distinguisher . . . . . . . . . . . . . . . 16
6.9.4.2. Route Target . . . . . . . . . . . . . . . . . . 16
7. Operational Considerations . . . . . . . . . . . . . . . . . 17
7.1. RIFT Underlay and Auto-EVPN Overlay . . . . . . . . . . . 17
7.2. Auto-EVPN Analytics . . . . . . . . . . . . . . . . . . . 20
7.2.1. Auto-EVPN Global Analytics Key Type . . . . . . . . . 21
7.2.2. Auto-EVPN MAC-VRF Key Type . . . . . . . . . . . . . 22
8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 23
9. Security Considerations . . . . . . . . . . . . . . . . . . . 23
10. References . . . . . . . . . . . . . . . . . . . . . . . . . 23
10.1. Normative References . . . . . . . . . . . . . . . . . . 23
Appendix A. Thrift Models . . . . . . . . . . . . . . . . . . . 24
A.1. common.thrift . . . . . . . . . . . . . . . . . . . . . . 24
A.2. encoding.thrift . . . . . . . . . . . . . . . . . . . . . 24
A.3. common_evpn.thrift . . . . . . . . . . . . . . . . . . . 25
A.4. auto_evpn_kv.thrift . . . . . . . . . . . . . . . . . . . 28
Appendix B. Auto-EVPN Variable Derivation . . . . . . . . . . . 30
B.1. Variable Derivation Functions . . . . . . . . . . . . . . 30
B.2. Variable Derivation Results . . . . . . . . . . . . . . . 42
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 87
1. Introduction
RIFT is a protocol that focuses heavily on operational simplicity.
[RIFT] natively supports Zero Touch Provisioning (ZTP) functionality
that allows each node in an underlay network to automatically derive
its place in the topology and configure itself accordingly when
properly cabled. RIFT can also disseminate Key-Value information
contained in Key-Value Topology Information Elements (KV-TIEs)
[RIFT-KV]. These KV-TIEs can contain any information and therefore
be used for any purpose. Leveraging RIFT to provision EVPN overlays
without any need for configuration and leveraging KV capabilities to
easily validate correct operation of such overlay without a single
point of failure would provide significant benefit to operators in
terms of simplicity and robustness of such a solution.
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
2. Design Considerations
   EVPN supports various service models; this document defines a method
for the VLAN-Aware service model defined in [RFC7432]. Other service
models may be considered in future revisions of this document.
Each model has its own set of requirements for deployment. For
example, a functional BGP overlay is necessary to exchange EVPN NLRI
regardless of the service model. Furthermore, the requirements are
made up of individual variables, such as each node's loopback address
and AS number for the BGP session. Some of these variables may be
coordinated across each node in a network, but are ultimately locally
significant (e.g. route distinguishers). Similarly, calculation of
   some variables will be local only to each device.  RIFT currently
   contains enough topology information in each node to calculate all
   the necessary variables automatically.
Once the EVPN overlay is configured and becomes operational, RIFT
Key-Value TIEs can be used to distribute state information to allow
for validation of basic operational correctness without the need for
further tooling.
3. System ID
The 64-bit RIFT System ID that uniquely identifies a node as defined
in RIFT [RIFT].
4. Fabric ID
   RIFT operates on variants of a Clos substrate, commonly called an IP
   Fabric.  Since EVPN VLANs can be either contained within one
fabric or span them, Auto-EVPN introduces the concept of a Fabric ID
into RIFT.
   This section describes an optional extension to the LIE packet
   schema in the form of a 16-bit Fabric ID that identifies a node's
   membership
within a particular fabric. Auto-EVPN capable nodes MUST support
   this extension but MAY choose not to advertise it when not
   participating in Auto-EVPN.  A non-present Fabric ID and the value 0
   are both reserved as ANY_FABRIC and MUST NOT be used for any other
   purpose.
Fabric ID MUST be considered in existing adjacency FSM rules so nodes
   that support Auto-EVPN can interoperate with nodes that do not.  The
   LIE validation is extended with the following clause; if it is not
   met, miscabling should be declared:
(if fabric_id is not advertised by either node OR
if fabric_id is identical on both nodes)
AND
(if auto_evpn_version is not advertised by either node OR
if auto_evpn_version is identical on both nodes)
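   As a non-normative illustration, the clause above can be modeled in
   Python.  Here `None` stands for a field that is not advertised; the
   function name and the treatment of a field absent on either side are
   this sketch's assumptions, not part of the specification:

```python
def declare_miscabling(my_fid, peer_fid, my_ver, peer_ver):
    """Return True when the extended LIE validation clause is NOT met.

    None models a fabric_id / auto_evpn_version that is not advertised
    (an interpretation assumed by this sketch).
    """
    fid_ok = my_fid is None or peer_fid is None or my_fid == peer_fid
    ver_ok = my_ver is None or peer_ver is None or my_ver == peer_ver
    return not (fid_ok and ver_ok)

# Two nodes advertising different Fabric IDs are miscabled, while a
# node that does not advertise the field can still interoperate.
```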
The appendix (Appendix A) details necessary changes to the RIFT LIE
and Node-TIE thrift schema.
5. Auto-EVPN Device Roles
Auto-EVPN requires that each node understand its given role within
   the scope of the EVPN implementation so that each node derives the
necessary variables and provides the necessary overlay configuration.
For example, a leaf node performing VXLAN gateway functions does not
need to derive its own Cluster ID or learn one from the route
reflector that it peers with.
5.1. All Participating Nodes
Not all nodes have to participate in Auto-EVPN, however if a node
does assume an Auto-EVPN role, it MUST derive the following
variables:
*IPv6 Loopback Address*
Unique IPv6 loopback address used in BGP sessions.
*Router ID*
The BGP Router ID.
*Autonomous System Number*
The ASN for IBGP sessions.
*Cluster ID*
The Cluster ID for Top-of-Fabric IBGP route reflection.
5.2. ToF Nodes as Route Reflectors
This section defines an Auto-EVPN role whereby some Top-of-Fabric
nodes act as EVPN route reflectors. It is expected that route
reflectors would establish IBGP sessions with leaf nodes in the same
   fabric.  The typical route reflector requirements do not change;
   however, determining which specific values to use requires further
consideration.
ToF nodes performing route reflector functionality MUST derive the
following variables:
*IPv6 RR Loopback Address*
       The source address for IBGP sessions with leaf nodes in case the
       ToF won the election as one of the route reflectors in the fabric.
*IPv6 RR Acceptable Prefix Range*
       Range of addresses acceptable to the route reflector to form an
       IBGP session.  This range covers ALL possible IPv6 Loopback
       Addresses derived by other Auto-EVPN nodes in the current
       fabric and other Auto-EVPN RRs' addresses.
*Cluster ID*
The Cluster ID for Top-of-Fabric IBGP route reflection.
5.2.1. Data Center Interconnect Gateway Functions
Implementations that require connectivity beyond the EVPN/VXLAN
boundary can leverage Data Center Interconnect Gateway functionality.
This requires additional considerations to ensure the appropriate
reachability is present.
   First, new VRFs and accompanying variable derivation are required;
   this is described below.

   Second, additional route reflector election considerations are
   needed to ensure that route reflectors with DCI gateway
   functionality are preferred.  This is described later in the
   document in Section 6.3.2.
If DCI functionality is desired, the Top-of-Fabric nodes MUST be
   capable of routing toward the correct leaf node when they receive
   traffic from an external destination.  Therefore, they MUST be capable
of deriving the following types of variables:
*Route Distinguisher*
       The route distinguisher corresponding to an IP-VRF's IP prefix
       routes that MUST uniquely identify each node.
*Route Target*
The route target that corresponds to an IP-VRF's IP prefix
routes.
*VNI*
The VNI that corresponds to the Type-5 IP prefix routes within
an IP-VRF.
5.3. Leaf Nodes
Leaf nodes derive their role from realizing they are at the bottom of
   the fabric, i.e. not having any southbound adjacencies.
   Alternatively, a node can assume the leaf role if it has only
   southbound adjacencies to nodes with explicit LEAF_LEVEL, to allow
   for scenarios where RIFT
leaves do NOT participate in Auto-EVPN.
Leaf nodes MUST derive the following variables:
*IPv6 RR Loopback Addresses*
Addresses of the RRs present in the fabric. Those addresses
       are used to build BGP sessions to the RRs.
*EVIs*
Leaf node derives all the necessary variables to instantiate
EVIs with layer-2 and optionally layer-3 functionality.
If a leaf node is required to perform layer-2 VXLAN gateway
functions, it MUST be capable of deriving the following types of
variables:
*Route Distinguisher*
The route distinguisher corresponding to a MAC-VRF that
uniquely identifies each node.
*Route Target*
The route target that corresponds to a MAC-VRF.
*MAC VRF Name*
This is an optional variable to provide a common MAC VRF name
across all leaves.
*Set of VLANs*
       Those are VLANs provisioned either within the fabric or
       allowed to stretch across fabrics.
For each VLAN derived in an EVI the following variables MUST be
derived:
*VLAN*
The VLAN ID.
*Name*
This is an optional variable to provide a common VLAN name
across all leaves.
*VNI*
The VNI that corresponds to the VLAN ID. This will contribute
to the EVPN Type-2 route.
*IRB*
Optional variables of the IRB for the VLAN if the leaf performs
layer-3 gateway function.
6. Auto-EVPN Variable Derivation
As previously mentioned, not all nodes are required to derive all
variables in a given network (e.g. a transit spine node may not need
   to derive any variables or participate in Auto-EVPN).  Additionally,
   all variables are derived from RIFT's FSM or ZTP mechanism, so no
   additional flooding besides RIFT flooding is necessary for the
   functionality.
It is also important to mention that all variable derivation is in
some way based on combinations of System ID, MAC-VRF ID, Fabric ID,
EVI and VLAN and MUST comply precisely with calculation methods
specified in the Auto-EVPN Variable Derivation section to allow
interoperability between different implementations. All foundational
code elements are also mentioned there.
6.1. Auto-EVPN Version
This section describes extensions to both the RIFT LIE packet and
Node-TIE schemas in the form of a 16-bit value that identifies the
Auto-EVPN Version. Auto-EVPN capable nodes MUST support this
extension, but MAY choose not to advertise it in LIEs and Node-TIEs
when Auto-EVPN is not being utilized.
This section also describes an extension to the Node Capabilities
schema indicating that a node supports Auto-EVPN.
The appendix (Appendix A) details necessary changes to the RIFT LIE,
Node-TIE, and Node Capabilities thrift schema.
6.2. MAC-VRF ID
   This section describes a variable MAC-VRF ID that uniquely
   identifies an EVPN instance (EVI) and is used in variable derivation
   procedures.  Each EVI MUST be associated with a unique MAC-VRF ID;
   this document does not specify a method for making that association
   or ensuring that the IDs are coordinated properly across fabric(s).
6.3. Loopback Address
First and foremost, RIFT does not advertise anything more specific
than the fabric default route in the southbound direction by default.
However, Auto-EVPN nodes MUST advertise specific loopback addresses
   southbound to all other Auto-EVPN nodes so as to establish MP-BGP
reachability correctly in all scenarios.
Auto-EVPN nodes MUST derive a ULA-scoped IPv6 loopback address to be
   used as both the IBGP source address and the VTEP source when
   VXLAN gateways are required.  Calculation is done using the 6 bytes
of reserved ULA space, the 2-byte Fabric ID, and the node's 8-byte
System ID. Derivation of the System ID varies slightly depending
upon the node's location/role in the fabric and will be described in
subsequent sections.
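   The concatenation described above can be sketched as follows.  This
   is a non-normative illustration: the actual ULA prefix bytes and
   layout are defined by the appendix algorithms (e.g.
   auto_evpn_fidsidv6loopback), so the `ULA_PREFIX` value below is only
   a placeholder:

```python
import ipaddress

# Placeholder for the 6 bytes of reserved ULA space; the normative
# value comes from the appendix derivation functions.
ULA_PREFIX = bytes.fromhex("fd0000000000")

def auto_evpn_v6_loopback(fabric_id: int,
                          system_id: int) -> ipaddress.IPv6Address:
    """6-byte ULA space + 2-byte Fabric ID + 8-byte System ID = 16 bytes."""
    packed = (ULA_PREFIX
              + fabric_id.to_bytes(2, "big")
              + system_id.to_bytes(8, "big"))
    return ipaddress.IPv6Address(packed)
```

   Because the System ID is unique per node and the Fabric ID scopes
   the fabric, every Auto-EVPN node obtains a collision-free IPv6
   loopback without any coordination.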
6.3.1. Leaf Nodes as Gateways
Calculation is done using the 6-bytes of reserved ULA space, the
2-byte Fabric ID, and the node's 8-byte System ID.
In order for leaf nodes to derive IPv6 loopback addresses, algorithms
shown in both auto_evpn_fidsidv6loopback (Figure 28) and
auto_evpn_v6prefixfidsid2loopback (Figure 13) are required.
IPv4 addresses MAY be supported, but it should be noted that they
have a higher likelihood of collision. The appendix contains the
required auto_evpn_fidsid2v4loopback (Figure 27) algorithm to support
IPv4 loopback derivation.
6.3.2. ToF Nodes as Route Reflectors
ToF nodes acting as route reflectors MUST derive their loopback
address according to the specific section describing the algorithm.
Calculation is done using the 6-bytes of reserved ULA space, the
2-byte Fabric ID, and the 8-byte System ID of each elected route
reflector.
In order for the ToF nodes to derive IPv6 loopbacks, the algorithms
shown in both auto_evpn_fidsidv6loopback (Figure 28) and
auto_evpn_fidrrpref2rrloopback (Figure 14) are required.
   In order for the ToF to derive the necessary prefix range to
peering requests from any leaf, the algorithm shown in
"auto_evpn_fid2fabric_prefixes" (Figure 12) is required.
A topology MUST elect at least 1 Top-of-Fabric node as an IBGP route
reflector, but SHOULD elect 3. The election method varies depending
upon whether the fabric is comprised of a single plane or multiple
planes or if DCI gateway functionality is desired.
6.3.2.1. Single Plane Route Reflector Election Procedures
Each ToF performs the election independently based on system IDs of
other ToFs in the fabric obtained via southbound reflection. The
route reflector election procedures are defined as follows:
1. ToF node with the highest System ID.
2. ToF node with the lowest System ID.
3. ToF node with the 2nd highest System ID.
4. etc.
This ordering is necessary to prevent a single node with either the
highest or lowest System ID from triggering changes to route
reflector loopback addresses as it would result in all BGP sessions
dropping.
For example, given two nodes, ToF01 and ToF02, with System IDs
002c6af5a281c000 and 002c6bf5788fc000 respectively, ToF02 would be
elected because it has the highest System ID of the ToFs
(002c6bf5788fc000). If a ToF determines that it is elected as route
reflector, it uses the knowledge of its position in the list to
derive route reflector IPv6 loopback address.
The algorithm shown in "auto_evpn_sids2rrs" (Figure 10) is required
to accomplish this.
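The alternating ordering above can be sketched as follows. This is a
non-normative illustration of the highest/lowest/2nd-highest pattern;
the normative procedure is auto_evpn_sids2rrs (Figure 10), and the
continuation implied by "etc." is read here as alternating between the
remaining extremes:

```rust
/// Non-normative sketch of the election ordering from Section
/// 6.3.2.1: highest System ID first, then lowest, then 2nd highest,
/// and so on, alternating between the remaining extremes.
fn rr_election_order(mut sids: Vec<u64>) -> Vec<u64> {
    sids.sort_unstable();
    let mut order = Vec::with_capacity(sids.len());
    let (mut lo, mut hi) = (0usize, sids.len());
    while lo < hi {
        hi -= 1;
        order.push(sids[hi]); // next-highest remaining System ID
        if lo < hi {
            order.push(sids[lo]); // next-lowest remaining System ID
            lo += 1;
        }
    }
    order
}
```

With the two example System IDs from the text, the first elected route
reflector is 002c6bf5788fc000.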
6.3.2.1.1. DCI-GW Variations
It is beneficial for ToF-RRs requiring DCI-GW functions to be
preferred over ToF-RRs that do not. As such, the
"default_acting_auto_evpn_dci_when_tof" flag described in
Appendix A.1 MUST be factored into election procedures mentioned in
the previous section. Essentially, ToFs flagged as requiring DCI-GW
functions will be sorted separately from those that do not. That is
to say, ToFs requiring DCI-GW functions will always be preferred as
RRs.
For example, if a fabric contains 4 ToF nodes where 2 require DCI-GW
functions and the other 2 do not, the election will take place as
follows:
1. ToF node (DCI) with the highest System ID.
2. ToF node (DCI) with the lowest System ID.
3. ToF node (non-DCI) with the 2nd highest System ID.
4. etc.
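The DCI-preferred variation can be sketched as follows. This is a
non-normative illustration: ToFs flagged as DCI-GW form their own
group ahead of the others, and each group is ordered with the same
alternating highest/lowest pattern; how position numbering continues
across the group boundary is one plausible reading of the list above.

```rust
/// Alternating highest/lowest ordering, as in Section 6.3.2.1.
fn alternating_order(mut sids: Vec<u64>) -> Vec<u64> {
    sids.sort_unstable();
    let mut order = Vec::with_capacity(sids.len());
    let (mut lo, mut hi) = (0usize, sids.len());
    while lo < hi {
        hi -= 1;
        order.push(sids[hi]);
        if lo < hi {
            order.push(sids[lo]);
            lo += 1;
        }
    }
    order
}

/// Non-normative sketch of DCI-preferred election (Section
/// 6.3.2.1.1): each ToF is a (System ID, DCI flag) pair; DCI-GW
/// ToFs are always ordered before non-DCI ToFs.
fn dci_preferred_order(tofs: Vec<(u64, bool)>) -> Vec<u64> {
    // Partition on the DCI flag, then order each group independently.
    let (dci, other): (Vec<_>, Vec<_>) =
        tofs.into_iter().partition(|&(_, is_dci)| is_dci);
    let mut out = alternating_order(dci.into_iter().map(|(s, _)| s).collect());
    out.extend(alternating_order(other.into_iter().map(|(s, _)| s).collect()));
    out
}
```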
6.3.2.2. Multiplane Route Reflector Election Procedures
As mentioned in the main RIFT [RIFT] specification, when an
implementation uses multiplane fabrics, inter-ToF rings are
recommended in order to facilitate northbound flooding between ToFs
in different planes.
If a multiplane implementation is using Auto-EVPN, those inter-ToF
rings are REQUIRED to ensure that DCI/RR election works as intended.
Each ToF performs the election independently based on System IDs of
other ToFs in the other planes obtained from northbound flooding
across the inter-ToF rings.  The highest System ID from each plane
will be considered the Plane ID, which is then factored into the
election as follows:
1. The ToF node with the highest Plane ID, DCI bit, System ID
2. The ToF node with the lowest Plane ID, DCI bit, System ID
3. The ToF node with the 2nd highest Plane ID, DCI bit, System ID
4. etc.
This algorithm allows DCI/RRs to be split across planes for improved
redundancy.
6.4. Autonomous System Number
Nodes in each fabric MUST derive a private autonomous system number
based on its Fabric ID so that it is unique across the fabric.
The algorithm shown in auto_evpn_fid2private_AS (Figure 29) is
required to derive the private ASN.
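As a purely hypothetical illustration of such a derivation, a 2-byte
Fabric ID could be offset into the 4-byte private ASN range
(4200000000-4294967294); the base constant below is an assumption, and
the normative mapping is whatever auto_evpn_fid2private_AS (Figure 29)
specifies:

```rust
/// Hypothetical illustration only: offset a 2-byte Fabric ID into
/// the 4-byte private ASN range.  The normative mapping is
/// auto_evpn_fid2private_AS (Figure 29), which may differ.
const PRIVATE_ASN_BASE: u32 = 4_200_000_000;

fn fid_to_private_asn(fid: u16) -> u32 {
    // A 16-bit Fabric ID can never overflow the private range here.
    PRIVATE_ASN_BASE + fid as u32
}
```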
6.5. Router ID
Nodes MUST derive a Router ID that is based on both its System ID and
Fabric ID so that it is unique to both.
The algorithm shown in auto_evpn_sidfid2bgpid (Figure 15) is required
to derive the BGP Router ID.
6.6. Cluster ID
Route reflector nodes in each fabric MUST derive a cluster ID that is
based on its Fabric ID so that it is unique across the fabric.
The algorithm shown in auto_evpn_fid2clusterid (Figure 30) is
required to derive the BGP Cluster ID.
6.7. Route Target
Nodes hosting EVPN EVIs MUST derive a route target extended community
based on the MAC-VRF ID for each EVI so that it is unique across the
network. Route targets MUST be of type 0 as per RFC4360.
For example, if given a MAC-VRF ID of 1, the derived route target
would be "target:1"
The algorithm shown in auto_evpn_evi2rt (Figure 16) is required to
derive the Route Target community.
6.8. Route Distinguisher
Nodes hosting EVPN EVIs MUST derive a type-0 route distinguisher
based on its System ID and Fabric ID so that it is unique per node
within a fabric.
The algorithm shown in auto_evpn_sidfid2rd (Figure 22) is required to
derive the Route Distinguisher.
6.9. EVPN MAC-VRF Services
Applications utilizing Auto-EVPN overlay services may require a
variety of layer-2 and/or layer-3 traffic
considerations.  Variables supporting these services are also derived
based on some combination of MAC-VRF ID, Fabric ID, and other
constant values. Integrated Routing and Bridging (IRB) gateway
address derivation also leverages a set of constant RANDOMSEEDS
(Figure 9) values that MUST be used to provide additional entropy.
In order to ensure that VLAN IDs don't collide, a single deployment
SHOULD NOT exceed 6 fabrics with 7 EVIs where each EVI terminates 30
VLANs. The algorithms shown in auto_evpn_fidevivlansvlans2desc
(Figure 20) and auto_evpn_vlan_description_table (Figure 19) are
required to derive VLANs accordingly. An implementation MAY exceed
this, but MUST indicate methods to ensure collision-free derivation
and describe which VLANs are stretched across fabrics.
Lastly, Table 3 shows example derivation results for the previously
mentioned scaling figures.
6.9.1. Untagged Traffic in Multiple Fabrics
This section defines methods to derive unique VLAN, VNI, MAC, and
gateway address values for deployments where untagged traffic is
stretched across multiple fabrics.
6.9.1.1. VLAN
Untagged traffic stretched across multiple fabrics MUST derive VLAN
tags based on MAC-VRF ID in conjunction with a constant value.
6.9.1.2. VNI
Untagged traffic stretched across multiple fabrics MUST derive VNIs
based on MAC-VRF ID in conjunction with a constant value. These VNIs
MUST correspond to EVPN Type-2 routes.
The algorithm shown in auto_evpn_fidevivid2vni (Figure 18) is
required to derive VNIs for Type-2 EVPN routes.
6.9.1.3. MAC Address
The MAC address MUST be a unicast address and also MUST be identical
for any IRB gateways that belong to an individual bridge-domain
across fabrics. The last 5-bytes MUST be a hash of the MAC-VRF ID
and a constant value that is calculated using the previously
mentioned random seed values.
The algorithm shown in auto_evpn_fidevividsid2mac (Figure 26) is
required to derive MAC addresses.
6.9.1.4. IPv6 IRB Gateway Address
The derived IPv6 gateway address MUST be from a ULA-scoped range that
will account for the first 6-bytes. The next 5-bytes MUST be the
last bytes of the derived MAC address. Finally, the remaining
7-bytes MUST be ::0001.
The algorithm shown in auto_evpn_fidevividsid2v6subnet (Figure 25) is
required to derive the IPv6 gateway address.
6.9.1.5. IPv4 IRB Gateway Address
The derived IPv4 gateway address MUST be from a RFC1918 range, which
accounts for the first octet.  The next octet MUST be a hash of the MAC-
VRF ID and a constant value of 1 that is calculated using the
previously mentioned random seed values. Finally, the remaining 2
octets MUST be 0 and 1 respectively.
The algorithm shown in auto_evpn_v4prefixfidevividsid2v4subnet
(Figure 23) is required to derive the IPv4 gateway address. It
should be noted that there is a higher likelihood of address
collisions when deriving IPv4 addresses.
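The resulting address shape can be sketched as follows. This is a
non-normative illustration assuming the RFC1918 first octet is 10 (as
the AUTO_EVPN_V4IRBPREF constant in the appendix suggests); the second
octet is supplied by the normative hash in
auto_evpn_v4prefixfidevividsid2v4subnet (Figure 23):

```rust
/// Non-normative sketch of the IPv4 IRB gateway shape from Section
/// 6.9.1.5: first octet 10 (assumed RFC1918 prefix), second octet
/// from the normative hash, last two octets fixed to 0 and 1.
fn v4_irb_gateway(hashed_second_octet: u8) -> [u8; 4] {
    [10, hashed_second_octet, 0, 1]
}
```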
6.9.2. Tagged Traffic in Multiple Fabrics
This section defines methods to derive unique VLAN, VNI, MAC, and
gateway address values for deployments where tagged traffic is
stretched across multiple fabrics.
6.9.2.1. VLAN
Tagged traffic stretched across multiple fabrics MUST derive VLAN
tags based on MAC-VRF ID in conjunction with a constant value.
6.9.2.2. VNI
Tagged traffic stretched across multiple fabrics MUST derive VNIs
based on MAC-VRF ID in conjunction with a constant value. These VNIs
MUST correspond to EVPN Type-2 routes.
The algorithm shown in auto_evpn_fidevivid2vni (Figure 18) is
required to derive VNIs for Type-2 EVPN routes.
6.9.2.3. MAC Address
The MAC address MUST be a unicast address and also MUST be identical
for any IRB gateways that belong to an individual bridge-domain
across fabrics. The last 5-bytes MUST be a hash of the MAC-VRF ID
and a constant value that is calculated using the previously
mentioned random seed values.
The algorithm shown in auto_evpn_fidevividsid2mac (Figure 26) is
required to derive MAC addresses.
6.9.2.4. IPv6 IRB Gateway Address
The derived IPv6 gateway address MUST be from a ULA-scoped range that
will account for the first 6-bytes. The next 5-bytes MUST be the
last bytes of the derived MAC address. Finally, the remaining
7-bytes MUST be ::0001.
The algorithm shown in auto_evpn_fidevividsid2v6subnet (Figure 25) is
required to derive the IPv6 gateway address.
6.9.2.5. IPv4 IRB Gateway Address
The derived IPv4 gateway address MUST be from a RFC1918 range, which
accounts for the first octet.  The next octet MUST be a hash of the MAC-
VRF ID and a constant value of 16 that is calculated using the
previously mentioned random seed values. Finally, the remaining 2
octets MUST be 0 and 1 respectively.
The algorithm shown in auto_evpn_v4prefixfidevividsid2v4subnet
(Figure 23) is required to derive the IPv4 gateway address. It
should be noted that there is a higher likelihood of address
collisions when deriving IPv4 addresses.
6.9.3. Tagged Traffic in a Single Fabric
This section defines methods to derive unique VLAN, VNI, MAC, and
gateway address values for deployments where tagged traffic is
contained within a single fabric.
6.9.3.1. VLAN
Tagged traffic contained to a single fabric MUST derive VLAN tags
based on MAC-VRF ID and Fabric ID in conjunction with a constant
value.
6.9.3.2. VNI
Tagged traffic contained to a single fabric MUST derive VNIs based on
MAC-VRF ID and Fabric ID in conjunction with a constant value. These
VNIs MUST correspond to EVPN Type-2 routes.
The algorithm shown in auto_evpn_fidevivid2vni (Figure 18) is
required to derive VNIs for Type-2 EVPN routes.
6.9.3.3. MAC Address
The MAC address MUST be a unicast address and also MUST be identical
for any IRB gateways that belong to an individual bridge-domain
across fabrics. The last 5-bytes MUST be a hash of the MAC-VRF ID
and a constant value that is calculated using the previously
mentioned random seed values.
The algorithm shown in auto_evpn_fidevividsid2mac (Figure 26) is
required to derive MAC addresses.
6.9.3.4. IPv6 IRB Gateway Address
The derived IPv6 gateway address MUST be from a ULA-scoped range,
which accounts for the first 6-bytes. The next 5-bytes MUST be the
last bytes of the derived MAC address. Finally, the remaining
7-bytes MUST be ::0001.
The algorithm shown in auto_evpn_fidevividsid2v6subnet (Figure 25) is
required to derive the IPv6 gateway address.
6.9.3.5. IPv4 IRB Gateway Address
The derived IPv4 gateway address MUST be from a RFC1918 range, which
accounts for the first octet.  The next octet MUST be a hash of the MAC-
VRF ID and a constant value of 17 that is calculated using the
previously mentioned random seed values. Finally, the remaining 2
octets MUST be 0 and 1 respectively.
The algorithm shown in auto_evpn_v4prefixfidevividsid2v4subnet
(Figure 23) is required to derive the IPv4 gateway address. It
should be noted that there is a higher likelihood of address
collisions when deriving IPv4 addresses.
6.9.4. Traffic Routed to External Destinations
6.9.4.1. Route Distinguisher
Nodes hosting IP Prefix routes MUST derive a type-0 route
distinguisher based on its System ID and Fabric ID so that it is
unique per IP-VRF and per node.
The algorithm shown in auto_evpn_sidfid2rd (Figure 22) is required to
derive the Route Distinguisher.
6.9.4.2. Route Target
Nodes hosting IP prefix routes MUST derive a route target extended
community based on the MAC-VRF ID for each IP-VRF so that it is
unique across the network. Route targets MUST be of type 0.
The algorithm shown in auto_evpn_evi2rt (Figure 16) is required to
derive the Route Target community.
7. Operational Considerations
To fully realize the benefits of Auto-EVPN, it may help to describe
the high-level methodology. Simply put, RIFT automatically
provisions the underlay and Auto-EVPN provisions the overlay. The
goal of this section is to draw clear lines between general fabric
concepts, RIFT, and Auto-EVPN and how they fit into current network
designs and practices.
This section also describes a set of optional Key-Value TIEs that
leverage the variables that have already been derived to provide
further operational enhancements to the operator.
7.1. RIFT Underlay and Auto-EVPN Overlay
+----------------+ +----------------+
| Superspine-01 | | Superspine-02 |
| Top-of-Fabric | | Top-of-Fabric |
| RR/DCI Gateway | | RR/DCI Gateway |
+-+--+------+--+-+ +-+--+------+--+-+
| | | | | | | |
+---------------------+ | | | | | | |
| | | | | | | +---------------------+
| +-----------)------)--)--------+ | | |
| | | | | +-------+ | |
| | | | | | | |
| | | | +---)--------------)-----------+ |
| | | | | | | |
| | +--+ +------)----+ +--+ | |
| | | | | | | |
| | | +---+ | | | |
| | | | | | | |
+-+------------+-+ +-+------------+-+ +-+------------+-+ +-+------------+-+
| Spine-1-1 | | Spine-1-2 | | Spine-2-1 | | Spine-2-2 |
| Top-of-PoD | | Top-of-PoD | | Top-of-PoD | | Top-of-PoD |
| N/A | | N/A | | N/A | | N/A |
+--+----------+--+ +--+----------+--+ +--+----------+--+ +--+----------+--+
| | | | | | | |
| +----------)---+ | | +----------)---+ |
| | | | | | | |
| +----------+ | | | +----------+ | |
| | | | | | | |
+--+----------+--+ +------+------+--+ +--+----------+--+ +------+------+--+
| Leaf-1-1 | | Leaf-1-2 | | Leaf-2-1 | | Leaf-2-2 |
| Leaf +----+ Leaf | | Leaf | | Leaf |
| Leaf Gateway | | Leaf Gateway | | Leaf Gateway | | Leaf Gateway |
+--+-------------+ +--------------+-+ +----------------+ +--------------+-+
| | |
| ESI | |
| (00:00:00:00:00:00:00:00:11:01) | |
| +----------------------+ |
| | |
+--+----------+--+ +--------------+-+
| Server-1-1 | | Server-2-2 |
+----------------+ +----------------+
+-------------- PoD-1 -------------+ +-------------- PoD-2 -------------+
Figure 1: Auto-EVPN Example Topology
Figure 1 illustrates a typical 5-stage Clos IP fabric. Each node is
labelled in such a way that conveys the following:
1.  The node's placement within the generic IP fabric.
2.  The node's role within the RIFT IP underlay.
3.  The node's role within the Auto-EVPN overlay.
Table 1 should also help further align these concepts.
+==================+===============+====================+
| Fabric Placement | RIFT Role | Auto-EVPN Role |
+==================+===============+====================+
| Superspine | Top-of-Fabric | Route Reflector |
| | | and/or DCI Gateway |
+------------------+---------------+--------------------+
| Spine | Spine or Top- | N/A |
| | of-PoD | |
+------------------+---------------+--------------------+
| Leaf | Leaf | Leaf Gateway |
+------------------+---------------+--------------------+
Table 1: Role Associations
It's also important to remember that Auto-EVPN simply takes existing
EVPN overlay deployment scenarios and simplifies the provisioning.
Figure 2 further illustrates the resulting EVPN overlay topology.
+----------------+ +----------------+
| Superspine-01 | | Superspine-02 |
| RR1 | | RR2 |
| | | |
+-+--+---------+-+ +-+--+---------+-+
| | | | | |
+---------------------+ | | | | |
| | | | | +---------------------+
| +-----------)---------)--------+ | |
| | | | +-------+ |
| | | | | |
| | | +---)--------------------------+ |
| | | | | |
| | +--+ | | |
| | | | | |
| | | +---+ | |
| | | | | |
+-+------------+-+ +-+------------+-+ +-+------------+-+
| Leaf-1-1 | | Leaf-1-2 | | Leaf-2-2 |
| Leaf Gateway | | Leaf Gateway | | Leaf Gateway |
| | | | | |
+--+-------------+ +--------------+-+ +--------------+-+
| | |
| ESI | |
| (00:00:00:00:00:00:00:00:11:01) | |
| +----------------------+ |
| | |
+--+----------+--+ +--------------+-+
| Server-1-1 | | Server-2-2 |
+----------------+ +----------------+
+-------------- PoD-1 -------------+ +-------------- PoD-2 -------------+
Figure 2: Auto-EVPN Overlay Topology
7.2. Auto-EVPN Analytics
Leaf nodes MAY optionally advertise analytics information about the
Auto-EVPN fabric to ToF nodes using RIFT Key-Value TIEs. This may be
advantageous in that overlay validation and troubleshooting
activities can be performed on the ToF nodes.
This section requests suggested values from the RIFT Well-Known Key-
Type Registry and describes their use for Auto-EVPN.
+===================+=======+====================================+
| Name | Value | Description |
+===================+=======+====================================+
| Auto-EVPN | 3 | Analytics describing a MAC-VRF on |
| Analytics MAC-VRF | | a particular node within a fabric. |
+-------------------+-------+------------------------------------+
| Auto-EVPN | 4 | Analytics describing an Auto-EVPN |
| Analytics Global | | node within a fabric. |
+-------------------+-------+------------------------------------+
Table 2: Requested RIFT Key Registry Values
The normative Thrift schema can be found in the appendix
(Appendix A.4).
7.2.1. Auto-EVPN Global Analytics Key Type
This Key Type describes node level information within the context of
the Auto-EVPN fabric. The System ID of the advertising leaf node
MUST be used to differentiate the node among other nodes in the
fabric.
The Auto-EVPN Global Key Type MUST be advertised with the RIFT Fabric
ID encoded into the 3rd and 4th bytes of the Key Identifier.
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Well-Known | Auto-EVPN (Global) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| (Auto-EVPN Role, |
| Established BGP Peer Count, |
| Total BGP Peer Count,) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 3: Auto-EVPN Global Key-Value TIE
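The Key Identifier packing can be sketched as follows, using the
constant values from the Thrift schema in Appendix A.4 (Well-Known key
type = 2, Global Auto-EVPN Telemetry KV = 4); this is a non-normative
illustration of the byte layout:

```rust
/// Non-normative sketch of the 32-bit Key Identifier for the Global
/// Auto-EVPN Key-Value TIE: byte 1 = Well-Known key type, byte 2 =
/// Global Auto-EVPN Telemetry KV, bytes 3-4 = RIFT Fabric ID.
/// Constants taken from the Thrift schema in Appendix A.4.
const KV_TYPE_WELL_KNOWN: u32 = 2;
const GLOBAL_AUTO_EVPN_TELEMETRY_KV: u32 = 4;

fn global_key_identifier(fabric_id: u16) -> u32 {
    (KV_TYPE_WELL_KNOWN << 24)
        | (GLOBAL_AUTO_EVPN_TELEMETRY_KV << 16)
        | fabric_id as u32
}
```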
where:
*Auto-EVPN Role:*
The value indicating the node's Auto-EVPN role within the
fabric.
0: Illegal value, MUST NOT be used.
1: Auto-EVPN Leaf Gateway
2: Auto-EVPN Top-of-Fabric Gateway
*Established BGP Peer Count:*
A 16-bit integer indicating the number of BGP sessions in the
Established state.
*Total BGP Peer Count:*
A 16-bit integer indicating the total number of possible BGP
sessions on the local node, regardless of state.
7.2.2. Auto-EVPN MAC-VRF Key Type
This Key-Value structure contains information about a specific MAC-
VRF within the Auto-EVPN fabric.
The Auto-EVPN MAC-VRF Key Type MUST be advertised with the Auto-EVPN
MAC-VRF ID encoded into the 3rd and 4th bytes of the Key Identifier.
All values advertised in a MAC-VRF Key-Value TIE MUST represent only
the state of the local node.
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Well-Known | Auto-EVPN (MAC-VRF) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| (Operational CE Interface Count, |
| Total CE Interface Count, |
| Operational IRB Interface Count, |
| Total IRB Interface Count, |
| EVPN Type-2 MAC Route Count, |
| EVPN Type-2 MAC/IP Route Count, |
| Configured VLAN Count, |
| MAC-VRF Name, |
| MAC-VRF Description,) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 4: Auto-EVPN MAC-VRF Key-Value TIE
where:
*Operational Customer Edge Interface Count:*
A 16-bit integer indicating the number of CE interfaces
associated with the MAC-VRF where both administrative and
operational status are "up".
*Total Customer Edge Interface Count:*
A 16-bit integer indicating the total number of CE interfaces
associated with the MAC-VRF regardless of interface status.
*Operational IRB Interface Count:*
A 16-bit integer indicating the number of IRB interfaces
associated with the MAC-VRF where both administrative and
operational status are "up".
*Total IRB Interface Count:*
A 16-bit integer indicating the total number of IRB interfaces
associated with the MAC-VRF regardless of interface status.
*EVPN Type-2 MAC Route Count:*
A 32-bit integer indicating the total number of EVPN Type-2 MAC
routes.
*EVPN Type-2 MAC/IP Route Count:*
A 32-bit integer indicating the total number of EVPN Type-2
MAC/IP routes.
*Configured VLAN Count:*
A 16-bit integer indicating the total number of configured VLANs.
*MAC-VRF Name:*
A string used to indicate the name of the MAC-VRF on the node.
*MAC-VRF Description:*
A string used to describe the MAC-VRF on the node, similar to
that of an interface description.
8. Acknowledgements
The authors would like to thank Olivier Vandezande for some nice
operational improvements for variable derivation procedures, as well
as Matthew Jones and Michal Styszynski for their contributions.
9. Security Considerations
This document introduces no new security concerns to RIFT or other
specifications referenced in this document.
10. References
10.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119,
DOI 10.17487/RFC2119, March 1997,
<https://www.rfc-editor.org/info/rfc2119>.
[RFC7432] Sajassi, A., Aggarwal, R., Bitar, N., Isaac, A., Uttaro,
J., Drake, J., and W. Henderickx, "BGP MPLS-Based Ethernet
VPN", February 2015,
<https://www.rfc-editor.org/info/rfc7432>.
[RIFT] Przygienda, T., Sharma, A., Thubert, P., Rijsman, B., and
D. Afanasiev, "RIFT: Routing in Fat Trees", Work in
Progress, draft-ietf-rift-rift-13, July 2021.
[RIFT-KV] Head, J. and T. Przygienda, "RIFT Keys Structure and Well-
Known Registry in Key Value TIE", Work in Progress, draft-
head-rift-kv-registry-01, July 2021.
Appendix A. Thrift Models
This section contains the normative Thrift models required to support
Auto-EVPN. Per the main RIFT [RIFT] specification, all signed values
MUST be interpreted as unsigned values.
A.1. common.thrift
This section specifies changes to main RIFT common.thrift model.
...
/** EVPN Fabric ID */
typedef i16 FabricIDType
const FabricIDType undefined_fabric_id = 0
const FabricIDType default_fabric_id = 1
const bool default_acting_auto_evpn_dci_when_tof = false
enum AutoEVPNModel {
ERB_VLAN_BUNDLE = 0,
}
const AutoEVPNModel default_autoevpn_model = AutoEVPNModel.ERB_VLAN_BUNDLE
Figure 5: RIFT Common Schema for Auto-EVPN
A.2. encoding.thrift
This section specifies changes to main RIFT encoding.thrift model.
struct LIEPacket {
...
/** provides the optional ID of the configured auto-evpn fabric. */
35: optional common.FabricIDType fabric_id;
/** provides optional version of EVPN ZTP as 256 * MAJOR + MINOR */
36: optional i16 auto_evpn_version;
...
}
struct NodeTIEElement {
...
/** All Auto EVPN elements MUST be present in at least one node TIE in each direction if auto evpn is running. */
/** It provides optional version of EVPN ZTP as 256 * MAJOR + MINOR, if set auto EVPN is enabled. */
21: optional i16 auto_evpn_version;
/** It provides the optional ID of the Fabric configured */
22: optional common.FabricIDType fabric_id = common.default_fabric_id;
/** provides optionally the EVPN model supported */
25: optional common.AutoEVPNModel auto_evpn_model = common.AutoEVPNModel.ERB_VLAN_BUNDLE,
...
}
struct NodeCapabilities {
...
/** provides the optional ID of the configured auto-evpn fabric. */
10: optional bool auto_evpn_support = false;
...
}
struct NodeFlags {
...
/** acting as DCI for auto-evpn, necessary for proper RR election where DCIs are preferred */
10: optional bool
...
}
Figure 6: RIFT Encoding Schema for Auto-EVPN
A.3. common_evpn.thrift
This section contains the normative Auto-EVPN Thrift schema.
/**
Thrift file for common AUTO EVPN definitions for RIFT
Copyright (c) Juniper Networks, Inc., 2016-
All rights reserved.
*/
namespace py common_evpn
namespace rs models
include "common.thrift"
include "encoding.thrift"
include "statistics.thrift"
const i8 default_evis = 3
const i8 default_vlans_per_evi = 7
typedef i32 RouterIDType
typedef i32 ASType
typedef i32 ClusterIDType
struct EVPNAnyRole {
1: required common.IPv6Address v6_loopback,
2: required common.IPv6Address type5_v6_loopback,
3: required common.IPv4Address type5_v4_loopback,
4: required RouterIDType bgp_router_id,
5: required ASType autonomous_system,
6: required ClusterIDType cluster_id,
/** prefixes to be redistributed north */
7: required set<common.IPPrefixType> redistribute_north,
/** prefixes to be redistributed south */
8: required set<common.IPPrefixType> redistribute_south,
/** group name for evpn auto overlay */
9: required string bgp_group_name,
/** fabric prefixes to be advertised in rift instead of default */
10: required set<common.IPPrefixType> fabric_prefixes,
/** v6 loopback prefix range, used e.g. to clean up config */
20: required common.IPv6PrefixType v6_loopback_range,
21: required common.IPv6PrefixType rr_loopback_range,
22: required common.IPv6PrefixType type5_loopback_range,
23: required common.IPv4PrefixType type5_v4_loopback_range,
/** v6 addresses of all possible RR loopbacks in this config. Can be used for e.g. cleanup */
24: required set<common.IPv6PrefixType> possible_elected_rrs,
}
struct PartialEVPNEVI {
// route target per RFC4360
1: required CommunityType rt_target,
2: required RTDistinguisherType rt_distinguisher,
3: required RTDistinguisherType rt_type5_distinguisher,
5: required string mac_vrf_name,
6: required VNIType type5_vni,
}
struct EVPNRRRole {
2: required common.IPv6Address v6_rr_addr_loopback,
3: required common.IPv6PrefixType v6_peers_allowed_range,
4: required map<MACVRFNumberType, PartialEVPNEVI> evis,
}
typedef i64 RTDistinguisherType
typedef i64 RTTargetType
typedef i16 MACVRFNumberType
typedef i16 VLANIDType
typedef binary MACType
typedef i16 UnitType
struct IRBType {
1: required string name,
2: required UnitType unit,
/// constant
3: required MACType mac,
/// contains address of the gateway as well
4: optional common.IPv6PrefixType v6_subnet,
/// contains address of the gateway as well
5: optional common.IPv4PrefixType v4_prefix,
}
typedef i32 VNIType
struct VLANType {
1: optional VLANIDType id,
2: required string name,
3: optional IRBType irb,
5: optional bool stretched = false,
6: optional bool is_native = false,
}
struct CEInterfaceType {
2: optional common.IEEE802_1ASTimeStampType moved_to_ce,
// we may not be able to obtain it in case of internal errors
3: optional string platform_interface_name,
}
typedef i64 CommunityType
struct EVPNEVI {
// route target per RFC4360
1: required CommunityType rt_target,
2: required RTDistinguisherType rt_distinguisher,
3: required RTDistinguisherType rt_type5_distinguisher,
4: required string mac_vrf_name,
// fabric unique 24 bits VNI on non-stretch, otherwise unique across fabrics
5: required map<VNIType, VLANType> vlans,
6: required VNIType type5_vni,
}
struct EVPNLeafRole {
1: required set<common.IPv6Address> rrs,
2: required map<MACVRFNumberType, EVPNEVI> evis,
3: optional map<common.LinkIDType,
CEInterfaceType> ce_interfaces,
5: optional binary leaf_unique_lacp_system_id,
6: optional binary fabric_unique_lacp_system_id,
}
/// structure to indicate EVPN roles assumed and their variables for
/// external platform to configure itself accordingly. Presence of
/// according structure indicates that the role is assumed.
struct EVPNRoles {
1: required EVPNAnyRole generic,
2: optional EVPNRRRole route_reflector,
3: optional EVPNLeafRole leaf,
}
const common.TimeIntervalInSecType default_leaf_delay = 120
const common.TimeIntervalInSecType default_interface_ce_delay = 180
/// default delay before AUTOEVPN FSM starts to compute anything
const common.TimeIntervalInSecType default_AUTOEVPN_startup_delay = 60
Figure 7: Auto-EVPN Common Thrift Schema
A.4. auto_evpn_kv.thrift
This section contains the normative Auto-EVPN Analytics Thrift
schema.
include "common.thrift"
namespace py auto_evpn_kv
namespace rs models
/** We don't need the full role structure, only an indication of the node's basic role */
enum AutoEVPNRole {
ILLEGAL = 0,
auto_evpn_leaf_erb = 1,
auto_evpn_tof_gw = 2,
}
enum KVTypes {
OUI = 1,
WellKnown = 2,
}
const i8 AutoEVPNWellKnownKeyType = 1
typedef i32 AutoEVPNKeyIdentifier
typedef i16 AutoEVPNCounterType
typedef i32 AutoEVPNLongCounterType
const i8 GlobalAutoEVPNTelemetryKV = 4
const i8 AutoEVPNTelemetryKV = 3
/** Per the corresponding RIFT draft, the key comes from the well-known space.
Part of the key is used as Fabric-ID.
1st byte MUST be = "Well-Known"
2nd byte MUST be = "Global Auto-EVPN Telemetry KV",
3rd/4th bytes MUST be = FabricIDType
*/
struct AutoEVPNTelemetryGlobalKV {
/** Only values that the ToF cannot derive itself should be flooded. */
1: required set<AutoEVPNRole> auto_evpn_roles,
/** Established BGP peer count (for Auto-EVPN) */
2: optional AutoEVPNCounterType established_bgp_peer_count,
/** Total BGP peer count (for Auto-EVPN) */
3: optional AutoEVPNCounterType total_bgp_peer_count,
}
/** Per the corresponding RIFT draft, the key comes from the well-known space.
Part of the key is used as MAC-VRF number.
1st byte MUST be = "Well-Known"
2nd byte MUST be = "Auto-EVPN Telemetry KV",
3rd/4th bytes MUST be = MACVRFNumberType
*/
struct AutoEVPNTelemetryMACVRFKV {
/** Active CE interface count (up/up) */
1: optional AutoEVPNCounterType active_ce_interfaces,
/** Total CE interface count */
2: optional AutoEVPNCounterType total_ce_interfaces,
/** Active IRB interface count (up/up) */
3: optional AutoEVPNCounterType active_irb_interfaces,
/** Total IRB interface count */
4: optional AutoEVPNCounterType total_irb_interfaces,
/** Local EVPN Type-2 MAC route count */
5: optional AutoEVPNLongCounterType local_evpn_type2_mac_routes,
/** Local EVPN Type-2 MAC/IP route count */
6: optional AutoEVPNLongCounterType local_evpn_type2_mac_ip_routes,
/** number of configured VLANs */
7: optional i16 configured_vlans,
/** optional human readable name */
8: optional string name,
/** optional human readable string describing the MAC-VRF */
9: optional string description,
}
Figure 8: Auto-EVPN Key-Value Thrift Schema
Appendix B. Auto-EVPN Variable Derivation
B.1. Variable Derivation Functions
This section contains the normative derivation procedures required to
support Auto-EVPN.
/// indicates how many RRs we're computing in AUTO EVPN
pub const MAX_AUTO_EVPN_RRS: usize = 3;
/// indicates the fabric has no ID, used in computations to omit effects of fabric ID
pub const NO_FABRIC_ID: FabricIDType = 0;
/// invalid MACVRF number, MACVRFs start from 1
pub const NO_MACVRF: MACVRFNumberType = 0;
/// first MACVRF
pub const MIN_MACVRF : MACVRFNumberType = 1;
/// unique v6 prefix for all nodes starts with this
pub fn auto_evpn_v6pref(fid: FabricIDType) -> String {
format!("FD00:{:04X}:A1", fid)
}
/// how many bytes in a v6pref for different purposes
pub const AUTO_EVPN_V6PREFLEN: usize = 8 * 5;
/// unique v6 prefix for route reflector purposes starts like this
pub fn auto_evpn_v6rrpref(fid: FabricIDType) -> String {
format!("FD00:{:04X}:A2", fid)
}
/// unique v6 prefix for type-5 purposes starts like this
pub fn auto_evpn_v6t5pref(fid: FabricIDType) -> String {
format!("FD00:{:04X}:A3", fid)
}
/// unique v6 prefix for IRB prefix purposes
pub fn auto_evpn_v6irbpref(fid: FabricIDType) -> String {
format!("FD00:{:04X}:A4", fid)
}
/// 2 bytes of prefix, then fabric ID, then another byte
pub const AUTO_EVPN_V6_FABPREFIXLEN: usize = 16 + 16 + 8;
/// unique v4 prefix for IRB purposes
pub const AUTO_EVPN_V4IRBPREF: &str = "10";
/// per RFC magic
const RT_TARGET_HIGH: CommunityType = 0;
const RT_TARGET_LOW: CommunityType = 0;
/// first available VLAN number
pub const FIRST_VLAN: UnsignedVLANIDType = 1;
/// maximum VLAN number, one less than 4096 so it can be used as a bitmask
pub const MAX_VLAN: UnsignedVLANIDType = 4095;
/// constant VLAN shift
pub const FIRST_VLAN_SHIFT: UnsignedVLANIDType = NATIVE_VLAN + 1;
/// NATIVE VLAN number
pub const NATIVE_VLAN: UnsignedVLANIDType = 1;
/// abstract description of VLAN properties for a derived VLAN
pub struct VLANDescription {
pub vlan_id: UnsignedVLANIDType,
pub name: String,
/// can this VLAN be stretched across multiple fabrics
pub stretchable: bool,
pub native: bool,
}
/// maximum number of VLANs per MACVRF
pub const MAX_VLANS_PER_EVI: usize = 30;
/// maximum number of EVIs
pub const MAX_EVIS: MACVRFNumberType = 7;
pub type VLANStretchableType = bool;
pub type VLANNativeType = bool;
pub type UnsignedVNIType = u32;
pub type UnsignedFabricIDType = u16;
pub type UnsignedUnitType = u16;
pub type UnsignedVLANIDType = u16;
pub type UnsignedRTDistinguisherType = u64;
pub const EXTRATYPE5_RD_DISTINGUISHER: u32 = 0xffff_ffff;
/// high bits of type 5 VNI
const TYPE5VNIHIGH: UnsignedVNIType = 0x0080_0000;
/// bitmask for type 2 VNI
const TYPE2VNIMASK: UnsignedVNIType = 0x00ff_ffff ^ TYPE5VNIHIGH;
/// random seeds used in several algorithms to increase entropy
pub const RANDOMSEEDS: [u64; 4] = [
27008318799u64,
67438371571,
37087353685,
88675895388,
];
Figure 9: auto_evpn_const_structs_type
/// function sorts vector of (is_dci, systemID) first,
/// splits of the DCIs from the non-DCIs and sorts them
/// followed by a shuffle taking largest/smallest/2nd largest/2nd smallest.
/// Ultimately both are merged which prefers the DCIs while
/// still making sure that the election is stable with a system ID joining
/// as smallest/largest.
pub(crate) fn auto_evpn_sids2rrs(v: Vec<(bool, UnsignedSystemID)>)
-> Vec<UnsignedSystemID> {
let (dcis, nondcis): (Vec<(bool, UnsignedSystemID)>, Vec<(bool, UnsignedSystemID)>) =
v.into_iter().partition(|(dci, _)| *dci);
vec![dcis, nondcis]
.into_iter()
.flat_map(|mut v| {
v.par_sort();
if v.len() > 2 {
let mut s = v.split_off(v.len() / 2);
s.reverse();
interleave(v.into_iter(), s.into_iter())
.collect::<Vec<_>>()
.into_iter()
} else {
v.into_iter()
}
})
.map(|(_, sid)| sid)
.collect()
}
Figure 10: auto_evpn_sids2rrs
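The election above can be exercised with a small, standard-library-only sketch. This is not the normative code: `par_sort()` (rayon) is replaced by `sort()` and itertools' `interleave()` by a hand-rolled loop, and the function name `sids2rrs` plus the plain `u64` system ID type are illustrative; the election result is the same.

```rust
// Stdlib-only sketch of the route reflector election in Figure 10.
fn sids2rrs(v: Vec<(bool, u64)>) -> Vec<u64> {
    // DCIs first so they are preferred in the election
    let (dcis, nondcis): (Vec<_>, Vec<_>) = v.into_iter().partition(|(dci, _)| *dci);
    [dcis, nondcis]
        .into_iter()
        .flat_map(|mut group| {
            group.sort();
            if group.len() > 2 {
                let mut upper = group.split_off(group.len() / 2);
                upper.reverse();
                // alternate smallest/largest/2nd smallest/2nd largest ...
                let mut out = Vec::with_capacity(group.len() + upper.len());
                let (mut a, mut b) = (group.into_iter(), upper.into_iter());
                loop {
                    match (a.next(), b.next()) {
                        (None, None) => break,
                        (x, y) => { out.extend(x); out.extend(y); }
                    }
                }
                out
            } else {
                group
            }
        })
        .map(|(_, sid)| sid)
        .collect()
}

fn main() {
    // Two DCIs (preferred) and five non-DCIs: the non-DCI half is
    // shuffled so a system ID joining as smallest or largest lands
    // near the tail, keeping the election stable.
    let rrs = sids2rrs(vec![
        (false, 1), (false, 2), (false, 3), (false, 4), (false, 5),
        (true, 10), (true, 20),
    ]);
    assert_eq!(rrs, vec![10, 20, 1, 5, 2, 4, 3]);
}
```

Note how the DCI partition is emitted first, which is what makes DCIs win the first `MAX_AUTO_EVPN_RRS` slots whenever any are present.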
pub(crate) fn auto_evpn_v62octets(a: Ipv6Addr) -> Vec<u8> {
a.octets().to_vec()
}
Figure 11: auto_evpn_v62octets
/// fabric prefixes derived instead of advertising default on the fabric to allow
/// for default route on ToF or leaves
pub fn auto_evpn_fid2fabric_prefixes(fid: FabricIDType) -> Result<Vec<IPPrefixType>, ServiceErrorType> {
vec![
(auto_evpn_fidsidv6loopback(fid, ILLEGAL_SYSTEM_I_D as _), AUTO_EVPN_V6PREFLEN),
(auto_evpn_fidrrpref2rrloopback(fid, ILLEGAL_SYSTEM_I_D as _), AUTO_EVPN_V6PREFLEN),
]
.into_iter()
.map(|(p, _)|
match p {
Ok(_) => Ok(
IPPrefixType::Ipv6prefix(
IPv6PrefixType {
address: auto_evpn_v62octets(p?),
prefixlen: AUTO_EVPN_V6PREFLEN as _,
})),
Err(e) => Err(e),
}
)
.collect::<Result<Vec<_>, _>>()
}
Figure 12: auto_evpn_fid2fabric_prefixes
/// local address with encoded fabric ID and system ID for collision free identifiers. Basis
/// for several different prefixes.
pub fn auto_evpn_v6prefixfidsid2loopback(v6pref: &str, fid: FabricIDType,
sid: UnsignedSystemID) -> Result<Ipv6Addr, ServiceErrorType> {
assert!(fid != UNDEFINED_FABRIC_ID);
let a = format!("{}00::{}",
v6pref,
sid.to_ne_bytes()
.iter()
.chunks(2)
.into_iter()
.map(|chunk|
chunk.fold(0u16, |v, n| (v << 8) | *n as u16))
.map(|v| format!("{:04X}", v))
.collect::<Vec<_>>()
.into_iter()
.join(":")
);
Ipv6Addr::from_str(&a)
.map_err(|_| ServiceErrorType::INTERNALRIFTERROR)
}
Figure 13: auto_evpn_v6prefixfidsid2loopback
/// auto evpn V6 loopback for RRs
pub fn auto_evpn_fidrrpref2rrloopback(fid: FabricIDType,
preference: u8) -> Result<Ipv6Addr, ServiceErrorType> {
auto_evpn_v6prefixfidsid2loopback(&auto_evpn_v6rrpref(fid), fid, (1 + preference) as _)
}
Figure 14: auto_evpn_fidrrpref2rrloopback
/// auto evpn BGP router ID
pub fn auto_evpn_sidfid2bgpid(fid: FabricIDType, sid: UnsignedSystemID) -> u32 {
assert!(fid != 0);
let hs: u32 = ((sid & 0xffff_ffff_0000_0000) >> 32) as _;
let mut ls: u32 = (sid & 0xffff_ffff) as _;
ls = ls.rotate_right(7) ^ (fid as u32).rotate_right(13);
max(1, hs ^ ls) // never a 0
}
Figure 15: auto_evpn_sidfid2bgpid
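A self-contained sketch of the same derivation, with the draft's type aliases narrowed to plain integers (`u16` fabric ID, `u64` system ID); the function name `sidfid2bgpid` and the sample values are illustrative, not part of the specification.

```rust
// Sketch of the BGP router ID derivation from Figure 15.
fn sidfid2bgpid(fid: u16, sid: u64) -> u32 {
    assert!(fid != 0);
    let hs: u32 = ((sid & 0xffff_ffff_0000_0000) >> 32) as u32;
    let ls: u32 = (sid & 0xffff_ffff) as u32;
    let ls = ls.rotate_right(7) ^ (fid as u32).rotate_right(13);
    std::cmp::max(1, hs ^ ls) // never 0
}

fn main() {
    // System ID 1 on fabric 1: bit 0 rotates to bit 25 (system ID part)
    // and bit 19 (fabric part), so the router ID is 2^25 ^ 2^19.
    assert_eq!(sidfid2bgpid(1, 1), 34_078_720);
    // The max(1, ..) clamp guarantees a nonzero BGP identifier.
    assert!(sidfid2bgpid(0xffff, 0) != 0);
}
```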
/// route target bytes are type0/0 and then add EVI
pub fn auto_evpn_evi2rt(evi: MACVRFNumberType) -> CommunityType {
let wideevi = (evi + 1) as CommunityType;
(RT_TARGET_HIGH << (64 - 8)) | (RT_TARGET_LOW << (64 - 16)) |
(wideevi << 17) |
wideevi
}
Figure 16: auto_evpn_evi2rt
/// type-5 VNI for an EVI
pub fn auto_evpn_fidevi2type5vni(fid: FabricIDType, evi: MACVRFNumberType) -> UnsignedVNIType {
TYPE5VNIHIGH | auto_evpn_fidevivid2vni(fid, evi, 0, false)
}
Figure 17: auto_evpn_fidevi2type5vni
/// type-2 VNI for a specific VLAN
pub fn auto_evpn_fidevivid2vni(fid: FabricIDType, evi: MACVRFNumberType, vlanid: VLANIDType, stretchable: bool) -> UnsignedVNIType {
let rfid = if stretchable {
NO_FABRIC_ID as _
} else {
fid as UnsignedVNIType
};
let revi = evi as UnsignedVNIType;
let rvlan = vlanid as UnsignedVNIType;
// mask out high bits, VNI is only 24 bits
TYPE2VNIMASK &
(
rfid.rotate_left(16) ^
revi.rotate_left(12) ^
rvlan
)
}
Figure 18: auto_evpn_fidevivid2vni
/// delivers the VLAN description table, up to the maximum VLANs per EVI supported by Auto-EVPN
pub fn auto_evpn_vlan_description_table<'a>(vlans: usize)
-> Result<&'a [(UnsignedVLANIDType, VLANStretchableType, VLANNativeType)], ServiceErrorType> {
// up to 30 VLANs can be activated
const VLANSARRAY: [(UnsignedVLANIDType, bool, bool); MAX_VLANS_PER_EVI] = [
(NATIVE_VLAN, true, true, ),
(FIRST_VLAN_SHIFT, true, false, ),
(FIRST_VLAN_SHIFT + 1, true, false, ),
(FIRST_VLAN_SHIFT + 2, true, false, ),
(FIRST_VLAN_SHIFT + 3, true, false, ),
(FIRST_VLAN_SHIFT + 4, true, false, ),
(FIRST_VLAN_SHIFT + 5, true, false, ),
(FIRST_VLAN_SHIFT + 6, true, false, ),
(FIRST_VLAN_SHIFT + 7, true, false, ),
(FIRST_VLAN_SHIFT + 8, false, false, ),
(FIRST_VLAN_SHIFT + 9, false, false, ),
(FIRST_VLAN_SHIFT +10, false, false, ),
(FIRST_VLAN_SHIFT +11, false, false, ),
(FIRST_VLAN_SHIFT +12, false, false, ),
(FIRST_VLAN_SHIFT +13, false, false, ),
(FIRST_VLAN_SHIFT +14, false, false, ),
(FIRST_VLAN_SHIFT +15, false, false, ),
(FIRST_VLAN_SHIFT +16, false, false, ),
(FIRST_VLAN_SHIFT +17, false, false, ),
(FIRST_VLAN_SHIFT +18, false, false, ),
(FIRST_VLAN_SHIFT +19, false, false, ),
(FIRST_VLAN_SHIFT +20, false, false, ),
(FIRST_VLAN_SHIFT +21, false, false, ),
(FIRST_VLAN_SHIFT +22, false, false, ),
(FIRST_VLAN_SHIFT +23, false, false, ),
(FIRST_VLAN_SHIFT +24, false, false, ),
(FIRST_VLAN_SHIFT +25, false, false, ),
(FIRST_VLAN_SHIFT +26, false, false, ),
(FIRST_VLAN_SHIFT +27, false, false, ),
(FIRST_VLAN_SHIFT +28, false, false, ),
];
if vlans > VLANSARRAY.len() {
return Err(ServiceErrorType::INVALIDPARAMETERVALUE)
}
Ok(&VLANSARRAY[..vlans])
}
Figure 19: auto_evpn_vlan_description_table
const fn num_bits<T>() -> usize { std::mem::size_of::<T>() * 8 }
fn log2(x: u32) -> u32 {
assert!(x > 0);
num_bits::<u32>() as u32 - x.leading_zeros() - 1
}
/// delivers the vlan description that can be used to generate vlans for a
/// specific fabric ID and a MACVRF number
pub fn auto_evpn_fidevivlansvlans2desc(fid: UnsignedFabricIDType, macvrf: MACVRFNumberType,
vlans: usize) -> Vec<VLANDescription> {
assert!(NO_MACVRF != macvrf);
// abstract description of derived VLANs
let vlan_table = auto_evpn_vlan_description_table(vlans)
.expect("vlan table in AUTO EVPN incorrect");
let vlanshift = log2(vlan_table
.iter()
.map(|(vl, _, _)| *vl as usize)
.max()
.expect("vlan table in AUTO EVPN incorrect")
.checked_next_power_of_two()
.expect("vlan table in AUTO EVPN incorrect")
as u32);
vlan_table
.iter()
.map(move |(vid, stretch, native_)| {
let stretchedfid = if !stretch {
fid
} else {
NO_FABRIC_ID as _
};
let reducedmacvrf = macvrf - MIN_MACVRF;
// we shift fid & evi by the same amount so they can cancel each other out
let fidandevishift = vlanshift + 1;
let mut vlan_id = *vid ^ stretchedfid
.rotate_left(fidandevishift) as UnsignedVLANIDType;
// leave space for VLANs in the encoding
vlan_id ^= reducedmacvrf.rotate_left(fidandevishift) as UnsignedVLANIDType;
vlan_id %= MAX_VLAN;
vlan_id = max(1, vlan_id);
VLANDescription {
vlan_id: vlan_id as _,
name: format!("V{}", vlan_id),
stretchable: *stretch,
native: *native_,
}
})
.collect()
}
Figure 20: auto_evpn_fidevivlansvlans2desc
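The core XOR step of the collision function can be checked against the VLAN ID column of the table in Appendix B.2. This sketch hard-codes `shift = 6` (i.e. log2 of the next power of two above the largest raw VLAN, 30, plus one) and narrows the types to `u16`; the name `derive_vlan` and the two sample rows are illustrative.

```rust
// Sketch of the VLAN ID collision step from Figure 20 for a 30-entry table.
fn derive_vlan(fid: u16, macvrf: u16, vid: u16, stretchable: bool) -> u16 {
    let shift = 6; // log2(next_power_of_two(30)) + 1
    // stretchable VLANs ignore the fabric ID so they match across fabrics
    let sfid = if stretchable { 0 } else { fid };
    let mut vlan = vid ^ sfid.rotate_left(shift) ^ (macvrf - 1).rotate_left(shift);
    vlan %= 4095;
    vlan.max(1)
}

fn main() {
    // fabric 1, MAC-VRF 1, raw VLAN 10, not stretched -> VLAN 74
    assert_eq!(derive_vlan(1, 1, 10, false), 74);
    // fabric 1, MAC-VRF 2, raw native VLAN 1, stretched -> VLAN 65
    assert_eq!(derive_vlan(1, 2, 1, true), 65);
}
```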
/// IRB interface number.
/// fid/evi combination shifted up to not interfere with the VLAN-ID
/// and then add the VLAN-ID
pub fn auto_evpn_fidevivid2irb(_fid: FabricIDType, _evi: MACVRFNumberType, vid: VLANIDType) -> UnsignedUnitType {
assert!(NO_MACVRF != _evi);
// VLAN collision function is collision free to the point we can just ignore EVI
// and assign IRB interface number to be same as VLAN which simplifies deployment
let mut v: UnsignedUnitType = 0;
v = v.wrapping_add(vid as UnsignedVLANIDType);
max(1, v % (UnsignedUnitType::MAX - 1))
}
Figure 21: auto_evpn_fidevivid2irb
/// route distinguisher derivation
pub fn auto_evpn_sidfid2rd(sid: UnsignedSystemID, fid: UnsignedFabricIDType, extra: u32) -> UnsignedRTDistinguisherType {
// generate type 0 route distinguisher, first 2 bytes 0 and then 6 bytes
assert!(fid != NO_FABRIC_ID as _);
// shift the 2 bytes we lose
let convsid = sid as UnsignedRTDistinguisherType;
let hs = ((sid & 0xffff_0000_0000_0000) >> 32) as UnsignedRTDistinguisherType;
let mut ls: UnsignedRTDistinguisherType = convsid & 0x0000_ffff_ffff_ffff;
ls ^= hs;
ls ^= (fid as UnsignedRTDistinguisherType).rotate_left(16);
ls ^= extra as UnsignedRTDistinguisherType;
ls
}
Figure 22: auto_evpn_sidfid2rd
/// v4 subnet derivation
pub fn auto_evpn_v4prefixfidevividsid2v4subnet(v4pref: &str, fid: FabricIDType,
evi: MACVRFNumberType, vid: VLANIDType,
sid: UnsignedSystemID) -> Result<IPv4PrefixType, ServiceErrorType> {
assert!(NO_MACVRF != evi);
// fid can be 0 for stretched v4subnets
let mut sub = evi.to_ne_bytes().iter()
.fold((RANDOMSEEDS[0] & 0xff) as u8, |r, e| r.rotate_left(1) ^ e.rotate_right(1));
sub ^= fid.to_ne_bytes().iter()
.fold((RANDOMSEEDS[1] & 0xff) as u8, |r, e| r.rotate_left(2) ^ e.rotate_right(1));
sub ^= vid.to_ne_bytes().iter()
.fold((RANDOMSEEDS[2] & 0xff) as u8, |r, e| r.rotate_left(3) ^ e.rotate_right(1));
let subnet = sub % 254; // make sure we don't show multicast subnet
let _host = sid.to_ne_bytes().iter()
.fold(0u16, |r, e| r.rotate_left(3) ^ e.rotate_right(3) as u16);
let a = format!("{}.{}.{}.{}",
v4pref,
subnet,
0,
1,
);
Ok(
IPv4PrefixType {
address: Ipv4Addr::from_str(&a)
.map_err(|_| {
ServiceErrorType::INTERNALRIFTERROR
})?
.octets()
.iter()
.fold(0u32, |v, nv| v << 8 | (*nv as u32)) as IPv4Address
,
prefixlen: 16,
}
)
}
Figure 23: auto_evpn_v4prefixfidevividsid2v4subnet
/// generic v6 bytes derivation used for different purposes
pub fn auto_evpn_v6hash(fid: FabricIDType, evi: MACVRFNumberType, vid: VLANIDType, sid: UnsignedSystemID)
-> [u8; 8] {
let mut sub = evi.to_ne_bytes().iter()
.fold(RANDOMSEEDS[3], |r, e| r.rotate_left(6) ^ e.rotate_right(4) as u64);
sub ^= fid.to_ne_bytes().iter()
.fold(RANDOMSEEDS[0], |r, e| r.rotate_left(6) ^ e.rotate_right(4) as u64);
sub ^= vid as u64;
sub ^= sid;
sub.to_ne_bytes()
}
Figure 24: auto_evpn_v6hash
/// v6 subnet derivation
pub fn auto_evpn_fidevividsid2v6subnet(fid: FabricIDType, evi: MACVRFNumberType,
vid: VLANIDType,
sid: UnsignedSystemID) -> Result<IPv6PrefixType, ServiceErrorType> {
assert!(NO_MACVRF != evi);
let sb = auto_evpn_v6hash(fid, evi, vid, sid);
let a = format!("{}:{:02X}{:02X}:{:02X}{:02X}:{:02X}{:02X}::1",
auto_evpn_v6irbpref(fid),
sb[3] ^ sb[0],
sb[4] ^ sb[1],
sb[6],
sb[7],
sb[5],
sb[2],
);
Ok(IPv6PrefixType {
address: Ipv6Addr::from_str(
&a)
.map_err(|_| {
ServiceErrorType::INTERNALRIFTERROR
})?
.octets()
.to_vec(),
prefixlen: 64,
})
}
Figure 25: auto_evpn_fidevividsid2v6subnet
/// MAC address derivation for IRB
pub fn auto_evpn_fidevividsid2mac(fid: FabricIDType, evi: MACVRFNumberType,
vid: VLANIDType, sid: UnsignedSystemID) -> Vec<u8> {
let sb = auto_evpn_v6hash(fid, evi, vid, sid);
vec![0x02,
sb[3] ^ sb[0],
sb[4] ^ sb[1],
sb[6],
sb[7],
sb[5] ^ sb[2],
]
}
Figure 26: auto_evpn_fidevividsid2mac
/// v4 loopback address derivation for every node in auto-evpn, returns address and
/// subnet mask length
pub fn auto_evpn_fidsid2v4loopback(fid: FabricIDType, sid: UnsignedSystemID) -> (IPv4Address, u8) {
let mut derived = sid.to_ne_bytes().iter()
.fold(0 as IPv4Address, |p, e| (p << 4) ^ (*e as IPv4Address));
derived ^= fid as IPv4Address;
// use the byte we lose for entropy
derived ^= derived >> 24;
// and sanitize for loopback range, we nuke 9 bits out
derived &= 0x007f_ffff;
let m = ((127 as IPv4Address) << 24) | derived;
(m as _, 9)
}
Figure 27: auto_evpn_fidsid2v4loopback
/// V6 loopback derivation for every node in auto-evpn
pub fn auto_evpn_fidsidv6loopback(fid: FabricIDType,
sid: UnsignedSystemID) -> Result<Ipv6Addr, ServiceErrorType> {
auto_evpn_v6prefixfidsid2loopback(&auto_evpn_v6pref(fid), fid, sid)
}
Figure 28: auto_evpn_fidsidv6loopback
#[allow(non_snake_case)]
pub fn auto_evpn_fid2private_AS(fid: FabricIDType) -> u32 {
assert!(fid != NO_FABRIC_ID);
// range 4200000000-4294967294
const DIFF: u32 = 4_294_967_294 - 4_200_000_000;
64496 + ((fid as u32) << 3) % DIFF
}
Figure 29: auto_evpn_fid2private_AS
pub fn auto_evpn_fid2clusterid(fid: FabricIDType) -> u32 {
auto_evpn_fid2private_AS(fid)
}
Figure 30: auto_evpn_fid2clusterid
B.2. Variable Derivation Results
This section contains functional variable derivation results that can
be used as confirmation that an implementation conforms to the
procedures in this document.
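As a quick sanity check, the type-2 VNI logic from Figure 18 can be exercised against individual rows of the table. This sketch narrows the draft's type aliases to plain `u32` integers and uses an illustrative function name; the constants and the XOR/mask logic are taken directly from Figures 9 and 18.

```rust
// Minimal conformance sketch: re-derive the type-2 VNI as in Figure 18
// and compare against rows of the Appendix B.2 table.
const NO_FABRIC_ID: u32 = 0;
const TYPE5VNIHIGH: u32 = 0x0080_0000;
const TYPE2VNIMASK: u32 = 0x00ff_ffff ^ TYPE5VNIHIGH;

fn fidevivid2vni(fid: u32, evi: u32, vlanid: u32, stretchable: bool) -> u32 {
    // stretchable VLANs omit the fabric ID so the VNI matches across fabrics
    let rfid = if stretchable { NO_FABRIC_ID } else { fid };
    // mask out high bits, VNI is only 24 bits
    TYPE2VNIMASK & (rfid.rotate_left(16) ^ evi.rotate_left(12) ^ vlanid)
}

fn main() {
    // Fabric 1, MAC-VRF 1, VLAN 1, stretched -> VNI 4097 (first table row)
    assert_eq!(fidevivid2vni(1, 1, 1, true), 4097);
    // Fabric 1, MAC-VRF 1, VLAN 74, not stretched -> VNI 69706
    assert_eq!(fidevivid2vni(1, 1, 74, false), 69706);
}
```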
+===========+============+=========+===========+========+=====+
| Fabric ID | MAC-VRF ID | VLAN ID | Stretched | VNI | IRB |
+===========+============+=========+===========+========+=====+
| 1 | 1 | 1 | Y | 4097 | 1 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 2 | Y | 4098 | 2 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 3 | Y | 4099 | 3 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 4 | Y | 4100 | 4 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 5 | Y | 4101 | 5 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 6 | Y | 4102 | 6 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 7 | Y | 4103 | 7 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 8 | Y | 4104 | 8 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 9 | Y | 4105 | 9 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 74 | N | 69706 | 74 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 75 | N | 69707 | 75 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 76 | N | 69708 | 76 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 77 | N | 69709 | 77 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 78 | N | 69710 | 78 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 79 | N | 69711 | 79 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 80 | N | 69712 | 80 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 81 | N | 69713 | 81 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 82 | N | 69714 | 82 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 83 | N | 69715 | 83 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 84 | N | 69716 | 84 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 85 | N | 69717 | 85 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 86 | N | 69718 | 86 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 87 | N | 69719 | 87 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 88 | N | 69720 | 88 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 89 | N | 69721 | 89 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 90 | N | 69722 | 90 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 91 | N | 69723 | 91 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 92 | N | 69724 | 92 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 93 | N | 69725 | 93 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 1 | 94 | N | 69726 | 94 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 65 | Y | 8257 | 65 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 66 | Y | 8258 | 66 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 67 | Y | 8259 | 67 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 68 | Y | 8260 | 68 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 69 | Y | 8261 | 69 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 70 | Y | 8262 | 70 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 71 | Y | 8263 | 71 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 72 | Y | 8264 | 72 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 73 | Y | 8265 | 73 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 10 | N | 73738 | 10 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 11 | N | 73739 | 11 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 12 | N | 73740 | 12 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 13 | N | 73741 | 13 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 14 | N | 73742 | 14 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 15 | N | 73743 | 15 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 16 | N | 73744 | 16 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 17 | N | 73745 | 17 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 18 | N | 73746 | 18 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 19 | N | 73747 | 19 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 20 | N | 73748 | 20 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 21 | N | 73749 | 21 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 22 | N | 73750 | 22 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 23 | N | 73751 | 23 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 24 | N | 73752 | 24 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 25 | N | 73753 | 25 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 26 | N | 73754 | 26 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 27 | N | 73755 | 27 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 28 | N | 73756 | 28 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 29 | N | 73757 | 29 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 2 | 30 | N | 73758 | 30 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 129 | Y | 12417 | 129 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 130 | Y | 12418 | 130 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 131 | Y | 12419 | 131 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 132 | Y | 12420 | 132 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 133 | Y | 12421 | 133 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 134 | Y | 12422 | 134 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 135 | Y | 12423 | 135 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 136 | Y | 12424 | 136 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 137 | Y | 12425 | 137 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 202 | N | 78026 | 202 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 203 | N | 78027 | 203 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 204 | N | 78028 | 204 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 205 | N | 78029 | 205 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 206 | N | 78030 | 206 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 207 | N | 78031 | 207 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 208 | N | 78032 | 208 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 209 | N | 78033 | 209 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 210 | N | 78034 | 210 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 211 | N | 78035 | 211 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 212 | N | 78036 | 212 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 213 | N | 78037 | 213 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 214 | N | 78038 | 214 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 215 | N | 78039 | 215 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 216 | N | 78040 | 216 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 217 | N | 78041 | 217 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 218 | N | 78042 | 218 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 219 | N | 78043 | 219 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 220 | N | 78044 | 220 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 221 | N | 78045 | 221 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 3 | 222 | N | 78046 | 222 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 193 | Y | 16577 | 193 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 194 | Y | 16578 | 194 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 195 | Y | 16579 | 195 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 196 | Y | 16580 | 196 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 197 | Y | 16581 | 197 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 198 | Y | 16582 | 198 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 199 | Y | 16583 | 199 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 200 | Y | 16584 | 200 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 201 | Y | 16585 | 201 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 138 | N | 82058 | 138 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 139 | N | 82059 | 139 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 140 | N | 82060 | 140 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 141 | N | 82061 | 141 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 142 | N | 82062 | 142 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 143 | N | 82063 | 143 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 144 | N | 82064 | 144 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 145 | N | 82065 | 145 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 146 | N | 82066 | 146 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 147 | N | 82067 | 147 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 148 | N | 82068 | 148 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 149 | N | 82069 | 149 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 150 | N | 82070 | 150 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 151 | N | 82071 | 151 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 152 | N | 82072 | 152 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 153 | N | 82073 | 153 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 154 | N | 82074 | 154 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 155 | N | 82075 | 155 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 156 | N | 82076 | 156 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 157 | N | 82077 | 157 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 4 | 158 | N | 82078 | 158 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 257 | Y | 20737 | 257 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 258 | Y | 20738 | 258 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 259 | Y | 20739 | 259 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 260 | Y | 20740 | 260 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 261 | Y | 20741 | 261 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 262 | Y | 20742 | 262 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 263 | Y | 20743 | 263 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 264 | Y | 20744 | 264 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 265 | Y | 20745 | 265 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 330 | N | 86346 | 330 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 331 | N | 86347 | 331 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 332 | N | 86348 | 332 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 333 | N | 86349 | 333 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 334 | N | 86350 | 334 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 335 | N | 86351 | 335 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 336 | N | 86352 | 336 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 337 | N | 86353 | 337 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 338 | N | 86354 | 338 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 339 | N | 86355 | 339 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 340 | N | 86356 | 340 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 341 | N | 86357 | 341 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 342 | N | 86358 | 342 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 343 | N | 86359 | 343 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 344 | N | 86360 | 344 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 345 | N | 86361 | 345 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 346 | N | 86362 | 346 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 347 | N | 86363 | 347 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 348 | N | 86364 | 348 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 349 | N | 86365 | 349 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 5 | 350 | N | 86366 | 350 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 321 | Y | 24897 | 321 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 322 | Y | 24898 | 322 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 323 | Y | 24899 | 323 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 324 | Y | 24900 | 324 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 325 | Y | 24901 | 325 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 326 | Y | 24902 | 326 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 327 | Y | 24903 | 327 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 328 | Y | 24904 | 328 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 329 | Y | 24905 | 329 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 266 | N | 90378 | 266 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 267 | N | 90379 | 267 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 268 | N | 90380 | 268 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 269 | N | 90381 | 269 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 270 | N | 90382 | 270 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 271 | N | 90383 | 271 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 272 | N | 90384 | 272 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 273 | N | 90385 | 273 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 274 | N | 90386 | 274 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 275 | N | 90387 | 275 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 276 | N | 90388 | 276 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 277 | N | 90389 | 277 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 278 | N | 90390 | 278 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 279 | N | 90391 | 279 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 280 | N | 90392 | 280 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 281 | N | 90393 | 281 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 282 | N | 90394 | 282 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 283 | N | 90395 | 283 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 284 | N | 90396 | 284 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 285 | N | 90397 | 285 |
+-----------+------------+---------+-----------+--------+-----+
| 1 | 6 | 286 | N | 90398 | 286 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 1 | Y | 4097 | 1 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 2 | Y | 4098 | 2 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 3 | Y | 4099 | 3 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 4 | Y | 4100 | 4 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 5 | Y | 4101 | 5 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 6 | Y | 4102 | 6 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 7 | Y | 4103 | 7 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 8 | Y | 4104 | 8 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 9 | Y | 4105 | 9 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 138 | N | 135306 | 138 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 139 | N | 135307 | 139 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 140 | N | 135308 | 140 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 141 | N | 135309 | 141 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 142 | N | 135310 | 142 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 143 | N | 135311 | 143 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 144 | N | 135312 | 144 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 145 | N | 135313 | 145 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 146 | N | 135314 | 146 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 147 | N | 135315 | 147 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 148 | N | 135316 | 148 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 149 | N | 135317 | 149 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 150 | N | 135318 | 150 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 151 | N | 135319 | 151 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 152 | N | 135320 | 152 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 153 | N | 135321 | 153 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 154 | N | 135322 | 154 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 155 | N | 135323 | 155 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 156 | N | 135324 | 156 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 157 | N | 135325 | 157 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 1 | 158 | N | 135326 | 158 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 65 | Y | 8257 | 65 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 66 | Y | 8258 | 66 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 67 | Y | 8259 | 67 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 68 | Y | 8260 | 68 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 69 | Y | 8261 | 69 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 70 | Y | 8262 | 70 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 71 | Y | 8263 | 71 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 72 | Y | 8264 | 72 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 73 | Y | 8265 | 73 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 202 | N | 139466 | 202 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 203 | N | 139467 | 203 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 204 | N | 139468 | 204 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 205 | N | 139469 | 205 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 206 | N | 139470 | 206 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 207 | N | 139471 | 207 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 208 | N | 139472 | 208 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 209 | N | 139473 | 209 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 210 | N | 139474 | 210 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 211 | N | 139475 | 211 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 212 | N | 139476 | 212 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 213 | N | 139477 | 213 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 214 | N | 139478 | 214 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 215 | N | 139479 | 215 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 216 | N | 139480 | 216 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 217 | N | 139481 | 217 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 218 | N | 139482 | 218 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 219 | N | 139483 | 219 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 220 | N | 139484 | 220 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 221 | N | 139485 | 221 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 2 | 222 | N | 139486 | 222 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 129 | Y | 12417 | 129 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 130 | Y | 12418 | 130 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 131 | Y | 12419 | 131 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 132 | Y | 12420 | 132 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 133 | Y | 12421 | 133 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 134 | Y | 12422 | 134 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 135 | Y | 12423 | 135 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 136 | Y | 12424 | 136 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 137 | Y | 12425 | 137 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 10 | N | 143370 | 10 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 11 | N | 143371 | 11 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 12 | N | 143372 | 12 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 13 | N | 143373 | 13 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 14 | N | 143374 | 14 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 15 | N | 143375 | 15 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 16 | N | 143376 | 16 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 17 | N | 143377 | 17 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 18 | N | 143378 | 18 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 19 | N | 143379 | 19 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 20 | N | 143380 | 20 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 21 | N | 143381 | 21 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 22 | N | 143382 | 22 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 23 | N | 143383 | 23 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 24 | N | 143384 | 24 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 25 | N | 143385 | 25 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 26 | N | 143386 | 26 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 27 | N | 143387 | 27 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 28 | N | 143388 | 28 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 29 | N | 143389 | 29 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 3 | 30 | N | 143390 | 30 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 193 | Y | 16577 | 193 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 194 | Y | 16578 | 194 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 195 | Y | 16579 | 195 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 196 | Y | 16580 | 196 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 197 | Y | 16581 | 197 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 198 | Y | 16582 | 198 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 199 | Y | 16583 | 199 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 200 | Y | 16584 | 200 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 201 | Y | 16585 | 201 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 74 | N | 147530 | 74 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 75 | N | 147531 | 75 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 76 | N | 147532 | 76 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 77 | N | 147533 | 77 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 78 | N | 147534 | 78 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 79 | N | 147535 | 79 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 80 | N | 147536 | 80 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 81 | N | 147537 | 81 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 82 | N | 147538 | 82 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 83 | N | 147539 | 83 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 84 | N | 147540 | 84 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 85 | N | 147541 | 85 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 86 | N | 147542 | 86 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 87 | N | 147543 | 87 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 88 | N | 147544 | 88 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 89 | N | 147545 | 89 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 90 | N | 147546 | 90 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 91 | N | 147547 | 91 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 92 | N | 147548 | 92 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 93 | N | 147549 | 93 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 4 | 94 | N | 147550 | 94 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 257 | Y | 20737 | 257 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 258 | Y | 20738 | 258 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 259 | Y | 20739 | 259 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 260 | Y | 20740 | 260 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 261 | Y | 20741 | 261 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 262 | Y | 20742 | 262 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 263 | Y | 20743 | 263 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 264 | Y | 20744 | 264 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 265 | Y | 20745 | 265 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 394 | N | 151946 | 394 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 395 | N | 151947 | 395 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 396 | N | 151948 | 396 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 397 | N | 151949 | 397 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 398 | N | 151950 | 398 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 399 | N | 151951 | 399 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 400 | N | 151952 | 400 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 401 | N | 151953 | 401 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 402 | N | 151954 | 402 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 403 | N | 151955 | 403 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 404 | N | 151956 | 404 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 405 | N | 151957 | 405 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 406 | N | 151958 | 406 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 407 | N | 151959 | 407 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 408 | N | 151960 | 408 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 409 | N | 151961 | 409 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 410 | N | 151962 | 410 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 411 | N | 151963 | 411 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 412 | N | 151964 | 412 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 413 | N | 151965 | 413 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 5 | 414 | N | 151966 | 414 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 321 | Y | 24897 | 321 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 322 | Y | 24898 | 322 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 323 | Y | 24899 | 323 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 324 | Y | 24900 | 324 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 325 | Y | 24901 | 325 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 326 | Y | 24902 | 326 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 327 | Y | 24903 | 327 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 328 | Y | 24904 | 328 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 329 | Y | 24905 | 329 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 458 | N | 156106 | 458 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 459 | N | 156107 | 459 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 460 | N | 156108 | 460 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 461 | N | 156109 | 461 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 462 | N | 156110 | 462 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 463 | N | 156111 | 463 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 464 | N | 156112 | 464 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 465 | N | 156113 | 465 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 466 | N | 156114 | 466 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 467 | N | 156115 | 467 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 468 | N | 156116 | 468 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 469 | N | 156117 | 469 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 470 | N | 156118 | 470 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 471 | N | 156119 | 471 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 472 | N | 156120 | 472 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 473 | N | 156121 | 473 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 474 | N | 156122 | 474 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 475 | N | 156123 | 475 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 476 | N | 156124 | 476 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 477 | N | 156125 | 477 |
+-----------+------------+---------+-----------+--------+-----+
| 2 | 6 | 478 | N | 156126 | 478 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 1 | Y | 4097 | 1 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 2 | Y | 4098 | 2 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 3 | Y | 4099 | 3 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 4 | Y | 4100 | 4 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 5 | Y | 4101 | 5 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 6 | Y | 4102 | 6 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 7 | Y | 4103 | 7 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 8 | Y | 4104 | 8 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 9 | Y | 4105 | 9 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 202 | N | 200906 | 202 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 203 | N | 200907 | 203 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 204 | N | 200908 | 204 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 205 | N | 200909 | 205 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 206 | N | 200910 | 206 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 207 | N | 200911 | 207 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 208 | N | 200912 | 208 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 209 | N | 200913 | 209 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 210 | N | 200914 | 210 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 211 | N | 200915 | 211 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 212 | N | 200916 | 212 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 213 | N | 200917 | 213 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 214 | N | 200918 | 214 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 215 | N | 200919 | 215 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 216 | N | 200920 | 216 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 217 | N | 200921 | 217 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 218 | N | 200922 | 218 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 219 | N | 200923 | 219 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 220 | N | 200924 | 220 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 221 | N | 200925 | 221 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 1 | 222 | N | 200926 | 222 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 65 | Y | 8257 | 65 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 66 | Y | 8258 | 66 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 67 | Y | 8259 | 67 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 68 | Y | 8260 | 68 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 69 | Y | 8261 | 69 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 70 | Y | 8262 | 70 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 71 | Y | 8263 | 71 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 72 | Y | 8264 | 72 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 73 | Y | 8265 | 73 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 138 | N | 204938 | 138 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 139 | N | 204939 | 139 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 140 | N | 204940 | 140 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 141 | N | 204941 | 141 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 142 | N | 204942 | 142 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 143 | N | 204943 | 143 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 144 | N | 204944 | 144 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 145 | N | 204945 | 145 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 146 | N | 204946 | 146 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 147 | N | 204947 | 147 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 148 | N | 204948 | 148 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 149 | N | 204949 | 149 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 150 | N | 204950 | 150 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 151 | N | 204951 | 151 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 152 | N | 204952 | 152 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 153 | N | 204953 | 153 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 154 | N | 204954 | 154 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 155 | N | 204955 | 155 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 156 | N | 204956 | 156 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 157 | N | 204957 | 157 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 2 | 158 | N | 204958 | 158 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 129 | Y | 12417 | 129 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 130 | Y | 12418 | 130 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 131 | Y | 12419 | 131 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 132 | Y | 12420 | 132 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 133 | Y | 12421 | 133 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 134 | Y | 12422 | 134 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 135 | Y | 12423 | 135 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 136 | Y | 12424 | 136 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 137 | Y | 12425 | 137 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 74 | N | 208970 | 74 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 75 | N | 208971 | 75 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 76 | N | 208972 | 76 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 77 | N | 208973 | 77 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 78 | N | 208974 | 78 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 79 | N | 208975 | 79 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 80 | N | 208976 | 80 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 81 | N | 208977 | 81 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 82 | N | 208978 | 82 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 83 | N | 208979 | 83 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 84 | N | 208980 | 84 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 85 | N | 208981 | 85 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 86 | N | 208982 | 86 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 87 | N | 208983 | 87 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 88 | N | 208984 | 88 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 89 | N | 208985 | 89 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 90 | N | 208986 | 90 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 91 | N | 208987 | 91 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 92 | N | 208988 | 92 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 93 | N | 208989 | 93 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 3 | 94 | N | 208990 | 94 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 193 | Y | 16577 | 193 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 194 | Y | 16578 | 194 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 195 | Y | 16579 | 195 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 196 | Y | 16580 | 196 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 197 | Y | 16581 | 197 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 198 | Y | 16582 | 198 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 199 | Y | 16583 | 199 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 200 | Y | 16584 | 200 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 201 | Y | 16585 | 201 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 10 | N | 213002 | 10 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 11 | N | 213003 | 11 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 12 | N | 213004 | 12 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 13 | N | 213005 | 13 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 14 | N | 213006 | 14 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 15 | N | 213007 | 15 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 16 | N | 213008 | 16 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 17 | N | 213009 | 17 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 18 | N | 213010 | 18 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 19 | N | 213011 | 19 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 20 | N | 213012 | 20 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 21 | N | 213013 | 21 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 22 | N | 213014 | 22 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 23 | N | 213015 | 23 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 24 | N | 213016 | 24 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 25 | N | 213017 | 25 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 26 | N | 213018 | 26 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 27 | N | 213019 | 27 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 28 | N | 213020 | 28 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 29 | N | 213021 | 29 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 4 | 30 | N | 213022 | 30 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 257 | Y | 20737 | 257 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 258 | Y | 20738 | 258 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 259 | Y | 20739 | 259 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 260 | Y | 20740 | 260 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 261 | Y | 20741 | 261 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 262 | Y | 20742 | 262 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 263 | Y | 20743 | 263 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 264 | Y | 20744 | 264 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 265 | Y | 20745 | 265 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 458 | N | 217546 | 458 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 459 | N | 217547 | 459 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 460 | N | 217548 | 460 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 461 | N | 217549 | 461 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 462 | N | 217550 | 462 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 463 | N | 217551 | 463 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 464 | N | 217552 | 464 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 465 | N | 217553 | 465 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 466 | N | 217554 | 466 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 467 | N | 217555 | 467 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 468 | N | 217556 | 468 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 469 | N | 217557 | 469 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 470 | N | 217558 | 470 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 471 | N | 217559 | 471 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 472 | N | 217560 | 472 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 473 | N | 217561 | 473 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 474 | N | 217562 | 474 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 475 | N | 217563 | 475 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 476 | N | 217564 | 476 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 477 | N | 217565 | 477 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 5 | 478 | N | 217566 | 478 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 321 | Y | 24897 | 321 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 322 | Y | 24898 | 322 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 323 | Y | 24899 | 323 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 324 | Y | 24900 | 324 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 325 | Y | 24901 | 325 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 326 | Y | 24902 | 326 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 327 | Y | 24903 | 327 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 328 | Y | 24904 | 328 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 329 | Y | 24905 | 329 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 394 | N | 221578 | 394 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 395 | N | 221579 | 395 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 396 | N | 221580 | 396 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 397 | N | 221581 | 397 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 398 | N | 221582 | 398 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 399 | N | 221583 | 399 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 400 | N | 221584 | 400 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 401 | N | 221585 | 401 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 402 | N | 221586 | 402 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 403 | N | 221587 | 403 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 404 | N | 221588 | 404 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 405 | N | 221589 | 405 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 406 | N | 221590 | 406 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 407 | N | 221591 | 407 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 408 | N | 221592 | 408 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 409 | N | 221593 | 409 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 410 | N | 221594 | 410 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 411 | N | 221595 | 411 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 412 | N | 221596 | 412 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 413 | N | 221597 | 413 |
+-----------+------------+---------+-----------+--------+-----+
| 3 | 6 | 414 | N | 221598 | 414 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 1 | Y | 4097 | 1 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 2 | Y | 4098 | 2 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 3 | Y | 4099 | 3 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 4 | Y | 4100 | 4 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 5 | Y | 4101 | 5 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 6 | Y | 4102 | 6 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 7 | Y | 4103 | 7 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 8 | Y | 4104 | 8 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 9 | Y | 4105 | 9 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 266 | N | 266506 | 266 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 267 | N | 266507 | 267 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 268 | N | 266508 | 268 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 269 | N | 266509 | 269 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 270 | N | 266510 | 270 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 271 | N | 266511 | 271 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 272 | N | 266512 | 272 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 273 | N | 266513 | 273 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 274 | N | 266514 | 274 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 275 | N | 266515 | 275 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 276 | N | 266516 | 276 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 277 | N | 266517 | 277 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 278 | N | 266518 | 278 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 279 | N | 266519 | 279 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 280 | N | 266520 | 280 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 281 | N | 266521 | 281 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 282 | N | 266522 | 282 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 283 | N | 266523 | 283 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 284 | N | 266524 | 284 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 285 | N | 266525 | 285 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 1 | 286 | N | 266526 | 286 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 65 | Y | 8257 | 65 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 66 | Y | 8258 | 66 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 67 | Y | 8259 | 67 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 68 | Y | 8260 | 68 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 69 | Y | 8261 | 69 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 70 | Y | 8262 | 70 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 71 | Y | 8263 | 71 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 72 | Y | 8264 | 72 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 73 | Y | 8265 | 73 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 330 | N | 270666 | 330 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 331 | N | 270667 | 331 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 332 | N | 270668 | 332 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 333 | N | 270669 | 333 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 334 | N | 270670 | 334 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 335 | N | 270671 | 335 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 336 | N | 270672 | 336 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 337 | N | 270673 | 337 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 338 | N | 270674 | 338 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 339 | N | 270675 | 339 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 340 | N | 270676 | 340 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 341 | N | 270677 | 341 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 342 | N | 270678 | 342 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 343 | N | 270679 | 343 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 344 | N | 270680 | 344 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 345 | N | 270681 | 345 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 346 | N | 270682 | 346 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 347 | N | 270683 | 347 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 348 | N | 270684 | 348 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 349 | N | 270685 | 349 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 2 | 350 | N | 270686 | 350 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 129 | Y | 12417 | 129 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 130 | Y | 12418 | 130 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 131 | Y | 12419 | 131 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 132 | Y | 12420 | 132 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 133 | Y | 12421 | 133 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 134 | Y | 12422 | 134 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 135 | Y | 12423 | 135 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 136 | Y | 12424 | 136 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 137 | Y | 12425 | 137 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 394 | N | 274826 | 394 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 395 | N | 274827 | 395 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 396 | N | 274828 | 396 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 397 | N | 274829 | 397 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 398 | N | 274830 | 398 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 399 | N | 274831 | 399 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 400 | N | 274832 | 400 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 401 | N | 274833 | 401 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 402 | N | 274834 | 402 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 403 | N | 274835 | 403 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 404 | N | 274836 | 404 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 405 | N | 274837 | 405 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 406 | N | 274838 | 406 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 407 | N | 274839 | 407 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 408 | N | 274840 | 408 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 409 | N | 274841 | 409 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 410 | N | 274842 | 410 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 411 | N | 274843 | 411 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 412 | N | 274844 | 412 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 413 | N | 274845 | 413 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 3 | 414 | N | 274846 | 414 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 193 | Y | 16577 | 193 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 194 | Y | 16578 | 194 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 195 | Y | 16579 | 195 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 196 | Y | 16580 | 196 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 197 | Y | 16581 | 197 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 198 | Y | 16582 | 198 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 199 | Y | 16583 | 199 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 200 | Y | 16584 | 200 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 201 | Y | 16585 | 201 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 458 | N | 278986 | 458 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 459 | N | 278987 | 459 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 460 | N | 278988 | 460 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 461 | N | 278989 | 461 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 462 | N | 278990 | 462 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 463 | N | 278991 | 463 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 464 | N | 278992 | 464 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 465 | N | 278993 | 465 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 466 | N | 278994 | 466 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 467 | N | 278995 | 467 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 468 | N | 278996 | 468 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 469 | N | 278997 | 469 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 470 | N | 278998 | 470 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 471 | N | 278999 | 471 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 472 | N | 279000 | 472 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 473 | N | 279001 | 473 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 474 | N | 279002 | 474 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 475 | N | 279003 | 475 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 476 | N | 279004 | 476 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 477 | N | 279005 | 477 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 4 | 478 | N | 279006 | 478 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 257 | Y | 20737 | 257 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 258 | Y | 20738 | 258 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 259 | Y | 20739 | 259 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 260 | Y | 20740 | 260 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 261 | Y | 20741 | 261 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 262 | Y | 20742 | 262 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 263 | Y | 20743 | 263 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 264 | Y | 20744 | 264 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 265 | Y | 20745 | 265 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 10 | N | 282634 | 10 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 11 | N | 282635 | 11 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 12 | N | 282636 | 12 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 13 | N | 282637 | 13 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 14 | N | 282638 | 14 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 15 | N | 282639 | 15 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 16 | N | 282640 | 16 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 17 | N | 282641 | 17 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 18 | N | 282642 | 18 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 19 | N | 282643 | 19 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 20 | N | 282644 | 20 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 21 | N | 282645 | 21 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 22 | N | 282646 | 22 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 23 | N | 282647 | 23 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 24 | N | 282648 | 24 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 25 | N | 282649 | 25 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 26 | N | 282650 | 26 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 27 | N | 282651 | 27 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 28 | N | 282652 | 28 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 29 | N | 282653 | 29 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 5 | 30 | N | 282654 | 30 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 321 | Y | 24897 | 321 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 322 | Y | 24898 | 322 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 323 | Y | 24899 | 323 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 324 | Y | 24900 | 324 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 325 | Y | 24901 | 325 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 326 | Y | 24902 | 326 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 327 | Y | 24903 | 327 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 328 | Y | 24904 | 328 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 329 | Y | 24905 | 329 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 74 | N | 286794 | 74 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 75 | N | 286795 | 75 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 76 | N | 286796 | 76 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 77 | N | 286797 | 77 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 78 | N | 286798 | 78 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 79 | N | 286799 | 79 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 80 | N | 286800 | 80 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 81 | N | 286801 | 81 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 82 | N | 286802 | 82 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 83 | N | 286803 | 83 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 84 | N | 286804 | 84 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 85 | N | 286805 | 85 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 86 | N | 286806 | 86 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 87 | N | 286807 | 87 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 88 | N | 286808 | 88 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 89 | N | 286809 | 89 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 90 | N | 286810 | 90 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 91 | N | 286811 | 91 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 92 | N | 286812 | 92 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 93 | N | 286813 | 93 |
+-----------+------------+---------+-----------+--------+-----+
| 4 | 6 | 94 | N | 286814 | 94 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 1 | Y | 4097 | 1 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 2 | Y | 4098 | 2 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 3 | Y | 4099 | 3 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 4 | Y | 4100 | 4 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 5 | Y | 4101 | 5 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 6 | Y | 4102 | 6 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 7 | Y | 4103 | 7 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 8 | Y | 4104 | 8 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 9 | Y | 4105 | 9 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 330 | N | 332106 | 330 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 331 | N | 332107 | 331 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 332 | N | 332108 | 332 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 333 | N | 332109 | 333 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 334 | N | 332110 | 334 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 335 | N | 332111 | 335 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 336 | N | 332112 | 336 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 337 | N | 332113 | 337 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 338 | N | 332114 | 338 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 339 | N | 332115 | 339 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 340 | N | 332116 | 340 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 341 | N | 332117 | 341 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 342 | N | 332118 | 342 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 343 | N | 332119 | 343 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 344 | N | 332120 | 344 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 345 | N | 332121 | 345 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 346 | N | 332122 | 346 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 347 | N | 332123 | 347 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 348 | N | 332124 | 348 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 349 | N | 332125 | 349 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 1 | 350 | N | 332126 | 350 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 65 | Y | 8257 | 65 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 66 | Y | 8258 | 66 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 67 | Y | 8259 | 67 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 68 | Y | 8260 | 68 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 69 | Y | 8261 | 69 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 70 | Y | 8262 | 70 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 71 | Y | 8263 | 71 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 72 | Y | 8264 | 72 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 73 | Y | 8265 | 73 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 266 | N | 336138 | 266 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 267 | N | 336139 | 267 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 268 | N | 336140 | 268 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 269 | N | 336141 | 269 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 270 | N | 336142 | 270 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 271 | N | 336143 | 271 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 272 | N | 336144 | 272 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 273 | N | 336145 | 273 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 274 | N | 336146 | 274 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 275 | N | 336147 | 275 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 276 | N | 336148 | 276 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 277 | N | 336149 | 277 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 278 | N | 336150 | 278 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 279 | N | 336151 | 279 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 280 | N | 336152 | 280 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 281 | N | 336153 | 281 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 282 | N | 336154 | 282 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 283 | N | 336155 | 283 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 284 | N | 336156 | 284 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 285 | N | 336157 | 285 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 2 | 286 | N | 336158 | 286 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 129 | Y | 12417 | 129 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 130 | Y | 12418 | 130 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 131 | Y | 12419 | 131 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 132 | Y | 12420 | 132 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 133 | Y | 12421 | 133 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 134 | Y | 12422 | 134 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 135 | Y | 12423 | 135 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 136 | Y | 12424 | 136 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 137 | Y | 12425 | 137 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 458 | N | 340426 | 458 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 459 | N | 340427 | 459 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 460 | N | 340428 | 460 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 461 | N | 340429 | 461 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 462 | N | 340430 | 462 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 463 | N | 340431 | 463 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 464 | N | 340432 | 464 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 465 | N | 340433 | 465 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 466 | N | 340434 | 466 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 467 | N | 340435 | 467 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 468 | N | 340436 | 468 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 469 | N | 340437 | 469 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 470 | N | 340438 | 470 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 471 | N | 340439 | 471 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 472 | N | 340440 | 472 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 473 | N | 340441 | 473 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 474 | N | 340442 | 474 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 475 | N | 340443 | 475 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 476 | N | 340444 | 476 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 477 | N | 340445 | 477 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 3 | 478 | N | 340446 | 478 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 193 | Y | 16577 | 193 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 194 | Y | 16578 | 194 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 195 | Y | 16579 | 195 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 196 | Y | 16580 | 196 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 197 | Y | 16581 | 197 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 198 | Y | 16582 | 198 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 199 | Y | 16583 | 199 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 200 | Y | 16584 | 200 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 201 | Y | 16585 | 201 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 394 | N | 344458 | 394 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 395 | N | 344459 | 395 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 396 | N | 344460 | 396 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 397 | N | 344461 | 397 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 398 | N | 344462 | 398 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 399 | N | 344463 | 399 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 400 | N | 344464 | 400 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 401 | N | 344465 | 401 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 402 | N | 344466 | 402 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 403 | N | 344467 | 403 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 404 | N | 344468 | 404 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 405 | N | 344469 | 405 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 406 | N | 344470 | 406 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 407 | N | 344471 | 407 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 408 | N | 344472 | 408 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 409 | N | 344473 | 409 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 410 | N | 344474 | 410 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 411 | N | 344475 | 411 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 412 | N | 344476 | 412 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 413 | N | 344477 | 413 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 4 | 414 | N | 344478 | 414 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 257 | Y | 20737 | 257 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 258 | Y | 20738 | 258 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 259 | Y | 20739 | 259 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 260 | Y | 20740 | 260 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 261 | Y | 20741 | 261 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 262 | Y | 20742 | 262 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 263 | Y | 20743 | 263 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 264 | Y | 20744 | 264 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 265 | Y | 20745 | 265 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 74 | N | 348234 | 74 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 75 | N | 348235 | 75 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 76 | N | 348236 | 76 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 77 | N | 348237 | 77 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 78 | N | 348238 | 78 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 79 | N | 348239 | 79 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 80 | N | 348240 | 80 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 81 | N | 348241 | 81 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 82 | N | 348242 | 82 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 83 | N | 348243 | 83 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 84 | N | 348244 | 84 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 85 | N | 348245 | 85 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 86 | N | 348246 | 86 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 87 | N | 348247 | 87 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 88 | N | 348248 | 88 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 89 | N | 348249 | 89 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 90 | N | 348250 | 90 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 91 | N | 348251 | 91 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 92 | N | 348252 | 92 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 93 | N | 348253 | 93 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 5 | 94 | N | 348254 | 94 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 321 | Y | 24897 | 321 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 322 | Y | 24898 | 322 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 323 | Y | 24899 | 323 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 324 | Y | 24900 | 324 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 325 | Y | 24901 | 325 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 326 | Y | 24902 | 326 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 327 | Y | 24903 | 327 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 328 | Y | 24904 | 328 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 329 | Y | 24905 | 329 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 10 | N | 352266 | 10 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 11 | N | 352267 | 11 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 12 | N | 352268 | 12 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 13 | N | 352269 | 13 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 14 | N | 352270 | 14 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 15 | N | 352271 | 15 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 16 | N | 352272 | 16 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 17 | N | 352273 | 17 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 18 | N | 352274 | 18 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 19 | N | 352275 | 19 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 20 | N | 352276 | 20 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 21 | N | 352277 | 21 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 22 | N | 352278 | 22 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 23 | N | 352279 | 23 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 24 | N | 352280 | 24 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 25 | N | 352281 | 25 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 26 | N | 352282 | 26 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 27 | N | 352283 | 27 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 28 | N | 352284 | 28 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 29 | N | 352285 | 29 |
+-----------+------------+---------+-----------+--------+-----+
| 5 | 6 | 30 | N | 352286 | 30 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 1 | Y | 4097 | 1 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 2 | Y | 4098 | 2 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 3 | Y | 4099 | 3 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 4 | Y | 4100 | 4 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 5 | Y | 4101 | 5 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 6 | Y | 4102 | 6 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 7 | Y | 4103 | 7 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 8 | Y | 4104 | 8 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 9 | Y | 4105 | 9 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 394 | N | 397706 | 394 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 395 | N | 397707 | 395 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 396 | N | 397708 | 396 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 397 | N | 397709 | 397 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 398 | N | 397710 | 398 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 399 | N | 397711 | 399 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 400 | N | 397712 | 400 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 401 | N | 397713 | 401 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 402 | N | 397714 | 402 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 403 | N | 397715 | 403 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 404 | N | 397716 | 404 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 405 | N | 397717 | 405 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 406 | N | 397718 | 406 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 407 | N | 397719 | 407 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 408 | N | 397720 | 408 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 409 | N | 397721 | 409 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 410 | N | 397722 | 410 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 411 | N | 397723 | 411 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 412 | N | 397724 | 412 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 413 | N | 397725 | 413 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 1 | 414 | N | 397726 | 414 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 65 | Y | 8257 | 65 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 66 | Y | 8258 | 66 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 67 | Y | 8259 | 67 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 68 | Y | 8260 | 68 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 69 | Y | 8261 | 69 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 70 | Y | 8262 | 70 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 71 | Y | 8263 | 71 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 72 | Y | 8264 | 72 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 73 | Y | 8265 | 73 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 458 | N | 401866 | 458 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 459 | N | 401867 | 459 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 460 | N | 401868 | 460 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 461 | N | 401869 | 461 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 462 | N | 401870 | 462 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 463 | N | 401871 | 463 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 464 | N | 401872 | 464 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 465 | N | 401873 | 465 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 466 | N | 401874 | 466 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 467 | N | 401875 | 467 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 468 | N | 401876 | 468 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 469 | N | 401877 | 469 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 470 | N | 401878 | 470 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 471 | N | 401879 | 471 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 472 | N | 401880 | 472 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 473 | N | 401881 | 473 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 474 | N | 401882 | 474 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 475 | N | 401883 | 475 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 476 | N | 401884 | 476 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 477 | N | 401885 | 477 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 2 | 478 | N | 401886 | 478 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 129 | Y | 12417 | 129 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 130 | Y | 12418 | 130 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 131 | Y | 12419 | 131 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 132 | Y | 12420 | 132 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 133 | Y | 12421 | 133 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 134 | Y | 12422 | 134 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 135 | Y | 12423 | 135 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 136 | Y | 12424 | 136 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 137 | Y | 12425 | 137 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 266 | N | 405770 | 266 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 267 | N | 405771 | 267 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 268 | N | 405772 | 268 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 269 | N | 405773 | 269 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 270 | N | 405774 | 270 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 271 | N | 405775 | 271 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 272 | N | 405776 | 272 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 273 | N | 405777 | 273 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 274 | N | 405778 | 274 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 275 | N | 405779 | 275 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 276 | N | 405780 | 276 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 277 | N | 405781 | 277 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 278 | N | 405782 | 278 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 279 | N | 405783 | 279 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 280 | N | 405784 | 280 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 281 | N | 405785 | 281 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 282 | N | 405786 | 282 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 283 | N | 405787 | 283 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 284 | N | 405788 | 284 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 285 | N | 405789 | 285 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 3 | 286 | N | 405790 | 286 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 193 | Y | 16577 | 193 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 194 | Y | 16578 | 194 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 195 | Y | 16579 | 195 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 196 | Y | 16580 | 196 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 197 | Y | 16581 | 197 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 198 | Y | 16582 | 198 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 199 | Y | 16583 | 199 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 200 | Y | 16584 | 200 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 201 | Y | 16585 | 201 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 330 | N | 409930 | 330 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 331 | N | 409931 | 331 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 332 | N | 409932 | 332 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 333 | N | 409933 | 333 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 334 | N | 409934 | 334 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 335 | N | 409935 | 335 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 336 | N | 409936 | 336 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 337 | N | 409937 | 337 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 338 | N | 409938 | 338 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 339 | N | 409939 | 339 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 340 | N | 409940 | 340 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 341 | N | 409941 | 341 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 342 | N | 409942 | 342 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 343 | N | 409943 | 343 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 344 | N | 409944 | 344 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 345 | N | 409945 | 345 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 346 | N | 409946 | 346 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 347 | N | 409947 | 347 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 348 | N | 409948 | 348 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 349 | N | 409949 | 349 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 4 | 350 | N | 409950 | 350 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 257 | Y | 20737 | 257 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 258 | Y | 20738 | 258 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 259 | Y | 20739 | 259 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 260 | Y | 20740 | 260 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 261 | Y | 20741 | 261 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 262 | Y | 20742 | 262 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 263 | Y | 20743 | 263 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 264 | Y | 20744 | 264 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 265 | Y | 20745 | 265 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 138 | N | 413834 | 138 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 139 | N | 413835 | 139 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 140 | N | 413836 | 140 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 141 | N | 413837 | 141 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 142 | N | 413838 | 142 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 143 | N | 413839 | 143 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 144 | N | 413840 | 144 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 145 | N | 413841 | 145 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 146 | N | 413842 | 146 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 147 | N | 413843 | 147 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 148 | N | 413844 | 148 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 149 | N | 413845 | 149 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 150 | N | 413846 | 150 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 151 | N | 413847 | 151 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 152 | N | 413848 | 152 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 153 | N | 413849 | 153 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 154 | N | 413850 | 154 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 155 | N | 413851 | 155 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 156 | N | 413852 | 156 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 157 | N | 413853 | 157 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 5 | 158 | N | 413854 | 158 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 321 | Y | 24897 | 321 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 322 | Y | 24898 | 322 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 323 | Y | 24899 | 323 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 324 | Y | 24900 | 324 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 325 | Y | 24901 | 325 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 326 | Y | 24902 | 326 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 327 | Y | 24903 | 327 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 328 | Y | 24904 | 328 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 329 | Y | 24905 | 329 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 202 | N | 417994 | 202 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 203 | N | 417995 | 203 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 204 | N | 417996 | 204 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 205 | N | 417997 | 205 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 206 | N | 417998 | 206 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 207 | N | 417999 | 207 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 208 | N | 418000 | 208 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 209 | N | 418001 | 209 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 210 | N | 418002 | 210 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 211 | N | 418003 | 211 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 212 | N | 418004 | 212 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 213 | N | 418005 | 213 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 214 | N | 418006 | 214 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 215 | N | 418007 | 215 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 216 | N | 418008 | 216 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 217 | N | 418009 | 217 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 218 | N | 418010 | 218 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 219 | N | 418011 | 219 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 220 | N | 418012 | 220 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 221 | N | 418013 | 221 |
+-----------+------------+---------+-----------+--------+-----+
| 6 | 6 | 222 | N | 418014 | 222 |
+-----------+------------+---------+-----------+--------+-----+
Table 3: Example Derivation Results
Authors' Addresses
Jordan Head (editor)
Juniper Networks
1137 Innovation Way
Sunnyvale, CA
United States of America
Email: [email protected]
Tony Przygienda
Juniper Networks
1137 Innovation Way
Sunnyvale, CA
United States of America
Email: [email protected]
Wen Lin
Juniper Networks
10 Technology Park Drive
Westford, MA
United States of America
Email: [email protected]