content
stringlengths 228
999k
| pred_label
stringclasses 1
value | pred_score
float64 0.5
1
|
---|---|---|
How to Know Plugin ID from Running Code
Hi,
Is there any way to know the plugin ID from inside the running code?
Thank you in advance.
Additional Context (2023/06/13)
In my powerup, I am trying to scan the plugin data of all members on a board. Rest API returns the plugin data but it includes multiple instances from different powerups (plugins) installed in the board. So I need to know the plugin ID to select the correct instance.
I know that I can get the plugin ID statically from Trello’s admin page. But I have two powerups with the same code, one for production and the other one for testing. So I want to get the plugin ID dynamically inside the running code.
1 Like
Hi there,
to get the Id of the App for Navigating between App Pages, i use the Product Context “localId”
And get the RefId which sits between “extension/” and “/static”.
import { useProductContext } from '@forge/ui';
function getRefId() {
const context = useProductContext();
const id = context.localId;
const startIndex = id.indexOf('extension/') + 10; // Add 10 to exclude 'extension/'
const endIndex = id.indexOf('/static');
const refId = id.substring(startIndex, endIndex);
return refId;
}
Regards,
Adrian
Hi Adrian,
Thank you for the reply.
I tried it, but I got an exception when calling useProductContext() in my React function component.
I don’t have knowledge about Forge, but it is a different product from Trello. Would the solution you provided also be applicable to Trello PowerUp?
Regards,
Tateo
Oh, I’m sorry. I think this was a mistake on my end. I thought I filtered all Topics regarding “Forge”. I just now see this is “Trello”.
Unfortunately, I have no experience developing for Trello.
Regards,
Adrian
I see :slight_smile:
Anyway, thank you for your reply.
Regards,
Tateo
You can use the t.jwt method which contains the idPlugin - you’ll have to decode it of course. I’m assuming that this is happening on the backend.
An alternative is to just hardcode it. Since you know the plugin IDs of both your test and production Power-Ups (which is unlikely going to change), why not just have it as a constant and do an if-else conditional?
1 Like
Hi,
Thank you for the reply.
It worked! My code runs purely in a browser, so it may not be the intended usage of JWT, but it seems viable.
why not just have it as a constant and do an if-else conditional?
Yeah, it can be a solution :slight_smile: . My initial implementation was to read plugin IDs from .env file and switch them based on a flag imposed in a build script.
The risk of that implementation is that only after I deploy the production code would I know that the ID is mistyped or the switching code has a bug. In my long history, I experienced such horrifying moments … :cold_face:
I found that the context data returned by t.getContext() includes the plugin ID.
However, it is not documented.
Could someone from the Trello team confirm if I can use the context to extract the plugin ID?
Regards,
Tateo
|
__label__pos
| 0.996869 |
Open study
is now brainly
With Brainly you can:
• Get homework help from millions of students and moderators
• Learn how to solve problems with step-by-step explanations
• Share your knowledge and earn points by helping other students
• Learn anywhere, anytime with the Brainly app!
A community for students.
How to calculate the distance from the moon using an object like a penny (As well as knowing the diameter of the moon) using trig.
Trigonometry
See more answers at brainly.com
At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat.
Join Brainly to access
this expert answer
SEE EXPERT ANSWER
To see the expert answer you'll need to create a free account at Brainly
what? you need measurements..is this a question off a text book ?
no its not. Using a penny you can calculate the distance from the moon to yourself.
oh idk how to do that :/ sorry!!
Not the answer you are looking for?
Search for more explanations.
Ask your own question
Other answers:
keep the penny at some distance ( few cm) such that it blocks the view of the moon, then u can measure the angle subtended by the diameters of the two. well u get sumthing like this |dw:1359388725848:dw| angle=arc/radius arc = diameter of the moon radius ur ans :)
Helpful response :)
Not the answer you are looking for?
Search for more explanations.
Ask your own question
|
__label__pos
| 0.777223 |
Can WordPress site be converted to GoDaddy?
Can I transfer my WordPress site to GoDaddy?
Managed WordPress comes with an auto-migration feature to conveniently move your existing site. Go to your GoDaddy product page. From the list of Your existing plans, select the Managed WordPress plan you want to use for the site you’re moving, and then select Next. …
Is GoDaddy compatible with WordPress?
Hundreds of thousands of sites trust their online presence to WordPress — and with WordPress Hosting from GoDaddy, you can, too.
How do I upload my local WordPress site to GoDaddy?
Go back to the Upload Files button to take you to the webroot page and look for the newly created folder where you told GoDaddy to create your WordPress set-up. In my case that was the WP folder. Click on the folder name in the left panel to open its contents in the right panel. Look for a file called wp-config.
How do I manually backup my WordPress site GoDaddy?
Manually backup my WordPress website in GoDaddy Pro
1. Log in to your GoDaddy Pro account. (Need help logging in?)
2. Select Sites in the left sidebar.
3. Hover over the website and select Backups.
4. Select Backup Now.
5. Type the name of your backup and select Save.
THIS IS INTERESTING: How do I completely remove Facebook from WooCommerce?
What’s better GoDaddy or WordPress?
The Winner Between GoDaddy vs. WordPress. … GoDaddy isn’t a good hosting company and WordPress is a content management system (CMS) offered by most hosting companies. The real winner here is ITX Design with the wide variety of excellent hosting companies, all offering WordPress for your website and blog needs.
Is it better to use GoDaddy or WordPress?
Both options are well-suited for beginners. However, GoDaddy is recommended for those who prefer simplicity and quick setup in their site management. WordPress is best suited for bloggers and other admins who want access to more customization to appearance and functions for a lower cost.
Can I use my own domain with WordPress free?
Connect your own domain
Every WordPress.com site comes with a free subdomain. If you already own a domain, or you’d like to register a new one, you can add a custom domain to your site starting with a Personal Plan.
How do I move my website to GoDaddy?
Migrate my website in GoDaddy Pro to another server
1. Log in to your GoDaddy Pro account. …
2. Select Sites in the left sidebar.
3. Hover over the website and select Backups.
4. Select the date of the backup. …
5. Select Clone website. …
6. Type the destination IP or temporary URL.
How do I move my WordPress site from localhost to server?
Let’s quickly review the steps:
1. Export the local database.
2. Create a new database on the live server.
3. Import the local database.
4. Replace the old URLs with the new location.
5. Upload WordPress files.
6. Reconfigure wp-config. php.
7. Update permalinks.
How do I move my WordPress site to cPanel?
Migrate your WordPress installation
1. Pre-migration requirements. …
2. Export the WordPress database. …
3. Upload the files to the new server. …
4. Create a MySQL database. …
5. Import the WordPress database to cPanel. …
6. Change the website URL. …
7. Configure the WordPress database settings. …
8. Update links and images.
THIS IS INTERESTING: Why WordPress is the most popular website builder?
How do I move my website from GoDaddy to cPanel?
Restoring Your Site
1. Log in to your GoDaddy account.
2. Click Web Hosting.
3. Next to the cPanel account you want to use, click Manage.
4. Click cPanel Admin.
5. In the Files section, click Backup Wizard.
6. Click Restore.
7. Click Home Directory.
How do I move my website from GoDaddy to hosting?
Transfer in 4 simple steps:
1. Enter the domain that you wish to transfer. Simply enter your domain name and click on Transfer.
2. Proceed with the purchase. After the purchase you will be redirected to our members area.
3. Enter the EPP code and confirm transfer. …
4. Confirm domain transfer email.
|
__label__pos
| 0.894631 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free.
I'm curious about this thing... see example:
switch(x)
{
case(a):
{
//do stuff
}
break;
case(b):
//do stuff
break;
}
All my life I've done it like case b, but since C# allows me to use it, and Visual Studio allows me to collapse that thing, I am curious - what is the real difference between case a (with braces) and case b?
share|improve this question
15
There is no difference except that variables defined in case a are only visible in that block. – juergen d Apr 10 '12 at 19:40
3
Nothing really. It's just that sometimes you want to create and use objects that are scoped to the case. Without brackets anything you define there has wider scope you see. – Robinson Apr 10 '12 at 19:40
1
Not what you asked, but I find long case blocks hard to read. If I need scoping or any other complexity, it gets a new method. – Jay Bazuzi Apr 11 '12 at 0:01
4 Answers 4
up vote 69 down vote accepted
Braces {} are used to define a scope for a set of operations. Bizarrely, the following will compile and work:
private void ConnectionStateChange(object sender, StateChangeEventArgs e)
{
string s = "hi";
switch(s)
{
case "hi":
{
int a = 1;
a++;
}
{
int a = 2;
a++;
}
break;
}
{
int a = 1;
a++;
}
{
int a = 2;
a++;
}
}
As you can see, in that one method I've created four variables, each called a. Each is entirely separate because, as local variables, they exist only within their own scope.
Does that make some sort of sense?
share|improve this answer
54
Why bizarrely? That looks pretty normal (albeit contrived). – Joey Apr 10 '12 at 19:51
34
If having 4 different local variables named a in the same function looks pretty normal to you, you've been looking at the wrong code all this time. :) – cHao Apr 10 '12 at 19:54
9
Hence contrived, but introducing a new local scope isn't that unusual, imho. – Joey Apr 10 '12 at 20:04
12
This should not be considered as bizarre at all. – jsn Apr 10 '12 at 20:52
2
How sad that the accepted answer actually fails to explain (or, indeed, obviously to comprehend) why this is useful. – Konrad Rudolph Apr 11 '12 at 14:00
A pair of braces (not brackets -- [] -- and not parentheses -- () -- but braces {}) with zero or more statements in them is a legal statement in C#, and therefore may appear anywhere that a statement may legally appear.
As others have pointed out, the typical reason for doing so is because such a statement introduces a new local variable declaration space, which then defines the scope of the local variables declared within it. (Recall that the "scope" of an element is the region of program text in which the element may be referred to by its unqualified name.)
I note that this is particularly interesting in a switch statement because the scoping rules in a switch are a little bit strange. For details of how strange they are, see "Case 3:" in my article on the subject:
http://ericlippert.com/2009/08/13/four-switch-oddities/
share|improve this answer
1
+1 for the true source. – Daniel A. White Apr 10 '12 at 19:53
I've always felt disappointed that C# didn't introduce a smarter switch statement, as the C switch is one of the more bizarre and problematic constructs in programming. – Nate C-K Apr 17 '12 at 20:45
6
@NateC-K: C# did introduce a better switch statement! The C# designers closely examined the shortcomings of the C/C++ switch statement and improved upon it in many ways. For example: (1) The "no fall through" rule eliminates a common source of bugs. (2) You can switch on strings. (3) You can switch on nullable types. (4) The types of the case labels are checked to be compatible with the governing type of the switch. (5) "goto case 1" works properly, unlike in C++. And so on; in almost every conceivable way the C# switch is an improvement. – Eric Lippert Apr 17 '12 at 21:15
You can switch on strings is one of the more underappreciated features. So many languages don't allow that (e.g., Java or Delphi) and it turns a simple problem into a big if..elseif cascade. – Michael Stum Apr 17 '12 at 21:31
2
@phoog: I refer you to section 3.3.1 of The C++ Programming Language, 2nd edition (old, I know; it's the one I have handy), which states on page 103 "Note that a case label is not a suitable label for use in a goto statement." Perhaps C++ has added this feature while I wasn't paying attention, but it did not have the feature in 1991, when I was learning C++. – Eric Lippert Apr 17 '12 at 23:16
It creates a new scope in which you can create new variables.
share|improve this answer
It creates new scope for variables you used. Scope of variables can be tricky sometimes. For instance in the code you posted;
switch(x)
{
case(a):
{
int i = 0;
}
break;
case(b):
i = 1; // Error: The name 'i' doesn't exist in the current context
break;
}
The error makes sense here as in case(b) variable a is accessed out of scope. Now on the other hand,
switch(x)
{
case(a):
{
int i = 0;
}
break;
case(b):
int i = 1; // Error: A local variable named 'i' cannot be declared in this scope because it would give a different meaning to 'i', which is already used in a 'child' scope to denote something else
break;
}
Above two errors look contradictory to each other. To get around this you should define the scope separately in both case statements,
switch(x)
{
case(a):
{
int i = 0;
}
break;
case(b):
{
int i = 1; // No error
}
break;
}
Eric Lippert shared a very good link to his blog to explain variable scopes in case statement. You should have a look at it.
share|improve this answer
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.964184 |
paint-brush
How to Write Fewer Lines of Code with the OpenAPI Generatorby@alphamikle
4,381 reads
4,381 reads
How to Write Fewer Lines of Code with the OpenAPI Generator
by Mike AlfaOctober 1st, 2021
Read on Terminal Reader
Read this story w/o Javascript
tldt arrow
Too Long; Didn't Read
There are many approaches to solving the problem of developing an API that must be a specification. The ideal situation would be when your backing provides the most complete spec, which will describe all the methods used by clients, as well as all transmitted and received data and possible errors. But this is not the case everywhere, so we must strive for the best. For a feature to appear, some code must appear anyway - and yes, we will not write it, but there will be a code generator.
featured image - How to Write Fewer Lines of Code with the OpenAPI Generator
Mike Alfa HackerNoon profile picture
Hey! I'll start with the main thing - I'm a lazy person. I'm a very, very lazy developer. I have to write a lot of code - both for the backend and for the frontend. And my laziness constantly torments me, saying: You could not write this code, but you write ... This is how we live.
But what to do? How can you get rid of the need to write at least part of the code?
There are many approaches to solving this problem. Let's take a look at some of them.
OpenAPI
Let's say your backend is a collection of REST services. The first place to start is to study your backend's documentation in the hope of stumbling upon the OpenAPI specification. The ideal situation would be when your backing provides the most complete spec, which will describe all the methods used by clients, as well as all transmitted and received data and possible errors.
In fact, I am writing these lines and I think this is self-evident: it seems obvious that if you are developing an API, then there must be a specification. Not in the form of a simple enumeration of methods, but as complete as possible - and, most importantly, generated from code and not written by hand. But this is not the case everywhere, so we must strive for the best.
Well, ok, so we found our spec; it is full-fledged, without dark spots. Great - it's almost done. Now it remains to use it to achieve our insidious goals 😈. It just so happens that I also write applications on Flutter and as a client, I would consider it but the approach used here is also suitable for web clients (and for any others, there is also something to use).
Client-initiated generation
I think it will not be a revelation that there is no magic. For a feature to appear, some code must appear anyway. And yes, we will not write it, but there will be a code generator. And this is where the fun begins. There are libraries for Flutter (and not only for it), which will generate code for working with the backend based on annotations that you can throw on pseudo-services (which you still have to write).
It looks something like this:
import 'package:dio/dio.dart';
import 'package:json_annotation/json_annotation.dart';
import 'package:retrofit/retrofit.dart';
part 'example.g.dart';
@RestApi(baseUrl: "https://5d42a6e2bc64f90014a56ca0.mockapi.io/api/v1/")
abstract class RestClient {
factory RestClient(Dio dio, {String baseUrl}) = _RestClient;
@GET("/tasks/{id}")
Future<Task> getTask(@Path("id") String id);
@GET('/demo')
Future<String> queries(@Queries() Map<String, dynamic> queries);
@GET("https://httpbin.org/get")
Future<String> namedExample(@Query("apikey") String apiKey, @Query("scope") String scope, @Query("type") String type, @Query("from") int from);
@PATCH("/tasks/{id}")
Future<Task> updateTaskPart(@Path() String id, @Body() Map<String, dynamic> map);
@PUT("/tasks/{id}")
Future<Task> updateTask(@Path() String id, @Body() Task task);
@DELETE("/tasks/{id}")
Future<void> deleteTask(@Path() String id);
@POST("/tasks")
Future<Task> createTask(@Body() Task task);
@POST("http://httpbin.org/post")
Future<void> createNewTaskFromFile(@Part() File file);
@POST("http://httpbin.org/post")
@FormUrlEncoded()
Future<String> postUrlEncodedFormData(@Field() String hello);
}
@JsonSerializable()
class Task {
String id;
String name;
String avatar;
String createdAt;
Task({this.id, this.name, this.avatar, this.createdAt});
factory Task.fromJson(Map<String, dynamic> json) => _$TaskFromJson(json);
Map<String, dynamic> toJson() => _$TaskToJson(this);
}
After starting the generator, we will get a working service ready to use:
import 'package:logger/logger.dart';
import 'package:retrofit_example/example.dart';
import 'package:dio/dio.dart';
final logger = Logger();
void main(List<String> args) {
final dio = Dio(); // Provide a dio instance
dio.options.headers["Demo-Header"] = "demo header";
final client = RestClient(dio);
client.getTasks().then((it) => logger.i(it));
}
This method (applicable to all types of clients) can save you a lot of time. If your backing does not have a normal OpenAPI scheme, then you sometimes do not have a very large choice. However, if there is a high-quality scheme, then in comparison with that method code generation (which we will talk about further) the current version has several disadvantages:
• You still need to write code - less than before, but a lot
• You must independently track changes in the backend and change the code you write after them
It is worth dwelling on the last point in a little more detail - if (when) changes occur on the back in the methods that are already used in your application, then you need to track these changes yourself, modify the DTO models, and, possibly, the endpoint. Also, if for some incredible reason backward-incompatible changes to the method occur, then you will learn about this only at runtime (at the time of calling this method) - which may not happen during development (especially if you do not have or do not have enough tests) and then you will have an extremely unpleasant bug in the production.
A generation without "fog of war"
Have not you forgotten that we have a high-quality OpenAPI scheme? Fine! The whole battlefield is open to you and there is no point in groping (* I added this phrase in order to somehow justify the title of this block, which, with a scratch, you have to invent yourself; generation will not help here *). Then you should pay attention to those tools that the entire OpenAPI ecosystem offers in principle!
Of all the variety of hammers and microscopes, we are now interested in only one. And its name is OpenAPI Generator. This rasp allows you to generate code for any language (well, almost), as well as for both clients and the server (to make a mock server, for example).
Let's get to practice already:
As a diagram, we will take what the Swagger demo offers. Next, we need to install the generator itself. Here is a great tutorial for that. If you are reading this article, then with a high degree of probability you already have Node.js installed, which means that one of the easiest installation methods would be to use npm-versions.
The next step is the generation itself. There are a couple of ways to do this:
• Using a pure console command
• Using a command in conjunction with a config file
In our example, the 1st method will look like this:
openapi-generator-cli generate -i https://petstore.swagger.io/v2/swagger.json -g dart-dio -o .pet_api --additional-properties pubName=pet_api
An alternative way is to describe the parameters in the openapitools.json file, for example:
{
"$schema": "node_modules/@openapitools/openapi-generator-cli/config.schema.json",
"spaces": 2,
"generator-cli": {
"version": "5.1.1",
"generators": {
"pet": {
"input-spec": "https://petstore.swagger.io/v2/swagger.json",
"generator-name": "dart-dio",
"output": ".pet_api",
"additionalProperties": {
"pubName": "pet_api"
}
}
}
}
}
And then running the command:
openapi-generator-cli generate
A complete list of available parameters for Dart is presented here. And for any other generator, the list of these parameters can be found by running the following console command:
# <generator-name>, dart-dio - for example
openapi-generator-cli config-help -g dart-dio
Even if you choose the complete console option, after the first start of the generator, you will have a configuration file with the version of the generator used written in it, as in this example - 5.1.1. In the case of Dart / Flutter, this version is very important, since each of them can carry certain changes, including those with backward incompatibility or interesting effects.
So, since version 5.1.0, the generator uses null-safety, but implements this through explicit checks, and not the capabilities of the Dart language itself (for now, unfortunately). For example, if in your schema some of the model fields are marked as required, then if your backend returns a model without this field, then an error will occur in runtime.
flutter: Deserializing '[id, 9, category, {id: 0, name: cats}, photoUrls, [string], tags, [{id: 0, na...' to 'Pet' failed due to: Tried to construct class "Pet" with null field "name". This is forbidden; to allow it, mark "name" with @nullable.
And all because the name field of the Pet model is explicitly specified as required, but is absent in the request response:
{
"Pet": {
"type": "object",
"required": [
"name", // <- required field
"photoUrls" // <- and this too
],
"properties": {
"id": {
"type": "integer",
"format": "int64"
},
"category": {
"$ref": "#/definitions/Category"
},
"name": {
"type": "string",
"example": "doggie"
},
"photoUrls": {
"type": "array",
"xml": {
"wrapped": true
},
"items": {
"type": "string",
"xml": {
"name": "photoUrl"
}
}
},
"tags": {
"type": "array",
"xml": {
"wrapped": true
},
"items": {
"xml": {
"name": "tag"
},
"$ref": "#/definitions/Tag"
}
},
"status": {
"type": "string",
"description": "pet status in the store",
"enum": [
"available",
"pending",
"sold"
]
}
},
"xml": {
"name": "Pet"
}
},
"Category": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int64"
},
"name": {
"type": "string"
}
},
"xml": {
"name": "Category"
}
},
"Tag": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int64"
},
"name": {
"type": "string"
}
},
"xml": {
"name": "Tag"
}
}
}
Well, the generator is running - the job is done, but there are a few simple steps left in which we hardly have to write code (for the sake of this, everything was started!). The standard openapi-generator will generate only the basic code, which uses libraries that already rely on code generation by means of Dart itself. Therefore, after completing the basic generation, you need to start Dart / Flutter:
cd .pet_api
flutter pub get
flutter pub run build_runner build --delete-conflicting-outputs
At the output, we get a ready-made package, which will be located where you specified in the configuration file or console command. It remains to include it in pubspec.yaml:
name: openapi_sample
description: Sample for OpenAPI
version: 1.0.0
publish_to: none
environment:
flutter: ">=2.0.0"
sdk: ">=2.12.0 <3.0.0"
dependencies:
flutter:
sdk: flutter
pet_api: # <- our generated library
path: .pet_api
And use this library as follows:
import 'package:dio/dio.dart';
import 'package:pet_api/api/pet_api.dart';
import 'package:pet_api/model/pet.dart';
import 'package:pet_api/serializers.dart'; // <- we must use [standartSerializers] from this package module
Future<Pet> loadPet() async {
final Dio dio = Dio(BaseOptions(baseUrl: 'https://petstore.swagger.io/v2'));
final PetApi petApi = PetApi(dio, standardSerializers);
const petId = 9;
final Response<Pet> response = await petApi.getPetById(petId, headers: <String, String>{'Authorization': 'Bearer special-key'});
return response.data;
}
An important part of it is the need to write down which serializers we will use in order for JSONs to turn into normal models. And also drop the Dio instances into the generated ... Api, specifying the base server URLs in them.
Dart nuances
It seems that this is all that can be said on this topic, but Dart recently received a major update, it added null-safety. And all packages are being actively updated, and projects are migrating to a new version of the language, more resistant to our errors.
However, at the moment, the generator does not support this new version and in several directions at once:
• Language version in the package (in the latest version of the generator - 5.1.1, Dart 2.7.0 is used)
• Outdated packages
• Backward incompatibility of some of the packages used (in the current version of Dio, some methods have different names)
name: pet_api
version: 1.0.0
description: OpenAPI API client
environment:
sdk: '>=2.7.0 <3.0.0' # -> '>=2.12.0 <3.0.0'
dependencies:
dio: '^3.0.9' # Actual -> 4.0.0
built_value: '>=7.1.0 <8.0.0' # -> 8.1.0
built_collection: '>=4.3.2 <5.0.0' # -> 5.1.0
dev_dependencies:
built_value_generator: '>=7.1.0 <8.0.0' # -> 8.1.0
build_runner: any # -> 2.0.5
test: '>=1.3.0 <1.16.0' # -> 1.17.9
And this can cause several problems at once - if you have already switched to Flutter 2.0+ and Dart 2.12+, then in order to start the code generation of the second stage (which is on Dart), you will have to switch the language to the old version, FVM allows you to do this pretty quickly, but it's still an inconvenience.
The second disadvantage is that this generated API package is now a legacy dependency, which will prevent your new project from starting with sound-null-safety. You will be able to take advantage of null-safety when writing code, but you will not be able to check and optimize runtime, and the project will only work if you use the additional Flutter parameter: --no-sound-null-safety.
There are three options for correcting this situation:
• Make a pull request with the openapi-generator update
• Wait until someone else does it - in half a year it will most likely happen
• Correct the generated code so that it now becomes sound-null-safety
The third point sounds like we will have to write code ... It will have to be a little, but not the same.
Before starting our manipulations, I will show you the bash script that has turned out at the moment and which runs all our code generation logic:
openapi-generator-cli generate
cd .pet_api || exit
flutter pub get
flutter pub run build_runner build --delete-conflicting-outputs
This script also relies on the configuration file we discussed above. Let's update this script so that it immediately updates all the dependencies of our generated package:
openapi-generator-cli generate
cd .pet_api || exit
echo "name: pet_api
version: 1.0.0
description: OpenAPI API client
environment:
sdk: '>=2.12.0 <3.0.0'
dependencies:
dio: ^4.0.0 built_value: ^8.1.0 built_collection: ^5.1.0
dev_dependencies:
built_value_generator: ^8.1.0 build_runner: ^2.0.5 test: ^1.17.9" > pubspec.yaml
flutter pub get
flutter pub run build_runner build --delete-conflicting-outputs
Now - our generator will start correctly with the new version of Dart (>2.12.0) in the system. Everything would be fine, but we still won't be able to use our api package! First, the generated code is replete with annotations that bind it to the old version of the language:
//
// AUTO-GENERATED FILE, DO NOT MODIFY!
//
// @dart=2.7 <--
// ignore_for_file: unused_import
And secondly, there is a backward incompatibility in the logic of Dio and packages that are used to serialize / deserialize models. Let's fix it!
To fix it, we need to write quite a bit of utility code that will fix the incompatibilities that appear in our generated code. I mentioned above that I would advise installing the generator via npm, as the easiest way, if you have Node.js, respectively, by inertia - and the utility code will be written in JS. If desired, it is easy to rewrite it in Dart, if you do not have Node.js and do not want to mess with it.
Let's take a look at these simple manipulations:
const fs = require('fs');
const p = require('path');
const dartFiles = [];
function main() {
const openapiDirPath = p.resolve(__dirname, '.pet_api');
searchDartFiles(openapiDirPath);
for (const filePath of dartFiles) {
fixFile(filePath);
console.log('Fixed file:', filePath);
}
}
function searchDartFiles(path) {
const isDir = fs.lstatSync(path).isDirectory();
if (isDir) {
const dirContent = fs.readdirSync(path);
for (const dirContentPath of dirContent) {
const fullPath = p.resolve(path, dirContentPath);
searchDartFiles(fullPath);
}
} else {
if (path.includes('.dart')) {
dartFiles.push(path);
}
}
}
function fixFile(path) {
const fileContent = fs.readFileSync(path).toString();
const fixedContent = fixOthers(fileContent);
fs.writeFileSync(path, fixedContent);
}
const fixOthers = fileContent => {
let content = fileContent;
for (const entry of otherFixers.entries()) {
content = content.replace(entry[0], entry[1]);
}
return content;
};
const otherFixers = new Map([
// ? Base fixers for Dio and standard params
[
'// @dart=2.7',
'// ',
],
[
/response\.request/gm,
'response.requestOptions',
],
[
/request: /gm,
'requestOptions: ',
],
[
/Iterable<Object> serialized/gm,
'Iterable<Object?> serialized',
],
[
/(?<type>^ +Uint8List)(?<value> file,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +String)(?<value> additionalMetadata,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +ProgressCallback)(?<value> onReceiveProgress,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +ProgressCallback)(?<value> onSendProgress,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +ValidateStatus)(?<value> validateStatus,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +Map<String, dynamic>)(?<value> extra,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +Map<String, dynamic>)(?<value> headers,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +CancelToken)(?<value> cancelToken,)/gm,
'$<type>?$<value>',
],
[
/(@nullable\n)(?<annotation>^ +@.*\n)(?<type>.*)(?<getter> get )(?<variable>.*\n)/gm,
'$<annotation>$<spaces>$<type>?$<getter>$<variable>',
],
[
'final result = <Object>[];',
'final result = <Object?>[];',
],
[
'Iterable<Object> serialize',
'Iterable<Object?> serialize',
],
[
/^ *final _response = await _dio.request<dynamic>\(\n +_request\.path,\n +data: _bodyData,\n +options: _request,\n +\);/gm,
`_request.data = _bodyData;
final _response = await _dio.fetch<dynamic>(_request);
`,
],
// ? Special, custom params for concrete API
[
/(?<type>^ +String)(?<value> apiKey,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +String)(?<value> name,)/gm,
'$<type>?$<value>',
],
[
/(?<type>^ +String)(?<value> status,)/gm,
'$<type>?$<value>',
],
]);
main();
Most of all these regulars fix the basic logic of the generated code, but there are three custom ones that are needed for a specific API. Each specific one will have its own custom regulars, but it is very likely that adding them will not be difficult, and all basic ones will work on any API.
Conclusions
The approach to generating client code, in the presence of a high-quality OpenAPI scheme, is an extremely simple task, regardless of the client's language. In the case of Dart, there are still certain inconveniences caused, especially, by the transition period to * null-safety *. But as part of this article, we have successfully overcome all the troubles and got a fully functional library for working with the backend, the dependencies of which (and itself) have been updated to the newest version and can be used in a Flutter project with sound-null-safety without any restrictions.
An additional plus of the approach when the source of truth is the schema - if it changes with the loss of backward compatibility, our generated code will immediately react to this and show all errors at the stage of static analysis, which will save your nerves from catching bugs at runtime.
|
__label__pos
| 0.877242 |
TurnerEtAlPrior: (Log-Normal) heterogeneity priors for binary outcomes as...
Description Usage Arguments Details Value Author(s) References See Also Examples
View source: R/bayesmeta.R
Description
Use the prior specifications proposed in the paper by Turner et al., based on an analysis of studies using binary endpoints that were published in the Cochrane Database of Systematic Reviews.
Usage
1
2
3
4
5
6
7
8
9
10
11
TurnerEtAlPrior(outcome=c(NA, "all-cause mortality", "obstetric outcomes",
"cause-specific mortality / major morbidity event / composite (mortality or morbidity)",
"resource use / hospital stay / process", "surgical / device related success / failure",
"withdrawals / drop-outs", "internal / structure-related outcomes",
"general physical health indicators", "adverse events",
"infection / onset of new disease",
"signs / symptoms reflecting continuation / end of condition", "pain",
"quality of life / functioning (dichotomized)", "mental health indicators",
"biological markers (dichotomized)", "subjective outcomes (various)"),
comparator1=c("pharmacological", "non-pharmacological", "placebo / control"),
comparator2=c("pharmacological", "non-pharmacological", "placebo / control"))
Arguments
outcome
The type of outcome investigated (see below for a list of possible values).
comparator1
One comparator's type.
comparator2
The other comparator's type.
Details
Turner et al. conducted an analysis of studies listed in the Cochrane Database of Systematic Reviews that were investigating binary endpoints. As a result, they proposed empirically motivated log-normal prior distributions for the (squared!) heterogeneity parameter τ^2, depending on the particular type of outcome investigated and the type of comparison in question. The log-normal parameters (μ and σ) here are internally stored in a 3-dimensional array (named TurnerEtAlParameters) and are most conveniently accessed using the TurnerEtAlPrior() function.
The outcome argument specifies the type of outcome investigated. It may take one of the following values (partial matching is supported):
Specifying “outcome=NA” (the default) yields the marginal setting, without considering meta-analysis characteristics as covariates.
The comparator1 and comparator2 arguments together specify the type of comparison in question. These may take one of the following values (partial matching is supported):
Any combination is allowed for the comparator1 and comparator2 arguments, as long as not both arguments are set to "placebo / control".
Note that the log-normal prior parameters refer to the (squared) heterogeneity parameter τ^2. When you want to use the prior specifications for τ, the square root, as the parameter (as is necessary when using the bayesmeta() function), you need to correct for the square root transformation. Taking the square root is equivalent to dividing by two on the log-scale, so the square root's distribution will still be log-normal, but with halved mean and standard deviation. The relevant transformations are already taken care of when using the resulting $dprior(), $pprior() and $qprior() functions; see also the example below.
Value
a list with elements
parameters
the log-normal parameters (μ and σ, corresponding to the squared heterogeneity parameter τ^2 as well as τ).
outcome.type
the corresponding type of outcome.
comparison.type
the corresponding type of comparison.
dprior
a function(tau) returning the prior density of τ.
pprior
a function(tau) returning the prior cumulative distribution function (CDF) of τ.
qprior
a function(p) returning the prior quantile function (inverse CDF) of τ.
Author(s)
Christian Roever [email protected]
References
R.M. Turner, D. Jackson, Y. Wei, S.G. Thompson, J.P.T. Higgins. Predictive distributions for between-study heterogeneity and simple methods for their application in Bayesian meta-analysis. Statistics in Medicine, 34(6):984-998, 2015.
See Also
dlnorm, RhodesEtAlPrior.
Examples
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
# load example data:
data("CrinsEtAl2014")
# determine corresponding prior parameters:
TP <- TurnerEtAlPrior("surgical", "pharma", "placebo / control")
print(TP)
# a prior 95 percent interval for tau:
TP$qprior(c(0.025,0.975))
## Not run:
# compute effect sizes (log odds ratios) from count data
# (using "metafor" package's "escalc()" function):
crins.es <- escalc(measure="OR",
ai=exp.AR.events, n1i=exp.total,
ci=cont.AR.events, n2i=cont.total,
slab=publication, data=CrinsEtAl2014)
print(crins.es)
# perform meta analysis:
crins.ma01 <- bayesmeta(crins.es, tau.prior=TP$dprior)
# for comparison perform analysis using weakly informative Cauchy prior:
crins.ma02 <- bayesmeta(crins.es, tau.prior=function(t){dhalfcauchy(t,scale=1)})
# show results:
print(crins.ma01)
print(crins.ma02)
# compare estimates; heterogeneity (tau):
rbind("Turner prior"=crins.ma01$summary[,"tau"], "Cauchy prior"=crins.ma02$summary[,"tau"])
# effect (mu):
rbind("Turner prior"=crins.ma01$summary[,"mu"], "Cauchy prior"=crins.ma02$summary[,"mu"])
# illustrate heterogeneity priors and posteriors:
par(mfcol=c(2,2))
plot(crins.ma01, which=4, prior=TRUE, taulim=c(0,2),
main="informative log-normal prior")
plot(crins.ma02, which=4, prior=TRUE, taulim=c(0,2),
main="weakly informative half-Cauchy prior")
plot(crins.ma01, which=3, mulim=c(-3,0),
main="informative log-normal prior")
abline(v=0, lty=3)
plot(crins.ma02, which=3, mulim=c(-3,0),
main="weakly informative half-Cauchy prior")
abline(v=0, lty=3)
par(mfrow=c(1,1))
# compare prior and posterior 95 percent upper limits for tau:
TP$qprior(0.95)
crins.ma01$qposterior(0.95)
qhalfcauchy(0.95)
crins.ma02$qposterior(0.95)
## End(Not run)
bayesmeta documentation built on April 12, 2018, 9:03 a.m.
|
__label__pos
| 0.842702 |
Support me!
If you enjoy these webpages and you want to show your gratitude, feel free to support me in anyway!
Like Me On Facebook! Megabyte Softworks Facebook
Like Me On Facebook! Megabyte Softworks Patreon
Donate $1
Donate $2
Donate $5
Donate $10
Donate Custom Amount
018.) Heightmap Pt. 3 - Multiple Layers
Hi there! After a summer break, I'm back to writing OpenGL tutorials and this time I am going to show you, how to render a terrain with multiple texture layers on it with smooth transitions. This allows us to create far more realistic scenes than whatever we have done until now! Actually, it's not even that hard, if you followed other 2 heightmap tutorials, this one should not be that much more difficult from the previous ones . Just read further and you will see .
The concept
To implement the algorithm correctly, first we have to have a clear understanding of the concept. Basically what we want is to have several textures within our terrain. Those textures are present up to a certain height and then they start to transition smoothly into another texture. Let's take the terrain from this tutorial as an example. It consists of three textures - rocky texture at the bottom, grass texture in the middle and snow texture on the top. This is the cut of our terrain:
The quality is rather poor, but the concept is important . As you can see, we had to define four values, that I call levels (we could also call it thresholds). First level is 0.2 - up to height 0.2, we just have this rocky terrain, nothing else. After that, transition phase from rocks to grass begins and ends at the second level 0.3, where the transition to grass ends and pure grass begins. Grass goes up to the third level value, which is 0.55 and that's where another transition phase from grass to snow begins. Fourth and the last level - 0.7 is height from which there's only the snow. Snow will go up to the highest point (height 1.0), just as rocks go from the very bottom (height 0.0) up to the first level .
We can now quickly determine the equation of how many levels do we need to define, if we have generally N textures:
• N=1 - 0 levels (doesn't even make sense here to use multi-layered heightmap with 1 texture)
• N=2 - 2 levels (First texture, transition between first and second texture and second texture only)
• N=3 - 4 levels (Our example)
• N - (N-1)*2 levels (try out this equation for these low values of N)
Multiple layers heightmap shader program
With this equation in mind, we can now implement shader program for rendering heightmaps with multiple layers! It will be a generic shader program allowing you to define (almost) arbitrary number of levels. Why almost? Because our uniform variables will have limited size and you should not go beyond it (but who needs terrain with 100 textures anyway ). You will see it in the code below .
Vertex shader
As a first thing, let's analyze the vertex shader. Not much new stuff there, just one important thing (explained below):
#version 440 core
uniform struct
{
mat4 projectionMatrix;
mat4 viewMatrix;
mat4 modelMatrix;
mat3 normalMatrix;
} matrices;
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexTexCoord;
layout(location = 2) in vec3 vertexNormal;
smooth out vec2 ioVertexTexCoord;
smooth out vec3 ioVertexNormal;
smooth out float ioHeight;
void main()
{
mat4 mvpMatrix = matrices.projectionMatrix * matrices.viewMatrix * matrices.modelMatrix;
gl_Position = mvpMatrix * vec4(vertexPosition, 1.0);
ioVertexTexCoord = vertexTexCoord;
ioVertexNormal = matrices.normalMatrix*vertexNormal;
ioHeight = vertexPosition.y;
}
There is one very important thing to notice and it's outputting of a variable smooth out float ioHeight, which is the input height of a particular vertex. This is then smoothly interpolated to every fragment and we will use it in fragment shader .
Fragment shader
Fragment shader is where most of the magic happens and will require a bit of explanation. Let's have a look at it:
#version 440 core
precision highp float;
#include "../lighting/ambientLight.frag"
#include "../lighting/diffuseLight.frag"
#include "../common/utility.frag"
layout(location = 0) out vec4 outputColor;
smooth in vec2 ioVertexTexCoord;
smooth in vec3 ioVertexNormal;
smooth in float ioHeight;
uniform vec4 color;
uniform AmbientLight ambientLight;
uniform DiffuseLight diffuseLight;
uniform sampler2D terrainSampler[16];
uniform float levels[32];
uniform int numLevels;
void main()
{
vec3 normal = normalize(ioVertexNormal);
vec4 textureColor = vec4(0.0);
bool isTextureColorSet = false;
for(int i = 0; i < numLevels && !isTextureColorSet; i++)
{
if(ioHeight > levels[i]) {
continue;
}
int currentSamplerIndex = i / 2;
if(i % 2 == 0) {
textureColor = texture(terrainSampler[currentSamplerIndex], ioVertexTexCoord);
}
else
{
int nextSamplerIndex = currentSamplerIndex+1;
vec4 textureColorCurrent = texture(terrainSampler[currentSamplerIndex], ioVertexTexCoord);
vec4 textureColorNext = texture(terrainSampler[nextSamplerIndex], ioVertexTexCoord);
float levelDiff = levels[i] - levels[i-1];
float factorNext = (ioHeight - levels[i-1]) / levelDiff;
float factorCurrent = 1.0f - factorNext;
textureColor = textureColorCurrent*factorCurrent + textureColorNext*factorNext;
}
isTextureColorSet = true;
}
if(!isTextureColorSet)
{
int lastSamplerIndex = numLevels / 2;
textureColor = texture(terrainSampler[lastSamplerIndex], ioVertexTexCoord);
}
vec4 objectColor = textureColor*color;
vec3 lightColor = sumColors(getAmbientLightColor(ambientLight), getDiffuseLightColor(diffuseLight, normal));
outputColor = objectColor*vec4(lightColor, 1.0);
}
First of all, we have three new uniforms - terrainSampler[16], levels[32] and numLevels. The samplers are used to access terrain textures. levels is an array of those thresholds, where the transitions start / end. Finally, numLevels tells us how many levels have we actually defined (in case of three textures, we would have 4 levels). I chose those sizes of arrays to be big enough - at the moment we are supporting up to 16 textures. If you somehow needed more, increase this number along with number of levels .
Now, there is one thing we need to realize - we are always either in a single texture phase or a transition phase. Those two phases are alternating and we can determine this using a modulo operator and parity of our for loop control variable i. But before it, we have to reach the current level we are in with a current ioHeight.
If we are in a single texture phase (if(i % 2 == 0)), then we simply output the texture of this level, not much to think of here .
Now things get trickier if we are in transition phase. As you can see, I have lots of variables defined there - makes to code easier to read and leaves less room for mistakes. What is done here is that I'm getting the texels of current texture (textureColorCurrent) and next texture (textureColorNext). Then, I proceed with calculating the difference between the start and end of a transition (levelDiff). This value is used to calculate factor, with which the current and next texture will contribute with. Final color is then the sum of the current texture and next texture we transition into, both multiplied by their respective factors.
If we go through the whole for loop without calculating the texture color (if(!isTextureColorSet)), that means only one thing - we are beyond the last level and have to use the last texture up to the maximum height 1.0. That is what the code within the if(!isTextureColorSet) does. At the end of the fragment shader, we just do usual stuff like adding ambient / diffuse light and combining a terrain with a desired color (which is usually white and in this case too) .
Rendering of heightmap
With shaders ready, now we can write a function in Heightmap class that does rendering. You can find it below:
void Heightmap::renderMultilayered(const std::vector<std::string>& textureKeys, const std::vector<float> levels) const
{
if (!_isInitialized) {
return;
}
// If there are less than 2 textures, does not even make sense to render heightmap in multilayer way
if (textureKeys.size() < 2) {
return;
}
// Number of levels defined must be correct
if ((textureKeys.size() - 1) * 2 != levels.size()) {
return;
}
// Bind chosen textures first
const auto& tm = TextureManager::getInstance();
auto& heightmapShaderProgram = getMultiLayerShaderProgram();
for (auto i = 0; i < int(textureKeys.size()); i++)
{
tm.getTexture(textureKeys[i]).bind(i);
heightmapShaderProgram[Heightmap::ShaderConstants::terrainSampler(i)] = i;
}
// Set uniform levels
heightmapShaderProgram[Heightmap::ShaderConstants::numLevels()] = int(levels.size());
heightmapShaderProgram[Heightmap::ShaderConstants::levels()] = levels;
// Finally render heightmap
render();
}
This method takes two parameters - texture keys to render with and levels. At the beginning, we are doing some basic validations if the input is correct (number of levels and number of textures must be valid according to our equation) and if it is, we proceed with setting all those uniform variables before rendering. Nothing too difficult in the end I think .
Result
This is what has been achieved today:
I think the result is really beautiful and opens a new horizon of possibilities . There is one visual artifact at transition levels (you can actually see where the transitions begin / end) and I can't get rid of it (trust me, I have really tried), if anyone knew what causes it, let me know .
This tutorial was the last from the heightmap series and my next tutorial will (finally!) be about model loading! I hope you have enjoyed your summer just as I did and you're ready to digest some more of them tutorials . Thank you for reading and stay tuned for another!
Download 11.30 MB (255 downloads)
|
__label__pos
| 0.915169 |
Count + Sum Cells based on Cell Colour in Excel – How To
One of the most sought after feature among heavy Excel users i.e. count or sum cells in Excel based on cell background color. I really wish we had a formula or a feature that does it for us but it isn’t available yet. But we do have the solution if we turn to VBA and write few lines of code.
Count or Sum cells based on color using Excel filter
Yes! We can use Excel filter to do this if we couple it with the appropriate formula. Your usual COUNT or SUM functions won’t help in this case. This is because Excel filter simply hides the result not fulfilling the criteria and both COUNT and SUM functions consider hidden and visible data therefore, we need a function that processes only visible data and ignore the hidden one.
This is where we meet SUBTOTAL function. This approach is especially good for those who want to avoid getting into VBA.
Step 1: For this approach we need the data to have headings. If data does not have headings already then its suggested to have heading inserted for each column at the top by adding a new row or at least for the column containing colored cells.
Step 2: Selected the cells containing headers and go to Data tab > in sort and filter group click Filter button and you will see drop-down arrows added to the heading cells.
Step 3: Move to the last cell of column containing colored cells. You can do that quickly by simply having an active cell within in that column and hitting CTRL+DownArrow. Move one cell down further and put the following formula for count:
=SUBTOTAL(2,B2:B10)
I had my data in range in cells B2 through B10 only that is why formula contains just that.
Step 4: Click the drop-down arrow button of the column containing colored. A menu will appear and in there hover the cursor over filter by color option and select the desired. This will filter the data down to those cells that contain the selected color only.
If you want to sum the values in specific colored cells you need to change the formula a little as following:
=SUBTOTAL(9,B2:B10)
However, this approach is quite limited. As you can process only on one color at a time. If you have more than one color in large data set, and you want to count, sum or average specifically colored data then much better and faster approach is by making a simple UDF in VBA.
Count cells based on color using VBA
For this we need a UDF i.e. user defined function. Following are the steps:
Step 1: Hit ALT+F11 shortcut key to enter visual basic environment
Step 2: Once you are inside visual basic editor go to Insert > module to insert a new module.
Step 3: Double click newly created module and a new blank window will open on the right. Copy the following code and paste it inside empty window:
Function CountColor(color As Range, data_range As Range)
Dim dRange As Range
Dim dColor As Long
dColor = color.Interior.ColorIndex
For Each dRange In data_range
If dRange.Interior.ColorIndex = dColor Then
CountColor = CountColor + 1
End If
Next dRange
End Function
Step 4: The above code has now given you a new function to use named CountColor. Just provide the cell address containing the color you want to use as count criteria and then provide the range of data to process.
Following illustration will walk you through the process:
And you can count for multiple colors at once as well:
Understanding VBA code
Lets understand what the code is about and what are we achieving with it.
Line 1:
Function CountColor(color As Range, data_range As Range)
Keyword “Function” tells Excel that we want to initiate UDF with the name “CountColor”. Inside bracket we required two arguments; “color” and “data_range” and both are of type range meaning we have to mention either a cell address or a range of cells. In descriptive manner the formula is following:
CountColor(cell address with the background color you want to use as criteria, range of cells you want to check for color selected and count)
Remember we have defined two variables color and data_range. But we need to tell from where to get the values for these two and what to do with it. So far it is just a disclosure.
Line 2 & 3:
Dim dRange As Range
Dim dColor As Long
Two variables with the name dRange and dColor are defined. These two will be used to process the information given as part of function arguments mentioned inside parenthesis in line 1. I will explain it further in a bit.
Line 4:
dColor = color.Interior.ColorIndex
Remember dColor variable defined in line 2 and 3? and “color” variable mentioned in line 1?
Once user has mentioned the cell address for first argument “color” we want Excel to get the color index or simply the specific color number and store it as a value of variable dColor. Once stored we can use this color index number as a criteria.
Line 5 to 8:
For Each dRange In data_range
If dRange.Interior.ColorIndex = dColor Then
CountColor = CountColor + 1
End If
Remember we defined dRange as variable and has data_range as argument in line? In fifth line we asked Excel to take each cell in range (that user has mentioned for second argument as data_range) and equate it as dRange.
In sixth line we are invoking IF condition statement and we are checking if the color index number of any cell is equal to the color index number of dColor variable (remember we stored a specific color index number in line 4 above) then add “1” to the count.
Terminate IF statement once all cells are checked going through each element in dRange.
This essentially creates a loop and check each cell mentioned in range and once all cells are checked loop is terminated.
In few words this code is fetching the color index number of cell mentioned by user and then comparing it against the color index numbers of each and every cell in the range specified. The ones that match are counted in and the ones that doesn’t match are ignored. That is how we get the count.
Sum cells based on color using VBA
Now that we understand how code works. We can modify it just a tad bit to have another function that sums the value of cells that fulfill criteria. Have a look at the following code that makes a new UDF that sum values based on cell color:
Function SumColor(color As Range, data_range As Range)
Dim dRange As Range
Dim dColor As Long
dColor = color.Interior.ColorIndex
For Each dRange In data_range
If dRange.Interior.ColorIndex = dColor Then
SumColor = SumColor + dRange.Value
End If
Next dRange
End Function
It is exactly the same code as we had for count function with only difference. Instead of adding “1” for each cell matching the criteria, we want excel to take the value of that cell and add it up.
As simple as it can get!
Here is the illustration that shows count and sum of cell values based on color:
51 COMMENTS
1. Our coursework help USA experts have attained a lucid style of writing. They can compile essays in any style of writing if aspirants specify their requirements. Our professionals are highly adaptable. They have a significant eye for writing essays or assignments.
2. Lodha Group is a real estate company known for its expertise in ensuring best quality and mastery over the design area. Codename Best Deal Wadala project, where life is evergreen is a New Lunch project from the house of Lodha Group properties. Lodha Group properties brings you a new world of live comfortable location of Wadala in Mumbai, where lifestyle at its best phase! Codename Best Deal Wadala offers competitively priced apartments to those looking for a stylish and classy lifestyle. Codename Best Deal Wadala gives high on connectivity and convenience as well and the finest standalone tower. Codename Best Deal Wadala is Conveniently connected by Eastern Free way , Monorail and upcoming Wadala-Thane Metro. Near Codename Best Deal Wadala positioned in close proximity to major IT campuses, your work place could be just a stone’s throw away. Codename Best Deal Wadala is offers to a very good qualities and futures in whole life. such as to its a very good news because Wadala is situated to in heart of Mumbai, where we know about in a good natural thing. Codename Best Deal Wadala is going to offer 2 BHK, 2 BHK Luxury, 3 BHK Luxury spacious apartments integrated with the number of world class amenities that gives the best living structure for the lifestyle with EOI Benefit upto 15 Lacs.
3. Dotom domain Govandi is project by Dotom realty in Govandi, Mumbai. Dotom domain Govandi offer 1,2 BHK apartments at reasonable price rate. Dotom domain Govandi comes with best class internal and external amenities and with near by places.
4. Thank you! This is a simple solution that I have been trying to find the answer to for hours! This simple VBA worked great, no knowledge needed!
5. At VIGNAN IAS, we take immense pride in introducing ourselves as one of the Best IAS Coaching Center in Bangalore for IAS coaching with maximum success ratio
6. Great work Hassan & i also want learn MS Excel really appreciate. really happy to found pakistani web. Good luck man i am also from pakistan.
7. Your post is very unique and reliable information for all readers to write more on the same topic and share with us your info…Thanks,
8. quite an impressive site you have here. as an excel expert i must admit that your knowledge is challenging to me. Thanks
9. Your post is very unique and reliable information for all readers to write more on the same topic and share with us your info…Thanks,
10. thanks so much for this wonderful piece of information I really love your website and all it web blogs and post I’m going to recommend your website to others it really informative and has good content. GOOD JOB ..
11. Thanks for giving such information. I will be very help full to play with excel.
12. Thank you very much:-) you helped us a lot:)
13. thank you very much the information was very much useful and this issue was very much complicated for me and these formulas ease and solve my problems
14. Thank you for sharing! It really helps!
15. My issues are resolved on how to count sum cells based on cell colour excel
16. Hi Hasaan,
Thanks for your function very easy to use
My only question is auto updating the user defined formula
I find the count or sum works all ok but when I change a cell colour in the range the formula will not uodate unless I click into the formula cell and force a recalculation
Is there something I’m missing?
17. I’m very concerns because when I close the file then I reopen it was error. Could you tell me why
18. I greatly appreciate your blog posts! As a Budget Analyst I am constantly exploring ways to automate and streamline my Excel tasks. You have been a great help to that end. Thank you so much!
19. I have followed this step by step but for some reason it is not calculating
Values just keep coming up with 0 or #NAME?
Please advise
20. Thanks for the sharing such very informative content regarding summing up cells in Excel based on cell background color. I am happy I found this blog!
21. much obliged to you for the blog. it was incredible help.
if it’s not too much trouble allude to this site for additional data
22. Wow.. very very useful post. Keep more like this. Thank you for the wonderful sharing.
23. Oureducation Coaching Institute our students in safe social and physical environment and provide freedom to go beyond the IAS syllabus that transforms the young mind into future leader.
24. KEI Hifi store is best online shop in india.you can get Hegel Music Systems, Power Amplifier, Hegel Electronics High End Audio Music System in reasonable prices
25. This is a great information.Keep blogging this type of post.This type of information is very useful
26. Ace Pneumatic is a manufacturer of pneumatic chisel, pneumatic cylinder manufacturers, steel rod manufacturers with simple motto of customer satisfaction & quality product.
27. In the event that you are confronting issue while surfing the web on your program, you can dial our Google Chrome Support +1-855-746-8414.
It has untold quality and now it is available in the much better brand.Call our Firefox Technical Support +1-855-746-8414 for any brand.
At the Internet Explorer Technical Support, the specialized arrangement is constantly both subjective and effective by means of the various method for specialized help on Internet Explorer Support Number +1-855-746-8414.
Most common problem users are facing with Opera Browser are to set up or install Opera Browser. To resolve these problems you need Opera troubleshooting help. Dial the Opera Support Number +1-855-746-8414 anytime.
On the off chance, if you face technical Issue when using your Safari browser or you are not able to assume the direction of using the browser, do not get tensed. This is because we are there to help you come out of such condition with our absolute Safari Customer Service +1-855-746-8414 Toll-Free.
28. thank you for the blog. it was great help.
please refer to this website for further information
29. Great Information For everyone. Because today’s time everyone is using Microsoft excel. Keep Sharing.
30. Barry HAW WC ESS MEL AD WB FRE R GEE 6
Bev HAW GW ESS MEL AD WB FRE R GEE 5
Dawn HAW GW ESS MEL AD WB FRE R GEE 5
Dean HAW GW CA GC AD WB FRE NM COL 3
Grant HAW GW ESS MEL AD WB FRE R COL 4
Steve HAW WC ESS MEL AD WB FRE R GEE #NAME?
WIN LOST
I want to count the number of squares that are correct for each person. In excel i have colored them green.
I would like the score to automatically increase each time i color another entry with a green background. how can i improve what i have so far.?
31. Interesting idea, though it doesn’t seem to work on conditional formatting, which for me, does not make it that useful. If it could be made to work to detect the color of cells using conditional formatting it would be great to use with pivot tables.
• Hmmm really interesting. I will have to give it a try and see what works. Seems Interior property needs additional details to get it to work with conditional formatting. Probably “displayformat”. I will update the post once I have the solution for conditional formatting.
See this is how feedback works 🙂 Thanks for letting us know. I am on it!
• Hi Hasaan,
Did you resolve this issue with conditional formatted colored cells?
Your macro returns 0 on conditional formatted cells.
LEAVE A REPLY
Please enter your comment!
Please enter your name here
|
__label__pos
| 0.571512 |
Skip to main content
Arthur's user avatar
Arthur's user avatar
Arthur's user avatar
Arthur
• Member for 11 years, 1 month
• Last seen more than a month ago
7 votes
1 answer
3k views
Proportional Circles QGIS 3
6 votes
2 answers
300 views
Getting attribute information of multiple polygon intersected by line using QGIS
6 votes
3 answers
2k views
Extract left part of field with regexp_substr after two specific charaters
5 votes
2 answers
6k views
How to automaticaly create a buffer for a line in QGIS
5 votes
2 answers
1k views
Getting db manager error from QGIS 3 on Mac?
4 votes
3 answers
2k views
Showing just some labels based on feature intersection of layers using QGIS
4 votes
1 answer
167 views
Automatic selection of legend object in QGIS
4 votes
2 answers
787 views
Enable load extension in Python 3.6 mac osx
4 votes
1 answer
408 views
Distance matrix along line layer
3 votes
0 answers
667 views
Print composer too many row in attribute table how to split in multiple frame?
3 votes
0 answers
275 views
How to solve this problem with proportional size data in qgis print composer
3 votes
1 answer
143 views
Creating list of intersected polygons for each line using QGIS
3 votes
3 answers
2k views
Sorting string column Qgis
3 votes
1 answer
204 views
Using multiple conditional factor in QGIS?
2 votes
0 answers
107 views
Layer length statistics in print composer QGIS
2 votes
1 answer
288 views
Sorting string column in QGIS Print Composer?
2 votes
1 answer
1k views
Flow map with QGIS 3 [closed]
2 votes
1 answer
59 views
View object on layer B only when specific object if visible in layer A
2 votes
1 answer
292 views
Is it possible to edit a non PostGIS layer within Lizmap?
2 votes
4 answers
260 views
Aggregate entities by single value distributed in two fields - QGIS
1 vote
0 answers
201 views
Python and SSL error loading tiles in QGIS
1 vote
3 answers
528 views
Automatically fill fields when creating feature in Lizmap?
1 vote
1 answer
2k views
Calculating total lengh of selected feature in ArcGIS Pro
1 vote
1 answer
211 views
GIS Database format functional differences [closed]
1 vote
0 answers
62 views
Sorting error QGIS [duplicate]
1 vote
2 answers
817 views
Error with existing OSM QGIS style
1 vote
1 answer
248 views
Qgis composer attribute table
1 vote
1 answer
415 views
Statistics in print composer Qgis
1 vote
1 answer
743 views
Data-defined size legend QGIS and Map Unit symbol
1 vote
1 answer
2k views
How to get osm tiles in QGIS 3
|
__label__pos
| 0.999906 |
WordPress Development Stack Exchange is a question and answer site for WordPress developers and administrators. Join them; it only takes a minute:
Sign up
Here's how it works:
1. Anybody can ask a question
2. Anybody can answer
3. The best answers are voted up and rise to the top
I am using this function to limit the content in my themes. But the problem is whenever I call the function, it also displays the image caption. I want to remove the image caption when calling the_content_limit function.
Here is the code:
function the_content_limit($max_char, $more_link_text = '', $stripteaser = 0, $more_file = '') {
$content = get_the_content($more_link_text, $stripteaser, $more_file);
$content = apply_filters('the_content', $content);
$content = str_replace(']]>', ']]>', $content);
$content = strip_tags($content);
if (strlen($_GET['p']) > 0) {
echo "";
echo $content;
echo " <a href='";
the_permalink();
echo "'>"."Read More →</a>";
echo "";
}
else if ((strlen($content)>$max_char) && ($espacio = strpos($content, " ", $max_char ))) {
$content = substr($content, 0, $espacio);
$content = $content;
echo "";
echo $content;
echo "...";
echo " <a href='";
the_permalink();
echo "'>"."</a>";
echo "";
}
else {
echo "";
echo $content;
echo " <a href='";
the_permalink();
echo "'>"."Read More →</a>";
echo "";
}
}
share|improve this question
up vote 2 down vote accepted
Image captions in Wordpress are actually shortcodes. Shortcodes are applied by the filter:
$content = apply_filters('the_content', $content);
For example, Wordpress creates the following code in your content when you enter an image caption:
[caption id="attachment_55" align="alignleft" width="127" caption="Here is my caption"][/caption]
You need to still use apply_filters() in order to properly display content. (safe content display and all other shortcodes)
If you don't want shortcodes (which is what it looks like, since you are doing a striptags) you should just use this:
$content = strip_shortcodes( $content );
But if it is specifically [caption] shortcodes, I assume this could work, if you just want to add a string-replace line to your code:
$content = get_the_content($more_link_text, $stripteaser, $more_file);
// remove [caption] shortcode
$content = preg_replace("/\[caption.*\[\/caption\]/", '', $content);
// short codes are applied
$content = apply_filters('the_content', $content);
share|improve this answer
Thank you for your quick solution. That's all I wanted to do. – pervez Oct 30 '12 at 10:09
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.926286 |
dcsimg
So You Want to Move A Layer, Huh?
By Joe Burns
WEBINAR:
On-Demand
Application Security Testing: An Integral Part of DevOps
...use these to jump around or read it all
[Moving the Layer] [The Code]
[Deconstructing The Script]
[Moving the Layer From Off Screen]
First Things First: We're dealing with Layers here so you'll need to be running a Netscape Navigator browser version 4.0 or better. If not...error.
Every now and again I get an email asking how people get things to fly around their screen. When I am given an URL to look at, I find that a good bit of the time, it's a layer set up to follow a path across the browser screen. It's actually easy enough to do.
Here I'll show you two things:
1. The first is a basic JavaScript that sets a path for a Layer to follow. The script is very easy to play with to get all kinds of different paths.
2. The second is the same script using negative numbers to have the layer roll in from off of the screen. On that layer will be a link that controls what shows up on the screen. Very neat.
Moving the Layer
OK, here's a simple script that moves the Layer. You can set the path of the layer, the speed of the movement, the size of each movement, and where the layer stops, if you want it to stop at all. I have it set up to stop.
See the Moving Layer
The Code
Here you go...
<SCRIPT LANGUAGE="javascript">
x=1
y=1
function moveIt()
{
x=x+2
y=y+2
document.layers[0].moveToAbsolute(x,y)
if (x >= 100)
{document.layers[0].stop}
else
{setTimeout("move()",5)}
}
</SCRIPT>
<BODY onLoad="moveIt()">
<LAYER NAME="bigLayer" BGCOLOR="ff00ff" HEIGHT="100" WIDTH="100" LEFT=1 TOP=1>Hey!</LAYER>
Deconstructing the Script
The easiest way to roll through this script is to start with the Layer first. Once you have that, we can start moving it around. The code looks like this:
<LAYER NAME="bigLayer" BGCOLOR="ff00ff" HEIGHT="100" WIDTH="100" LEFT=1 TOP=1>Hey!</LAYER>
If the code is foreign to you, try reading my first Layer tutorial before you get into this one. It'll make your life a little easier.
And so we start...
• LAYER starts off the Layer.
• NAME="bigLayer" assigns a name so we can attach the JavaScript to it.
• BGCOLOR="ff00ff" HEIGHT="100" WIDTH="100" are used to assign a few basic parameters to the Layer.
• LEFT=1 TOP=1 are used to assign the layer a starting position. The pixels down from the top and in from the left denote the layer's upper-left hand corner. All pixel settings will be relative to that corner of the Layer.
Please Note! These numbers are important to the appearance of this script. You'll need to make a point of getting these two number equal to the "x" and "y" numbers in the script itself. I'll get to why in a moment, I just want to make you aware of it right now.
The Script
<SCRIPT LANGUAGE="javascript">
This should look familiar. That starts off every JavaScript, and this is a JavaScript.
x=1
y=1
Remember I told you about this in the section just above? These are the starting coordinates of the layer's upper left hand corner. These numbers should be the same as the numbers set in the Layer itself. If they're not, then you're going to get a bit of a jump before the script kicks in.
The layer will first appear where LEFT and TOP settings place it, then jump to where these x and y coordinates say it should be. See the problem? Get the numbers equal and you'll get a nice smooth start. If you want to see what I mean, try setting the numbers to different values and running the script. You can always change it back later.
function moveIt()
{
x=x+2
y=y+2
Here's where you control the movement of the Layer. The coordinate "x" denotes the Layer's movement to the right. The "y" coordinate controls the Layer's movement downwards.
Right now I have it set to move in a perfect 45 degree line down across the page. For every two pixels to the right, the block also moves two pixels down. That happens again and again and the Layer moves down the line.
You could also try:
• x=x+2 y=y+0: Movement straight across the screen.
• x=x+0 y=y+2: Movement straight down the screen.
• x=x+2 y=y+1: Lesser angle across the screen.
• x=x+6 y=y+0: Heavier angle across the screen.
you might even try setting up a small math equation to create random numbers to give a really weird look.
document.layers[0].moveToAbsolute(x,y)
Here's the magic. This line actually moves the layer to the next point set up by the adding of numbers to x and y.
if (x >= 100)
{document.layers[0].stop}
else
{setTimeout("move()",5)}
This is a conditional statement that allows you to stop the layer at a certain point.
The first line states that if x becomes greater than or equal to 100, then something should happen. Now, I chose 100 out of the clear blue sky. You can set it to any thing you want. The higher the number, the longer the Layer will move. How quickly you get to the number also depends on how quickly "x" adds up. Remember that earlier we set up "x" to have 2 added each time the script rolls through. That means after the script ran 49 times (it wouldn't have run the 50th time), the script stops and the Layers stops. Get the concept? The higher the number, the longer the script will take to reach it and the longer the script will run moving the Layer across the screen.
One more thing. I have this line set up to read as "greater than or equal to" because I found out the hard way that setting this line so that "x" has to equal a specific number is bad. I first had it so that "x" was suppose to equal 100 to stop. The problem was that I has it so that only one was added each time. That means x was 99 and then 101. It never did get to equal 100. Oops. Stick with the "greater than or equal to" format.
When the script is running, if "x" has not yet reached at least 100, then the function rolls through again thanks to the setTimeout() command. See the "5" after the comma? That is a measure in 1/1000ths of a second of how long the script should wait to run again. Higher numbers scroll slower. This Layer moves pretty quickly because it gets bumped up every 5/1000ths of a second.
}
</SCRIPT>
Finally the curly bracket rounds out the function and the script comes to an end.
The script is triggered to run through the use of an onLoad event Handler in the BODY command: onLoad="moveIt()".
Moving the Layer From Off Screen
This is becoming a popular effect. In fact, this is being asked for more often than just the simple moving layer. I'm using the same script as above with just a few simple changes. I also added a link
See it in Action
OK, that's pretty smooth, you have to admit. Before I show you the code, let's think it through.
• We want the layer to roll in from off screen.
• All positioning statements deal with the upper left hand corner of the layer.
That means the left hand corner will need to be off of the screen at least the width of the Layer itself (100). OK, so let's set the "x" point to -100.
Set the LEFT in the layer to minus 100 too!
• We want the Layer to come straight in without moving at any angle.
That means we'll set the "y" setting to zero.
• We want the layer to scroll in from the middle of the left hand side.
So we'll set the "y" to 200. Set TOP in the Layer to 200 also!
• We want the Layer to stop after it comes onto the screen so that we don't see past the layer on the left hand side.
We'll get that by setting the point at which the Layer stops as (-1). Zero would show a little on the left. This way the left hand top of the layer stops one pixel off of the screen. Now we'll have to make sure that "x" will actually equal (-1). I did by adding one to "x" each time the script loops.
That should do it, and it does. Here's the script:
<SCRIPT LANGUAGE="javascript">
x=-100
y=200
function move()
{
x=x+1
y=y+0
document.layers[0].moveToAbsolute(x,y)
if (x == -1)
{document.layers[0].stop}
else
{setTimeout("move()",25)}
}
</SCRIPT>
The link on the Layer is just a normal hypertext link except I put TARGET="main" in the code. That's it.
The script is triggered again by an onLoad Event Handler in the BODY tag. Great effect.
That's That
This is a very basic script. It has limited movement. Play with it. See if you can get it to do different things. If you get a new and strange layer movement, send it to me. Maybe I'll post it here - who knows?
Enjoy!
[Moving the Layer] [The Code]
[Deconstructing The Script]
[Moving the Layer From Off Screen]
Make a Comment
Loading Comments...
• Web Development Newsletter Signup
Invalid email
You have successfuly registered to our newsletter.
By submitting your information, you agree that htmlgoodies.com may send you HTMLGOODIES offers via email, phone and text message, as well as email offers about other products and services that HTMLGOODIES believes may be of interest to you. HTMLGOODIES will process your information in accordance with the Quinstreet Privacy Policy.
•
•
•
Thanks for your registration, follow us on our social networks to keep up-to-date
|
__label__pos
| 0.770139 |
Problem solve Get help with specific problems with your technologies, process and projects.
Why isn't my index getting used?
Tom Kyte answers the frequently asked question, "Why isn't my index getting used?"
This excerpt is from Tom Kyte's new book Expert Oracle Database Architecture: 9i and 10g Programming Techniques...
and Solutions, published by Apress in September 2005. Tom Kyte is Vice President of the Core Technologies Group at Oracle and has been using Oracle since 1988. For more frequently asked questions and myths about Oracle indexes, click here.
Why isn't my index getting used?
There are many possible causes of this. In this section, we'll take a look at some of the most common.
Case 1
We're using a B*Tree index, and our predicate does not use the leading edge of an index. In this case, we might have a table T with an index on T(x,y). We query SELECT * FROM T WHERE Y = 5. The optimizer will tend not to use the index since our predicate did not involve the column X -- it might have to inspect each and every index entry in this case (we'll discuss an index skip scan shortly where this is not true). It will typically opt for a full table scan of T instead. That does not preclude the index from being used. If the query was SELECT X,Y FROM T WHERE Y = 5, the optimizer would notice that it did not have to go to the table to get either X or Y (they are in the index) and may very well opt for a fast full scan of the index itself, as the index is typically much smaller than the underlying table. Note also that this access path is only available with the CBO.
There is another case whereby the index on T(x,y) could be used with the CBO is during an index skip scan. The skip scan works well if and only if the leading edge of the index (X in the previous example) has very few distinct values and the optimizer understands that. For example, consider an index on (GENDER, EMPNO) where GENDER has the values M and F, and EMPNO is unique. A query such as
select * from t where empno = 5;
might consider using that index on T to satisfy the query in a skip scan method, meaning the query will be processed conceptually like this:
select * from t where GENDER='M' and empno = 5
UNION ALL
select * from t where GENDER='F' and empno = 5;
It will skip throughout the index, pretending it is two indexes: one for Ms and one for Fs. We can see this in a query plan easily. We'll set up a table with a bivalued column and index it:
ops$tkyte@ORA10GR1> create table t
2 as
3 select decode(mod(rownum,2), 0, 'M', 'F' ) gender, all_objects.*
4 from all_objects
5 /
Table created.
ops$tkyte@ORA10GR1> create index t_idx on t(gender,object_id)
2 /
Index created.
ops$tkyte@ORA10GR1> begin
2 dbms_stats.gather_table_stats
3 ( user, 'T', cascade=>true );
4 end;
5 /
PL/SQL procedure successfully completed.
Now, when we query this, we should see the following:
ops$tkyte@ORA10GR1> set autotrace traceonly explain
ops$tkyte@ORA10GR1> select * from t t1 where object_id = 42;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=95)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (TABLE) (Cost=4 Card=1 Bytes=95)
2 1 INDEX (SKIP SCAN) OF 'T_IDX' (INDEX) (Cost=3 Card=1)
The INDEX SKIP SCAN step tells us that Oracle is going to skip throughout the index, looking for points where GENDER changes values and read down the tree from there, looking for OBJECT_ID=42 in each virtual index being considered. If we increase the number of distinct values for GENDER measurably, as follows:
ops$tkyte@ORA10GR1> update t
2 set gender = chr(mod(rownum,256));
48215 rows updated.
ops$tkyte@ORA10GR1> begin
2 dbms_stats.gather_table_stats
3 ( user, 'T', cascade=>true );
4 end;
5 /
PL/SQL procedure successfully completed.
we'll see that Oracle stops seeing the skip scan as being a sensible plan. It would have 256 mini indexes to inspect, and it opts for a full table scan to find our row:
ops$tkyte@ORA10GR1> set autotrace traceonly explain
ops$tkyte@ORA10GR1> select * from t t1 where object_id = 42;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=158 Card=1 Bytes=95)
1 0 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=158 Card=1 Bytes=95)
Case 2
We're using a SELECT COUNT(*) FROM T query (or something similar) and we have a B*Tree index on table T. However, the optimizer is full scanning the table, rather than counting the (much smaller) index entries. In this case, the index is probably on a set of columns that can contain nulls. Since a totally null index entry would never be made, the count of rows in the index will not be the count of rows in the table. Here the optimizer is doing the right thing -- it would get the wrong answer if it used the index to count rows.
Case 3
For an indexed column, we query using the following:
select * from t where f(indexed_column) = value
and find that the index on INDEX_COLUMN is not used. This is due to the use of the function on the column. We indexed the values of INDEX_COLUMN, not the value of F(INDEXED_COLUMN). The ability to use the index is curtailed here. We can index the function if we choose to do it.
Case 4
We have indexed a character column. This column contains only numeric data. We query using the following syntax:
select * from t where indexed_column = 5
Note that the number 5 in the query is the constant number 5 (not a character string). The index on INDEXED_COLUMN is not used. This is because the preceding query is the same as the following:
select * from t where to_number(indexed_column) = 5
We have implicitly applied a function to the column and, as noted in case 3, this will preclude the use of the index. This is very easy to see with a small example. In this example, we're going to use the built-in package DBMS_XPLAN. This package is available only with Oracle9i Release 2 and above (in Oracle9i Release 1, we will use AUTOTRACE instead to see the plan easily, but we will not see the predicate information—that is only available in Oracle9i Release 2 and above):
ops$tkyte@ORA10GR1> create table t ( x char(1) constraint t_pk primary key,
2 y date );
Table created.
ops$tkyte@ORA10GR1> insert into t values ( '5', sysdate );
1 row created.
ops$tkyte@ORA10GR1> delete from plan_table;
3 rows deleted.
ops$tkyte@ORA10GR1> explain plan for select * from t where x = 5;
Explained.
ops$tkyte@ORA10GR1> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
------------------------------------------
Plan hash value: 749696591
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 12 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(TO_NUMBER("X")=5)
As you can see, it full scanned the table, and even if we were to hint the query
ops$tkyte@ORA10GR1> explain plan for select /*+ INDEX(t t_pk) */ * from t
2 where x = 5;
Explained.
ops$tkyte@ORA10GR1> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
------------------------------------
Plan hash value: 3473040572
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12 | 34 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 12 | 34 (0)| 00:00:01 |
|* 2 | INDEX FULL SCAN | T_PK | 1 | | 26 (0)| 00:00:01 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(TO_NUMBER("X")=5)
it uses the index, but not for a UNIQUE SCAN as we might expect -- it is FULL SCANNING this index. The reason lies in the last line of output there: filter(TO_NUMBER("X")=5). There is an implicit function being applied to the database column. The character string stored in X must be converted to a number prior to comparing to the value 5. We cannot convert 5 to a string, since our NLS settings control what 5 might look like in a string (it is not deterministic), so we convert the string into a number, and that precludes the use of the index to rapidly find this row. If we simply compare strings to strings
ops$tkyte@ORA10GR1> delete from plan_table;
2 rows deleted.
ops$tkyte@ORA10GR1> explain plan for select * from t where x = '5';
Explained.
ops$tkyte@ORA10GR1> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------
Plan hash value: 1301177541
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 12 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | T_PK | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("X"='5')
we get the expected INDEX UNIQUE SCAN, and we can see the function is not being applied. You should always avoid implicit conversions anyway. Always compare apples to apples and oranges to oranges. Another case where this comes up frequently is with dates. We try to query:
-- find all records for today
select * from t where trunc(date_col) = trunc(sysdate);
and discover that the index on DATE_COL will not be used. We can either index the TRUNC(DATE_COL) or, perhaps more easily, query using range comparison operators. The following demonstrates the use of greater than and less than on a date. Once we realize that the condition
TRUNC(DATE_COL) = TRUNC(SYSDATE)
is the same as the condition
select *
from t
where date_col >= trunc(sysdate)
and date_col < trunc(sysdate+1)
this moves all of the functions to the right-hand side of the equation, allowing us to use the index on DATE_COL (and it has the same exact effect as WHERE TRUNC(DATE_COL) = ➥ TRUNC(SYSDATE)).
If possible, you should always remove the functions from database columns when they are in the predicate. Not only will doing so allow for more indexes to be considered for use, but also it will reduce the amount of processing the database needs to do. In the preceding case, when we used
where date_col >= trunc(sysdate)
and date_col < trunc(sysdate+1)
the TRUNC values are computed once for the query, and then an index could be used to find just the qualifying values. When we used TRUNC(DATE_COL) = TRUNC(SYSDATE), the TRUNC(DATE_COL) had to be evaluated once per row for every row in the entire table (no indexes).
Case 5
The index, if used, would actually be slower. I see this a lot -- people assume that, of course, an index will always make a query go faster. So, they set up a small table, analyze it, and find that the optimizer doesn't use the index. The optimizer is doing exactly the right thing in this case. Oracle (under the CBO) will use an index only when it makes sense to do so. Consider this example:
ops$tkyte@ORA10GR1> create table t
2 ( x, y , primary key (x) )
3 as
4 select rownum x, object_name
5 from all_objects
6 /
Table created.
ops$tkyte@ORA10GR1> begin
2 dbms_stats.gather_table_stats
3 ( user, 'T', cascade=>true );
4 end;
5 /
PL/SQL procedure successfully completed.
If we run a query that needs a relatively small percentage of the table, as follows:
ops$tkyte@ORA10GR1> set autotrace on explain
ops$tkyte@ORA10GR1> select count(y) from t where x < 50;
COUNT(Y)
----------
49
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=1 Bytes=28)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'T' (TABLE) (Cost=3 Card=41 Bytes=1148)
3 2 INDEX (RANGE SCAN) OF 'SYS_C009167' (INDEX (UNIQUE)) (Cost=2 Card=41)
it will happily use the index; however, we'll find that when the estimated number of rows to be retrieved via the index crosses a threshold (which varies depending on various optimizer settings, physical statistics, and so on), we'll start to observe a full table scan:
ops$tkyte@ORA10GR1> select count(y) from t where x < 15000;
COUNT(Y)
----------
14999
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=57 Card=1 Bytes=28)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=57 Card=14994 Bytes=419832)
This example shows the optimizer won't always use an index and, in fact, it makes the right choice in skipping indexes. While tuning your queries, if you discover that an index isn't used when you think it "ought to be," don't just force it to be used -- test and prove first that the index is indeed faster (via elapsed and I/O counts) before overruling the CBO. Reason it out.
Case 6
We haven't analyzed our tables in a while. They used to be small, but now when we look at them, they have grown quite large. An index will now make sense, whereas it didn't originally. If we analyze the table, it will use the index.
Without correct statistics, the CBO cannot make the correct decisions.
Index Case Summary
In my experience, these six cases are the main reasons I find that indexes are not being used. It usually boils down to a case of "They cannot be used -- using them would return incorrect results" or "They should not be used -- if they were used, performance would be terrible."
To see other frequently asked questions and myths about indexes, click here.
This was last published in October 2005
Dig Deeper on Oracle DBA jobs, training and certification
PRO+
Content
Find more PRO+ content and other member only offers, here.
Join the conversation
1 comment
Send me notifications when other members comment.
By submitting you agree to receive email from TechTarget and its partners. If you reside outside of the United States, you consent to having your personal data transferred to and processed in the United States. Privacy
Please create a username to comment.
Hi Tom,
This whole series on index is very much informative. I just love the way you explained things here...it's very well written with full explanation and anyone can understand :)
Thanks a ton for sharing it.
Cancel
-ADS BY GOOGLE
SearchDataManagement
SearchBusinessAnalytics
SearchSAP
SearchSQLServer
TheServerSide
SearchDataCenter
SearchContentManagement
SearchFinancialApplications
Close
|
__label__pos
| 0.569121 |
Title:
Method for efficiently supporting interactive, fuzzy search on structured data
United States Patent 8073869
Abstract:
A method to support efficient, interactive, and fuzzy search on text data includes an interactive, fuzzy search on structured data used in applications such as query relaxation, autocomplete, and spell checking, where inconsistencies and errors exist in user queries as well as data. It utilizes techniques to efficiently and interactively answer fuzzy queries on structured data to allow users to efficiently search for information interactively, and they can find records and documents even if these records and documents are slightly different from the user keywords.
Inventors:
Li, Chen (Irvine, CA, US)
Ji, Shengyue (Irvine, CA, US)
Li, Guoliang (Beijing, CN)
Wang, Jiannan (Beijing, CN)
Feng, Jianhua (Beijing, CN)
Application Number:
12/497489
Publication Date:
12/06/2011
Filing Date:
07/02/2009
Assignee:
The Regents of the University of California (Oakland, CA, US)
Primary Class:
International Classes:
G06F7/00
Field of Search:
707/780, 707/999.004
View Patent Images:
US Patent References:
20110083167Leveraging Collaborative Cloud Services to Build and Share Apps2011-04-07Carpenter et al.726/4
20110047138Method and Apparatus for Identifying Synonyms and Using Synonyms to Search2011-02-24Dong et al.707/706
7870151Fast accurate fuzzy matching2011-01-11Mayer et al.707/780
7836060Multi-way nested searching2010-11-16Rennison707/749
20080147627CLUSTERED QUERY SUPPORT FOR A DATABASE QUERY ENGINE2008-06-19Natkovich et al.707/4
20080120072System and method for determining semantically related terms based on sequences of search queries2008-05-22Bartz et al.703/2
20080091413SYSTEMS AND METHODS FOR BUILDING AN ELECTRONIC DICTIONARY OF MULTI-WORD NAMES AND FOR PERFORMING FUZZY SEARCHES IN THE DICTIONARY2008-04-17El-Shishiny et al.704/10
20080072143METHOD AND DEVICE INCORPORATING IMPROVED TEXT INPUT MECHANISM2008-03-20Assadollahi715/261
20070168335Deep enterprise search2007-07-19Moore et al.707/3
20070038615Identifying alternative spellings of search strings by analyzing self-corrective searching behaviors of users2007-02-15Vadon et al.707/4
20060265208Device incorporating improved text input mechanism2006-11-23Assadollahi704/9
20050114331Near-neighbor search in pattern distance spaces2005-05-26Wang et al.707/6
6865567Method of generating attribute cardinality maps2005-03-08Oommen et al.1/1
20040228296Data transmission method, system, base station and subscriber station2004-11-18Lenzini et al.370/322
6684336Verification by target end system of intended data transfer operation2004-01-27Banks et al.709/227
6175835Layered index with a basic unbalanced partitioned index that allows a balanced structure of blocks2001-01-16Shadmon707/696
Primary Examiner:
Lovel, Kimberly
Assistant Examiner:
Uddin, Mohammed R.
Attorney, Agent or Firm:
Dawes, Daniel L.
Dawes, Marcus C.
Parent Case Data:
RELATED APPLICATION
The present application is related to U.S. Provisional Patent Applications Ser. No. 61,078,082, filed on Jul. 3, 2008; 61,112,527, filed on Nov. 7, 2008; and 61,152,171, filed on Feb.12, 2009, which are incorporated herein by reference and to which priority is claimed pursuant to 35 USC 119.
Claims:
We claim:
1. A method for searching a structured data table T with m attributes and n records, where A={a1; a2, : : : ; am} denotes an attribute set, R={r1; r2, : : : , rn} denotes the record set, and W={w1; w2, : : : ; wp} denotes a distinct word set in T, where given two words, wi and wi, “wi≦wj” denotes that wi is a prefix string of wj, where a query consists of a set of prefixes Q={p1, p2, . . . , pl}, where a predicted-word set is Wkl={w|w is a member of W and kl≦w}, the method comprising for each prefix pi finding the set of prefixes from the data set that are similar to pi, by: determining the predicted-record set RQ={r|r is a member of R, for every i; 1≦i≦·l−1, pi appears in r, and there exists a w included in Wkl, w appears in r}; and for a keystroke that invokes query Q, returning the top-t records in RQ for a given value t, ranked by their relevancy to the query, treating every keyword as a partial keyword, namely given an input Q={k1; k2; : : : ; kl for each predicted record r, for each 1≦i≦·l, there exists at least one predicted word wi for ki in r, since ki must be a prefix of wi,quantifying their similarity as:
sim=(ki;wi)=|ki|/|wi| if there are multiple predicted words in r for a partial keyword kj, selecting the predicted word wi with the maximal similarity to ki and quantifying a weight of a predicted word to capture the importance of a predicted word, and taking into account the number of attributes that the l predicted words appear in, denoted as na, to combine similarity, weight and number of attributes to generate a ranking function to score r for the query Q as follows: SCORE(r,Q)=α*l=11idfwi*sim(ki,wi)+(1-α)*1na, where α is a tuning parameter between 0 and 1.
2. The method of claim 1 where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises finding a trie node corresponding to a keyword in a trie with inverted lists on leaf nodes by traversing the trie from the root; locating leaf descendants of the trie node corresponding to the keyword, and retrieving the corresponding predicted words and the predicted records on inverted lists.
3. The method of claim 2 where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises tokenizing a query string into several keywords, k1; k2; : : : ; ki; for each keyword ki (1≦i≦l−1) determining only a predicted word, ki, and one predicted-record list of a trie node corresponding to ki, denoted as li, where q predicted words for ki, and their corresponding predicted record lists are ll1; ll2; : : : ; llq, and determining the predicted records by ∩i=1l−1li∩(∪j=1qllj) namely taking the union of the lists of predicted keywords for partial words, and intersecting the union of lists of predicted keywords for partial words with lists of the complete keywords.
4. The method of claim 3 where determining the predicted records by ∩i=1l−1ll∩(∪j=1q) comprises determining the union Il=∪j=1qIlj of the predicted-record lists of the partial keyword ki to generate an ordered predicted list by using a sort-merge algorithm and then determining the intersection of several lists ∩i=1lIl by using a merge-join algorithm to intersect the lists, assuming these lists are pre-sorted or determining whether each record on the short lists appears in other long lists by doing a binary search or a hash-based lookup.
5. The method of claim 2 where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises among the union lists ∪1, ∪2, . . . , ∪t, of the leaf nodes of each prefix node identifying the shortest union list, verifying each record ID on the shortest list by checking if it exists on all the other union lists by maintaining a forward list maintained for each record r, which is a sorted list of IDs of keywords in r, denoted as Fr, so that each prefix pi has a range of keyword IDs [MinIdi, MaxIdi], verifying whether r appears on a union list ∪k of a query prefix pk for a record r on the shortest union list by testing if pk appears in the forward list Fr as a prefix by performing a binary search for MinIdk on the forward list Fr to get a lower bound Idlb, and check if Idlb is no larger than MaxIdk, where the probing succeeds if the condition holds, and fails otherwise.
6. The method of claim 5 where each query keyword has multiple active nodes of similar prefixes, instead of determining the union of the leaf nodes of one prefix node, determining the unions of the leaf nodes for all active nodes of a prefix keyword, estimating the lengths of these union lists to find a shortest one, for each record r on the shortest union list, for each of the other query keywords, for each of its active nodes, testing if the corresponding similar prefix appears in the record r as a prefix using the forward list of r, Fr.
7. The method of claim 1 where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises maintaining a session cache for each user where each session cache keeps keywords that the user has input in the past and other information for each keyword, including its corresponding trie node and the top-t predicted records.
8. The method of claim 7 where maintaining a session cache for each user comprises inputting in a query string c1c2 : : : cx letter by letter, where pi=c1c2 : : : c1 is a prefix query (1≦i≦·x) and where ni is the trie node corresponding to p, and after inputting in the prefix query pi, storing node ni for pi and its top-t predicted records, inputting a new character cx+1 at the end of the previous query string c1c2 : : : cx, determining whether node nx that has been kept for px has a child with a label of cx+1, if so, locating leaf descendants of node nx+1, and retrieving corresponding predicted words and the predicted records, otherwise, there is no word that has a prefix of px+1, and then returning an empty answer.
9. The method of claim 8 for a keystroke that invokes query Q, returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises matching prefixes which includes the similarity between a query keyword and its best matching prefix; predicted keywords where different predicted keywords for the same prefix can have different weights, and record weights where different records have different weights, where a query is Q={p1, p2, . . . }, where p′i is the best matching prefix for pi, and where ki is the best predicted keyword for p′i, where sim(pi, p′i) is an edit similarity between p′i and pi and where the score of a record r for Q can be defined as:
Score(r,Q)=Σi[sim(pi,p′i)+α*|p′i|−|ki|)+β*score(r,ki)], where α and β are weights (0<β<α<1), and score(r, ki) is a score of record r for keyword ki.
10. The method of claim 7 further comprising modifying a previous query string arbitrarily, or copying and pasting a completely different string for a new query string, among all the keywords input by the user, identifying the cached keyword that has the longest prefix with the new query.
11. The method of claim 7 where prefix queries p1, p2; : : : ; px have been cached, further comprising inputting a new query p′=c1 c2: : : ci c′ : : : cy, finding pi that has a longest prefix with p′, using node ni of pi to incrementally answer the new query p′ by inserting the characters after the longest prefix of the new query c′ : : : cy one by one, if there exists a cached keyword pi=p′, using the cached top-t records of pi to directly answer the query p′; otherwise if there is no such cached keyword, answering the query without use of any cache.
12. The method of claim 7 where maintaining a session cache for each user comprises: caching query results and using them to answer subsequent queries; increasing the edit-distance threshold δ as a query string is getting longer in successive queries; using pagination to show query results in different pages to partially traverse the shortest list, until enough results have been obtained for a first page, continuing traversing the shortest list to determine more query results and caching them; or retrieving the top-k records according a ranking function, for a predefined constant k, verifying each record accessed in the traversal by probing the keyword range using the forward list of the record, caching records that pass verification, then when answering a query incrementally, first verifying each record in the cached result of the previous increment of the query by probing the keyword range, if the results from the cache are insufficient to compute the new top-k, resuming the traversal on the list starting from the stopping point of the previous query, until we have enough top-k results for the new query.
13. The method of claim 1 for searching a structured data table T with the query Q={k1; k2; : : : ; kl}, where an edit distance between two strings s1 and s2, denoted by ed(s1, s2), is the minimum number of edit operations of single characters needed to transform the first string so to the second string s2, and an edit-distance threshold δ, for 1≦i≦l, where a predicted-word set Wkj for ki is {w|∃w′≦w, w∈W, ed(ki,w′)≦δ), where a predicted-record set RQ is {r|r∈R, ∀1≦l, ∃ wi∈Wi, wl appears in r} comprising determining the top-t predicted records in RQ ranked by their relevancy to Q with the edit-distance threshold δ.
14. The method of claim 13 further comprising: inputting a keyword k, storing a set of active nodes φk={[n, ξn]}, where n is an active node for k, and ξn=ed(k; n)≦δ, inputting one more letter after k, and finding only the descendants of the active nodes of k as active nodes of the new query which comprises initializing an active-node set for an empty keyword ε, i.e., φε={[n; ξn]|ξn=|n|≦δ}, namely including all trie nodes n whose corresponding string has a length |n| within the edit-distance threshold δ, inputting a query string c1 c2 : : : cx letter by letter as follows: after inputting in a prefix query pi=c1 c2 : : : ci (i≦x), storing an active-node set φp for pi, when inputting a new character cx+1 and submitting a new query px+1, incrementally determining the active-node set 0px+1 for px+1 by using 0px as follows: for each [n; ξn]in 0px, we consider whether the descendants of n are active nodes for px+1, for the node n, if ξn+1<δ, then n is an active node for px+1, then storing [n; ξn+1] into 0px+1 for each child nc of node n, (1) the child node nc has a character different from cx+1, ed(ns; px+1)≦ed(n; px)+1=ξn+1, if ξn+1≦·δ, then ns is an active node for the new string, then storing [ns; ξn+1] into 0px+1, or (2) the child node nc has a label cx+1 is denoted as a matching node nm, ed(nm; px+1)≦·ed(n; px)=ξn≦δ, so that nm is an active node of the new string, then storing [nm; ξn] into 0px+1, but if the distance for the node nm is smaller than δ, i.e., ξn<δ, then for each nm's descendant d that is at most δ−ξn letters away from nm, adding [d; ξd] to the active-node set for the new string px+1, where ξdn+|d|−|nm|.
15. The method of claim 14 where during storing set 0px+1, it is possible to add two new pairs [v; ξ1] and [v; ξ2] for the same trie node v in which case storing the one of the new pairs [v; ξ1] and [v; ξ2] for the same trie node v with the smaller edit distance.
16. The method of claim 13 where given two words wi and wj, their normalized edit distance is:
ned(wi;wj)=ed(wi;wj)/max(|wi|;|wj|); where |wi| denotes the length of wi, where given an input keyword and one of its predicted words, the prefix of the predicted word with the minimum ned is defined as a best predicted prefix, and the corresponding normalized edit distance is defined as the “minimal normalized edit distance,” denoted as “mned” and where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises determining if ki is a complete keyword, then using ned to quantify the similarity; otherwise, if ki is a partial keyword, then using mned to quantify their similarity, namely quantifying similarity of two words using:
sim(ki;wi)=γ*(1−ned(ki;wi))+(1−γ)*(1−mned(ki;wi)); where γ is a tuning parameter between 0 and 1.
17. The method of claim 1 where determining the predicted-record set RQ comprises determining possible multiple words which have a prefix similar to a partial keyword, including multiple trie nodes corresponding to these words defined as the active nodes for the keyword k, locating leaf descendants of the active nodes, and determining the predicted records corresponding to these leaf nodes.
18. The method of claim 1 where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises for a given query Q={p1, p2, . . . , pi}, where {ki1, ki2, . . . } is the set of keywords that share the prefix pi, where Lij is the inverted list of kij, and ∪i=∪jLij is the union of the lists for p for each prefix pi, determining the corresponding union list ∪i on the fly and intersecting the union lists of different keywords to find ∩iUi.
19. The method of claim 1 where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises for a given query Q={p1, p2, . . . , pi} where {ki1, ki2, . . . } is the set of keywords that share the prefix pi, where Lij is the inverted list of kij, and ∪i=∪jLij is the union of the lists for pfor each prefix pi, predetermining and storing the union list ∪i of each prefix pi, and intersecting the union lists ∩iUi of query keywords when a query is initiated.
20. The method of claim 1 where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises assigning to each record a score on a list, and combining the scores of the records on different lists using an aggregation function to determine an overall relevance of the record to the query, where the aggregation function is monotonic, for each data keyword similar to a keyword in the query providing an inverted list sorted based on the weight of the keyword in a record, and accessing the sorted inverted lists.
21. The method of claim 1 where the data table T is structured into a trie and further comprising linking each node on the trie corresponding to a word, wi, to each node corresponding to the synonyms of the word, wi, in the trie and vise versa to return both wi and its synonyms using the link when the word, wi is retrieved.
22. A method for fuzzy type ahead search where R is a collection of records such as the tuples in a relational table, where D is a data set of words in R, where a user inputs a keyword query letter by letter, comprising: finding on-the-fly records with keywords similar to the query keywords by using edit distance to measure the similarity between strings, where the edit distance between two strings s1 and s2, denoted by ed(s1, s2), is the minimum number of single-character edit operations, where Q is the keyword query the user has input which is a sequence of keywords [w1, w2, : : : , wm]; treating the last keyword wm as a partial keyword finding the keywords in the data set that are similar to query keywords, where π is a function that quantifies the similarity between a string s and a query keyword w in D, including, but not limited to:
π(s,w)=1−ed(s,w)/|w|, where |w| is the length of the keyword w; and normalizing the edit distance based on the query-keyword length in order to allow more errors for longer query keywords, where d be a keyword in D, for each complete keyword wi (i=1, : : : , m−1), defining the similarity of d to wi as:
Sim(d,wi)=π(d,wi), since the last keyword wm is treated as a prefix condition, defining the similarity of d to wm as the maximal similarity of d's prefixes using function π, namely Sim(d, wm )=max prefix p of d π(p, wm ), where τ is a similarity threshold, where a keyword in D is similar to a query keyword w if Sim(d, w)≧τ, where a prefix p of a keyword in D is similar to the query keyword wm if π(p, wm)≧τ, where φ(wi) (i=1, : : : , m) denotes the set of keywords in D similar to wi, and where P(wm) denotes the set of prefixes of keywords in D similar to wm.
23. The method of claim 22 further comprising: ranking each record r in R based on its relevance to the query, where F(;) is a ranking function that takes the query Q and a record r∈R; determining a score F(r, Q) as the relevance of the record r to the query Q, and given a positive integer k, determining the k best records in R ranked by their relevance to Q based on the score F(r, Q).
24. The method of claim 23 where determining a score F(r, Q) as the relevance of the record r to the query Q comprises determining the relevance score F(r, Q) based on the similarities of the keywords in r and those keywords in the query given that a keyword d in the record could have different prefixes with different similarities to the partial keyword wm, by taking their maximal value as the overall similarity between d and wm, where a keyword in record r has a weight with respect to r, such as the term frequency TF and inverse document frequency IDF of the keyword in the record.
25. The method of claim 23 where determining the score F(r, Q) comprises: for each keyword w in the query, determining a score of the keyword with respect to the record r and the query, denoted by Score(r, w, Q); and determining the score F(r, Q) by applying a monotonic function on the Score(r, w, Q)'s for all the keywords w in the query.
26. The method of claim 25 where d is a keyword in record r such that d is similar to the query keyword w, d ∈φ(w), where Score(r, w, d) denotes the relevance of this query keyword w to the record keyword d, where the relevance value Score(r, w, Q) for a query keyword w in the record is the maximal value of the Score(r, w, d)'s for all the keywords d in the record, where determining a score of the keyword with respect to the record r and the query, denoted by Score(r, w, Q) comprises finding the most relevant keyword in a record to a query keyword when computing the relevance of the query keyword to the record as an indicator of how important this record is to the user query.
27. The method of claim 26 where F(r, Q) is F(r,Q)=i=1mScore(r,wi,Q) where
Score(r,wi,Q)=maxrecord keyword d in r{Score(r,wi,d)},
and
Score(r,wi,d)=Sim(d,wi)*Weight(d,r) where Sim(d, wi) is the similarity between complete keyword wi and record keyword d, and Weight(d, r) is the weight of d in record r.
28. The method of claim 26 comprising partitioning the inverted lists into several groups based on their corresponding query keywords, where each query keyword w has a group of inverted lists, producing a list of record IDs sorted on their scores with respect to this keyword, and using a top-k algorithm to find the k best records for the query.
29. The method of claim 28 where using a top-k algorithm to find the k best records for the query comprises for each group of inverted lists for the query keyword w, retrieving the next most relevant record ID for w by building a max heap on the inverted lists comprising maintaining a cursor on each inverted list in the group, where the heap is comprised of the record IDs pointed by the cursors so far, sorted on the scores of the similar keywords in these records since each inverted list is already sorted based on the weights of its keyword in the records and all the records on this list share the same similarity between this keyword and the query keyword w the list is also sorted based on the scores of this keyword in these records, retrieving the next best record from this group by popping the top element from the heap, incrementing the cursor of the list of the popped element by one, and pushing the new element of this list to the heap, ignoring other lists that may produce this popped record, since their corresponding scores will no longer affect the score of this record with respect to the query keyword w.
30. The method of claim 29 where L1, : : : , Lt are inverted lists with the similar keywords d1, : : : , dt, respectively, and further comprising sorting these inverted lists based on the similarities of their keywords to w, Sim(d1, w), : : : , sim(dt, w), constructing the max heap using the lists with the highest similarity values.
31. The method of claim 22 where the dataset D comprises a trie for the data keywords in D, where each trie node has a character label, where each keyword in D corresponds to a unique path from the root to a leaf node on the trie, where a leaf node has an inverted list of pairs [rid, weight]i, where rid is the ID of a record containing the leaf-node string, and weight is the weight of the keyword in the record and further comprising determining the top-k answers to the query Q in two steps comprising: for each keyword wi in the query, determining the similar keywords φ(wi) and similar prefixes P(wm) on the trie; and accessing the inverted lists of these similar data keywords to determine the k best answers to the query.
32. The method of claim 31 where accessing the inverted lists of these similar data keywords to determine the k best answers to the query comprises randomly accessing the inverted list, in each random access, given an ID of a record r, retrieving information related to the keywords in the query Q, to determine the score F(r, Q) using a forward index in which each record has a forward list of the IDs of its keywords and their corresponding weights, where each keyword has a unique ID corresponding its leaf node on the trie, and the IDs of the keywords follow their alphabetical order.
33. The method of claim 32 where randomly accessing the inverted list comprises: maintaining for each trie node n, a keyword range [ln, un], where In and un are the minimal and maximal keyword IDs of its leaf nodes, respectively; verifying whether record r contains a keyword with a prefix similar to wm, where for a prefix p on the trie similar to wm checking if there is a keyword ID on the forward list of r in the keyword range [lp, up] of the trie node of p, since the forward list of r sorted, this checking is performed a binary search using the lower bound lp on the forward list of r to get the smallest ID° no less than lp, the record having a keyword similar to wm if γ exists and is no greater than the upper bound up, i.e., γ≦up.
34. The method of claim 32 where randomly accessing the inverted list comprises: for each prefix p similar to wm, traversing the subtrie of p and identifying its leaf nodes; for each leaf node d, for the query Q, this keyword d has a prefix similar to wm in the query, storing [Query ID, partial keyword wm, sim(p, wm)]. in order to differentiate the query from other queries in case multiple queries are answered concurrently; storing the similarity between wm and p: determining the score of this keyword in a candidate record, where in the case of the leaf node having several prefixes similar to wm, storing their maximal similarity to wm; for each keyword wi in the query, storing the same information for those trie nodes similar to wi, defining stored entries for the leaf node as its collection of relevant query keywords; using collection of relevant query keywords to efficiently check if a record r contains a complete word with a prefix similar to the partial keyword wm by scanning the forward list of r, for each of its keyword IDs, locating the corresponding leaf node on the trie, and testing whether its collection of relevant query keywords includes this query and the keyword wm, and if so, using the stored string similarity to determine the score of this keyword in the query.
35. The method of claim 31 further comprising improving sorted access by precomputing and storing the unions of some of the inverted lists on the trie, where v is a trie node, and ∪(v) is the union of the inverted lists of v's leaf nodes, sorted by their record weights, and if a record appears more than once on these lists, selecting its maximal weight as its weight on list ∪(v), where ∪(v) is defined as the union list of node v.
36. The method of claim 35 where v is a trie node comprising materializing union list ∪(v), and using ∪(v) to speed up sorted access for the prefix keyword wm is that ∪(v) is sorted based on its record weights, where the value Score(r, wm, di) of a record r on the list of a keyword di with respect to wm is based on both Weight(di, r) and Sim(di, wm ), where all the leaf nodes of v have the same similarity to wm, where all the leaf nodes of v are similar to wm, namely their similarity to wm is no less than the threshold τ so that the sorting order of the union list ∪(v) is also the order of the scores of the records on the leaf-node lists with respect to wm.
37. The method of claim 36 where B is a budget of storage space available to materialize union lists comprising selecting trie nodes to materialize their union lists to maximize the performance of queries, where a node is defined as “materialized” if its union list has been materialized, where for a query Q with a prefix keyword wm, some of the trie nodes have their union lists materialized, where v is the highest trie node that is usable for the max heap of wm, and for which ∪(v) has not been materialized, where for each nonleaf trie descendant c of v, such that no node on the path from v to c (including c) has been materialized comprising: performing a cost-based analysis to quantify the benefit of materializing ∪(c) on the performance of operations on the max heap of wm based on reduction of traversal time, reduction of heap-construction time and reduction of sorted-access time, the overall benefit Bv of materializing v for the query keyword query wm being:
Bv=Breduction of traversal time+Breduction of heap-construction time+Av*Breduction of sorted-access time, where Av is the number of sorted accesses on ∪(v) for each query and then summing the benefits of materializing its union list to all the queries in the query workload or trie according to probability of occurrence of the query, and recomputing Bv the benefit Bv of materializing other affected nodes after the benefit of each node is computed until the given budget B of storage space is realized.
38. The method of claim 37 where selecting trie nodes to materialize their union lists to maximize the performance of queries comprises randomly select trie nodes, selecting nodes top down from the trie root, or selecting nodes bottom up from the leaf nodes.
39. A software product including instructions stored on a non-transitory tangible medium for controlling a computer for searching a structured data table T with m attributes and n records, where A={a1; a2; : : : ; am} denotes an attribute set, R={r1; : : : ; rn} denotes the record set and W={w1; w2; : : : ; wp} denotes a distinct word set in T where given two words, wi and wj, “wi≦wj” denotes that wi is a prefix string of wj, where a query consists of a set of prefixes Q={p1, p2, . . . , pl}, where a predicted-word set is Wki={w|w is a member of W and kl≦w}, the method comprising for each prefix pi finding the set of prefixes from the data set that are similar to pi, by: determining the predicted-record set RQ={r|r is a member of R for ever i; 1≦i≦·l−1, pi appears in r, and there exists a w included in Wki, w appears in r}; and for a keystroke that invokes query Q, returning the top-t records in RQ for a given value t, ranked by their relevancy to the query, treating every keyword as a partial keyword, namely given an input query Q={k1; k2; : : : ; kl}, for each predicted record r, for each 1≦i≦·l, there exists at least one predicted word wi for ki in r, since ki must be a prefix of wi, quantifying similarity as:
sim(ki;wi)=|ki|/|wi| if there are multiple predicted words in r for a partial keyword ki, selecting the predicted word wi with the maximal similarity to ki and quantifying a weight of a predicted word to capture the importance of a predicted word, and taking into account the number of attributes that the l predicted words a ear in denoted as na, to combine similarity, weight and number of attributes to generate a ranking function to score r for the query Q as follows: SCORE(r,Q)=α*l=11idfwi*sim(ki,wi)+(1-α)*1na, where α is a tuning parameter between 0 and 1.
Description:
GOVERNMENT RIGHTS
This invention was made with Government support under Grant No. 0238586, awarded by the National Science Foundation. The Government had certain rights in the invention.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to the field of interactive fuzzy-logic searches of string databases using a computer.
2. Description of the Prior Art
Traditional information systems return answers to a user after the user submits a complete query. Users often feel “left in the dark” when they have limited knowledge about the underlying data or the entities they are looking for, and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step towards solving this problem.
In information systems, often users cannot find their interesting records or documents because they don't know the exact representation of the entity in the backend repository. For example, a person Dame “John Lemmon” could be mistakenly represented as “Johnny Lemon”. A person called “William Gates” could be stored as “Bill Gates.” As a result, a user might not be able to find the records if he doesn't know the exact spelling of the person he is looking for. The same problem exists especially for the case where data came from various Web sources, which tends to be heterogeneous and noisy.
What is needed is a method which allows information systems to answer queries even if there is a slight mismatch between the user query and the physical representation of the interesting entities.
What is needed is a method that can also allow users to interactively query the repository as they type in their query.
In order to give instant feedback when users formulate search queries, many information systems are supporting autocomplete search, which shows results immediately after a user types in a partial query. As an example, almost all the major search engines nowadays automatically suggest possible keyword queries as a user types in partial keywords. Both Google Finance and Yahoo! Finance support searching for stock information as users type in keywords. Most autocomplete systems treat a query with multiple keywords as a single string, and find answers with text that matches the string exactly. As an illustrative example, consider the search box on the home page of Apple Inc. that supports autocomplete search on its products. Although a keyword query “itunes” can find a record “itunes wi-fi music store”, a query with keywords “itunes music” cannot find this record (as of June 2009), simply because these two keywords appear at different places in the record.
To overcome this limitation, a new type-ahead search paradigm has emerged recently. A system using this paradigm treats a query as a set of keywords, and does a full-text search on the underlying data to find answers including the keywords. An example is the CompleteSearch system on DBLP2, which searches the underlying publication data “on the fly” as the user types in query keywords letter by letter. For instance, a query “music sig” can find publication records with the keyword “music” and a keyword that has “sig” as a prefix, such as “sigir”, “sigmod”, and “signature”. In this way, a user can get instant feedback after typing keywords, thus can obtain more knowledge about the underlying data to formulate a query more easily.
A main challenge in fuzzy type-ahead search is the requirement of high efficiency. To make search interactive, from the time the user has a keystroke to the time the results computed from the server are displayed on the user's browser, the delay should be as small as possible. An interactive speed requires this delay be within milliseconds. Notice that this time includes the network transfer delay, execution time on the server, and the time for the browser to execute its javascript (which tends to be slow). Providing a high efficiency is especially important since the system needs to answer more queries than a traditional system that answers a query only after the user finishes typing a complete query.
The problem includes how to answer ranking queries in fuzzy type-ahead search on large amounts of data. Consider a collection of records such as the tuples in a relational table. As a user types in a keyword query letter by letter, we want to on-the-fly find records with keywords similar to the query keywords. We treat the last keyword in the query as a partial keyword the user is completing. We assume an index structure with a trie for the keywords in the underlying data, and each leaf node has an inverted list of records with this keyword, with the weight of this keyword in the record. As an example, Table IV shows a sample collection of publication records. For simplicity, we only list some of the keywords for each record. FIG. 9 shows the corresponding index structure.
Suppose a user types in a query “music icde li”. We want to find records with keywords similar to these keywords, and rank them to find the best answers. To get the answers, for each query keyword, we can find keywords similar the query keyword. For instance, both the keywords “icdt” and “icde” are similar to the second query keyword. The last keyword “li” is treated as a prefix condition, since the user is still typing at the end of this keyword. We find keywords that have a prefix similar to “li”, such as “lin”, “liu”, “lui”, “icde”, and “icdt”. We access the inverted lists of these similar keywords to find records and rank them to find the best answers for the user.
A key question is: how to access these lists efficiently to answer top-k queries? In the literature there are many algorithms for answering top-k queries by accessing lists. These algorithms share the same framework originally proposed by Fagin, in which we have lists of records sorted based on various conditions. An aggregation function takes the scores of a record from these lists and computes the final score of the record. There are two methods to access these lists: (1) Random Access: Given a record id, we can retrieve the score of the record on each list; (2) Sorted Access: We retrieve the record ids on each list following the list order.
BRIEF SUMMARY OF THE INVENTION
We disclose a method and apparatus for an information-access paradigm, called “iSearch,” in which the system searches on the underlying data “on the fly” as the user types in query keywords. It extends autocomplete interfaces by (1) supporting queries with multiple keywords on data with multiple attributes; and (2) finding relevant records that may not match query keywords exactly. This framework allows users to explore data as they type, even in the presence of minor errors.
There are challenges in this framework for large amounts of data. The first one is search efficiency. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each such query within milliseconds. We develop a caching-based technique to support incremental computation on a trie index structure. The second challenge is supporting fuzzy search with an interactive speed. We propose a technique to do incremental computation with efficient pruning and caching. We have developed several real prototypes using these techniques. One of them has been deployed to support interactive search on the UCI directory, which has been used regularly and well received by users due to its friendly interface and high search efficiency.
One of the illustrated embodiments of the invention is a method to support efficient, interactive, and fuzzy search on text data. Interactive, fuzzy search on structured data is important in applications such as query relaxation, autocomplete, and spell checking, where inconsistencies and errors exist in user queries as well as data. In this embodiment, we developed techniques to efficiently and interactively answer fuzzy queries on structured data. Most of existing algorithms cannot support interactive, fuzzy search. After a user types in a query with keywords, these algorithms find records and documents that include these keywords exactly.
The techniques of the illustrated embodiment allows users to efficiently search for information interactively, and they can find records and documents even if these records and documents are slightly different from the user keywords. In the illustrated embodiment of the invention, we developed the following novel techniques to effectively, efficiently, and interactively answer fuzzy search over structured data:
1) We use tree-based indexes to efficiently index the data and use an algorithm to traverse the tree as the user types in a query.
2) We adaptively maintain the intermediate results for the query to support interactive, fuzzy search.
3) We develop good ranking algorithms to sort the candidate results.
We have developed an online demo called PSearch, available at ≦http://psearch.ics.uci.edu/≧. The goal of the PSearch Project is to make it easier to search for people at the University of California at Irvine. It has a single input search box, which allows keyword queries on people name, UCInetID, telephone number, department, and title. It has the following features:
• Supports interactive search: search as you type;
• Allows minor errors in the keywords;
• Supports synonyms, e.g., “William=Bill” and “NACS=Network and Academic Computing Services”.
• Allows multiple keywords; We use the UCI Directory data provided by NACS.
The illustrated embodiment of the invention may be used in information systems that need to allow users to interactively search for text data even with minor errors. For example, E-commerce Web sites that have structured data about their products can benefit from our techniques. Services that allow people to search for people can also use our techniques to make their search more powerful. The illustrated embodiment of the invention can also be used to allow users to browse data in underlying databases.
In the context of the illustrated embodiment we consider how to support these two types of operations efficiently by utilizing characteristics specific to our problem setting, so that we can adopt these algorithms to solve our problem. We make the following contributions. We present two efficient algorithms for supporting random access on the inverted lists. Both require a forward index on the data, and their performance depends on the number of keywords in each record. We consider how to support sorted access on the inverted lists efficiently. We identify an important subclass of ranking functions, which have been proven to be very effective in several systems we have deployed. We develop a list-pruning technique to improve the performance. We consider how to further improve the performance of sorted access by list materialization, i.e., precomputing and storing the union of the leaf-node inverted lists for a trie node. We show how to use materialized lists properly to improve the performance. We then consider how to select trie nodes to materialize their lists given a space constraint in order to maximize query performance. We show our experimental results on several real data sets to show the query performance of algorithms using our techniques. We also give suggestions on how to use these techniques in applications with large data sets.
Preliminaries: Data and Queries:
We first review the problem of fuzzy type ahead search. Let R be a collection of records such as the tuples in a relational table. Let D be the set of words in R. As a user types in a keyword query letter by letter, we want to on-the-fly find records with keywords similar to the query keywords. We use edit distance to measure the similarity between strings. Formally, the edit distance between two strings s1 and s2, denoted by ed(s1, s2), is the minimum number of single-character edit operations (i.e., insertion, deletion, and substitution) needed to transform s1 to s2. For example, ed(icde, idt)=2.
Let Q be a query the user has typed in, which is a sequence of keywords [w1, w2, : : : , wm]. We treat the last keyword wm as a partial keyword the user is completing. We want to find the keywords in the data set that are similar to query keywords, since records with such a keyword could be of interest to the user.
Let π be a function that quantifies the similarity between a string s and a query keyword w in D. An example is:
π(s,w)=1−ed(s,w)/|w|
where |w| is the length of the keyword w. We normalize the edit distance based on the query-keyword length in order to allow more errors for longer query keywords. Our results in the paper focus on this function, and they can be generalized to other functions using edit distance.
• a. Let d be a keyword in D. For each complete keyword wi (i=1, : : : , m−1), we define the similarity of d to wi as:
Sim(d,wi)=π(d,wi).
Since the last keyword wm is treated as a prefix condition, we define the similarity of d to wm as the maximal similarity of d's prefixes using function π, i.e.: Sim(d, wm)=max prefix p of d π(p, wm): Let τ be a similarity threshold. We say a keyword in D is similar to a query keyword w if Sim(d, w)≧τ. We say a prefix p of a keyword in D is similar to the query keyword wm if π(p, wm)≧τ. Let φ(wi) (i=1, : : : , m) denote the set of keywords in D similar to wi, and P(wm) denote the set of prefixes (of keywords in D) similar to wm.
In our running example, suppose a user types in a query “icde li” letter by letter on the data shown in Table IV. Suppose the similarity threshold τ is 0:45. Then the set of prefixes similar to the partial keyword “li” is P(“li”)={l, li, lin, liu, lu, lui, i}, and the set of data keywords similar to the partial keyword “li” is φ(“li”)={lin, liu, lui, icde, icdt}. In particular, “lui” is similar to “li” since Sim(lui, li)=1−ed(lui, li)/|li|=0:5≧τ. The set of similar words for the complete keyword “icde” is φ(“icde”)={icde, icdt}. Notice that in the running example, for simplicity, we use a relatively small similarity threshold τ. Increasing the threshold can reduce the number of similar prefixes on the trie.
Top-k Answers:
We rank each record r in R based on its relevance to the query. Let F(,) be such a ranking function that takes the query Q and a record r ∈R, and computes a score F(r, Q) as the relevance of the record r to the query Q. Given a positive integer k, our goal is to compute the k best records in R ranked by their relevance to Q. Notice that our problem setting allows an important record to be in the answer, even if not all the query keywords have a similar keyword in the record. We want to compute these records efficiently as the user modifies the query by adding more letters, or modifying or deleting the letters in Q. Without loss of generality, each string in the data set and a query is assumed to only use lower case letters.
An effective ranking function F computes the relevance score F(r, Q) based on the similarities of the keywords in r and those keywords in the query. Notice that a keyword d in the record could have different prefixes with different similarities to the partial keyword wm, and we can take their maximal value as the overall similarity between d and wm. The function also considers the importance of each keyword in the record. A keyword in record r has a weight with respect to r, such as the term frequency and inverse document frequency (TF/IDF) of the keyword in the record. Notice that this setting covers the specific cases where each record has a keyword independent weight, such as the rank of a Web page if a record is a URL with tokenized keywords, and the number of publications of an author if a record is an author.
Indexing and Searching:
To support fuzzy type-ahead search efficiently, we construct a trie for the data keywords in D. A trie node has a character label. Each keyword in D corresponds to a unique path from the root to a leaf node on the trie. For simplicity, a trio node is mentioned interchangeably with the keyword corresponding to the path from the root to the node. A leaf node has an inverted list of pairs [rid, weight]i, where rid is the ID of a record containing the leaf-node string, and weight is the weight of the keyword in the record. FIG. 9 shows the index structure in our running example. For instance, for the leaf node of keyword “lin”, its inverted list has five elements. The first element “r3, 9]” indicates that the record r3 has this keyword, and the weight of this keyword in this record is “9”.
We compute the top-k answers to the query Q in two steps. In the first step, for each keyword wi in the query, we compute the similar keywords φ(wi) and similar prefixes P(wm) on the trie (shown in FIG. 10). We disclosed above an efficient algorithm for incrementally computing these similar strings as the user modifies the current query. In our running example, the keyword “icde” in the query has two similar data keywords φ(“icde”)={icde, icdt}. The partial keyword has seven similar prefixes, i.e., P(“li”)={l, li, lin, liu, lu, lui, i}. They have five leaf-node keywords: “lin”, “liu”, “lui”, “icde”, and “icdt” . . . .
In the second step, we access the inverted lists of these similar data keywords to compute the k best answers to the query. Many algorithms have been proposed for answering top-k queries by accessing sorted lists. When adopting these algorithms to solve our problem, we need to efficiently support two basic types of access used in these algorithms: random access and sorted access on the lists. Below we consider how to efficiently support these two access methods, and the corresponding requirements on the index structure.
Supporting Random Access Efficiently
• a. Consider how to support efficient random access in order to adopt existing top-k algorithms in the literature. In each random access, given an ID of a record r, we want to retrieve information related to the keywords in the query Q, which allows us to compute the score F(r, Q). In particular, for a keyword we in the query, does the record r have a keyword similar to wi? One naive way to get the information is to retrieve the original record rand go through its keywords. This approach has two limitations. First, if the data is too large to fit into memory and has to reside on hard disks, accessing the original data from the disks may slow down the process significantly. This costly operation will prevent us from achieving an interactive-search speed. Its second limitation is that it may require a lot of computation of string similarities based on edit distance, which could be time consuming.
• b. Here we present two efficient approaches for solving this problem. Both require a forward index in which each record has a forward list of the IDs of its keywords and their corresponding weights. We assume each keyword has a unique ID corresponding its leaf node on the trie, and the IDs of the keywords follow their alphabetical order. FIG. 11 shows the forward lists in our running example. The element “[1, 9]” on the forward list of record r3 shows that this record has a keyword with the ID “1”, which is the keyword “lin” as shown on the trie. The weight of this keyword in this record is “9”. For simplicity, in the discussion we focus on how to verify whether the record has a keyword with a prefix similar to the partial keyword wm. With minor modifications the discussion extends to the case where we want to verify whether r has a keyword similar to a complete keyword wi in the query.
Method 1: Probing on Forward Lists
This method maintains, for each trie node n, a keyword range [ln, un], where ln and un are the minimal and maximal keyword IDs of its leaf nodes, respectively. An interesting observation is that a complete word with n as a prefix must have an ID in this keyword range, and each complete word in the data set with an ID in this range must have a prefix of n. In FIG. 11, the keyword range of node “li” is [1, 3], since 1 is the smallest ID of its leaf nodes and 3 is the largest ID of its leaf nodes.
Based on this observation, this method verifies whether record r contains a keyword with a prefix similar to wm as 4 follows. For a prefix p on the trie similar to wm (computed in the first step of the algorithm as described in Section II), we check if there is a keyword ID on the forward list of r in the keyword range [lp, up] of the trie node of p. Since we can keep the forward list of r sorted, this checking can be done efficiently. In particular, we do a binary search using the lower bound lp on the forward list of r to get the smallest ID° no less than lp. The record has a keyword similar to wm if γ exists and is no greater than the upper bound up, i.e., γ≦·up.
In our running example, suppose we want to verify whether the record r3 contains a complete word with a prefix similar to the partial keyword “li”. For each of its similar prefixes, we check if its range contains a keyword in the record. For instance, consider the similar prefix “li” with the range [1, 3]. Using a binary search on the forward list ([1, 9], [3, 4], [5, 4]) of r3, we find a keyword ID 1 in this range. Thus we know that the record indeed contains a keyword similar to this prefix.
B. Method 2: Probing on Trie Leaf Nodes
Using this method, for each prefix p similar to wm, we traverse the subtrie of p and identify its leaf nodes. For each leaf node d, we store the fact that for the query Q, this keyword d has a prefix similar to wm in the query. Specifically, we store
• [Query ID, partial keyword wm, sim(p, wm)].
We store the query ID in order to differentiate it from other queries in case multiple queries are answered concurrently. We store the similarity between wm and p to compute the score of this keyword in a candidate record. In case the leaf node has several prefixes similar to wm, we only keep their maximal similarity to wm. For each keyword wi in the query, we also store the same information for those trie nodes similar to wi. Therefore, a leaf node might have multiple entries corresponding to different keywords in the same query, We call these entries for the leaf node as its collection of relevant query keywords. This collection can be implemented as a hash table using the query ID as the key, or an array sorted based on the query ID. Notice that this structure needs very little storage space, since the entries of old queries can be quickly reused by new queries, and the number of keywords in a query tends to be small.
We use this additional information to efficiently check if a record r contains a complete word with a prefix similar to the partial keyword wm. We scan the forward list of r. For each of its keyword IDs, we locate the corresponding leaf node on the trie, and test whether its collection of relevant query keywords includes this query and the keyword wm. If so, we use the stored string similarity to compute the score of this keyword in the query.
FIG. 12 shows how we use this method in our running example, where the user types in a keyword query q1=[icde, li]. When computing the complete words of “li”, for each of its leaf nodes such as “lin” and “liu”, we insert the query ID (shown as “q1”), the partial keyword “li”, and the corresponding prefix similarity to its collection of relevant query keywords. To verify whether record r3 has a word with a prefix similar to “li”, we scan its forward list. Its first keyword is “lin”. We access its corresponding leaf node, and see that the node's collection of relevant query keywords includes this query partial keyword. We can retrieve the corresponding prefix similarity to compute the score of this keyword with respect to this query.
C. Comparing Two Methods
The time complexity of method 1 is O (G*log(|r|)), where G is the total number of similar prefixes of wm and similar complete keywords of wi's for 1·≦·m−1, and |r| is the number of distinct keywords in record r. Since the similar prefixes of wm could have ancestor-descendant relationships, we can optimize the step of accessing them by only considering those “highest” ones.
The time complexity of the second method is
• Σprefix p of wm|Tp|+|r|*|Q|.
The first term corresponds to the time of traversing the subtries of similar prefixes, where Tp is the size of the subtrie at a similar prefix p. The second term corresponds to the time of probing the leaf nodes, where |Q| is the number of query keywords. Notice that to identify the answers, we need access the inverted lists of complete words, thus the first term can be removed from the complexity. Method 1 is preferred for data sets where records have a lot of keywords such as long documents, while method 2 is preferred for data sets where records have a small number of records such as relational tables with relatively short attribute values. For both methods, we can retrieve the weight of a keyword in the record and the similarity between the record keyword and the corresponding query keyword, which is precomputed on the trie. We use this weight and similarity to compute a score of this query keyword with respect to the record.
Supporting Sorted Access Efficiently
Consider how to do sorted access efficiently in algorithms for computing top-k answers. Existing top-k algorithms assume we are given multiple lists sorted based on various conditions. Each record has a score on a list, and we use an aggregation function to combine the scores of the record on different lists to compute its overall relevance to the query. The aggregation function needs to be monotonic, i.e., decreasing the score of a record on a list cannot increase the record's overall score. In our problem, for each data keyword similar to a keyword in the query, we have an inverted list sorted based on the weight of the keyword in a record. Thus we can adopt an existing top-k algorithm to solve our problem by accessing these sorted lists, assuming the ranking function is monotonic with respect to the weights on the lists. This approach has the advantage of allowing a general class of ranking functions. On the other hand, its performance could be low since the number of lists for a query could be large (in tens of thousands or even hundreds of thousands), especially since the last prefix keyword wm can have multiple similar prefixes, each of which can have multiple leaf nodes, and accessing these lists can be costly. For instance, there are about tens of thousands of inverted lists for each query on DBLP dataset. Next we consider how to improve the performance of sorted access on an important class of ranking functions.
Important Class of Ranking Functions
An interesting observation in our problem setting is that the inverted lists of a query form groups naturally. In particular, each complete keyword wi has several inverted lists corresponding to its similar data keywords in φ(wi), the last partial keyword wm has several similar prefixes in P(wm), each of which has multiple leaf nodes. These groups allow us to improve the sorted-access performance for a large class of ranking functions F with the following two properties.
Property 1: The score F(r, Q) of a record r to a query Q is a monotonic combination of scores of the query, keywords with respect to the record r.
Formally, we compute the score F(r, Q) in two steps. In the first step, for each keyword w in the query, we compute a score of the keyword with respect to the record rand the query, denoted by Score(r, w, Q). In the second step, we compute the score F(r, Q) by applying a monotonic function on the Score(r, w, Q)'s for all the keywords w in the query. The intuition of this property is that the more relevant an individual query keyword is to a record, the more likely this keyword is a good answer to this query. As we will see shortly, this property allows us to partition all the inverted lists into different groups based on query keywords. In our running example, we compute the score of a record to the query “icde li” by aggregating the scores of each of the query keywords “icde” and “li” with respect to the record.
The next question is how to compute the value Score(r, w, Q), especially since the keyword w can be similar to multiple keywords in the record r. Let d be a keyword in record r such that d is similar to the query keyword w, i.e., d ∈φ(w). We use Score(r, w, d) to denote the relevance of this query keyword w to the record keyword d. The value should depend on both the weight of w in r as well as the similarity between w and d, i.e., Sim(d, w). Intuitively, the more similar they are, the more relevant w is to d.
Property 2: The relevance value Score(r, w, Q) for a query keyword w in the record is the maximal value of the Score(r, w, d)'s for all the keywords d in the record. This property states that we only look at the most relevant keyword in a record to a query keyword when computing the relevance of the query keyword to the record. It means that the ranking function is “greedy” to find the most relevant keyword in the record as an indicator of how important this record is to the user query. In addition, as we can see shortly, this property allows us to do effective pruning when accessing the multiple lists of a query keyword. In several prototype systems we have implemented, we used ranking functions with these two properties, and the function have been proved to be very effective.
In the following, as an example, we use the following ranking function with the two properties to discuss techniques to support efficient sorted access.
F(r,Q)=i=1mScore(r,wi,Q)(1)
where
Score(r,wi,Q)=maxrecord keyword d in r{Score(r,wi,d)}, (2)
and
Score(r,wi,d)=Sim(d,wi)*Weight(d,r) (3)
In the last formula, Sim(d, wi) is the similarity between complete keyword wi and record keyword d, and Weight(d, r) is the weight of d in record r.
B. Accessing Lists in Groups
The first property allows us to partition the inverted lists into several groups based on their corresponding query keywords. Each query keyword w has a group of inverted lists, which can produce a list of record IDs sorted on their scores with respect to this keyword. On top of these sorted lists of groups, we can adopt an existing top-k algorithm to find the k best records for the query.
For each group of inverted lists for the query keyword w, we need to support sorted access that retrieves the next most relevant record ID for w. Fully computing this sorted list L(w) using the keyword lists is expensive in terms of time and space. We can support sorted access on this list L(w) efficiently by building a max heap on the inverted lists. In particular, we maintain a cursor on each inverted list in the group. The heap consists of the record IDs pointed by the cursors so far, sorted on the scores of the similar keywords in these records. Notice that each inverted list is already sorted based on the weights of its keyword in the records, and all the records on this list share the same similarity between this keyword and the query keyword w. Thus this list is also sorted based on the scores of this keyword in these records. To retrieve the next best record from this group, we pop the top element 6 from the heap, increment the cursor of the list of the popped element by 1, and push the new element of this list to the heap. The second property allows us to ignore other lists that may produce this popped record, since their corresponding scores will no longer affect the score of this record with respect to the query keyword w. Since our method does not need to compute the entire list of L(w), we call L(w) the virtual sorted list of the query keyword w.
In our running example, FIG. 13 shows the two heaps for a query Q with two keywords “icde” and “li”. For illustration purposes, for each group of lists we also show the virtual merged list of records with their scores, and this list is only partially computed during the traversal of the underlying lists. Each record on a heap has an associated score of this keyword with respect to the query keyword, computed using Equation 3. For instance, for record 1 on the inverted list of the keyword “lui” similar to the query keyword “li”, we have Score(r1, li, Q)=(1j ed(lui, li)/|li|)*Weight(r1, lui)=4.
Suppose we want to compute the top-2 best answers based on the functions described above, by doing sorted access only. We first pop the top elements of the two max heaps, [r4, 9] and [r3, 9], and compute an upper bound on the overall score of an answer, i.e., 18. We increment the cursors of the lists that produce these top elements, push them to the heaps, and retrieve the next two top elements: [r5, 8] and [r5, 8]. The new score upper bound becomes 16. After retrieving the four elements of every max heap, we can compute the top-2 records together with their scores: [r5, 16] and [r4, 16].
C. Pruning Low-Score Lists
We can further improve the performance of sorted access on the virtual sorted list L(w) of the query keyword w using the idea of “on-demand heap construction,” i.e., we want to avoid constructing a heap for all the inverted lists of keywords similar to a query keyword. Let L1, : : : , Lt be these inverted lists with the similar keywords d1, : : : , dt, respectively. Each push/pop operation on the heap of these lists takes O(log(t)) time. If we can reduce the number of elements on the heap, we can reduce the cost of its push/pop operations. Intuitively, we sort these inverted lists based on the similarities of their keywords to w, i.e., Sim(d1, w), : : : , sim(dt, w). We first construct the max heap using the lists with the highest similarity values. Suppose Li is a list not included in the heap so far. We can derive an upper bound ui on the score of a record from Li (with respect to the query keyword w) using the largest weight on the list and the string similarity Sim(di, w). Let r be the top record on the heap, with a score Score(r, w, Q). If Score(r, w, Q)≧ui, then this list does not need to be included in the heap, since it cannot have a record with a higher score. Otherwise, this list needs to be included in the heap.
Based on this analysis, each time we pop a record from the heap and push a new record r to it, we compare the score of the new record with the upper bounds of those lists not included in the heap so far. For those lists with an upper bound smaller than this score, they need to be included in the heap from now on. Notice that this checking can be done very efficiently by storing the maximal value of these upper bounds, and ordering these lists based on their upper bounds.
We have two observations about this pruning method. (1) As a special case, if those keywords matching a query keyword exactly have the highest relevance scores, this method allows us to consider these records prior considering other records with mismatching keywords. Thus this method is consistent with the idea “exactly-matched records should come out first and fast.” (2) The pruning power can be even more significant if the query keyword w is the last prefix keyword wm, since many of its similar keywords share the same prefix p on the trie similar to wm. We can compute an upper bound of the record scores from these lists and store the bound on the trie node p. In this way, we can prune the lists more effectively by comparing the value Score(r, w, Q) with this upper bound stored on the trie, without needing to on-the-fly compute the upper bound.
In our running example of the query “icde li”, FIG. 13 illustrates how we can prune low-score lists and do on-demand heap constructions. The prefix “li” has several similar keywords. Among them, the two words “lin” and “liu” have the highest similarity value to the query keyword, mainly because they have a prefix matching the keyword exactly. We build a heap using these two lists. We compute the upper bounds of record scores for the lists “liu”, “icde”, and “icdt”, which are 3, 4.5, 3, respectively. These lists are never included in the heap since their upper bounds are always smaller than the scores of popped records before the traversal terminates. As “icde” and “icdt” share the same similar prefix “i” for “li”, they have the same similarity to “li”. We can store the upper bound of the weights in the inverted lists of “icde” and “icdt” on trie node “i”. We compute the upper bound score of “icde” and “icdt” using the stored weight, instead of retrieving leaf-descendants of “i” to compute the bound. The improvement is significant if there are multiple similar 7 words share a similar prefix. Similarly, for the query keyword “icde”, using the pruning method we only need to build a heap using a single list, which is the one corresponding to the keyword “icde” on the trie. The list of the keyword “icdt” is pruned and never included in the heap.
Improving Sorted Access By List Materialization
Consider how to further improve the performance of sorted access by precomputing and storing the unions of some of the inverted lists on the trie. Let v be a trie node, and ∪(v) be the union of the inverted lists of v's leaf nodes, sorted by their record weights. If a record appears more than once on these lists, we choose its maximal weight as its weight on list ∪(v). We call ∪(v) the union list of node v. For example, the union list of the trie node “li” in our running example has [r3, 9], [r5, 8], [r7, 8], [r4, 7], [r6, 5], [r9, 4], [r2, 3], and [r8, 1]. When using a max heap to retrieve records sorted by their scores with respect to a query keyword, this materialized union list could help us build a max heap with fewer lists and reduce the cost of push/pop operations on the heap. Therefore, this method allows us to utilize additional memory space to answer top-k queries more efficiently.
The benefit of list materialization on sorted access on a max heap is especially significant for the last prefix keyword wm, since it could have multiple similar prefixes on the trie, which correspond to many leaf-node keywords. Thus in the remainder of this section we mainly focus on the cost-benefit analysis for this prefix keyword wm. We first discuss how to use materialized union lists to speed up sorted access below, and then consider how to select trie nodes to materialize their union lists given a space constraint below.
Utilizing Materialized Union Lists Property
Suppose v is a trie node whose union list ∪(v) has been materialized. One subtlety in using ∪(v) to speed up sorted access for the prefix keyword wm is that ∪(v) is sorted based on its record weights, while the push/pop operations on the max heap require each list on the heap be sorted based on its record scores with respect to wm. Recall that the value Score(r, wm, di) of a record r on the list of a keyword di with respect to wm is based on both Weight(di, r) and Sim(di, wm). In order to use ∪(v) to replace the lists of v's leaf nodes in the max heap, the following two conditions need to be satisfied:
All the leaf nodes of v have the same similarity to wm.
All the leaf nodes of v are similar to wm, i.e., their similarity to wm is no less than the threshold τ.
When the conditions are satisfied, the sorting order of the union list ∪(v) is also the order of the scores of the records on the leaf-node lists with respect to wm. In this case, we call the union list ∪(v) “usable” for the max heap of the query keyword wm. For instance, consider the index in FIG. 9 and a query “music icd”. For the query keyword “icd”, we access its similar data keywords “icde” and “icdt”, and build a max heap on their inverted lists based on record scores with respect to this query keyword. For the trie node “icd”, both of its leaf nodes are similar to the query keyword (with the same similarity), its union list is usable for the max heap of this keyword. That is, we can use its materialized union list in the max heap of these two inverted lists, saving the time to traverse these two lists.
Materializing ∪(v) has the following performance benefits for the max heap for wm. (1) We do not need to traverse the trie to access these leaf nodes and use them to construct the max heap, (2) Each push/pop operation on the heap is more efficient since its has fewer lists. Notice that if v has an ancestor v′ with a materialized union list ∪(v) also usable for the max heap, then we can use the materialized list ∪(v′) instead of ∪(v), and the list ∪(v) will no longer benefit the performance of this max heap.
B. Choosing Union Lists to Materialize
Let B be a budget of storage space we are given to materialize union lists. Our goal is to select trie nodes to materialize their union lists to maximize the performance of queries. The following are several naive algorithms for choosing trie nodes:
Random: We randomly select trie nodes.
TopDown: We select nodes top down from the trie root.
BottomUp: We select nodes bottom up from the leaf nodes.
Each naive approach keeps choosing trie nodes to materialize their union lists until the sum of their list sizes reaches the space limit B. One main limitation of these approaches is that they do not quantitatively consider the benefits of materializing a union list. To overcome this limitation, we propose a cost based method called CostBased to do list materialization. Its main idea is the following.
For simplicity we say a node has been “materialized” if its union list has been materialized. For a query Q with a prefix keyword wm, suppose some of the trie nodes have their union lists materialized. Let v be the highest trie node that is usable for the max heap of wm, and ∪(v) has not been materialized. Materializing the union list of an ancestor of v does not have any benefit on this max heap, since the score order of their records might not be consistent with the weight order or these records. For each non-leaf trie descendant c of v, such that no node on the path from v to c (including c) has been materialized. We can do a cost-based analysis to quantify the benefit of materializing ∪(c) on the performance of operations on the max heap of wm. If v has a descendant node c′ that has a materialized list, then materializing the descendants of c′ will no longer benefit the max heap for wm.
Here we present an analysis of the benefits of materializing the usable node v. In general, for a trie node n, let T(n) denote its subtrie, |T(n)| denote the number of nodes in T(v), and D(n) denote the set of n's leaf nodes. The total time of traversing this subtrie and accessing the inverted lists of its leaf nodes is O(|T(v)|+|D(v)|).
As illustrated in FIG. 14, suppose v has materialized descendants. Let M(v) be the set of highest materialized descendants of v. These materialized nodes can help reduce the time of accessing the inverted lists of v's leaf nodes in two ways. First, we do not need to traverse the descendants of a materialized node d ∈M(v). We can just traverse |T(v)|−Σd ∈M(v)|T(d)| trie nodes. Second, when inserting lists to the max heap of wm, we insert the union list of each d ∈M(v) and the inverted lists of d′∈N(v) into the heap, where N(v) denotes the set of v's leaf descendants having no ancestors in M(v). Let S(v)=M(v)∪N(v). Now we quantify the benefits of materializing the node v:
Reducing traversal time: Since we do not need to traverse v's descendants, the time reduction is B1=O(|T(v)|−Σd ∈M(v)|T(d)|
Reducing heap-construction time: When constructing the max heap for the query keyword wm, we insert the union list ∪(v) into the heap, instead of the inverted lists of those nodes in S(v). The time reduction is B2=|S(v)|−1.
Reducing sorted-access time: If we insert the union list ∪(v) to the max heap of wm, the number of leaf nodes in the heap is Swma ∈P(wm)Sa. Otherwise, it is Swm+|S(v)|−1. The time reduction of a sorted access is B3=O(log(Swm+|S(v)|−1))−O(log(Swm))
The following is the overall benefit of materializing v for the query keyword query wm:
Bv=B1+B2+Av*B3, (4)
where Av is the number of sorted accesses on ∪(v). The analysis above is on a single query. Suppose we are given a query workload. For each trie node, we can quantify the benefits of materializing its union list with respect to the workload by taking the summation of the benefits of materializing its union list to all the queries in the workload, based on Equation 4. In addition, the memory cost of materializing v is the number of records in the union list of v. We choose the best union list to materialize based on their benefits and costs (sizes). Notice that materializing this list will affect the benefits of materializing other union lists on the query workload. So after deciding a node to materialize, we may need to recompute the benefits of other affected nodes. We repeat this process until we reach the given space budget B. If there is no query workload, we can use the trie structure to count the probability of each trie node to be queried and use such information to compute the benefit of materializing a node.
In summary we have disclosed how to efficiently answer top-k queries in fuzzy type-ahead search. We focused on an index structure with a trie of keywords in a data set and inverted lists of records on the trie leaf nodes. We studied two technical challenges when adopting existing top-k algorithms in the literature: how to efficiently support random access and sorted access on inverted lists? We presented two efficient algorithms for supporting random access. For sorted access, we first identified an important class of ranking functions that allow us to improve its performance by grouping the lists. We then proposed two techniques to support efficient sorted access using list pruning and materialization. We conducted an experimental consider on real data sets to show that our techniques can answer queries on large data sets efficiently.
In summary, the illustrated embodiment of the invention is a method for efficient, interactive, and fuzzy search on text data comprising the steps of: using tree-based indexes to efficiently index the data; using an algorithm to traverse the tree as the user types in a query; adaptively maintaining the intermediate results for the query to support interactive, fuzzy search; and sorting the candidate results using ranking algorithms.
The method further comprises performing query relaxation, autocompletion, or spell checking.
Where there is a single input search box, the illustrated embodiment of the invention the allows keyword queries on a plurality of data fields wherein the search is interactive, allows minor errors in the keywords, supports synonyms, and/or allows multiple keywords.
The illustrated embodiment of the invention is a method used in an information system to allow users to interactively search for text data even with minor errors.
The illustrated embodiment of the invention is a method for searching a structured data table T with m attributes and n records, where A={a1; a2; : : : ; am} denotes an attribute set, R={r1; r2; : : : ; rn} denotes the record set, and W={w1;w2; : : : ;wp} denotes a distinct word set in T, where given two words, wi and wj, “wi≦wj” denotes that wi is a prefix string of wj, where a query consists of a set of prefixes Q={p1, p2, . . . , pi}, where a predicted-word set is Wkl={w|w is a member of W and kl≦w}, the method comprising for each prefix pi finding the set of prefixes from the data set that are similar to pi, by: determining the predicted-record set RQ={r|r is a member of R, for every i; 1≦i≦·l−1, pi appears in r, and there exists a w included in Wkl, w appears in r}; and for a keystroke that invokes query Q, returning the top-t records in RQ for a given value t, ranked by their relevancy to the query.
Where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query the illustrated embodiment of the invention comprises finding a trie node corresponding to a keyword in a trie with inverted lists on leaf nodes by traversing the trie from the root; locating leaf descendants of the trie node corresponding to the keyword, and retrieving the corresponding predicted words and the predicted records on inverted lists.
Where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query the illustrated embodiment of the invention comprises maintaining a session cache for each user where each session cache keeps keywords that the user has input in the past and other information for each keyword, including its corresponding trie node and the top-t predicted records.
Where maintaining a session cache for each user the illustrated embodiment of the invention comprises inputting in a query string c1c2 : : : cx letter by letter, where pi=c1c2 : : : ci is a prefix query (1≦i≦·x) and where ni is the trie node corresponding to p, and after inputting in the prefix query pi, storing node ni for pi and its top-t predicted records, inputting a new character cx+1 at the end of the previous query string c1c2 : : : cx, determining whether node nx that has been kept for px has a child with a label of cx+1· if so, locating leaf descendants of node nx+1, and retrieving corresponding predicted words and the predicted records, otherwise, there is no word that has a prefix of px+1, and then returning an empty answer.
The illustrated embodiment of the invention is a method further comprising modifying a previous query string arbitrarily, or copying and pasting a completely different string for a new query string, among all the keywords input by the user, identifying the cached keyword that has the longest prefix with the new query.
The illustrated embodiment of the invention is a method further where prefix queries p1; p2; : : : ; px have been cached, further comprising inputting a new query p′=c1c2 : : : cic′ : : : cy, finding pi that has a longest prefix with p′, using node ni of pi to incrementally answer the new query p′ by inserting the characters after the longest prefix of the new query c′ : : : cy one by one, if there exists a cached keyword pi=p′, using the cached top-t records of pi to directly answer the query p′; otherwise if there is no such cached keyword, answering the query without-use of any cache.
The step of returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises tokenizing a query string into several keywords, k1; k2; : : : ; ki; for each keyword ki (1≦i≦l−1) determining only a predicted word, ki, and one predicted-record list of a trie node corresponding to ki, denoted as li, where q predicted words for ki, and their corresponding predicted record lists are ll1; ll2; : : : ; llq, and determining the predicted records by ∩i=1l−1li∩(∪j=11llj) namely taking the union of the lists of predicted keywords for partial words, and intersecting the union of lists of predicted keywords for partial words with lists of the complete keywords.
The step of determining the predicted records by ∩i=1l−1li∩(∪j=11llj) comprises determining the union ll=∩j=1qllj of the predicted-record lists of the partial keyword ki to generate an ordered predicted list by using a sort-merge algorithm and then determining the intersection of several lists ∩i=1l ll if by using a merge-join algorithm to intersect the lists, assuming these lists are pre-sorted or determining whether each record on the short lists appears in other long lists by doing a binary search or a hash-based lookup.
The step of returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises treating every keyword as a partial keyword, namely given an input query Q={k1; k2; : : : ; ki}, for each predicted record r, for each 1≦i≦·l, there exists at least one predicted word wi for ki in r, since ki must be a prefix of wi, quantifying their similarity as:
sim(ki;wi)=|ki|/|wi|
if there are multiple predicted words in r for a partial keyword ki, selecting the predicted word wi with the maximal similarity to ki and quantifying a weight of a predicted word to capture the importance of a predicted word, and taking into account the number of attributes that the l predicted words appear in, denoted as na, to combine similarity, weight and number of attributes to generate a ranking function to score r for the query Q as follows:
SCORE(r,Q)=α*l=1lidfwi*sim(ki,wi)+(1-α)*1na,
where α is a tuning parameter between 0 and 1.
The method is for searching a structured data table T with the query Q={k1; k2; : : : ; ki}, where an edit distance between two strings s1 and s2, denoted by ed(s1, s2), is the minimum number of edit operations of single characters needed to transform the first string s1 to the second string s2, and an edit-distance threshold δ, for 1≦i≦l, where a predicted-word set Wkf for ki is {w|∃w′≦w, w ∈W, ed(ki,w′)≦δ, where a predicted-record set RQ is {r|r ∈R, ∀ 1<l, ∃ wi ∈Wi, wi appears in r}. The illustrated embodiment of the invention comprises determining the top-t predicted records in RQ ranked by their relevancy to Q with the edit-distance threshold δ.
The step of determining the predicted-record set RQ comprises determining possible multiple words which have a prefix similar to a partial keyword, including multiple trie nodes corresponding to these words defined as the active nodes for the keyword k, locating leaf descendants of the active nodes, and determining the predicted records corresponding to these leaf nodes.
The illustrated embodiment of the invention further comprises: inputting a keyword k, storing a set of active nodes φk={[n, ξn]}, where n is an active node for k, and ξn=ed(k; n)≦δ, inputting one more letter after k, and finding only the descendants of the active nodes of k as active nodes of the new query which comprises initializing an active-node set for an empty keyword ε, i.e., φε={[n; ξn]|ξn=|n|≦δ}, namely including all trie nodes n whose corresponding string has a length |n| within the edit-distance threshold δ, inputting a query string c1c2 : : : cx letter by letter as follows: after inputting in a prefix query pi=c1c2 : : : ci (i≦x), storing an active-node set φp for pi, when inputting a new character cx+1 and submitting a new query px+1, incrementally determining the active-node set 0px+1 for px+1 by using 0px as follows: for each [n; ξn] in 0px, we consider whether the descendants of n are active nodes for px+1, for the node n, if ξn+1<δ, then n is an active node for px+1, then storing [n; ξn+1] into 0px+1 for each child nc of node n, (1) the child node n, has a character different from cx+1, ed(ns; px+1)≦ed(n; px)+1=ξn+1, if ξn+1≦·δ, then ns is an active node for the new string, then storing [ns; ξn+1] into 0px+1, or (2) the child node nc has a label cx+1 is denoted as a matching node nm, ed(nm; px+1)≦·ed(n; px)=ξn≦δ, so that nm is an active node of the new string, then storing [nm; ξn] into 0px+1, but if the distance for the node nm is smaller than δ, i.e., ξn<δ, then for each nm's descendant d that is at most δ−ξn letters away from nm, adding [d; ξd] to the active-node set for the new string px+1, where ξdn+|d|−|nm|.
Where during storing set 0px+1, it is possible to add two new pairs [v; ξ1] and [v; ξ2] for the same trie node v in which case storing the one of the new pairs [v; ξ1] and [v; ξ2] for the same trie node v with the smaller edit distance.
Where given two words wi and wj, their normalized edit distance is:
ned(wi;wj)=ed(wi;wj)/max(|wi|;|wj|);
where |wi| denotes the length of wi, where given an input keyword and one of its predicted words, the prefix of the predicted word with the minimum ned is defined as a best predicted prefix, and the corresponding normalized edit distance is defined as the “minimal normalized edit distance,” denoted as “mned” and where returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises determining if ki is a complete keyword, then using ned to quantify the similarity; otherwise, if ki is a partial keyword, then using mned to quantify their similarity, namely quantifying similarity of two words using: sim(ki; wi)=γ*(1−ned(ki; wi))+(1−γ)*(1−mned(ki; wi)); where γ is a tuning parameter between 0 and 1.
The step of returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises among the union lists ∪1, ∪2, . . . , ∪t, of the leaf nodes of each prefix node identifying the shortest union list, verifying each record ID on the shortest list by checking if it exists on all the other union lists by maintaining a forward list maintained for each record r, which is a sorted list of IDs of keywords in r, denoted as F so that each prefix pi has a range of keyword IDs [MinIdi, MaxIdi], verifying whether r appears on a union list ∪k of a query prefix pk for a record r on the shortest union list by testing if pk appears in the forward list Fr as a prefix by performing a binary search for MinIdk on the forward list Fr to get a lower bound Idlb, and check if Idlb is no larger than MaxIdk, where the probing succeeds if the condition holds, and fails otherwise.
Where each query keyword has multiple active nodes of similar prefixes, instead of determining the union of the leaf nodes of one prefix node, the illustrated embodiment of the invention comprises determining the unions of the leaf nodes for all active nodes of a prefix keyword, estimating the lengths of these union lists to find a shortest one, for each record r on the shortest union list, for each of the other query keywords, for each of its active nodes, testing if the corresponding similar prefix appears in the record r as a prefix using the forward list of r, Fr.
The step of maintaining a session cache for each user comprises: caching query results and using them to answer subsequent queries; increasing the edit-distance threshold δ as a query string is getting longer in successive queries; using pagination to show query results in different pages to partially traverse the shortest list, until enough results have been obtained for a first page, continuing traversing the shortest list to determine more query results and caching them; or retrieving the top-k records according a ranking function, for a predefined constant k, verifying each record accessed in the traversal by probing the keyword range using the forward list of the record, caching records that pass verification, then when answering a query incrementally, first verifying each record in the cached result of the previous increment of the query by probing the keyword range, if the results from the cache are insufficient to compute the new top-k, resuming the traversal on the list starting from the stopping point of the previous query, until we have enough top-k results for the new query.
The illustrated embodiment of the invention is a method where for a keystroke that invokes query Q, the step of returning the top-t records in RQ for a given value t, ranked by their relevancy to the query comprises matching prefixes which includes the similarity between a query keyword and its best matching prefix; predicted keywords where different predicted keywords for the same prefix can have different weights, and record weights where different records have different weights, where a query is Q={p1, p2, . . . }, where p′i is the best matching prefix for pi, and where ki is the best predicted keyword for p′i, where sim(pi, p′i) is an edit similarity between p′i and pi and where the score of a record r for Q can be defined as:
Score(r,Q)=Σi[sim(pi,p′i)+α*|p′i|−|ki|)+β*score(r,ki)],
where α and β are weights (0<β<α<1), and score(r, ki) is a score of record r for keyword ki.
The illustrated embodiment of the invention is a method for fuzzy type ahead search where R is a collection of records such as the tuples in a relational table, where D is a data set of words in R, where a user inputs a keyword query letter by letter, comprising: finding on-the-fly records with keywords similar to the query keywords by using edit distance to measure the similarity between strings, where the edit distance between two strings s1 and s2, denoted by ed(s1, s2), is the minimum number of single-character edit operations, where Q is the keyword query the user has input which is a sequence of keywords [w1, w2, : : : , wm]; treating the last keyword wm as a partial keyword finding the keywords in the data set that are similar to query keywords, where π is a function that quantifies the similarity between a string s and a query keyword w in D, including, but not limited to:
π(s,w)=1−ed(s,w)/|w|,
where |w| is the length of the keyword w; and normalizing the edit distance based on the query-keyword length in order to allow more errors for longer query keywords, where d be a keyword in D, for each complete keyword wi (i=1, : : : , m−1), defining the similarity of d to wi as:
Sim(d,wi)=π(d,wi),
since the last keyword wm is treated as a prefix condition, defining the similarity of d to wm as the maximal similarity of d's prefixes using function π, namely Sim(d, wm)=max prefix p of d π(p, wm), where τ is a similarity threshold, where a keyword in D is similar to a query keyword w if Sim(d, w)≧τ, where a prefix p of a keyword in D is similar to the query keyword wm if π(p, wm)≧τ, where φ(wi) (i=1, : : : , m) denotes the set of keywords in D similar to wi, and where P(wm) denotes the set of prefixes of keywords in D similar to wm.
The illustrated embodiment of the invention further comprises: ranking each record r in R based on its relevance to the query, where F(;) is a ranking function that takes the query Q and a record r ∈R; determining a score F(r, Q) as the relevance of the record r to the query Q, and given a positive integer k, determining the k best records in R ranked by their relevance to Q based on the score F(r, Q).
The step of determining a score F(r, Q) as the relevance of the record r to the query Q comprises determining the relevance score F(r, Q) based on the similarities of the keywords in r and those keywords in the query given that a keyword d in the record could have different prefixes with different similarities to the partial keyword wm, by taking their maximal value as the overall similarity between d and wm, where a keyword in record r has a weight with respect to r, such as the term frequency TF and inverse document frequency IDF of the keyword in the record.
Where the dataset D comprises a trie for the data keywords in D, where each trie node has a character label, where each keyword in D corresponds to a unique path from the root to a leaf node on the trie, where a leaf node has an inverted list of pairs [rid, weight]i, where id is the ID of a record containing the leaf-node string, and weight is the weight of the keyword in the record, the illustrated embodiment of the invention further comprises determining the top-k answers to the query Q in two steps comprising: for each keyword wi in the query, determining the similar keywords φ(wi) and similar prefixes P(wm) on the trie; and accessing the inverted lists of these similar data keywords to determine the k best answers to the query.
The step of accessing the inverted lists of these similar data keywords to determine the k best answers to the query comprises randomly accessing the inverted list, in each random access, given an ID of a record r, retrieving information related to the keywords in the query Q, to determine the score F(r, Q) using a forward index in which each record has a forward list of the IDs of its keywords and their corresponding weights, where each keyword has a unique ID corresponding its leaf node on the trie, and the IDs of the keywords follow their alphabetical order.
The step of randomly accessing the inverted list comprises: maintaining for each trie node n, a keyword range [ln, un], where ln and un are the minimal and maximal keyword IDs of its leaf nodes, respectively; verifying whether record r contains a keyword with a prefix similar to wm, where for a prefix p on the trie similar to wm checking if there is a keyword ID on the forward list of r in the keyword range [lp, up] of the trie node of p, since the forward list of r sorted, this checking is performed a binary search using the lower bound lp on the forward list of r to get the smallest ID° no less than lp, the record having a keyword similar to wm if γ exists and is no greater than the upper bound up, i.e., γ≦·up.
The step of randomly accessing the inverted list comprises: for each prefix p similar to wm, traversing the subtrie of p and identifying its leaf nodes; for each leaf node d, for the query Q, this keyword d has a prefix similar to wm in the query, storing
• [Query ID, partial keyword wm, sim(p, wm)].
in order to differentiate the query from other queries in case multiple queries are answered concurrently; storing the similarity between wm and p; determining the score of this keyword in a candidate record, where in the case of the leaf node having several prefixes similar to wm, storing their maximal similarity to wm; for each keyword wi in the query, storing the same information for those trie nodes similar to wi, defining stored entries for the leaf node as its collection of relevant query keywords; using collection of relevant query keywords to efficiently check if a record r contains a complete word with a prefix similar to the partial keyword wm by scanning the forward list of r, for each of its keyword IDs, locating the corresponding leaf node on the trie, and testing whether its collection of relevant query keywords includes this query and the keyword wm, and if so, using the stored string similarity to determine the score of this keyword in the query.
The step of determining the score F(r, Q) comprises: for each keyword w in the query, determining a score of the keyword with respect to the record rand the query, denoted by Score(r, w, Q); and determining the score F(r, Q) by applying a monotonic function on the Score(r, w, Q)'s for all the keywords w in the query.
Where d is a keyword in record r such that d is similar to the query keyword w, d ∈φ(w), where Score(r, w, d) denotes the relevance of this query keyword w to the record keyword d, where the relevance value Score(r, w, Q) for a query keyword w in the record is the maximal value of the Score(r, w, d)'s for all the keywords d in the record, where determining a score of the keyword with respect to the record rand the query, denoted by Score(r, w, Q), the illustrated embodiment of the invention comprises finding the most relevant keyword in a record to a query keyword when computing the relevance of the query keyword to the record as an indicator of how important this record is to the user query.
The illustrated embodiment of the invention includes where F(r, Q) is
F(r,Q)=i=1mScore(r,wi,Q)
where
Score(r,wi,Q)=maxrecord keyword d in r{Score(r,wi,d)}, (2)
and
Score(r,wi,d)=Sim(d,wi)*Weight(d,r) (3)
where Sim(d, wi) is the similarity between complete keyword wi and record keyword d, and Weight(d, r) is the weight of d in record r.
The method of claim 31 comprising partitioning the inverted lists into several groups based on their corresponding query keywords, where each query keyword w has a group of inverted lists, producing a list of record IDs sorted on their scores with respect to this keyword, and using a took algorithm to find the k best records for the query.
The step of using a top-k algorithm to find the k best records for the query comprises for each group of inverted lists for the query keyword w, retrieving the next most relevant record ID for w by building a max heap on the inverted lists comprising maintaining a cursor on each inverted list in the group, where the heap is comprised of the record IDs pointed by the cursors so far, sorted on the scores of the similar keywords in these records since each inverted list is already sorted based on the weights of its keyword in the records and all the records on this list share the same similarity between this keyword and the query keyword w the list is also sorted based on the scores of this keyword in these records, retrieving the next best record from this group by popping the top element from the heap, incrementing the cursor of the list of the popped element by one, and pushing the new element of this list to the heap, ignoring other lists that may produce this popped record, since their corresponding scores will no longer affect the score of this record with respect to the query keyword w.
Where L1, : : : , Lt are inverted lists with the similar keywords d1, : : : , dt, respectively, the illustrated embodiment of the invention further comprises sorting these inverted lists based on the similarities of their keywords to w, Sim(d1, w), : : : , sim(dt, w), constructing the max heap using the lists with the highest similarity values.
The illustrated embodiment of the invention further comprises improving sorted access by precomputing and storing the unions of some of the inverted lists on the trie, where v is a trie node, and ∪(v) is the union of the inverted lists of v's leaf nodes, sorted by their record weights, and if a record appears more than once on these lists, selecting its maximal weight as its weight on list ∪(v), where ∪(v) is defined as the union list of node v.
Where v is a trie node the illustrated embodiment of the invention comprises materializing union list ∪(v), where in using ∪(v) to speed up sorted access for the prefix keyword wm is that ∪(v) is sorted based on its record weights, where the value Score(r, wm, di) of a record r on the list of a keyword di with respect to wm is based on both Weight(di, r) and Sim(di, wm), where all the leaf nodes of v have the same similarity to wm, where all the leaf nodes of v are similar to wm, namely their similarity to wm is no less than the threshold τ so that the sorting order of the union list ∪(v) is also the order of the scores of the records on the leaf-node lists with respect to wm.
Where B is a budget of storage space available to materialize union lists comprising selecting trie nodes to materialize their union lists to maximize the performance of queries, where a node is defined as “materialized” if its union list has been materialized, where for a query Q with a prefix keyword wm, some of the trie nodes have their union lists materialized, where v is the highest trie node that is usable for the max heap of wm, and for which ∪(v) has not been materialized, where for each nonleaf trie descendant c of v, such that no node on the path from v to c (including c) has been materialized, the illustrated embodiment of the invention comprises: performing a cost-based analysis to quantify the benefit of materializing ∪(c) on the performance of operations on the max heap of wm based on reduction of traversal time, reduction of heap-construction time and reduction of sorted-access time, the overall benefit Bv of materializing v for the query keyword query wm being:
Bv=Breduction of traversal time+Breduction of heap-construction time+Av*Breduction of sorted-access time,
where Av is the number of sorted accesses on ∪(v) for each query, then summing the benefits of materializing its union list to all the queries in the query workload or trie according to probability of occurrence of the query, and recomputing Bv the benefit Bv of materializing other affected nodes after the benefit of each node is computed until the given budget B of storage space is realized.
While the apparatus and method has or will be described for the sake of grammatical fluidity with functional explanations, it is to be expressly understood that the claims, unless expressly formulated under 35 USC 112, are not to be construed as necessarily limited in any way by the construction of “means” or “steps” limitations, but are to be accorded the full scope of the meaning and equivalents of the definition provided by the claims under the judicial doctrine of equivalents, and in the case where the claims are expressly formulated under 35 USC 112 are to be accorded full statutory equivalents under 35 USC 112. The invention can be better visualized by turning now to the following drawings wherein like elements are referenced by like numerals.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1 and 2 are screenshots produced by the method of the illustrated embodiment.
FIG. 3 is the trie on top of the words in Table I.
FIGS. 4a-4e (also referenced collectively as FIG. 4) is a sequence of trie diagrams showing the execution of fuzzy search for processing prefix queries of “nlis” (edit distance threshold δ=2).
FIG. 5 is a diagram depicting the computation of active node set 0px+1 from the active node set 0px. We consider an active [n, ξn] in 0px.
FIG. 6 is a trie with inverted lists at the leaf nodes.
FIG. 7 is a diagram of prefix intersections using forward lists. Numbers with underlines are keyword IDs and numbers without underscores are record IDs.
FIG. 8 is a diagram which depicts computing the top k results using cached answers and resuming unfinished traversal on a list.
FIG. 9 is an index structure, namely a trie with inverted lists on the leaf nodes.
FIG. 10 is a diagram illustrating keywords similar to those in query Q=[w1, w2, . . . wm]. Each query keyword wi has similar keywords on the leaf nodes. The last prefix keyword wm has similar prefixes, each of which has several keywords on the leaf nodes.
FIG. 11 is a diagram illustrating the probing of forward lists.
FIG. 12 is a diagram illustrating probing on leaf nodes.
FIG. 13 is a diagram illustrating Max Heaps for the query keywords, “icde” and :“li”. Each shaded list is merged from the underlying lists. It is “virtual” since we do not need to compute the entire list. The lists in a rectangle are those used to build a heap, which can prune other low score lists not in the rectangle.
FIG. 14 is a diagram in which the benefits of materializing the union list ∪(v) for a trie node with respect to the last query wm in the query.
The invention and its various embodiments can now be better understood by turning to the following detailed description of the preferred embodiments which are presented as illustrated examples of the invention defined in the claims. It is expressly understood that the invention as defined by the claims may be broader than the illustrated embodiments described below.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Introduction—iSearch
The illustrated embodiment of the invention is a computing paradigm, called “iSearch”, that supports interactive, fuzzy search on structured data. It has two unique features: (1) Interactive: The system searches for the best answers “on the fly” as the user types in a keyword query; (2) Fuzzy: When searching for relevant records, the system also tries to find those records that include words that are similar to the keywords in the query, even if they do not match exactly.
We have developed several prototypes using this new paradigm. The first one, available at ≦http://psearch.ics.uci.edu≧, is a system that can support search on the UCI directory. A screenshot is shown in FIG. 1. The second one, available at ≦http://dbip.ics.uci.edu≧ supports search on a DBLP dataset ≦(http//www.informatik.uni-trier.de/>>ley/db/)≧ with more than 1 million publication records. A screenshot is shown in FIG. 2. As an example, in FIG. 1, the user has typed in a query string “professor smyt”. Even though the user has not typed in the second keyword completely, the system can already find person records that might be of interests to the user. Notice that the two keywords in the query string (including a partial keyword “smyt”) can appear in different attributes of the found records. In particular, in the first record, the keyword “professor” appears in the “title” attribute, and the partial keyword “smyt” appears in the “name” attribute. The matched prefixes are highlighted for the user. This feature makes this paradigm different from many autocomplete systems, which mainly deal with single-attribute data (such as a set of popular queries) and treat a query as a single-attribute string. In addition to finding records with words that match the query keywords as prefixes, the system also finds records with words that are similar to the query keywords, such as a person name “smith”.
The feature of supporting fuzzy search is especially important when the user have limited knowledge about the underlying data or the entities they are looking for. As the user types in more letters, the system interactively searches on the data, and updates the list of relevant records. The system also utilizes a-priori knowledge such as synonyms. For instance, given the fact that “william” and “bill” are synonyms, the system can find a person “William Kropp” when the user has typed in “bill crop”. This search prototype has been used regularly by many people at UCI, and received positive feedback due to the friendly user interface and high efficiency.
We develop solutions to these problems. We present several incremental-search algorithms for answering a query by using cached results of earlier queries. In this way, the computation of the answers to a query can spread across multiple keystrokes of the user, thus we can achieve a high speed.
Specifically, we make the following contributions. (1) We first consider the case of queries with a single key-word, and present an incremental algorithm for computing keyword prefixes similar to a prefix keyword in a query. (2) For queries with multiple keywords, we study various ways for computing the intersection of the inverted lists of query keywords, and develop an algorithm for computing the results efficiently. Its main idea is to use forward lists of keyword IDs for checking whether a record matches query keyword conditions (even approximately). (3) We develop an on-demand caching technique for incremental search. Its idea is to cache only part of the results of a query. For subsequent queries, unfinished computation will be resumed if the previously cached results are not sufficient. In this way, we can efficiently compute and cache a small amount of results. (4) We consider various features in this paradigm, such as how to rank results properly, how to highlight keywords in the results, and how to utilize domain-specific information such as synonyms to improve search. (5) In addition to deploying several real prototypes, we conducted a thorough experimental evaluation of the developed techniques on real data sets, and show the practicality of this new computing paradigm. All experiments were done using a single desktop machine, which can still achieve response times of milliseconds on millions of records.
II. iSearch of Multiple Keywords on Structured Data
We first give an example to show how iSearch works for queries with multiple keywords on structured data. Assume there is a structured table person(id; name; title; email) with information about people. The data resides on a server. A user accesses and searches the data through a Web browser. Each keystroke that the user types invokes a query, which includes the current string the user has typed in. The browser sends the query to the server, which computes and returns to the user the best answers ranked by their relevancy to the query. If the user clicks the “search” button on the interface, the browser sends the query to the server, which answers the query in the same way as traditional keyword-search methods. For example, suppose a user searches for a professor called “smyth”, and types in a query “professor smyt” letter by letter, as illustrated in FIG. 1. The string is tokenized to keywords using delimiters such as the space character. The keywords (except the last one) such as “professor”, have been typed in completely. The last keyword is a partial keyword, such as “smyt”, as the user may have not finished typing the complete keyword.
For the partial keyword, we would like to know the possible word the user intends to type. However, given the limited information, we can only identify a set of words in the data set with this partial keyword as a prefix. This set of keywords are called the predicted words of the partial keyword. For instance, for the partial keyword “smyt”, its predicted words are “smyth”, “smytha”, etc. We retrieve the records that contain all the complete keywords and at least one of the predicted words of the partial keyword. We call these records the predicted records of the query. FIG. 1 shows four predicted records. In this way, iSearch can save users time and efforts, since they can find the answers even if they have not finished typing all the complete keywords. There are two challenges in this context. The first one is how to interactively and incrementally identify the predicted words after each keystroke from the user. The second one is how to efficiently compute the predicted records of a query with multiple keywords, especially when there are many such records.
Problem Formulation
We formalize the problem of interactive, fuzzy search on a relational table, and our method can be adapted to textual documents, XML, and databases with multiple tables. We first formalize the iSearch problem as follows. Consider a structured data table T with m attributes and n records. Let A={a1; a2; : : : ; am} denote the attribute set, R={r1; r2; : : : ; rn} denote the record set, and W={w1;w2; : : : ;wp} denote the distinct word set in T. Given two words, wi and wj, “wi≦wj” denotes that wi is a prefix string of wj. For example, consider the publication table shown in Table I. It has 10 records and 5 attributes. We have “lu”≦“luis”.
A query consists of a set of prefixes Q={p1, p2, . . . , pi}. For each prefix pi, we want to find the set of prefixes from the data set that are similar to pi. Clearly the technique can be used to answer queries when only the last keyword is treated as a partial prefix, and the other are treated as completed words. In this work we use edit distance to measure the similarity between two strings. The edit distance between two strings s1 and s2, denoted by ed(s1, s2), is the minimum number of edit operations (i.e., insertion, deletion, and substitution) of single characters needed to transform the first one to the second. For example, ed(smith, smyth)=1.
Definition 1: (The iSearch Problem) Given a table T and a query Q={k1; k2; : : : ; ki} where ki is the last partial keyword, let predicted-word set be Wki=(w|w is a member of Wand ki≦w). We want to compute the predicted-record set RQ={r|r is a member of R, for every i; 1≦i≦·1, ki appears in r, and there exists a w included in Wkl, w appears in r}. For the keystroke that invokes query Q, we return the top-t records in RQ for a given value t, ranked by their relevancy to the query.
We treat the data and query string as lower-case strings. We will focus on how to find the predicted records, among which we can find the best answers using a ranking function. A variety of ranking functions can be used, as discussed below. We use a trie to index the words in the table. Each word w in the table corresponds to a unique path from the root of the trie to a leaf node. Each node on the path has a label of a character in w. For simplicity, we use a node and its corresponding string interchangeably. For each leaf node, we store an inverted list of IDs of records that contain the word of the leaf node. For instance, for the publication table in Table I, its trie for the tokenized words is shown in FIG. 3. The word “luis” has a node ID of 16, and its inverted list includes record 7, which contains the word.
Alternatively, problem formulation can begin with a definition for an Interactive Fuzzy Search.
Definition 1 (Interactive Fuzzy Search). Given a set of records R, let W be the set of words in R. Consider a query Q={p1, p2, . . . , pt} and an edit-distance threshold δ. For each pi, let Pi be {p′i| ∃w ∈W, p′i≦w and ed(p′i, pi)≦δ}. Let the set of candidate records RQ be {r|r ∈R, ∀ 1≦i≦l, ∃ p′i ∈Pi and wi appears in r, p′i≦wi}. The problem is to compute the best records in RQ ranked by their relevancy to Q. These records are computed incrementally as the user modifies the query, e.g., by typing in more letters.
Indexing:
We use a trie to index the words in the relational table. Each word w in the table corresponds to a unique path from the root of the trie to a leaf node. Each node on the path has a label of a character in w. For simplicity, a node is mentioned interchangeably with its corresponding string in later text. Each leaf node has an inverted list of IDs of records that contain the corresponding word, with additional information such as the attribute in which the keyword appears and its position. For instance, FIG. 6 shows a partial index structure for publication records. The word “vldb” has a trie node ID of 15, and its inverted list includes record IDs 6, 7, and 8. For simplicity, the figure only shows the record ids, without showing the additional information about attributes and positions.
B. Single Keyword
We first consider how to answer a query with a single keyword using the trie. Each keystroke that a user types invokes a query of the current keyword, and the client browser sends the query to the server.
Naive Method:
One naive way to process such a query on the server is to answer the query from scratch as follows. We first find the trie node corresponding to this keyword by traversing the trie from the root. Then we locate the leaf descendants of this node, and retrieve the corresponding predicted words and the predicted records on the inverted lists. For example, suppose a user types in query string “luis” letter by letter. When the user types in the character “l”, the client sends the query “l” to the server. The server finds the trie node corresponding to this keyword, i.e., node 10. Then it locates the leaf descendants of node 10, i.e., nodes 11, 12, 13, 14, and 16, and retrieves the corresponding predicted words, i.e., “li”, “lin”, “liu”, “lu”, and “luis”, and the predicted records, i.e., 1, 3, 4, 5, and 7. When the user types in the character “u”, the client sends a query string “lu” to the server. The server answers the query from scratch as follows. It first finds node 14 for this string, then locates the leaf descendants of node 14 (nodes 14 and 16). It retrieves the corresponding predicted words (“lu” and “luis”), and computes the predicted records (4 and 7). Other queries invoked by keystrokes are processed in a similar way. One main limitation of this method is that it involves a lot of recomputation without using the results of earlier queries.
2) Caching-Based Method.
We maintain a session for each user. Each session keeps the keywords that the user has typed in the past and other information for each keyword, including its corresponding trie node and the top-t predicted records. The goal of keeping the information is to use it answer subsequent queries incrementally as follows. Assume a user has typed in a query string c1c2 : : : cx letter by letter. Let pi=c1c2 : : : ci be a prefix query (1≦i≦·x). Suppose ni is the trie node corresponding to pi. After the user types in a prefix query pi, we store node ni for pi and its top-t predicted records.
For example, suppose a user has typed in “lui”. After this query is submitted, the server has stored node 10 and records 1, 3, 4, 5, and 7 (only top-t) for the prefix query “l”, node 14 and records 4 and 7 for the prefix query “lu”, and node 15 and record 7 for “lui”. For each keystroke the user types, for simplicity, we first assume that the user types in a new character cx+1 at the end of the previous query string. To incrementally answer the new query, we first check whether node nx that has been kept for px has a child with a label of cx+1. If so, we locate the leaf descendants of node nx+1, and retrieve the corresponding predicted words and the predicted records. Otherwise, there is no word that has a prefix of px+1, and we can just return an empty answer. For example, if the user types in “s”, we check whether node 15 kept for “lui” has a child with label “s”. Here, we find node 16, and retrieve the predicted word “luis” and the predicted record 7.
In general, the user may modify the previous query string arbitrarily, or copy and paste a completely different string. In this case, for the new query string, among all the keywords typed by the user, we identify the cached keyword that has the longest prefix with the new query. Formally, consider a query string c1c2 : : : cx. Suppose we have cached prefix queries p1; p2; : : : ; px. Suppose the user submits a new query p′=c1c2 : : : cic′ : : : cy. We find pi that has a longest prefix with p′. Then we use the node ni of pi to incrementally answer the new query p′, by inserting the characters after the longest prefix of the new query c′ : : : cy one by one. In particular, if there exists a cached keyword pi=p′, we use the cached top-t records of pi to directly answer the query p′; If there is no such cached keyword, we answer the query from scratch.
C. Multiple Keywords
Now we consider how to do interactively search in the case of a query with multiple keywords. For a keystroke that invokes a query, we first tokenize the query string into several keywords, k1; k2; : : : ; ki. For each keyword ki (1≦i≦l−1), since it is treated as a complete keyword, we consider only predicted word (i.e., ki), and one predicted-record list (i.e., the inverted list of the trie node corresponding to ki, denoted as li). For the partial keyword ki, there can be multiple predicted words and multiple predicted-record lists. Suppose there are q predicted words for ki, and their corresponding predicted record lists are ll1; ll2; : : : ; llq. Note that the predicted-record lists of complete keywords can be retrieved based on the cached nodes of the complete keywords. The predicted-record lists of the partial keyword can be computed as discussed before. Now our problem is to compute the predicted records as ∩i=1l−1li∩(∪j=1qllj). Conceptually, we take the union of the lists of predicted keywords for the partial word, and do an intersection of this union list with the lists of the complete keywords. We illustrate this operation using the following example.
Example 2
Assume a user types in a query “databases vldb luis” letter by letter. We use Table II to illustrate how our method works. As the user types in the keyword “databases”, for each keystroke, we incrementally answer the query as discussed before. When the user types in a space, we assume that the user has completely typed in the keyword “databases”. When the user types in “v”, we find the trie node 17. We identify the predicted word “vldb” and the predicted records 6, 7, and 8. We compute the intersection of the predicted records of “databases” and those of “v”, and get the predicted records of the current query (records 6, 7, and 8). Similarly, we can incrementally answer the query “databases vldb”. When the user types in another space, we assume that “vldb” has been typed in completely. When the user types in “lu”, there are two predicted words (“lu” and “luis”). We first compute the union of the record lists of the two predicted words, and get an ordered predicted-record list {4, 7}. The predicted-record list for “databases” is {3, 4, 5, 6, 7, 8, 9, 10}; and that of “vldb” is {6, 7, 8}. We intersect the three lists and get the final predicted record (7).
In general, we first compute the union ll=∪j=1qllj of the predicted-record lists of the partial keyword ki to generate an ordered predicted list by using a sort-merge algorithm. Then, we compute the intersection of several lists ∩i=1lli. If the sizes of these inverted lists are large, it is expensive to compute the predicted records. Various algorithms can be adopted here. Specifically we consider two methods. One way is to use a merge-join algorithm to intersect the lists, assuming these lists are pre-sorted. Another way is to check whether each record on the short lists appears in other long lists by doing a binary search or a hash-based lookup. The second method has been shown to achieve a high performance.
D. Treating Every Keyword as a Partial Keyword
In most cases, when a user types in a space character, the user may not modify the keywords before the space. Thus we can treat them as complete keywords. In some cases, the user may modify them to refine the answer, or may input partial keywords due to their limited knowledge, such as “pad smyt” for “padhraic smyth”. To address this issue, we can treat every input keyword as a partial keyword. We can answer the query in a similar manner. We identify the predicted words and predicted records for every partial keyword as discussed above, and compute the predicted records of the query as discussed before. Note that give a query Q={k1; k2; : : : ; ki}, for each keyword, there could be multiple predicted-record lists. Suppose there are qi predicted record lists for the keyword ki, denoted as li1; li2; : : : ; liqi. We compute the predicted records as ∩i=1l(∩j=1qilij). We can easily extend the algorithms discussed above to compute this intersection.
E. Ranking Predicted Records
We want to return a smaller number of relevant records for reasons such as limited interfaces on the client. A variety of ranking functions can be used to rank the predicted records. Here, we consider a specific ranking function by utilizing the fact that every keyword can be treated as a partial keyword, which is quite different from search interfaces that treat every input keyword as a complete keyword.
Given an input query Q={k1; k2; : : : ; ki}, for each predicted record r, for each 1≦i≦·l, there exists at least one predicted word wi for ki in r. Since ki must be a prefix of wi, we can quantify their similarity as:
sim(ki;wi)=|ki|/|wi|: (1)
If there are multiple predicted words in r for a partial keyword ki, we select the predicted word w, with the maximal similarity to ki. We also consider the weight of a predicted word to capture the importance of a predicted word in the data. For example, we can use the inverse document frequency of a keyword wi to quantify its weight, denoted as idfwi. In addition, we take into account the number of attributes that the l predicted words appear in, denoted as na. Accordingly, by combining these three parameters, we propose a ranking function to score r for the query Q as follows:
SCORE(r,Q)=α*i=1lidfwi*sim(ki,wi)+(1-α)*1na,(2)
where a is a tuning parameter between 0 and 1.
Highlighting: When a user types in some keywords, as we treat every keyword as a partial keyword, we want to highlight the predicted words of each keyword. We highlight the prefix of the predicted word that the user has typed. For example, for the input keyword “lu” and its predicted word “luis”, we highlight it as “luis”. To efficiently highlight a predicted word, we need to store the information (on the inverted list) about in which attribute the word appears, and the offset of the word in the attribute, in addition to the record ID. For example, in FIG. 3, for node 16 with respect to the string “luis”, on its inverted list, we need to keep its record ID (7), attribute ID (3), and offset in the attribute (20). (For simplicity, we only show record IDs in the figures.)
Synonyms:
We can utilize a-priori knowledge about synonyms to find relevant records. For example, “William=Bill” is a common synonym in the domain of person names. Suppose in the underlying data, there is a person called “Bill Gates”. If a user types in “William Gates”, we can also find this person. To this end, on the trie, the node corresponding to “Bill” has a link to the node corresponding to “William”, and vice versa. When a user types in “Bill”, in addition to retrieving the predicted records for “Bill”, we also identify those of “William” following the link. In this way, our method can be easily extended to utilize synonyms.
iSearch With Fuzzy Search
Suppose a user wants to search for papers written by Christos Faloutsos. The user does not know the exact spelling, and types in “chritos felouts” as illustrated in FIG. 2. iSearch can still find relevant answers; we predict “christos” for “chritos” and “faloutsos” for “flotsos”, and retrieve top-t relevant records. iSearch with fuzzy search is appealing especially since even if the user types in a query with minor errors due to limited knowledge, the system can still find the answers that could be of interests to the user. A big challenge is how to incrementally and efficiently identify the predicted words that are similar to the input keywords. We develop an efficient algorithm to solve this problem.
Problem Formulation
We first extend the problem formulation above to support a fuzzy search. We treat every input keyword in a given query as a partial keyword. For each keyword, we identify its predicted words that have a prefix similar to the keyword within a given threshold, and retrieve the predicted records that contain a predicted word for every keyword. We use edit distance to quantify the similarity between two words wi and wj, denoted as ed(wi,wj). The edit distance between two strings is the minimum number of edit operations (i.e., insertion, deletion, and substitution) of single characters needed to transform the first one to the second. For example, ed(chritos, christos)=1 and ed(felouts, faloutsos)=3.
Definition 2: (Fuzzy Search)
Given a table T, a query Q={k1; k2; : : : ; ki}, and an edit-distance threshold δ, for 1≦i≦l, let the predicted-word set Wki for ki be {w|∃w′≦w, w∈W, ed(ki,w′)≦δ}. Let the predicted-record set RQ be {r|r ∈R, ∀ 1≦l, ∃ wl ε Wi,wi appears in r}. We want to compute the top-t predicted records in RQ ranked by their relevancy to Q.
For simplicity, we assume we are given threshold δ on the edit distance between similar strings. We can easily extend our solution by dynamically increasing the threshold δ for longer keywords.
B. Single Keyword
In the case of exact search, there exists only one trie node corresponding to a partial keyword k. We use this node to identify the predicted words as discussed above. However, to support fuzzy search, we need to predict the possible multiple words which have a prefix similar to the partial keyword. Thus, there may exist multiple trie nodes corresponding to these words. We call these nodes the active nodes for the keyword k. We locate the leaf descendants of the active nodes, and compute the predicted records corresponding to these leaf nodes. For example, consider the trie in FIG. 4. Suppose δ=2, and a user types in a partial keyword “nlis”. The words “li”, “lin”, “liu”, and “luis”, are all similar to the input keyword, since their edit distances to “nlis” is within δ=2. Thus nodes 11, 12, 13, and 16 are active nodes for the partial keyword (FIG. 4(e)). We find the leaf descendants of the active nodes as the predicted words (“li”, “lin”, “liu”, and “luis”).
Now we consider how to incrementally compute active nodes for query strings as the user types in letters. We develop a caching-based method to achieve our goal. Given an input keyword k, different from exact search which keeps only one trie node, we store the set of active nodes φk={[n, ξn]}, where n is an active node for k, and ξn=ed(k; n)≦δ. (Notice that for simplicity, we use “n” to represent both the trie node and its corresponding string.) We call φk the active node set for the keyword k (together with the edit-distance information for each active node). The idea behind our method is to use the prefix-filtering. That is, when the user types in one more letter after k, only the descendants of the active nodes of k could be active nodes of the new query, and we need not consider other trie nodes. We use this property to incrementally compute the active-node set of a new query.
For a new query, we will use the cached active-node sets to incrementally compute a new active-node set for the query as follows. Firstly, we initialize an active-node set for the empty keyword ε, i.e., φε={[n; εn]|ξn=|n|≦δ}. That is, it includes all trie nodes n whose corresponding string has a length |n| within the edit-distance threshold δ. These nodes are active nodes for the empty string since their edit distances to ε are within δ.
Assume a user has typed in a query string c1c2 : : : cx letter by letter. After the user types in a prefix query pi=c1c2 : : : ci (i≦x), we keep an active-node set φp for pi. When the user types in a new character cx+1 and submits a new query px+1, we compute the active-node set 0px+1 for px+1 by using 0px as follows. For each [n; ξn] in 0px, we consider whether the descendants of n are active nodes for px+1, as illustrated in FIG. 5. For the node n, if ξn+1<δ, then n is an active node for px+1, so we include [n; εn+1] into 0px+1. This case corresponds to deleting the last character cx+1 from the new query string px+1. Notice that even if εn+1≦δ· is not true, this node n could still potentially become an active node of the new query string, due to operations described below on other active nodes in 0px.
For each child nc of node n, it has two possible cases.
The child node nc has a character different from cx+1. FIG. 5 shows a node ns for such a child node, where “s” stands for “substitution,” the meaning of which will become clear shortly. We have ed(ns; px+1)≦ed(n; px)+1=εn+1. If ξn+1≦·δ, then ns is an active node for the new string, so we put [ns; εn+1] into 0px+1. This case corresponds to substituting the label of ns for the letter cx+1−2
The child node nc has a label cx+1. FIG. 5 shows the node nm for such a child node, where “m” stands for “matching,” the meaning of which will become clear shortly. In this case, we have ed(nm; px+1)≦·ed(n; px)=ξn≦δ. Thus, nm is an active node of the new string, so we put [nm; ξn] into 0px+1. This case corresponds to the match between the character cx+1 and the label of nm. One subtlety here is that, if the distance for the node nm is smaller than δ, i.e., ξn<δ, we need to do the following: for each nm's descendant d that is at most δ−ξn letters away from nm, we can safely add [d; ξd] to the active-node set for the new string px+1, where ξdn+|d|−|nm|. This operation corresponds to inserting several letters after node nm. For the node ns we need not consider its descendants for insertions. Because if these descendants are active nodes, they must be in 0px, and we will consider them when processing such active nodes.
Notice that during the computation of set 0px+1, it is possible to add two new pairs [v; ε1] and [v; ε2] for the same trie node v. In this case, we always keep the one with the smaller edit distance. That is, if ξ12, we only keep the former pair. The reason is that we only want to keep the edit distance between the node v and the query string px+1, which means the minimum number of edit operations to transform the string of v to the string px+1. The following lemma shows the correctness of this algorithm, which can be proved by mathematical induction. Here, we omit the proof.
Lemma 1:
For a query string px=c1c2 : : : cx, let 0px be its active-node set. Consider a new query string px+1=c1c2 : : : cxcx+1. (1) Soundness: Each node computed by the algorithm described above is an active node of the new query string px+1. (2) Completeness: Every active node of the new query string px+1 will be computed by the algorithm above.
Example 3
Assume a user types in a query “nlis” letter by letter. Suppose the edit-distance threshold δ is 2. FIG. 4 illustrates how our method works to process the prefix queries invoked by keystrokes. Table III shows the details of how to compute the active-node sets incrementally. Firstly, we initialize φε={[0,0], [10,1], [11,2], [14,2]} (FIG. 4a and Table III(a)). When the user types in the first character “n”, for the string s=“n”, we compute its active-node set φs based on φεas follows. For [0; 0] ∈φε, we put [0; 1] into φs, since we can delete the letter “n”. For node 10, which is a child of node 0 with a letter “l”, we put [10; 1] into φs, as we can substitute “l” for “n”. There are no match and insertion operations as node 1 has no child with label “n”. In this way, we get φs (FIG. 4b and Table III(b)). Similarly, we can answer the prefix queries of “nlis”.
For each active node, we predict the words corresponding to its leaf descendants. Consider the active-node set for the prefix query “nl” as shown in FIG. 4c. For [11; 2], we compute the predicted words of “li, “lin”, and “liu”, and the predicted records of 1, 3, 4, and 5.
C. Multiple Keywords
iSearch
Now we consider how to do fuzzy search in the case of a query with multiple keywords first in the context of using iSearch. Second, we will consider below a fuzzy search with multiple keywords using intersection of multiple lists of keywords and then cache-based incremental intersection.
For each keystroke, we first tokenize the query string into several keywords. Then we identify the predicted words and predicted records of each keyword as discussed before. Different from the case of exact search, in fuzzy search, there could be multiple active nodes for each partial keyword, instead of only one trie node in exact search. For example, in FIG. 4d, suppose a user types in a query “nli”. The active nodes are 10, 11, 12, 13, 14, and 15. The predicted words are “li, “lin”, “liu”, “lu”, and “lui”. There are multiple active nodes for each keyword. We need to combine the predicted records for every active node, and generate an ordered list for the keyword. Then we can use the algorithms described above to compute the predicted records for the query.
Discussions
Highlighting:
We extend the highlighting method discussed above to support fuzzy search. It is relatively easy to highlight the predicted word in exact search, as the input keyword must be a prefix of the predicted word. However it is not straightforward to highlight predicted words in fuzzy search. This is because, given an input keyword and one of its predicted words, the input keyword may not be a prefix of the predicted word. Instead, the input keyword may be similar to some prefixes of the predicted word. Thus, there could be multiple ways to highlight the predicted word. For example, suppose a user types in “lus”, and there is a predicted word “luis”. Both prefixes “lui” and “luis” are similar to “lus”, and there are several ways to highlight the predicted word, such as “luis” or “luis”. To address this issue, we use the concept normalized edit distance (ned for short).
Definition 3: (Normalized Edit Distance)
Given two words wi and wj, their normalized edit distance is:
ned(wi;wj)=ed(wi;wj)/max(|wi|;|wj|); (3)
where |wi| denotes the length of wi. Given an input keyword and one of its predicted words, we highlight the prefix of the predicted word with the minimum ned to the keyword. We call such a prefix a best predicted prefix, and call the corresponding normalized edit distance the “minimal normalized edit distance,” denoted as “mned”. This prefix is considered to be most similar to the input keyword. For example, for the keyword “lus” and its predicted word “luis”, we have ned(“lus”, “l”)=2 3, ned(“lus”, “lu”)=1 3, ned(“lus”, “lui”)=1 3 , and ned(“lus”, “luis”)=1 4. Since mned(“lus”, “luis”)=ned(“lus”, “luis”), we highlight “luis”.
2) Ranking:
As before, a variety of ranking functions can be used to rank the predicted records. Here, we consider a specific ranking function by utilizing the fact that input keywords may be fuzzy. Consider an input keyword ki and one of its predicted words, wi. As ki may not be a prefix of wi, but similar to a prefix of wi, then |ki|/|wi| may not accurately quantify the similarity between ki and wi. If ki is a complete keyword, ned can quantify the similarity; otherwise, if ki is a partial keyword, mned is a better function to quantify their similarity. We combine the two functions to quantify their similarity as follows:
sim(ki;wi)=γ*(1−ned(ki;wi))+(1−γ)*(1−mned(ki;wi)); (4)
where γ is a tuning parameter between 0 and 1. Accordingly, we can extend the ranking function Equation 2 using this similarity function to rank records in fuzzy search.
In summary, we disclose a computing paradigm, called iSearch, that supports interactive, fuzzy search on structured data. We have developed several novel techniques to interactively and efficiently search the underlying data on the fly. We also considered how to support fuzzy search in the framework, by using a prefix-filtering-based method to incrementally identify related keywords. We have developed several prototypes on real applications. The experimental results show that our methods can achieve a high search efficiency and result quality.
Now consider a fuzzy search using multiple keywords more outside the limited context of iSearch. The goal is to efficiently and incrementally compute the records with keywords whose prefixes are similar to those query keywords. We focus on several challenges in this setting.
Intersection of Multiple Lists of Keywords:
Each query keyword (treated as a prefix) has multiple predicted complete keywords, and the union of the lists of these predicted keywords includes potential answers. The union lists of multiple query keywords need to be intersected in order to compute the answers to the query. These operations can be computationally costly, especially when each query keyword can have multiple similar prefixes. We consider various algorithms for computing the answers efficiently.
(2) Cache Based Incremental Intersection:
In most cases, the user types the query letter by letter, and subsequent queries append additional letters to previous ones. Based on this observation, we consider how to use the cached results of earlier queries to answer a query incrementally
Intersecting Union Lists of Prefixes
For simplicity, we first consider exact search, and then extend the results to fuzzy search. Given a query Q={p1, p2, . . . , pi}, suppose {ki1, ki2, . . . } is the set of keywords that share the prefix pi. Let Lij denote the inverted list of kij, and ∪i=∪jLij be the union of the lists for pi. We study how to compute the answer to the query, i.e., ∩iUi.
Simple Methods:
One method is the following. For each prefix pi, we compute the corresponding union list ∪i on the fly and intersect the union lists of different keywords. The time complexity for computing the unions could be O(Σi,j Lij). The shorter the keyword prefix is, the slower the query could be, as inverted lists of more predicted keywords need to be traversed to generate the union list. This approach only requires the inverted lists of trie leaf nodes, and the space complexity of the inverted lists is O(n×L), where n is the number of records and L is the average number of distinct keywords of each record.
Alternatively, we can pre-compute and store the union list of each prefix, and intersect the union lists of query keywords when a query comes. The main issue of this approach is that the precomputed union lists require a large amount of space, especially since each record occurrence on an inverted list needs to be stored many times. The space complexity of all the union lists is O(n×L×w), where w is the average keyword length. Compression techniques can be used to reduce the space requirement.
Efficient Prefix Intersection Using Forward Lists:
We develop a new solution based on the following ideas. Among the union lists ∪1, ∪2, . . . , ∪l, we identify the shortest union list. Each record ID on the shortest list is verified by checking if it exists on all the other union lists (following the ascending order of their lengths). Notice that these union lists are not materialized in the computation. The shortest union list can be traversed by accessing the leaf nodes of the corresponding prefix. The length of each union list can be pre-computed and stored in the trie, or estimated on-the-fly. To verify record occurrences efficiently, a forward list can be maintained for each record r, which is a sorted list of IDs of keywords in r, denoted as Fr. A unique property of the keyword IDs is that they are encoded using their alphabetical order. Therefore, each prefix pi has a range of keyword IDs [MinIdi, MaxIdi], so that if pi is a prefix of another string s, then the ID of s should be within this range.
An interesting observation is, for a record r on the shortest union list, the problem of verifying whether r appears on (non-materialized) union list Uk of a query prefix pk, is equivalent to testing if pk appears in the forward list Fr as a prefix. We can do a binary search for MinIdk on the forward list Fr to get a lower bound Idlb, and check if Idlb is no larger than MaxIdk. The probing succeeds if the condition holds, and fails otherwise. The time complexity for processing each single record r is O[(l−1 )log L], where l is the number of keywords in the query, and L is the average number of distinct keywords in each record. A good property of this approach is that the time complexity of each probing does not depend on the lengths of inverted lists, but on the number of unique keywords in a record (logarithmically).
FIG. 7 shows an example when a user types in a query “vidb li”. The predicted keywords for “li” are “li”, “lin”, “line”, and “linux”. The keyword-ID range of each query keyword is shown in brackets. For instance, the keyword-ID range for prefix “li” is [3, 5], which covers the ranges of “lin” and “liu”. To intersect the union list of “vidb” with that of “li”, we first identify “vldb” as the one with the shorter union list. The record IDs (6, 7, 8, . . . ) on the list are probed one by one. Take record 6 as an example. Its forward list contains keyword IDs 2, 4, 8, . . . . We use the range of “li” to probe the forward list. By doing a binary search for the keyword ID 3, we find keyword with ID 4 on the forward list, which is then verified to be no larger than MaxID=6. Therefore, record 6 is an answer to the query, and the keyword with ID 4 (which appears in record 6) has “li” as a prefix.
Extension to Fuzzy Search:
The algorithm described above naturally extends to the case of fuzzy search. Since each query keyword has multiple active nodes of similar prefixes, instead of considering the union of the leaf nodes of one prefix node, now we need to consider the unions of the leaf nodes for all active nodes of a prefix keyword. The lengths of these union lists can be estimated in order to find a shortest one. For each record r on the shortest union list, for each of the other query keywords, for each of its active nodes, we test if the corresponding similar prefix can appear in the record r as a prefix using the forward list of r.
Cache-Based Intersection of Prefixes
Above we presented an algorithm for incrementally computing similar prefixes for a query keyword, as the user types the keyword letter by letter. Now we show that prefix intersection can also be performed incrementally using previously cached results.
We use an example to illustrate how to cache query results and use them to answer subsequent queries. Suppose a user types in a keyword query Q1=“cs co”. All the records in the answers to Q1 are computed and cached. For a new query Q2=“cs conf” that appends two letters to the end of Q1, we can use the cached results of Q1 to answer Q2, because the second keyword “conf” in Q2 is more restrictive than the corresponding keyword “co” in Q1. Each record in the cached results of Q1 is verified to check if “conf” can appear in the record as a prefix. In this way, Q2 does not need to be answered from scratch. As in this example, in the following discussion, we use “Q1” to refer to a query whose results have been cached, and “Q2” to refer to a new query whose results we want to compute using those of Q1.
Cache Miss:
• a. Often the more keywords the user types in, the more typos and mismatches the query could have. Thus we may want to dynamically increase the edit-distance threshold δ as the query string is getting longer. Then it is possible that the threshold for the new query Q2 is strictly larger than that of the original query Q1. In this case, the active nodes of keywords in Q1 might not include all those of keywords in Q2. As a consequence, we cannot use the cached results of Q1 (active nodes and answers) to compute those of Q2. This case is a cache miss, and we need to compute the answers of Q2 from scratch.
Reducing Cached Results:
The cached results of query Q1 could be large, which could require a large amount of time to compute and space to store. There are several cases where we can reduce the size. The first case is when we want to use pagination, i.e., we show the results in different pages. In this case, we can traverse the shortest list partially, until we have enough results for the first page. As the user browses the results by clicking “Previous” and “Next” links, we can continue traversing the shortest list to compute more results and cache them.
The second case is when the user is only interested in the best results, say, the took records according a ranking function, for a predefined constant k. Such a function could allow us to compute the answers to the query Q1 without traversing the entire shortest list, assuming we are sure that all the remaining records on the list cannot be better than the results already computed. In other words, the ranking function allows us to do early termination during the traversal. When using the top-k results of Q1 to compute the top-k results of Q2, it is possible that the cached results are not enough, since Q2 has a more restrictive keyword. In this case, we can continue the unfinished traversal on the shortest list, assuming we have remembered the place where the traversal stopped on the shortest list for query Q1.
FIG. 8 shows an example of incrementally computing top-k answers using cached results. A user types in a query “cs conf vanc” letter by letter, and the server receives queries “cs co”, “cs conf”, and “cs conf vanc” in order. (Notice that it is possible that some of the prefix queries were not received by the server due to the network overhead and the server delay.) The first query “cs co” is answered from scratch. Assuming the union list of keyword “cs” is the shorter one. The traversal stops at the first vertical bar. Each record accessed in the traversal is verified by probing the keyword range of “co” using the forward list of the record. Records that pass the verification are cached. When we want to answer the query “cs conf” incrementally, we first verify each record in the cached result of the previous query by probing the keyword range of “conf”. Some of these results will become results of the new query. If the results from the cache is insufficient to compute the new top-k, we resume the traversal on the list of “cs”, starting from the stopping point of the previous query, until we have enough top-k results for the new query. The next query “cs conf vanc” is answers similarly.
In the case of cache miss, i.e., earlier cached results cannot be used to compute the answers of a new query, we may need to answer the new query from scratch. We may choose a different list as the shortest one to traverse, and subsequent queries can be computed incrementally similarly.
Ranking
In the context of the foregoing a ranking function considers various factors to compute an overall relevance score of a record to a query. The following are several important factors.
• a. Matching prefixes: We consider the similarity between a query keyword and its best matching prefix. The more similar a record's matching keywords are to the query keywords, the higher this record should be ranked. The similarity is also related to keyword length. For example, when a user types in a keyword “circ”, the word “circle” is closer to the query keyword than “circumstance”, therefore records containing the word “circle” could be ranked higher. In most cases exact matches on the query should have a higher priority than fuzzy matches.
• b. Predicted keywords: Different predicted keywords for the same prefix can have different weights. One way to assign a score to a keyword is based on its inverted document frequency (IDF).
• c. Record weights: Different records could have different weights. For example, a publication record with many citations could be ranked higher than a less cited publication.
As an example, the following is a scoring function that combines the above factors. Suppose the query is Q={p1, p2, . . . }, p′1 is the best matching prefix for pi, and ki is the best predicted keyword for p′i. Let sim(pi, p′i) be an edit similarity between p′i and pi. The score of a record r for Q can be defined as:
Score(r,Q)=Σi[sim(pi,p′i)+α*|p′i|−|ki|)+β*score(r,ki)],
where α and β are weights (0<β<α<1), and score(r, ki) is a score of record r for keyword ki.
Highlighting Best Prefixes
Similarly, in the above context when displaying records to the user, the most similar prefixes for an input prefix should be highlighted. This highlighting is straightforward for the exact-match case. For fuzzy search, a query prefix could be similar to several prefixes of the same predicted keyword. Thus, there could be multiple ways to highlight the predicted keyword. For example, suppose a user types in “lus”, and there is a predicted keyword “luis”. Both prefixes “lui” and “luis” are similar to “lus”, and there are several ways to highlight them, such as “luis” or “luis”. To address this issue, we use the concept of normalized edit distance. Formally, given two prefixes pi and pj, their normalized edit distance is:
ned(pi,pj)=ed(pi,pi)/max(|pi|,|pj|),
where |pi| denotes the length of pi. Given an input prefix and one of its predicted keywords, the prefix of the predicted keyword with the minimum ned to the query is highlighted. We call such a prefix a best matched prefix, and call the corresponding normalized edit distance the “minimal normalized edit distance,” denoted as “mned”. This prefix is considered to be most similar to the input. For example, for the keyword “lus” and its predicted word “luis”, we have ned(“lus”, “l”)=2 3, ned(“lus”, “lu”)=1 3, ned(“lus”, “lui”)=1 3, and ned(“lus”, “luis”)=1 4. Since mned(“lus”, “luis”)=ned(“lus”, “luis”), “luis” will be highlighted.
Using Synonyms
In the above context we can utilize a-priori knowledge about synonyms to find relevant records in the same manner as discussed above.
Many alterations and modifications may be made by those having ordinary skill in the art without departing from the spirit and scope of the invention. Therefore, it must be understood that the illustrated embodiment has been set forth only for the purposes of example and that it should not be taken as limiting the invention as defined by the following invention and its various embodiments.
Therefore, it must be understood that the illustrated embodiment has been set forth only for the purposes of example and that it should not be taken as limiting the invention as defined by the following claims. For example, notwithstanding the fact that the elements of a claim are set forth below in a certain combination, it must be expressly understood that the invention includes other combinations of fewer, more or different elements, which are disclosed in above even when not initially claimed in such combinations. A teaching that two elements are combined in a claimed combination is further to be understood as also allowing for a claimed combination in which the two elements are not combined with each other, but may be used alone or combined in other combinations. The excision of any disclosed element of the invention is explicitly contemplated as within the scope of the invention.
The words used in this specification to describe the invention and its various embodiments are to be understood not only in the sense of their commonly defined meanings, but to include by special definition in this specification structure, material or acts beyond the scope of the commonly defined meanings. Thus if an element can be understood in the context of this specification as including more than one meaning, then its use in a claim must be understood as being generic to all possible meanings supported by the specification and by the word itself.
The definitions of the words or elements of the following claims are, therefore, defined in this specification to include not only the combination of elements which are literally set forth, but all equivalent structure, material or acts for performing substantially the same function in substantially the same way to obtain substantially the same result. In this sense it is therefore contemplated that an equivalent substitution of two or more elements may be made for any one of the elements in the claims below or that a single element may be substituted for two or more elements in a claim. Although elements may be described above as acting in certain combinations and even initially claimed as such, it is to be expressly understood that one or more elements from a claimed combination can in some cases be excised from the combination and that the claimed combination may be directed to a subcombination or variation of a subcombination.
Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements.
The claims are thus to be understood to include what is specifically illustrated and described above, what is conceptionally equivalent, what can be obviously substituted and also what essentially incorporates the essential idea of the invention.
TABLE I
A PUBLICATION TABLE
IDTitleAuthorBooktitleYear
1EASE: An Effective 3-in-1 Keyword Search Method forGuoliang Li. Beng Chin OoiSIGMOD2008
Unstructured, Semi-structured and Structured DataJianhua Feng, Jianyong Wang, Lizhu Zhou
2BLINKS: Ranked Keyword Searches on GraphsHao He, Haixun Wang, Jun Yang, Philip S. YuSIGMOD2007
3Spark: Top-k Keyword Query in Relational DatabasesYi Luo, Xuemin Lin, Wei Wang, Xiaofang ZhouSIGMOD2007
4Finding Top-k Min-Cost Connected Trees in DatabasesBolin Ding, Jeffrey Xu Yu, Shan Wang,ICDE2007
Lu Qin, Xiao Zhang, Xuemin Lin
5Efective Keyword Search in Relational DatabasesFang Liu, Clement T. Yu, Weiyi Meng, Abdur ChowdhurySIGMOD2006
6Bidirectional Expansion for Keyword Search on Graph DatabasesVarun Kacholia, Shashank Pandit, Soumen Chakrabarti,VLDB2005
S. Sudarshan, Rushi Desai, Hrishikesh Karambelkar
7Ethcient IR-Style Keyword Search over Relational DatabasesVagelis Hristklis, Luis Gravano, Yannis PapakonstantinouVLDB2003
8DISCOVER: Keyword Search in Relational DatabasesVagelis Hristidis, Yannis PapakonstantinouVLDB2002
9DBXplorer: A System for Keyword-Based SearchSanjay Agrawal, Surajit Chaudhuri, Gautam DasICDE2002
10Keyword Searching and Browsing in Databases using BANKSGaurav Bhalotia, Arvind Hulgeri, Charuta Nakhe,ICDE2002
Soumen Chakrabarti, S. Sudarshan
TABLE II
ANSWERING PREFIX QUERIES OF “databases vldb luis” INCREMENTALLY
User Inputddadatdatadatabdatabadatabasdatabasedatabases
Kept Nodes123456789
Predicted Records1 3 4 5 61 3 4 5 61 3 4 5 61 3 4 5 63 4 5 63 4 5 63 4 5 63 4 5 63 4 5 6
7 8 9 107 8 9 107 8 9 107 8 9 107 8 9 107 8 9 107 8 9 107 8 9 107 8 9 10
databases vdatabases vldatabases vlddatabases vldb
User Inputdatabasesvdatabasesvldatabasesvlddatabasesvldb
Kept Nodes917918919920
Predicted Records3 4 5 66 7 83 4 5 66 7 83 4 5 66 7 83 4 5 66 7 8
7 8 9 107 8 9 107 8 9 107 8 9 10
6 7 86 7 86 7 86 7 8
databases vldb ldatabases vldb ludatabases vldb luidatabases vldb luis
User Inputdatabasesvldbldatabasesvldbludatabasesvldbluidatabasesvldbluis
Kept Nodes92010920149201592016
Predicted Records3 4 5 66 7 81 3 43 4 5 66 7 84 73 4 5 66 7 873 4 5 66 7 87
7 8 9 105 77 8 9 107 8 9 107 8 9 10
7777
TABLE III
THE ACTIVE NODE SETS FOR PROCESSING PREFIX QUERIES
OF “nlis” (EDIT DISTANCE THRESHOLD δ = 2)
(a) query “n”
Φe(0, 0)(10, 1)(11, 2)(14, 2)
Deletion(0, 1)(10, 2)
Substitution(10, 1) (11, 2); (14, 2)
Match(12, 2)
Insertion
Φn(0, 1); (10, 1); (11, 2); (12, 2); (14, 2)
(b) query “nl”
Φn(0, 1)(10, 1)(11, 2)(12, 2)(14, 2)
Deletion(0, 2)(10, 2)
Substitution(11, 2);
(14, 2)
Match(10, 1)
Insertion(11, 2); (14, 2)
Φnl(10, 1); (0, 2); (11, 2); (14, 2)
(c) query “nli”
Φnl(10, 1)(0, 2)(11, 2)(14, 2)
Deletion(10, 2)
Substitution(14, 2)
Match(11, 1)(15, 2)
Insertion(12, 2); (13, 2)
Φnli(11, 1); (10, 2); (12, 2); (13, 2); (14, 2); (15, 2)
(d) query “nlis”
Φnli(11, 1)(10, 2)(12, 2)(13, 2)(14, 2)(15, 2)
Deletion(11, 2)
Substitution(12, 2);
(13, 2)
Match(16, 2)
Insertion
Φnlis(11, 2); (12, 2); (13, 2); (16, 2)
|
__label__pos
| 0.973865 |
Guest post by David Churchward
image
I’ve always been a firm believer that moving averages probably give a better insight into trends within a business than a simple trend line associated to a set of values such as monthly sales (although I tend to review these two values together). The reason for this is that a trend can be skewed by one or two values that may not be representative of the underlying business such as spikes associated to seasonality or a specific event. When BillD highlighted a query regarding this concept in his comments on Profit & Loss (Part 2) – Compare and Analyse, I thought it would be a great idea to flex our P&L dataset to provide some Moving Average capability.
In this post, I will explain what moving averages are intended to deliver and explain how to calculate them using the sales elements of the example data used in the Profit & Loss series of posts. I will then add the flexibility for users to select the time frame that the moving average calculation should consider, the number of trend periods to be displayed and the end date of the report.
What is a Moving Average?
The most common moving average measure is generally referred to as a 12 month moving average. In the case of our sales data, for any given period, this measure would sum the last 12 months of sales preceding and including the month being analysed and then divide by 12 to show an average sales value for that timeframe. In financial terms, the equation is therefore quite simply:
12 Month Moving Average = Sum of Sales for Last 12 Months / 12
This all seems very straight forward but there’s a lot of complexity involved if we want to put the Moving Average timeframe (represented as 12 in the above example) in the hands of the user, give them the power to select the number of trend periods to be displayed and the month that the report should display up to.
The Dataset
The dataset that we’re using looks something like below.
image
Note – I’m using PowerPivot V1. Design viewer is available in V2 but I’ve hashed this together – nothing clever!
You’ll notice that FACT_Tran (our dataset to be analysed) is linked to DIM_Heading1, DIM_Heading2 and DIM_DataType to provide some categorisation to our dataset. I’ve also linked to Dates which is a sequential set of dates that more than covers the timespan of our dataset. This table carries some static additional information based on the date:
Date_Month_End = EOMONTH(Dates[Date],0)
Date_Next_Month_Start = Dates[Date_Month_End]+1
Once again, we’re not quite registering on Rob’s spicy scale! Rest assured that you’ll be getting a more intense DAX workout as we go on.
As these date measures aren’t expected to be dynamic, I’ve coded them in the PowerPivot window. This allows them to be calculated on file refresh but they won’t need to recalculate for each slicer operation which removes performance overhead from our ultimate dynamic measure.
For reasons that I’ll come on to later, I also need the month end date on my fact table as I can’t use the Month End Date on my Dates table in my measures. I can however pull the same value across to my FACT_Tran table using the following measure:
Fact_Month_End_Date = RELATED(Dates[Date_Month_End])
So What Are These Unlinked MA_ Tables?
The reason for these tables should become apparent as we go on. In brief, they’re going to be used as parameters or headings on our report. The reason that they exist and that they’re not linked to the rest of our data is simply because I don’t want them to be filtered by our measures. Instead, I want them to drive the filtering.
Initial PivotTable Setup
I’m going to be displaying a series of data organised in monthly columns. The user will be given slicers to set Month End Date (the last period to be shown on the report), Number of Periods for Moving Average (this will ultimately be part of our divisor calculation) and Number of Periods for Trend (this will be the number of monthly columns that we will display on our trend). We can establish these slicers straight away and link them to the pivot.
I obviously need a month end date as a column heading but which one? To some extent I’ve given this away earlier on. In short, I need to use my MA_Dates[Month_End_Date] field. The reason is that this field isn’t linked to our dataset and therefore won’t be affected by any other filters. If I use a date field that is part of my dataset or part of a linked table, the values available may be filtered down by the users selections. I can get around this using an ALL() expression to give me the correct values, but the problem is that the column is still filtered and my results will all be displayed in one column. It’s difficult to explain until you see it so please go ahead and try – it’s worth hitting the brick wall to really understand it!
Calculating Sum of Sales for Last X Months
The first part of our equation is to calculate the total value for sales across all periods within a dynamic timeframe to be selected by the user. For this I use a Calculate function that looks like this:
CALCULATE(
[Cascade_Value_All],
DIM_Heading1[Heading1_Name]=”Sales”,
DIM_DataType[Data_Type_Name]=”Actual”,
DATESBETWEEN(
Dates[Date],
DATEADD(
LASTDATE(VALUES(MA_Dates[Next_Month_Start_Date])),
MAX(MA_Function_Periods[Moving_Average_No_Periods])*-1,MONTH
),
LASTDATE(VALUES(MA_Dates[Month_End_Date]))
)
)
I’m using a base measure called Cascade_Value_All that was created in Profit & Loss – The Art of the Cascading Subtotal. I’m then filtering that measure to limit my dataset to records that relate to Sales and a data type of Actual (ie eliminating Budget). This is simple filtering of a CALCULATE function. However, it gets a bit more tasty with the third filter which limits the dataset to a series of dates that are dependent on the users selections in slicers and our date column heading.
The DATESBETWEEN function has the syntax DATESBETWEEN(dates, start_date, end_date) and works like this:
1. I set the field that requires filtering (Dates[Data]). I’ve found that this works best if this is a linked table of sequential dates without any breaks. If you have any breaks, there’s a chance you might not get an answer as the answer that you evaluate to has to be available in the table.
2. My start date is a DATEADD function that calculates the column heading date less the number of months that the user has selected on the “Moving Average No of Periods” slicer. I use the LASTDATE(VALUES(MA_Dates[Next_Month_Start_Date)) function to retrieve the Next_Month_Start_Date value from the MA_Dates table that relates to the date represented on the column heading. I then rewind by the number of months selected on the slicer using MAX(MA_Function_Periods[Moving_Average_No_Periods])*-1. The “-1” is used to go back in time. The reason I use Next_Month_Start_Date and a multiple of –1 is more clearly explained in Slicers For Selecting Last “X” Periods.
3. My end date is simply the Month_End_Date as shown on the column heading of the report. This is calculated using LASTDATE(VALUES(MA_Dates[Month_End_Date]).
image
That’s great, but my measure isn’t taking any account of my “Show Periods Up To” selection and the “Trend No of Periods” that I’ve selected. We therefore need to limit the measure to only execute when certain parameters hold as true based on these selections. I only want values to be displayed when my column heading date is:
1. Less than or equal to the selected Month End Date on my “Show Periods Up To” slicer AND
2. Greater than or equal to the selected Month End Date LESS the selected number of periods on my “Trend No of Periods” slicer.
To do this, I use an IF statement to determine when my CALCULATE function should execute. Let’s call this measure Sales_Moving_Average_Total_Value
Sales_Moving_Average_Total_Value
= IF(COUNTROWS(VALUES(MA_Dates[Month_End_Date]))=1,
IF(VALUES(MA_Dates[Month_End_Date])<=
LASTDATE(Dates[Date_Month_End])
&&VALUES(MA_Dates[Month_End_Date])>=
DATEADD(
LASTDATE(Dates[Date_Next_Month_Start]),
(MAX(MA_Trend_Periods[Trend_Periods])*-1),MONTH),
CALCULATE(
[Cascade_Value_All],
DIM_Heading1[Heading1_Name]=”Sales”,
DIM_DataType[Data_Type_Name]=”Actual”,
DATESBETWEEN(
Dates[Date],
DATEADD(
LASTDATE(MA_Dates[Next_Month_Start_Date]),
MAX(MA_Function_Periods[Moving_Average_No_Periods])*-1,MONTH
),
LASTDATE(VALUES(MA_Dates[Month_End_Date]))
)
)
)
)
The IF statement works as follows:
1. I first need to determine that I’m evaluating only where I have one value for MA_Date[Month_End_Date]. If I don’t do this, I get that old favourite error in my subsequent evaluation that says that a table of multiple values was supplied……
2. I then evaluate to determine if my column heading date (VALUES(MA_Dates[Month_End_Date]) is less than or equal to the date selected on the Month End Period slicer (LASTDATE(dates[Date_Month_End])…AND (&&)
3. My column heading date is greater than or equal to a calculated date which is X periods prior to the selected “Show Periods Up To” as selected on the Slicer. I use a DATEADD function for this similar to that used in my CALCULATE function except we’re adjusting the date by the value selected on the “Trend No of Periods” slicer.
With this in place, we have the total sales for the selected period relating to the users selections.
image
So my table is now limited to the number of trend periods selected and represents the month end date selected.
So Now We Just Divide By “Moving Average No of Periods” Right? eh NO!
We’ve calculated our total sales for the period relating to the users selections. You would be forgiven for suggesting that we simply divide by the number of moving average periods selected. Depending on your data, you could do this but the problem is that the dataset may not hold the selected number of periods, especially if the user can select a month end date that goes back in time. As a result, we need to work out how may periods are present in our Sales_Moving_Average_Total_Value measure.
Sales_Moving_Average_Periods
= IF(COUNTROWS(VALUES(MA_Dates[Month_End_Date]))=1,
IF(VALUES(MA_Dates[Month_End_Date])<=
LASTDATE(Dates[Date_Month_End])
&&VALUES(MA_Dates[Month_End_Date])>=
DATEADD(
LASTDATE(Dates[Date_Next_Month_Start]),
(MAX(MA_Trend_Periods[Trend_Periods])*-1),MONTH),
CALCULATE(
COUNTROWS(DISTINCT(FACT_Tran[Fact_Month_End_Date])),
DIM_Heading1[Heading1_Name]=”Sales”,
DIM_DataType[Data_Type_Name]=”Actual”,
DATESBETWEEN(
Dates[Date],
DATEADD(LASTDATE(MA_Dates[Next_Month_Start_Date]),
MAX(MA_Function_Periods[Moving_Average_No_Periods])*-1,MONTH),
LASTDATE(VALUES(MA_Dates[Month_End_Date]))
)
)
)
)
This measure is essentially the same as my Sales_Moving_Average_Total measure. The only real difference is that we count the distinct date values in our dataset as opposed to calling the Cascade_Value_All measure. I mentioned earlier that there was a reason why I needed the month end date to be held on my FACT_Tran table and this is why. If I use any other table holding the month end date, that table isn’t going to have been filtered in the way that the core dataset has been filtered. As an example, my Dates table has a series of dates that spans my dataset timeframe and more. As a result, evaluating against this table will deduce that the table does in fact have dates that precede my dataset and there is therefore no evaluation as to whether there is a transaction held in the dataset for that date.
image
As you can see, since my dataset runs from 1st July 2009, I only have 9 periods of data to evaluate for my 31/03/2010 column. If I had divided by 12 (as per my “Moving Average No of Periods” slicer selection), I would have got a very wrong answer. Obviously, this is slightly contrived but it’s worthy of consideration.
And Now The Simple Bit
I can understand that the last two measures have taken some absorbing, especially working out when particular date fields should be used. For some light relief, the next measure won’t really tax you!
Sales_Moving_Average_Value =
IFERROR(
[Sales_Moving_Average_Total_Value]/[Sales_Moving_Average_Periods],
BLANK()
)
This is a simple division with a bit of error checking to avoid any nasties.
image
When It’s All Put Together
Since all of these measure are portable, I can create another Pivot Table on the same basis as the one above (with Sales_Moving_Average_Value given an alias of Moving Average), move some stuff around, add a measure for the actual sales value for the month (I won’t go into that now, but it’s a simple CALCULATE measure with some time intelligence) and I then reconfigure to look like the following:
image
I can then drive a simple line chart and apply a trend line to my “Actual” measure with the chart conveniently hiding my data grid that drives it.
image
As you can see, a trend on my Actual measure shows a steady decline. My Moving Average, however, shows a relatively stable, if not slightly improving trend. Seasonality of some other spikes are obviously therefore involved and the reality is that both measures probably need to be reviewed side by side.
For those of you reading this who are interested in seeing the workbook of this example, I’ll look to post this in a future post when I take this analysis one step further to cover the whole P&L. Sorry to make you wait.
I hope this helps you out BillD…
One More Point to Note
Those eagle eyed DAX pros out there have probably noticed that my IF functions only contain a calculation to evaluate when the logical test reaches a True answer. The reason is that the function assumes BLANK() when a false evaluation condition isn’t provided. I haven’t worked out if there’s any performance impact using this method on large datasets. It’s up to you what you chose to do and if anyone can convince me why coding the False condition as BLANK() is best practice, I will quickly change my habits!
This Post Has 6 Comments
1. Renato Lyke
Hi,
I tried the above moving average methodlogy it does not work from me, i have a columns with the Dept id , month and attritio. I am unable to get the sum of attritions. Any idea how could i do this, my data is in one table itself
2. Renato Lyke
Sales_Moving_Average_Total_Value
I tried this and i get blank values for this measure, below is the measure that i have written
=if(COUNTROWS(VALUES(MA_Dates[Month_End_Date]))=1,
If(VALUES(MA_Dates[Month_End_Date])=
DATEADD(LASTDATE(Dates[Date_Next_Month_Start]),
(MAX(MA_Trend_Periods[Trend_Periods])*-1),MONTH),
CALCULATE(‘Comp Beni'[Cascade_Value_All],
DATESBETWEEN(Dates[Dates],
DATEADD(LASTDATE(MA_Dates[Next_Month_Start_Date]),
MAX(MA_Function_Periods[Moving Average Periods])*-1,MONTH),
LASTDATE(VALUES(MA_Dates[Month_End_Date]))
)
)
)
)
Can you tell me where i am making the mistake, or do i need to create another measure for the no of periods
3. Frank
David, I’m not able to reproduce this with the dataset you stated.
If the fact table is linked to the data table my slicers, though not being linked at all, get filtered by ‘Calculating Sum of Sales for Last X Months’.
Do I have to use the different dataset structure of your Profit & Loss series of posts?
4. John Purdy
Would it be possible to upload the original workbook – would be greatly appreciated!
Many thanks
5. bb
Please post an example sample file! this looks cool!
6. Kirk Waldron
Has anyone been able to get this to work? I’ve followed the P&L series perfectly up to this point, but I just get blank values as soon as I add in the top half of this measure.
Leave a Comment or Question
|
__label__pos
| 0.519363 |
Source code
Revision control
Other Tools
Test Info:
<!DOCTYPE HTML>
<html>
<!--
-->
<head>
<title>Test key events for time control</title>
<script src="/tests/SimpleTest/SimpleTest.js"></script>
<script src="/tests/SimpleTest/EventUtils.js"></script>
<link rel="stylesheet" type="text/css" href="/tests/SimpleTest/test.css"/>
<meta charset="UTF-8">
</head>
<body>
<p id="display"></p>
<div id="content">
<input id="input" type="date">
<div id="host"></div>
</div>
<pre id="test">
<script type="application/javascript">
SimpleTest.waitForExplicitFinish();
// Turn off Spatial Navigation because it hijacks arrow keydown events:
SimpleTest.waitForFocus(function() {
SpecialPowers.pushPrefEnv({"set":[["snav.enabled", false]]}, function() {
test();
SimpleTest.finish();
});
});
var testData = [
/**
* keys: keys to send to the input element.
* initialVal: initial value set to the input element.
* expectedVal: expected value of the input element after sending the keys.
*/
{
// Type 11222016, default order is month, day, year.
keys: ["11222016"],
initialVal: "",
expectedVal: "2016-11-22"
},
{
// Type 3 in the month field will automatically advance to the day field,
// then type 5 in the day field will automatically advance to the year
// field.
keys: ["352016"],
initialVal: "",
expectedVal: "2016-03-05"
},
{
// Type 13 in the month field will set it to the maximum month, which is
// 12.
keys: ["13012016"],
initialVal: "",
expectedVal: "2016-12-01"
},
{
// Type 00 in the month field will set it to the minimum month, which is 1.
keys: ["00012016"],
initialVal: "",
expectedVal: "2016-01-01"
},
{
// Type 33 in the day field will set it to the maximum day, which is 31.
keys: ["12332016"],
initialVal: "",
expectedVal: "2016-12-31"
},
{
// Type 00 in the day field will set it to the minimum day, which is 1.
keys: ["12002016"],
initialVal: "",
expectedVal: "2016-12-01"
},
{
// Type 275769 in the year field will set it to the maximum year, which is
// 275760.
keys: ["0101275769"],
initialVal: "",
expectedVal: "275760-01-01"
},
{
// Type 000000 in the year field will set it to the minimum year, which is
// 0001.
keys: ["0101000000"],
initialVal: "",
expectedVal: "0001-01-01"
},
{
// Advance to year field and decrement.
keys: ["KEY_Tab", "KEY_Tab", "KEY_ArrowDown"],
initialVal: "2016-11-25",
expectedVal: "2015-11-25"
},
{
// Right key should do the same thing as TAB key.
keys: ["KEY_ArrowRight", "KEY_ArrowRight", "KEY_ArrowDown"],
initialVal: "2016-11-25",
expectedVal: "2015-11-25"
},
{
// Advance to day field then back to month field and decrement.
keys: ["KEY_ArrowRight", "KEY_ArrowLeft", "KEY_ArrowDown"],
initialVal: "2000-05-01",
expectedVal: "2000-04-01"
},
{
// Focus starts on the first field, month in this case, and increment.
keys: ["KEY_ArrowUp"],
initialVal: "2000-03-01",
expectedVal: "2000-04-01"
},
{
// Advance to day field and decrement.
keys: ["KEY_Tab", "KEY_ArrowDown"],
initialVal: "1234-01-01",
expectedVal: "1234-01-31"
},
{
// Advance to day field and increment.
keys: ["KEY_Tab", "KEY_ArrowUp"],
initialVal: "1234-01-01",
expectedVal: "1234-01-02"
},
{
// PageUp on month field increments month by 3.
keys: ["KEY_PageUp"],
initialVal: "1999-01-01",
expectedVal: "1999-04-01"
},
{
// PageDown on month field decrements month by 3.
keys: ["KEY_PageDown"],
initialVal: "1999-01-01",
expectedVal: "1999-10-01"
},
{
// PageUp on day field increments day by 7.
keys: ["KEY_Tab", "KEY_PageUp"],
initialVal: "1999-01-01",
expectedVal: "1999-01-08"
},
{
// PageDown on day field decrements day by 7.
keys: ["KEY_Tab", "KEY_PageDown"],
initialVal: "1999-01-01",
expectedVal: "1999-01-25"
},
{
// PageUp on year field increments year by 10.
keys: ["KEY_Tab", "KEY_Tab", "KEY_PageUp"],
initialVal: "1999-01-01",
expectedVal: "2009-01-01"
},
{
// PageDown on year field decrements year by 10.
keys: ["KEY_Tab", "KEY_Tab", "KEY_PageDown"],
initialVal: "1999-01-01",
expectedVal: "1989-01-01"
},
{
// Home key on month field sets it to the minimum month, which is 01.
keys: ["KEY_Home"],
initialVal: "2016-06-01",
expectedVal: "2016-01-01"
},
{
// End key on month field sets it to the maximum month, which is 12.
keys: ["KEY_End"],
initialVal: "2016-06-01",
expectedVal: "2016-12-01"
},
{
// Home key on day field sets it to the minimum day, which is 01.
keys: ["KEY_Tab", "KEY_Home"],
initialVal: "2016-01-10",
expectedVal: "2016-01-01"
},
{
// End key on day field sets it to the maximum day, which is 31.
keys: ["KEY_Tab", "KEY_End"],
initialVal: "2016-01-10",
expectedVal: "2016-01-31"
},
{
// Home key should have no effect on year field.
keys: ["KEY_Tab", "KEY_Tab", "KEY_Home"],
initialVal: "2016-01-01",
expectedVal: "2016-01-01"
},
{
// End key should have no effect on year field.
keys: ["KEY_Tab", "KEY_Tab", "KEY_End"],
initialVal: "2016-01-01",
expectedVal: "2016-01-01"
},
{
// Incomplete value maps to empty .value.
keys: ["1111"],
initialVal: "",
expectedVal: ""
},
];
function sendKeys(aKeys) {
for (let i = 0; i < aKeys.length; i++) {
let key = aKeys[i];
if (key.startsWith("KEY_")) {
synthesizeKey(key);
} else {
sendString(key);
}
}
}
function test() {
document.querySelector("#host").attachShadow({ mode: "open" }).innerHTML = `
<input type="date">
`;
function chromeListener(e) {
ok(false, "Picker should not be opened when dispatching untrusted click.");
}
for (const elem of [document.getElementById("input"), document.getElementById("host").shadowRoot.querySelector("input")]) {
for (let { keys, initialVal, expectedVal } of testData) {
elem.focus();
elem.value = initialVal;
sendKeys(keys);
is(elem.value, expectedVal,
"Test with " + keys + ", result should be " + expectedVal);
elem.value = "";
elem.blur();
}
SpecialPowers.addChromeEventListener("MozOpenDateTimePicker",
chromeListener);
elem.click();
SpecialPowers.removeChromeEventListener("MozOpenDateTimePicker",
chromeListener);
}
}
</script>
</pre>
</body>
</html>
|
__label__pos
| 0.643593 |
How to convert RAW to FAT32 without data loss?
Posted by Juno to Data Recovery on June 28th, 2017
I have recently discovered that I cannot access my USB flash drive. When I try to open it on Windows computer, I receive a pop-up message saying "This drive is not formatted, would you like to format it?" It shows as RAW when I check in Disk Management. Is there any way to convert RAW to FAT32 without data loss? Or can I convert RAW to FAT32 without formatting? Any help would be appreciated!
Convert RAW to FAT32 - a feasible way to fix RAW drive
FAT32 file system is widely used on various storage devices, such as USB flash drives, SD cards, memory cards and other external hard drives. But sometimes, your drive which is originally formatted with FAT32 file system may become RAW due to unexpected damage.
RAW file system is not a type of file system, like FAT32, exFAT, NTFS or HFS+ file system, but the state when the file system of a drive has been lost or can't be recognized by the Operating System.
No matter it is USB flash drive, memory card, or external hard drive that becomes RAW, you can no longer use it or access data inside it. The most common way to make a RAW drive useable again is to convert RAW to FAT32 by formatting it, but that will make you lose data saved on it. Is there any way to convert RAW to FAT32 without data loss?
Convert RAW to FAT32 using command line
Converting RAW to FAT32 using command line is to clean bad sectors and get access to the data stored in the drive.
Step 1: Go to the start menu, type in "cmd" in a search bar.
Step 2: Right-click cmd.exe and choose "Run as Administrator".
Step 3: Type diskpart and enter.
Step 4: Type rescan and enter.
Step 5: Restart your computer.
If the RAW drive has been converted to FAT32 successfully using cmd, then you can access the drive as normal, and no data lost after converting RAW to FAT32 using cmd. However, most of the time, cmd won't work as expected.
A more reliable way to convert RAW to FAT32 without data loss
To keep the security of your data, you are recommended to recover data from RAW drive before converting RAW to FAT32 with formatting.
As long as your RAW drive doesn't have hardware problem, you can perform data recovery by using iBoysoft Data Recovery, a professional data recovery software that can recover data from RAW USB flash drive, RAW memory card, RAW SD card, RAW external hard drive, formatted, corrupted, inaccessible hard drive, external hard drive, USB flash drive, memory card, recover deleted files even if emptied from Recycle Bin, etc. on Windows 10/8/7/Vista/XP and Windows Server 2016/2012/2008/2003.
Step 1: Recover data from RAW drive with iBoysoft Data Recovery
1. Download & install iBoysoft Data Recovery on the PC, and then connect the RAW drive to the PC.
2. Launch iBoysoft Data Recovery, and select the RAW drive.
Convert RAW to FAT32 without data loss
3. Click "Next" to search for all lost files on the RAW drive.
4. Preview the searching results, choose those you want, and click "Recover" button to get them back.
5. Check to ensure you have recovered all lost data.
Step 2: Convert RAW to FAT32
After you have successfully recovered data on the RAW drive, you can convert RAW to FAT32 by format without worrying about data loss.
1. Go to This PC, My Computer or Disk Management, find the RAW drive.
2. Right-click the RAW drive to choose "Format".
3. Set up file system as FAT32, and assign a volume label to it.
4. Click "Start" and the format operation will finish soon.
If you want to convert RAW drive to NTFS, please find detailed tutorial at how to convert RAW to NTFS without data loss.
Still have questions?
|
__label__pos
| 0.683029 |
QLE Home Quantum Logic Explorer < Previous Next >
Nearby theorems
Mirrors > Home > QLE Home > Th. List > gomaex3h10 GIF version
Theorem gomaex3h10 911
Description: Hypothesis for Godowski 6-var -> Mayet Example 3. (Contributed by NM, 29-Nov-1999.)
Hypotheses
Ref Expression
gomaex3h10.10 q = ((ef) →1 (bc) )
gomaex3h10.21 x = q
gomaex3h10.22 y = (ef)
Assertion
Ref Expression
gomaex3h10 xy
Proof of Theorem gomaex3h10
StepHypRef Expression
1 lea 160 . . 3 ((ef) ∩ ((ef) ∩ (bc) ) ) ≤ (ef)
2 gomaex3h10.10 . . . 4 q = ((ef) →1 (bc) )
3 df-i1 44 . . . . . 6 ((ef) →1 (bc) ) = ((ef) ∪ ((ef) ∩ (bc) ))
43ax-r4 37 . . . . 5 ((ef) →1 (bc) ) = ((ef) ∪ ((ef) ∩ (bc) ))
5 anor1 88 . . . . . 6 ((ef) ∩ ((ef) ∩ (bc) ) ) = ((ef) ∪ ((ef) ∩ (bc) ))
65ax-r1 35 . . . . 5 ((ef) ∪ ((ef) ∩ (bc) )) = ((ef) ∩ ((ef) ∩ (bc) ) )
74, 6ax-r2 36 . . . 4 ((ef) →1 (bc) ) = ((ef) ∩ ((ef) ∩ (bc) ) )
82, 7ax-r2 36 . . 3 q = ((ef) ∩ ((ef) ∩ (bc) ) )
9 ax-a1 30 . . . 4 (ef) = (ef)
109ax-r1 35 . . 3 (ef) = (ef)
111, 8, 10le3tr1 140 . 2 q ≤ (ef)
12 gomaex3h10.21 . 2 x = q
13 gomaex3h10.22 . . 3 y = (ef)
1413ax-r4 37 . 2 y = (ef)
1511, 12, 14le3tr1 140 1 xy
Colors of variables: term
Syntax hints: = wb 1 wle 2 wn 4 wo 6 wa 7 1 wi1 12
This theorem was proved from axioms: ax-a1 30 ax-a2 31 ax-a5 34 ax-r1 35 ax-r2 36 ax-r4 37 ax-r5 38
This theorem depends on definitions: df-a 40 df-i1 44 df-le1 130 df-le2 131
This theorem is referenced by: gomaex3lem5 918
Copyright terms: Public domain W3C validator
|
__label__pos
| 0.873446 |
Source code for nltk.chat.eliza
# Natural Language Toolkit: Eliza
#
# Copyright (C) 2001-2022 NLTK Project
# Authors: Steven Bird <[email protected]>
# Edward Loper <[email protected]>
# URL: <https://www.nltk.org/>
# For license information, see LICENSE.TXT
# Based on an Eliza implementation by Joe Strout <[email protected]>,
# Jeff Epler <[email protected]> and Jez Higgins <mailto:[email protected]>.
# a translation table used to convert things you say into things the
# computer says back, e.g. "I am" --> "you are"
from nltk.chat.util import Chat, reflections
# a table of response pairs, where each pair consists of a
# regular expression, and a list of possible responses,
# with group-macros labelled as %1, %2.
pairs = (
(
r"I need (.*)",
(
"Why do you need %1?",
"Would it really help you to get %1?",
"Are you sure you need %1?",
),
),
(
r"Why don\'t you (.*)",
(
"Do you really think I don't %1?",
"Perhaps eventually I will %1.",
"Do you really want me to %1?",
),
),
(
r"Why can\'t I (.*)",
(
"Do you think you should be able to %1?",
"If you could %1, what would you do?",
"I don't know -- why can't you %1?",
"Have you really tried?",
),
),
(
r"I can\'t (.*)",
(
"How do you know you can't %1?",
"Perhaps you could %1 if you tried.",
"What would it take for you to %1?",
),
),
(
r"I am (.*)",
(
"Did you come to me because you are %1?",
"How long have you been %1?",
"How do you feel about being %1?",
),
),
(
r"I\'m (.*)",
(
"How does being %1 make you feel?",
"Do you enjoy being %1?",
"Why do you tell me you're %1?",
"Why do you think you're %1?",
),
),
(
r"Are you (.*)",
(
"Why does it matter whether I am %1?",
"Would you prefer it if I were not %1?",
"Perhaps you believe I am %1.",
"I may be %1 -- what do you think?",
),
),
(
r"What (.*)",
(
"Why do you ask?",
"How would an answer to that help you?",
"What do you think?",
),
),
(
r"How (.*)",
(
"How do you suppose?",
"Perhaps you can answer your own question.",
"What is it you're really asking?",
),
),
(
r"Because (.*)",
(
"Is that the real reason?",
"What other reasons come to mind?",
"Does that reason apply to anything else?",
"If %1, what else must be true?",
),
),
(
r"(.*) sorry (.*)",
(
"There are many times when no apology is needed.",
"What feelings do you have when you apologize?",
),
),
(
r"Hello(.*)",
(
"Hello... I'm glad you could drop by today.",
"Hi there... how are you today?",
"Hello, how are you feeling today?",
),
),
(
r"I think (.*)",
("Do you doubt %1?", "Do you really think so?", "But you're not sure %1?"),
),
(
r"(.*) friend (.*)",
(
"Tell me more about your friends.",
"When you think of a friend, what comes to mind?",
"Why don't you tell me about a childhood friend?",
),
),
(r"Yes", ("You seem quite sure.", "OK, but can you elaborate a bit?")),
(
r"(.*) computer(.*)",
(
"Are you really talking about me?",
"Does it seem strange to talk to a computer?",
"How do computers make you feel?",
"Do you feel threatened by computers?",
),
),
(
r"Is it (.*)",
(
"Do you think it is %1?",
"Perhaps it's %1 -- what do you think?",
"If it were %1, what would you do?",
"It could well be that %1.",
),
),
(
r"It is (.*)",
(
"You seem very certain.",
"If I told you that it probably isn't %1, what would you feel?",
),
),
(
r"Can you (.*)",
(
"What makes you think I can't %1?",
"If I could %1, then what?",
"Why do you ask if I can %1?",
),
),
(
r"Can I (.*)",
(
"Perhaps you don't want to %1.",
"Do you want to be able to %1?",
"If you could %1, would you?",
),
),
(
r"You are (.*)",
(
"Why do you think I am %1?",
"Does it please you to think that I'm %1?",
"Perhaps you would like me to be %1.",
"Perhaps you're really talking about yourself?",
),
),
(
r"You\'re (.*)",
(
"Why do you say I am %1?",
"Why do you think I am %1?",
"Are we talking about you, or me?",
),
),
(
r"I don\'t (.*)",
("Don't you really %1?", "Why don't you %1?", "Do you want to %1?"),
),
(
r"I feel (.*)",
(
"Good, tell me more about these feelings.",
"Do you often feel %1?",
"When do you usually feel %1?",
"When you feel %1, what do you do?",
),
),
(
r"I have (.*)",
(
"Why do you tell me that you've %1?",
"Have you really %1?",
"Now that you have %1, what will you do next?",
),
),
(
r"I would (.*)",
(
"Could you explain why you would %1?",
"Why would you %1?",
"Who else knows that you would %1?",
),
),
(
r"Is there (.*)",
(
"Do you think there is %1?",
"It's likely that there is %1.",
"Would you like there to be %1?",
),
),
(
r"My (.*)",
(
"I see, your %1.",
"Why do you say that your %1?",
"When your %1, how do you feel?",
),
),
(
r"You (.*)",
(
"We should be discussing you, not me.",
"Why do you say that about me?",
"Why do you care whether I %1?",
),
),
(r"Why (.*)", ("Why don't you tell me the reason why %1?", "Why do you think %1?")),
(
r"I want (.*)",
(
"What would it mean to you if you got %1?",
"Why do you want %1?",
"What would you do if you got %1?",
"If you got %1, then what would you do?",
),
),
(
r"(.*) mother(.*)",
(
"Tell me more about your mother.",
"What was your relationship with your mother like?",
"How do you feel about your mother?",
"How does this relate to your feelings today?",
"Good family relations are important.",
),
),
(
r"(.*) father(.*)",
(
"Tell me more about your father.",
"How did your father make you feel?",
"How do you feel about your father?",
"Does your relationship with your father relate to your feelings today?",
"Do you have trouble showing affection with your family?",
),
),
(
r"(.*) child(.*)",
(
"Did you have close friends as a child?",
"What is your favorite childhood memory?",
"Do you remember any dreams or nightmares from childhood?",
"Did the other children sometimes tease you?",
"How do you think your childhood experiences relate to your feelings today?",
),
),
(
r"(.*)\?",
(
"Why do you ask that?",
"Please consider whether you can answer your own question.",
"Perhaps the answer lies within yourself?",
"Why don't you tell me?",
),
),
(
r"quit",
(
"Thank you for talking with me.",
"Good-bye.",
"Thank you, that will be $150. Have a good day!",
),
),
(
r"(.*)",
(
"Please tell me more.",
"Let's change focus a bit... Tell me about your family.",
"Can you elaborate on that?",
"Why do you say that %1?",
"I see.",
"Very interesting.",
"%1.",
"I see. And what does that tell you?",
"How does that make you feel?",
"How do you feel when you say that?",
),
),
)
eliza_chatbot = Chat(pairs, reflections)
[docs]def eliza_chat(): print("Therapist\n---------") print("Talk to the program by typing in plain English, using normal upper-") print('and lower-case letters and punctuation. Enter "quit" when done.') print("=" * 72) print("Hello. How are you feeling today?") eliza_chatbot.converse()
[docs]def demo(): eliza_chat()
if __name__ == "__main__": demo()
|
__label__pos
| 0.914925 |
Se riscontri problemi nella visualizzazione dei caratteri, clicca qui
Integrale
Da Wikipedia, l'enciclopedia libera.
bussola Disambiguazione – Se stai cercando altri significati, vedi Integrazione.
Integrale di ƒ(x).
Area sottesa dal grafico dalla funzione ƒ(x) nel dominio [a,b].
Si assume che l'area abbia valore negativo quando ƒ(x) è negativa.
In analisi matematica, l'integrale è un operatore che, nel caso di una funzione di una sola variabile, associa alla funzione l'area sottesa dal suo grafico entro un dato intervallo [a,b] nel dominio.
Grazie al teorema fondamentale del calcolo integrale si dimostra che l'integrale da a a x della funzione f corrisponde esattamente ad una primitiva di f(x).
L'integrazione risulta quindi l'operazione inversa a quella di derivazione.
Cenni storici[modifica | modifica wikitesto]
L'idea di base del concetto di integrale era nota ad Archimede di Siracusa, vissuto tra il 287 e il 212 a.C., ed era contenuta nel metodo da lui usato per il calcolo dell'area del cerchio o dell'area sottesa al segmento di un ramo di parabola, detto metodo di esaustione.
Nel XVII secolo alcuni matematici trovarono altri metodi per calcolare l'area sottesa al grafico di semplici funzioni, e tra di essi figurano ad esempio Fermat (1636) e Nicolaus Mercator (1668).
Nel diciassettesimo e diciottesimo secolo Newton, Leibniz, Johann Bernoulli scoprirono indipendentemente il teorema fondamentale del calcolo integrale, che ricondusse tale problema alla ricerca della primitiva di una funzione.
La definizione di integrale per le funzioni continue in tutto un intervallo, introdotta da Pietro Mengoli ed espressa con maggiore rigore da Cauchy, venne posta su base diversa da Riemann in modo da evitare il concetto di limite, e da comprendere classi più estese di funzioni. Nel 1875 Gaston Darboux mostrò che la definizione di Riemann può essere enunciata in maniera del tutto simile a quella di Cauchy, purché si intenda il concetto di limite in modo un po' più generale. Per questo motivo si parla di integrale di Cauchy-Riemann.
Notazione[modifica | modifica wikitesto]
Il simbolo \int che rappresenta l'integrale nella notazione matematica fu introdotto da Leibniz alla fine del XVII secolo. Il simbolo si basa sul carattere ſ (esse lunga), lettera che Leibniz utilizzava come iniziale della parola summa (ſumma), in latino somma, poiché questi considerava l'integrale come una somma infinita di addendi infinitesimali.
Il simbolo di integrale nella letteratura (da sinistra) inglese, tedesca e russa.
Esistono leggere differenze nella notazione dell'integrale nelle letterature di lingue diverse: il simbolo inglese è piegato verso destra, quello tedesco è dritto mentre la variante russa è piegata verso sinistra.
Introduzione euristica[modifica | modifica wikitesto]
Si consideri una funzione f:x \to f(x) reale di variabile reale definita su un intervallo chiuso e limitato dell'asse x delle ascisse. Quando si procede a calcolare l'integrale di f in un intervallo, f è detta funzione integranda e l'intervallo è detto intervallo di integrazione. Il valore dell'integrale della funzione calcolato nell'intervallo di integrazione è pari all'area (con segno) della figura che ha per bordi il grafico di f, l'asse delle ascisse e i segmenti verticali condotti dagli estremi dell'intervallo di integrazione agli estremi del grafico della funzione. Il numero reale che esprime tale area viene chiamato integrale della funzione esteso all'intervallo di integrazione. Con il termine "integrale" o "operatore integrale" si indica anche l'operazione stessa che associa l'area alla funzione.
Sono stati ideati diversi modi per calcolare in modo rigoroso il valore dell'integrale; a seconda della procedura adottata cambia anche l'insieme delle funzioni che è possibile misurare con un integrale. Un metodo è quello di "approssimare" il grafico della funzione con una linea costituita da uno o più segmenti, in modo che la figura si può scomporre in uno o più trapezi di cui è facile calcolare l'area: la somma algebrica delle aree di tutti i trapezi è allora l'integrale cercato. Un tale approccio è utilizzato per definire l'integrale di Riemann, in cui il calcolo dell'area viene eseguito suddividendo la figura in sottili strisce verticali assimilabili a rettangoli. Nello specifico, dividendo un intervallo di integrazione [a,b] in n intervalli del tipo [x_{s-1},x_{s}], con s=1,2,\dots,n, x_{0}=a e x_{n}=b, per ciascun intervallo si può considerare un punto t_s la cui immagine è f(t_{s}). Si costruisce allora il rettangolo che ha per base l'intervallo [x_{s-1},x_{s}] e per altezza f(t_{s}). L'area della figura costituita da tutti i rettangoli così costruiti è data dalla somma di Cauchy-Riemann:
\sum_{s=1}^{n} f(t_{s}) \,\delta x_s := \sum_{s=1}^{n} f(t_{s})(x_{s}-x_{s-1})
Se al diminuire dell'ampiezza degli intervalli \delta x_s i valori così ottenuti si concentrano in un intorno sempre più piccolo di un numero i , la funzione f è integrabile sull'intervallo [a,b] e i è il valore del suo integrale.
Definizione[modifica | modifica wikitesto]
La prima definizione rigorosa a essere stata formulata di integrale di una funzione su un intervallo è l'integrale di Riemann, formulato da Bernhard Riemann.
L'integrale di Lebesgue è una generalizzazione dell'integrale di Riemann, e per mostrarne la relazione è necessario utilizzare la classe delle funzioni continue a supporto compatto, per le quali l'integrale di Riemann esiste sempre. Siano f e g due funzioni continue a supporto compatto su \Bbb{R}^1. Si può definire la loro distanza nel seguente modo:[1]
d(f,g)= \int_{-\infty}^{+\infty}|f(t)-g(t)|dt
Munito della funzione distanza, lo spazio delle funzioni continue a supporto compatto è uno spazio metrico. Il completamento di tale spazio metrico è l'insieme delle funzioni integrabili secondo Lebesgue.[2][3]
In letteratura esistono diversi altri operatori di integrazione, tuttavia essi godono di minore diffusione rispetto a quelli di Riemann e Lebesgue.
Integrale di Riemann[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Integrale di Riemann.
Sia PC[a,b] l'insieme delle funzioni limitate e continue a tratti sull'intervallo [a,b], e tali da essere continue da destra:
\lim_{x\to y^+} f(x) = f(y)
La norma di tali funzioni può essere definita come:
\| f \|_\infty = \sup_{x \in [a,b]} |f(x)|
Sia (a, x_1, \dots ,x_{n-1},b) una partizione di [a,b] e \chi_i(x) la funzione indicatrice dell'i-esimo intervallo della partizione [x_{i-1}, x_i].
L'insieme S[a,b] delle possibili partizioni dell'intervallo [a,b] costituisce uno spazio vettoriale normato, con norma data da:
\| \sum_{i=1}^n c_i \chi_i(x) \|_\infty = \sup_{x \in [a,b]} |\sum_{i=1}^n c_i \chi_i(x)| = \max_{i=1,\dots,n}|c_i| \qquad c_i \in \R
L'insieme S[a,b] è denso in PC[a,b]. Si definisce la trasformazione lineare limitata I:S[a,b] \to \R nel seguente modo:[4]
I \left[ \sum_{i=1}^n c_i \chi_i(x) \right] = \sum_{i=1}^n c_i (x_i - x_{i-1})
Si dimostra che un operatore lineare limitato che mappa uno spazio vettoriale normato in uno spazio normato completo può essere sempre esteso in modo unico a un operatore lineare limitato che mappa il completamento dello spazio di partenza nel medesimo spazio di arrivo. Poiché i numeri reali costituiscono un insieme completo, l'operatore I \ può quindi essere esteso a un operatore \hat I che mappa il completamento \hat S[a,b] di S[a,b] \ in \R.
Si definisce integrale di Riemann l'operatore \hat I: \hat S[a,b] \to \R , e si indica con:[5]
\hat I(f):= \int_a^b f(x)dx
Integrale di Lebesgue[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: integrale di Lebesgue.
Sia \mu una misura su una sigma-algebra X di sottoinsiemi di un insieme E. Ad esempio, E può essere un n-spazio euclideo \R^n o un qualche suo sottoinsieme Lebesgue-misurabile, X la sigma-algebra di tutti i sottoinsiemi Lebesgue-misurabili di E e \mu la misura di Lebesgue.
Nella teoria di Lebesgue gli integrali sono limitati a una classe di funzioni, chiamate funzioni misurabili. Una funzione f è misurabile se la controimmagine di ogni insieme aperto I del codominio è in X, ossia se f^{-1}(I) è un insieme misurabile di X per ogni aperto I.[6] L'insieme delle funzioni misurabili è chiuso rispetto alle operazioni algebriche, e in particolare la classe è chiusa rispetto a vari tipi di limiti puntuali di successioni.
Una funzione semplice s è una combinazione lineare finita di funzioni indicatrici di insiemi misurabili.[7] Siano i numeri reali o complessi a_1, \dots a_n i valori assunti dalla funzione semplice s e sia:
A_i = \{x : s(x) = a_i \} \
Allora:[7]
s(x)=\sum_{i=1}^n a_i \chi_{A_i}(x)
dove \chi_{A_k}(x) è la funzione indicatrice relativa all'insieme A_i per ogni i.
L'integrale di Lebesgue di una funzione semplice è definito nel seguente modo:
\int_F s \,d \mu = \sum_{i=1}^n a_i \mu (A_i \cap F) \quad F \in X
Sia f una funzione misurabile non negativa su E a valori sulla retta reale estesa. L'integrale di Lebesgue di f sull'insieme F rispetto alla misura \mu è definito nel seguente modo:[8]
\int_F f\,d\mu := \sup \int_F s d \mu
dove l'estremo superiore è valutato considerando tutte le funzioni semplici s tali che 0 \le s \le f. Il valore dell'integrale è un numero nell'intervallo [0,\infty].
L'insieme delle funzioni tali che:
\int_E |f| d\mu < \infty
è detto insieme delle funzioni integrabili su E secondo Lebesgue rispetto alla misura \mu, o anche insieme delle funzioni sommabili, ed è denotato con L^1(\mu).
Anche l'integrale di Lebesgue è un funzionale lineare, e considerando una funzione definita su un intervallo I il teorema di Riesz permette di affermare che per ogni funzionale lineare \lambda su \C è associata una misura di Borel finita \mu su I tale che:[9]
\lambda f = \int_I f\,d\mu
In questo modo il valore del funzionale dipende con continuità dalla lunghezza dell'intervallo di integrazione.
Integrale in più variabili[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Integrale multiplo.
Sia x = (x_1, \dots ,x_k) un vettore nel campo reale. Un insieme del tipo:
I^k = \{x:\quad a_i \le x_i \le b_i \quad 1 \le i \le k \}
è detto k-cella. Sia f_k definita su I^k una funzione continua a valori reali, e si definisca:
f_{k-1}(x_1, \dots ,x_{k-1}) = \int_{a_k}^{b_k} f_{k}(x_1, \dots ,x_k)d x_k
Tale funzione è definita su I^{k-1} ed è a sua volta continua a causa della continuità di f_k. Iterando il procedimento si ottiene una classe di funzioni f_j continue su I^j che sono il risultato dell'integrale di f_{j+1} rispetto alla variabile x_{j+1} sull'intervallo [a_{j+1},b_{j+1}]. Dopo k volte si ottiene il numero:
f_0 = \int_{a_1}^{b_1} f_1(x_1)d x_1
Si tratta dell'integrale di f_k(x) su I^k rispetto a x, e non dipende dall'ordine con il quale vengono eseguite le k integrazioni.
In particolare, sia g(x) = f_1(x_1) \dots f_k(x_k). Allora si ha:
\int_{I^k} g(x)dx = \prod_{i=1}^{k} \int_{a_i}^{b_i} f_i(x_i)dx_i
Inoltre, sia f una funzione a supporto compatto e si ponga che I^k contenga il supporto di f. Allora è possibile scrivere:
\int_{I^k} f = \int_{\R^k} f
Nell'ambito della teoria dell'integrale di Lebesgue è possibile estendere questa definizione ad insiemi di funzioni più ampi.
Una proprietà di notevole importanza dell'integrale di una funzione in più variabili è la seguente. Siano:
Allora si ha:
\int_{\R^k} f(y)dy = \int_{\R^k} f(T(x))|J_T(x)|dx
L'integrando f(T(x))|J_T(x)| ha un supporto compatto grazie all'invertibilità di T, dovuta all'ipotesi J_T(x) \ne 0 per ogni x \in E che garantisce la continuità di T^{-1} in T(E) per il teorema della funzione inversa.
Integrale curvilineo[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Integrale di linea e Integrale di superficie.
Dato un campo scalare f : \mathbb{R}^n \to \mathbb{R}, si definisce l'integrale di linea (di prima specie) su una curva C, parametrizzata da \mathbf{r}(t), con t \in [a, b], come:[10]
\int_C f\ \operatorname ds = \int_a^b f(\mathbf{r}(t)) \|\mathbf{r}'(t)\| \,\mathrm{d}t
dove il termine \mathrm{d}s indica che l'integrale è effettuato su un'ascissa curvilinea. Se il dominio della funzione f è \mathbb{R}, l'integrale curvilineo si riduce al comune integrale di Riemann valutato nell'intervallo [r(a),r(b)]. Alla famiglia degli integrali di linea appartengono anche gli integrali ellittici di prima e di seconda specie, questi ultimi impiegati anche in ambito statistico per il calcolo della lunghezza della curva di Lorenz.
Similmente, per un campo vettoriale \mathbf{F} : \R^n \to \R^n, l'integrale di linea (di seconda specie) lungo una curva C, parametrizzata da \mathbf{r}(t) con t \in [a, b], è definito da:[11]
\int_C \mathbf{F} = \int_C \mathbf{F}(\mathbf{x})\cdot\,\mathrm{d}\mathbf{x} = \int_a^b \mathbf{F}(\mathbf{r}(t))\cdot\mathbf{r}'(t)\,\mathrm{d}t
Continuità e integrabilità[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Funzione integrabile.
Una condizione sufficiente ai fini dell'integrabilità è che una funzione definita su un intervallo chiuso e limitato sia continua: una funzione continua definita su un compatto, e quindi continua uniformemente per il teorema di Heine-Cantor, è integrabile.
Dimostrazione
Si suddivida l'intervallo \ [a,b] in n sottointervalli \ [x_{i-1},x_{i}] di uguale ampiezza:
\delta x = {{(b - a)} \over {n}}
Si scelga in ogni intervallo un punto \ t_{i} interno a \ [x_{i-1},x_{i}] e si definisce la somma integrale:
\ \sigma_{n}= \sum_{s=1}^{n} f(t_{s}) \,\delta x_{s}= {{b-a} \over {n}} \sum_{s=1}^{n} f(t_{s})
Ponendo \ M_i e \ m_i il massimo e il minimo di \ f in ogni intervallo \ [x_{i-1},x_{i}] si costruiscono quindi le somme:
\ S_{n}= \sum_{i=1}^{n} M_{i}(x_{i}-x_{i-1}) \qquad \ s_{n}= \sum_{i=1}^{n} m_{i}(x_{i}-x_{i-1})
All'aumentare di n, \ S_{n} diminuisce mentre \ s_{n} cresce. Essendo allora le due successioni monotone, esse ammettono un limite, il quale è finito. Sia ora:
\ m_{i} \le f(t_{i}) \le M_{i} \
Si ha che:
\ s_{n} \le \sigma_{n} \le S_{n} \
Per il teorema di esistenza del limite di successioni monotone risulta \ s_{n} \to s e \ S_{n} \to S, con \ s \le S. All'affinarsi della partizione di \ [a,b] risulta \ s = S, infatti è possibile fissare un \ \varepsilon piccolo a piacere e un numero di suddivisioni della partizione sufficientemente grande da far risultare:
\ S_{n}-s_{n}= \sum_{i=1}^{n}(M_{i}-m_{i})(x_{i}-x_{i-1})< \varepsilon
poiché per la continuità uniforme di f si ha:
\ M_{i}-m_{i} < {{ \varepsilon} \over {(b-a)}}
Ovvero, per un numero di n suddivisioni abbastanza elevato:
\ S_{n}-s_{n}= \sum_{i=1}^{n}(M_{i}-m_{i})(x_{i}-x_{i-1})< {{ \varepsilon} \over {(b-a)}} \sum_{i=1}^{n}(x_{i}-x_{i-1}) = \varepsilon
Per il teorema del confronto delle successioni si ha:
\ \lim_{n \to + \infty} (S_{n}-s_{n}) \le \varepsilon
ovvero:
\ S-s \le \varepsilon
da cui, data l'arbitrarietà del fattore \varepsilon, risulta che con il passaggio al limite la differenza tra le somme integrali massimante e minimante tende a zero. Da questo segue che:
\ S=s=I
In definitiva, essendo:
\ s_{n} \le \sigma_{n} \le S_{n}
per il teorema del confronto risulta \ \sigma_{n} \to I , da cui si deduce che se la funzione integranda è continua su un compatto \ [a,b] allora l'operazione di integrazione non dipende dalla scelta dei punti interni agli intervalli \ [x_{i-1},x_{i}], ovvero la funzione è integrabile.
Assoluta integrabilità[modifica | modifica wikitesto]
Una funzione f si dice assolutamente integrabile su un intervallo aperto del tipo [a,+\infty) se su tale intervallo è integrabile \left|f\right|. Non tutte le funzioni integrabili sono assolutamente integrabili: un esempio di funzione di questo tipo è \sin x / x. Viceversa, il teorema sull'esistenza degli integrali impropri all'infinito garantisce che una funzione f assolutamente integrabile sia integrabile su un intervallo del tipo [a,+\infty).
Dimostrazione
Infatti, una condizione necessaria e sufficiente affinché \int_{a}^{+\infty}\!f(x) \,\mathrm{d}x esista finito è che per ogni \varepsilon>0 esista \gamma >0 tale che per ogni x_1,x_2 <\gamma si abbia:
\left| \int_{x_1}^{x_2}f(x) \,\mathrm{d}x\right | <\varepsilon
Sostituendo in quest'ultima espressione f(x) con |f(x)| la condizione di esistenza diventa:
\left| \int_{x_1}^{x_2} \left | f(x) \,\mathrm{d}x \right | \right |<\varepsilon
da cui si ha:
\left | \int_a^b \!f(x) \,\mathrm{d}x \right | \le \int_a^b \left | f(x) \right | \,\mathrm{d}x
e quindi si può scrivere:
\int_{x_1}^{x_2} \left | f(x)\,\mathrm{d}x\right |<\varepsilon
Si ricava così che f(x) è integrabile.
Teorema di Vitali-Lebesgue[modifica | modifica wikitesto]
Il teorema di Vitali-Lebesgue è un teorema che consente di individuare le funzioni definite su uno spazio \R^n che siano integrabili secondo Riemann. Fu dimostrato nel 1907 dal matematico italiano Giuseppe Vitali contemporaneamente e indipendentemente con il matematico francese Henri Lebesgue.
Data una funzione su \R^n che sia limitata e nulla al di fuori di un sottoinsieme limitato di \R^n , essa è integrabile secondo Riemann se e solo se è trascurabile l'insieme dei suoi punti di discontinuità. Se si verifica questo, la funzione è anche integrabile secondo Lebesgue e i due integrali coincidono. Nel caso in cui n=1 l'enuciato assume la seguente forma: una funzione f limitata in un intervallo [a, b] è ivi integrabile secondo Riemann se e solo se l'insieme dei suoi punti di discontinuità è di misura nulla rispetto alla misura di Lebesgue.[12]
Calcolo differenziale e calcolo integrale[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Derivata.
Il teorema fondamentale del calcolo integrale, grazie agli studi e alle intuizioni di Leibniz, Newton, Torricelli e Barrow, stabilisce la relazione esistente tra calcolo differenziale e calcolo integrale. Esso è generalizzato dal fondamentale teorema di Stokes.
Funzione Integrale[modifica | modifica wikitesto]
Sia f:I\to \mathbb R una funzione definita su un intervallo I = [a,b]. Se la funzione è integrabile su ogni intervallo chiuso e limitato J contenuto in I, al variare dell'intervallo J varia il valore dell'integrale. Si ponga J = [x_0,x], dove x_0 è fissato e l'altro estremo x è variabile: l'integrale di f su J diventa allora una funzione di x. Tale funzione si dice funzione integrale di f o integrale di Torricelli, e si indica con:
F(x) = \int_{x_0}^{x} \!f(t) \,\mathrm{d}t
La variabile di integrazione t è detta variabile muta, e varia tra x_0 e x.
Funzioni Primitive[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Primitiva (matematica).
Il problema inverso a quello della derivazione consiste nella ricerca di tutte le funzioni la cui derivata sia uguale a una funzione assegnata. Questo problema è noto come ricerca delle primitive di una funzione. Nel caso in cui F sia una primitiva di f (cioè se F'(x) = f(x) ) allora, poiché la derivata di una funzione costante è nulla, anche una qualunque funzione del tipo:
G(x)=F(x)+c
che differisca da F(x) per una costante arbitraria c, risulta essere primitiva di f(x). Infatti:
G'(x)=F'(x)+0 =f(x)
Quindi, se una funzione f(x) ammette primitiva F(x) allora esiste un'intera classe di primitive del tipo:
G(x)=F(x)+c
Viceversa, tutte le primitive di f(x) sono della forma F(x)+c.
Teorema fondamentale del calcolo integrale[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Teorema fondamentale del calcolo integrale.
La prima parte del teorema è detta primo teorema fondamentale del calcolo, afferma che la funzione integrale (come sopra definita)
F(x)=\int_a^x f(t)dt \qquad a \le x \le b
è una primitiva della funzione di partenza. Cioè
F^\prime(x)=f(x)
La seconda parte del teorema è detta secondo teorema fondamentale del calcolo, e consente di calcolare l'integrale definito di una funzione attraverso una delle sue primitive.
\int_a^b f(x)dx=F(b)-F(a)
e tale relazione è detta formula fondamentale del calcolo integrale.
Integrale indefinito[modifica | modifica wikitesto]
La totalità delle primitive di una funzione f(x) si chiama integrale indefinito di tale funzione. Il simbolo:
\int \!f(x) \,\mathrm{d}x \
denota l'integrale indefinito della funzione f(x) rispetto a x. La funzione f(x) è detta anche in questo caso funzione integranda.
Ogni funzione continua in un intervallo ammette sempre integrale indefinito, ma non è detto che sia derivabile in ogni suo punto. Se f è una funzione definita in un intervallo nel quale ammette una primitiva \ F allora l'integrale indefinito di f è:
\ \int \!f(x) \,\mathrm{d}x= F(x)+c
dove \ c è una generica costante reale.
Proprietà degli integrali[modifica | modifica wikitesto]
Di seguito si riportano le proprietà principali dell'operatore integrale.
Linearità[modifica | modifica wikitesto]
Siano f e g due funzioni continue definite in un intervallo [a, b] e siano \alpha, \beta \in \mathbb{R}. Allora:
\int_a^b [\alpha f(x) + \beta g(x)] \,\mathrm{d}x = \alpha \int_a^b \!f(x) \,\mathrm{d}x + \beta \int_a^b \!g(x) \,\mathrm{d}x
Dimostrazione
Infatti, dalla definizione si ha che:
\ \int_a^b [\alpha f(x) + \beta g(x)] \,\mathrm{d}x = \lim_{n \to + \infty} {{b-a} \over {n}} \sum_{s=1}^{n} \,[\alpha f(t_{s}) + \beta g(t_{s})]
da cui:
\ \int_a^b [\alpha f(x) + \beta g(x)] \,\mathrm{d}x = \lim_{n \to + \infty} {{b-a} \over {n}} [\alpha \sum_{s=1}^{n} f(t_{s}) + \beta \sum_{s=1}^{n} g(t_{s})]
Dalla proprietà distributiva e dal fatto che il limite della somma coincide con la somma dei limiti si ha:
\ \int_a^b [\alpha f(x) + \beta g(x)] \,\mathrm{d}x = \alpha \lim_{n \to + \infty} {{b-a} \over {n}} \sum_{s=1}^{n} f(t_{s}) + \beta \lim_{n \to + \infty} {{b-a} \over {n}} \sum_{s=1}^{n} g(t_{s})
da cui discende la proprietà di linearità.
Additività[modifica | modifica wikitesto]
Sia f continua e definita in un intervallo [a, c] e sia b \in [a, c]. Allora:
\int_a^c \!f(x) \,\mathrm{d}x = \int_a^b \!f(x) \,\mathrm{d}x + \int_b^c \!f(x) \,\mathrm{d}x
Dimostrazione
Infatti, dalla definizione si ha che:
\ \int^{b}_{a} \!f(x)\,\mathrm{d}x= \lim_{n \to + \infty} {{b-a} \over {n}} \sum_{s=1}^{n} f(t_{s})
da cui se si ha \ c \in [a,b] esistono un valore \ h e un valore \ k la cui somma è \ n tali che per un affinamento sufficiente della partizione risulti:
\ {\frac{b-c}{h}} = {\frac{c-a}{k}} = \delta x
\ \int^{b}_{a} \!f(x) \,\mathrm{d}x= \lim_{n \to + \infty} {{b-a} \over {n}} \left( \sum_{s=1}^{n-k} f(t_{s}) + \sum_{s=h+1}^{n} f(t_{s}) \right)
Distribuendo la misura dell'intervallo:
\ \int^{b}_{a} \!f(x) \,\mathrm{d}x= \lim_{n \to + \infty} {\frac{b-a}{n}} \sum_{s=1}^{n-k} f(t_{s}) + \lim_{n \to + \infty} {{b-a} \over {n}} \sum_{s=h+1}^{n} f(t_{s})
in cui \ n-k=h. Considerando l'intervallo \ [c,b], l'indice \ s=h+1,...,n può essere riscritto come \ s=1,...,k in quanto \ t_{h+1} è il valore superiore del primo intervallo della partizione di \ [c,b]. Ricordando che:
\ {{b-c} \over {h}} = {{c-a} \over {k}} = \delta x
risulta allora:
\ \int^{b}_{a} \!f(x) \,\mathrm{d}x= \lim_{h \to + \infty} {{c-a} \over {h}} \sum_{s=1}^{h} f(t_{s}) + \lim_{k \to + \infty} {{b-c} \over {k}} \sum_{s=1}^{k} f\left(t(s)\right)
da cui discende la proprietà di additività.
Monotonia (o teorema del confronto)[modifica | modifica wikitesto]
Siano f e g due funzioni continue definite in un intervallo [a, b] e tali che f(x) \le g(x) in [a, b]. Allora:
\int_a^b \!f(x) \,\mathrm{d}x \le \int_a^b \!g(x) \,\mathrm{d}x
Dimostrazione
Infatti, se si verifica che f(x) \le g(x) nel compatto \ [a,b], effettuando una partizione di tale intervallo la disuguaglianza permane e moltiplicando da ambo i lati per il fattore b-a / n si ottiene:
{{b-a} \over {n}} f(t_{s}) \le {{b-a} \over {n}} g(t_{s})\
per ogni t_{s}. A questo punto se la relazione è valida per qualsiasi intervallo in cui è suddiviso il compatto vale la seguente:
\sum_{s=1}^{n} {{b-a} \over {n}} f(t_{s}) \le \sum_{s=1}^{n} {{b-a} \over {n}} g(t_{s})
Come conseguenza del corollario del teorema della permanenza del segno dei limiti, applicando il limite alle somme integrali di Riemann (ottenendo quindi l'integrale) la disuguaglianza resta immutata:
\lim_{n \to + \infty} \sum_{s=1}^{n} {{b-a} \over {n}} f(t_{s}) \le \lim_{n \to + \infty} \sum_{s=1}^{n} {{b-a} \over {n}} g(t_{s})
Da ciò deriva la proprietà di monotonia degli integrali.
Valore assoluto[modifica | modifica wikitesto]
Tale teorema si potrebbe considerare come un corollario del teorema del confronto. Se f è integrabile in un intervallo [a, b] si ha:
\left | \int_a^b \!f(x) \,\mathrm{d}x \right | \le \int_a^b \left | f(x) \right | \,\mathrm{d}x
Dimostrazione
Infatti, essendo valida la relazione - | f(t_{s}) | \le f(t_{s}) \le | f(t_{s}) | per ogni s, è possibile sommare membro a membro le varie componenti della relazione, ottenendo:
- \sum_{s=1}^{n} | f(t_{s}) | \le \sum_{s=1}^{n} f(t_{s}) \le \sum_{s=1}^{n} | f(t_{s}) |
Moltiplicando ogni membro per il fattore b-a / n e applicando il limite in modo da affinare gli intervalli della partizione si ottengono gli integrali:
- \lim_{n \to + \infty} {{b-a} \over {n}} \sum_{s=1}^{n} | f(t_{s}) | \le \lim_{n \to + \infty} {{b-a} \over {n}} \sum_{s=1}^{n} f(t_{s}) \le \lim_{n \to + \infty} {{b-a} \over {n}} \sum_{s=1}^{n} | f(t_{s}) |
- \int_a^b |f(x)| \,\mathrm{d}x \le \int_a^b \!f(x) \,\mathrm{d}x \le \int_a^b |f(x)| \,\mathrm{d}x
ove quest'ultima disuguaglianza può essere espressa in termini di valore assoluto come:
\left | \int_a^b \!f(x) \,\mathrm{d}x \right | \le \int_a^b \left | f(x) \right | \,\mathrm{d}x
la quale è la proprietà del valore assoluto degli integrali.
Teorema della media[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Teorema della media integrale e Teorema della media pesata.
Se f:[a,b]\to \mathbb R è continua allora esiste c \in (a,b) tale che:
{{1} \over {b-a}} \int_{a}^{b} \!f(x) \,\mathrm{d}x=f(c)
Integrale improprio[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Integrale improprio.
Un integrale improprio è un limite della forma:
\lim_{b\to\infty} \int_a^bf(x)\, \mathrm{d}x \qquad \lim_{a\to -\infty} \int_a^bf(x)\, \mathrm{d}x
oppure:
\lim_{c\to b^-} \int_a^cf(x)\, \mathrm{d}x \quad
\lim_{c\to a^+} \int_c^bf(x)\, \mathrm{d}x
Un integrale è improprio anche nel caso in cui la funzione integranda non è definita in uno o più punti interni del dominio di integrazione.
Metodi di integrazione[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Metodi di integrazione.
Il caso più semplice che può capitare è quando si riconosce la funzione integranda essere la derivata di una funzione nota \Phi. In casi più complessi esistono numerosi metodi per trovare la funzione primitiva. In particolare, tra le tecniche più diffuse per la semplificazione della funzione integranda vi sono le seguenti due:
• Se l'integranda è il prodotto di due funzioni, l'integrazione per parti riduce l'integrale alla somma di due integrali, di cui uno calcolabile immediatamente grazie alla formula fondamentale del calcolo integrale.
• Se l'integranda è trasformazione di una derivata nota attraverso una qualche funzione derivabile, l'integrazione per sostituzione riporta il calcolo all'integrale di quella derivata nota, modificato per un fattore di proporzionalità che dipende dalla trasformazione in gioco.
Stima di somme tramite integrale[modifica | modifica wikitesto]
Un metodo che consente di ottenere la stima asintotica di una somma è l'approssimazione di una serie tramite il suo integrale. Sia f: \R \to \R^+ una funzione monotona non decrescente. Allora per ogni a \in \N e ogni intero n \geq a si ha:
f(a) + \int_{a}^{n} \!f(x) \,\mathrm{d}x \leq \sum_{k =a}^n \!f(k) \leq \int_{a}^{n} f(x) \,\mathrm{d}x + f(n)
Infatti, se n = a la proprietà è banale, mentre se n \,>\, a si osserva che la funzione è integrabile in ogni intervallo chiuso e limitato di \R^+, e che per ogni k \in \N vale la relazione:
f(k)\leq \int_{k}^{k+1} f(x) \,\mathrm{d}x \leq f(k+1)
Sommando per k = a, a+1, ... n-1 si ottiene dalla prima disuguaglianza:
\sum_{k=a}^{n-1} f(k) \leq \sum_{k=a}^{n-1} \int_{k}^{k+1} f(x)\,\mathrm{d}x = \int_{a}^{n} f(x)\,\mathrm{d}x
mentre dalla seconda segue che:
\int_{a}^{n}f(x)\,\mathrm{d}x = \sum_{k=a}^{n-1}\int_{k}^{k+1} f(x)\,\mathrm{d}x \leq \sum_{k=a}^{n-1}f(k+1)
Aggiungendo ora f(a) e f(n) alle due somme precedenti si verifica la relazione.
Altri operatori di integrazione[modifica | modifica wikitesto]
Accanto agli integrali di Riemann e Lebesgue sono stati introdotti diversi altri operatori integrali. L'integrale di Riemann-Stieltjes è una generalizzazione dell'integrale di Riemann, ed è a sua volta generalizzato dall'integrale di Lebesgue-Stieltjes, che è anche un'estensione dell'integrale di Lebesgue.
Integrali di Denjoy, Perron, Henstock e altri[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Integrale di Denjoy, Integrale di Perron, Integrale di Henstock e [[]].
Sono state sviluppate altre definizioni di integrale, alcune delle quali sono dovute a Denjoy, Perron, Henstock e altri. I tre nominati condividono la validità del teorema fondamentale del calcolo integrale in una forma più generale rispetto alla trattazione di Riemann e Lebesgue.
Il primo in ordine cronologico a essere introdotto è stato l'integrale di Denjoy, definito per mezzo di una classe di funzioni che generalizza le funzioni assolutamente continue. Successivamente, solo due anni dopo, Perron ha dato la sua definizione con un metodo che ricorda le funzioni maggioranti e minoranti di Darboux. In ultimo, Ralph Henstock e (indipendentemente) Jaroslaw Kurzweil forniscono una terza definizione equivalente, detta anche integrale di gauge: essa sfrutta una leggera generalizzazione della definizione di Riemann, la cui semplicità rispetto alle altre due è probabilmente il motivo per cui questo integrale è più noto con il nome del matematico inglese che con quelli di Denjoy e Perron.
Integrale di Ito[modifica | modifica wikitesto]
Exquisite-kfind.png Lo stesso argomento in dettaglio: Calcolo di Itō.
L'integrale di Ito fa parte dell'analisi di Itō per i processi stocastici. In letteratura è introdotto utilizzando varie notazioni, una delle quali è la seguente:
\int_{0}^{T}X_{s} \,\mathrm{d}W_{s}
dove W_{s} è il processo di Wiener. L'integrale non è definito come un integrale ordinario, in quanto il processo di Wiener ha variazione totale infinita. In particolare, gli strumenti canonici di integrazione di funzioni continue non sono sufficienti. L'utilizzo principale di tale strumento matematico è nel calcolo differenziale di equazioni in cui sono coinvolti integrali stocastici, che inseriti in equazioni volte a modellizzare un particolare fenomeno (come il moto aleatorio delle particelle o il prezzo delle azioni nei mercati finanziari) rappresentano il contributo aleatorio sommabile (rumore) dell'evoluzione del fenomeno stesso.
Esempi di calcolo di un integrale[modifica | modifica wikitesto]
• In base alle informazioni fornite dal primo teorema fondamentale del calcolo integrale si può effettuare il calcolo di un integrale cercando una funzione la cui derivata coincide con la funzione da integrare. A questo scopo possono essere d'aiuto le tavole d'integrazione. Così per effettuare il calcolo dell'integrale della funzione vista in precedenza f(x)=mx attraverso la ricerca di una primitiva si ricorre alla formula:
\int mx^{\alpha} \,\mathrm{d}x= {{mx^{ \alpha + 1}} \over { \alpha + 1}} + c
la cui derivata coincide proprio con \ mx^{\alpha}. Prendendo in considerazione la (già esaminata precedentemente) funzione \ f(x)=mx e integrandola si ottiene:
\int mx \,\mathrm{d}x= {{mx^{2}} \over {2}} + c
Mentre per quanto concerne l'integrale definito nel compatto \ [a,b] si ha, in forza del secondo teorema fondamentale del calcolo integrale
\int_{a}^{b} mx \,\mathrm{d}x= \left[{{mb^{2}} \over {2}} + c\right] - \left[{{ma^{2}} \over {2}} + c\right] = m {{b^2-a^2} \over {2}}
esattamente (ovviamente) lo stesso risultato ottenuto in precedenza.
• Si supponga di fissare un sistema di riferimento cartesiano attraverso le rette ortogonali e orientate delle ascisse e delle ordinate. Si supponga ora che su tale sistema di assi sia definita una retta la cui equazione esplicita è f(x)=mx. :Si vuole calcolare l'integrale di tale retta definita sul compatto [a,b] situato sull'asse delle ascisse. Si supponga per semplicità che i punti a e b si trovino sul semiasse positivo delle ascisse e siano entrambi positivi. Allora l'area sottesa alla retta considerata nel compatto [a,b] è uguale all'area di un trapezio che "poggiato" in orizzontale sull'asse delle ascisse è caratterizzato da un'altezza uguale a b-a, base maggiore mb e base minore \ ma. L'area di tale figura è data, come noto dalla geometria elementare, dalla formula {{1} \over {2}}(mb+ma)(b-a), ovvero m{{b^2-a^2} \over {2}}.
Nell'ottica del calcolo dell'integrale di questa retta definita nel compatto \ [a,b] si effettua una partizione di tale intervallo, dividendolo in n parti uguali:
\ x_{0}=a; \quad x_{1}=a+{{b-a} \over {n}}; \quad x_{2}= a+2{{b-a} \over {n}};\quad \dots \,; \quad x_{n}= a+n{{b-a} \over {n}}=b
Nel generico intervallo [x_{i-1},x_{i}] si sceglie come punto arbitrario il punto più esterno x_{i} (ma andrebbe bene qualsiasi punto dell'intervallo), considerando la funzione \ y=mx nel generico punto x_{i} interno all'intervallo [x_{i-1},x_{i}]. Si avrà quindi f(x_{i})=m\left[a+i{{b-a} \over {n}}\right], e la somma integrale di Riemann diventa:
\ \sigma_{n} = \sum_{i=1}^{n} f(x_{i}){{b-a} \over {n}} = \sum_{i=1}^{n}m\left[a+i{{b-a} \over {n}}\right]{{b-a} \over {n}}=ma(b-a)+m\left({{b-a} \over {n}}\right)^2 \sum_{i=1}^{n}i
nella quale la progressione aritmetica \sum_{i=1}^{n}i= {{n(n+1)} \over {2}} restituisce un'espressione delle somme di Riemann uguale a:
\sigma_{n} = ma(b-a)+m(b-a)^2 {{n+1} \over {2n}}
Per passare dalle somme integrali di Riemann all'integrale vero e proprio è ora necessario, in conformità con la definizione di integrale, il passaggio al limite di suddette somme. Ovvero:
\int^{b}_{a} mx \,\mathrm{d}x = \lim_{n \to + \infty} \sigma_{n} = ma(b-a)+m(b-a)^2 \lim_{n \to + \infty} {{n+1} \over {2n}}
Calcolando il limite per \ n \to \infty , dato che \ {{n+1} \over {2n}} \to \ {{1} \over {2}}, si ottiene:
\int^{b}_{a} mx \,\mathrm{d}x = \lim_{n \to + \infty} \sigma_{n} = ma(b-a)+{{m(b-a)^2} \over {2}}
dalla quale, eseguendo la somma si ricava:
\int^{b}_{a} mx \,\mathrm{d}x = m{{b^2-a^2} \over {2}}
la quale è esattamente l'area del trapezio costruito dalla retta \ y=mx sul piano insieme all'asse delle ascisse.
Note[modifica | modifica wikitesto]
1. ^ W. Rudin, Pag. 68
2. ^ Si pone in tale contesto che due funzioni uguali quasi ovunque siano coincidenti.
3. ^ W. Rudin, Pag. 69
4. ^ Reed, Simon, Pag. 10
5. ^ Reed, Simon, Pag. 11
6. ^ W. Rudin, Pag. 8
7. ^ a b W. Rudin, Pag. 15
8. ^ W. Rudin, Pag. 19
9. ^ W. Rudin, Pag. 34
10. ^ L.D. Kudryavtsev, Encyclopedia of Mathematics - Curvilinear integral, encyclopediaofmath.org, 2012.
11. ^ Eric Weisstein, MathWorld - Line Integral, mathworld.wolfram.com, 2012.
12. ^ Gianluca Gorni - Il teorema di Vitali-Lebesgue
Bibliografia[modifica | modifica wikitesto]
• (EN) Walter Rudin, Real and Complex Analysis, Mladinska Knjiga, McGraw-Hill, 1970, ISBN 0-07-054234-1.
• Michael Reed, Barry Simon, Methods of Modern Mathematical Physics, Vol. 1: Functional Analysis, 2ª ed., San Diego, California, Academic press inc., 1980, ISBN 0-12-585050-6.
Voci correlate[modifica | modifica wikitesto]
Tavole di integrali[modifica | modifica wikitesto]
Integrali indefiniti[modifica | modifica wikitesto]
Altri progetti[modifica | modifica wikitesto]
Collegamenti esterni[modifica | modifica wikitesto]
matematica Portale Matematica: accedi alle voci di Wikipedia che trattano di matematica
|
__label__pos
| 0.950156 |
How to manage Robots in UiPath Orchestrator
0 votes
How to manage robots like editing or deleting them in UiPath Orchestrator?
Apr 5, 2019 in RPA by Priyanka
648 views
1 answer to this question.
0 votes
Hello Priyanka, you can perform following activities to manage a robot in UiPath orchestrator:
1. Converting a Standard Robot into a Floating Robot: Click the More Actions button and then Convert to Floating. Only Attended Standard Robots can be converted to Floating Robots.
2. Deleting a Robot: To delete a specific Robot, click the More Actions button and then Remove. Alternatively, select one or multiple Robots from the Robots page and click Remove. You can only delete Robots if they do not have pending or active jobs attached to them. Deleting a Robot also removes it from all associations it may be part of (environments, assets, processes, schedules).
3. Editing a Robot: Click the Edit button, make the necessary changes and click Update. The Description tab allows you to change Name, Username, Password (For Standard Robots only), Type (For Standard Robots only) and Description. Robot settings can be configured in the Settings tab:
• Logging Level
• Allow Development Logging
• Login To Console
• Resolution Width
• Resolution Height
• Resolution Depth
• Font Smoothing
answered Apr 5, 2019 by Pratibha
• 3,690 points
Related Questions In RPA
0 votes
1 answer
How to create a Floating Robot in UiPath Orchestrator?
Hello Rajan, to create a Floating Robot ...READ MORE
answered Apr 5, 2019 in RPA by Pratibha
• 3,690 points
1,101 views
0 votes
1 answer
What is an Environment in UiPath Orchestrator and how to create one?
Hi @Amisha, an Environment is a grouping ...READ MORE
answered Apr 5, 2019 in RPA by Anvi
• 14,130 points
4,646 views
0 votes
1 answer
How can I add or remove robots to/from an environment in an orchestrator?
Hey @Sanjay, to add a robot to an ...READ MORE
answered Apr 5, 2019 in RPA by Abha
• 28,000 points
1,549 views
0 votes
1 answer
How to add or delete a Machine in UiPath Orchestrator?
Hey @Raj, to add a standard machine, ...READ MORE
answered Apr 5, 2019 in RPA by Anvi
• 14,130 points
1,084 views
+2 votes
1 answer
How can I connect Robots to Orchestrator in UiPath?
Hey Rakshit, for connecting Robots to Orchestrator, ...READ MORE
answered Mar 11, 2019 in RPA by Abha
• 28,000 points
17,384 views
0 votes
1 answer
What are the types of robots you can create in an Orchestrator in UiPath?
Hi Urvashi, there are 2 types of ...READ MORE
answered Mar 12, 2019 in RPA by Abha
• 28,000 points
5,891 views
–1 vote
1 answer
Based on Use Cases what are the type of robots present in UiPath orchestrator?
Hello Gauri, UiPath Orchestrator has the capability of managing ...READ MORE
answered Apr 1, 2019 in RPA by Pratibha
• 3,690 points
2,207 views
0 votes
1 answer
Does UiPath supports Backward and Forward compatibility between Robots and Orchestrators versions?
Hey @Rohan, UiPath doesn't support Forward compatibility ...READ MORE
answered Apr 1, 2019 in RPA by Anvi
• 14,130 points
673 views
0 votes
1 answer
How to create and manage Users in UiPath Orchestrator?
Hello Rashmi, you can follow these steps ...READ MORE
answered Apr 4, 2019 in RPA by Pratibha
• 3,690 points
1,776 views
0 votes
1 answer
How to manage schedules in UiPath Orchestrator?
Hello Samuel, you can manage Schedules by ...READ MORE
answered Apr 8, 2019 in RPA by Pratibha
• 3,690 points
2,531 views
|
__label__pos
| 0.67869 |
Swift Tips & Tricks: Using Functions as Parameters in Swift
Published Aug 19, 2015Last updated Feb 10, 2017
Suppose you have a function that returns the double of any integer that it takes.
func doubleOfNumber(number: Int) -> Int {
return number * 2
}
And you want to double every Int in an array. You might want to do this:
[3, 1, 2, 4].map {
return doubleOfNumber($0)
}
// Returns [6, 2, 4, 8]
Now the map function in this case takes a closure that takes an Int and returns an Int. Just like our function doubleOfNumber.
We can actually use this function as a parameter instead of providing the closure ourselves. Like this:
[3, 1, 2, 4].map(doubleOfNumber)
// Returns [6, 2, 4, 8]
It gets better. Now suppose you want to convert these Ints to Strings:
[3, 1, 2, 4].map {
return String($0)
}
// Returns ["3", "1", "2", "4"]
Since String.init is itself a function, you can do this:
[3, 1, 2, 4].map(String.init)
You can even do this:
[3, 1, 2, 4].sort(<)
You can do very cool and actually practical things with Swift if you know where to look.
Discover and read more posts from Seyithan Teymur
get started
Enjoy this post?
Leave a like and comment for Seyithan
2
|
__label__pos
| 0.985473 |
Set theory – events of supremum and infimum of random variables
To let $ Y_n $ Be a sequence of real random variables and let $ c in mathbb {R} $ Be a constant. I know that $ inf_n Y_n $ and $ sup_n Y_n $ are also random variables. I am interested in representing events of these sup and inf random variables as associations or intersections of events from $ Y_n $, I intuitively came up with the following relations, but I'm not 100% sure whether they are true:
• $ { sup_n Y_n> c } = bigcup_n {Y_n> c } $
• $ { inf_n Y_n> c } = bigcap_n {Y_n> c } $
and
• $ { sup_n Y_n <c } = bigcap_n {Y_n <c } $
• $ { inf_n Y_n <c } = bigcup_n {Y_n <c } $
If so, in which cases can the $> $ or $ <$ to be relaxed in $ geq $ or $ leq $?
cv.complex variables – Regarding the supremum achieved through holomorphic functions
I read the article Characterizations of some domains about Carathéodory Extremals.
To let $ Omega $$ subset mathbb {C} ^ n $ be an openly connected sentence (domain). To let $ T ( Omega) $ denote the tangent bundle of $ Omega $, To let $ mathbb {D} $ be the open unit disk in the complex plane $ mathbb {C} $ , To let $ f: Omega longrightarrow mathbb {D} $ is to be called holomorphic with $ f in H ( Omega, mathbb {D}) $,
The Carathéodory Reiffen Pseudometric is for every point $ (z; xi) $ in the $ T ( Omega) $ is defined as
$ c_ Omega (z, xi) = sup {| f ^ * (z) xi |; f: Omega longrightarrow mathbb {D}, ; f in H ( Omega, mathbb {D}) }
= sup {| sum_ {i = 1} ^ {n} { frac {{ partial f}} {{ partial z_i}} (z) xi_i} |; f: Omega longrightarrow mathbb {D}, ; f in H ( Omega, mathbb {D}) } $
A function $ f_0 in H ( Omega, mathbb {D}) $ is said to be a universal caratheodory extremal function if $ | f_0 ^ * (z) xi | = c_ Omega (z, xi) $ for each $ (z; xi) $ in the $ T ( Omega) $,
I can find the universal set in the case of $ Omega = mathbb {D} $,
However, in the case of the Polydisc and the Euclidean sphere unit, the author of the above article (Example 2.2, i and ii) claimed / mentioned that they are the coordinate functions and compositions of projections onto the planes through the center Unity Euclidean sphere. Can anyone tell how these were calculated or provide a reference for them?
Properties of Supremum and Infimum
Part b) of the following is the problem I am trying to solve:
Enter image description here
I solved part a) and used the property in combination with the note offered in part b) to get the following:
$ sup {f} = sup (f-g + g) leq sup (f-g) + sup {g} $and then I would subtract $ sup {g} $:
$ sup {f} – sup {g} leq sup (f-g) $
But I don't know what to do next. I noticed that I could take the absolute value of both sides, but wonder if the statement will still be true (it seems to be wrong). If not, how can this be solved differently using the information provided?
Calculus – what is the supremum of $ f (z) $?
I know that Supremum is the lowest upper limit and Infimum is the highest lower limit. Even according to the supremum axiom, every non-empty set limited above has a supremum. Put
$$ f (z) = frac {| frac { sin (z) + sin (1) – sin (z + 1)} {2-2 cos (1)} | – frac {1} {2} sinh (| z |) – cosh (| z |) +1} { frac {1} { pi ^ 2-1} cosh ( pi | z |) – frac {1} { pi ^ 2-1} – cosh (| z |) +1}, ; ; z in mathbb {C} setminus {0 }. $$
I know that $ f (z) $ is limited above. Now my question is the supremum of $ f (z) $For this purpose I tried to get a limit, but I could not be successful. I appreciate any help on this matter.
real analysis – proving a supremum of a crowd
The question:
Find the supremum of the set $$ { { sqrt (4) {n ^ 4 + n ^ 3} -n: n in mathbb {N} }} $$
And then it tells us that we need to take large values of n to find an appropriate guess, to show that it is an upper bound, and then to prove that it is the smallest upper bound.
I followed the question and found a suitable guess for s = 1/4, showing that this is a good upper limit. My problem is to prove that there is no lower limit. At this point, my work looks like I'm trying to prove by contradiction:
Suppose h is another upper bound such that h <1/4.
$$ { sqrt (4) {n ^ 4 + n ^ 3} -n <h} $$
$$ n ^ 4 + n ^ 3 <(h + n) ^ 4 $$
But after the expansion I can only quit $ n ^ 4 $ This gives me a lot of unknown powers and a really complicated solution that I can do by hand
$$ n ^ 3 <h ^ 4 + 4h ^ 3n + 6h ^ 2n ^ 2 + 4hn ^ 3 $$
That said, I know that I have taken the wrong path, but I'm not sure which direction to prove it. I adapted a response from another book example, but that was only up to potency 2, so it was much easier to simplify this method.
o.operator algebras – infimum and supremum for normal semi-finished trace
A semi-final track $ tau $ on $ M _ {+} $ (for one of Neumann algebra $ M $) is considered normal if $ tau ( sup x_i) = sup tau (x_i) $ for a limited network of positive operators $ (x_i) _ {i in I} $,
Is it true that $ tau ( inf x_i) = inf tau (x_i) $ for a limited decreasing network of positive operators $ (x_i) _ {i in I} $?
If the trail was finite, it is easy enough to see that the above holds the increasing net $ (x_0 – x_i) _ {i in I} $ and the fact that $ tau (x_0) $ is finally. In case the same strategy works, we have to have the following result in the semi-finite case: if $ tau ( inf x_i) < infty $Then there is an index $ j in I $ so for all $ i j j $, we have $ tau (x_i) < infty $, This seems to be something that should be true. But I can not provide quick proof.
Many Thanks.
Essential Supremum and Supremum in anticipation
Suppose that $ {Z_i } _ {i in I} $ are a family of densities in $ L ^ 2 ( Omega, mathcal {F}, mathbb {P}) $, and $ X = L ^ 2 ( Omega, mathcal {F}, mathbb {P}) $, When is it true?
$$
sup_ {i in I} mathbb {E} left[Z_icdot
(X- mathbb{E}[Z_icdot X|mathcal{G}]) ^ 2
Law]=
sup_ {i in I} mathbb {E} left[Z_icdot
(X- operatorname{esssup}_{i in I}mathbb{E}[Z_icdot X|mathcal{G}]) ^ 2
Law]?
$$
I've seen similar questions, but I have not encountered anything like that here.
Expectation of the supremum of a sequence of random variables
To let $ Omega = [0,1]$, $ mathcal {F} = mathcal {B} (0,1) $, P = Lebesgue measure.
To let $ X_n (w) =
begin {cases}
0 quad frac {1} {n} <w leq 1 \
n-n ^ 2w quad 0 leq w leq frac {1} {n}
end {cases} $
The first part of the exercise proves that $ X_ {n} $ is a martingale I've made.
Now my problem is calculating $ E (sup_ {n geq 1} | X_n |) $ and I do not know how to go about it. I know from the solutions that it has to be the same $ + infty $,
Number theory – $ p $ -adic Supremum of the cyclotomic polynomial
To let $ p $ be a prime and $ Phi_ {n} (T) $ be that $ p ^ {n} $the cyclotomic polynomial, which we consider to be a function $ mathbb {C} _ {p} $, In Robert Pollack's newspaper "About the $ p $-adic L-function of a modular form with a supersingular prime number, it is said that
$$
text {sup} _ { vertz vert <r} vert Phi_ {n} (1 + z) vert_ {p} = frac {r} {p ^ {n-1} (p – 1) )}.
$$
I could not do that. My last attempt was as follows:
The polynomial $ Phi_ {n} (T) $ Has $ p ^ {n-1} (p-1) $ Given by roots $ zeta_ {n} – 1 $, from where $ zeta $ is a primitive $ p ^ {n} $the root of the unit. Each of these roots has a rating
$$
v_ {p} ( zeta_ {n} – 1) = frac {1} {p ^ {n-1} (p-1)}.
$$
If we write $ phi_ {n} (1 + T) = sum_ {i = 0} ^ {p ^ {n-1} (p-1)} a_ {i} T ^ {i} $, then for $ r geq 0 $ We can look at the growth module
$$
M _ { Phi} (r) = text {sup} _ {i} vert a_ {i} vert r.
$$
Since $ Phi_ {n} (1 + T) $ has all zeroes in the same radius $ r _ { Phi} $This is the only critical radius (radius) $ r $ in which $ vert Phi_ {n} (1 + r) vert_ {p} neq M _ { Phi} (r) $). To let $ nu = text {sup} {i: vert a_ {i} vert r _ { Phi} ^ {i} = M _ { Phi} (r _ { Phi}) } $ and $ mu = text {inf} {i: vert a_ {i} vert r _ { Phi} ^ {i} = M _ { Phi} (r _ { Phi}) } $ then it is a sentence that the number of zeros of $ Phi_ {n} (1 + T) $ in the closed disc $ overline {D (0, r _ { Phi})} $ (Center $ 0 $radius $ r _ { Phi} $) corresponds $ nu $ and the number of zeros on the opened disc $ D (0, r _ { Phi}) $ corresponds to $ mu $, There are all zeroes $ r _ { Phi} $ these forces
begin {align *}
nu & = p ^ {n-1} (p-1) \
mu & = 0.
end {align *}
So, if we think about it $ r> r _ { Phi} $ then $ r $ is a regular radius and so on
begin {align *}
vert Phi_ {n} (1 + r) vert_ {p} & = text {sup} vert a_ {i} vert r ^ {i} \
& = M _ { Phi} (r) \
& = vert a_ {p ^ {n – 1} (p – 1)} vert r ^ {p ^ {n – 1} (p – 1)} \
& = r ^ {p ^ {n – 1} (p – 1)}.
end {align *}
So for this choice of $ r $ we should have
$$
text {sup} _ { vertz vert <r} vert phi_ {n} (1 + T) vert_ {p} = r ^ {p ^ {n-1} (p-1)}
$$
That's not what Pollack gets. Where (and how wildly) did I go wrong?
|
__label__pos
| 1 |
SpeechRecognitionEngine.EmulateRecognize Method
Definition
Emulates input to the speech recognizer, using text in place of audio for synchronous speech recognition.
Overloads
EmulateRecognize(String)
Emulates input of a phrase to the speech recognizer, using text in place of audio for synchronous speech recognition.
EmulateRecognize(RecognizedWordUnit[], CompareOptions)
Emulates input of specific words to the speech recognizer, using text in place of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the words and the loaded speech recognition grammars.
EmulateRecognize(String, CompareOptions)
Emulates input of a phrase to the speech recognizer, using text in place of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the phrase and the loaded speech recognition grammars.
Remarks
These methods bypass the system audio input and provide text to the recognizer as String objects or as an array of RecognizedWordUnit objects. This can be helpful when you are testing or debugging an application or grammar. For example, you can use emulation to determine whether a word is in a grammar and what semantics are returned when the word is recognized. Use the SetInputToNull method to disable audio input to the speech recognition engine during emulation operations.
The speech recognizer raises the SpeechDetected, SpeechHypothesized, SpeechRecognitionRejected, and SpeechRecognized events as if the recognition operation is not emulated. The recognizer ignores new lines and extra white space and treats punctuation as literal input.
Note
The RecognitionResult object generated by the speech recognizer in response to emulated input has a value of null for its Audio property.
To emulate asynchronous recognition, use the EmulateRecognizeAsync method.
EmulateRecognize(String)
Emulates input of a phrase to the speech recognizer, using text in place of audio for synchronous speech recognition.
public:
System::Speech::Recognition::RecognitionResult ^ EmulateRecognize(System::String ^ inputText);
public System.Speech.Recognition.RecognitionResult EmulateRecognize (string inputText);
member this.EmulateRecognize : string -> System.Speech.Recognition.RecognitionResult
Public Function EmulateRecognize (inputText As String) As RecognitionResult
Parameters
inputText
String
The input for the recognition operation.
Returns
The result for the recognition operation, or null if the operation is not successful or the recognizer is not enabled.
Exceptions
The recognizer has no speech recognition grammars loaded.
inputText is null.
inputText is the empty string ("").
Examples
The code example below is part of a console application that demonstrates emulated input, the associated recognition results, and the associated events raised by the speech recognizer. The example generates the following output.
TestRecognize("Smith")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Smith; Text = Smith
...Recognition result text = Smith
TestRecognize("Jones")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Jones; Text = Jones
...Recognition result text = Jones
TestRecognize("Mister")...
SpeechDetected event raised.
SpeechHypothesized event raised.
Grammar = Smith; Text = mister
SpeechRecognitionRejected event raised.
Grammar = <not available>; Text =
...No recognition result.
TestRecognize("Mister Smith")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Smith; Text = mister Smith
...Recognition result text = mister Smith
press any key to exit...
using System;
using System.Globalization;
using System.Speech.Recognition;
namespace Sre_EmulateRecognize
{
class Program
{
static void Main(string[] args)
{
// Create an in-process speech recognizer for the en-US locale.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(new CultureInfo("en-US")))
{
// Load grammars.
recognizer.LoadGrammar(CreateNameGrammar("Smith"));
recognizer.LoadGrammar(CreateNameGrammar("Jones"));
// Disable audio input to the recognizer.
recognizer.SetInputToNull();
// Add handlers for events raised by the EmulateRecognize method.
recognizer.SpeechDetected +=
new EventHandler<SpeechDetectedEventArgs>(
SpeechDetectedHandler);
recognizer.SpeechHypothesized +=
new EventHandler<SpeechHypothesizedEventArgs>(
SpeechHypothesizedHandler);
recognizer.SpeechRecognitionRejected +=
new EventHandler<SpeechRecognitionRejectedEventArgs>(
SpeechRecognitionRejectedHandler);
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(
SpeechRecognizedHandler);
// Start four synchronous emulated recognition operations.
TestRecognize(recognizer, "Smith");
TestRecognize(recognizer, "Jones");
TestRecognize(recognizer, "Mister");
TestRecognize(recognizer, "Mister Smith");
}
Console.WriteLine("press any key to exit...");
Console.ReadKey(true);
}
// Create a simple name grammar.
// Set the grammar name to the surname.
private static Grammar CreateNameGrammar(string surname)
{
GrammarBuilder builder = new GrammarBuilder("mister", 0, 1);
builder.Append(surname);
Grammar nameGrammar = new Grammar(builder);
nameGrammar.Name = surname;
return nameGrammar;
}
// Send emulated input to the recognizer for synchronous recognition.
private static void TestRecognize(
SpeechRecognitionEngine recognizer, string input)
{
Console.WriteLine("TestRecognize(\"{0}\")...", input);
RecognitionResult result =
recognizer.EmulateRecognize(input,CompareOptions.IgnoreCase);
if (result != null)
{
Console.WriteLine("...Recognition result text = {0}",
result.Text ?? "<null>");
}
else
{
Console.WriteLine("...No recognition result.");
}
Console.WriteLine();
}
static void SpeechDetectedHandler(
object sender, SpeechDetectedEventArgs e)
{
Console.WriteLine(" SpeechDetected event raised.");
}
// Handle events.
static void SpeechHypothesizedHandler(
object sender, SpeechHypothesizedEventArgs e)
{
Console.WriteLine(" SpeechHypothesized event raised.");
if (e.Result != null)
{
Console.WriteLine(" Grammar = {0}; Text = {1}",
e.Result.Grammar.Name ?? "<none>", e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
static void SpeechRecognitionRejectedHandler(
object sender, SpeechRecognitionRejectedEventArgs e)
{
Console.WriteLine(" SpeechRecognitionRejected event raised.");
if (e.Result != null)
{
string grammarName;
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name ?? "<none>";
}
else
{
grammarName = "<not available>";
}
Console.WriteLine(" Grammar = {0}; Text = {1}",
grammarName, e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
static void SpeechRecognizedHandler(
object sender, SpeechRecognizedEventArgs e)
{
Console.WriteLine(" SpeechRecognized event raised.");
if (e.Result != null)
{
Console.WriteLine(" Grammar = {0}; Text = {1}",
e.Result.Grammar.Name ?? "<none>", e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
}
}
Remarks
The speech recognizer raises the SpeechDetected, SpeechHypothesized, SpeechRecognitionRejected, and SpeechRecognized events as if the recognition operation is not emulated.
The recognizers that ship with Vista and Windows 7 ignore case and character width when applying grammar rules to the input phrase. For more information about this type of comparison, see the CompareOptions enumeration values OrdinalIgnoreCase and IgnoreWidth. The recognizers also ignore new lines and extra white space and treat punctuation as literal input.
See also
EmulateRecognize(RecognizedWordUnit[], CompareOptions)
Emulates input of specific words to the speech recognizer, using text in place of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the words and the loaded speech recognition grammars.
public:
System::Speech::Recognition::RecognitionResult ^ EmulateRecognize(cli::array <System::Speech::Recognition::RecognizedWordUnit ^> ^ wordUnits, System::Globalization::CompareOptions compareOptions);
public System.Speech.Recognition.RecognitionResult EmulateRecognize (System.Speech.Recognition.RecognizedWordUnit[] wordUnits, System.Globalization.CompareOptions compareOptions);
member this.EmulateRecognize : System.Speech.Recognition.RecognizedWordUnit[] * System.Globalization.CompareOptions -> System.Speech.Recognition.RecognitionResult
Parameters
wordUnits
RecognizedWordUnit[]
An array of word units that contains the input for the recognition operation.
compareOptions
CompareOptions
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
Returns
The result for the recognition operation, or null if the operation is not successful or the recognizer is not enabled.
Exceptions
The recognizer has no speech recognition grammars loaded.
wordUnits is null.
wordUnits contains one or more null elements.
compareOptions contains the IgnoreNonSpace, IgnoreSymbols, or StringSort flag.
Remarks
The speech recognizer raises the SpeechDetected, SpeechHypothesized, SpeechRecognitionRejected, and SpeechRecognized events as if the recognition operation is not emulated.
The recognizer uses compareOptions when it applies grammar rules to the input phrase. The recognizers that ship with Vista and Windows 7 ignore case if the OrdinalIgnoreCase or IgnoreCase value is present. The recognizer always ignores the character width and never ignores the Kana type. The recognizer also ignores new lines and extra white space and treats punctuation as literal input. For more information about character width and Kana type, see the CompareOptions enumeration.
See also
EmulateRecognize(String, CompareOptions)
Emulates input of a phrase to the speech recognizer, using text in place of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the phrase and the loaded speech recognition grammars.
public:
System::Speech::Recognition::RecognitionResult ^ EmulateRecognize(System::String ^ inputText, System::Globalization::CompareOptions compareOptions);
public System.Speech.Recognition.RecognitionResult EmulateRecognize (string inputText, System.Globalization.CompareOptions compareOptions);
member this.EmulateRecognize : string * System.Globalization.CompareOptions -> System.Speech.Recognition.RecognitionResult
Parameters
inputText
String
The input phrase for the recognition operation.
compareOptions
CompareOptions
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
Returns
The result for the recognition operation, or null if the operation is not successful or the recognizer is not enabled.
Exceptions
The recognizer has no speech recognition grammars loaded.
inputText is null.
inputText is the empty string ("").
compareOptions contains the IgnoreNonSpace, IgnoreSymbols, or StringSort flag.
Remarks
The speech recognizer raises the SpeechDetected, SpeechHypothesized, SpeechRecognitionRejected, and SpeechRecognized events as if the recognition operation is not emulated.
The recognizer uses compareOptions when it applies grammar rules to the input phrase. The recognizers that ship with Vista and Windows 7 ignore case if the OrdinalIgnoreCase or IgnoreCase value is present. The recognizer always ignores the character width and never ignores the Kana type. The recognizer also ignores new lines and extra white space and treats punctuation as literal input. For more information about character width and Kana type, see the CompareOptions enumeration.
See also
Applies to
|
__label__pos
| 0.999379 |
How to write a formula in kotlin?
Member
by savanah , in category: Other , 24 days ago
How to write a formula in kotlin?
Facebook Twitter LinkedIn Telegram Whatsapp
1 answer
by cameron.mccullough , 23 days ago
@savanah
To write a formula in Kotlin, you can use the same syntax as in any other programming language. Here is an example of how you can write a simple formula to calculate the area of a rectangle:
1
2
3
4
5
6
7
8
9
10
fun calculateRectangleArea(width: Int, height: Int): Int {
return width * height
}
fun main() {
val width = 5
val height = 10
val area = calculateRectangleArea(width, height)
println("The area of the rectangle is $area")
}
In this example, we define a function calculateRectangleArea that takes the width and height of a rectangle as parameters and returns the area of the rectangle. We then call this function in the main function with the width and height values of 5 and 10, respectively, and print the result.
You can write more complex formulas by defining functions that take input parameters and return a calculated result based on those parameters.
|
__label__pos
| 1 |
Want to support development? Maybe get a nice server to do your own development on? Check out BuyVM or Vultr using our affiliate links! New users get $50 of free credit with Vultr, and BuyVM offers affordable servers with affordable, true DDoS filtering. You can also support us on Patreon.
Submitted on November 29, 2014 at 02:59 AM
Section 1 (Java)
public class ReflectionUtil
{
public static void runMethodsFromObjects(ArrayList<Object> objects, String methodName, Object... args) {
for (int i = 0; i < objects.size(); i++) {
runMethodFromObj(objects.get(i), methodName, args );
}
}
public static void runMethodsFromClasses(ArrayList<Class> classes, String methodName, Object... args) {
for (int i = 0; i < classes.size(); i++) {
runMethodFromClass(classes.get(i), methodName, args);
}
}
public static void runMethodFromObj(Object obj, String methodName, Object... args) {
try {
Class[] params = new Class[args.length];
for (int i = 0; i < args.length; i++) {
params[i] = args[i].getClass();
}
Method method = obj.getClass().getDeclaredMethod(methodName, params);
method.setAccessible(true);
method.invoke(obj, args);
} catch (NoSuchMethodException e) {
e.printStackTrace();
} catch (InvocationTargetException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
}
}
public static void runMethodFromClass(Class aClass, String methodName, Object... args) {
try {
Class[] params = new Class[args.length];
for (int i = 0; i < args.length; i++) {
params[i] = args[i].getClass();
}
Method method = aClass.getDeclaredMethod(methodName, params);
method.setAccessible(true);
method.invoke(aClass, args);
} catch (NoSuchMethodException e) {
e.printStackTrace();
} catch (InvocationTargetException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
}
}
public static void changeFieldFromObj(Object obj, String fieldName, Object value) {
Class aClass = obj.getClass();
Field field;
try {
field = aClass.getDeclaredField(fieldName);
field.setAccessible(true);
field.set(obj, value);
} catch (NoSuchFieldException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
}
}
public static Object getFieldValFromObj(Object obj, String fieldName) {
Class aClass = obj.getClass();
Field field;
try {
field = aClass.getDeclaredField(fieldName);
field.setAccessible(true);
return field.get(obj);
} catch (NoSuchFieldException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
}
return null;
}
}
|
__label__pos
| 1 |
51
I want to generate this:
enter image description here
With this data structure (ids are random, btw not sequential):
var tree = [
{ "id": 1, "name": "Me", "dob": "1988", "children": [4], "partners" : [2,3], root:true, level: 0, "parents": [5,6] },
{ "id": 2, "name": "Mistress 1", "dob": "1987", "children": [4], "partners" : [1], level: 0, "parents": [] },
{ "id": 3, "name": "Wife 1", "dob": "1988", "children": [5], "partners" : [1], level: 0, "parents": [] },
{ "id": 4, "name": "son 1", "dob": "", "children": [], "partners" : [], level: -1, "parents": [1,2] },
{ "id": 5, "name": "daughter 1", "dob": "", "children": [7], "partners" : [6], level: -1, "parents": [1,3] },
{ "id": 6, "name": "daughter 1s boyfriend", "dob": "", "children": [7], "partners" : [5], level: -1, "parents": [] },
{ "id": 7, "name": "son (bottom most)", "dob": "", "children": [], "partners" : [], level: -2, "parents": [5,6] },
{ "id": 8, "name": "jeff", "dob": "", "children": [1], "partners" : [9], level: 1, "parents": [10,11] },
{ "id": 9, "name": "maggie", "dob": "", "children": [1], "partners" : [8], level: 1, "parents": [] },
{ "id": 10, "name": "bob", "dob": "", "children": [8], "partners" : [11], level: 2, "parents": [12] },
{ "id": 11, "name": "mary", "dob": "", "children": [], "partners" : [10], level: 2, "parents": [] },
{ "id": 12, "name": "john", "dob": "", "children": [10], "partners" : [], level: 3, "parents": [] },
{ "id": 13, "name": "robert", "dob": "", "children": [9], "partners" : [], level: 2, "parents": [] },
{ "id": 14, "name": "jessie", "dob": "", "children": [9], "partners" : [], level: 2, "parents": [15,16] },
{ "id": 15, "name": "raymond", "dob": "", "children": [14], "partners" : [], level: 3, "parents": [] },
{ "id": 16, "name": "betty", "dob": "", "children": [14], "partners" : [], level: 3, "parents": [] },
];
To give a description of the data structure, the root/starting node (me) is defined. Any partner (wife,ex) is on the same level. Anything below becomes level -1, -2. Anything above is level 1, 2, etc. There are properties for parents, siblings, children and partners which define the ids for that particular field.
In my previous question, eh9 described how he would solve this. I am attempting to do this, but as I've found out, it isn't an easy task.
My first attempt is rendering this by levels from the top down. In this more simplistic attempt, I basically nest all of the people by levels and render this from the top down.
My second attempt is rendering this with one of the ancestor nodes using a depth-first search.
My main question is: How can I actually apply that answer to what I currently have? In my second attempt I'm trying to do a depth first traversal but how can I even begin to account for calculating the distances necessary to offset the grids to make it consistent with how I want to generate this?
Also, is my understanding/implementation of depth-first ideal, or can I traverse this differently?
The nodes obviously overlap in my second example since I have no offsetting/distance calculation code, but I'm lost as to actually figuring out how I can begin that.
Here is a description of the walk function I made, where I am attempting a depth first traversal:
// this is used to map nodes to what they have "traversed". So on the first call of "john", dict would internally store this:
// dict.getItems() = [{ '12': [10] }]
// this means john (id=10) has traversed bob (id=10) and the code makes it not traverse if its already been traversed.
var dict = new Dictionary;
walk( nested[0]['values'][0] ); // this calls walk on the first element in the "highest" level. in this case it's "john"
function walk( person, fromPersonId, callback ) {
// if a person hasn't been defined in the dict map, define them
if ( dict.get(person.id) == null ) {
dict.set(person.id, []);
if ( fromPersonId !== undefined || first ) {
var div = generateBlock ( person, {
// this offset code needs to be replaced
top: first ? 0 : parseInt( $(getNodeById( fromPersonId ).element).css('top'), 10 )+50,
left: first ? 0 : parseInt( $(getNodeById( fromPersonId ).element).css('left'), 10 )+50
});
//append this to the canvas
$(canvas).append(div);
person.element = div;
}
}
// if this is not the first instance, so if we're calling walk on another node, and if the parent node hasn't been defined, define it
if ( fromPersonId !== undefined ) {
if ( dict.get(fromPersonId) == null ) {
dict.set(fromPersonId, []);
}
// if the "caller" person hasn't been defined as traversing the current node, define them
// so on the first call of walk, fromPersonId is null
// it calls walk on the children and passes fromPersonId which is 12
// so this defines {12:[10]} since fromPersonId is 12 and person.id would be 10 (bob)
if ( dict.get(fromPersonId).indexOf(person.id) == -1 )
dict.get(fromPersonId).push( person.id );
}
console.log( person.name );
// list of properties which house ids of relationships
var iterable = ['partners', 'siblings', 'children', 'parents'];
iterable.forEach(function(property) {
if ( person[property] ) {
person[property].forEach(function(nodeId) {
// if this person hasnt been "traversed", walk through them
if ( dict.get(person.id).indexOf(nodeId) == -1 )
walk( getNodeById( nodeId ), person.id, function() {
dict.get(person.id).push( nodeId );
});
});
}
});
}
}
Requirements/restrictions:
1. This is for an editor and would be similar to familyecho.com. Pretty much any business rules not defined can be assumed through that.
2. In-family breeding isn't supported as it would make this way too complex. Don't worry about this.
3. Multiple partners are supported so this isn't as easy as a traditional "family tree" with just 2 parents and 1 child.
4. There is only one "root" node, which is just the starting node.
Notes: familyecho.com seems to "hide" a branch if there's lots of leaf nodes and there's a collision. May need to implement this.
11
• So your id ordering matches the level ordering? And you always get the person's level in the data? So to understand better, you just need to figure out how to render the branches and the x offsets of each person?
– Matt Way
Commented Dec 31, 2015 at 1:49
• This is for creating family trees, so on initial load you only begin with 1 node with a level of 0. Any node created has its level adjusted depending if parent or sibling or child. So if you add a child to "Me", child is -1. Sibling is 0 (same level ). Parent is 1. And id is just an increment/GUID which has nothing to do with level. This "tree" data assumes this data is stored and loaded, and I'll just re-render the family graph in a permalink page basically. Commented Dec 31, 2015 at 1:51
• Right. So in your first example, you order each layer by the id (which matches your rendering). Is this just for ease of explanation of what you want? i.e. if you added robert and jessie before bob and mary their ids would not be in layer rendering order?
– Matt Way
Commented Dec 31, 2015 at 1:56
• 1
It's just for ease of explanation, sorry. You can assume random order. Commented Dec 31, 2015 at 1:58
• 9
How does your mistress and your wife feel about you posting this?
– lkessler
Commented Dec 31, 2015 at 23:46
5 Answers 5
15
+500
Although an answer has been posted (and accepted), I thought there is no harm in posting what I worked upon this problem last night.
I approached this problem more from a novice point of view rather than working out with existing algorithms of graph/tree traversal.
My first attempt is rendering this by levels from the top down. In this more simplistic attempt, I basically nest all of the people by levels and render this from the top down.
This was exactly my first attempt as well. You could traverse the tree top-down, or bottom-up or starting from the root. As you have been inspired by a particular website, starting from the root seems to be a logical choice. However, I found the bottom-up approach to be simpler and easier to understand.
Here is a crude attempt:
Plotting the data:
1. We start from the bottom-most layer and work our way upwards. As mentioned in the question that you are trying to work it out via an editor, we can store all related properties in the object array as we build the tree.
We cache the levels and use that to walk up the tree:
// For all level starting from lowest one
levels.forEach(function(level) {
// Get all persons from this level
var startAt = data.filter(function(person) {
return person.level == level;
});
startAt.forEach(function(start) {
var person = getPerson(start.id);
// Plot each person in this level
plotNode(person, 'self');
// Plot partners
plotPartners(person);
// And plot the parents of this person walking up
plotParents(person);
});
});
Where, getPerson gets the object from the data based on its id.
1. As we are walking up, we plot the node itself, its parents (recursively) and its partners. Plotting partners is not really required, but I did it here just so that plotting the connectors could be easy. If a node has already been plotted, we simply skip that part.
This is how we plot the partners:
/* Plot partners for the current person */
function plotPartners(start) {
if (! start) { return; }
start.partners.forEach(function(partnerId) {
var partner = getPerson(partnerId);
// Plot node
plotNode(partner, 'partners', start);
// Plot partner connector
plotConnector(start, partner, 'partners');
});
}
And the parents recursively:
/* Plot parents walking up the tree */
function plotParents(start) {
if (! start) { return; }
start.parents.reduce(function(previousId, currentId) {
var previousParent = getPerson(previousId),
currentParent = getPerson(currentId);
// Plot node
plotNode(currentParent, 'parents', start, start.parents.length);
// Plot partner connector if multiple parents
if (previousParent) { plotConnector(previousParent, currentParent, 'partners'); }
// Plot parent connector
plotConnector(start, currentParent, 'parents');
// Recurse and plot parent by walking up the tree
plotParents(currentParent);
return currentId;
}, 0);
}
Where we use reduce to simplify plotting a connector between two parents as partners.
1. This is how we plot a node itself:
Where, we reuse the coordinates for each unique level via the findLevel utility function. We maintain a map of levels and check that to arrive at the top position. Rest is determined on the basis of relationships.
/* Plot a single node */
function plotNode() {
var person = arguments[0], relationType = arguments[1], relative = arguments[2], numberOfParents = arguments[3],
node = get(person.id), relativeNode, element = {}, thisLevel, exists
;
if (node) { return; }
node = createNodeElement(person);
// Get the current level
thisLevel = findLevel(person.level);
if (! thisLevel) {
thisLevel = { 'level': person.level, 'top': startTop };
levelMap.push(thisLevel);
}
// Depending on relation determine position to plot at relative to current person
if (relationType == 'self') {
node.style.left = startLeft + 'px';
node.style.top = thisLevel.top + 'px';
} else {
relativeNode = get(relative.id);
}
if (relationType == 'partners') {
// Plot to the right
node.style.left = (parseInt(relativeNode.style.left) + size + (gap * 2)) + 'px';
node.style.top = parseInt(relativeNode.style.top) + 'px';
}
if (relationType == 'children') {
// Plot below
node.style.left = (parseInt(relativeNode.style.left) - size) + 'px';
node.style.top = (parseInt(relativeNode.style.top) + size + gap) + 'px';
}
if (relationType == 'parents') {
// Plot above, if single parent plot directly above else plot with an offset to left
if (numberOfParents == 1) {
node.style.left = parseInt(relativeNode.style.left) + 'px';
node.style.top = (parseInt(relativeNode.style.top) - gap - size) + 'px';
} else {
node.style.left = (parseInt(relativeNode.style.left) - size) + 'px';
node.style.top = (parseInt(relativeNode.style.top) - gap - size) + 'px';
}
}
// Avoid collision moving to right
while (exists = detectCollision(node)) {
node.style.left = (exists.left + size + (gap * 2)) + 'px';
}
// Record level position
if (thisLevel.top > parseInt(node.style.top)) {
updateLevel(person.level, 'top', parseInt(node.style.top));
}
element.id = node.id; element.left = parseInt(node.style.left); element.top = parseInt(node.style.top);
elements.push(element);
// Add the node to the DOM tree
tree.appendChild(node);
}
Here to keep it simple, I used a very crude collision detection to move nodes to right when one already exists. In a much sophisticated app, this would move nodes dynamically to the left or right to maintain a horizontal balance.
Lastly, we add that node to the DOM.
1. Rest are all helper functions.
The important ones are:
function detectCollision(node) {
var element = elements.filter(function(elem) {
var left = parseInt(node.style.left);
return ((elem.left == left || (elem.left < left && left < (elem.left + size + gap))) && elem.top == parseInt(node.style.top));
});
return element.pop();
}
Above is a simple detection of collision taking into account the gap between the nodes.
And, to plot the connectors:
function plotConnector(source, destination, relation) {
var connector = document.createElement('div'), orientation, start, stop,
x1, y1, x2, y2, length, angle, transform
;
orientation = (relation == 'partners') ? 'h' : 'v';
connector.classList.add('asset');
connector.classList.add('connector');
connector.classList.add(orientation);
start = get(source.id); stop = get(destination.id);
if (relation == 'partners') {
x1 = parseInt(start.style.left) + size; y1 = parseInt(start.style.top) + (size/2);
x2 = parseInt(stop.style.left); y2 = parseInt(stop.style.top);
length = (x2 - x1) + 'px';
connector.style.width = length;
connector.style.left = x1 + 'px';
connector.style.top = y1 + 'px';
}
if (relation == 'parents') {
x1 = parseInt(start.style.left) + (size/2); y1 = parseInt(start.style.top);
x2 = parseInt(stop.style.left) + (size/2); y2 = parseInt(stop.style.top) + (size - 2);
length = Math.sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));
angle = Math.atan2(y2 - y1, x2 - x1) * 180 / Math.PI;
transform = 'rotate(' + angle + 'deg)';
connector.style.width = length + 'px';
connector.style.left = x1 + 'px';
connector.style.top = y1 + 'px';
connector.style.transform = transform;
}
tree.appendChild(connector);
}
I used two different connectors, a horizontal one to connect partners, and an angled one to connect parent-child relationships. This turned out to be a very tricky part for me, i.e. to plot inverted ] horizontal connectors. This is why to keep it simple, I simply rotated a div to make it look like an angled connector.
1. Once the entire tree is drawn/plotted, there could be nodes which go off-screen due to negative positions (because we are traversing bottom-up). To offset this, we simply check if there are any negative positions, and then shift the entire tree downwards.
Here is the complete code with a fiddle demo.
Fiddle Demo: http://jsfiddle.net/abhitalks/fvdw9xfq/embedded/result/
This is for an editor and would be similar to
Creating an editor:
The best way to test if it works, is to have an editor which allows you to create such trees/graphs on the fly and see if it plots successfully.
So, I also created a simple editor to test out. The code remains exactly the same, however has been re-factored a little to fit with the routines for the editor.
Fiddle Demo with Editor: http://jsfiddle.net/abhitalks/56whqh0w/embedded/result
Snippet Demo with Editor (view full-screen):
var sampleData = [
{ "id": 1, "name": "Me", "children": [4], "partners" : [2,3], root:true, level: 0, "parents": [8,9] },
{ "id": 2, "name": "Mistress", "children": [4], "partners" : [1], level: 0, "parents": [] },
{ "id": 3, "name": "Wife", "children": [5], "partners" : [1], level: 0, "parents": [] },
{ "id": 4, "name": "Son", "children": [], "partners" : [], level: -1, "parents": [1,2] },
{ "id": 5, "name": "Daughter", "children": [7], "partners" : [6], level: -1, "parents": [1,3] },
{ "id": 6, "name": "Boyfriend", "children": [7], "partners" : [5], level: -1, "parents": [] },
{ "id": 7, "name": "Son Last", "children": [], "partners" : [], level: -2, "parents": [5,6] },
{ "id": 8, "name": "Jeff", "children": [1], "partners" : [9], level: 1, "parents": [10,11] },
{ "id": 9, "name": "Maggie", "children": [1], "partners" : [8], level: 1, "parents": [13,14] },
{ "id": 10, "name": "Bob", "children": [8], "partners" : [11], level: 2, "parents": [12] },
{ "id": 11, "name": "Mary", "children": [], "partners" : [10], level: 2, "parents": [] },
{ "id": 12, "name": "John", "children": [10], "partners" : [], level: 3, "parents": [] },
{ "id": 13, "name": "Robert", "children": [9], "partners" : [14], level: 2, "parents": [] },
{ "id": 14, "name": "Jessie", "children": [9], "partners" : [13], level: 2, "parents": [15,16] },
{ "id": 15, "name": "Raymond", "children": [14], "partners" : [16], level: 3, "parents": [] },
{ "id": 16, "name": "Betty", "children": [14], "partners" : [15], level: 3, "parents": [] },
],
data = [], elements = [], levels = [], levelMap = [],
tree = document.getElementById('tree'), people = document.getElementById('people'), selectedNode,
startTop, startLeft, gap = 32, size = 64
;
/* Template object for person */
function Person(id) {
this.id = id ? id : '';
this.name = id ? id : '';
this.partners = [];
this.siblings = [];
this.parents = [];
this.children = [];
this.level = 0;
this.root = false;
}
/* Event listeners */
tree.addEventListener('click', function(e) {
if (e.target.classList.contains('node')) {
selectedNode = e.target;
select(selectedNode);
document.getElementById('title').textContent = selectedNode.textContent;
fillPeopleAtLevel();
}
});
document.getElementById('save').addEventListener('click', function() {
var pname = document.getElementById('pname').value;
if (pname.length > 0) {
data.forEach(function(person) {
if (person.id == selectedNode.id) {
person.name = pname;
selectedNode.textContent = pname;
document.getElementById('title').textContent = pname;
}
});
}
});
document.getElementById('add').addEventListener('click', function() {
addPerson(document.getElementById('relation').value);
plotTree();
});
document.getElementById('addExisting').addEventListener('click', function() {
attachParent();
plotTree();
});
document.getElementById('clear').addEventListener('click', startFresh);
document.getElementById('sample').addEventListener('click', function() {
data = sampleData.slice();
plotTree();
});
document.getElementById('download').addEventListener('click', function() {
if (data.length > 1) {
var download = JSON.stringify(data, null, 4);
var payload = "text/json;charset=utf-8," + encodeURIComponent(download);
var a = document.createElement('a');
a.href = 'data:' + payload;
a.download = 'data.json';
a.innerHTML = 'click to download';
var container = document.getElementById('downloadLink');
container.appendChild(a);
}
});
/* Initialize */
function appInit() {
// Approximate center of the div
startTop = parseInt((tree.clientHeight / 2) - (size / 2));
startLeft = parseInt((tree.clientWidth / 2) - (size / 2));
}
/* Start a fresh tree */
function startFresh() {
var start, downloadArea = document.getElementById('downloadLink');
// Reset Data Cache
data = [];
appInit();
while (downloadArea.hasChildNodes()) { downloadArea.removeChild(downloadArea.lastChild); }
// Add a root "me" person to start with
start = new Person('P01'); start.name = 'Me'; start.root = true;
data.push(start);
// Plot the tree
plotTree();
// Pre-select the root node
selectedNode = get('P01');
document.getElementById('title').textContent = selectedNode.textContent;
}
/* Plot entire tree from bottom-up */
function plotTree() {
// Reset other cache and DOM
elements = [], levels = [], levelMap = []
while (tree.hasChildNodes()) { tree.removeChild(tree.lastChild); }
// Get all the available levels from the data
data.forEach(function(elem) {
if (levels.indexOf(elem.level) === -1) { levels.push(elem.level); }
});
// Sort the levels in ascending order
levels.sort(function(a, b) { return a - b; });
// For all level starting from lowest one
levels.forEach(function(level) {
// Get all persons from this level
var startAt = data.filter(function(person) {
return person.level == level;
});
startAt.forEach(function(start) {
var person = getPerson(start.id);
// Plot each person in this level
plotNode(person, 'self');
// Plot partners
plotPartners(person);
// And plot the parents of this person walking up
plotParents(person);
});
});
// Adjust coordinates to keep the tree more or less in center
adjustNegatives();
}
/* Plot partners for the current person */
function plotPartners(start) {
if (! start) { return; }
start.partners.forEach(function(partnerId) {
var partner = getPerson(partnerId);
// Plot node
plotNode(partner, 'partners', start);
// Plot partner connector
plotConnector(start, partner, 'partners');
});
}
/* Plot parents walking up the tree */
function plotParents(start) {
if (! start) { return; }
start.parents.reduce(function(previousId, currentId) {
var previousParent = getPerson(previousId),
currentParent = getPerson(currentId);
// Plot node
plotNode(currentParent, 'parents', start, start.parents.length);
// Plot partner connector if multiple parents
if (previousParent) { plotConnector(previousParent, currentParent, 'partners'); }
// Plot parent connector
plotConnector(start, currentParent, 'parents');
// Recurse and plot parent by walking up the tree
plotParents(currentParent);
return currentId;
}, 0);
}
/* Plot a single node */
function plotNode() {
var person = arguments[0], relationType = arguments[1], relative = arguments[2], numberOfParents = arguments[3],
node = get(person.id), relativeNode, element = {}, thisLevel, exists
;
if (node) { return; }
node = createNodeElement(person);
// Get the current level
thisLevel = findLevel(person.level);
if (! thisLevel) {
thisLevel = { 'level': person.level, 'top': startTop };
levelMap.push(thisLevel);
}
// Depending on relation determine position to plot at relative to current person
if (relationType == 'self') {
node.style.left = startLeft + 'px';
node.style.top = thisLevel.top + 'px';
} else {
relativeNode = get(relative.id);
}
if (relationType == 'partners') {
// Plot to the right
node.style.left = (parseInt(relativeNode.style.left) + size + (gap * 2)) + 'px';
node.style.top = parseInt(relativeNode.style.top) + 'px';
}
if (relationType == 'children') {
// Plot below
node.style.left = (parseInt(relativeNode.style.left) - size) + 'px';
node.style.top = (parseInt(relativeNode.style.top) + size + gap) + 'px';
}
if (relationType == 'parents') {
// Plot above, if single parent plot directly above else plot with an offset to left
if (numberOfParents == 1) {
node.style.left = parseInt(relativeNode.style.left) + 'px';
node.style.top = (parseInt(relativeNode.style.top) - gap - size) + 'px';
} else {
node.style.left = (parseInt(relativeNode.style.left) - size) + 'px';
node.style.top = (parseInt(relativeNode.style.top) - gap - size) + 'px';
}
}
// Avoid collision moving to right
while (exists = detectCollision(node)) {
node.style.left = (exists.left + size + (gap * 2)) + 'px';
}
// Record level position
if (thisLevel.top > parseInt(node.style.top)) {
updateLevel(person.level, 'top', parseInt(node.style.top));
}
element.id = node.id; element.left = parseInt(node.style.left); element.top = parseInt(node.style.top);
elements.push(element);
// Add the node to the DOM tree
tree.appendChild(node);
}
/* Helper Functions */
function createNodeElement(person) {
var node = document.createElement('div');
node.id = person.id;
node.classList.add('node'); node.classList.add('asset');
node.textContent = person.name;
node.setAttribute('data-level', person.level);
return node;
}
function select(selectedNode) {
var allNodes = document.querySelectorAll('div.node');
[].forEach.call(allNodes, function(node) {
node.classList.remove('selected');
});
selectedNode.classList.add('selected');
}
function get(id) { return document.getElementById(id); }
function getPerson(id) {
var element = data.filter(function(elem) {
return elem.id == id;
});
return element.pop();
}
function fillPeopleAtLevel() {
if (!selectedNode) return;
var person = getPerson(selectedNode.id), level = (person.level + 1), persons, option;
while (people.hasChildNodes()) { people.removeChild(people.lastChild); }
data.forEach(function(elem) {
if (elem.level === level) {
option = document.createElement('option');
option.value = elem.id; option.textContent = elem.name;
people.appendChild(option);
}
});
return persons;
}
function attachParent() {
var parentId = people.value, thisId = selectedNode.id;
updatePerson(thisId, 'parents', parentId);
updatePerson(parentId, 'children', thisId);
}
function addPerson(relationType) {
var newId = 'P' + (data.length < 9 ? '0' + (data.length + 1) : data.length + 1),
newPerson = new Person(newId), thisPerson;
;
thisPerson = getPerson(selectedNode.id);
// Add relation between originating person and this person
updatePerson(thisPerson.id, relationType, newId);
switch (relationType) {
case 'children':
newPerson.parents.push(thisPerson.id);
newPerson.level = thisPerson.level - 1;
break;
case 'partners':
newPerson.partners.push(thisPerson.id);
newPerson.level = thisPerson.level;
break;
case 'siblings':
newPerson.siblings.push(thisPerson.id);
newPerson.level = thisPerson.level;
// Add relation for all other relatives of originating person
newPerson = addRelation(thisPerson.id, relationType, newPerson);
break;
case 'parents':
newPerson.children.push(thisPerson.id);
newPerson.level = thisPerson.level + 1;
break;
}
data.push(newPerson);
}
function updatePerson(id, key, value) {
data.forEach(function(person) {
if (person.id === id) {
if (person[key].constructor === Array) { person[key].push(value); }
else { person[key] = value; }
}
});
}
function addRelation(id, relationType, newPerson) {
data.forEach(function(person) {
if (person[relationType].indexOf(id) != -1) {
person[relationType].push(newPerson.id);
newPerson[relationType].push(person.id);
}
});
return newPerson;
}
function findLevel(level) {
var element = levelMap.filter(function(elem) {
return elem.level == level;
});
return element.pop();
}
function updateLevel(id, key, value) {
levelMap.forEach(function(level) {
if (level.level === id) {
level[key] = value;
}
});
}
function detectCollision(node) {
var element = elements.filter(function(elem) {
var left = parseInt(node.style.left);
return ((elem.left == left || (elem.left < left && left < (elem.left + size + gap))) && elem.top == parseInt(node.style.top));
});
return element.pop();
}
function adjustNegatives() {
var allNodes = document.querySelectorAll('div.asset'),
minTop = startTop, diff = 0;
for (var i=0; i < allNodes.length; i++) {
if (parseInt(allNodes[i].style.top) < minTop) { minTop = parseInt(allNodes[i].style.top); }
};
if (minTop < startTop) {
diff = Math.abs(minTop) + gap;
for (var i=0; i < allNodes.length; i++) {
allNodes[i].style.top = parseInt(allNodes[i].style.top) + diff + 'px';
};
}
}
function plotConnector(source, destination, relation) {
var connector = document.createElement('div'), orientation, start, stop,
x1, y1, x2, y2, length, angle, transform
;
orientation = (relation == 'partners') ? 'h' : 'v';
connector.classList.add('asset');
connector.classList.add('connector');
connector.classList.add(orientation);
start = get(source.id); stop = get(destination.id);
if (relation == 'partners') {
x1 = parseInt(start.style.left) + size; y1 = parseInt(start.style.top) + (size/2);
x2 = parseInt(stop.style.left); y2 = parseInt(stop.style.top);
length = (x2 - x1) + 'px';
connector.style.width = length;
connector.style.left = x1 + 'px';
connector.style.top = y1 + 'px';
}
if (relation == 'parents') {
x1 = parseInt(start.style.left) + (size/2); y1 = parseInt(start.style.top);
x2 = parseInt(stop.style.left) + (size/2); y2 = parseInt(stop.style.top) + (size - 2);
length = Math.sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));
angle = Math.atan2(y2 - y1, x2 - x1) * 180 / Math.PI;
transform = 'rotate(' + angle + 'deg)';
connector.style.width = length + 'px';
connector.style.left = x1 + 'px';
connector.style.top = y1 + 'px';
connector.style.transform = transform;
}
tree.appendChild(connector);
}
/* App Starts Here */
appInit();
startFresh();
* { box-sizing: border-box; padding: 0; margin: 0; }
html, body { width: 100vw; height: 100vh; overflow: hidden; font-family: sans-serif; font-size: 0.9em; }
#editor { float: left; width: 20vw; height: 100vh; overflow: hidden; overflow-y: scroll; border: 1px solid #ddd; }
#tree { float: left; width: 80vw; height: 100vh; overflow: auto; position: relative; }
h2 { text-align: center; margin: 12px; color: #bbb; }
fieldset { margin: 12px; padding: 8px 4px; border: 1px solid #bbb; }
legend { margin: 0px 8px; padding: 4px; }
button, input, select { padding: 4px; margin: 8px 0px; }
button { min-width: 64px; }
div.node {
width: 64px; height: 64px; line-height: 64px;
background-color: #339; color: #efefef;
font-family: sans-serif; font-size: 0.7em;
text-align: center; border-radius: 50%;
overflow: hidden; position: absolute; cursor: pointer;
}
div.connector { position: absolute; background-color: #333; z-index: -10; }
div.connector.h { height: 2px; background-color: #ddd; }
div.connector.v { height: 1px; background-color: #66d; -webkit-transform-origin: 0 100%; transform-origin: 0 100%; }
div[data-level='0'] { background-color: #933; }
div[data-level='1'], div[data-level='-1'] { background-color: #393; }
div[data-level='2'], div[data-level='-2'] { background-color: #333; }
div.node.selected { background-color: #efefef; color: #444; }
<div id="editor">
<h2 id="title">Me</h2>
<div>
<fieldset>
<legend>Change Name</legend>
<label>Name: <input id="pname" type="text" /></label>
<br /><button id="save">Ok</button>
</fieldset>
<fieldset>
<legend>Add Nodes</legend>
<label for="relation">Add: </label>
<select id="relation">
<option value="partners">Partner</option>
<option value="siblings">Sibling</option>
<option value="parents">Parent</option>
<option value="children">Child</option>
</select>
<button id="add">Ok</button><br />
<label for="relation">Add: </label>
<select id="people"></select>
<button id="addExisting">As Parent</button>
</fieldset>
<fieldset>
<legend>Misc</legend>
<button id="clear">Clear</button> <button id="sample">Load Sample</button>
<br/><button id="download">Download Data</button>
</fieldset>
<fieldset id="downloadLink"></fieldset>
</div>
</div>
<div id="tree"></div>
This is all a very crude attempt and beyond doubts an un-optimized one. What I particularly couldn't get done are:
1. Getting inverted [ or ] shaped horizontal connectors for parent-child relationships.
2. Getting the tree to be horizontally balanced. i.e. dynamically figuring out which is the heavier side and then shifting those nodes to the left.
3. Getting the parent to centrally align with respect to children especially multiple children. Currently, my attempt simply pushes everything to right in order.
Hope it helps. And posting it here so that I too can refer to it when needed.
6
• 1
This is amazing. Is there a way I can split answers? You and @manuBriot really put an amazing effort into this. Commented Jan 19, 2016 at 15:01
• 1
Can't believe this has just 5 votes. What a monster of an answer. I also did have a question in terms of a visual enhancement. How complex would it be to make the lines draw in a manner like this? i.imgur.com/cCTD5Rh.png Commented May 24, 2016 at 15:09
• 1
@mederomuraliev: That is a little tricky with the dynamic structure that we have. You may want to play with this and adapt it to your needs - thecodeplayer.com/walkthrough/css3-family-tree
– Abhitalks
Commented May 25, 2016 at 5:30
• I see. So just comment out / remove the JS line connectors and try this entirely through CSS, right? I'll take a look. Thanks! Commented May 25, 2016 at 15:17
• Also, in your example the siblings don't have a connecting line, right? I assume that's because it's easier that way otherwise you have to have extra logic for partners vs siblings and who is connected, and in what order, etc? In the future I might try that css3 solution but for the time being I need a connecting line for siblings. Commented May 26, 2016 at 20:57
11
As you show it, your tree data will not allow you to draw the diagram. You are in fact missing some information there:
• tree should really be an object (dictionary) mapping the id to the person's data. Otherwise, it is costly to go from the id, as given in children for instance, back to the child's data.
• there is duplicate information since children are associated with both parent. This actually lead to incorrect data in the example you sent ('daugher1' is a child of 'wife', but a parent of 'me', and 'mary' is presumably mother of 'jeff'; jessie is a partner of robert; so are raymond and betty)
In my attempt (https://jsfiddle.net/61q2ym7q/), I am therefore converting your tree to a graph, and then doing various stages of computation to achieve a layout.
This is inspired by the Sugiyama algorithm, but simplified, since that algorithm is very tricky to implement. Still, the various stages are:
• organize nodes into layers, using depth-first search. We do this in two steps by making sure that parents are always on a layer above their parent, and then trying to shorten the links when there is more than one layer between child and parent. This is the part where I am not using the exact Sugiyama algorithm, which uses a complex notion of cut points.
• then sort the nodes into each layer to minimize crossing of edges. I use the barycenter method for this
• finally, while preserving the order above, assign a specific x coordinate for each node, again using the barycenter method
There are lots of things that can be improved in this code (efficiency, by merging some of the loops for instance) and also in the final layout. But I tried to keep it simpler to make it easier to follow...
4
• Honestly I'm not sure which answer to choose between yours and @abhitalks, both of yours are amazing. I didn't think anyone would answer in the remaining 12 hours and should've not picked in advance. Since his didn't need to use d3.js to render his, and the lines are more of what I wanted, I may go with his answer. But I just want to give you credit for this answer too. If there was a way I can split credit/answers.. I would do that. Commented Jan 19, 2016 at 15:11
• I understand that :-) I must admit I don't know how bounties work, that was only the second one I answered. True, his answer is also very good. I was thinking there might be another approach to your initial question: it might be worth trying to create a higher-level graph where the meta-nodes are sets of one or two people from your data, related by a "partner" relationship, and with children in common. Doing the layout on that graph would be faster (since it is smaller), which would give a general layout, which we can refine back in the original graph.
– manuBriot
Commented Jan 19, 2016 at 15:28
• You mention d3.js in your comment: in a real application, you should really consider using that instead of pure html, since d3.js will provide animation almost for free when you add new persons and the layout needs to be recomputed. I have done a number of such genealogical diagrams as experiments (briot.github.io/geneapro/reports.html if you are interested).
– manuBriot
Commented Jan 19, 2016 at 15:32
• I did use d3.js in one of my attempts but I think it added a complexity layer and it might be easier (at least for me) to do this without d3.js. Like if I have to customize a calculation - due to my limited experience it'll be harder than messing with the direct calculations/algorithms. I really appreciate both of your answers. Commented Jan 19, 2016 at 18:07
10
+250
This is not that far from how the Sugiyama algorithm is used to layout class hierarchies, so you might want to take a look at papers that discuss that. There's a book chapter that covers Sugiyama and other hierarchical layout algorithms here.
I'd lay out the upper and lower halves of the tree independently. The thing to recognize about the upper half is that, in its fully populated form, it's all powers of two, so you have two parents, four grandparents, sixteen great-grandparents, etc.
As you do your depth-first search, tag each node with a) it's layer number and b) its collating order. Your data structure doesn't include gender and you really need this both for stylistic reasons and to figure out the collating order. Fortunately, all genealogy data includes gender.
We'll tag fathers with "A" and mothers with "B". Grandparents get another letter appended, so you get:
father jeff - A, layer 1
mother maggie - B, layer 1
paternal grandfather bob - AA, layer 2
paternal grandmother mary - AB, layer 2
paternal grandfather robert - BA, layer 2
paternal grandmother jessie - BB, layer 2
g-g-father john - AAA, layer 3
etc
Add the nodes to a list for each layer as you go. Sort each layer by their gender keys (unless using sorted lists). Start your layout at the layer with the highest number and lay out the nodes from left (AAAAA) to right (BBBBB), leaving gaps for any missing nodes. Stylistically, decide if you want to collapse around missing nodes and, if so, by how much (although I'd recommend implementing the simple-minded version first).
Lay out the layers in descending order. If there's no collapsing/adjusting of positions, lower layer positions can be computed directly. If you're adjusting, you'll need to refer to the parents position in the previous layer and center the child under that.
The lower half of the diagram can be done in a similar fashion, except that instead of sorting by gender, you'd probably want to sort by birth order and build up your keys from that e.g. eldest child of eldest child has key "11" while eldest child of the second eldest child is "21" etc.
You could do this with a graph library like cola.js, but you'd only be using a sliver of its functionality and some of the stylistic elements that you want (e.g. keep father & mother close together), would probably need to be added separately, so I suspect it's as easy to build from scratch unless you need other functionality from the library.
Speaking of style, it's customary to use a different line style for the parent connector (traditionally it's a double line). Also, you don't want the "Mistress" node laid out on top of the "me" / "wife" edge.
p.s. With fixed size nodes, you can get away with a simple grid for your coordinate system.
6
• I think what you said makes sense - basically just create a simple version with a limit of 2 parents and no mistresses/partners for now and build on top of that? And I agree a library such as cola/d3 would be harder because I really need a custom algorithm to do the offsetting/calculation part, so it's more trouble than it's worth. Commented Jan 4, 2016 at 19:50
• Going to actually re-read everything you posted when I get back on this and make sense of it all. Appreciate your time. Commented Jan 4, 2016 at 19:50
• Yeah but I'm still not understanding how it's just powers of two if there can be multiple partners or no partners, or did I misunderstand? Commented Jan 10, 2016 at 6:28
• And what do you mean about "Mistress" node not being laid out on top of "me" or "wife" edge? How would I do that then if the person has multiple partners? Again, I'm trying to pretty much simulate how familyecho.com does it. Commented Jan 12, 2016 at 14:53
• I didn't realize that you wanted to handle multiple parters for ancestors since you didn't mention it and the example data didn't include it. It complicates the layout, but can be done. I'll see if I can update my answer. My comment about the Mistress node was simply that other lines shouldn't run "underneath" it. This can be avoided by moving it to the other side of the husband or routing the edge that it overlaps differently.
– Tom Morris
Commented Jan 12, 2016 at 21:31
5
This is not trivial question and it involves large corpus of research in graph drawing algorithms.
The most prominent approach for this problem is through constraints satisfaction. But don't try to implement this on your own (unless you want to learn something new and spend months debugging)
I cannot recommend highly enough this library: cola.js (GitHub)
The particular example that may be very close to what you need is grid layout.
1
• 2
Please clarify your answer. May be my limitations, but I don't readily see an answer in your post. Commented Jan 4, 2016 at 1:48
5
From what I can see - without looking at the code you have there (for now) - you have a DAG (the visual representation is another matter, now I'm talking only about the data structure). Each node has a maximum of 2 incoming connections and no constraint on connections going to other nodes (one can have an arbitrary number of children but we have info on a maximum of 2 parents for each person/node).
That being said, there will be nodes that do not have parents (in this case are "john", "raymond", "betty", "mistress 1", "wife 1", and "daughter 1 boyfriend"). If you do a BFS on the graph starting from these nodes - which would compose level 0 - you get the nodes for each level. The correct level has to be updated on the fly though.
Regarding the visual representation, I'm no expert, but IMO it can be achieved via a grid (as in, a table-like one) view. Each row contains the nodes of a certain level. The elements in a given row are arranged based on the relationship with the other elements in the same row, in row x - 1, and in row x + 1.
To better explain the idea I guess it's better to put in some pseudo-code (not JS though as it's not my strength):
getItemsByLevel(Graph graph)
{
Node[,] result = new Node[,];
var orphans = graph.getOrphans();
var visiting = new HashMap();
var visited = new HashMap();
var queue = new Queue<Node>();
queue.pushAll(orphans);
while(!queue.isEmpty())
{
var currentNode = queue.pop();
if(currentNode.relatedNodes.areNotBeingVisited()) // the nodes that should be on the same level
{
// the level of the current node was not right
currentNode.level++;
queue.push(currentNode);
}
else
{
var children = currentNode.children;
foreach(var child in children)
{
child.level = currentNode.level + 1;
queue.push(child);
}
visited.insert(currentNode);
result[currentNode.level, lastOfRow] = currentNode;
}
}
return result;
}
In the end of the procedure you're going to have a matrix of nodes where row i contains the nodes of level i. You just have to represent them in the gridview (or whatever you choose as layout).
Let me know if anything's unclear.
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.982777 |
You're all set!
To start developing, please head over to our developer documentation.
Activate the Google Maps Geolocation API
To get you started we'll guide you through the Google Developers Console to do a few things first:
1. Create or choose a project
2. Activate the Google Maps Geolocation API
3. Create appropriate keys
Continue
The Google Maps Geolocation API
Overview
The Google Maps Geolocation API returns a location and accuracy radius based on information about cell towers and WiFi nodes that the mobile client can detect. This document describes the protocol used to send this data to the server and to return a response to the client.
Communication is done over HTTPS using POST. Both request and response are formatted as JSON, and the content type of both is application/json.
Before you start developing with the Geolocation API, review the authentication requirements (you need an API key) and the API usage limits.
Geolocation requests
Geolocation requests are sent using POST to the following URL:
https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY
You must specify a key in your request, included as the value of a key parameter. A key is your application's API key. This key identifies your application for purposes of quota management. Learn how to get a key.
Request body
The request body must be formatted as JSON. The following fields are supported, and all fields are optional:
• homeMobileCountryCode: The mobile country code (MCC) for the device's home network.
• homeMobileNetworkCode: The mobile network code (MNC) for the device's home network.
• radioType: The mobile radio type. Supported values are lte, gsm, cdma, and wcdma. While this field is optional, it should be included if a value is available, for more accurate results.
• carrier: The carrier name.
• considerIp: Specifies whether to fall back to IP geolocation if wifi and cell tower signals are not available. Defaults to true. Set considerIp to false to disable fall back.
• cellTowers: An array of cell tower objects. See the Cell Tower Objects section below.
• wifiAccessPoints: An array of WiFi access point objects. See the WiFi Access Point Objects section below.
{
"homeMobileCountryCode": 310,
"homeMobileNetworkCode": 410,
"radioType": "gsm",
"carrier": "Vodafone",
"considerIp": "true",
"cellTowers": [
// See the Cell Tower Objects section below.
],
"wifiAccessPoints": [
// See the WiFi Access Point Objects section below.
]
}
Cell tower objects
The request body's cellTowers array contains zero or more cell tower objects.
• cellId (required): Unique identifier of the cell. On GSM, this is the Cell ID (CID); CDMA networks use the Base Station ID (BID). WCDMA networks use the UTRAN/GERAN Cell Identity (UC-Id), which is a 32-bit value concatenating the Radio Network Controller (RNC) and Cell ID. Specifying only the 16-bit Cell ID value in WCDMA networks may return inaccurate results.
• locationAreaCode (required): The Location Area Code (LAC) for GSM and WCDMA networks. The Network ID (NID) for CDMA networks.
• mobileCountryCode (required): The cell tower's Mobile Country Code (MCC).
• mobileNetworkCode (required): The cell tower's Mobile Network Code. This is the MNC for GSM and WCDMA; CDMA uses the System ID (SID).
The following optional fields are not currently used, but may be included if values are available.
• age: The number of milliseconds since this cell was primary. If age is 0, the cellId represents a current measurement.
• signalStrength: Radio signal strength measured in dBm.
• timingAdvance: The timing advance value.
An example GSM cell tower object is below.
{
"cellTowers": [
{
"cellId": 42,
"locationAreaCode": 415,
"mobileCountryCode": 310,
"mobileNetworkCode": 410,
"age": 0,
"signalStrength": -60,
"timingAdvance": 15
}
]
}
An example WCDMA cell tower object is below.
{
"cellTowers": [
{
"cellId": 21532831,
"locationAreaCode": 2862,
"mobileCountryCode": 214,
"mobileNetworkCode": 7
}
]
}
WiFi access point objects
The request body's wifiAccessPoints array must contain two or more WiFi access point objects. macAddress is required; all other fields are optional.
• macAddress: (required) The MAC address of the WiFi node. It's typically called a BSS, BSSID or MAC address. Separators must be : (colon).
• signalStrength: The current signal strength measured in dBm.
• age: The number of milliseconds since this access point was detected.
• channel: The channel over which the client is communicating with the access point.
• signalToNoiseRatio: The current signal to noise ratio measured in dB.
An example WiFi access point object is shown below.
{
"macAddress": "00:25:9c:cf:1c:ac",
"signalStrength": -43,
"age": 0,
"channel": 11,
"signalToNoiseRatio": 0
}
Geolocation responses
A successful geolocation request will return a JSON-formatted response defining a location and radius.
• location: The user’s estimated latitude and longitude, in degrees. Contains one lat and one lng subfield.
• accuracy: The accuracy of the estimated location, in meters. This represents the radius of a circle around the given location.
{
"location": {
"lat": 51.0,
"lng": -0.1
},
"accuracy": 1200.4
}
Errors
In the case of an error, a standard format error response body will be returned and the HTTP status code will be set to an error status.
The response contains an object with a single error object with the following keys:
• code: This is the same as the HTTP status of the response.
• message: A short description of the error.
• errors: A list of errors which occurred. Each error contains an identifier for the type of error (the reason) and a short description (the message).
For example, sending invalid JSON will return the following error:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "parseError",
"message": "Parse Error",
}
],
"code": 400,
"message": "Parse Error"
}
}
Possible errors include:
Reason Domain HTTP Status Code Description
dailyLimitExceeded usageLimits 403 You have exceeded your daily limit.
keyInvalid usageLimits 400 Your API key is not valid for the Google Maps Geolocation API. Please ensure that you've included the entire key, and that you've either purchased the API or have enabled billing and activated the API to obtain the free quota.
userRateLimitExceeded usageLimits 403 You have exceeded the requests per second per user limit that you configured in the Google API Console. This limit should be configured to prevent a single or small group of users from exhausting your daily quota, while still allowing reasonable access to all users.
notFound geolocation 404 The request was valid, but no results were returned.
parseError global 400 The request body is not valid JSON. Refer to the Request Body section for details on each field.
Sample requests
If you'd like to try the Google Maps Geolocation API with sample data, save the following JSON to a file:
{
"considerIp": "false",
"wifiAccessPoints": [
{
"macAddress": "00:25:9c:cf:1c:ac",
"signalStrength": -43,
"signalToNoiseRatio": 0
},
{
"macAddress": "00:25:9c:cf:1c:ad",
"signalStrength": -55,
"signalToNoiseRatio": 0
}
]
}
You can then use cURL to make your request from the command line:
$ curl -d @your_filename.json -H "Content-Type: application/json" -i "https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY"
The response for the above Mac addresses looks like this:
{
"location": {
"lat": 33.3632256,
"lng": -117.0874871
},
"accuracy": 20
}
(See Get a Key if you don't have an API key.)
For additional testing, you can gather information from your Android device using the Google Places API for Android and the Android Location APIs, and from your iOS device using the Google Places API for iOS.
Frequently asked questions
Why am I getting a very large accuracy radius in my Geolocation response?
If your Geolocation response shows a very high value in the accuracy field, the service may be geolocating based on the request IP, instead of WiFi points or cell towers. This can happen if no cell towers or access points are valid or recognized.
To confirm that this is the issue, set considerIp to false in your request. If the response is a 404, you've confirmed that your wifiAccessPoints and cellTowers objects could not be geolocated.
Send feedback about...
Google Maps Geolocation API
Google Maps Geolocation API
Need help? Visit our support page.
|
__label__pos
| 0.907035 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
I'm developing a complex website that heavily leverages jQuery and a number of scripts. On load of the site, none of my scripting is working (though I can confirm that other scripts are functioning fine). I wouldn't be posting such a lame question here on SE except for one thing:
The instant I hit F12 to turn on developer tools so I can debug my issue, everything instantly works perfectly!
Worse, if I shut down the browser, start it up, turn on Dev Tools first and visit the site, everything works as expected.
So I can't even debug the darned problem because Dev Tools fixes it! What could Dev Tools be doing that makes things work? Does it change the UA (I do some jQuery.browser detection)? Does it do something to doctype?
EDIT
All my console logging is wrapped in the following wrapper utility function:
function log(msg){
if (console){
console.log(msg);
}
}
Any thoughts or suggestions I could try would be welcome. I'll post here if I find a solution.
share|improve this question
3
Have you tried in another browser ? – Nasreddine Nov 11 '11 at 14:29
9
Do you have console calls in js? – Oleksandr Skrypnyk Nov 11 '11 at 14:30
Works in all other browsers. Also IE8 and 7 – Tom Auger Nov 11 '11 at 14:47
2
Why are you throwing an error when console doesn't exist? That'll kill the code for IE. – zzzzBov Nov 11 '11 at 14:52
I'm throwing an error in the logError block because I only use that for Fatal Errors and if the developer can't see the error because the console is hidden, I want execution to stop. So that part is legit. I'm actually going to remove it from my example because it's not ever being invoked (given that there are no fatal errors in my script at the moment!) – Tom Auger Nov 11 '11 at 15:08
show 2 more comments
8 Answers
up vote 19 down vote accepted
I appreciate I'm pretty late to the party here, but I've got a solution for IE9 that's a little different.
(function() {
var temp_log = [];
function log() {
if (console && console.log) {
for (var i = 0; i < temp_log.length; i++) {
console.log.call(window, temp_log[i]);
}
console.log.call(window, arguments);
} else {
temp_log.push(arguments);
}
}
})();
Basically instead of console.log you use log. If console.log exists then it works as normal, otherwise it stores log entries in an array and outputs them on the next log where the console is available.
It would be nice if it pushed the data as soon as the console is available, but this is less expensive than setting up a custom setInterval listener.
Updated function (1 October 2012)
I've updated this script for my own use and thought I'd share it. It has a few worthy improvements:
• use console.log() like normal, i.e. no longer need to use non-standard log()
• supports multiple arguments, e.g. console.log('foo', 'bar')
• you can also use console.error, console.warn and console.info (though outputs them as console.log)
• script checks for native console every 1000ms and outputs the buffer when found
I think with these improvements, this has become a pretty solid shim for IE9. Check out the GitHub repo here.
if (!window.console) (function() {
var __console, Console;
Console = function() {
var check = setInterval(function() {
var f;
if (window.console && console.log && !console.__buffer) {
clearInterval(check);
f = (Function.prototype.bind) ? Function.prototype.bind.call(console.log, console) : console.log;
for (var i = 0; i < __console.__buffer.length; i++) f.apply(console, __console.__buffer[i]);
}
}, 1000);
function log() {
this.__buffer.push(arguments);
}
this.log = log;
this.error = log;
this.warn = log;
this.info = log;
this.__buffer = [];
};
__console = window.console = new Console();
})();
share|improve this answer
Nice, encapsulated solution +1 – Tom Auger Feb 10 '12 at 16:27
I can't believe I haven't accepted this answer until now. Thank you! – Tom Auger Dec 15 '12 at 22:26
@Liam how about a nuget package for the shim ? – Dhana Krishnasamy Oct 10 '13 at 15:41
add comment
You have console calls, in IE these will fail if the dev tools are not open. A simple fix is to wrap any console calls in a function like:
function log(msg) {
if(console)
console.log(msg);
}
share|improve this answer
I thought I had done just such a wrapper. But I will double check. Good suggestion. – Tom Auger Nov 11 '11 at 14:47
Actually, you were right on the money that it was a console.log issue. However, your wrapper isn't a sufficient check in IE9 (though it has always worked in IE7 and 8). See my answer. – Tom Auger Nov 11 '11 at 15:02
this worked for me but I had to use window.console as just console did not work. – Rick Glos Sep 25 '12 at 15:58
add comment
I have hacked it the following way
<script type="text/javascript">
(function () {
if (typeof console == "undefined") {
console = {
log : function () {}
}
}
})();
</script>
And this is the first script element in the .
share|improve this answer
add comment
I find it much more convenient to simply use console && console.log('foo', 'bar', 'baz') rather than use a wrapper function.
The code you provided:
function logError(msg){
if (console) {
console.log(msg);
} else {
throw new Error(msg);
}
}
Will produce an error for IE when dev tools are closed because console will be undefined.
share|improve this answer
Your above code should also fail in IE9, though it works in IE8 and 7 as advertised. – Tom Auger Nov 11 '11 at 15:06
add comment
The console.log wrapper that I used was not sufficient to detect the console in IE9. Here's the wrapper that works from a related question on SE:
function logError(msg){
try {
console.log(msg);
} catch (error) {
throw new Error(msg);
}
}
function log(msg){
try {
console.log(msg);
} catch (error) { }
}
A proper test for the availability of the console object would be: if (typeof console === "undefined" || typeof console.log === "undefined")
share|improve this answer
Annoyingly (feature detection should win out every time), this is the only solution that worked for me with a downgraded IE9 in forced-non compatibility mode. – AndyC Jun 20 at 11:01
add comment
Most of the other solutions should work great, but here's a short one liner if you don't care about catching log messages if the console is not available.
// Stub hack to prevent errors in IE
console = window.console || { log: function() {} };
This lets you still use the native console.log function directly still instead of wrapping it with anything or having a conditional each time.
share|improve this answer
add comment
If you have multiple parallel script files, maybe the files are being loaded/executed in a different order with developer tools on/off.
share|improve this answer
Interesting thought. I'll check that. – Tom Auger Nov 11 '11 at 14:47
add comment
I have run into this issue many times. Basically with variables we do this to check if they are valid
var somevar;
if (somevar)
//do code
this works because somevar will resolve to undefined. But if your checking a window property for example. window.console.
if (console) <---- this throws an exception
You cannot do the same check. The browser treats it differently. Basically only doing this
if (window.console) <---- will NOT throw an exception if undefined
//some code
this will work the same as the first example. So you need to change your code to
function log(msg){
if (window.console){
console.log(msg);
}
}
share|improve this answer
add comment
protected by g.d.d.c Jan 14 at 18:06
Thank you for your interest in this question. Because it has attracted low-quality answers, posting an answer now requires 10 reputation on this site.
Would you like to answer one of these unanswered questions instead?
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.817801 |
answersLogoWhite
0
Best Answer
The answer depends on what information you are given. If you know the circumference, then divide by Pi and that is the diameter. Half of the diameter is the radius. If you don't know anything about the circle, you would need to measure and find the diameter.
User Avatar
Wiki User
2009-05-16 04:08:18
This answer is:
User Avatar
Study guides
Algebra
20 cards
A polynomial of degree zero is a constant term
The grouping method of factoring can still be used when only some of the terms share a common factor A True B False
The sum or difference of p and q is the of the x-term in the trinomial
A number a power of a variable or a product of the two is a monomial while a polynomial is the of monomials
➡️
See all cards
3.79
1462 Reviews
Add your answer:
Earn +20 pts
Q: How do you determine the length of the radius and diameter of a circle?
Write your answer...
Submit
Still have questions?
magnify glass
imp
Related questions
What is the length of a radius and the length of a diameter?
The length of a radius of a circle is half of the diameter of the same circle. So, diameter is always twice the radius.
Is diameter of a circle is the length of the radius?
The diameter of a circle is twice its radius.
How does the length of the diameter of a circle compare to the length of the radius of that circle?
diameter = 2 X radius
If the length of a diameter of a circle is10what is the length of a radius of this circle?
The radius is 1/2 of the diameter. A diameter of 10 has a radius of 5.
What is the relationship between the length of the diameter of a circle and length of the radius of that circle?
The diameter of any circle is twice its radius
What is the relationship of a radius and a diameter of a circle?
the radius is half the length of the diameter of a circle.
What is the relationship between the length of a radius of a circle and the length of a diameter of the same circle?
The radius is half the diameter.
If the length of a diameter of a circle is 11 what is the length of a radius of this circle?
the radius is half the diameter. radius is half of 11. Which is 5.5.
If the diameter of a circle is 4 centimeters what is the length of the radius?
Length of a radius is always half the diameter of a circle. This means that the circle has a radius of 2cm.
Is the radius of a circle onefourth the length of a diameter?
No. The radius is half of the diameter.
Is the diameter of a circle twice the size of the radius?
Yes, the diameter of a circle is twice the length of the radius.
How do you find the diameter if the radius for a circle is 32?
The diameter is the length of a line touching both sides of the circle and passing through the centre. The radius is the length of a line from the centre to touching the circle, and is half the length of the diameter, the diameter then is twice the length of the radius > diameter = radius * 2 = 32 * 2 = 64
People also asked
|
__label__pos
| 1 |
Microsoft PowerApps: Creating & Saving Apps
Overview/Description
Expected Duration
Lesson Objectives
Course Number
Expertise Level
Overview/Description
PowerApps give you the possibility to create apps from templates or create apps from scratch. Once you create an app, you will need to know how to navigate the editing interface to identify the key tools to manage your app. You can also use existing data from Excel or SharePoint to import your data and create apps easily within PowerApps. You can also configure app settings such as the auto-save feature or change how you app is displayed. The key concepts covered in this 10-video course include how to create an app from a template; how to create an app from an Excel workbook; and how to generate an app with SharePoint. Next, you will learn how to create a canvas app from scratch; how to access the app editing interface; and how to update and publish an app. Finally, observe how to restore and publish another version of your app; and learn to customize the settings of an app and to manage your apps.
Expected Duration (hours)
0.9
Lesson Objectives
Microsoft PowerApps: Creating & Saving Apps
• use a template to create an app
• create an app from a template
• create an app from an Excel workbook
• generate an app with SharePoint
• create a canvas app from scratch
• access the app editing interface
• update and publish an app
• restore and publish another version of your app
• customize the settings of an app
• manage your apps
• Course Number:
ds_mspaps_02_enus
Expertise Level
Beginner
|
__label__pos
| 0.596451 |
Baidu VS Google search ranking experience
in
– Google’s experience is better than Baidu, because the former rank is awesome
Where is the problem with this sentence.
First, the so-called “Google’s experience is better than Baidu” is a very subjective assertion and does not conform to the facts. If strictly speaking, it should be “for a specific group of people, for a specific field, Google’s experience is better than Baidu”.
In fact, as far as I can see, Google’s search quality is actually not as good as Baidu’s for many daily searches in China.
Second, in terms of the overall search experience, Baidu’s most criticized experience problem is the lax ad review and ads that confuse natural results. This problem is definitely not a problem with the rank algorithm. That is to say, the so-called “basically because the former rank is awesome” is also untenable.
Third, Baidu’s data search in the technical field, especially in some cutting-edge technology fields, is indeed a big gap with Google, but this is not mainly a problem of Rank strategy, but a problem of collection ability and collection direction. Google is a global index with richer information sources, while Baidu’s overseas index is lackluster, which causes two problems. One is that some high-quality technical resources are not included, which makes it impossible to give such search results; the other is that Although some high-quality technical resources were included by coincidence, various high-quality web resources that generated related citations were not included, resulting in a significant underestimation of their rank value, so high-quality resources could not get their due rankings.
Therefore, even in some areas where Baidu’s search quality is poor, the problem of insufficient indexing is actually a bigger component than the problem of rank strategy. The problem of collecting overseas high-quality resources is a dilemma that all domestic search engines have to face. It is really not a problem of money and resources, so I won’t discuss this.
Well, the above are actually some clarifications on search quality issues. If it is just these, this article can only be regarded as saliva, and the following points are valuable.
Compared with Google’s rank strategy, Baidu’s rank strategy is not as good as Google’s according to my observation, that is, Baidu is too indulgent to “click to raise rights”. Of course, this is my observation a few years ago, and it has been used in the past two years. Not much, so not sure if the status quo remains.
Well, my old article actually mentioned “click to raise rights” many times, but I didn’t explain it systematically. Why not? This thing is the job of many people, and even some people in my circle of friends. There is no reason to break people’s jobs, it is not suitable. Why do you dare to say it now?Because time has passed for a long time, the risk control strategy of many platforms has also gone up, so it is not hidden, and it is a case to enhance cognition for everyone.
A few years ago, some people criticized why Baidu searched for some common words, there would be very soft pornographic results, then some people would question Baidu’s integrity, in fact, they thought too much, this problem is caused by “click to raise rights” to escalate.
The “click to raise rights” is actually very simple. Under a certain keyword, the higher the user’s click-through rate, the higher the content weight will be considered. The starting point is to optimize the sorting of related results based on the user’s choice. A typical scenario of the machine learning of , if you only look at the logic of the original intention, it seems very reasonable, but soft porn content often has an advantage in the click rate under any word, so this situation occurs.
So the original intention of the algorithm itself is one thing, the complexity of reality is another.
Then the key point comes, if the search ranking can be changed based on the user’s choice, then what if the user’s choice is artificial?
As we said before, Google seems to have some control over click escalation, but what about outside of Google?
For example, the ASO industry is based on such a basic principle. Third-party ranking companies have at least several hundred million yuan in industry revenue each year. The logic is very simple. Use more different terminals to click on specific results under specific keywords to improve their rankings.
At first, it was a fleet operation, but it was easy to be recognized by the system risk control, but a new way of playing appeared, subcontracting, create a subcontracting task group, let everyone search for specific words, and click the specific results below to install. Earn commissions for assignments through video recording. This mode of risk control identification is very difficult, because they are all real users, and the time, location, and terminal type of the occurrence are also very scattered.
Is there anything else besides ASO? There must be, not to mention anything else, why many people on e-commerce platforms will brush orders, it is almost the same reason.
Then you say, why do you still need to use click to raise rights? In fact, the so-called recommendation algorithm isIt’s kind of a click right. It is based on the user’s choice, based on the user’s click, and based on the user’s download to change the weight of content distribution , so this gameplay not only affects search, classification lists, but even advertising.
Many advertising delivery systems make predictions on the quality of the advertising content, so as to give the delivery scale based on the prediction, especially for some advertising materials based on conversion pricing, the conversion statistics are delayed after delivery, and there is a fraudulent advertising system here time difference, then the question is, how to deceive it? It is to make the advertising system mistakenly think that this is an excellent material with a high conversion rate, so even a low unit price can give a high enough playback volume. How to make the system think that this is an excellent material, that is, when it is placed on a small scale, the conversion efficiency is artificially improved.
So this has become outrageous. In the past, we said that malicious clicks are clicks and conversions of competitors’ products, which cause budget losses to competitors. Now, in many scenarios, things are reversed, and they will consume their own budget first, and in the case of small scale to create their own high transformation illusion, and then increases the budget, deceives the delivery platform, and defrauds the high delivery scale with low unit price.
Deception algorithm is a big topic, and it is also a topic that risk control must pay great attention to the original intention of machine learning is to optimize the display weight of content based on human choice, but this logic will also be used deliberately. After all, ranking and push are behind the huge commercial value. And the related industry, to this day, is very amazing.
source:Google Chrome
|
__label__pos
| 0.515365 |
H5游戏开发实战实战物体之间的碰撞判断
H5游戏开发实战实战物体之间的碰撞判断
Jason
2020-08-10 / 0 评论 / 312 阅读 / 正在检测是否收录...
<section style="height:500px;">
<iframe id="iframe570" name="iframe570" width="400" height="500" srcdoc="<!DOCTYPE html><html><body style='overflow:hidden;margin:0px;width:400px;height:500px;'></body><script src='../../js/jquery-1.8.3.min.js'></script><script src='../../js/pixi.js'></script><script type='text/javascript'>var app = new PIXI.Application(400,500);
document.body.appendChild(app.view);
//添加飞机
var plane = PIXI.Sprite.fromImage("res/plane/plane_blue_01.png");
plane.anchor.set(0.5,0.5);
plane.x = 200;
plane.y = 400;
app.stage.addChild(plane);
//设置飞机plane,鼠标跟随
app.stage.interactive = true;
app.stage.on('mousemove',movePlane);
function movePlane(event) {
var pos=event.data.getLocalPosition(app.stage);
plane.x = pos.x;
plane.y = pos.y;
}
//存储敌机的数组
var enemyList = [];
//存储子弹的数组
var bulletList = [];
//帧频函数
app.ticker.add(animate);
function animate() {
addEnemy();//添加敌机
moveEnemy();//移动敌机
addBullet();//添加子弹
moveBullet();//移动子弹
crash();//敌机与子弹碰撞
}
//添加敌机
var a = 0;
function addEnemy() {
if(a == 20) {
//创建敌机
var enemy = PIXI.Sprite.fromImage("res/plane/enemy_04.png");
enemy.anchor.set(0.5,0.5);
enemy.x = Math.random() * 400;
app.stage.addChild(enemy);
//将敌机添加到数组
enemyList.push(enemy);
a = 0;
}
a++;
}
//移动敌机
function moveEnemy() {
for(var i=enemyList.length-1;i>=0;i--) {
var enemy = enemyList[i];
enemy.y += 4;
//敌机是否超出边界
if(enemy.y > 600) {
//销毁敌机
app.stage.removeChild(enemy);
enemyList.splice(i,1);
}
}
}
//添加子弹
var b = 0;
function addBullet() {
if(b == 5) {
//创建子弹
var bullet = PIXI.Sprite.fromImage("res/plane/bullet_01.png");
bullet.anchor.set(0.5,0.5);
bullet.y = plane.y;
bullet.x = plane.x;
app.stage.addChild(bullet);
//将子弹图片添加到子弹数组中
bulletList.push(bullet);
b = 0;
}
b++;
}
//移动子弹
function moveBullet() {
for(var i=bulletList.length-1;i>=0;i--) {
var bullet = bulletList[i];
bullet.y -= 20;
//子弹是否超出边界
if(bullet.y < -100) {
//销毁子弹
app.stage.removeChild(bullet);
bulletList.splice(i,1);
}
}
}
//敌机与子弹的碰辜
function crash(){
//循环子弹数组
for(var i=0;i<bulletList.length;i++) {
var bullet = bulletList[i];
//循环敌机数组
for(var j=0;j<enemyList.length;j++) {
var enemy = enemyList[j];
var pos = (bullet.x - enemy.x) * (bullet.x - enemy.x) + (bullet.y - enemy.y) * (bullet.y - enemy.y);
//判断是否发生碰撞
if(pos < 60 * 60) {
//销毁子弹
app.stage.removeChild(bullet);
bulletList.splice(i, 1);
//销毁敌机
app.stage.removeChild(enemy);
enemyList.splice(j, 1);
break;
}
}
}
}</script></html>" style="border:0;"></iframe>
<section></section>
</section>
0
评论 (0)
取消
|
__label__pos
| 0.993589 |
[TxMt] help: shell commands in HTML
Piero D'Ancona pierodancona at gmail.com
Thu Sep 21 23:13:48 UTC 2006
Hi,
somebody help me, I guess it takes just a look at this code
to spot what's wrong (for you):
---------------------------------------
<html>
<head>
<script>
function letsTry () {
TextMate.system("/usr/bin/open -a /Applications/TeXniscope", null);
};
}
</script>
</head>
<body>
<span id="letsTry">
<a onClick="letsTry()" href="#">Let's Try</a></span>
</body>
</html>
---------------------------------------
I put this in a p.html file. I then create a command in TextMate
that does " cat p.html " and outputs to HTML. The HTML appears,
but of course clicking on the link does nothing, not certainly
opening TeXniscope as hoped. Documentation(s) perused,
sleep hours lost. Any idea?
Piero
More information about the textmate mailing list
|
__label__pos
| 0.992152 |
FA18:Lecture 4 proofs
From CS2800 wiki
Revision as of 14:05, 31 August 2018 by {{GENDER:Mdg39|[math]'"2}} [/math]'"7
(<math>1) </math>2 | <math>3 (</math>4) | <math>5 (</math>6)
In this lecture, we will examine several forms of proposition; for each we will discuss how to prove it, how to disprove it, and how to use it in a proof (if it is already known to be true).
You may find it useful to look at the list of all proofs in the course. Although you don't have the definitions yet, you should be able to understand the structure of the proofs using the techniques from this lecture.
Propositions
Definition: Proposition
A proposition is a statement that is either true or false.
For example, [math]3 \gt 5 [/math] is a proposition, as is [math]3 \lt 5 [/math]. "Professor George is taller than 5'" is a proposition. If [math]x [/math] is known to be 5, then "[math]x \gt 5 [/math]" is a proposition.
If a fact depends on a variable, we will not consider it to be a proposition. For example, "[math]x \gt 5 [/math]" by itself is not a proposition. Neither is "[math]x \gt x-1 [/math]". These statements are called properties of [math]x [/math], or predicates on [math]x [/math]. Similarly, we can have properties of two variables (e.g. [math]x \gt y [/math]).
Predicates can be turned into propositions by quantifying them; stating that they must be true either for all [math]x [/math] or for some [math]x [/math].
For example, "for any sets [math]A [/math] and [math]B [/math], [math]A \href{/cs2800/wiki/index.php/%E2%8A%86}{⊆} A \href{/cs2800/wiki/index.php/%E2%88%AA}{∪} B [/math]" is a proposition, as is "there exists some [math]x [/math] with [math]x \gt 0 [/math]".
This process can be repeated: "for all [math]x \href{/cs2800/wiki/index.php/%E2%88%88}{∈} \href{/cs2800/wiki/index.php/%E2%84%95}{ℕ} [/math], [math]x \geq y [/math]" is a predicate, because it depends on [math]y [/math] (but not [math]x [/math]!). But "there exists [math]y \href{/cs2800/wiki/index.php/%E2%88%88}{∈} \href{/cs2800/wiki/index.php/%E2%84%95}{ℕ} [/math] such that for every [math]x \href{/cs2800/wiki/index.php/%E2%88%88}{∈} \href{/cs2800/wiki/index.php/%E2%84%95}{ℕ} [/math], [math]x ≥ y [/math]" is a proposition.
We are often lazy and leave off the quantification of a variable in a statement. When we have an undefined variable [math]x [/math] in a claim, we implicitly mean "for all [math]x [/math]". For example, if I ask you to prove or disprove that [math]A \href{/cs2800/wiki/index.php/%E2%8A%86}{⊆} A \href{/cs2800/wiki/index.php/%E2%88%A9}{∩} B [/math], I implicitly mean to prove or disprove that, for all sets [math]A [/math] and [math]B [/math], [math]A \href{/cs2800/wiki/index.php/%E2%8A%86}{⊆} A \href{/cs2800/wiki/index.php/%E2%88%A9}{∩} B [/math].
Combining propositions
If you have propositions [math]P [/math], [math]Q [/math], and [math]R [/math], you can combine them in natural ways to form new propositions. The structure of these combined propositions can help you determine how to prove them, or how to use them if you know they are true. We will examine several of these combined propositions today; we will summarize them in the next lecture in a table of proof techniques.
You can view this overview as a guide to doing what it says, and using what you know. At any point in a proof, there is a current set of facts that you've already proven (or assumed), and there is a particular proposition that you are trying to show. You can apply these strategies to those propositions to make progress.
Note on notation
We are introducing symbols for words like "and", "or", and "not". These symbols are helpful when talking about logic itself, but I find they make proofs much harder to read, so I avoid them when writing proofs. This is a matter of style.
P and Q
If [math]P [/math] and [math]Q [/math] are propositions, then "[math]P [/math] and [math]Q [/math]" is a proposition (written [math]P \href{/cs2800/wiki/index.php/%E2%88%A7}{∧} Q [/math]); it is true if both [math]P [/math] is true and [math]Q [/math] is true.
To prove "[math]P [/math] and [math]Q [/math]", you can separately prove [math]P [/math] and then prove [math]Q [/math].
If you have already proved (or assumed) [math]P [/math] and [math]Q [/math], you can conclude [math]P [/math]. You can also conclude [math]Q [/math].
To disprove "[math]P [/math] and [math]Q [/math]", you must either disprove [math]P [/math] or disprove [math]Q [/math]. Put another way, the logical negation of "[math]P [/math] and [math]Q [/math]" is "not [math]P [/math] or not [math]Q [/math]".
P or Q
If [math]P [/math] and [math]Q [/math] are propositions, then "[math]P [/math] or [math]Q [/math]" is a proposition (written [math]P \href{/cs2800/wiki/index.php/%E2%88%A8}{∨} Q [/math]); it is true if [math]P [/math] is true, or if [math]Q [/math] is true (or both).
Note that this is somewhat different from the use of "or" in colloquial English; if both [math]P [/math] and [math]Q [/math] are true, we still consider [math]P [/math] or [math]Q [/math] to be true. This saves us work: we can prove [math]P [/math] or [math]Q [/math] by just proving [math]P [/math]; we don't have to also disprove [math]Q [/math].
To prove "[math]P [/math] or [math]Q [/math]", you can either prove [math]P [/math], or you can prove [math]Q [/math] (your choice!)
If you know that "P or Q" is true for some statements P and Q, and you wish to show a third statement R, you can do so by separately considering the cases where P is true and where Q is true. If you are able to prove R in either case, then you know that R is necessarily true.
This technique is often referred to as case analysis.
To disprove "[math]P [/math] or [math]Q [/math]", you must both disprove [math]P [/math] and disprove [math]Q [/math]. Put another way, the logical negation of "[math]P [/math] or [math]Q [/math]" is "not [math]P [/math] and not [math]Q [/math]".
P is false
If [math]P [/math] is a proposition, then "[math]P [/math] is false" (or more succinctly, "not [math]P [/math]", written [math]\href{/cs2800/wiki/index.php?title=%C2%ACP&action=edit&redlink=1}{¬P} [/math]) is also a proposition; it is true if [math]P [/math] is false, and false if [math]P [/math] is true.
• To prove [math]P [/math] is false: disprove [math]P [/math]; To disprove "[math]P [/math] is false": prove [math]P [/math]. Put another way, the logical negation of "not [math]P [/math]" is [math]P [/math].
• You can think of "proof by contradiction" as a way of using "not [math]P [/math]". If you are able to prove both [math]P [/math] and "not [math]P [/math]", then you must have made contradictory assumptions at some point; this means you can go back to your most recent assumption and conclude that it was invalid. This is useful to rule out cases when doing case analysis.
It is often a good way to start a proof: assume that what you are trying to prove is false, and then find a contradiction. At that point, you know your assumption was incorrect, so the original claim must be true.
This style of proof usually starts "Assume for the sake of contradiction, that [math]P [/math] is false..." and usually ends "...this is a contradiction, and [math]P [/math] must have been true in the first place." It is a useful approach when the claim you are trying to prove already has a logical negation in it, for example if you are trying to prove that something is not injective or not countable. You assume that it is injective or countable, and then you have a fact that you can work with.
For all x, P
If [math]P [/math] is a predicate that depends on [math]x [/math], then "for all [math]x [/math], [math]P [/math]" is a proposition. It is true if every possible value of [math]x [/math] makes [math]P [/math] evaluate to true.
• If your goal is to prove "for all [math]x\href{/cs2800/wiki/index.php/%E2%88%88}{∈}A [/math], P", you can proceed by choosing an arbitrary value [math]x∈A [/math] and then proving that P holds for that [math]x [/math].
The fact that [math]x [/math] is arbitrary does not mean you get to pick [math]x [/math]; on the contrary, your proof should work no matter what [math]x [/math] you choose. This means you can't use any property of [math]x [/math] other than that [math]x \href{/cs2800/wiki/index.php/%E2%88%88}{∈} A [/math].
• If you know [math]P [/math] holds for all [math]x [/math], then you can conclude [math]P [/math] holds for any specific [math]x [/math]. For example, if you know for all [math]x \href{/cs2800/wiki/index.php/%E2%88%88}{∈} ℝ [/math], [math]x^2 ≥ 0 [/math], then you can conclude [math]7^2 ≥ 0 [/math] (since [math]7 ∈ ℝ [/math]).
There exists x such that P
If [math]P [/math] is a predicate depending only on [math]x [/math], then "there exists [math]x [/math] such that P" (written [math]\exists x, P [/math] or [math]\exists x~\href{/cs2800/wiki/index.php?title=S.t.&action=edit&redlink=1}{s.t.}~P [/math]) is a proposition. It is true if there is some value [math]x [/math] that makes [math]P [/math] evaluate to true.
[math]\href{/cs2800/wiki/index.php/%5Cexists}{\exists} [/math] is sometimes called the existential quantifier.
• To prove that there exists an [math]x [/math] such that [math]P(x) [/math] holds, it suffices to give a specific [math]x [/math] and then prove that [math]P [/math] is true for that [math]x [/math]. Such a proof usually starts "let [math]x:= \cdots [/math]", and then goes on to prove that [math]P(x) [/math] holds for the given [math]x [/math]. [math]x [/math] is sometimes referred to as a witness for [math]P(x) [/math].
• If you know there exists some [math]x [/math] satisfying [math]P [/math], you can use it in a proof by treating [math]x [/math] as an arbitrary value. [math]x [/math] is arbitrary because the only thing you know about [math]x [/math] is that it exists, not what its value is.
• To disprove that there exists an [math]x [/math] satisfying [math]P [/math], you must disprove [math]P [/math] for an arbitrary [math]x [/math]. Put another way, the logical negation of "there exists an [math]x [/math] such that [math]P [/math]" is "for all x, not P".
When trying to prove an existential statement [math]\href{/cs2800/wiki/index.php/%E2%88%83}{∃x}, \href{/cs2800/wiki/index.php/Predicate}{P}(x) [/math], you need to give a specific value of [math]x [/math] (a witness).
Often, in a proof, it is not immediately obvious what the witness should be. Finding one often involves solving some equations or combining some known values.
One nice technique for finding a witness is to simply leave a blank space for the value of [math]x [/math] and continue on with your proof of [math]P(x) [/math]. As you go, you may need [math]x [/math] to satisfy certain properties (for example, maybe you need [math]x \gt 17 [/math] at one point, and later you need [math]x \lt 85 [/math]). You can make a "wishlist" on the side of your proof, reminding you of all the properties you want [math]x [/math] to satisfy. Once you've completed your proof, you can go back and find a specific value of [math]x [/math] (say, [math]50 [/math]) that satisfies all of your wishes.
If P then Q
If [math]P [/math] and [math]Q [/math] are propositions, then "if [math]P [/math] then [math]Q [/math]" (written [math]P \href{/cs2800/wiki/index.php?title=%5Cimplies&action=edit&redlink=1}{\implies} Q [/math] or "[math]P [/math] implies [math]Q [/math]") is a proposition. It is true if either [math]P [/math] is false, or if [math]Q [/math] is true.
• To prove "if [math]P [/math] then [math]Q [/math]", assume [math]P [/math] and then prove [math]Q [/math].
• If you know "if [math]P [/math] then [math]Q [/math]", and you also know [math]P [/math], you can conclude [math]Q [/math]. This technique is sometimes referred to as "modus ponens".
• To disprove "if [math]P [/math] then [math]Q [/math]", you must show that [math]P [/math] is true and that [math]Q [/math] is false ("[math]P [/math] implies [math]Q [/math]" only makes a claim about the world where [math]P [/math] is true; an example where [math]P [/math] is false doesn't contradict the claim).
Logical negation
If [math]P [/math] is a proposition, the logical negation of [math]P [/math] is the proposition that is equivalent to "not [math]P [/math]".
For example, to disprove "[math]P [/math] and [math]Q [/math]", it suffices to either disprove [math]P [/math] or to disprove [math]Q [/math]. This is the same thing as proving "not [math]P [/math] or not [math]Q [/math]," so the logical negation of "[math]P [/math] and [math]Q [/math]" is "not [math]P [/math] or not [math]Q [/math]."
For further examples, see the table of proof techniques.
Table of proof techniques
Here is a table of proof techniques summarizing proof techniques for the basic logical connectives. These techniques will get you most of the way through most of the proofs in this course. There is also a convenient one-page pdf
Proposition Symbol To prove it To use it Logical negation
P and Q (P Q) prove both P and Q you may use either P or Q (¬P) (¬Q)
P or Q (P Q) You may either prove P or prove Q case analysis (¬P) (¬Q)
P is false (or "not P") ¬ P disprove P contradiction P
if P then Q (or "P implies Q") P Q assume P, then prove Q if you know P, conclude Q P ¬ Q
for all x, P ∀x, P choose an arbitrary value x apply to a specific x x, ¬ P
there exists x such that [math]P [/math] ∃x, P give a specific x use an arbitrary x satisfying P x, ¬ P
You can think of these as the valid outlines of a valid proof. When writing a 5-paragraph essay, the structure consists of an introductory paragraph, three supporting paragraphs, and a conclusion. Supporting paragraphs have their own structure (made up of sentences, which themselves have a structure...).
Similarly, a proof of a statement like "for all x, P" has an introduction of an arbitrary variable, followed by another proof (this time of P(x)). This proof in turn will have one of the structures in the table above, and so on.
Note that this process is recursive: most of these techniques say "to prove ..., do ... and then prove ...". In most cases, you must repeatedly apply these techniques to build a complete proof.
|
__label__pos
| 0.993868 |
Class: YARD::CodeObjects::MethodObject
Inherits:
Base
• Object
show all
Defined in:
lib/yard/code_objects/method_object.rb
Overview
Represents a Ruby method in source
Instance Attribute Summary collapse
Instance Method Summary collapse
Constructor Details
#initialize(namespace, name, scope = :instance, &block) ⇒ MethodObject
Creates a new method object in namespace with name and an instance or class scope
If scope is :module, this object is instantiated as a public method in :class scope, but also creates a new (empty) method as a private :instance method on the same class or module.
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
# File 'lib/yard/code_objects/method_object.rb', line 37
def initialize(namespace, name, scope = :instance, &block)
@module_function = false
@scope = nil
# handle module function
if scope == :module
other = self.class.new(namespace, name, &block)
other.visibility = :private
scope = :class
@module_function = true
end
@visibility = :public
self.scope = scope
self.parameters = []
super
end
Dynamic Method Handling
This class handles dynamic methods through the method_missing method in the class YARD::CodeObjects::Base
Instance Attribute Details
#explicitBoolean
Whether the object is explicitly defined in source or whether it was inferred by a handler. For instance, attribute methods are generally inferred and therefore not explicitly defined in source.
18
19
20
# File 'lib/yard/code_objects/method_object.rb', line 18
def explicit
@explicit
end
#parametersArray<Array(String, String)>
Returns the list of parameters parsed out of the method signature with their default values.
25
26
27
# File 'lib/yard/code_objects/method_object.rb', line 25
def parameters
@parameters
end
#scopeSymbol
The scope of the method (:class or :instance)
11
12
13
# File 'lib/yard/code_objects/method_object.rb', line 11
def scope
@scope
end
Instance Method Details
#aliasesArray<Symbol>
Returns all alias names of the object
149
150
151
152
153
154
155
156
# File 'lib/yard/code_objects/method_object.rb', line 149
def aliases
list = []
return list unless namespace.is_a?(NamespaceObject)
namespace.aliases.each do |o, aname|
list << o if aname == name && o.scope == scope
end
list
end
#attr_infoSymbolHash?
Returns the read/writer info for the attribute if it is one
Since:
• 0.5.3
93
94
95
96
# File 'lib/yard/code_objects/method_object.rb', line 93
def attr_info
return nil unless namespace.is_a?(NamespaceObject)
namespace.attributes[scope][name.to_s.gsub(/=$/, '')]
end
#constructor?Boolean
78
79
80
# File 'lib/yard/code_objects/method_object.rb', line 78
def constructor?
name == :initialize && scope == :instance && namespace.is_a?(ClassObject)
end
#copyable_attributesObject (protected)
192
193
194
# File 'lib/yard/code_objects/method_object.rb', line 192
def copyable_attributes
super - %w(scope module_function)
end
#is_alias?Boolean
Tests if the object is defined as an alias of another method
126
127
128
129
# File 'lib/yard/code_objects/method_object.rb', line 126
def is_alias?
return false unless namespace.is_a?(NamespaceObject)
namespace.aliases.key? self
end
#is_attribute?Boolean
Tests if the object is defined as an attribute in the namespace
114
115
116
117
118
119
120
121
122
# File 'lib/yard/code_objects/method_object.rb', line 114
def is_attribute?
info = attr_info
if info
read_or_write = name.to_s =~ /=$/ ? :write : :read
info[read_or_write] ? true : false
else
false
end
end
#is_explicit?Boolean
Tests boolean #explicit value.
134
135
136
# File 'lib/yard/code_objects/method_object.rb', line 134
def is_explicit?
explicit ? true : false
end
#module_function?Boolean
Returns whether or not this method was created as a module function.
Since:
• 0.8.0
85
86
87
# File 'lib/yard/code_objects/method_object.rb', line 85
def module_function?
@module_function
end
#name(prefix = false) ⇒ String, Symbol
Returns the name of the object.
Examples:
The name of an instance method (with prefix)
an_instance_method.name(true) # => "#mymethod"
The name of a class method (with prefix)
a_class_method.name(true) # => "mymethod"
175
176
177
# File 'lib/yard/code_objects/method_object.rb', line 175
def name(prefix = false)
prefix ? (sep == ISEP ? "#{sep}#{super}" : super.to_s) : super
end
#overridden_methodMethodObject?
Since:
• 0.6.0
141
142
143
144
145
# File 'lib/yard/code_objects/method_object.rb', line 141
def overridden_method
return nil if namespace.is_a?(Proxy)
meths = namespace.meths(:all => true)
meths.find {|m| m.path != path && m.name == name && m.scope == scope }
end
#pathString
Override path handling for instance methods in the root namespace (they should still have a separator as a prefix).
161
162
163
# File 'lib/yard/code_objects/method_object.rb', line 161
def path
@path ||= !namespace || namespace.path == "" ? sep + super : super
end
#reader?Boolean
Returns whether the method is a reader attribute.
Since:
• 0.5.3
107
108
109
110
# File 'lib/yard/code_objects/method_object.rb', line 107
def reader?
info = attr_info
info && info[:read] == self ? true : false
end
#sepString
Override separator to differentiate between class and instance methods.
182
183
184
185
186
187
188
# File 'lib/yard/code_objects/method_object.rb', line 182
def sep
if scope == :class
namespace && namespace != YARD::Registry.root ? CSEP : NSEP
else
ISEP
end
end
#writer?Boolean
Returns whether the method is a writer attribute.
Since:
• 0.5.3
100
101
102
103
# File 'lib/yard/code_objects/method_object.rb', line 100
def writer?
info = attr_info
info && info[:write] == self ? true : false
end
|
__label__pos
| 0.951371 |
Print Multiple Emails
I often take a group of emails from one day/one subject and print them together. In other apps, I can imply select them and give the “Print” command. But trying that in emClient result in only one being printed. Am I missing something?
(Paid version)
I just tried to print a few selected emails and they all printed.
Aha! I guess it does, but not well. It combines them all into a single print job with no page breaks between them. As though all of it were one continuous email. That’s probably why I missed it.
1 Like
Perhaps this will help you…
2 Likes
Duh! Thank you!!!
|
__label__pos
| 0.927211 |
Turning proxies authenticated bot
As the use of proxies to bypass security measures becomes increasingly common, it is becoming increasingly important for organizations to ensure that their proxies are authenticated and secure. This article explores the use of # authentication in proxies, and its effectiveness in turning proxies into authenticated bot. The article delves into the different types of proxy authentication methods, their advantages and disadvantages, and provides a practical implementation overview for using # authentication to secure proxies against bot attacks. The article also discusses the benefits of using # authentication in improving the scalability and performance of bot management systems, and highlights some of the emerging trends and technologies in this area. Overall, the article provides a comprehensive overview of the use of # authentication in proxies and its importance in safeguarding online operations from bot attacks.
Proxies authenticated bot is a powerful tool that organizations can use to protect their online operations from bot attacks. Bots can be used for malicious purposes such as DDoS attacks, keyword stuffing, click fraud, and other types of automated abuse. Proxy authentication methods such as # authentication can help to verify the identity of proxies before they are used, and prevent bots from accessing sensitive resources or data. This can be especially useful in scenarios where bots are attempting to mimic human behavior and are able to bypass traditional security measures. Overall, proxy authentication is a critical element of any organization's bot management strategy, and can be used to improve scalability and performance while also protecting against bot attacks.
Buy proxies in different countries!
Select Your Desired Country
Geo-location of proxy servers
Required amount of IP-addresses in proxy list
Duration of Subscription
$79
$0.1 per IP
You want to target USA?
Excellent, select a proxy plan below.
• USA 20K
• USA locations
Bingo!
• 20'000 IPs
• 2999 USD
1499 USD
• 0.075 USD per IP
• 30 days
• Buy now
• USA 15K
• USA locations
to rock!
• 15'000 IPs
• 1949 USD
1099 USD
• 0.0733 USD per IP
• 30 days
• Buy now
• USA 3K
• USA locations
to dominate
• 3'000 IPs
• 389 USD
249 USD
• 0.083 USD per IP
• 30 days
• Buy now
• USA 1400
• USA locations
to work with
• 1400 IPs
• SALE! SALE!
• 159 99 USD
• 0.070 USD per IP
• 30 days
• Buy now
• USA 650
• USA locations
to win
• 650 IPs
• Popular!
• 99 USD
• 0.152 USD per IP
• 30 days
• Buy now
• USA only
• Limited locations
from USA
• 250 IPs
• Smart buy!
• 39 USD
• 0.165 USD per IP
• 30 days
• Buy now
Pros and Cons of Turning proxies authenticated bot
Pros:
• Increased security: Proxy authentication methods such as # authentication can help to verify the identity of proxies before they are used, which can help to prevent bots from accessing sensitive resources or data.
• Improved scalability: By automating proxy authentication, organizations can streamline their bot management processes and improve the scalability of their systems.
• Better performance: Proxy authentication can help to ensure that only legitimate proxies are used, which can improve the performance of online applications and services.
Cons:
• False positives: Proxy authentication methods may sometimes mistake legitimate proxies for bots, resulting in false positives and the blockage of legitimate users.
• Performance issues: Proxy authentication can add additional processing time to requests, which can impact the performance of online applications and services, especially in high-traffic scenarios.
• Complexity: Proxy authentication methods can be complex to set up and maintain, requiring specialized knowledge and resources. This can make it challenging for organizations to implement effectively.
Forecast for the future:
The use of proxies authenticated bot is expected to continue to grow in the near future as organizations strive to protect their online operations from bot attacks. Proxy authentication methods such as # authentication will become increasingly common as organizations seek to verify the identity of proxies before they are used.
The development of more sophisticated and sophisticated bot detection techniques will also drive the adoption of proxy authentication. Additionally, the use of artificial intelligence and machine learning in bot detection and preventive measures will become more prevalent, improving the accuracy and effectiveness of proxy authentication.
Furthermore, the increasing use of cloud-based services and the proliferation of zero-trust mobility are likely to drive the adoption of proxy authentication in a broader range of contexts. With the growth of the edge computing infrastructure, the use of proxy authentication will also become more widespread.
Overall, the future of proxy authentication and the proxies authenticated bot will continue to evolve rapidly as organizations seek to protect their online operations from bot attacks, with more advanced technologies and techniques being deployed to improve the effectiveness and scalability of bot management systems.
Key Definitions for Better Understanding
For improved comprehension as you read through this article, we present essential technical definitions, associated terms, and crucial explanations below. This section has been meticulously reviewed by our experienced system administrators, subject matter experts in technology, and dedicated IT editors.
TradingView DCA Bots
Bots designed to execute trades based on a dollar-cost averaging strategy
IMAP (Internet Message Access Protocol)
A protocol used to access and manage email messages on a server.
User Experience
The process of evaluating the user experience provided by e-commerce websites.
Data Modeling
The process of creating a representation of data.
Privacy Preservation
The process of using a proxy server to mask the user IP address and make their location untraceable.
System Administrator in Datacenter, configuring Proxy Server
Dedicated System Administrators Striving for Premium Quality Proxy Service
Our team works diligently towards providing exceptional services, ensuring high availability at 99% uptime, and minimizing network latency. It's worth noting that occasionally, due to dedication, they may prioritize tasks over personal comfort :)
How could Turning proxies authenticated bot be improved?
To improve the efficiency and effectiveness of proxy authentication, humanity may need to consider the following new technologies:
• Biometric authentication: Biometric authentication uses physical characteristics such as fingerprints, facial recognition, and voice recognition to verify an individual's identity. This technology could be applied to proxy authentication to provide an additional layer of authentication beyond traditional credentials.
• Blockchain-based authentication: Blockchain technology provides a secure and tamper-proof record of transactions, making it an ideal choice for proxy authentication. By using a blockchain-based authentication system, organizations could ensure that the identity of proxies is securely and transparently verified.
• Behavioral biometrics: Behavioral biometrics technology analyzes an individual's unique patterns and movements to verify their identity. This technology could be applied to proxy authentication to detect and prevent bot attacks by identifying abnormal behavior patterns.
• Artificial Intelligence: AI can be used to analyze large amounts of data and identify patterns that are indicative of a bot attack. AI-based bot detection systems can improve the efficiency and accuracy of proxy authentication, reducing the time required for verification and increasing the scalability of bot management systems.
• Zero-knowledge proofs: Zero-knowledge proofs allow for the verification of an assertion without revealing any additional information. This can be used to improve the efficiency and security of proxy authentication by verifying the identity of a proxy without exposing sensitive information.
• Distributed ledger technology: Distributed ledger technology (DLT) provides a distributed and transparent record of transactions, making it ideal for proxy authentication. By using DLT-based authentication, organizations can ensure that the identity of proxies is securely and transparently verified while also improving the scalability and performance of bot management systems.
Turning proxies authenticated bot: Insights and Analysis
A proper and efficient proxy authentication is crucial for an effective bot management strategy and protecting online operations from bot attacks. Organizations should ensure that their proxy authentication methods are properly implemented and validated to minimize the risk of false positives and negatives, and to prevent legitimate users from being blocked unnecessarily. Additionally, it is important for organizations to monitor and analyze their bot traffic to gain insights into the types of bot attacks and malicious activity that are targeting their systems. This can be done using various tools and techniques, such as machine learning algorithms and artificial intelligence-based systems. By understanding the patterns and behaviors of bot attacks, organizations can better uncover vulnerabilities and develop more effective bot management strategies. Ultimately, effective proxy authentication, combined with careful monitoring and analysis, can help organizations to protect their online operations from bot attacks and ensure that their systems and services are secure and reliable.
Turning proxies authenticated bot: Expert Opinion
In recent years, the practice of using proxies authenticated bot has become increasingly prevalent among organizations seeking to protect their online operations from bot attacks. Experts in the field agree that proxy authentication is a key component of any organization's bot management strategy, but there are differing opinions on the best methods for implementing proxy authentication. Some experts suggest that traditional authentication methods, such as IP address and username/password verification, are no longer sufficient and that more advanced technologies such as machine learning and artificial intelligence are necessary. Others argue that a combination of traditional and advanced authentication methods is the most effective approach. Ultimately, the best approach will depend on the specific needs and requirements of each organization. Importantly, it is recommended to stay up to date with the latest technologies and best practices in the field of bot management, and to continuously optimize proxy authentication methods to ensure the security and reliability of online operations.
Final Thoughts
In conclusion, the use of proxies authenticated bot is a powerful tool in protecting online operations from bot attacks. It is crucial for organizations to ensure that their proxies are properly authenticated and that their bot management systems are scalable and effective. When selecting proxy lists, it is recommended to use reputable providers such as cyber-gateway.net, which offer low-cost proxies that are flexible, fast, and stable. It's also recommended to stay up to date with the latest technologies and best practices in the field of bot management and to continuously optimize proxy authentication methods to ensure the security and reliability of online operations. By implementing effective proxy authentication and continuously updating their bot management systems, organizations can better protect their online operations from bot attacks and ensure the accuracy and performance of their online services and applications.
FAQ
What is a proxy?
A proxy is a computer system that acts as an intermediary between a user and the internet. It is commonly used to access content that is geographically restricted or to protect privacy by hiding the user's IP address.
What is a proxy authenticated bot?
A proxy authenticated bot is a bot that has been authenticated and authorized to access a specific proxy network. Proxy authentication ensures that the bot is a legitimate user and not a malicious actor attempting to bypass security measures.
What types of proxy authentication methods can be used to authenticate proxies?
There are various types of proxy authentication methods, including IP address, username/password, digital certificates, and multi-factor authentication (MFA).
What are the advantages of using # authentication to authenticate proxies?
The main advantage of using # authentication is that it provides a secure and reliable way of verifying the identity of a proxy. This can help to prevent malicious bots from accessing sensitive resources or data.
What are the disadvantages of using # authentication to authenticate proxies?
The main disadvantage of using # authentication is that it requires additional processing time, which can impact the performance of online applications and services. Additionally, there is a risk of false positives if the authentication criteria are not properly defined.
What are the potential risks of using proxies authenticated bot?
The use of proxies authenticated bot can be vulnerable to various types of bot attacks such as bot-in-the-middle attacks and bot-command-and-control (bot-C&C) attacks. It's important to monitor and analyze bot traffic to gain insights into the types of bot attacks and malicious activity targeting the systems
What is a low-cost proxy list and where can it be bought?
A low-cost proxy list is a collection of proxies that are publicly available and can be purchased at a low cost. One reputable provider for low-cost proxies is cyber-gateway.net.
You might also be interested in...
Proxy Calculator
Pick desired IP pool:
Price per IP:
0$
Price total:
0$
What is my IP?
We can analyze your IP address and extract some personal information from it. Find out, where is geolocation of your IP address and whether you have a proxy!
Check now!
Secured Surfing
The use of a proxy provides higher Internet anonymity and guards your privacy. According to statistics, proxy servers repel 75% of DDOS attack on autopilot.
Learn More
Free VPN
We provide free VPN for secure and stable internet connection! Choose your outgoing server from US, UK, China, Netherlands, Germany and surf!
Grab it!
Telegram Chat
|
__label__pos
| 0.989262 |
Featured - Deceased Person's Android
5 ways to retrieve data from a deceased person’s Android phone
It’s not a secret that…
Losing someone is never an easy thing to deal with. On top of coming to terms with the loss, you may need to deal with the seemingly endless amount of arrangements that need to be made.
For people who left us that used their smartphones for important purposes…
We put together the following guide to help you retrieve data such as emails, texts, pictures and videos from an Android phone. We have a few methods for how to do so depending on the device and the information you have on hand.
Now, here’s what you need to consider:
Taking the device in to be unlocked can often lead to data loss as it usually involves resetting the handset, so it’s worth checking to see if one of these methods can help before incurring any loss of data.
Luckily, if you don’t want to go through that, you can bypass the passcode or the pattern lock with the following methods.
If you have access to the owner’s Google account information it’s a simple process. If the phone asks for the Google account associated with the device and you know the details, move on to method 2.
Without further ado, here are the methods on how to bypass a phone lock.
Methods to retrieve data from a deceased person’s Android phone
You can retrieve data from a deceased person’s Android phone with these methods.
1. SD card
2. Bypassing the security/pattern lock
3. Google Account
4. Accessing a rooted handset
5. Samsung devices
Method 1: The SD Card
If the phone has a removable memory card, there is a good chance that most of the photos and videos stored on the device will be found there. If you are unable to get past the lock screen with the other methods in this guide, this could prove useful if you want access to certain media files.
After removing the SD Card you have a couple of options. You can connect the SD to your computer with an SD card reader to retrieve the files, or you can connect the SD card to another handset and copy the files across if you prefer.
Some Micro SD cards are password protected, which can be problematic if you don’t know the passcode. However, if you manage to unlock the device then you should be able to access the files found on the card with no further issues.
Method 2: Bypassing the Security/Pattern Lock
Getting past the security lock should give you full access to the device, including text messages and images. You can have as many attempts as you want to guess the passcode, but most phones will have a 30-second delay after five attempts.
If the firmware is lower than 4.1, you can download an app like Clear Mobile Password PIN Help (paid) from the Play Store to your computer, and unlock the device using the instructions provided. Unfortunately, most devices are now higher than 4.1.
Android Pattern Lock
If the device isn’t rooted, it can be difficult to get around the security code, but an exploit has been uncovered by computer security expert John Gordon that allows you to bypass the lock screen on Lollipop.
Unfortunately, this exploit will only work on a handful of devices; specifically Stock Android Devices that use a password instead of a pin or pattern lock.
Step 1: Tap or open Emergency Call.
Step 2: Type out a line of characters.
Step 3: Copy and paste the text until you have completely filled the field. (This should take roughly 11 repetitions.)
Step 4: Once the field is full, go back to the lockscreen and swipe left to open the camera.
Step 5: Swipe down to reveal the Notification Bar, and tap Settings.
Step 6: A password field will appear. Long tap the text to copy and paste until the phone starts to crash. This can take five minutes to complete.
Step 7: When it crashes, it should boot up the home screen with no password protection.
Here is a video that takes you through the necessary steps.
Method 3: Google Accounts
A number of Android phones use Google Accounts to unlock phones if several incorrect security attempts occur.
If you enter the lock code incorrectly five times, it may ask if you want to reset the passcode for the phone by using the Google account password associated with the device. You will need to know both the account name and password for this method to work.
This method only applies for phones running Android 4.4 or below with a pattern passcode.
Step 1: Forgot Pattern should appear on your lock screen after entering the wrong code five times.
Android Forgot Pattern
Step 2: Tap it, and enter the Google Account username and password associated with the device.
Step 3: You should now be able to access the device without restriction.
Method 4: Accessing a Rooted Handset (Custom Recovery Method)
If the handset is rooted, it’s fairly easy to access the device if it has Custom Recovery installed. The phone will also need an SD slot.
Step 1: Download this Password Protect Disable ZIP file to your computer.
Step 2: Put it on an SD card using your computer.
Step 3: Insert the SD card into the Android device.
Step 4: Reboot the phone into Recovery Mode. This can usually be done by holding a combination of the buttons when the device is switched off. For example, Samsung Galaxy devices use the combination of the Power, Volume Up and Home Check to see how this is done on your specific device.
Step 5: Flash the ZIP file on the card. You can do this from the Recovery Mode Navigate to Install or Install ZIP from SD Card and select the Password Protect Disable file. This will flash the file onto the device.
CWM Deceased PersonGÇÖs Android
Step 6: Reboot the device.
Step 7: If there is still a lock screen after you reboot, any combination code should work and you should now be able to access the device.
Method 5: Samsung Devices
Samsung phones allow you to unlock the screen if the device has been registered with a Samsung account. (You will to know need the Samsung Account details for this to work).
Step 1: Login to Samsung’s Find My Mobile service.
Step 2: Navigate to Unlock my screen, found on the sidebar on the left.
Step 3: Select Unlock.
Step 4: The device should be unlocked in a few seconds, giving you full access.
Suggested reads:
How to Fix Call Failed Errors on Android
5 Must-Have Internet Phone Call Apps for Android
8 Best Android Call/Text Blockers
FAQ
Can I still retrieve files from a locked phone?
Yes, you can! But, it won’t be so easy. There are different methods that you can try. Some will ask you for information while some won’t. Check our list to try these methods out!
Will Apple unlock a dead person’s phone?
If an Apple phone is passcode protected, you cannot unlock it unless you have access to the passcode or lock.
Can you use Face ID while someone is asleep?
No, you cannot unlock an Android phone when the person is asleep.
Conclusion
It’s a tough subject. It is somewhat comforting to know at least there are a few options that might get you some of the information from a loved one’s phone. However, it’s still quite hard to bypass phone security without root access.
I hope that the guide helped you get into the device. If you’ve found a better or more recent method, please let us know in the comments below, or you can also send us a message, (or a follow) on both Facebook and Twitter.
If we managed to help out or you have any problems, let us know in the comments below.
Featured Image
Similar Posts
Leave a Reply
Your email address will not be published. Required fields are marked *
|
__label__pos
| 0.561532 |
PlatformIO Community
ESP8266 Flash Address and boot.bin
When I compile my firmware, does PlatformIO automatically incorporate boot.bin? Because it looks like it is flashing to 0x00000 and that seems to be the address of boot.bin where user1.bin should be in 0x01000.
What framework = .. are we talking about?
That’s an excellent question, and is why you get paid the big money. :slight_smile:
framework = arduino
platform = espressif8266
board = d1_mini
1 Like
Do you mean this ?
So the answer is yes? It joins in the bootloader and creates a bin file inclusive of the bootloader I’m guessing?
The way the firmware is built and linked (or C/C++ too for that matter) is still somewhat mystical to me. :slight_smile:
@valeros could you explain us how does this work?
Hi @LBussy! Yes, the bootloader and the user firmware are merged together and uploaded as a single image.
1 Like
Thank you @valeros. It seemed that way, but I did not want to depend on what I was doing until I knew there was a reason for it.
1 Like
|
__label__pos
| 0.999977 |
GED Math : Angles and Quadrilaterals
Study concepts, example questions & explanations for GED Math
varsity tutors app store varsity tutors android store
Example Questions
Example Question #1 : Angles And Quadrilaterals
In Rhombus . If is constructed, which of the following is true about ?
Possible Answers:
is right and isosceles, but not equilateral
is acute and equilateral
is obtuse and isosceles, but not equilateral
is acute and isosceles, but not equilateral
Correct answer:
is obtuse and isosceles, but not equilateral
Explanation:
The figure referenced is below.
Rhombus
The sides of a rhombus are congruent by definition, so , making isosceles (and possibly equilateral).
Also, consecutive angles of a rhombus are supplementary, as they are with all parallelograms, so
.
, having measure greater than , is obtuse, making an obtuse triangle. Also, the triangle is not equilateral, since such a triangle must have three angles.
The correct response is that is obtuse and isosceles, but not equilateral.
Example Question #1 : Angles And Quadrilaterals
Given Quadrilateral , which of these statements would prove that it is a parallelogram?
I) and
II) and
III) and are supplementary and and are supplementary
Possible Answers:
Statement II only
Statement I, II, or III
Statement I only
Statement III only
Correct answer:
Statement II only
Explanation:
Statement I asserts that two pairs of consecutive angles are congruent. This does not prove that the figure is a parallelogram. For example, an isosceles trapezoid has two pairs of congruent base angles, which are consecutive.
Statement II asserts that both pairs of opposite angles are congruent. By a theorem of geometry, this proves the quadrilateral to be a parallelogram.
Statement III asserts that two pairs of consecutive angles are supplementary. While all parallelograms have this characteristic, trapezoids do as well, so this does not prove the figure a parallelogram.
The correct response is Statement II only.
Example Question #1 : Angles And Quadrilaterals
You are given Parallelogram with . Which of the following statements, along with what you are given, would be enough to prove that Parallelogram is a rectangle?
I)
II)
III)
Possible Answers:
Statement III only
Statement I, II, or III
Statement II only
Statement I only
Correct answer:
Statement I, II, or III
Explanation:
A rectangle is defined as a parallelogram with four right, or , angles.
Since opposite angles of a paralellogram are congruent, if one angle measures , so does its opposite. Since consecutive angles of a paralellogram are supplementary - that is, their degree measures total - if one angle measures , then both of the neighboring angles measure .
In short, in a parallelogram, if one angle is right, all are right and the parallelogram is a rectangle. All three statements assert that one angle is right, so from any one, it follows that the figure is a rectangle. The correct response is Statements I, II, or III.
Note that the sidelengths are irrelevant.
Example Question #4 : Angles And Quadrilaterals
If the rectangle has a width of 5 and a length of 10, what is the area of the rectangle?
Possible Answers:
Correct answer:
Explanation:
Write the area for a rectangle.
Substitute the given dimensions.
The answer is:
Example Question #2 : Angles And Quadrilaterals
In the figure below, find the measure of the largest angle.
3
Possible Answers:
Correct answer:
Explanation:
Recall that in a quadrilateral, the interior angles must add up to .
Thus, we can solve for :
Now, to find the largest angle, plug in the value of into each expression for each angle.
The largest angle is .
Learning Tools by Varsity Tutors
|
__label__pos
| 0.998594 |
How to Extend your Laptop’s lifespan if you’re using Windows XP?
0
1429
How to Extend your Laptop's lifespan if you're using Windows XP?
Did you know that you can extend the lifespan of your computer or laptop for years? Can't imagine it? Yes, it is possible. How? Simple, just like any other appliance in our home, it will last longer if maintenance and care are regularly done. What are those? First, let's go back to the basic cleaning of appliances.
The question: How to clean my laptop or computer?
According to Wisco Computing, dust and grime are accumulating daily, anywhere we may be. This is also true of our computers, laptops or other electronic gadgets which can cause overheating, random crashes and lock up. Dust and grime build up will eventually result in hardware failure.
For any electronic device it is always best to disconnect any power source, AC supply, battery, etc. before starting. Any type of appliance cleaner, (even just plain water), will do for external cleaning of your laptop or computer, but Do Not use any cleaner on the display screen unless it is recommended by the manufacturer and states that it is safe for plastics!
You may lightly dampen a cloth and use it to wipe down the external surfaces, but do not spray it directly onto the surface as excess liquid could sleep between gaps, getting to the internal components.
For internal surfaces it is best to use a brush or compressed air to dislodge dust build up, and use a dry cloth or Q-tip to wipe away grime build up. If you are still unsure of what to do, I highly suggest that you ask someone who's knowledgeable about computer hardware.
Is it enough that we’ve cleaned the external case and internal components?
No. You also need to clean up your hard drives, eliminating files that are no longer in use.
What should I use to clean the hard drive?
A Disk Cleanup Utility like cleanmgr.exe. What is that? It is a computer maintenance utility which was created for the sole purpose of cleaning disk space on a computer’s hard drive. This tool helps you identify what files can be safely deleted from your disk. The Disk Cleanup Utility or cleanmgr.exe is available in most Windows operating systems, (although it must be "turned-on"in Windows Server 2008 R2 from the Control Panel's "Programs and Features"menu).
What’s the importance of doing this?
It will extend your computer or laptop's lifespan and can greatly improve its performance.
How óften should I do this?
It would be best to do it every day or several times a week.
What files can be deleted?
a. Downloaded Programs Files –some of these are ActiveX controls and Java applets downloaded automatically from the Internet when you view certain pages. They are temporarily stored in the Downloaded Program Files folder on your hard disk.
b. Temporary Internet Files – this contains web pages stored on your hard disk for quick viewing. Your personalized settings for web pages will be left intact.
c. Recycle Bin – this contains files you have marked for deletion from your computer which will only be permanently removed when you empty the Recycle Bin.
d. Temporary Files – these are temporary information files which programs sometimes store in a TEMP folder. You can simply delete temporary files that have not been modified in over a week
e. WebClient/Publisher Temporary Files – The WebClient/Publisher service maintains a cache of frequently accessed files on the disk. These files are kept locally for performance reasons only, and can be deleted safely.
f. Compress Old Files – these are old files which you haven’t accessed for a while and can be compressed by Windows to save more disk space while still allowing you to access them anytime.
g. Catalog Files for the Content Indexer – the indexing service speeds up and enriches file searches by maintaining an index of the files on disk. Catalog files are left over from a previous indexing operation and can be deleted safely.
How do I do disk clean up using Windows XP?
Click on Start > All Programs > Accessories > System Tools > Disk Clean up.
Choose the disk drive that you wanted to clean up. On the image below, I have two disk drive, C and D.
After choosing, click OK. Then the computer will process the disk clean up for you.
SHARE
Related Tips
|
__label__pos
| 0.92082 |
Python. Словари. Основные понятия. Характеристики. Создание словарей. Доступ к значениям в словаре
Словари. Основные понятия. Характеристики. Создание словарей. Доступ к значениям в словаре
Содержание
1. Особенности словарей. Преимущества применения
В языке Python существует возможность использовать словари. Словари – это встроенный тип данных, который является ассоциативным массивом или хешем и базируется на отображении пар типа (ключ:значение).
Преимущества применения словарей:
• с помощью словарей можно разрабатывать эффективные структуры данных;
• для словарей не нужно писать вручную алгоритмы поиска данных, поскольку эти операции уже реализованы;
• словари могут содержать объединенные данные в виде записей;
• словари эффективны при представлении разреженных структур данных.
Основные характеристики словарей следующие:
• в словарях доступ к элементам выполняется по ключу, а не по индексу. Словари определяют взаимосвязь пар ключ:значение. По ключу происходит доступ к значению. Если в словаре реализовывать доступ к значению по индексу, то в этом случае индекс представляет собой ключ, а не смещение относительно начала;
• словари представляют неупорядоченные коллекции произвольных объектов. В словарях элементы (ключи) сохраняются в неопределенном порядке. Порядок формирования элементов в словаре определяет интерпретатор. Это необходимо для обеспечения более быстрого поиска.
• словари имеют переменную длину. Количество элементов в словаре может увеличиваться или уменьшаться;
• гетерогенность – словари могут содержать объекты любых типов;
• произвольное количество вложений – словари имеют возможность создания произвольного количества уровней вложений, поскольку словари могут содержать списки, другие словари и т.п.;
• словари относятся к категории изменяемых объектов. Поэтому, в словарях нет смысла использовать операции которые имеют фиксированный порядок следования элементов (например, конкатенация);
• словари являются таблицами ссылок на объекты или хеш-таблицами и относятся к отображаемым объектам. Это означает, что в словарях объекты отображают (представляют) ключи на значение.
2. Отличия между словарями и списками
Между словарями и списками существуют следующие основные отличия:
• списки являются упорядоченными коллекциями, словари не являются упорядоченными коллекциями;
• в списках элементы извлекаются с помощью смещения, которое определяет позицию элемента в списке. В словарях элементы вытягиваются с помощью ключа;
• в отличие от списков, в словарях нет поддержки операций над последовательностями (например, конкатенация, получение среза и прочее);
• списки являются массивами ссылок на объекты. Словари являются массивами неупорядоченных таблиц ссылок на объекты, поддерживающим доступ к объектам по ключу.
3. Операции и методы обработки словарей. Перечень
В языке Python существует огромное разнообразие операций и методов для обработки словарей. Все эти средства вкратце перечислены здесь:
• стандартный метод len() – определяет количество элементов в списке;
• операция D[key] – доступ к элементу словаря D по значению ключа key;
• операция del – удаление элемента по ключу;
• операции in, not in – определения наличия или отсутствия ключа в словаре;
• метод iter() – получить итератор по ключам словаря;
• метод clear() – удалить все элементи из словаря;
• метод copy() – вернуть копию словаря;
• метод dict.fromkeys() – создать новый словарь из ключей и значений;
• метод get() – получить значение по ключу;
• метод items() – вернуть представление элементов словаря;
• метод keys() – получить новое представление ключей словаря;
• метод pop() – удаление элементов из словаря с возвратом значения;
• метод popitem() – вытягивание из словаря произвольной пары ключ:значение;
• метод setdefault() – установить элемент по умолчанию;
• метод update() – обновление словаря по заданному списку пар ключ:значение;
• метод values() – получить список значений из словаря.
Более подробно использование вышеперечисленных операций и методов можно изучить в темах:
4. Особенности представления ключей числовыми типами
В словарях, в качестве ключей могут использоваться числовые типы: целый или вещественный (с плавающей запятой). В случае представления ключей числовыми типами можно выделить следующие особенности:
• при поиске нужного значения по ключу используется простая операция сравнения. Если в качестве ключа указывать значение разных числовых типов, например 1 и 1.0, то такие значения считаются взаимозаменяемыми (обращаются к одному и тому же элементу словаря);
• представление ключа числовым типом с плавающей запятой не рекомендуется, поскольку значение этих типов есть приближенными. Если в качестве ключа нужно указать числовой тип, то для этого целесообразно использовать целые числовые типы.
5. Какие значения не могут использоваться в качестве ключей?
В качестве ключей нельзя использовать изменяемые типы объектов. Например, списки, словари и прочие изменяемые типы не могут использоваться в качестве ключей. Однако, в качестве значений эти изменяемые типы можно использовать.
6. Какие существуют способы создания словаря?
В языке Python, чтобы создать словарь можно использовать один из следующих способов:
• с помощью оператора присваивания = и фигурных скобок, в которых через запятую размещаются пары ключ:значение;
• с помощью конструктора dict() класса dict.
7. Создание словаря с помощью оператора присваивания =. Примеры
Словарь можно создать удобным (естественном) способом с помощью оператора присваивания =.
# Создание словаря
# с помощью оператора присваивания
# 1. Пустой словарь
A = {}
print('A = ', A)
# 2. Словарь из 3-х элементов
# 2.1. Содержит целочисленные ключи и строки значений
B = { 1:'Mon', 2:'Tue', 3:'Wed', 4:'Thu',
5:'Fri', 6:'Sat', 7:'Sun' }
print('B = ', B)
print('B[2] = ', B[2]) # B[2] = Tue
# 2.2. Содержит строковые ключи и вещественные значения
C = { 'Pi':3.1415, 'Exp':1.71 }
print('C = ', C)
print('C[Exp] = ', C['Exp']) # C[Exp] = 1.71
# 2.3. Словарь, который содержит наборы значений
D = { 'Table1':[ 1, 2, 4], 'Table2':[ 8, -2, 2.33] }
print('D = ', D)
print('D[Table1] = ', D['Table1']) # D[Table1] = [1, 2, 4]
print('D[Table2] = ', D['Table2']) # D[Table2] = [8, -2, 2.33]
Результат выполнения программы
A = {}
B = {1: 'Mon', 2: 'Tue', 3: 'Wed', 4: 'Thu', 5: 'Fri', 6: 'Sat', 7: 'Sun'}
B[2] = Tue
C = {'Pi': 3.1415, 'Exp': 1.71}
C[Exp] = 1.71
D = {'Table1': [1, 2, 4], 'Table2': [8, -2, 2.33]}
D[Table1] = [1, 2, 4]
D[Table2] = [8, -2, 2.33]
8. Создание словаря с помощью конструктора dict(). Пример
Словарь может быть создан с помощью одного из конструкторов dict(), которые реализованы в классе dict. В соответствии с документацией Python в классе dict используется 3 конструктора, которые имеют следующую общую форму
dict(**keyword_arg)
dict(mapping, **keyword_arg)
dict(iterable, **keyword_arg)
где
• keyword_arg – необязательный ключевой аргумент. Если конструктор вызывается без ключевого аргумента (например, dict()), то создается пустой словарь. Если в конструкторе задается несколько ключевых аргументов, то они разделяются запятой (в общей форме конструктора использованы символы **);
• mapping – отображаемый объект, на основе которого создается словарь;
• iterable – итерированный объект, на основе которого создается словарь.
В зависимости от аргументов, интерпретатор Python вызывает соответствующий конструктор. Во всех конструкторах первый объект каждого элемента становится ключом, второй становится соответствующим значением.
В случае, если ключ встречается больше одного раза, то принимается последнее значение, установленное по этому ключу.
Пример. Демонстрируется создание словарей разными способами.
# Создание словаря с помощью конструктора dict
# 1. Разновидность конструктора dict(**keyword_arg)
# 1.1. Создать пустой словарь
A = dict()
print('A = ', A) # A = {}
# 1.2. Создать список
# {'Winter': 1, 'Spring': 2, 'Summer': 3, 'Autumn': 4}
SEASONS = dict( Winter=1, Spring=2, Summer=3, Autumn=4)
print('SEASONS = ', SEASONS)
# ---------------------------------------------------
# 2. Разновидность конструктора dict(mapping, **keyword_arg)
# 2.1. Применение функции zip()
DAYS = [ 1, 2, 3 ]
DAYS_NAMES = [ 'Mon', 'Tue', 'Wed' ]
DICT_DAYS = dict(zip(DAYS, DAYS_NAMES))
print(DICT_DAYS) # {1: 'Mon', 2: 'Tue', 3: 'Wed'}
# 2.2. Использование пар (key:value)
DICT_DAYS = dict([(1,'Mon'), (2,'Tue'), (3,'Wed')])
print(DICT_DAYS) # {1: 'Mon', 2: 'Tue', 3: 'Wed'}
# ---------------------------------------------------
# 3. Разновидность конструктора dict(iterable, **keyword_arg)
# Применение конструктора по образцу B = {}
B = dict({1:'Mon', 2:'Tue', 3:'Wed'})
print('B = ', B) # B = {1: 'Mon', 2: 'Tue', 3: 'Wed'}
Результат выполнения программы
A = {}
SEASONS = {'Winter': 1, 'Spring': 2, 'Summer': 3, 'Autumn': 4}
{1: 'Mon', 2: 'Tue', 3: 'Wed'}
{1: 'Mon', 2: 'Tue', 3: 'Wed'}
B = {1: 'Mon', 2: 'Tue', 3: 'Wed'}
9. Каким образом осуществляется доступ к значению по его ключу?
В словаре, если известен ключ, то доступ к значению по эту ключу можно получить с помощью операции []. В этот же способ можно изменить значение, если известен ключ, который соответствует этому значению.
Например.
# Доступ к значению в словаре
# 1. Создать список
# {'Winter': 1, 'Spring': 2, 'Summer': 3, 'Autumn': 4}
SEASONS = dict( Winter=1, Spring=2, Summer=3, Autumn=4)
# 2. Вывести значение по ключу 'Spring'
value = SEASONS['Spring'] # value = 2
# 3. Изменить значение в словаре по его ключу
# Задается словарь с номерами и названиями помещений
D = { 233:'Lecture hall', 234:'Laboratory' }
# Изменение 'Laboratory' на 'Programming laboratory'
D[234] = 'Programming laboratory'
10. Примеры избежания ошибок доступа по несуществующему ключу
При работе со словарями, возможна ситуация, когда происходит доступ по ключу, которого нет в словаре. В этом случае в программе возникает ошибка и генерируется исключение KeyError.
Чтобы избежать ошибки доступа по несуществующемоу ключу можно использовать один из трех способов:
• предварительно проверить наличие ключа с помощью конструкции if;
• использовать блок try-except для обработки исключения KeyError;
• использовать метод get(), который, в случае несуществующего ключа, возвращает значение по умолчанию.
В примере демонстрируются все три способа.
Пример.
# Словари. Избежание доступа по несуществующему ключу
# Исходный словарь
Days = { 1:'Mon', 2:'Tue', 3:'Wed',
4:'Thu', 5:'Fri', 6:'Sat', 7:'Sun' }
# 1. Попытка обратиться к несуществующему дню
# day = Days[8] # возникает исключение KeyError: 8
# 2. Способ 1. Обращение к несуществующему дню,
# использование оператора if
day = int(input("Enter day: "))
if day in Days:
print("day = ", Days[day])
else:
print("1. Using if statement: Error.")
# 3. Способ 2. Обращение к несуществующему ключу,
# использование конструкции try-except
try:
print('day = ', Days[day])
except KeyError:
print('2. Using try-except statement: Error.')
# 4. Способ 3. Обращение к несуществующему ключу,
# использование метода get()
print('3. Using get() method: ', Days.get(day))
11. Примеры вложенных словарей
В качестве значений словари могут содержать другие вложенные словари. Например:
# Вложенные словари
# Пример 1.
# Внутренний словарь
Worker_Type = { 'Manager':1, 'SupportStaff':2 }
# Внешний словарь
# {'Worker': {'Manager': 1, 'SupportStaff': 2}}
Worker_Dict = { 'Worker' : Worker_Type }
print(Worker_Type)
print(Worker_Dict)
# Пример 2.
Figures = { 'Figure' : { 1:'Circle', 2:'Triangle', 3:'Rombus' } }
print(Figures)
Результат выполнения программы
{'Manager': 1, 'SupportStaff': 2}
{'Worker': {'Manager': 1, 'SupportStaff': 2}}
{'Figure': {1: 'Circle', 2: 'Triangle', 3: 'Rombus'}}
Связанные темы
|
__label__pos
| 0.962479 |
DanielRoberts DanielRoberts - 8 months ago 39
C# Question
Ask user to re-input a response if it is not a valid response C#
So I am creating a game that is a command line game in C#, using Visual Studio. What I am wanting to do is ask a question and get the response either 'Y' or 'N'.
I want to be able to have it so that if the response is not either of those, it just repeats the question for them to give a correct response. I am not able to find a good way to do this, as I have not had to do this before. Can anyone help me with this?
Thanks in advance!
Answer
You have to simply re-ask for the response until you get the valid one. A method like this can work for your case:
static void GetResponse()
{
Console.WriteLine("Do you wish to continue? [Y/N]");
var keyInfo = Console.ReadKey(); //Read a single key from the user. The ReadKey method displays the pressed key on the console.
//Check the pressed key and if it's not y or n, re-ask for it.
while (keyInfo.KeyChar.ToString().ToLower() != "y" && keyInfo.KeyChar.ToString().ToLower() != "n")
{
Console.WriteLine();
Console.WriteLine("Invalid choice. Valid choices are: Y or N");
keyInfo = Console.ReadKey(); //Retake the input.
}
Console.WriteLine(); //For formatting purposes.
}
And then from your Main method, call the GetResponse method:
static void Main(string[] args)
{
GetResponse();
//Valid response received. Do something here...
}
|
__label__pos
| 0.720967 |
LC 1798. 你能构造出连续值的最大数目
题目描述
这是 LeetCode 上的 1798. 你能构造出连续值的最大数目 ,难度为 中等
给你一个长度为 n 的整数数组 coins,它代表你拥有的 n 个硬币。第 i 个硬币的值为 coins[i]。如果你从这些硬币中选出一部分硬币,它们的和为 x ,那么称,你可以构造出 x
请返回从 0 开始(包括 0 ),你最多能构造出多少个连续整数。
你可能有多个相同值的硬币。
示例 1:
1
2
3
4
5
6
7
8
输入:coins = [1,3]
输出:2
解释:你可以得到以下这些值:
- 0:什么都不取 []
- 1:取 [1]
从 0 开始,你可以构造出 2 个连续整数。
示例 2:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
输入:coins = [1,1,1,4]
输出:8
解释:你可以得到以下这些值:
- 0:什么都不取 []
- 1:取 [1]
- 2:取 [1,1]
- 3:取 [1,1,1]
- 4:取 [4]
- 5:取 [4,1]
- 6:取 [4,1,1]
- 7:取 [4,1,1,1]
从 0 开始,你可以构造出 8 个连续整数。
示例 3:
1
2
3
输入:nums = [1,4,10,3,1]
输出:20
提示:
• $coins.length = n$
• $1 <= n <= 4 \times 10^4$
• $1 <= coins[i] <= 4 \times 10^4$
数学
n 的数据范围为 $4 \times 10^4$,必然不是考察我们使用 coins 来构造单个数值 x 的逻辑,因为「遍历 + 逐个构造验证」的做法会超时,因此只能是考察我们能否推导出整段构造的相关性质
假设我们已经用前 k 个数值构造出连段 $[0, x]$ 中的任意数,当增加第 $k + 1$ 个数值时,还能否进行连续构造:
• 若不能,则连续构造中断,答案为 $[0, x]$,共 $x + 1$ 个
• 若能,则再考虑连续构造的右边界会到哪个地方
由于题目允许我们任意使用 coins 中的数,同时整段构造又是不断扩大 $[0, x]$ 中右边界的过程(从小到大),为了方便,我们可以先对 coins 进行排序。
不失一般性,假设我们已经使用 coins 中的前 k 个数构造出了范围 $[0, x]$ 中的任意数。当考虑增加一个 $coins[k]$ 后,我们可在每一个原有构造方案中增加 $coins[k]$,这样由 $coins[k]$ 所拓展出的构造范围便是 $[coins[k], coins[k] + x]$。
原来的连续数是 $[0, x]$,若要保证连续,我们需要保证 $coins[k] <= x + 1$,此时构造连续段也从 $[0, x]$ 变为 $[0, coins[k] + x]$。
即 $coins[k] > k + 1$ 是中断构造必要条件,再结合我们实现对 coins 进行了排序,容易证明如果 $coins[k]$ 都不能满足 $coins[k] <= x + 1$,排在 $coins[k]$ 后面比其大的数均不能满足要求。
一些细节:起始时,我们可以不选 coins 中的任何数,即此时连续构造范围为 $[0, 0]$,随后从小到大遍历 coins,检查当前 $coins[i]$ 是否会中断构造。
Java 代码:
1
2
3
4
5
6
7
8
9
10
11
class Solution {
public int getMaximumConsecutive(int[] coins) {
Arrays.sort(coins);
int ans = 0;
for (int c : coins) {
if (c > ans + 1) break;
ans += c;
}
return ans + 1;
}
}
TypeScript 代码:
1
2
3
4
5
6
7
8
9
function getMaximumConsecutive(coins: number[]): number {
coins.sort((a,b)=>a-b)
let ans = 0
for (const c of coins) {
if (c > ans + 1) break
ans += c
}
return ans + 1
}
• 时间复杂度:$O(n\log{n})$
• 空间复杂度:$O(\log{n})$
最后
这是我们「刷穿 LeetCode」系列文章的第 No.1798 篇,系列开始于 2021/01/01,截止于起始日 LeetCode 上共有 1916 道题目,部分是有锁题,我们将先把所有不带锁的题目刷完。
在这个系列文章里面,除了讲解解题思路以外,还会尽可能给出最为简洁的代码。如果涉及通解还会相应的代码模板。
为了方便各位同学能够电脑上进行调试和提交代码,我建立了相关的仓库:https://github.com/SharingSource/LogicStack-LeetCode
在仓库地址里,你可以看到系列文章的题解链接、系列文章的相应代码、LeetCode 原题链接和其他优选题解。
|
__label__pos
| 0.958799 |
Portada » JQuery biblioteca de JavaScript » Sistema de chat en vivo con Ajax, PHP y MySQL
Sistema de chat en vivo con Ajax, PHP y MySQL
Sistema de chat en vivo con Ajax, PHP y MySQL. Hoy en día la comunicación prima en todo sistema web o aplicación de chat, estos sistemas se utiliza principalmente para comunicarse con personas, potenciales clientes y usuarios de algún servicio, etc.
Ahora, tener un chat en nuestra organización es muy importante para cualquier tipo de negocio, ya que la mayoría de servicios por internet tiene su sistema de chat integrado en sus sitios web para una constante comunicación con sus clientes.
¿Cuál es la finalidad de usar un Chat en mi Sistema Web?
Básicamente, para ayudarles con respecto a los servicios y resolver problemas que el usuario o cliente pueda experimentar al adquirir un producto o servicio que estamos ofreciendo.
Por lo tanto, si usted está buscando para desarrollar su propio sistema de chat, entonces estás aquí en el lugar correcto en donde le mostraremos de una manera simple como implementar un chat para sus sistema web.
En este artículo web aprenderás cómo desarrollar un modelo de chat en vivo usando herramientas y lenguajes de programación existentes en el mercado como ser el: Ajax, PHP y MySQL.
Sistema de Chat con PHP y MySQL
Sistema de Chat con PHP y MySQL
Sistema de chat en vivo con Ajax, PHP y MySQL
A continuación, veremos una serie de pasos que necesitaremos para tener nuestro propio CHAT en línea.
1. Index.php
2. login. php
3. chat. js
4. Container.php
5. chat_action. php
6. logout. php
7. Chat. php
8. Footer.php
9. logout.php
10. php_chat.sql
IMPLEMENTACIÓN DE NUESTRO SISTEMA CHAT EN VIVO
Veremos paso a paso los recursos que necesitaremos para tener nuestro CHAT operativo.
a) Paso 1: Crear tablas de base de datos
Nuestra base de datos tendrá como nombre «php_chat.sql» y en ella vamos a crear tablas de base de datos MySQL que se utilizan para almacenar la información del sistema de chat. Por lo tanto, vamos a crear la tabla chat_users para almacenar a los usuarios y sus respectivas sesiones.
CREATE TABLE `chat_users` (
`userid` int(11) NOT NULL,
`username` varchar(255) NOT NULL,
`password` varchar(255) NOT NULL,
`avatar` varchar(255) NOT NULL,
`current_session` int(11) NOT NULL,
`online` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
A continuación, también insertamos registros de usuarios como ejemplo para que funcione nuestro chat.
INSERT INTO `chat_users` (`userid`, `username`, `password`, `avatar`, `current_session`, `online`) VALUES
(1, 'Rose', '123', 'user1.jpg', 3, 0),
(2, 'Smith', '123', 'user2.jpg', 1, 0),
(3, 'adam', '123', 'user3.jpg', 1, 0),
(4, 'Merry', '123', 'user4.jpg', 0, 0),
(5, 'katrina', '123', 'user5.jpg', 0, 0),
(6, 'Rhodes', '123', 'user6.jpg', 0, 0);
Siguiendo con la instalación, crearemos la tabla de «chat» para almacenar los detalles del chat.
CREATE TABLE `chat` (
`chatid` int(11) NOT NULL,
`sender_userid` int(11) NOT NULL,
`reciever_userid` int(11) NOT NULL,
`message` text NOT NULL,
`timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`status` int(1) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Crearemos «chat_login_details» tabla para almacenar la actividad de chat iniciada por el usuario.
CREATE TABLE `chat_login_details` (
`id` int(11) NOT NULL,
`userid` int(11) NOT NULL,
`last_activity` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`is_typing` enum('no','yes') NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
b) Paso 2: Iniciar sesión de usuario de chat
En esta sección vamos a crear la interfaz de inicio de sesión de chat (Formulario HTML5) en login.php, que nos permitirá iniciar sesión en el sistema de chat.
<div class="row">
<div class="col-sm-4">
<h4>Chat Login:</h4>
<form method="post">
<div class="form-group">
<?php if ($loginError ) { ?>
<div class="alert alert-warning"><?php echo $loginError; ?></div>
<?php } ?>
</div>
<div class="form-group">
<label for="username">User:</label>
<input type="username" class="form-control" name="username" required>
</div>
<div class="form-group">
<label for="pwd">Password:</label>
<input type="password" class="form-control" name="pwd" required>
</div>
<button type="submit" name="login" class="btn btn-info">Login</button>
</form>
</div>
</div>
c) Paso 3: Crear HTML del sistema de chat y librerías externas
En el fichero index.php, vamos a incluir librerías de estilos llamado Bootstrap, jQuery y archivos CSS para crear la interfaz del sistema de chat con bootstrap para obtener una interfaz profesional.
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<link rel='stylesheet prefetch' href='https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.6.2/css/font-awesome.min.css'>
<link href="css/style.css" rel="stylesheet" id="bootstrap-css">
<script src="js/chat.js"></script>
A continuación, poco después de que el usuario inicie sesión, este se re direccionará a « index.php» donde se mostrará el sistema de chat con la lista de usuarios a la izquierda y los detalles del chat de usuario a la derecha.
Para terminar vamos a obtener detalles del chat actual del usuario de chat activo utilizando el método de chat getUserChat() de chat.php y mostrar los detalles del chat.
<div class="contact-profile" id="userSection">
<?php
$userDetails = $chat->getUserDetails($currentSession);
foreach ($userDetails as $user) {
echo '<img src="userpics/'.$user['avatar'].'" alt="" />';
echo '<p>'.$user['username'].'</p>';
echo '<div class="social-media">';
echo '<i class="fa fa-facebook" aria-hidden="true"></i>';
echo '<i class="fa fa-twitter" aria-hidden="true"></i>';
echo '<i class="fa fa-instagram" aria-hidden="true"></i>';
echo '</div>';
}
?>
</div>
<div class="messages" id="conversation">
<?php
echo $chat->getUserChat($_SESSION['userid'], $currentSession);
?>
</div>
d) Paso 4: Gestionar respuesta de chat de usuario
En esta sección manejaremos la funcionalidad de envío de mensajes en chat.js cuando haga clic en el botón «Enviar mensaje» y la función de llamada sendMessage()
$(document).on("click", '.submit', function(event) {
var to_user_id = $(this).attr('id');
to_user_id = to_user_id.replace(/chatButton/g, "");
sendMessage(to_user_id);
});
En el fichero chat_action.php, llamaremos al método de chat insertChat() para insertar los detalles del chat a la base de datos MySQL para su posterior consulta.
<?php
session_start();
include ('Chat.php');
if($_POST['action'] == 'insert_chat') {
$chat->insertChat($_POST['to_user_id'], $_SESSION['userid'], $_POST['chat_message']);
}
?>
e) Paso 5: Actualizar información de la lista de usuarios de chat
Siguiendo el proceso de instalación, en el archivo chat.js, crearemos la función updateUserList() para actualizar la información de la lista de usuarios de chat como el estado del usuario en línea haciendo la solicitud Ajax a chat_action.php.
function updateUserList() {
$.ajax({
url:"chat_action.php",
method:"POST",
dataType: "json",
data:{action:'update_user_list'},
success:function(response){
var obj = response.profileHTML;
Object.keys(obj).forEach(function(key) {
// update user online/offline status
if($("#"+obj[key].userid).length) {
if(obj[key].online == 1 && !$("#status_"+obj[key].userid).hasClass('online')) {
$("#status_"+obj[key].userid).addClass('online');
} else if(obj[key].online == 0){
$("#status_"+obj[key].userid).removeClass('online');
}
}
});
}
});
}
f) Paso 6: Actualizar detalles de chat de usuario activo
En chat.js, crearemos la función updateUserChat() para actualizar los detalles del chat de usuario activo haciendo la solicitud Ajax a chat_action. php.
function updateUserChat() {
$('li.contact.active').each(function(){
var to_user_id = $(this).attr('data-touserid');
$.ajax({
url:"chat_action.php",
method:"POST",
data:{to_user_id:to_user_id, action:'update_user_chat'},
dataType: "json",
success:function(response){
$('#conversation').html(response.conversation);
}
});
});
}
g) Paso 7: Actualizar recuento de mensajes no leídos del usuario
Fichero chat.js, crearemos la función updateUnreadMessageCount() para actualizar el recuento de mensajes no leídos del usuario haciendo la solicitud Ajax a chat_action.php.
function updateUnreadMessageCount() {
$('li.contact').each(function(){
if(!$(this).hasClass('active')) {
var to_user_id = $(this).attr('data-touserid');
$.ajax({
url:"chat_action.php",
method:"POST",
data:{to_user_id:to_user_id, action:'update_unread_message'},
dataType: "json",
success:function(response){
if(response.count) {
$('#unread_'+to_user_id).html(response.count);
}
}
});
}
});
}
h) Paso 8: Actualizar estado de escritura de usuario
Archivo chat.js, manejaremos el estado de mecanografía del usuario haciendo solicitud Ajax a chat_action.php para actualizar escribiendo como sí si el usuario escribe en el evento de enfoque de entrada.
$(document).on('focus', '.message-input', function(){
var is_type = 'yes';
$.ajax({
url:"chat_action.php",
method:"POST",
data:{is_type:is_type, action:'update_typing_status'},
success:function(){
}
});
});
i) Paso 9: Gestionar sesión de usuario de chat
En cierre.php, manejaremos la funcionalidad de cierre de sesión de usuario y actualizaré el estado offline del usuario.
<?php
session_start();
include ('Chat.php');
$chat = new Chat();
$chat->updateUserOnline($_SESSION['userid'], 0);
$_SESSION['username'] = "";
$_SESSION['userid'] = "";
$_SESSION['login_details_id']= "";
header("Location:index.php");
?>
Sistema de chat en vivo con Ajax, PHP y MySQL
Sistema de chat en vivo con Ajax, PHP
CONCLUSIÓN SISTEMA CHAT
La implementación del CHAT en vivo en nuestros sistemas de ventas es muy recomendable, una de las razones para tal implementación son las siguientes:
1. Resolver las dudas de nuestros clientes y/ usuarios.
2. Si un usuario intenta realizar una consulta antes de realizar una compra para despejar su duda.
3. Plantear sugerencias para nuestros usuarios y/o clientes.
A continuación les dejare un archivo para que puedan descargar e implementarlo. El fichero que les dejare incluye base de datos.
DESCARGA DEL SISTEMA CHAT EN VIVO
Pueden descargar el código fuente del script y también incluye base de datos.
Descargar Código Fuente
[purchase_link id=»6898″ text=»Comprar» style=»button» color=»blue»]
¿De cuánta utilidad te ha parecido este contenido?
¡Haz clic en una estrella para puntuarlo!
Promedio de puntuación 0 / 5. Recuento de votos: 0
Hasta ahora, ¡no hay votos!. Sé el primero en puntuar este contenido.
Scroll al inicio
Esta web utiliza cookies propias para su correcto funcionamiento. Contiene enlaces a sitios web de terceros con políticas de privacidad ajenas que podrás aceptar o no cuando accedas a ellos. Al hacer clic en el botón Aceptar, acepta el uso de estas tecnologías y el procesamiento de tus datos para estos propósitos.
Privacidad
|
__label__pos
| 0.943873 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free.
I have seen code where the constructor has been declared as private while the destructor is public. What is the use of such a declaration? Is the destructor required to be public so that during inheritance the calls can be possible or is it a bug in the code?
The question might seem to be a bit short on information, but what I really want to know is if having a public destructor when the constructor is required to be private abides by the C++ rules?
share|improve this question
1
A class only has one destructor, but it has several constructors (default, copy, move (since C++11), and possibly more user-defined). In the code you have seen, were all constructors declared private? Also, were they given a definition or left undefined? Also, you mentioned inheritance: was the class a base class? (You can add a code snippet to your question for illustration) – gx_ Aug 31 '13 at 8:24
The constructors were all private; default constructor has been given a definition. I think I'll not be able to provide the code snippet as it's an industry code, but, I think the information in the question suffices. – Sankalp Aug 31 '13 at 8:27
how exactly would you "inherit" from a base class where you don't have at least protected access to a constructor of that base? Regarding why someone would make a constructor private, one common scenario is a singleton model class (ugh, dirty word, that singleton). A static class variable is declared of the same class type. It can construct, but no one else can. – WhozCraig Aug 31 '13 at 8:31
Regarding whether a private destructor is allowed I don't suppose this helps explain it – WhozCraig Aug 31 '13 at 8:37
(As for "industry code", it's ok to make up a simplified example (with names like "Foo" and snipped implementation) when it's just for illustration, but whatever.) Looking at how the class is actually used (how external code gets an instance (or a pointer to one), where destruction is triggered), not only at its definition, may also help you understand the why. – gx_ Aug 31 '13 at 9:07
6 Answers 6
First thing: the destructor can be private. Here is great example of a private destructor.
having a public destructor when the constructor is required to be private abides by the C++ rules?
It's totally working in C++. In fact, a great example of this scenario is the singleton pattern, where the constructor is private and the destructor is public.
share|improve this answer
1
Exactly. Singleton was what I had in mind while asking this question. But, wherever singleton is explained, the destructor has been made private raising further doubts and questions. – Sankalp Aug 31 '13 at 8:39
Short Answer
Creating a constructor as private but the destructor as public has many practical uses.
You can use this paradigm to:
Long Answer
Above I hinted that you can use private constructors and destructors to implement several design patterns. Well, here's how...
Reference Counting
Using private destructor within an object lends itself to a reference counting system. This lets the developer have stronger control of an objects lifetime.
class MyReferenceObject
{
public:
static MyReferenceObject* Create()
{
return new MyReferenceObject();
}
void retain()
{
m_ref_count++;
}
void release()
{
m_ref_count--;
if (m_ref_count <= 0)
{
// Perform any resource/sub object cleanup.
// Delete myself.
delete this; // Dangerous example but demonstrates the principle.
}
}
private:
int m_ref_count;
MyReferenceObject()
{
m_ref_count = 1;
}
~MyReferenceObject() { }
}
int main()
{
new MyReferenceObject(); // Illegal.
MyReferenceObject object; // Illegal, cannot be made on stack as destructor is private.
MyReferenceObject* object = MyReferenceObject::Create(); // Creates a new instance of 'MyReferenceObject' with reference count.
object->retain(); // Reference count of 2.
object->release(); // Reference count of 1.
object->release(); // Reference count of 0, object deletes itself from the heap.
}
This demonstrates how an object can manage itself and prevent developers from corrupting the memory system. Note that this is a dangerous example as MyReferenceObject deletes itself, see here for a list of things to consider when doing this.
Singleton
A major advantage to private constructors and destructors within a singleton class is that enforces the user to use it in only the manner that the code was design. A rogue singleton object can't be created (because it's enforced at compile time) and the user can't delete the singleton instance (again, enforced at compile time).
For example:
class MySingleton
{
public:
MySingleton* Instance()
{
static MySingleton* instance = NULL;
if (!instance)
{
instance = new MySingleton();
}
return instance;
}
private:
MySingleton() { }
~MySingleton() { }
}
int main()
{
new MySingleton(); // Illegal
delete MySingleton::Instance(); // Illegal.
}
See how it is almost impossible for the code to be misused. The proper use of the MySingleton is enforce at compile time, thus ensuring that developers must use MySingleton as intended.
Factory
Using private constructors within the factory design pattern is an important mechanism to enforce the use of only the factory to create objects.
For example:
class MyFactoryObject
{
public:
protected:
friend class MyFactory; // Allows the object factory to create instances of MyFactoryObject
MyFactoryObject() {} // Can only be created by itself or a friend class (MyFactory).
}
class MyFactory
{
public:
static MyFactoryObject* MakeObject()
{
// You can perform any MyFactoryObject specific initialisation here and it will carry through to wherever the factory method is invoked.
return new MyFactoryObject();
}
}
int main()
{
new MyFactoryObject(); // Illegal.
MyFactory::MakeObject(); // Legal, enforces the developer to make MyFactoryObject only through MyFactory.
}
This is powerful as it hides the creation of MyFactoryObject from the developer. You can use the factory method to perform any initilisation for MyFactoryObject (eg: setting a GUID, registering into a DB) and anywhere the factory method is used, that initilisation code will also take place.
Summary
This is just a few examples of how you can use private constructors and destructors to enforce the correct use of your API. If you want to get tricky, you can combine all these design patterns as well ;)
share|improve this answer
You have a bunch of errors in your examples, such as your Singleton::Instance() is returning a MySingleton by value, not by reference or pointer. Your declaration of the static instance variable has no type. Same with your MakeObject() method returning by value, but attempting to return a pointer. – Andre Kostur Aug 31 '13 at 16:45
Thanks for highlighting those. I'll fix the answer (I've been spending too much time in c# land). – matthewrobbinsdev Aug 31 '13 at 21:38
You make constructor private if you want to prevent creating more then one instance of your class. That way you control the creation of intances not their destruction. Thus, the destructor may be public.
share|improve this answer
One example in my head, let's say you want to limit the class instance number to be 0 or 1. For example, for some singleton class, you want the application can temprary destroy the object to rducue memory usage. to implement this constructor will be private, but destructor will be public. see following code snippet.
class SingletoneBigMemoryConsumer
{
private:
SingletoneBigMemoryConsumer()
{
// Allocate a lot of resource here.
}
public:
static SingletoneBigMemoryConsumer* getInstance()
{
if (instance != NULL)
return instance;
else
return new SingletoneBigMemoryConsumer();
}
~SingletoneBigMemoryConsumer()
{
// release the allocated resource.
instance = NULL;
}
private:
// data memeber.
static SingletoneBigMemoryConsumer* instance;
}
//Usage.
SingletoneBigMemoryConsumer* obj = SingletoneBigMemoryConsumer::getInstance();
// You cannot create more SingletoneBigMemoryConsumer here.
// After 1 seconds usage, delete it to reduce memory usage.
delete obj;
// You can create an new one when needed later
share|improve this answer
The owner of an object needs access to the destructor to destroy it. If the constuctors are private, there must be some accessible function to create an object. If that function transferes the ownership of the constructed object to the caller ( for example return a pointer to a object on the free store ) the caller must have the right to access the destructor when he decides to delete the object.
share|improve this answer
In reverse order.
Is the destructor required to be public so that during inheritance the calls can be possible or is it a bug in the code?
Actually, for inheritance to work the destructor should be at least protected. If you inherit from a class with a private destructor, then no destructor can be generated for the derived class, which actually prevents instantiation (you can still use static methods and attributes).
What is the use of such declaration?
Note that even though the constructor is private, without further indication the class has a (default generated) public copy constructor and copy assignment operator. This pattern occurs frequently with:
• the named constructor idiom
• a factory
Example of named constructor idiom:
class Angle {
public:
static Angle FromDegrees(double d);
static Angle FromRadian(double d);
private:
Angle(double x): _value(x) {}
double _value;
};
Because it is ambiguous whether x should be precised in degrees or radians (or whatever), the constructor is made private and named method are provided. This way, usage makes units obvious:
Angle a = Angle::FromDegrees(360);
share|improve this answer
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.987653 |
MongoDB to Postgres
This page provides you with instructions on how to extract data from MongoDB and load it into PostgreSQL. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is MongoDB?
MongoDB, or just Mongo, is an open source NoSQL database that stores data in JSON format. It uses a document-oriented data model, and data fields can vary by document. MongoDB isn't tied to any specified data structure, meaning that there's no particular format or schema for data in a Mongo database.
What is PostgreSQL?
PostgreSQL, often known simply as Postgres, is a hugely popular object-relational database management system (ORDBMS). It labels itself as "the world's most advanced open source database," and for good reason. The platform, which is available via an open source license, offers enterprise-grade features including a strong emphasis on extensibility and standards compliance.
PostgreSQL runs on all major operating systems, including Linux, Unix, and Windows. It is fully ACID-compliant, and has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). Postgres is often the best tool for the job as a back-end database for web systems and software tools, and cloud-based deployments are offered by most major cloud vendors. Its syntax also forms the basis for querying Amazon Redshift, which makes migration between the two systems relatively painless and makes Postgres a good "first step" for developers who may later work on Redshift's data warehouse platform.
Getting data out of MongoDB
The process of pulling data out of MongoDB depends on how you've loaded data into MongoDB. In some cases, it may be impossible to extract all of your data, because NoSQL databases don't require structure (i.e. specific columns). Relational databases, such as those used for data warehouses, use a more traditional, rigid structure. You'll need to defined a structure in the relational database into which you can insert MongoDB data.
Don't stress about the confusing data structure. Lots of the data that's loaded into MongoDB is created by a computer, so it probably has a pretty predictable structure. If you can find specific fields that exist for every record, you're well on your way. Make sure these fields appear in the records of each collection you'd like to replicate from MongoDB. There are many ways to do this. The most popular method to get data from MongoDB is to use the find() command.
Sample MongoDB data
MongoDB stores and returns JSON-formatted data. Here's an example of what a response might look like to a query against the products collection.
db.products.find( { qty: { $gt: 25 } }, { _id: 0, qty: 0 } )
{ "item" : "pencil", "type" : "no.2" }
{ "item" : "bottle", "type" : "blue" }
{ "item" : "paper" }
Loading data into Postgres
Once you have identified all of the columns you will want to insert, you can use the CREATE TABLE statement in Postgres to create a table that can receive all of this data. Then, Postgres offers a number of methods for loading in data, and the best method varies depending on the quantity of data you have and the regularity with which you plan to load it.
For simple, day-to-day data insertion, running INSERT queries against the database directly are the standard SQL method for getting data added. Documentation on INSERT queries and their bretheren can be found in the Postgres documentation here.
For bulk insertions of data, which you will likely want to conduct if you have a high volume of data to load, other tools exist as well. This is where the COPY command becomes quite useful, as it allows you to load large sets of data into Postgres without needing to run a series of INSERT statements. Documentation can be found here.
The Postgres documentation also provides a helpful overall guide for conducting fast data inserts, populating your database, and avoiding common pitfalls in the process. You can find it here.
Keeping MongoDB data up to date
Fine job! You are the proud developer of a script that moves data from MongoDB to your data warehouse. This works as a one-shot deal. It's good to think about what will happen when there is new and updated data in MongoDB.
One option that works would be to load the entire MongoDB dataset all over again. That would certainly update the data, but it's not very efficient and can also cause terribly latency.
The smartest way to get data updated from MongoDB would be to identify keys that can be used as bookmarks to store where you script left off on the last run. Fields like updated_at, modified_at, or other auto-incrementing data are useful here. With that done, you can set up your script as a cron job or continuous loop to identify new data as it appears.
Other data warehouse options
PostgreSQL is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, or Snowflake, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Snowflake, and To Panoply.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your MongoDB data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your PostgreSQL data warehouse.
|
__label__pos
| 0.672777 |
# Ubuntu20.04配置 ### 1.创建 root 用户 ``` sudo passwd root ``` ### 2.创建其他用户并赋予root权限 ```sh # 新建的用户名为user sudo adduser user # 输入用户名密码 sudo usermod -aG sudo user # 赋予sudo权限 # 顺便在该用户家目录创建.ssh文件夹以及authorized_keys文件 cd mkdir .ssh && chmod 700 .ssh touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys ``` ### 3.更换清华源 ```sh sudo sed -i "s@http://.*archive.ubuntu.com@https://mirrors.tuna.tsinghua.edu.cn@g" /etc/apt/sources.list sudo sed -i "s@http://.*security.ubuntu.com@https://mirrors.tuna.tsinghua.edu.cn@g" /etc/apt/sources.list ``` ### 4.修改时区 ```sh sudo timedatectl set-timezone Asia/Shanghai ``` ### 5.配置静态 ip 查看 ip 地址,网卡等等 ```sh ip a ``` 查看网关 ```sh sudo apt install net-tools route -n ``` 打开配置文件 ```sh sudo vim /etc/netplan/00-installer-config.yaml ``` 修改文件内容如下(**注意缩进**): ```yaml network: ethernets: ens33: dhcp4: no optional: true addresses: [192.168.236.130/24] gateway4: 192.168.236.2 nameservers: addresses: [8.8.8.8, 114.114.114.114] version: 2 ``` `ens33`网卡没有绑定 ipv4 地址,尝试查看服务有没有启动(**锐捷客户端会自动关闭 VMware NAT Service**)
{{}}
### 6.配置免密登录 本地生成 ssh 密钥 ``` ssh-keygen ```
{{}}
获取公钥 ``` cat ~/.ssh/ubuntu_2_rsa.pub ``` 存入服务器的`.ssh/authorized_keys`文件内 **更简单粗暴的方法:** ``` ssh-copy-id 服务器别名 ``` ### 7.ssh服务 `ssh`服务的配置文件是`/etc/ssh/sshd_config`,要修改该文件需使用`root`添加写权限,修改完成之后建议**取消**写权限 ```bash sudo /etc/init.d/ssh start # 启动 sudo /etc/init.d/ssh restart # 重启 ``` ### 8.关闭防火墙 ```sh # 查看防火墙状态 sudo ufw status # 开启防火墙 sudo ufw enable # 关闭防火墙 sudo ufw disable ``` ### 9.修改主机名 ```sh hostnamectl set-hostname newname ``` ### 10.服务管理相关 ```sh systemctl is-enabled servicename.service #查询服务是否开机启动 systemctl enable *.service #开机运行服务 systemctl disable *.service #取消开机运行 systemctl start *.service #启动服务 systemctl stop *.service #停止服务 systemctl restart *.service #重启服务 systemctl reload *.service #重新加载服务配置文件 systemctl status *.service #查询服务运行状态 ``` ### 11.好用软件推荐 1.`ncdu` 可用来查看目录大小以及查看目录中的文件 安装命令 ```sh sudo apt install ncdu ``` 使用方法: `ncdu+路径名`,按`q`退出,方向键切换目录 例如: `ncdu ~` 2.`htop`(自带) 查看系统资源使用状态 3.`ranger` 用于动态查看子目录及其所有文件,还可以预览文件内容以及选择用什么软件打开 安装命令 ```sh sudo apt install ranger ``` 操作方式同`ncdu` 4.`neofetch` 简单地查看主机的硬件信息。 安装命令: ```sh sudo apt install neofetch ``` ### 12.安装开发常用工具 自带软件: - `git[v2.25.0]` #### 1.安装 gcc ```sh sudo apt install build-essential gcc -v # 查看gcc版本 ``` #### 2.安装 miniconda 安装`python3.8`对应版本 ```sh wget -c https://repo.anaconda.com/miniconda/Miniconda3-py38_4.12.0-Linux-x86_64.sh ``` 添加权限 ```sh chmod 777 Miniconda3-py38_4.12.0-Linux-x86_64.sh sh Miniconda3-py38_4.12.0-Linux-x86_64.sh ``` 一直`Enter`,然后`yes`,最后确认一下安装路径 添加到环境变量,这里使用的是`zsh` ```sh vim ~/.zshrc export PATH=/home/user/miniconda3/bin:$PATH ``` #### 3.安装 oracle-jdk 安装之前最好卸载`openjdk` 下载地址:[https://sunyanos.github.io/2021/04/19/Ubuntu20-04LTS%E5%AE%89%E8%A3%85OpenJDK8-Oracle-JDK8/](https://sunyanos.github.io/2021/04/19/Ubuntu20-04LTS%E5%AE%89%E8%A3%85OpenJDK8-Oracle-JDK8/) 下载下面这个版本:
{{}}
```sh # 下载(注意不要用wget直接下载) 本地下载之后将压缩包上传至服务器 # 解压 sudo mkdir /usr/lib/jvm sudo tar -zxvf jdk-8u261-linux-x64.gz -C /usr/lib/jvm # 添加环境变量 vim ~/.zshrc # Oracle JDK8 Envirenment export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_341 ## 目录要换成自己解压的目录 export JRE_HOME=${JAVA_HOME}/jre export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib export PATH=${JAVA_HOME}/bin:$PATH # 使环境变量立即生效 source ~/.zshrc ## 检查是否安装成功 java -verison javac ```
{{}}
#### 4.安装 node.js 官网下载地址:[https://nodejs.org/en/download/](https://nodejs.org/en/download/) ```sh # 创建文件夹 cd /usr/local/ mkdir node cd node # 官网下载压缩包后上传至Ubuntu # 解压 sudo tar -xJvf ~/node-v16.17.1-linux-x64.tar.xz -C /usr/local/node # 添加环境变量 vim ~/.zshrc export PATH=/usr/local/node/node-v16.17.1-linux-x64/bin:$PATH source ~/.zshrc # 验证安装是否成功 node -v npm -v ```
{{}}
**npm更换淘宝源** ```sh npm config set registry https://registry.npm.taobao.org # 换源 npm config get registry # 验证结果 ``` 如果需要换回原来的源,输入下面的命令 ```sh npm config set registry https://registry.npmjs.org/ ``` #### 5.安装 maven 官网下载地址:[https://maven.apache.org/download.cgi](https://maven.apache.org/download.cgi) ```sh # 下载并解压 tar xzvf apache-maven-3.8.6-bin.tar.gz # 修改路径, 将解压的文件夹放在/opt下 sudo cp -r ~/apache-maven-3.8.6 /opt # 添加环境变量 vim ~/.zshrc export PATH=/opt/apache-maven-3.8.6/bin:$PATH source ~/.zshrc # 验证安装是否成功 mvn -v ```
{{}}
**配置阿里源** 修改`/opt/apache-maven-3.8.6/conf/settings.xml`,添加下列内容 ```xml alimaven aliyun maven http://maven.aliyun.com/nexus/content/groups/public/ central ``` **配置本地仓库** 我这里配置本地仓库地址为`/opt/apache-maven-3.8.6/repo` ```sh /opt/apache-maven-3.8.6/repo ``` 最后,记得设置一下权限,避免后面打包出现[问题](https://www.cnblogs.com/love-zf/p/15895020.html) ```sh sudo chmod -R 777 /opt/apache-maven-3.8.6 ``` 下面是一些常用命令 ```sh mvn clean # 清除taget目录 mvn package -DskipTests # 跳过测试代码,SpringBoot项目中常用 ```
{{}}
#### 6.安装 MySQL8.0 ```sh # 安装 sudo apt install mysql-server # 查看MySQL服务状态 sudo systemctl status mysql # 创建root@localhost用户 sudo mysql ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '你的密码'; FLUSH PRIVILEGES; # 创建root@%用户,远程登录使用 create user 'root'@'%' identified with mysql_native_password by '你的密码'; GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION; FLUSH PRIVILEGES; # 查看创建的用户 use mysql; select host,user from user; # 尝试登陆 mysql -uroot -p # 输入你上面设置的密码 # 实现远程连接mysql # 先停止服务 sudo service mysql stop # 打开mysqld配置文件 sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf 注释掉bind-address = 127.0.0.1 # 重启服务 sudo service mysql start # 服务管理 sudo service mysql status # 查看服务状态 sudo service mysql start # 启动服务 sudo service mysql stop # 停止服务 sudo service mysql restart # 重启服务 ```
{{}}
{{}}
#### 7.安装 redis 下载地址:[https://redis.io/download/](https://redis.io/download/) ```sh # 创建文件夹 sudo mkdir /usr/local/redis/ # 安装 wget https://download.redis.io/redis-stable.tar.gz tar -xzvf redis-stable.tar.gz cd redis-stable sudo make && make install # 前台运行 redis-server # 后台运行 # 修改redis.conf将daemonize设置为yes # 重新使用redis配置(注意redis.conf的路径) redis-server redis.conf # 查看redis进程 ps -ef|grep redis # 关闭redis服务 redis-cli shutdown # 使用redis工具 redis-cli ```
{{}}
配置远程登录 打开`redis.conf`,修改下面内容 ```sh # 注释下面这行 # bind 127.0.0.1 -::1 requirepass 你的密码 ``` 关闭 redis ```sh redis-cli -a 密码 shutdown ```
{{}}
{{}}
redis 服务化 创建服务文件 ```sh sudo vim /etc/systemd/system/redis.service sudo chmod +x /etc/systemd/system/redis.service ``` 配置如下: ``` [Unit] Description=redis-server After=network.target [Service] Type=forking # 改为自己的redis-server的位置和redis.conf位置 ExecStart=/usr/local/bin/redis-server /home/zfp/redis-stable/redis.conf PrivateTmp=true [Install] WantedBy=multi-user.target ``` 服务管理 ```sh # 通知systemd一个新的单元文件存在 sudo systemctl daemon-reload # 设置开机自启动 sudo systemctl enable redis # 检查服务状态 sudo systemctl status redis # 启动、停止、重启tomcat sudo systemctl start redis sudo systemctl stop redis sudo systemctl restart redis ``` #### 8.安装 tomcat 下载地址:[https://tomcat.apache.org/download-90.cgi](https://tomcat.apache.org/download-90.cgi) ```sh # 建文件夹 sudo mkdir /usr/local/tomcat # 将下载的压缩包解压到上面的目录 sudo tar -zxvf apache-tomcat-9.0.68.tar.gz -C /usr/local/tomcat/ # 使用root用户 cd /usr/local/tomcat/apache-tomcat-9.0.68/bin/ ./startup.sh # 启动tomcat服务器 ```
{{}}
将`tomcat`服务化,使用`systemctl`来管理 首先新建`tomcat.service`单元文件 ```sh sudo vim /etc/systemd/system/tomcat.service # 加上权限 sudo chmod 777 /etc/systemd/system/tomcat.service ``` 写入下列内容 ```sh [Unit] Description=Tomcat 9 servlet container After=network.target [Service] Type=forking User=zfp # 改为自己的用户 # 下面的信息和压缩包的位置对应 Environment="JRE_HOME=/usr/lib/jvm/jdk1.8.0_341/jre" Environment="CLASSPATH=/usr/local/tomcat/apache-tomcat-9.0.68/bin/bootstrap.jar:/usr/local/tomcat/apache-tomcat-9.0.68/bin/tomcat-juli.jar" Environment="CATALINA_BASE=/usr/local/tomcat/apache-tomcat-9.0.68" Environment="CATALINA_HOME=/usr/local/tomcat/apache-tomcat-9.0.68" Environment="CATALINA_TMPDIR=/usr/local/tomcat/apache-tomcat-9.0.68/temp" Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC" ExecStart=/usr/local/tomcat/apache-tomcat-9.0.68/bin/startup.sh ExecStop=/usr/local/tomcat/apache-tomcat-9.0.68/bin/shutdown.sh [Install] WantedBy=multi-user.target ``` 服务管理 ```sh # 通知systemd一个新的单元文件存在 sudo systemctl daemon-reload # 启用并且启动Tomcat服务 sudo systemctl enable --now tomcat # 检查服务状态 sudo systemctl status tomcat # 启动、停止、重启tomcat sudo systemctl start tomcat sudo systemctl stop tomcat sudo systemctl restart tomcat ``` 将`war`包放在`tomcat`安装文件夹的`webapps` ```sh mv xxx.war /usr/local/tomcat/apache-tomcat-9.0.68/webapps # 重启服务后该出现一个war包同名文件夹 sudo systemctl restart tomcat ``` 将`tomcat`的根路径改为上述`war`包的产生的文件夹 ```xml # 打开server配置文件 sudo /usr/local/tomcat/apache-tomcat-9.0.68/conf/server.xml # 填入以下内容 ```
{{}}
#### 9.安装 nginx ```sh # 安装nginx sudo apt update sudo apt install nginx # 查看运行状态 sudo systemctl status nginx # 配置文件 sudo vim /etc/nginx/nginx.conf # 测试nginx改动是否正确 sudo nginx -t # 重启nginx服务 sudo systemctl restart nginx ``` ```nginx # ruoyi项目后端转发路径 location /prod-api/ { proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header REMOTE-HOST $remote_addr; proxt_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxt_pass http://192.168.153.131:8080/; } ``` - 所有的 Nginx 配置文件都在`/etc/nginx/`目录下。 - 主要的 Nginx 配置文件是`/etc/nginx/nginx.conf`。 - 为每个域名创建一个独立的配置文件,便于维护服务器。你可以按照需要定义任意多的 block 文件。 - Nginx 服务器配置文件被储存在`/etc/nginx/sites-available`目录下。在`/etc/nginx/sites-enabled`目录下的配置文件都将被 Nginx 使用。 - 最佳推荐是使用标准的命名方式。例如,如果你的域名是`mydomain.com`,那么配置文件应该被命名为`/etc/nginx/sites-available/mydomain.com.conf` - 如果你在域名服务器配置块中有可重用的配置段,把这些配置段摘出来,做成一小段可重用的配置。 - Nginx 日志文件(access.log 和 error.log)定位在`/var/log/nginx/`目录下。推荐为每个服务器配置块,配置一个不同的`access`和`error`。 - 你可以将你的网站根目录设置在任何你想要的地方。最常用的网站根目录位置包括: - `/home//` - `/var/www/` - `/var/www/html/` - `/opt/` #### 10.安装 docker 官方教程:[https://docs.docker.com/engine/install/ubuntu/](https://docs.docker.com/engine/install/ubuntu/) ``` sudo apt-get update sudo apt-get install \ ca-certificates \ curl \ gnupg \ lsb-release sudo mkdir -p /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gp echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin apt-cache madison docker-ce sudo service docker start ``` 测试: ``` sudo docker run hello-world ```
{{}}
避免每次使用`docker`都要加`sudo`命令,运行下列命令 ```sh sudo usermod -aG docker $USER ``` **注: 该命令需要重新连接服务器才生效** #### 11.安装 Mongodb4.4 安装完成之后配置文件位于`/etc/mongod.conf` ```sh # 首先安装gnupg软件包 sudo apt-get install gnupg # 导入包管理系统使用的公钥 wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add - # 添加MongoDB存储库 echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list # 更新存储库 sudo apt-get update # 使用以下命令安装MongoDB sudo apt install mongodb-org # 启用mongod服务 sudo systemctl start mongod.service # 查看mongod服务状态 sudo systemctl status mongod # 可通过下面的命令检查是否安装成功 mongo --eval 'db.runCommand({ connectionStatus: 1 })' ``` 启动成功!
{{}}
**mongod 服务管理** ```sh # 启用并且启动mongod服务 sudo systemctl enable --now mongod # 检查服务状态 sudo systemctl status mongod # 启动、停止、重启mongod sudo systemctl start mongod sudo systemctl stop mongod sudo systemctl restart mongod ``` 查看`mongodb`版本 ```sh mongo --version ``` 进入`mongo shell` ```sh mongo ``` **配置远程登录** - 打开配置文件: `sudo vim /etc/mongod.conf` - 将`bindIp: 127.0.0.1` 改为 `bindIp: 0.0.0.0`- - 重启`mongd`服务 ##### 给 Admin 数据库创建账户管理员 ``` # 进入/切换数据库到admin中 use admin # 创建账户管理员 db.createUser({ user: "admin", pwd: "123456", roles: [ {role: "userAdminAnyDatabase",db:"admin"} ] }) ``` **创建超级管理员** ``` # 进入/切换数据库到admin中 use admin # 创建超级管理员账号 db.createUser({ user: "root", pwd: "123456", roles: [ { role: "root", db: "admin" } ] }) ```
|
__label__pos
| 0.984349 |
CleanUp
These days’ patients are losing their life due to negligence of the authority. There has been many of such issues reported. What are the steps for reducing medication errors in hospitals?
• 122 views
• 1 answers
• 0 votes
How can I restore clean up due to CC cleaner for the registry? I recently had a cleanup that is affecting my machine, the registry may be damaged or corrupted. How can I back up the registry next time to avoid such a mess? Is it possible to store the registry settings separately from the machine?
• 389 views
• 1 answers
• 0 votes
I have made a replacement for my laptop battery and since im an environmentalist I was concerned with the waste disposal for the battery. How I can dispose off this mercury without making an environmental problem? Do the manufacturers have a recycling plan to pick up the already used batteries?
• 423 views
• 1 answers
• 0 votes
I have an HP 15t laptop which has a 1 TB hard drive installed in it. The data on my disk is quite a lot and the hard drive is almost full, but I think I could save a lot of space by removing the unnecessary files. What are the methods to identify the unnecessary files? What are some good software to clean up my hard drive and remove the unnecessary files?
• 413 views
• 1 answers
• 0 votes
I have an Intel Core i3 laptop by Dell which has become very slow recently. I didn't have any anti malware / anti virus installed and now I think there is a lot of malware etc. In my laptop. The laptop is quite a mess now and it has some unnecessary software installed on it which were installed with some other third party applications. Can you suggest me some good PC cleanup software? Will I lose my data if the PC gets cleaned up? Is TuneUp utilities a reliable software?
• 462 views
• 1 answers
• 0 votes
Hi everyone,
I am having Printing business. I am having Xerox Work Centre 7435 Color Multifunction Copier. I am planning to buy one more printer for my business.
Can somebody suggest which one is Best quality and affordable for more load? I have one doubt regarding cleaning of the print head. Is that fine if I clean the dust inside the printer head by using some liquids? Will it work again properly? If yes, please suggest me which liquid is good to clean the printer head?
Your suggestions and advice will be very much appreciated. Many Thanks and Regards!
• 508 views
• 1 answers
• 0 votes
Handcent remove jumper disable double notifications when using handcent SMS. These threads popping up every day. Why there are double notifications and how to disable those double notifications?
• 613 views
• 1 answers
• 0 votes
To All Techyv Members:
What are the reasons of overheating of the notebook cooling fan despite the manufacturer’s of different brands continuously designing a better and greater speed? Why these notebooks are still susceptible to overheating despite cooling fan is being installed? Why is it so despite the cooling system is installed? Is there any new product that ventilation will not be clogged anymore or will no longer be impaired despite usage in normal conditions? I am so much afraid that this will lead into a fire in the house and will create more problem. This will be another problem as a consequence of using the notebook.
• 609 views
• 2 answers
• 0 votes
Hello expert, One of my friends has damaged JPEG pictures. So he needs to recover it. The pictures are so necessary things. I want to know how to repair damaged JPEG header recover. Please tell me how doing it using software or without any software. Thank you.
• 821 views
• 1 answers
• 0 votes
Hi everyone, I owing a lap. I just want to delete all my background processes when my lap is too slow. When , I am killing the background processes. The task scheduler is not able to delete the background processes. I want a software that can kill processes task scheduler at a scheduled time. Please help me in finding me correct solution. Thanks.
• 817 views
• 2 answers
• 0 votes
Articles
Blogs
Tips
|
__label__pos
| 0.869409 |
// file : libbutl/fdstream.mxx -*- C++ -*- // copyright : Copyright (c) 2014-2019 Code Synthesis Ltd // license : MIT; see accompanying LICENSE file #ifndef __cpp_modules_ts #pragma once #endif #include #ifndef __cpp_lib_modules_ts #include // streamsize #include #include #include #include #include // move(), pair #include // uint16_t, uint64_t #include // size_t #include #endif // Other includes. #ifdef __cpp_modules_ts export module butl.fdstream; #ifdef __cpp_lib_modules_ts import std.core; import std.io; #endif import butl.path; import butl.filesystem; // permissions import butl.small_vector; #else #include #include #include #endif #include LIBBUTL_MODEXPORT namespace butl { // RAII type for file descriptors. Note that failure to close the descriptor // is silently ignored by both the destructor and reset(). // // The descriptor can be negative. Such a descriptor is treated as unopened // and is not closed. // struct nullfd_t { constexpr explicit nullfd_t (int) {} constexpr operator int () const {return -1;} }; #if defined(__cpp_modules_ts) && defined(__clang__) //@@ MOD Clang duplicate sym. inline #endif constexpr nullfd_t nullfd (-1); class LIBBUTL_SYMEXPORT auto_fd { public: auto_fd (nullfd_t = nullfd) noexcept: fd_ (-1) {} explicit auto_fd (int fd) noexcept: fd_ (fd) {} auto_fd (auto_fd&& fd) noexcept: fd_ (fd.release ()) {} auto_fd& operator= (auto_fd&&) noexcept; auto_fd (const auto_fd&) = delete; auto_fd& operator= (const auto_fd&) = delete; ~auto_fd () noexcept; int get () const noexcept {return fd_;} void reset (int fd = -1) noexcept; int release () noexcept { int r (fd_); fd_ = -1; return r; } // Close an open file descriptor. Throw ios::failure on the underlying OS // error. Reset the descriptor to -1 whether the exception is thrown or // not. // void close (); private: int fd_; }; inline bool operator== (const auto_fd& x, const auto_fd& y) { return x.get () == y.get (); } inline bool operator!= (const auto_fd& x, const auto_fd& y) { return !(x == y); } inline bool operator== (const auto_fd& x, nullfd_t) { return x.get () == -1; } inline bool operator!= (const auto_fd& x, nullfd_t y) { return !(x == y); } // An [io]fstream that can be initialized with a file descriptor in addition // to a file name and that also by default enables exceptions on badbit and // failbit. So instead of a dance like this: // // ifstream ifs; // ifs.exceptions (ifstream::badbit | ifstream::failbit); // ifs.open (path.string ()); // // You can simply do: // // ifdstream ifs (path); // // Notes and limitations: // // - char only // - input or output but not both (can use a union of two streams for that) // - no support for put back // - use of tell[gp]() and seek[gp]() is discouraged on Windows for // fdstreams opened in the text mode (see fdbuf::seekoff() implementation // for reasoning and consider using non-standard tellg() and seekg() in // fdbuf, instead) // - non-blocking file descriptor is supported only by showmanyc() function // and only for pipes on Windows, in contrast to POSIX systems // - throws ios::failure in case of open(), read(), write(), close(), // seek[gp](), or tell[gp]() errors // - exception mask has at least badbit // - after catching an exception caused by badbit the stream is no longer // usable // - not movable, though can be easily supported (or not: there is no move // constructor for istream/ostream in GCC 4.9) // - passing to constructor auto_fd with a negative file descriptor is valid // and results in the creation of an unopened object // class LIBBUTL_SYMEXPORT fdbuf: public std::basic_streambuf { public: fdbuf () = default; // Unless specified, the current read/write position is assumed to // be 0 (note: not queried). // fdbuf (auto_fd&&, std::uint64_t pos = 0); // Before we invented auto_fd into fdstreams we keept fdbuf opened on // faulty close attempt. Now fdbuf is always closed by close() function. // This semantics change seems to be the right one as there is no reason to // expect fdclose() to succeed after it has already failed once. // void close () {fd_.close ();} auto_fd release (); void open (auto_fd&&, std::uint64_t pos = 0); bool is_open () const {return fd_.get () >= 0;} int fd () const {return fd_.get ();} // Set the file descriptor blocking mode returning the previous mode on // success and throwing ios::failure otherwise (see fdmode() for details). // // Note that besides calling fdmode(fd()), this function also updating its // internal state according to the new mode. // bool blocking (bool); public: using base = std::basic_streambuf; using int_type = base::int_type; using traits_type = base::traits_type; using pos_type = base::pos_type; // std::streampos using off_type = base::off_type; // std::streamoff // basic_streambuf input interface. // public: virtual std::streamsize showmanyc (); virtual int_type underflow (); // Direct access to the get area. Use with caution. // using base::gptr; using base::egptr; using base::gbump; // Return the (logical) position of the next byte to be read. // // Note that on Windows when reading in the text mode the logical position // may differ from the physical file descriptor position due to the CRLF // character sequence translation. See the seekoff() implementation for // more background on this issue. // std::uint64_t tellg () const {return off_ - (egptr () - gptr ());} // Seek to the (logical) position as if by reading the specified number of // bytes from the beginning of the stream. Throw ios::failure on the // underlying OS errors. // void seekg (std::uint64_t); private: bool load (); // basic_streambuf output interface. // public: virtual int_type overflow (int_type); virtual int sync (); virtual std::streamsize xsputn (const char_type*, std::streamsize); // Return the (logical) position of the next byte to be written. // std::uint64_t tellp () const {return off_ + (pptr () - buf_);} // basic_streambuf positioning interface (both input/output). // public: virtual pos_type seekpos (pos_type, std::ios_base::openmode); virtual pos_type seekoff (off_type, std::ios_base::seekdir, std::ios_base::openmode); private: bool save (); private: auto_fd fd_; std::uint64_t off_; char buf_[8192]; bool non_blocking_ = false; }; // File stream mode. // // The text/binary flags have the same semantics as those in std::fstream. // Specifically, this is a noop for POSIX systems where the two modes are // the same. On Windows, when reading in the text mode the sequence of 0xD, // 0xA characters is translated into the single OxA character and 0x1A is // interpreted as EOF. When writing in the text mode the OxA character is // translated into the 0xD, 0xA sequence. // // The skip flag instructs the stream to skip to the end before closing the // file descriptor. This is primarily useful when working with pipes where // you may want not to "offend" the other end by closing your end before // reading all the data. // // The blocking/non_blocking flags determine whether the IO operation should // block or return control if currently there is no data to read or no room // to write. Only the istream::readsome() function supports the semantics of // non-blocking operations. In contrast to POSIX systems, we only support // this for pipes on Windows, always assuming the blocking mode for other // file descriptors. IO stream operations other than readsome() are illegal // in the non-blocking mode and result in the badbit being set (note that // it is not the more appropriate failbit for implementation reasons). // enum class fdstream_mode: std::uint16_t { text = 0x01, binary = 0x02, skip = 0x04, blocking = 0x08, non_blocking = 0x10 }; inline fdstream_mode operator& (fdstream_mode, fdstream_mode); inline fdstream_mode operator| (fdstream_mode, fdstream_mode); inline fdstream_mode operator&= (fdstream_mode&, fdstream_mode); inline fdstream_mode operator|= (fdstream_mode&, fdstream_mode); // Extended (compared to ios::openmode) file open flags. // enum class fdopen_mode: std::uint16_t { in = 0x01, // Open for reading. out = 0x02, // Open for writing. append = 0x04, // Seek to the end of file before each write. truncate = 0x08, // Discard the file contents on open. create = 0x10, // Create a file if not exists. exclusive = 0x20, // Fail if the file exists and the create flag is set. binary = 0x40, // Set binary translation mode. at_end = 0x80, // Seek to the end of stream immediately after open. none = 0 // Usefull when building the mode incrementally. }; inline fdopen_mode operator& (fdopen_mode, fdopen_mode); inline fdopen_mode operator| (fdopen_mode, fdopen_mode); inline fdopen_mode operator&= (fdopen_mode&, fdopen_mode); inline fdopen_mode operator|= (fdopen_mode&, fdopen_mode); class LIBBUTL_SYMEXPORT fdstream_base { protected: fdstream_base () = default; fdstream_base (auto_fd&&, std::uint64_t pos); fdstream_base (auto_fd&&, fdstream_mode, std::uint64_t pos); public: int fd () const {return buf_.fd ();} protected: fdbuf buf_; }; // iofdstream constructors and open() functions that take openmode as an // argument mimic the corresponding iofstream functions in terms of the // openmode mask interpretation. They throw std::invalid_argument for an // invalid combination of flags (as per the standard). Note that the in and // out flags are always added implicitly for ifdstream and ofdstream, // respectively. // // iofdstream constructors and open() functions that take fdopen_mode as an // argument interpret the mask literally just ignoring some flags which are // meaningless in the absense of others (read more on that in the comment // for fdopen()). Note that the in and out flags are always added implicitly // for ifdstream and ofdstream, respectively. // // iofdstream constructors and open() functions that take file path as a // const std::string& or const char* may throw the invalid_path exception. // // Passing auto_fd with a negative file descriptor is valid and results in // the creation of an unopened object. // // Also note that open() and close() functions can be successfully called // for an opened and unopened objects respectively. That is in contrast with // iofstream that sets failbit in such cases. // // Note that ifdstream destructor will close an open file descriptor but // will ignore any errors. To detect such errors, call close() explicitly. // // This is a sample usage of iofdstreams with process. Note that here it is // expected that the child process reads from STDIN first and writes to // STDOUT afterwards. // // try // { // process pr (args, -1, -1); // // try // { // // In case of exception, skip and close input after output. // // // ifdstream is (move (pr.in_ofd), fdstream_mode::skip); // ofdstream os (move (pr.out_fd)); // // // Write. // // os.close (); // Don't block the other end. // // // Read. // // is.close (); // Skip till end and close. // // if (pr.wait ()) // { // return ...; // Good. // } // // // Non-zero exit, diagnostics presumably issued, fall through. // } // catch (const failure&) // { // // IO failure, child exit status doesn't matter. Just wait for the // // process completion and fall through. // // // // Note that this is optional if the process_error handler simply // // falls through since process destructor will wait (but will ignore // // any errors). // // // pr.wait (); // } // // error << .... ; // // // Fall through. // } // catch (const process_error& e) // { // error << ... << e; // // if (e.child ()) // exit (1); // // // Fall through. // } // // throw failed (); // class LIBBUTL_SYMEXPORT ifdstream: public fdstream_base, public std::istream { public: // Create an unopened object. // explicit ifdstream (iostate e = badbit | failbit); explicit ifdstream (auto_fd&&, iostate e = badbit | failbit, std::uint64_t pos = 0); ifdstream (auto_fd&&, fdstream_mode m, iostate e = badbit | failbit, std::uint64_t pos = 0); explicit ifdstream (const char*, openmode = in, iostate e = badbit | failbit); explicit ifdstream (const std::string&, openmode = in, iostate e = badbit | failbit); explicit ifdstream (const path&, openmode = in, iostate e = badbit | failbit); ifdstream (const char*, fdopen_mode, iostate e = badbit | failbit); ifdstream (const std::string&, fdopen_mode, iostate e = badbit | failbit); ifdstream (const path&, fdopen_mode, iostate e = badbit | failbit); ~ifdstream () override; void open (const char*, openmode = in); void open (const std::string&, openmode = in); void open (const path&, openmode = in); void open (const char*, fdopen_mode); void open (const std::string&, fdopen_mode); void open (const path&, fdopen_mode); void open (auto_fd&& fd, std::uint64_t pos = 0) { buf_.open (std::move (fd), pos); clear (); } void open (auto_fd&& fd, fdstream_mode m, std::uint64_t pos = 0) { open (std::move (fd), pos); skip_ = (m & fdstream_mode::skip) == fdstream_mode::skip; } void close (); auto_fd release (); // Note: no skipping. bool is_open () const {return buf_.is_open ();} // Read the textual stream. The stream is supposed not to contain the null // character. // std::string read_text (); // Read the binary stream. // std::vector read_binary (); private: bool skip_ = false; }; // Note that ofdstream requires that you explicitly call close() before // destroying it. Or, more specifically, the ofdstream object should not be // in the opened state by the time its destructor is called, unless it is in // the "not good" state (good() == false) or the destructor is being called // during the stack unwinding due to an exception being thrown // (std::uncaught_exception() == true). This is enforced with assert() in // the ofdstream destructor. // class LIBBUTL_SYMEXPORT ofdstream: public fdstream_base, public std::ostream { public: // Create an unopened object. // explicit ofdstream (iostate e = badbit | failbit); explicit ofdstream (auto_fd&&, iostate e = badbit | failbit, std::uint64_t pos = 0); ofdstream (auto_fd&&, fdstream_mode m, iostate e = badbit | failbit, std::uint64_t pos = 0); explicit ofdstream (const char*, openmode = out, iostate e = badbit | failbit); explicit ofdstream (const std::string&, openmode = out, iostate e = badbit | failbit); explicit ofdstream (const path&, openmode = out, iostate e = badbit | failbit); ofdstream (const char*, fdopen_mode, iostate e = badbit | failbit); ofdstream (const std::string&, fdopen_mode, iostate e = badbit | failbit); ofdstream (const path&, fdopen_mode, iostate e = badbit | failbit); ~ofdstream () override; void open (const char*, openmode = out); void open (const std::string&, openmode = out); void open (const path&, openmode = out); void open (const char*, fdopen_mode); void open (const std::string&, fdopen_mode); void open (const path&, fdopen_mode); void open (auto_fd&& fd, std::uint64_t pos = 0) { buf_.open (std::move (fd), pos); clear (); } void close () {if (is_open ()) flush (); buf_.close ();} auto_fd release (); bool is_open () const {return buf_.is_open ();} }; // The std::getline() replacement that provides a workaround for libstdc++'s // ios::failure ABI fiasco (#66145) by throwing ios::failure, as it is // defined at libbutl build time (new ABI on recent distributions) rather // than libstdc++ build time (still old ABI on most distributions). // // Notes: // // - This relies of ADL so if the stream is used via the std::istream // interface, then std::getline() will still be used. To put it another // way, this is "the best we can do" until GCC folks get their act // together. // // - The fail and eof bits may be left cleared in the stream exception mask // when the function throws because of badbit. // LIBBUTL_SYMEXPORT ifdstream& getline (ifdstream&, std::string&, char delim = '\n'); // Open a file returning an auto_fd that holds its file descriptor on // success and throwing ios::failure otherwise. // // The mode argument should have at least one of the in or out flags set. // The append and truncate flags are meaningless in the absense of the out // flag and are ignored without it. The exclusive flag is meaningless in the // absense of the create flag and is ignored without it. Note also that if // the exclusive flag is specified then a dangling symbolic link is treated // as an existing file. // // The permissions argument is taken into account only if the file is // created. Note also that permissions can be adjusted while being set in a // way specific for the OS. On POSIX systems they are modified with the // process' umask, so effective permissions are permissions & ~umask. On // Windows permissions other than ru and wu are unlikelly to have effect. // // Also note that on POSIX the FD_CLOEXEC flag is set for the file descriptor // to prevent its leakage into child processes. On Windows, for the same // purpose, the _O_NOINHERIT flag is set. Note that the process class, that // passes such a descriptor to the child, makes it inheritable for a while. // LIBBUTL_SYMEXPORT auto_fd fdopen (const char*, fdopen_mode, permissions = permissions::ru | permissions::wu | permissions::rg | permissions::wg | permissions::ro | permissions::wo); LIBBUTL_SYMEXPORT auto_fd fdopen (const std::string&, fdopen_mode, permissions = permissions::ru | permissions::wu | permissions::rg | permissions::wg | permissions::ro | permissions::wo); LIBBUTL_SYMEXPORT auto_fd fdopen (const path&, fdopen_mode, permissions = permissions::ru | permissions::wu | permissions::rg | permissions::wg | permissions::ro | permissions::wo); // Duplicate an open file descriptor. Throw ios::failure on the underlying // OS error. // // Note that on POSIX the FD_CLOEXEC flag is set for the new descriptor if it // is present for the source one. That's in contrast to POSIX dup() that // doesn't copy file descriptor flags. Also note that duplicating descriptor // and setting the flag is not an atomic operation generally, but it is in // regards to child process spawning (to prevent file descriptor leakage into // a child process). // // Note that on Windows the _O_NOINHERIT flag is set for the new descriptor // if it is present for the source one. That's in contrast to Windows _dup() // that doesn't copy the flag. Also note that duplicating descriptor and // setting the flag is not an atomic operation generally, but it is in // regards to child process spawning (to prevent file descriptor leakage into // a child process). // LIBBUTL_SYMEXPORT auto_fd fddup (int fd); // Set the translation and/or blocking modes for the file descriptor. Throw // invalid_argument for an invalid combination of flags. Return the previous // mode on success, throw ios::failure otherwise. // // The text and binary flags are mutually exclusive on Windows. On POSIX // system the two modes are the same and so no check is performed. // // The blocking and non-blocking flags are mutually exclusive. In contrast // to POSIX systems, on Windows the non-blocking mode is only supported for // pipes, with the blocking mode assumed for other file descriptors // regardless of the flags. // // Note that on Wine currently pipes always behave as blocking regardless of // the mode set. // LIBBUTL_SYMEXPORT fdstream_mode fdmode (int, fdstream_mode); // Portable functions for obtaining file descriptors of standard streams. // Throw ios::failure on the underlying OS error. // // Note that you normally wouldn't want to close them using fddup() to // convert them to auto_fd, for example: // // ifdstream is (fddup (stdin_fd ())); // LIBBUTL_SYMEXPORT int stdin_fd (); LIBBUTL_SYMEXPORT int stdout_fd (); LIBBUTL_SYMEXPORT int stderr_fd (); // Convenience functions for setting the translation mode for standard // streams. // LIBBUTL_SYMEXPORT fdstream_mode stdin_fdmode (fdstream_mode); LIBBUTL_SYMEXPORT fdstream_mode stdout_fdmode (fdstream_mode); LIBBUTL_SYMEXPORT fdstream_mode stderr_fdmode (fdstream_mode); // Low-level, nothrow file descriptor API. // // Close the file descriptor. Return true on success, set errno and return // false otherwise. // LIBBUTL_SYMEXPORT bool fdclose (int) noexcept; // Open the null device (e.g., /dev/null) that discards all data written to // it and provides no data for read operations (i.e., yelds EOF on read). // Return an auto_fd that holds its file descriptor on success and throwing // ios::failure otherwise. // // On Windows the null device is NUL and writing anything substantial to it // (like redirecting a process' output) is extremely slow, as in, an order // of magnitude slower than writing to disk. If you are using the descriptor // yourself this can be mitigated by setting the binary mode (already done // by fdopen()) and using a buffer of around 64K. However, sometimes you // have no control of how the descriptor will be used. For instance, it can // be used to redirect a child's stdout and the way the child sets up its // stdout is out of your control (on Windows). For such cases, there is an // emulation via a temporary file. Mostly it functions as a proper null // device with the file automatically removed once the descriptor is // closed. One difference, however, would be if you were to both write to // and read from the descriptor. // // Note that on POSIX the FD_CLOEXEC flag is set for the file descriptor to // prevent its leakage into child processes. On Windows, for the same // purpose, the _O_NOINHERIT flag is set. // #ifndef _WIN32 LIBBUTL_SYMEXPORT auto_fd fdnull (); #else LIBBUTL_SYMEXPORT auto_fd fdnull (bool temp = false); #endif struct fdpipe { auto_fd in; auto_fd out; void close () { in.close (); out.close (); } }; // Create a pipe. Throw ios::failure on the underlying OS error. By default // both ends of the pipe are opened in the text mode. Pass the binary flag // to instead open them in the binary mode. Passing a mode other than none // or binary is illegal. // // Note that on Windows both ends of the created pipe are not inheritable. // In particular, the process class that uses fdpipe underneath makes the // appropriate end (the one being passed to the child) inheritable. // // Note that on POSIX the FD_CLOEXEC flag is set for both ends, so they get // automatically closed by the child process to prevent undesired behaviors // (such as child deadlock on read from a pipe due to the write-end leakage // into the child process). Opening a pipe and setting the flag is not an // atomic operation generally, but it is in regards to child process spawning // (to prevent file descriptor leakage into child processes spawned from // other threads). Also note that you don't need to reset the flag for a pipe // end being passed to the process class ctor. // LIBBUTL_SYMEXPORT fdpipe fdopen_pipe (fdopen_mode = fdopen_mode::none); // Seeking. // enum class fdseek_mode {set, cur, end}; LIBBUTL_SYMEXPORT std::uint64_t fdseek (int, std::int64_t, fdseek_mode); // Truncate or expand the file to the specified size. Throw ios::failure on // the underlying OS error. // LIBBUTL_SYMEXPORT void fdtruncate (int, std::uint64_t); // Test whether a file descriptor refers to a terminal. Throw ios::failure on // the underlying OS error. // LIBBUTL_SYMEXPORT bool fdterm (int); // Wait until one or more file descriptors becomes ready for input (reading) // or output (writing). Return the pair of numbers of descriptors that are // ready. Throw std::invalid_argument if anything is wrong with arguments // (both sets are empty, invalid fd, etc). Throw ios::failure on the // underlying OS error. // // Note that the function clears all the previously-ready entries on each // call. Entries with nullfd are ignored. // // On Windows only pipes and only their input (read) ends are supported. // struct fdselect_state { int fd; bool ready; // Note: intentionally non-explicit to allow implicit initialization when // pushing to fdselect_set. // fdselect_state (int fd): fd (fd), ready (false) {} }; using fdselect_set = small_vector; LIBBUTL_SYMEXPORT std::pair fdselect (fdselect_set& ifds, fdselect_set& ofds); inline std::size_t ifdselect (fdselect_set& ifds) { fdselect_set ofds; return fdselect (ifds, ofds).first; } inline std::size_t ofdselect (fdselect_set& ofds) { fdselect_set ifds; return fdselect (ifds, ofds).second; } // As above but wait up to the specified timeout returning a pair of zeroes // if none of the descriptors became ready. // // @@ Maybe merge it with the above via a default/optional value? // // LIBBUTL_SYMEXPORT std::pair // fdselect (fdselect_set&, fdselect_set&, const duration& timeout); // POSIX read() function wrapper. In particular, it supports the semantics // of non-blocking read for pipes on Windows. // // Note that on Wine currently pipes always behave as blocking regardless of // the mode. // LIBBUTL_SYMEXPORT std::streamsize fdread (int, void*, std::size_t); } #include
|
__label__pos
| 0.875874 |
Home Lessons Calculators Worksheets Resources Feedback Algebra Tutors
Calculator Output
Simplifying
-2p2 + 28p + -66 = 0
Reorder the terms:
-66 + 28p + -2p2 = 0
Solving
-66 + 28p + -2p2 = 0
Solving for variable 'p'.
Factor out the Greatest Common Factor (GCF), '2'.
2(-33 + 14p + -1p2) = 0
Factor a trinomial.
2((-11 + p)(3 + -1p)) = 0
Ignore the factor 2.
Subproblem 1
Set the factor '(-11 + p)' equal to zero and attempt to solve: Simplifying -11 + p = 0 Solving -11 + p = 0 Move all terms containing p to the left, all other terms to the right. Add '11' to each side of the equation. -11 + 11 + p = 0 + 11 Combine like terms: -11 + 11 = 0 0 + p = 0 + 11 p = 0 + 11 Combine like terms: 0 + 11 = 11 p = 11 Simplifying p = 11
Subproblem 2
Set the factor '(3 + -1p)' equal to zero and attempt to solve: Simplifying 3 + -1p = 0 Solving 3 + -1p = 0 Move all terms containing p to the left, all other terms to the right. Add '-3' to each side of the equation. 3 + -3 + -1p = 0 + -3 Combine like terms: 3 + -3 = 0 0 + -1p = 0 + -3 -1p = 0 + -3 Combine like terms: 0 + -3 = -3 -1p = -3 Divide each side by '-1'. p = 3 Simplifying p = 3
Solution
p = {11, 3}
Processing time: 1 ms. 76634513 equations since February 08, 2004. Disclaimer
Equation Factoring Calculator
Equation: Variable:
Hint: Selecting "AUTO" in the variable box will make the calculator automatically solve for the first variable it sees.
Home Lessons Calculators Worksheets Resources Feedback Algebra Tutors
|
__label__pos
| 0.976538 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
I've a xpath that looks like this:
$path = '//*[@id="page-content"]/table/tbody/tr[3]/td['.$i.']/div/a';
where $i goes from 1 to X. I would normaly use:
for($i=1; $i<X;$i++){
$path = '//*[@id="page-content"]/table/tbody/tr[3]/td['.$i.']/div/a';
$nodelist = $xpath->query($path);
$result = $nodelist->item(0)->nodeValue;
};
However, in this case, I dont know how much is X. Is there any way to loop through this without knowing X?
share|improve this question
1
i'm not sure I understand your question. Your for() loop is already looping up to 'X' times, regardless if you know the value there. What exactly are you trying to do? – JWiley Nov 9 '12 at 15:21
4 Answers 4
up vote 2 down vote accepted
If I'm understanding your question, you're asking how to loop up until the max number of <td> elements under your XPath?
You could retrieve the number of nodes using:
count(//*[@id="page-content"]/table/tbody/tr[3]/td) and store it as a temp variable, then just use it in your next statement like so:
for($i=1; $i<numberOfTdElements;$i++){
$path = '//*[@id="page-content"]/table/tbody/tr[3]/td['.$i.']/div/a';
$nodelist = $xpath->query($path);
$result = $nodelist->item(0)->nodeValue;
};
In response to hakre's suggestion:
$tbody = $doc->getElementsByTagName('tbody')->item(0);
// our query is relative to the tbody node
$query = 'count(tr[3]/td)';
$tdcount = $xpath->evaluate($query, $tbody);
echo "There are $tdcount elements under tr[3]\n";
And then combine it all in:
for($i=1; $i<$tdcount;$i++){
$path = '//*[@id="page-content"]/table/tbody/tr[3]/td['.$i.']/div/a';
$nodelist = $xpath->query($path);
$result = $nodelist->item(0)->nodeValue;
};
share|improve this answer
Nice idea, too. You might want to show how the count can be returned via DOMXPath::evaluate, too. – hakre Nov 9 '12 at 15:45
@hakre Good suggestion, I added that. – JWiley Nov 9 '12 at 16:24
Why not just stack em? Something like (fragile code, add your checks):
// first xpath for the outer node-list
$tds = $xpath->query('//*[@id="page-content"]/table/tbody/tr[3]/td');
foreach ($tds as $td)
{
// fetch the included values with a relative xpath to the current node
$nodelist = $xpath->query('./div/a', $td);
...
}
And actually you wont even need that inner nodelist, because you want to query the node-values in the end. However I leave this here to show what you can do straight ahead by using an xpath relative to a concrete node.
So if you need the first <a> element inside any <div> inside the third <tr> of any table inside of any node with the id "page-content", you can write it as such directly, it is one query:
//*[@id="page-content"]/table/tbody/tr[3]/td/div/a[1]
The predicate (that are the brackets) is only for the node in the path prefixed to it, so the [1] is only for a at the end as was the [3] only for the tr.
Code Example:
$as = $xpath->query('//*[@id="page-content"]/table/tbody/tr[3]/td/div/a[1]');
foreach ($as as $a)
{
echo $a->nodeValue, "\n";
}
So this would give you the result as a single node-list, you do not need to run a second xpath query.
share|improve this answer
I like yours better, more elegant/cleaner. +1 – JWiley Nov 9 '12 at 15:28
Much better solution than the one that was accepted. – Michael Kay Nov 9 '12 at 17:37
@MichaelKay I agree. Maybe he chose that answer because it required the least change to his current code, which was my intention. – JWiley Nov 9 '12 at 18:52
I think what you are trying to do is fetch every a element that is a child of a div, which in its turn is a child of any td element that, in its turn, is a child of every third tr element, etc. If that is correct, you can simply fetch these with this query:
<?php
$doc = new DOMDocument();
$doc->loadXML( $xml );
$xpath = new DOMXPath( $doc );
$nodes = $xpath->query( '//*[@id="page-content"]/table/tbody/tr[3]/td/div/a' );
foreach( $nodes as $node )
{
echo $node->nodeValue . '<br>';
}
Where $xml is a document, similar to this:
<?php
$xml = <<<XML
<?xml version="1.0" encoding="utf-8" ?>
<result>
<div id="page-content">
<table>
<tbody>
<tr>
<td>
<div><a>This one shouldn't be fetched</a></div>
</td>
</tr>
<tr>
<td>
<div><a>This one shouldn't be fetched</a></div>
</td>
</tr>
<tr>
<td>
<div><a>This one should be fetched</a></div>
</td>
<td>
<div><a>This one should be fetched</a></div>
</td>
<td>
<div><a>This one should be fetched</a></div>
</td>
<td>
<div><a>This one should be fetched</a></div>
</td>
<td>
<div><a>This one should be fetched</a></div>
</td>
</tr>
<tr>
<td>
<div><a>This one shouldn't be fetched</a></div>
</td>
</tr>
</tbody>
</table>
</div>
</result>
XML;
In other words, no need to loop trough all these td elements. You can fetch them all in one go, resulting in a DOMNodeList with all required nodes.
share|improve this answer
$doc = new DOMDocument();
$doc->loadXML( $xml );
$xpath = new DOMXPath( $doc );
$nodes = $xpath->query( '/result/div[@id="page-content"]/table/tbody/tr[3]/td/div/a');
foreach( $nodes as $node )
{
echo $node->nodeValue . '<br>';
}
share|improve this answer
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.803563 |
Streaming live at 10am (PST)
Re-Capta type Website
I want to create a site that has a purpose somewhat similar to Re-Capta; actually more like Inverse of Re-Capta.
The vision is:
1. Admin will post a few hundred words in some language
1.5 Allow members to sign-up if they give a fee of $1.00, say.
2. Ask members to articulate what feelings or meanings it evokes, from their POV. Members can also post a photo or drawing.
3 Let members engage in Likes
4. Let members search by Highest Likes
5. Let members Submit their own words for Review by Admin, Before posting on website.
6. Allow members to donate small $ amounts
Would it be easy for a non-programmer to create a Secure site for this. Would it also be usable on a Mobile phone?
Thank you in advance for your time!!
|
__label__pos
| 0.95964 |
Boost C++ Libraries
...one of the most highly regarded and expertly designed C++ library projects in the world. Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
This is the documentation for a snapshot of the develop branch, built from commit 9c88ccf530.
PrevUpHomeNext
Move algorithms
The standard library offers several copy-based algorithms. Some of them, like std::copy or std::uninitialized_copy are basic building blocks for containers and other data structures. This library offers move-based functions for those purposes:
template<typename I, typename O> O move(I, I, O);
template<typename I, typename O> O move_backward(I, I, O);
template<typename I, typename F> F uninitialized_move(I, I, F);
template<typename I, typename F> F uninitialized_copy_or_move(I, I, F);
The first 3 are move variations of their equivalent copy algorithms, but copy assignment and copy construction are replaced with move assignment and construction. The last one has the same behaviour as std::uninitialized_copy but since several standand library implementations don't play very well with move_iterators, this version is a portable version for those willing to use move iterators.
#include "movable.hpp"
#include <boost/move/algorithm.hpp>
#include <cassert>
#include <boost/aligned_storage.hpp>
int main()
{
const std::size_t ArraySize = 10;
movable movable_array[ArraySize];
movable movable_array2[ArraySize];
//move
boost::move(&movable_array2[0], &movable_array2[ArraySize], &movable_array[0]);
assert(movable_array2[0].moved());
assert(!movable_array[0].moved());
//move backward
boost::move_backward(&movable_array[0], &movable_array[ArraySize], &movable_array2[ArraySize]);
assert(movable_array[0].moved());
assert(!movable_array2[0].moved());
//uninitialized_move
boost::aligned_storage< sizeof(movable)*ArraySize
, boost::alignment_of<movable>::value>::type storage;
movable *raw_movable = static_cast<movable*>(static_cast<void*>(&storage));
boost::uninitialized_move(&movable_array2[0], &movable_array2[ArraySize], raw_movable);
assert(movable_array2[0].moved());
assert(!raw_movable[0].moved());
return 0;
}
PrevUpHomeNext
|
__label__pos
| 0.541666 |
1
In order to be able to copy all files to certain specific subfolders in site/default/files during my migration från D6 to D7 I use the file class MigrateFileUri (as far as I understand MigrateFileFid does not care about destination directories). So, in prepareRow() I fetch the filepath from the legacy db for the main image and likewise for the thumbnail of that same product.
The problem occurs when there is only a main image and no thumbnail in a product node. What happens is that the thumbnail field gets the previuos thumbnail path used. This is my prepareRow():
public function prepareRow($row){
// Always include this fragment at the beginning of every prepareRow()
// implementation, so parent classes can ignore rows.
if (parent::prepareRow($row) === FALSE) {
return FALSE;
}
$fid = (isset($row->field_sensorimage) && !empty($row->field_sensorimage[0])) ? $row->field_sensorimage[0]: FALSE;
if($fid != FALSE){
$filepath = Database::getConnection('default', 'legacy')->query('SELECT f.filepath FROM {files} f WHERE f.fid = :fid',array(':fid' => $fid))->fetchAssoc();
$this->addFieldMapping('field_sensorimage_newsite')->defaultValue($filepath);
}
$fid2 = (isset($row->field_thumbnail) && !empty($row->field_thumbnail[0])) ? $row->field_thumbnail[0]: FALSE;
if($fid2 != FALSE){
$filepath = Database::getConnection('default', 'legacy')->query('SELECT f.filepath FROM {files} f WHERE f.fid = :fid',array(':fid' => $fid2))->fetchAssoc();
$this->addFieldMapping('field_thumbnail_newsite')->defaultValue($filepath);
dsm($fid2);
dsm($filepath);
}
return TRUE;
}
This happens: if let's say the 100 first nodes has no thumbnail, it works as expected. No thumbnails are copied to the destination directory. But after the first node with a thumbnail all the following nodes that did not have a thumbnail in the D6 legacy site, gets the previous nodes' thumbnail all the same in the new D7 site.
Thanks to the 2 dsm() near the end of the snippet above I can confirm that no thumbnails gets duplicated within that if statement. There are about 50 thumbnails mapped. But the end result is that several hundred nodes gets duplicate thumbnails from other nodes.
So once the first thumbnail is migrated, that thumbnail path is inherited by all the following nodes and renamed with a number extension *_1.jog, *_2.jpg .... *_30.jpg as long as there is no thumbnail in the legacy field.
Now, where does this happen and how can I stop it from happening? I have rewritten the prepareRow() many times, but always with this same result. I am really stuck... Please help :-(
1 Answer 1
1
Finally found a workaorund… by deliberately giving the destination field an empty value instead of not doing anything when $fid2 is FALSE, I got rid of all the duplications :-)
if($fid2 === FALSE){
$filepath = array();
$this->addFieldMapping('field_thumbnail_newsite')->defaultValue($filepath);
} else {
$filepath = Database::getConnection('default', 'legacy')->query('SELECT f.filepath FROM {files} f WHERE f.fid = :fid',array(':fid' => $fid2))->fetchAssoc();
$this->addFieldMapping('field_thumbnail_newsite')->defaultValue($filepath);
}
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.933774 |
Reply To: Kinect Sport Season 2?
HomeForumsGeneral DiscussionKinect Sport Season 2?Reply To: Kinect Sport Season 2?
Author Replies
QuickMythril # Posted on 2012-01-31 at 12:24AM UTC
avatar
none of those games are ap2.5 so there is no reason they wouldn’t work after changing your drive from 2.0 to 3.0, the discs should not need reburned or anything. what exactly is happening when you try to play them?
|
__label__pos
| 0.979027 |
Results 1 to 6 of 6
Math Help - Conics - Distance between Two Points on Circle
1. #1
Member
Joined
Jul 2009
Posts
132
Conics - Distance between Two Points on Circle
R (x, y) P (a,b) Q (c,d) are points on x^2 + y^2 + 2gx + 2fy +k = 0
i) if d is the distance between points R and P, show that:
-d^2/2 = x.a + y.b + g(x+a) + f(y+b) + k
I am not sure how to begin this question. Any approach is possible. Thanks guys
Follow Math Help Forum on Facebook and Google+
2. #2
Super Member
Joined
Jun 2009
Posts
806
Thanks
4
Quote Originally Posted by Lukybear View Post
R (x, y) P (a,b) Q (c,d) are points on x^2 + y^2 + 2gx + 2fy +k = 0
i) if d is the distance between points R and P, show that:
-d^2/2 = x.a + y.b + g(x+a) + f(y+b) + k
I am not sure how to begin this question. Any approach is possible. Thanks guys
Mid point of RP is ( \frac{x+a}{2}, \frac{y+b}{2})
If ( -g, -f ) is the center of the circle,
then (\frac{d}{2})^2 = [(a+g)^2 + (b+f)^2] - [(\frac{x+a}{2} + g)^2 + (\frac{y+b}{2} + f)^2]
Simplify and proceed.
Last edited by sa-ri-ga-ma; June 5th 2010 at 06:49 AM.
Follow Math Help Forum on Facebook and Google+
3. #3
Member
Joined
Jul 2009
Posts
132
Could you just expand on how the centre of circle is acquired?
Also, can i just acquire that the method used was by pythagoras? i.e. mid pt to centre is perpendicular to PR?
Thanks.
Follow Math Help Forum on Facebook and Google+
4. #4
Super Member
Joined
Jun 2009
Posts
806
Thanks
4
Quote Originally Posted by Lukybear View Post
Could you just expand on how the centre of circle is acquired?
Also, can i just acquire that the method used was by pythagoras? i.e. mid pt to centre is perpendicular to PR?
Thanks.
The general equation of the circle is x^2 + y^2 + 2gx + 2fy +K = 0
The center of this circle is (-g, -f). If r is the radius of this circle, then
(x+g)^2 + (y+f)^2 = r^2
x^2 + 2gx + g^2 + y^2 + 2fy + f^2 = r^2
x^2 + y^2 + 2gx + 2fy + g^2 + f^2 - r^2 = 0
x^2 + y^2 + 2gx + 2fy + k = 0
For the second part of the question, it is yes.
Follow Math Help Forum on Facebook and Google+
5. #5
Super Member
Joined
May 2006
From
Lexington, MA (USA)
Posts
11,547
Thanks
539
Hello, Lukybear!
Your variables are confusing.
. . d is a coordinate . . . and a distance ?
. . x is a coordinate . . . and a variable ?
I must revise the problem.
P(a,b)\text{ and }R(c,d) are points on: . x^2 + y^2 + 2px + 2qy +r \:=\: 0
i) If D is the distance between points P and R, show that:
. . -\frac{D^2}{2} \;=\;ac + bd + p(a+c) + q(b+d) + r
\begin{array}{ccccccc}P(a,b)\text{ is on the circle:} &a^2+b^2+2pa + 2qb + r &=& 0 \\ \\[-3mm]<br />
R(c,d)\text{ is on the circle:} & c^2+d^2+2pc + 2qd + r &=& 0 \end{array}
Add the equations: . a^2+c^2+b^2+d^2+2pa + 2pc + 2qb + 2qd + 2r \:=\:0
And we have: . (a^2+c^2)+(b^2+d^2) + 2p(a+c) + 2q(b+d) + 2r \:=\:0
. . Hence: . (a^2+b^2) + (b^2+d^2) \;=\;-2\bigg[p(a+c) + q(b+d) + r\bigg] .[1]
The distance between P and R is given by: . D^2 \;=\;(a-c)^2 + (b-d)^2)
And we have: . D^2 \;=\;a^2-2ac + c^2 + b^2 - 2bd + d^2
. . Hence: . D^2 \;=\;(a^2+c^2) + (b^2 + d^2) - 2(ac + bd) .[2]
Substitute [1] into [2]: . D^2 \;=\;-2\bigg[p(a+c) + q(b+d) + r\bigg] - 2(ac+bd)
And we have: . D^2 \;=\;-2(ac+bd) - 2p(a+c) - 2q(b+d) - 2r
. . Therefore: . -\frac{D^2}{2} \;=\;ac+bd + p(a+c) + q(b+d) + r
Follow Math Help Forum on Facebook and Google+
6. #6
Member
Joined
Jul 2009
Posts
132
Wow thats brilliant. Thxs very much! Do apologise for question.
Follow Math Help Forum on Facebook and Google+
Similar Math Help Forum Discussions
1. Replies: 4
Last Post: May 2nd 2010, 03:34 PM
2. Replies: 3
Last Post: August 30th 2009, 12:11 PM
3. Conics-Circle Question
Posted in the Calculus Forum
Replies: 5
Last Post: May 17th 2009, 10:17 PM
4. Conics: Quick Circle Question
Posted in the Pre-Calculus Forum
Replies: 5
Last Post: April 22nd 2009, 05:12 AM
5. Vertical distance between 2 points on a circle
Posted in the Calculus Forum
Replies: 1
Last Post: March 18th 2009, 07:00 AM
Search Tags
/mathhelpforum @mathhelpforum
|
__label__pos
| 0.998317 |
guri guri - 8 months ago 56
Perl Question
perl extract and parse
I am new to Perl and trying to figure out the issue with the script.
2.what changes I can make to accept multiple XML one by one through command line?
3.How to direct o/p of function to text file?
4.How to direct o/p of function to i/p to other function as an argument?
Answer Source
The first question is simple: you declare @vuln and later use %vuln. You can store hash in an array if you wish, but the symbol %vuln has not been declared. So declare my %vuln;.
As for all others, handle it in the driver. The function extract is mostly good as it stands, reading and parsing a single file. Then, there are a number of ways to go about that.
You can loop over @ARGV, calling extract for each submitted file. For example
foreach my $file (@ARGV)
{
die "No file $file " if not -e $file;
my %vuln = NVD::extract($file);
# deal with %vuln ...
}
However, given all other checking you'd have to do, it is better to handle command-line arguments using the core module Getopt::Long. It also has options for reading a list of inputs into an array.
Once you are in a loop like the one above (over an array assigned by Getopt::Long), you can open an output file each time through, and write what you need for each processed input file (%vuln). How exactly you'd do that depends on specifics of what should be written to the output file(s).
You can also call other functions after extract and pass %vuln to them.
In principle I'd suggest passing a complex hash around by reference.
|
__label__pos
| 0.997572 |
In perl, an array is prefixed with an '@'
@_ is a special array. It is the array from which all passed arguments in a perl subroutine are accessible. It makes sense to pass all of the parameters to a subroutine as an array because the number of parameters is not fixed in perl (nor should it be). Why then, even in short subs, do we see things like this:
sub replace_nicknames {
my @names = @_;
foreach (@names) {
s/Jack/John/;
s/Bob/Robert/;
print "$_\n";
}
print "\n";
}
instead of this:
sub replace_nicknames {
foreach (@_){
s/Jack/John/;
s/Bob/Robert/;
print "$_\n";
}
print "\n";
}
The answer is because @_ is not really an array of parameters. It is an array of aliases to parameters. The assignment to a new local variable (or array, whatever) ensures that the caller's copy is unchanged when the subroutine exits.
Try it for yourself:
#!/usr/bin/perl
use strict;
my @names = ('Jack Kennedy', 'John Kennedy', 'Bob Hope', 'Robert Hope');
print "Before any test subroutine calls:\n";
&print_array(@names);
&replace_nicknames_safe(@names);
print "After the safe subroutine call:\n";
&print_array(@names);
&replace_nicknames_unsafe(@names);
print "After the unsafe subroutine call:\n";
@print_array(@names);
sub replace_nicknames_safe {
print "Inside safe subroutine:\n";
my @names = @_;
foreach (@names) {
s/Jack/John/;
s/Bob/Robert/;
}
&print_array(@names);
}
sub replace_nicknames_unsafe {
print "Inside unsafe subroutine:\n";
foreach (@_) {
s/Jack/John;
s/Bob/Robert;
}
&print_array(@_);
}
sub print_array {
my @array = @_;
for(my $i=0; $i<=$#array; $i++) {
print "$array[$i]";
unless($i==$#array) {
print ', ';
} else {
print "\n\n";
}
}
}
This should be the output you get:
Before any test subroutine calls:
Jack Kennedy, John Kennedy, Bob Hope, Robert Hope
Inside safe subroutine:
John Kennedy, John Kennedy, Robert Hope, Robert Hope
After the safe subroutine call:
Jack Kennedy, John Kennedy, Bob Hope, Robert Hope
Inside unsafe subroutine:
John Kennedy, John Kennedy, Robert Hope, Robert Hope
After the unsafe subroutine call:
John Kennedy, John Kennedy, Robert Hope, Robert Hope
|
__label__pos
| 0.994474 |
SUPPORT THE WORK
GetWiki
Objective-C
ARTICLE SUBJECTS
aesthetics →
being →
complexity →
database →
enterprise →
ethics →
fiction →
history →
internet →
knowledge →
language →
licensing →
linux →
logic →
method →
news →
perception →
philosophy →
policy →
purpose →
religion →
science →
sociology →
software →
truth →
unix →
wiki →
ARTICLE TYPES
essay →
feed →
help →
system →
wiki →
ARTICLE ORIGINS
critical →
discussion →
forked →
imported →
original →
Objective-C
[ temporary import ]
please note:
- the content below is remote from Wikipedia
- it has been imported raw for GetWiki
{{Use mdy dates|date=October 2018}}
factoids
| latest preview version =| latest preview date = Static typing>static, Dynamic typing, Weak typing>weakClang, GNU Compiler Collection>GCC| dialects =C (programming language)>C, SmalltalkGroovy (programming language)>Groovy, Java (programming language), Nu (programming language)>Nu, Objective-J, TOM (object-oriented programming language), Swift (programming language)>SwiftHTTP://NONDOT.ORG/SABRE/ >TITLE=CHRIS LATTNER'S HOMEPAGE FIRST=CHRIS ACCESSDATE=JUNE 3, 2014 QUOTE=THE SWIFT LANGUAGE IS THE PRODUCT OF TIRELESS EFFORT FROM A TEAM OF LANGUAGE EXPERTS, DOCUMENTATION GURUS, COMPILER OPTIMIZATION NINJAS, AND AN INCREDIBLY IMPORTANT INTERNAL DOGFOODING GROUP WHO PROVIDED FEEDBACK TO HELP REFINE AND BATTLE-TEST IDEAS. OF COURSE, IT ALSO GREATLY BENEFITED FROM THE EXPERIENCES HARD-WON BY MANY OTHER LANGUAGES IN THE FIELD, DRAWING IDEAS FROM OBJECTIVE-C, RUST, HASKELL, RUBY, PYTHON, C#, CLU, AND FAR TOO MANY OTHERS TO LIST., | programming language =| platform =| operating_system = Cross-platform| license =| website = developer.apple.com| wikibooks = Objective-C Programming}}Objective-C is a general-purpose, object-oriented programming language that adds Smalltalk-style messaging to the C programming language. It was the main programming language supported by Apple for the macOS and iOS operating systems, and their respective application programming interfaces (APIs) Cocoa and Cocoa Touch until the introduction of Swift.WEB
,weblink
, App Frameworks
, June 2014
, Apple
, February 13, 2019,
The programming language Objective-C was originally developed in the early 1980s. It was selected as the main language used by NeXT for its NeXTSTEP operating system, from which macOS and iOS are derived.WEB
,weblink
, A Brief History of Mac OS X
, Singh
, Amit
, December 2003
, Mac OS X Internals
, June 11, 2012, Portable Objective-C programs that do not use the Cocoa or Cocoa Touch libraries, or those using parts that may be ported or reimplemented for other systems, can also be compiled for any system supported by GNU Compiler Collection (GCC) or Clang.
Objective-C source code 'implementation' program files usually have {{mono|.m}} filename extensions, while Objective-C 'header/interface' files have {{mono|.h}} extensions, the same as C header files. Objective-C++ files are denoted with a {{mono|.mm}} file extension.
History
Objective-C was created primarily by Brad Cox and Tom Love in the early 1980s at their company Stepstone.WEB, Garling, Caleb, iPhone Coding Language Now World's Third Most Popular,weblink Wired (magazine), Wired, May 20, 2013, Both had been introduced to Smalltalk while at ITT Corporation's Programming Technology Center in 1981. The earliest work on Objective-C traces back to around that time.BOOK
, Wentk
, Richard
, Cocoa: Volume 5 of Developer Reference Apple Developer Series
, John Wiley and Sons
, 2009
,
,
,weblink
,
,
, 0-470-49589-8,
Cox was intrigued by problems of true reusability in software design and programming. He realized that a language like Smalltalk would be invaluable in building development environments for system developers at ITT. However, he and Tom Love also recognized that backward compatibility with C was critically important in ITT's telecom engineering milieu.BOOK
, Biancuzzi
, Federico
, Warden
, Shane
, Masterminds of Programming
, O'Reilly Media, Inc.
, 2009
,
, 242–246
,weblink
, 0-596-51517-0,
Cox began writing a pre-processor for C to add some of the abilities of Smalltalk. He soon had a working implementation of an object-oriented extension to the C language, which he called "OOPC" for Object-Oriented Pre-Compiler.JOURNAL
, Cox
, Brad
, Brad Cox
, The object oriented pre-compiler: programming Smalltalk 80 methods in C language
, ACM SIGPLAN Notices
, 18
, 1
,
, Association for Computing Machinery, ACM
, New York, NY
, 1983
,weblink
,
, 10.1145/948093.948095
,
, February 17, 2011,
Love was hired by Schlumberger Research in 1982 and had the opportunity to acquire the first commercial copy of Smalltalk-80, which further influenced the development of their brainchild.In order to demonstrate that real progress could be made, Cox showed that making interchangeable software components really needed only a few practical changes to existing tools. Specifically, they needed to support objects in a flexible manner, come supplied with a usable set of libraries, and allow for the code (and any resources needed by the code) to be bundled into one cross-platform format.Love and Cox eventually formed a new venture, Productivity Products International (PPI), to commercialize their product, which coupled an Objective-C compiler with class libraries. In 1986, Cox published the main description of Objective-C in its original form in the book Object-Oriented Programming, An Evolutionary Approach. Although he was careful to point out that there is more to the problem of reusability than just the language, Objective-C often found itself compared feature for feature with other languages.
Popularization through NeXT
In 1988, NeXT licensed Objective-C from StepStone (the new name of PPI, the owner of the Objective-C trademark) and extended the GCC compiler to support Objective-C. NeXT developed the AppKit and Foundation Kit libraries on which the NeXTSTEP user interface and Interface Builder were based. While the NeXT workstations failed to make a great impact in the marketplace, the tools were widely lauded in the industry. This led NeXT to drop hardware production and focus on software tools, selling NeXTSTEP (and OpenStep) as a platform for custom programming.In order to circumvent the terms of the GPL, NeXT had originally intended to ship the Objective-C frontend separately, allowing the user to link it with GCC to produce the compiler executable. After being initially accepted by Richard M. Stallman, this plan was rejected after Stallman consulted with GNU's lawyers and NeXT agreed to make Objective-C part of GCC.WEB,weblink Common Lisp and Readline, The issue first arose when NeXT proposed to distribute a modified GCC in two parts and let the user link them. Jobs asked me whether this was lawful. It seemed to me at the time that it was, following reasoning like what you are using; but since the result was very undesirable for free software, I said I would have to ask the lawyer. What the lawyer said surprised me; he said that judges would consider such schemes to be "subterfuges" and would be very harsh toward them. He said a judge would ask whether it is "really" one program, rather than how it is labeled. So I went back to Jobs and said we believed his plan was not allowed by the GPL. The direct result of this is that we now have an Objective C front end. They had wanted to distribute the Objective C parser as a separate proprietary package to link with the GCC back end, but since I didn't agree this was allowed, they made it free., The work to extend GCC was led by Steve Naroff, who joined NeXT from StepStone. The compiler changes were made available as per GPL license terms, but the runtime libraries were not, rendering the open source contribution unusable to the general public. This led to other parties developing such runtime libraries under open source license. Later, Steve Naroff was also principal contributor to work at Apple to build the Objective-C frontend to Clang.The GNU project started work on its free software implementation of Cocoa, named GNUstep, based on the OpenStep standard.WEB,weblink GNUstep developers/GNU Project, GNUstep: Introduction, July 29, 2012, Dennis Glatting wrote the first GNU Objective-C runtime in 1992. The GNU Objective-C runtime, which has been in use since 1993, is the one developed by Kresten Krab Thorup when he was a university student in Denmark.{{citation needed|date=April 2013}} Thorup also worked at NeXT from 1993 to 1996.WEB,weblink Kresten Krab Thorup {{!, LinkedIn|website=www.linkedin.com|access-date=June 23, 2016}}
Apple development and Swift
After acquiring NeXT in 1996, Apple Computer used OpenStep in its then-new operating system, Mac OS X. This included Objective-C, NeXT's Objective-C-based developer tool, Project Builder, and its interface design tool, Interface Builder, both now merged into one application, Xcode. Most of Apple's current Cocoa API is based on OpenStep interface objects and is the most significant Objective-C environment being used for active development.At WWDC 2014, Apple introduced a new language, Swift, which was characterized as "Objective-C without the C".
Syntax
Objective-C is a thin layer atop C, and is a "strict superset" of C, meaning that it is possible to compile any C program with an Objective-C compiler, and to freely include C language code within an Objective-C class.WEB
,weblink
, Write Objective-C Code
, April 23, 2013, December 22, 2013
, apple.com
, WEB,weblink Objective-C Boot Camp, Objective-C is a strict superset of ANSI C, WEB,weblink Examining Objective-C, Objective-C is an object-oriented strict superset of C, WEB,weblink Pro Objective-C, Keith, Lee, September 3, 2013, Apress, December 24, 2017, Google Books, WEB,weblink Tags for Objective-C Headers, Objective-C is a superset of C, WEB,weblink AppScan Source 8.7 now available, The Objective-C programming language is a superset of the C programming language, Objective-C derives its object syntax from Smalltalk. All of the syntax for non-object-oriented operations (including primitive variables, pre-processing, expressions, function declarations, and function calls) are identical to those of C, while the syntax for object-oriented features is an implementation of Smalltalk-style messaging.
Messages
The Objective-C model of object-oriented programming is based on message passing to object instances. In Objective-C one does not call a method; one sends a message. This is unlike the Simula-style programming model used by C++. The difference between these two concepts is in how the code referenced by the method or message name is executed. In a Simula-style language, the method name is in most cases bound to a section of code in the target class by the compiler. In Smalltalk and Objective-C, the target of a message is resolved at runtime, with the receiving object itself interpreting the message. A method is identified by a selector or {{mono|SEL}} — a unique identifier for each message name, often just a {{mono|NUL}}-terminated string representing its name — and resolved to a C method pointer implementing it: an {{mono|IMP}}.WEB
, Apple, Inc.
, Apple, Inc.
, Dynamic Method Resolution
, Objective-C Runtime Programming Guide
, October 19, 2009
,weblink
, November 25, 2014, A consequence of this is that the message-passing system has no type checking. The object to which the message is directed — the receiver — is not guaranteed to respond to a message, and if it does not, it raises an exception.WEB
, Apple, Inc.
, Apple, Inc.
, Avoiding Messaging Errors
, The Objective-C Programming Language
, October 19, 2009
,weblink
,weblink" title="web.archive.org/web/20100908132046weblink">weblink
, September 8, 2010,
Sending the message {{mono|method}} to the object pointed to by the pointer {{mono|obj}} would require the following code in C++:obj->method(argument);In Objective-C, this is written as follows:[obj method:argument];Both styles of programming have their strengths and weaknesses. Object-oriented programming in the Simula (C++) style allows multiple inheritance and faster execution by using compile-time binding whenever possible, but it does not support dynamic binding by default. It also forces all methods to have a corresponding implementation unless they are abstract. The Smalltalk-style programming as used in Objective-C allows messages to go unimplemented, with the method resolved to its implementation at runtime. For example, a message may be sent to a collection of objects, to which only some will be expected to respond, without fear of producing runtime errors. Message passing also does not require that an object be defined at compile time. An implementation is still required for the method to be called in the derived object. (See the dynamic typing section below for more advantages of dynamic (late) binding.)
Interfaces and implementations
Objective-C requires that the interface and implementation of a class be in separately declared code blocks. By convention, developers place the interface in a header file and the implementation in a code file. The header files, normally suffixed .h, are similar to C header files while the implementation (method) files, normally suffixed .m, can be very similar to C code files.
Interface
In {{citation needed span|text=other programming languages|date=April 2019|reason=This claim is specific to a limited set of languages, e.g. C++; they should be named.}}, this is called a "class declaration".The interface of a class is usually defined in a header file. A common convention is to name the header file after the name of the class, e.g. {{mono|Ball.h}} would contain the interface for the class {{mono|Ball}}.An interface declaration takes the form:@interface classname : superclassname {
// instance variables
}+ classMethod1;+ (return_type)classMethod2;+ (return_type)classMethod3:(param1_type)param1_varName;- (return_type)instanceMethod1With1Parameter:(param1_type)param1_varName;- (return_type)instanceMethod2With2Parameters:(param1_type)param1_varName param2_callName:(param2_type)param2_varName;@endIn the above, plus signs denote class methods, or methods that can be called on the class itself (not on an instance), and minus signs denote instance methods, which can only be called on a particular instance of the class. Class methods also have no access to instance variables.The code above is roughly equivalent to the following C++ interface:class classname : public superclassname {
protected:
// instance variables
public:
// Class (static) functions
static void * classMethod1();
static return_type classMethod2();
static return_type classMethod3(param1_type param1_varName);
// Instance (member) functions
return_type instanceMethod1With1Parameter (param1_type param1_varName);
return_type instanceMethod2With2Parameters (param1_type param1_varName, param2_type param2_varName=default);
};Note that {{mono|instanceMethod2With2Parameters:param2_callName:}} demonstrates the interleaving of selector segments with argument expressions, for which there is no direct equivalent in C/C++.Return types can be any standard C type, a pointer to a generic Objective-C object, a pointer to a specific type of object such as NSArray *, NSImage *, or NSString *, or a pointer to the class to which the method belongs (instancetype). The default return type is the generic Objective-C type {{mono|id}}.Method arguments begin with a name labeling the argument that is part of the method name, followed by a colon followed by the expected argument type in parentheses and the argument name. The label can be omitted.- (void)setRangeStart:(int)start end:(int)end;- (void)importDocumentWithName:(NSString *)name withSpecifiedPreferences:(Preferences *)prefs beforePage:(int)insertPage;
Implementation
The interface only declares the class interface and not the methods themselves: the actual code is written in the implementation file. Implementation (method) files normally have the file extension .m, which originally signified "messages".BOOK, Dalrymple, Mark, Knaster, Scott, Learn Objective-C on the Mac, 9, The .m extension originally stood for "messages" when Objective-C was first introduced, referring to a central feature of Objective-C, @implementation classname+ (return_type)classMethod{
// implementation
}- (return_type)instanceMethod{
// implementation
}@endMethods are written using their interface declarations.Comparing Objective-C and C:- (int)method:(int)i{
return [self square_root:i];
}int function (int i){
return square_root(i);
}The syntax allows pseudo-naming of arguments.- (int)changeColorToRed:(float)red green:(float)green blue:(float)blue;[myColor changeColorToRed:5.0 green:2.0 blue:6.0];Internal representations of a method vary between different implementations of Objective-C. If myColor is of the class {{mono|Color}}, instance method {{mono|-changeColorToRed:green:blue:}} might be internally labeled {{mono|_i_Color_changeColorToRed_green_blue}}. The {{mono|i}} is to refer to an instance method, with the class and then method names appended and colons changed to underscores. As the order of parameters is part of the method name, it cannot be changed to suit coding style or expression as with true named parameters.However, internal names of the function are rarely used directly. Generally, messages are converted to function calls defined in the Objective-C runtime library. It is not necessarily known at link time which method will be called because the class of the receiver (the object being sent the message) need not be known until runtime.
Instantiation
Once an Objective-C class is written, it can be instantiated. This is done by first allocating an uninitialized instance of the class (an object) and then by initializing it. An object is not fully functional until both steps have been completed. These steps should be accomplished with one line of code so that there is never an allocated object that hasn't undergone initialization (and because it is unwise to keep the intermediate result since -init can return a different object than that on which it is called).Instantiation with the default, no-parameter initializer:MyObject *o = MyObject alloc] init];Instantiation with a custom initializer:MyObject *o = MyObject alloc] initWithString:myString];In the case where no custom initialization is being performed, the "new" method can often be used in place of the alloc-init messages:MyObject *o = [MyObject new];Also, some classes implement class method initializers. Like +new, they combine +alloc and -init, but unlike +new, they return an autoreleased instance. Some class method initializers take parameters:MyObject *o = [MyObject object];MyObject *o2 = [MyObject objectWithString:myString];The alloc message allocates enough memory to hold all the instance variables for an object, sets all the instance variables to zero values, and turns the memory into an instance of the class; at no point during the initialization is the memory an instance of the superclass.The init message performs the set-up of the instance upon creation. The init method is often written as follows:- (id)init {
self = [super init];
if (self) {
// perform initialization of object here
}
return self;
}In the above example, notice the id return type. This type stands for "pointer to any object" in Objective-C (See the Dynamic typing section).The initializer pattern is used to assure that the object is properly initialized by its superclass before the init method performs its initialization. It performs the following actions:
1. self = [super init]
2. : Sends the superclass instance an init message and assigns the result to self (pointer to the current object).
3. if (self)
4. : Checks if the returned object pointer is valid before performing any initialization.
5. return self
6. : Returns the value of self to the caller.
A non-valid object pointer has the value nil; conditional statements like "if" treat nil like a null pointer, so the initialization code will not be executed if [super init] returned nil. If there is an error in initialization the init method should perform any necessary cleanup, including sending a "release" message to self, and return nil to indicate that initialization failed. Any checking for such errors must only be performed after having called the superclass initialization to ensure that destroying the object will be done correctly.If a class has more than one initialization method, only one of them (the "designated initializer") needs to follow this pattern; others should call the designated initializer instead of the superclass initializer.
Protocols
In other programming languages, these are called "interfaces".Objective-C was extended at NeXT to introduce the concept of multiple inheritance of specification, but not implementation, through the introduction of protocols. This is a pattern achievable either as an abstract multiple inherited base class in C++, or as an "interface" (as in Java and C#). Objective-C makes use of ad hoc protocols called informal protocols and compiler-enforced protocols called formal protocols.An informal protocol is a list of methods that a class can opt to implement. It is specified in the documentation, since it has no presence in the language. Informal protocols are implemented as a category (see below) on NSObject and often include optional methods, which, if implemented, can change the behavior of a class. For example, a text field class might have a delegate that implements an informal protocol with an optional method for performing auto-completion of user-typed text. The text field discovers whether the delegate implements that method (via reflection) and, if so, calls the delegate's method to support the auto-complete feature.A formal protocol is similar to an interface in Java, C#, and Ada 2005. It is a list of methods that any class can declare itself to implement. Versions of Objective-C before 2.0 required that a class must implement all methods in a protocol it declares itself as adopting; the compiler will emit an error if the class does not implement every method from its declared protocols. Objective-C 2.0 added support for marking certain methods in a protocol optional, and the compiler will not enforce implementation of optional methods.A class must be declared to implement that protocol to be said to conform to it. This is detectable at runtime. Formal protocols cannot provide any implementations; they simply assure callers that classes that conform to the protocol will provide implementations. In the NeXT/Apple library, protocols are frequently used by the Distributed Objects system to represent the abilities of an object executing on a remote system.The syntax@protocol NSLocking- (void)lock;- (void)unlock;@enddenotes that there is the abstract idea of locking. By stating in the class definition that the protocol is implemented,@interface NSLock : NSObject //...@endinstances of NSLock claim that they will provide an implementation for the two instance methods.
Dynamic typing
Objective-C, like Smalltalk, can use dynamic typing: an object can be sent a message that is not specified in its interface. This can allow for increased flexibility, as it allows an object to "capture" a message and send the message to a different object that can respond to the message appropriately, or likewise send the message on to another object. This behavior is known as message forwarding or delegation (see below). Alternatively, an error handler can be used in case the message cannot be forwarded. If an object does not forward a message, respond to it, or handle an error, then the system will generate a runtime exception.WEB, Objective-C Runtime Programming Guide,weblink Apple Inc., If messages are sent to nil (the null object pointer), they will be silently ignored or raise a generic exception, depending on compiler options.Static typing information may also optionally be added to variables. This information is then checked at compile time. In the following four statements, increasingly specific type information is provided. The statements are equivalent at runtime, but the extra information allows the compiler to warn the programmer if the passed argument does not match the type specified.- (void)setMyValue:(id)foo;In the above statement, foo may be of any class.- (void)setMyValue:(id)foo;In the above statement, foo may be an instance of any class that conforms to the NSCopying protocol.- (void)setMyValue:(NSNumber *)foo;In the above statement, foo must be an instance of the NSNumber class.- (void)setMyValue:(NSNumber *)foo;In the above statement, foo must be an instance of the NSNumber class, and it must conform to the NSCopying protocol.
Forwarding
Objective-C permits the sending of a message to an object that may not respond. Rather than responding or simply dropping the message, an object can forward the message to an object that can respond. Forwarding can be used to simplify implementation of certain design patterns, such as the observer pattern or the proxy pattern.The Objective-C runtime specifies a pair of methods in {{mono|Object}}
• forwarding methods:
- (retval_t)forward:(SEL)sel args:(arglist_t)args; // with GCC- (id)forward:(SEL)sel args:(marg_list)args; // with NeXT/Apple systems
• action methods:
- (retval_t)performv:(SEL)sel args:(arglist_t)args; // with GCC- (id)performv:(SEL)sel args:(marg_list)args; // with NeXT/Apple systemsAn object wishing to implement forwarding needs only to override the forwarding method with a new method to define the forwarding behavior. The action method {{mono|performv::}} need not be overridden, as this method merely performs an action based on the selector and arguments. Notice the SEL type, which is the type of messages in Objective-C.Note: in OpenStep, Cocoa, and GNUstep, the commonly used frameworks of Objective-C, one does not use the {{mono|Object}} class. The {{mono|- (void)forwardInvocation:(NSInvocation *)anInvocation}} method of the {{mono|NSObject}} class is used to do forwarding.
Example
Here is an example of a program that demonstrates the basics of forwarding.
Forwarder.h
1. import
@interface Forwarder : Object {
id recipient; //The object we want to forward the message to.
}//Accessor methods.- (id)recipient;- (id)setRecipient:(id)_recipient;@end
Forwarder.m
1. import "Forwarder.h"
@implementation Forwarder- (retval_t)forward:(SEL)sel args:(arglist_t) args {
/*
* Check whether the recipient actually responds to the message.
* This may or may not be desirable, for example, if a recipient
* in turn does not respond to the message, it might do forwarding
* itself.
*/
if([recipient respondsToSelector:sel]) {
return [recipient performv:sel args:args];
} else {
return [self error:"Recipient does not respond"];
}
}- (id)setRecipient:(id)_recipient {
[recipient autorelease];
recipient = [_recipient retain];
return self;
}- (id) recipient {
return recipient;
}@end
Recipient.h
1. import
// A simple Recipient object.@interface Recipient : Object- (id)hello;@end
Recipient.m
1. import "Recipient.h"
@implementation Recipient- (id)hello {
printf("Recipient says hello!n");
return self;
}@end
main.m
1. import "Forwarder.h"
2. import "Recipient.h"
int main(void) {
Forwarder *forwarder = [Forwarder new];
Recipient *recipient = [Recipient new];
[forwarder setRecipient:recipient]; //Set the recipient.
/*
* Observe forwarder does not respond to a hello message! It will
* be forwarded. All unrecognized methods will be forwarded to
* the recipient
* (if the recipient responds to them, as written in the Forwarder)
*/
[forwarder hello];
[recipient release];
[forwarder release];
return 0;
}
Notes
When compiled using gcc, the compiler reports:
$ gcc -x objective-c -Wno-import Forwarder.m Recipient.m main.m -lobjc
main.m: In function `main':
main.m:12: warning: `Forwarder' does not respond to `hello'
$
The compiler is reporting the point made earlier, that {{mono|Forwarder}} does not respond to hello messages. In this circumstance, it is safe to ignore the warning since forwarding was implemented. Running the program produces this output:
$ ./a.out
Recipient says hello!
Categories
During the design of Objective-C, one of the main concerns was the maintainability of large code bases. Experience from the structured programming world had shown that one of the main ways to improve code was to break it down into smaller pieces. Objective-C borrowed and extended the concept of categories from Smalltalk implementations to help with this process.WEB,weblink Archived copy, October 7, 2008, yes,weblink" title="web.archive.org/web/20090415163318weblink">weblink April 15, 2009, dmy-all, Furthermore, the methods within a category are added to a class at run-time. Thus, categories permit the programmer to add methods to an existing class - an open class - without the need to recompile that class or even have access to its source code. For example, if a system does not contain a spell checker in its String implementation, it could be added without modifying the String source code.Methods within categories become indistinguishable from the methods in a class when the program is run. A category has full access to all of the instance variables within the class, including private variables.If a category declares a method with the same method signature as an existing method in a class, the category's method is adopted. Thus categories can not only add methods to a class, but also replace existing methods. This feature can be used to fix bugs in other classes by rewriting their methods, or to cause a global change to a class's behavior within a program. If two categories have methods with the same name but different method signatures, it is undefined which category's method is adopted.Other languages have attempted to add this feature in a variety of ways. TOM took the Objective-C system a step further and allowed for the addition of variables also. Other languages have used prototype-based solutions instead, the most notable being Self.The C# and Visual Basic.NET languages implement superficially similar functionality in the form of extension methods, but these lack access to the private variables of the class.WEB,weblink Extension Methods (C# Programming Guide), October 2010, Microsoft, July 10, 2011, Ruby and several other dynamic programming languages refer to the technique as "monkey patching".Logtalk implements a concept of categories (as first-class entities) that subsumes Objective-C categories functionality (Logtalk categories can also be used as fine-grained units of composition when defining e.g. new classes or prototypes; in particular, a Logtalk category can be virtually imported by any number of classes and prototypes).
Example usage of categories
This example builds up an {{mono|Integer}} class, by defining first a basic class with only accessor methods implemented, and adding two categories, {{mono|Arithmetic}} and {{mono|Display}}, which extend the basic class. While categories can access the base class's private data members, it is often good practice to access these private data members through the accessor methods, which helps keep categories more independent from the base class. Implementing such accessors is one typical usage of categories. Another is to use categories to add methods to the base class. However, it is not regarded as good practice to use categories for subclass overriding, also known as monkey patch]]ing. Informal protocols are implemented as a category on the base {{mono|NSObject}} class. By convention, files containing categories that extend base classes will take the name BaseClass+ExtensionClass.h.
Integer.h
1. import
@interface Integer : Object {
int integer;
}- (int) integer;- (id) integer: (int) _integer;@end
Integer.m
1. import "Integer.h"
@implementation Integer- (int) integer {
return integer;
}- (id) integer: (int) _integer {
integer = _integer;
return self;
}@end
Integer+Arithmetic.h
1. import "Integer.h"
@interface Integer (Arithmetic)- (id) add: (Integer *) addend;- (id) sub: (Integer *) subtrahend;@end
Integer+Arithmetic.m
1. import "Integer+Arithmetic.h"
@implementation Integer (Arithmetic)- (id) add: (Integer *) addend {
( return [self integer: [self integer] + [addend integer);
}- (id) sub: (Integer *) subtrahend {
( return [self integer: [self integer] - [subtrahend integer);
}@end
Integer+Display.h
1. import "Integer.h"
@interface Integer (Display)- (id) showstars;- (id) showint;@end
Integer+Display.m
1. import "Integer+Display.h"
@implementation Integer (Display)- (id) showstars {
int i, x = [self integer];
for (i = 0; i
< x; i++) {
printf("*");
}
printf("n");
return self;
}- (id) showint {
printf("%dn", [self integer]);
return self;
}@end
main.m
1. import "Integer.h"
2. import "Integer+Arithmetic.h"
3. import "Integer+Display.h"
int main(void) {
Integer *num1 = [Integer new], *num2 = [Integer new];
int x;
printf("Enter an integer: ");
scanf("%d", &x);
[num1 integer:x];
[num1 showstars];
printf("Enter an integer: ");
scanf("%d", &x);
[num2 integer:x];
[num2 showstars];
[num1 add:num2];
[num1 showint];
return 0;
}
Notes
Compilation is performed, for example, by:
gcc -x objective-c main.m Integer.m Integer+Arithmetic.m Integer+Display.m -lobjc
One can experiment by leaving out the {{mono|#import "Integer+Arithmetic.h" }} and {{mono|[num1 add:num2]}} lines and omitting {{mono|Integer+Arithmetic.m}} in compilation. The program will still run. This means that it is possible to mix-and-match added categories if needed; if a category does not need to have some ability, it can simply not be compile in.
Posing
Objective-C permits a class to wholly replace another class within a program. The replacing class is said to "pose as" the target class.Class posing was declared deprecated with Mac OS X v10.5, and is unavailable in the 64-bit runtime. Similar functionality can be achieved by using method swizzling in categories, that swaps one method's implementation with another's that have the same signature.For the versions still supporting posing, all messages sent to the target class are instead received by the posing class. There are several restrictions:
• A class may only pose as one of its direct or indirect superclasses.
• The posing class must not define any new instance variables that are absent from the target class (though it may define or override methods).
• The target class may not have received any messages prior to the posing.
Posing, similarly with categories, allows global augmentation of existing classes. Posing permits two features absent from categories:
• A posing class can call overridden methods through super, thus incorporating the implementation of the target class.
• A posing class can override methods defined in categories.
For example,@interface CustomNSApplication : NSApplication@end@implementation CustomNSApplication- (void) setMainMenu: (NSMenu*) menu {
// do something with menu
}@endclass_poseAs ([CustomNSApplication class], [NSApplication class]);This intercepts every invocation of setMainMenu to NSApplication.
#import
In the C language, the #include pre-compile directive always causes a file's contents to be inserted into the source at that point. Objective-C has the #import directive, equivalent except that each file is included only once per compilation unit, obviating the need for include guards.
Linux Gcc Compilation
// FILE: hello.m
1. import
int main (int argc, const char * argv[]){
/* my first program in Objective-C */
NSLog(@"Hello, World! n");
return 0;
}
1. Compile Command Line for gcc and MinGW Compiler:
$ gcc
`gnustep-config --objc-flags`
-o hello
hello.m
-L /GNUstep/System/Library/Libraries
-lobjc
-lgnustep-base
$ ./hello
Other features
Objective-C's features often allow for flexible, and often easy, solutions to programming issues.
• Delegating methods to other objects and remote invocation can be easily implemented using categories and message forwarding.
• Swizzling of the isa pointer allows for classes to change at runtime. Typically used for debugging where freed objects are swizzled into zombie objects whose only purpose is to report an error when someone calls them. Swizzling was also used in Enterprise Objects Framework to create database faults.{{citation needed|date=August 2015}} Swizzling is used today by Apple's Foundation Framework to implement Key-Value Observing.
Language variants
Objective-C++
Objective-C++ is a language variant accepted by the front-end to the GNU Compiler Collection and Clang, which can compile source files that use a combination of C++ and Objective-C syntax. Objective-C++ adds to C++ the extensions that Objective-C adds to C. As nothing is done to unify the semantics behind the various language features, certain restrictions apply:
• A C++ class cannot derive from an Objective-C class and vice versa.
• C++ namespaces cannot be declared inside an Objective-C declaration.
• Objective-C declarations may appear only in global scope, not inside a C++ namespace
• Objective-C classes cannot have instance variables of C++ classes that lack a default constructor or that have one or more virtual methods,{{citation needed|date=November 2013}} but pointers to C++ objects can be used as instance variables without restriction (allocate them with new in the -init method).
• C++ "by value" semantics cannot be applied to Objective-C objects, which are only accessible through pointers.
• An Objective-C declaration cannot be within a C++ template declaration and vice versa. However, Objective-C types (e.g., Classname ) can be used as C++ template parameters.
• Objective-C and C++ exception handling is distinct; the handlers of each cannot handle exceptions of the other type. This is mitigated in recent runtimes as Objective-C exceptions are either replaced by C++ exceptions completely (Apple runtime), or partly when Objective-C++ library is linked (GNUstep libobjc2).
• Care must be taken since the destructor calling conventions of Objective-C and C++'s exception run-time models do not match (i.e., a C++ destructor will not be called when an Objective-C exception exits the C++ object's scope). The new 64-bit runtime resolves this by introducing interoperability with C++ exceptions in this sense.Using C++ With Objective-C in Mac OS X Reference Library, last retrieved in February 10, 2010.
• Objective-C blocks and C++11 lambdas are distinct entities. However, a block is transparently generated on macOS when passing a lambda where a block is expected.WEB,weblink Clang Language Extensions — Clang 3.5 documentation, Clang.llvm.org, April 16, 2014,
Objective-C 2.0
At the 2006 Worldwide Developers Conference, Apple announced the release of "Objective-C 2.0," a revision of the Objective-C language to include "modern garbage collection, syntax enhancements,WEB,weblink Objective-C 2.0: more clues, Lists.apple.com, August 10, 2006, May 30, 2010, runtime performance improvements,WEB,weblink Re: Objective-C 2.0, Lists.apple.com, May 30, 2010, and 64-bit support". Mac OS X v10.5, released in October 2007, included an Objective-C 2.0 compiler. GCC 4.6 supports many new Objective-C features, such as declared and synthesized properties, dot syntax, fast enumeration, optional protocol methods, method/protocol/class attributes, class extensions and a new GNU Objective-C runtime API.WEB,weblink GCC 4.6 Release Series — Changes, New Features, and Fixes : GNU Project : Free Software Foundation, Gcc.gnu.org, December 24, 2017,
Garbage collection
Objective-C 2.0 provided an optional conservative, generational garbage collector. When run in backwards-compatible mode, the runtime turned reference counting operations such as "retain" and "release" into no-ops. All objects were subject to garbage collection when garbage collection was enabled. Regular C pointers could be qualified with "__strong" to also trigger the underlying write-barrier compiler intercepts and thus participate in garbage collection.Garbage Collection Programming Guide: Garbage Collection API (Apple developer website - search for "__strong") A zero-ing weak subsystem was also provided such that pointers marked as "__weak" are set to zero when the object (or more simply, GC memory) is collected. The garbage collector does not exist on the iOS implementation of Objective-C 2.0.WEB,weblink Garbage Collection Programming Guide: Introduction to Garbage Collection, Apple Inc., October 3, 2011, Garbage collection in Objective-C runs on a low-priority background thread, and can halt on user events, with the intention of keeping the user experience responsive.WEB,weblink Leopard Technology Series for Developers: Objective-C 2.0 Overview, Apple Inc., November 6, 2007, May 30, 2010,weblink" title="web.archive.org/web/20100724195423weblink">weblink July 24, 2010, Garbage collection was deprecated in Mac OS X v10.8 in favor of Automatic Reference Counting (ARC).WEB,weblink Transitioning to ARC Release Notes, Apple Inc., July 17, 2012, August 26, 2012, Objective-C on iOS 7 running on ARM64 uses 19 bits out of a 64-bit word to store the reference count, as a form of tagged pointers.WEB, Mike Ash,weblink Friday Q&A 2013-09-27: ARM64 and You, mikeash.com, April 27, 2014, WEB,weblink Hamster Emporium: [objc explain]: Non-pointer isa, Sealiesoftware.com, September 24, 2013, April 27, 2014,
Properties
Objective-C 2.0 introduces a new syntax to declare instance variables as properties, with optional attributes to configure the generation of accessor methods. Properties are, in a sense, public instance variables; that is, declaring an instance variable as a property provides external classes with access (possibly limited, e.g. read only) to that property. A property may be declared as "readonly", and may be provided with storage semantics such as assign, copy or retain. By default, properties are considered atomic, which results in a lock preventing multiple threads from accessing them at the same time. A property can be declared as nonatomic, which removes this lock.@interface Person : NSObject {
@public
NSString *name;
@private
int age;
}@property(copy) NSString *name;@property(readonly) int age;-(id)initWithAge:(int)age;@endProperties are implemented by way of the @synthesize keyword, which generates getter (and setter, if not read-only) methods according to the property declaration. Alternatively, the getter and setter methods must be implemented explicitly, or the @dynamic keyword can be used to indicate that accessor methods will be provided by other means. When compiled using clang 3.1 or higher, all properties which are not explicitly declared with @dynamic, marked readonly or have complete user-implemented getter and setter will be automatically implicitly @synthesize'd.@implementation Person@synthesize name;-(id)initWithAge:(int)initAge {
self = [super init];
if (self) {
age = initAge; // NOTE: direct instance variable assignment, not property setter
}
return self;
}-(int)age {
return age;
}@endProperties can be accessed using the traditional message passing syntax, dot notation, or, in Key-Value Coding, by name via the "valueForKey:"/"setValue:forKey:" methods.Person *aPerson = Person alloc] initWithAge: 53];aPerson.name = @"Steve"; // NOTE: dot notation, uses synthesized setter,
// equivalent to [aPerson setName: @"Steve"];
NSLog(@"Access by message (%@), dot notation(%@),property name(%@) and direct instance variable access (%@)",
[aPerson name], aPerson.name, [aPerson valueForKey:@"name"], aPerson->name);
In order to use dot notation to invoke property accessors within an instance method, the "self" keyword should be used:-(void) introduceMyselfWithProperties:(BOOL)useGetter {
NSLog(@"Hi, my name is %@.", (useGetter ? self.name : name));
// NOTE: getter vs. ivar access}A class or protocol's properties may be dynamically introspected.int i;int propertyCount = 0;objc_property_t *propertyList = class_copyPropertyList([aPerson class], &propertyCount);for (i = 0; i < propertyCount; i++) {
objc_property_t *thisProperty = propertyList + i;
const char* propertyName = property_getName(*thisProperty);
NSLog(@"Person has a property: '%s'", propertyName);
}
Non-fragile instance variables
Objective-C 2.0 provides non-fragile instance variables where supported by the runtime (i.e. when building code for 64-bit macOS, and all iOS). Under the modern runtime, an extra layer of indirection is added to instance variable access, allowing the dynamic linker to adjust instance layout at runtime. This feature allows for two important improvements to Objective-C code:
• It eliminates the fragile binary interface problem; superclasses can change sizes without affecting binary compatibility.
• It allows instance variables that provide the backing for properties to be synthesized at runtime without them being declared in the class's interface.
Fast enumeration
Instead of using an NSEnumerator object or indices to iterate through a collection, Objective-C 2.0 offers the fast enumeration syntax. In Objective-C 2.0, the following loops are functionally equivalent, but have different performance traits.// Using NSEnumeratorNSEnumerator *enumerator = [thePeople objectEnumerator];Person *p;while ((p = [enumerator nextObject]) != nil) {
NSLog(@"%@ is %i years old.", [p name], [p age]);
}// Using indexesfor (int i = 0; i < [thePeople count]; i++) {
Person *p = [thePeople objectAtIndex:i];
NSLog(@"%@ is %i years old.", [p name], [p age]);
}// Using fast enumerationfor (Person *p in thePeople) {
NSLog(@"%@ is %i years old.", [p name], [p age]);
}Fast enumeration generates more efficient code than standard enumeration because method calls to enumerate over objects are replaced by pointer arithmetic using the NSFastEnumeration protocol.WEB,weblink Fast Enumeration, Apple, Inc., 2009, apple.com, December 31, 2009,
Class extensions
A class extension has the same syntax as a category declaration with no category name, and the methods and properties declared in it are added directly to the main class. It is mostly used as an alternative to a category to add methods to a class without advertising them in the public headers, with the advantage that for class extensions the compiler checks that all the privately declared methods are actually implemented.WEB,weblink GCC 4.6 Release Series – Changes, New Features, and Fixes, Free Software Foundation, Inc., 2011, Gcc.gnu.org, November 27, 2013,
Implications for Cocoa development
{{Unreferenced section|date=November 2012}}All Objective-C applications developed for macOS that make use of the above improvements for Objective-C 2.0 are incompatible with all operating systems prior to 10.5 (Leopard). Since fast enumeration does not generate exactly the same binaries as standard enumeration, its use will cause an application to crash on Mac OS X version 10.4 or earlier.
Blocks
Blocks is a nonstandard extension for Objective-C (and C and C++) that uses special syntax to create closures. Blocks are only supported in Mac OS X 10.6 "Snow Leopard" or later, iOS 4 or later, and GNUstep with libobjc2 1.7 and compiling with clang 3.1 or later.WEB,weblink Blocks Programming Topics – Mac Developer Library, Apple Inc., March 8, 2011, November 28, 2012,
1. include
2. include
typedef int (^IntBlock)();IntBlock MakeCounter(int start, int increment) { __block int i = start; return Block_copy( ^ { int ret = i; i += increment; return ret; });}int main(void) { IntBlock mycounter = MakeCounter(5, 2); printf("First call: %dn", mycounter()); printf("Second call: %dn", mycounter()); printf("Third call: %dn", mycounter()); /* because it was copied, it must also be released */ Block_release(mycounter); return 0;}/* Output: First call: 5 Second call: 7 Third call: 9
• /
Modern Objective-C
Automatic Reference Counting
Automatic Reference Counting (ARC) is a compile-time feature that eliminates the need for programmers to manually manage retain counts using retain and release.WEB, Transitioning to ARC,weblink Apple Inc., October 8, 2012, Unlike garbage collection, which occurs at run time, ARC eliminates the overhead of a separate process managing retain counts. ARC and manual memory management are not mutually exclusive; programmers can continue to use non-ARC code in ARC-enabled projects by disabling ARC for individual code files. Xcode can also attempt to automatically upgrade a project to ARC.
Literals
NeXT and Apple Obj-C runtimes have long included a short-form way to create new strings, using the literal syntax @"a new string", or drop to CoreFoundation constants kCFBooleanTrue and kCFBooleanFalse for NSNumber with Boolean values. Using this format saves the programmer from having to use the longer initWithString or similar methods when doing certain operations.When using Apple LLVM compiler 4.0 or later, arrays, dictionaries, and numbers (NSArray, NSDictionary, NSNumber classes) can also be created using literal syntax instead of methods.WEB, Programming with Objective-C: Values and Collections,weblink Apple Inc., October 8, 2012, Example without literals:NSArray *myArray = [NSArray arrayWithObjects:object1,object2,object3,nil];NSDictionary *myDictionary1 = [NSDictionary dictionaryWithObject:someObject forKey:@"key"];NSDictionary *myDictionary2 = [NSDictionary dictionaryWithObjectsAndKeys:object1, key1, object2, key2, nil];NSNumber *myNumber = [NSNumber numberWithInt:myInt];NSNumber *mySumNumber= [NSNumber numberWithInt:(2 + 3)];NSNumber *myBoolNumber = [NSNumber numberWithBool:YES];Example with literals:NSArray *myArray = @[ object1, object2, object3 ];NSDictionary *myDictionary1 = @{ @"key" : someObject };NSDictionary *myDictionary2 = @{ key1: object1, key2: object2 };NSNumber *myNumber = @(myInt);NSNumber *mySumNumber = @(2+3);NSNumber *myBoolNumber = @YES;NSNumber *myIntegerNumber = @8;However, different from string literals, which compile to constants in the executable, these literals compile to code equivalent to the above method calls. In particular, under manually reference-counted memory management, these objects are autoreleased, which requires added care when e.g., used with function-static variables or other kinds of globals.
Subscripting
When using Apple LLVM compiler 4.0 or later, arrays and dictionaries (NSArray and NSDictionary classes) can be manipulated using subscripting. Subscripting can be used to retrieve values from indexes (array) or keys (dictionary), and with mutable objects, can also be used to set objects to indexes or keys. In code, subscripting is represented using brackets [ ].WEB,weblink Objective-C Literals — Clang 3.5 documentation, Clang.llvm.org, April 16, 2014, Example without subscripting:id object1 = [someArray objectAtIndex:0];id object2 = [someDictionary objectForKey:@"key"];[someMutableArray replaceObjectAtIndex:0 withObject:object3];[someMutableDictionary setObject:object4 forKey:@"key"];Example with subscripting:id object1 = someArray[0];id object2 = someDictionary[@"key"];someMutableArray[0] = object3;someMutableDictionary[@"key"] = object4;
"Modern" Objective-C syntax (1997)
After the purchase of NeXT by Apple, attempts were made to make the language more acceptable to programmers more familiar with Java than Smalltalk. One of these attempts was introducing what was dubbed "Modern Syntax" for Objective-C at the time{{citation |title=Rhapsody Developer's Guide |publisher=AP Professional |year=1997}} (as opposed to the current, "classic" syntax). There was no change in behaviour, this was merely an alternative syntax. Instead of writing a method invocation like
object = MyClass alloc] init];
[object firstLabel: param1 secondLabel: param2];
It was instead written as
object = (MyClass.alloc).init;
object.labels ( param1, param2 );
Similarly, declarations went from the form
-(void) firstLabel: (int)param1 secondLabel: (int)param2;
to
-(void) labels ( int param1, int param2 );
This "modern" syntax is no longer supported in current dialects of the Objective-C language.
Portable Object Compiler
Besides the GCC/NeXT/Apple implementation, which added several extensions to the original Stepstone implementation, another free, open-source Objective-C implementation called the Portable Object CompilerWEB,weblink Portable Object Compiler, Users.pandora.be, January 1, 1970, May 30, 2010, also exists. The set of extensions implemented by the Portable Object Compiler differs from the GCC/NeXT/Apple implementation; in particular, it includes Smalltalk-like blocks for Objective-C, while it lacks protocols and categories, two features used extensively in OpenStep and its derivatives and relatives. Overall, POC represents an older, pre-NeXT stage in the language's evolution, roughly conformant to Brad Cox's 1991 book.It also includes a runtime library called ObjectPak, which is based on Cox's original ICPak101 library (which in turn derives from the Smalltalk-80 class library), and is quite radically different from the OpenStep FoundationKit.
GEOS Objective-C
The PC GEOS system used a programming language known as GEOS Objective-C or goc;WEB,weblink Breadbox Computer Company LLC homepage, December 8, 2010, despite the name similarity, the two languages are similar only in overall concept and the use of keywords prefixed with an @ sign.
Clang
The Clang compiler suite, part of the LLVM project, implements Objective-C, and other languages.
WinObjC
WinObjC (Also known as "The Bridge") is an open-source ObjC compiler project started by Microsoft on GitHub as a way to allow the reuse of iOS Application code inside of a Windows Universal Applications.WinObjC on GitHubOn Windows, Objective-C Development tools are provided for download on GNUStep's website. The GNUStep Development System consists of the following packages: GNUstep MSYS System, GNUstep Core, GNUstep Devel, GNUstep Cairo, ProjectCenter IDE (Like Xcode, but not as complex), Gorm (Interface Builder Like Xcode NIB builder).GNUStep Installer
Library use
Objective-C today is often used in tandem with a fixed library of standard objects (often known as a "kit" or "framework"), such as Cocoa, GNUstep or ObjFW. These libraries often come with the operating system: the GNUstep libraries often come with Linux-based distributions and Cocoa comes with macOS. The programmer is not forced to inherit functionality from the existing base class (NSObject / OFObject). Objective-C allows for the declaration of new root classes that do not inherit any existing functionality. Originally, Objective-C-based programming environments typically offered an Object class as the base class from which almost all other classes inherited. With the introduction of OpenStep, NeXT created a new base class named NSObject, which offered additional features over Object (an emphasis on using object references and reference counting instead of raw pointers, for example). Almost all classes in Cocoa inherit from NSObject.Not only did the renaming serve to differentiate the new default behavior of classes within the OpenStep API, but it allowed code that used Object—the original base class used on NeXTSTEP (and, more or less, other Objective-C class libraries)—to co-exist in the same runtime with code that used NSObject (with some limitations). The introduction of the two letter prefix also became a simplistic form of namespaces, which Objective-C lacks. Using a prefix to create an informal packaging identifier became an informal coding standard in the Objective-C community, and continues to this day.More recently, package managers have started appearing, such as CocoaPods, which aims to be both a package manager and a repository of packages. A lot of open-source Objective-C code that was written in the last few years can now be installed using CocoaPods.
Analysis of the language
{{refimprove section|date=December 2011}}Objective-C implementations use a thin runtime system written in C{{Citation needed|reason=The latest version of objc4 claims to be mostly objc and C++|date=December 2018}}, which adds little to the size of the application. In contrast, most object-oriented systems at the time that it was created used large virtual machine runtimes. Programs written in Objective-C tend to be not much larger than the size of their code and that of the libraries (which generally do not need to be included in the software distribution), in contrast to Smalltalk systems where a large amount of memory was used just to open a window. Objective-C applications tend to be larger than similar C or C++ applications because Objective-C dynamic typing does not allow methods to be stripped or inlined. Since the programmer has such freedom to delegate, forward calls, build selectors on the fly and pass them to the runtime system, the Objective-C compiler cannot assume it is safe to remove unused methods or to inline calls.Likewise, the language can be implemented atop extant C compilers (in GCC, first as a preprocessor, then as a module) rather than as a new compiler. This allows Objective-C to leverage the huge existing collection of C code, libraries, tools, etc. Existing C libraries can be wrapped in Objective-C wrappers to provide an OO-style interface. In this aspect, it is similar to GObject library and Vala language, which are widely used in development of GTK applications.All of these practical changes lowered the barrier to entry, likely the biggest problem for the widespread acceptance of Smalltalk in the 1980s.A common criticism is that Objective-C does not have language support for namespaces. Instead, programmers are forced to add prefixes to their class names, which are traditionally shorter than namespace names and thus more prone to collisions. As of 2007, all macOS classes and functions in the Cocoa programming environment are prefixed with "NS" (e.g. NSObject, NSButton) to identify them as belonging to the macOS or iOS core; the "NS" derives from the names of the classes as defined during the development of NeXTSTEP.Since Objective-C is a strict superset of C, it does not treat C primitive types as first-class objects.Unlike C++, Objective-C does not support operator overloading. Also unlike C++, Objective-C allows an object to directly inherit only from one class (forbidding multiple inheritance). However, in most cases, categories and protocols may be used as alternative ways to achieve the same results.Because Objective-C uses dynamic runtime typing and because all method calls are function calls (or, in some cases, syscalls), many common performance optimizations cannot be applied to Objective-C methods (for example: inlining, constant propagation, interprocedural optimizations, and scalar replacement of aggregates). This limits the performance of Objective-C abstractions relative to similar abstractions in languages such as C++ where such optimizations are possible.
Memory management
The first versions of Objective-C did not support garbage collection. At the time this decision was a matter of some debate, and many people considered long "dead times" (when Smalltalk performed collection) to render the entire system unusable. Some 3rd party implementations have added this feature (most notably GNUstep) and Apple has implemented it as of Mac OS X v10.5.WEB,weblink Mac OS X Leopard – Xcode 3.0, Apple, Inc., August 22, 2006, apple.com, August 22, 2006, yes,weblink" title="web.archive.org/web/20071024144921weblink">weblink October 24, 2007, However, in more recent versions of macOS and iOS, garbage collection has been deprecated in favor of Automatic Reference Counting (ARC), introduced in 2011.With ARC, the compiler inserts retain and release calls automatically into Objective-C code based on static code analysis. The automation relieves the programmer of having to write in memory management code. ARC also adds weak references to the Objective-C language.WEB,weblink Transitioning to ARC Release Notes, iOS Developer Library, Developer.apple.com, April 16, 2014,
Philosophical differences between Objective-C and C++
The design and implementation of C++ and Objective-C represent fundamentally different approaches to extending C.In addition to C's style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming. C++ also comes with a large standard library that includes several container classes. Similarly, Objective-C adds object-oriented programming, dynamic typing, and reflection to C. Objective-C does not provide a standard library per se, but in most places where Objective-C is used, it is used with an OpenStep-like library such as OPENSTEP, Cocoa, or GNUstep, which provides functionality similar to C++'s standard library.One notable difference is that Objective-C provides runtime support for reflective features, whereas C++ adds only a small amount of runtime support to C. In Objective-C, an object can be queried about its own properties, e.g., whether it will respond to a certain message. In C++, this is not possible without the use of external libraries.The use of reflection is part of the wider distinction between dynamic (run-time) features and static (compile-time) features of a language. Although Objective-C and C++ each employ a mix of both features, Objective-C is decidedly geared toward run-time decisions while C++ is geared toward compile-time decisions. The tension between dynamic and static programming involves many of the classic trade-offs in programming: dynamic features add flexibility, static features add speed and type checking.Generic programming and metaprogramming can be implemented in both languages using runtime polymorphism. In C++ this takes the form of virtual functions and runtime type identification, while Objective-C offers dynamic typing and reflection. Objective-C lacks compile-time polymorphism (generic functions) entirely, while C++ supports it via function overloading and templates.
See also
References
{{Reflist}}
Further reading
• BOOK, Brad J., Cox, Object Oriented Programming: An Evolutionary Approach, Addison Wesley, 1991, 0-201-54834-8,
External links
{{Programming languages}}{{CProLang}}{{Authority control}}
- content above as imported from Wikipedia
- "Objective-C" does not exist on GetWiki (yet)
- time: 11:59am EDT - Wed, Apr 24 2019
[ this remote article is provided by Wikipedia ]
LATEST EDITS [ see all ]
GETWIKI 09 MAY 2016
GETWIKI 18 OCT 2015
M.R.M. Parrott
Biographies
GETWIKI 20 AUG 2014
GETWIKI 19 AUG 2014
GETWIKI 18 AUG 2014
Wikinfo
Culture
CONNECT
|
__label__pos
| 0.931083 |
For the amount of the unit
Upgrade your website to remove Wix ads. Please fill in all required fields. We always appreciate your feedback. To determine the rate of speed, divide the distance traveled by the amount of time spent traveling. Students will practice solving percent problems using the percent equation rather than proportions. The teacher will point out that a special kind of Proportion problem is the scale drawing problem. Is it easier to find the rate of change from a table, graph or equation? These cookies will be stored in your browser only with your consent. How Do You Solve an Inequality with Negative Numbers Using Division? Susan went shopping during tax free weekend to pick out an outfit for the first day of school. Answers will include miles per hour, cost per kilogram, and other real world examples. As some of my students are apt to visualize the situations, I also include sketching the story before representing it mathematically. Includes problems involving complex fractions. Write a total number of price in the numerator.
In on unit rates
How much did each person earn in one hour? Leave comments, follow people and more. Please cancel your print and try again. The supplied billing address is incorrect. Then use your knowledge of proportional reasoning and scale factor to find the actual dimensions. After comparing the Legend and the Supreme, Victor saw an advertisement for a third vehicle, the Lunar. Select all statements that are correct based on the information above. For every answer I have that no one in the room has, I get a point. Students represent the situation mathematically through use of tables. Through exploration, students discover relationships among squares, cubes, and their roots. For complete access to thousands of printable lessons click the button or the link below. Give an overview of the instructional video, including vocabulary and any special materials needed for the instructional video. For example, given coordinates for two pairs of points, determine whether the line through the first pair of points intersects the line through the second pair. What are Numerical and Algebraic Expressions? Representing Rates with Tables and Graphs Worksheet. We love and recommend Desmos all of the time. Thereof, what is the ordered pair of the unit rate? Then the rate is charted and data is graphed. To make this template yours, start editing it.
Out an engaging way to graph given a member login
Graph the information from the table. Double check your email and try again. Thank you for purchasing my product. Look for and make use of structure. Discuss the different ways for solving the problem, including using proportions or the percent equation. This will allow time for the teacher to give extra assistance to students that are still struggling. Any line that has a slope greater than ¾ will have a slope greater than the table and equation. Tell us what grade you teach and how you discuss slope with your students. How Do You Find the Prime Factorization of a Number Using a Tree? Then the pairs will work together to set up the rest of the problems. The teacher will explain that most people keep their money in a checking or savings account. In this activity, students are trying to solve an overall mystery by solving math problems. It can be about the number of working hours compared with amount earned and it can be comparison of an object with its price. Maneuvering the Middle is an education blog with valuable tips for lesson planning, classroom technology, and math concepts in the middle school classroom. Facebook, or see what information is collected. Sorry, your blog cannot share posts by email. Find out the number of mugs they make in one hour. After you have done that you compare the two rates. To continue, resend a new link to your email. These are incomplete rates that you need to work on. Please try again with a different payment method. Out of these cookies, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. Model of supplies you know how to compare two of proportional relationships to unit rates through use it gives students have reached the class exhausts its supply of rates? This Unit Rates coloring activity gets students practicing with the payoff of getting to do a little coloring. Mixed Practice and Word Problems As students enter instructions will be posted for them to get into their groups. In this activity, one side has questions that challenge kids less and the other side will challenge them more. Check that everything is correct. Reviews all skills in the unit. How many like rabbits best? You can add your own CSS here. He runs for another hour. Introduce the slope formula. Are you sure you want to submit? This content is not available. Slope Guy in her classroom. Create your website today. This worksheet has some great practice problems for converting metric measurments! In a table or graphing on a coordinate plane and observing whether the graph is a. This document has been saved in your Super Teacher Worksheets filing cabinet. If the last worksheet gave you some trouble, this is a good idea for you to work on. Make a table to show the distance the train travels in various amounts of time.
In your browser as is currently not
What is a constant of proportionality? Tasks may or may not contain context. They have a theme of penguins and pirates. There are no upcoming events to display. In this project, students figure out the unit rate of grams of sugar per ounce in different drinks. Give it can be a unit rates were applied in this worksheet, you can be solved using multiplication? Anyways, give one of the ideas above a try and see how your classes react. Click below to consent to the use of this technology across the web. Quickly access your most used files AND your custom generated worksheets! This worksheet explains how to determine unit rates through the use of simple division. Find a day to create ratios, they do you complete access to get into what makes it can you. Then the class will derive the percent equation beginning with the percent proportion, using variables and whole number examples. To make an even bigger impact, I brought in packets of sugar so students could really see just how much sugar was in each drink. Reading Graphs: Students will examine circle graphs and make predictions about the actual number of people who are represented in the survey using proportions. What is not finished in class will be homework. SWBAT to calculate a unit rate given a graph. Measuring speed requires the use of three variables. How do you find the asymptotes of a conic section? To get started with Disqus head to the Settings panel. Unit 2 Mrs Stalets Math Classes Google Sites. Siple is a librarian who really enjoys reading. Look at what happens in both girls have students practice setting up problems from another go to determine the rmine the warm up in real world aspect of what surprised them. At the end of the period groups will trade and grade and any questions that the class has will be addressed. Students will identify unit rates, use them to solve equivalent ratios, and plot ratios as points on a graph. This worksheet will produce problems where the students must write rates and unit rates from word phrases. You may enter a message or special instruction that will appear on the bottom left corner of the Ratio Worksheet. How is a fraction a ratio? Sally is making some sweets. Just tell us your email above. Both choices B and C are correct. Want to See Sample Lessons? Answers for the homework and quiz. Try a different email address.
It can get overwhelming to try a whole bunch of new things.
Use the provided graphs to answer the questions below.
How do I find a local handyman?
Find mad in the teacher worksheets
What is the sale price of the shirt? What is the formula to find unit rate? Express the phrases in the form of rates. It is a formative assessment lesson. Comments are updated real time as they are posted, engaging users to participate in the discussion. What is unit rates and graphing unit rates worksheet will travel in like or performance of three. These ratio worksheets will generate 10 Rates and Unit Rates problems per. This visual worksheet will help your student understand percentages. On this show your job is to redesign a room at the lowest cost possible. Now they are connecting that the unit rate is the slope when the information is graphed. Both bikers use apps on their phones to record the time and distance of their bike rides. Slope Man, it would have been incredibly helpful to help me remember the difference between an undefined slope and a zero slope. SWBAT interpret speed as the slope of a linear graph and translate between the equation of a line and its graphical representation. Then apply the unitary method to solve each problem. We sent a link to set your new password by email. Click the link in the email to get to your account. Expressions and equations Worksheet shown above. Ratios is a way or comparing numbers or measurements. How do you find mad in operations management? Snail is going to use each of the five components. Identifying the type of slope, calculating slope from two points or a line on the graph, and graphing an equation all require multiple days on your scope and sequence.
Then graph each set of numbers on a coordinate grid.
Please enter your password.
|
__label__pos
| 0.931305 |
Comparte esto 👍👍
CAPITULO I
Suma de polinomios
Ejercicio 16
Para revolver este grupo de ejercicios vamos a relizar el siguiente proceso:
• Ordenar los polinomios
• Escribir los polinomios, uno debajo de otro de tal forma, que los téminos semejantes queden en la misma columna
• Reduciremos los terminos semejantes
1. 3a+2bc;2a+3b+c 3a+2b c 2a+3b+ c ¯ 5a+5b
2. 7a4b+5c;7a+4b6c 7a 4b +5c 7a + 4b 6c ¯ c
3. m+np;mn+p m + n p m n + p ¯ 0
4. 9x3y+5;xy+4;5x+4y9 9x 3y + 5 x y + 4 5x+ 4y 9 ¯ 3x
5. a+bc;2a+2b2c;3ab+3c a + b c 2a +2b 2c 3a b + 3c ¯ 2b
6. p+q+r;2p6q+3r;p+5q8r p + q +r 2p 6q +3r p + 5q 8r ¯ 4r
7. 7x4y+6z;10x20y8z;5x+24y+2z 7x 4y + 6z 10x 20y 8z 5x+ 24y + 2z ¯ 2x
8. 2m+3n6;3m8n+8;5m+n10 2m+3n6 3m8n+8 5m+n10 ¯ 4m4n8
9. 5a2b3c;7a3b+5c;8a+5b3c 5a 2b 3c 7a 3b +5c 8a+ 5b 3c ¯ 6ac
10. ab+bc+cd;8ab3bc3cd;5ab+2bc+2cd ab+ bc + cd 8ab 3bc 3cd 5ab+ 2bc + 2cd ¯ 2ab
11. axayaz;5ax7ay6az;4ax+9ay+8az ax ayaz 5ax 7ay6az 4ax +9ay+8az ¯ ay+az
12. 5x7y+8;y+64x;93x+8y 5x 7y +8 4x y +6 3x+ 8y +9 ¯ 2x+23
13. am+6mn4s;6sam5mn;2s5mn+3am am+6mn 4s am5mn+ 6s 3am5mn 2s ¯ am4mn
14. 2a+3b;6b4c;a+8c 2a+3b 6b4c a+8c ¯ a+9b+4c
15. 6m3n;4n+5p;m5p 6m3n 4n+ 5p m 5p ¯ 5m7n
16. Mathematical Equation
17. Mathematical Equation
18. 8a+3bc;5ab+c;abc;7ab4c 8a+ 3b c 5a b + c a b c 7a b 4c ¯ 19a5c
19. 7x+2y4;9y6z+5;y+3z6;5+8x3y 7x+2y4 9y6z+ 5 y+3z6 8x3y 5 ¯ 15x+7y3z10
20. mnp;m+2n5;3p6m+4;2n+5m8 mnp m +2n5 6m 3p+4 5m +2n8 ¯ m+3n+2p9
21. 5 a x 3 a m 7 a n ;8 a x +5 a m 9 a n ;11 a x +5 a m +16 a n 5 a x 3 a m 7 a n 8 a x +5 a m 9 a n 11 a x +5 a m + 16 a n ¯ 14 a x +7 a m
22. 6 m a+1 7 m a+2 5 m a+3 ;4 m a+1 7 m a+2 m a+3 ;5 m a+1 +3 m a+2 +12 m a+3 6 m a+1 7 m a+2 5 m a+3 4 m a+1 7 m a+2 m a+3 5 m a+1 +3 m a+2 +12 m a+3 ¯ 5 m a+1 11 m a+2 +6 m a+3
23. 8x+y+z+u;3x4y2z+3u;4x+5y+3z4u;9xy+z+2u 8x +y+ z + u 3x 4y 2z + 3u 4x + 5y +3z 4u 9x y + z +2u ¯ y+3z+2u
24. a+bc+d;ab+cd;2a+3b2c+d;3a3b+4cd a + b c + d a b + c d 2a + 3b 2c+ d 3a 3b +4c d ¯ 3a+2c
25. 5ab3bc+4cd;2bc+2cd3dc;4bc2ab+3de;53bc6cdab 5ab 3bc + 4cd 2bc + 2cd 3dc 2ab+ 4bc +3de ab 3bc 6cd +5 ¯ 3ab3dc+3de+5
26. ab;bc;c+d;ac;cd;da;ad a b b c c + d a c c d a + d a d ¯ 2a
|
__label__pos
| 0.971355 |
Walker News
How To Setup VPN Server Using pptpd On RHEL?
You might need only 3 minutes to install and configure a Linux VPN server using pptpd and a minute later have Windows 7 VPN client connect pptpd successfully.
On RHEL or CentOS 5 Linux:
1. Download pptpd. The current version used by this guide is pptpd-1.3.4-2.rhel5.i386.rpm. For experts who want to compile pptpd source code for installation, get it from Poptop official page.
2. Confirm the Linux kernel supports MPPE for encrypted tunnel:
modprobe ppp-compress-18 && echo ok
If you see “ok” printed after execution, meaning that the MPPE is enabled and nothing need to be done. Otherwise, refer to this guide for help.
3. Install pptpd:
rpm -Uvh pptpd-rpm-file
4. Edit /etc/pptpd.conf file to set the IP of VPN server and clients using private network address space defined by RFC 1918. For example:
localip 172.20.20.1
remoteip 172.20.20.2-6
Where VPN server IP is 172.20.20.1 and only 5 VPN clients allowed connecting concurrently (because remoteip is limited to allocate IP 172.20.20.2 to 172.20.20.6 for connected client).
5. Edit /etc/ppp/chap-secrets file to set VPN login ID and password. E.g.
walker pptpd vpn123:) *
Where 1st column is VPN login ID, 2nd field is fixed to “pptpd”, 3rd field is the VPN login password, and fourth field is an asterisk to indicate any VPN client IP (defined by “remoteip” in previous step).
6. Configure RHEL/CentOS to auto start pptpd at each reboot:
chkconfig --level 345 pptpd on
7. Start up pptpd immediately (without reboot Linux) and confirm the VPN server is up and listening for client connection:
service pptpd start
netstat -tulpan | grep pptpd
8. Temporarily stop Firewall for first VPN connection attempt (to confirm the pptpd setup is good to go):
service iptables stop
On Windows 7 (64-bit Ultimate edition), nothing need to be installed as the bundled VPN client is capable to connect with pptpd server:
1. Right click the connection icon at Notification Area (bottom-right corner where the Date/Time is displayed) then choose “Open Network and Sharing Center”.
2. Click “Setup a new connection or network”, “Connect to a workplace”, “Use my Internet connection (VPN)”.
3. For “Internet address”, enter the VPN server’s public/WAN IP (not the 172.20.20.1) which can be “seen” and ping-ed by Windows 7. The “Destination name” is friendly name defined by user to identify this particular VPN connection.
4. Next, enter the VPN login ID and password (as what you’ve defined in /etc/ppp/chap-secrets) and click Connect.
Suppose the login credential is correct, Windows 7 should have no problem connecting with pptpd VPN server.
Custom Search
1. Thiru Yadav 07-04-13@06:53
awesome article…..simple and superb
2014 • Privacy Policy
|
__label__pos
| 0.761008 |
26.2 Migrating iFolder to OES 2015 SP1
This section provides information on how to migrate iFolder.
26.2.1 Prerequisites
Before proceeding to migrate, meet the following prerequisites:
Transfer ID - Same Tree
In this scenario, the target server is installed in the same tree as the source server. On successful completion of Transfer ID, the target server functions with the same credentials (such as IP address and hostname) as the source server and source server node is no longer available in the network.
What is Migrated
The following data is migrated from the source server to the target server:
• The simias data store path
• The configuration files
• Proxy user (migrates along with simias and configuration files)
26.2.2 Migration Procedure
1. Install OES by using YaST on the target server.
2. Stop apache from source server using the following command: rcapache2 stop.
3. Configure iFolder on target server with same values as source server. For more information, see Configuring the iFolder Enterprise Server in the Novell iFolder 3.9.2 Administration Guide.
4. Stop apache on target server using the following command: rcapache2 stop.
5. Migrate the simias data store path from source server to target server in the same volume and directory structure. For more information, see Section 17.4, Migrating File System Using GUI.
6. Start apache on target server using the following command: rcapache2 restart.
Post Migration
After migrating iFolder,
• Verify if admin and web access pages are accessible with same details.
• Ensure that all clients are able to connect to the server without issues.
• Verify the ownership of the ifolder data source, it needs to be wwwrun:www
|
__label__pos
| 0.990134 |
Saturday, September 30, 2006
Enterprise Application Data Block Version 2.0
Since I really missed the convenience of version 1.0 of the enterprise application data block and wanted to reduce redundant code while making calls to the database I came up with a helper class much like SqlHelper class in the data block one.
Usage :
DBHelper dhp = new DBHelper();
DBParameter[] para = { new DBParameter("param_name", DbType.String, param_Value) };
return dhp.Query(@"Sql_query", para);
You will need to add the following classes to your project
using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using Microsoft.Practices.EnterpriseLibrary.Data;
using Microsoft.Practices.EnterpriseLibrary.Data.Sql;
using System.Data.Common;
///
/// Summary description for DBHelper
///
public class DBHelper
{
Database burstdb;
public DBHelper()
{
//
// TODO: Add constructor logic here
//
burstdb = DatabaseFactory.CreateDatabase();
}
public DataTable Query(string sqlText, DBParameter[] param)
{
DbCommand command = burstdb.GetSqlStringCommand(sqlText);
for (int i = 0; i < param.Length; i++)
{
burstdb.AddInParameter(command, param[i].Name,param[i].Type,param[i].DBValue);
}
burstdb.ExecuteDataSet(command);
DataSet ds = new DataSet();
burstdb.LoadDataSet(command, ds, "ds");
return ds.Tables[0];
}
public DataTable QuerySP(string spName, DBParameter[] param)
{
DbCommand command = burstdb.GetStoredProcCommand(spName);
for (int i = 0; i < param.Length; i++)
{
burstdb.AddInParameter(command, param[i].Name, param[i].Type, param[i].DBValue);
}
burstdb.ExecuteDataSet(command);
DataSet ds = new DataSet();
burstdb.LoadDataSet(command, ds, "ds");
return ds.Tables[0];
}
public void nonQuery(string sqlText, DBParameter[] param)
{
DbCommand command = burstdb.GetSqlStringCommand(sqlText);
for (int i = 0; i < param.Length; i++)
{
burstdb.AddInParameter(command, param[i].Name, param[i].Type, param[i].DBValue);
}
burstdb.ExecuteNonQuery(command);
}
public void nonQuerySP(string spName, DBParameter[] param)
{
DbCommand command = burstdb.GetStoredProcCommand(spName);
for (int i = 0; i < param.Length; i++)
{
burstdb.AddInParameter(command, param[i].Name, param[i].Type, param[i].DBValue);
}
burstdb.ExecuteNonQuery(command);
}
}
using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
///
/// Summary description for DBParameter
///
public class DBParameter
{
string _name;
DbType _dbtype;
object _value;
public string Name
{
get { return _name; }
set { _name = value; }
}
public DbType Type
{
get { return _dbtype;}
set {_dbtype = value;}
}
public object DBValue
{
get { return _value; }
set { _value = value; }
}
public DBParameter(string name, DbType type,object value)
{
Name = name;
Type = type;
DBValue = value;
}
}
Wednesday, September 27, 2006
Install SVN on Win XP
Subversion (SVN)
From Eugene Lazutkin's blog
It looks like Subversion is the most reasonable choice for source control:
* It is free open source project available on virtually all platforms.
* It was designed as replacement for aging CVS, which is the most dominant source control system for distributed open source projects.
* It is well supported. After CVS it is the most mature and stable source control system.
* It is possible to find hosts, which offer SVN hosting for free or commercially.
* It looks like a preferred choice for new projects.
* It is easy to host internally.
A must read is free electronic book available in HTML and PDF formats here: http://svnbook.org/. Links to documentation, examples, tools, and clients can be found here: http://subversion.tigris.org/project_links.html.
Subversion provides access to repository using either Apache (via http or https) or its own proprietary svn protocol. The latter method supports secure SSH variation: svn+ssh protocol. Apache integration includes WebDAV support and (optional and not recommended) FrontPage support.
While there are many Subversion clients around, two are the most popular choices for Windows platform:
* RapidSVN (developed by Subversion developers)
* TortoiseSVN
I tried both of them and TortoiseSVN feels better for me. The caveat is it is integrated with Windows Explorer => pollutes right-click menu. RapidSVN is stand-alone program.
Let's install Subversion with TortoiseSVN. Below are step by step instructions inspired by Miguel Jimenez's excellent post on this subject: http://blogs.clearscreen.com/migs/archive/2005/01/21/824.aspx. Our goal is to run Subversion as a Windows service. In this case it may be placed on remote server and be available to all group members.
1. Download the latest Subversion binary.
2. Download the latest TortoiseSVN client software.
3. Download the latest SrvAny utility (it's a part of free Resource Kit, e.g. Windows Server 2003 Resource Kit Tools). Alternatively you can use my version extracted from the latest Resource Kit.
4. Install Subversion (run .msi file).
5. Install TortoiseSVN (run the installer). It will require to reboot.
6. Create a folder for your repository (e.g., C:\Repository). Create svn subfolder in it to keep SVN data separate from Trac data later on (e.g., C:\Repository\svn). Create one more subfolder in svn to host actual database (e.g., C:\Repository\svn\repo). Actual names and actual layout are up to you.
7. Right-click on repo subfolder (see example above) and select TortoiseSVN/Create Repository here....
1. It will ask you for type of repository.
2. Select FSFS — everybody recommends this new type. Ignore BDB.
3. Now you can run svnserve (it is in bin directory of Subversion) to test your installation: svnserve -d -r C:\Repository\svn\repo. It uses TCP port 3690.
4. Try to connect to it using TortoiseSVN or RapidSVN and make sure it works. Use connection string like that: svn://yourserver/.
8. Install SrvAny by copying it to a folder of your choice.
9. Install new service for svnserve (I've included a .doc file with manual for SrvAny and InstSrv).
1. Run InstSrv with following arguments: InstSrv svnserve SrvAny.exe. You can use any name instead of svnserve here. Sometimes fully qualified name is required for SrvAny.exe.
2. Run regedit and navigate to this key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\svnserve.
3. Create subkey Parameters.
1. Add Application string value with exact fully qualified path to svnserve, e.g., C:\Program Files\Subversion\bin\svnserve.exe.
2. Add AppParameters string value with following data: -d -r C:\Repository\svn\repo. If you used different repository folder, change the value accordingly.
4. Run the service using the standard Windows Services panel.
10. If you don't want to keep your repository wide open, it is time to modify security settings.
1. In subdirectory C:\Repository\svn\repo\conf create passwd file with following information:
[users]
user1 = password1
user2 = password2
2. Obviously you should use real user names and real passwords.
3. In the same directory create (or modify) file svnserve.conf:
[general]
anon-access = none
auth-access = write
password-db = passwd
realm = Subversion Repository
4. You can specify different file name for password-db and select different realm name. Realm is used to inform user about purpose of authentication.
5. Now use any SVN client to test connection to your new Subversion service.
If you want to serve several repositories, you should create several services for that. Obviously you should specify different ports for different repositories.
|
__label__pos
| 0.889539 |
LinkedIn Spring Framework Assessment Latest Answers - LinkedIn Spring Framework Skill Quiz Answers 2021
LinkedIn Spring Framework Assessment Latest Answers - LinkedIn Spring Framework Skill Quiz Answers 2021
Q1. How filters are used in Spring Web?
• Filters are called before a request hits the DispatcherServlet.They allow for interception-style, chained processing of web requests for security, timeouts, and other purposes.
• Filters are used with a checksum algorithm that will filter invalid bytes out of a byte stream request body and allow for processing of HTTP requests from the DispatcherRequestServlet.
• Filters are used with a checksum algorithm that will filter invalid bytes out of an octet stream a multipart upload and allow for chained processing of WebDispatcherServlet requests.
• Filters are used to validate request parameters out of the byte stream request body and allow for processing of requests from the DispatcherRequestServlet.
Q2. How is a resource defined in the context of a REST service?
• A resource is the actual String literal that composes a URI that is accessed on a RESTful web service.
• It is an abstract concept that represents a typed object, data, relationships, and a set of methods that operate on it that is accessed via a URI.
• A REST service has a pool of resources composed of allocations of memory that allow a request to be processed.
• A resource for a REST service is an explicit allocation of a thread or CPU cycles to allow a request to be processed.
Q3. Which of these is a valid Advice annotation?
• @AfterError
• @AfterReturning
• @AfterException
• @AfterExecution
Q4. What does a ViewResolver do?
• It supports internationalization of web applications by detecting a user's locale.
• It generates a view by mapping a logical view name returned by a controller method to a view technology.
• It creates a unique view determined by the uers's browser type,supporting cross-browser compatibility.
• It maps custom parameters to SQL views in the database, allowing for dynamic content to be created in the response.
Q5. How are Spring Data repositories implemented by Spring at runtime?
• Spring automatically generated code for you based on your YAML config that defined a MethodInterceptor chain that intercept calls to the instance and computed SQL on the fly.
• A JDK proxy instance is created, which backs the repository interface, and a MethodInterceptor intercepts calls to the instance and routes as required.
• The Spring JDK proxy creates a separate runtime process that acts as an intermediary between the database and the Web server, and intercepts calls to the instance and handles requests.
• Spring automatically generated code for you based on your XML config files that define a SpringMethodAutoGeneration factory that intercepts calls to the instance and creates dynamic method that computer SQL on the fly.
Q6. What is SpEL and how is it used in Spring?
• SpEL(Spring Expression Language) runs in the JVM and can act as a drop-in replacement for Groovy or other languages.
• SpEL(Spring Expression Language) supports boolean and relational operators and regular expressions, and is used for querying a graph of objects at runtime.
• SpEL(Spring Expression Language) allows you to build, configure,and execute tasks such as building artifacts and downloading object dependencies.
• SpEL(Spring Expression Language) natively transpiles one JVM language to another, allowing for greater flexibility.
Q7. The process of linking aspects with other objects to create an advised object is called
• dynamic chaining
• banding
• weaving
• interleaving
Q8. How are JDK Dynamic proxies and CGLIB proxies used in Spring?
• JDK Dynamic proxy can proxy only interface, so it is used if the target implements at least one interface. A CGLIB proxy can create a proxy by subclassing and is used if the target does not implement an interface.
• Only JDK Dynamic proxies are used in the Spring Bean Lifecycle. CGLIB proxies are used only for integrating with other frameworks.
• Only CGLIB proxies are used in the Spring Bean Lifecycle. JDK Dynamic proxies are used only for integrating with other frameworks.
• JDK Dynamic proxy can only using an abstract class extended by a target. A CGLIB proxy can create a proxy through bytecode interweaving and is used if the target does not extend an abstract class.
Q9. Which of these is not a valid method on the JoinPoint interface?
• getArgs()
• getExceptions()
• getSignature()
• getTarget()
Q10. In what order do the @PostConstruct annotated method, the init-method parameter method on beans and the afterPropertiesSet() method execute?
• 1. afterPropertiesSet() 2. init-method 3. @PostConstruct
• 1. @PostConstruct 2. afterPropertiesSet() 3. init-method
• 1. init-method 2. afterPropertiesSet() 3. @PostConstruct
• You cannot use these methods together-you must choose only one.
Q11. What is the function of the @Transactional annotation at the class level?
• It's a transaction attribute configured by spring.security.transactions.xml config file that uses Spring's transaction implementation and validation code.
• It's a transaction must actively validate by the bytecode of a transaction using Spring's TransactionBytecodeValidator class. Default Transaction behavior rolls back on validation exception but commits on proper validation
• It creates a proxy that implements the same interface(s) as the annotated class, allowing Spring to inject behaviors before, after, or around method calls into the object being proxied.
• It's a transaction that must be actively validated by Spring's TransactionValidator class using Spring's transaction validation code. Default Transaction behavior rolls back on validation exception.
Q12. Which is a valid example of the output from this code (ignoring logging statements) ?
@SpringBootApplication
public class App {
public static void main(String args[]) {
SpringApplication.run(App.class, args);
System.out.println("startup");
}
}
public class Print implements InitializingBean {
@Override
public void afterPropertiesSet() throws Exception {
System.out.println("init");
}
}
• Nothing will print
• startup init
• init
• startup
Q13. Which println statement would you remove to stop this code throwing a null pointer exception?
@Component
public class Test implements InitializingBean {
@Autowired
ApplicationContext context;
@Autowired
static SimpleDateFormt formatter;
@Override
public void afterPropertiesSet() throws Exception {
System.out.println(context.containsBean("formatter") + " ");
System.out.println(context.getBean("formatter").getClass());
System.out.println(formatter.getClass());
System.out.println(context.getClass());
}
}
@Configuration
class TestConfig {
@Bean
public SimpleDateFormat formatter() {
return new SimpleDateFormat();
}
}
• formatter.getClass()
• context.getClass()
• context.getBean("formatter").getClass()
• context.containsBean("formatter")
Q14. What is the root interface for accessing a Spring bean container?
• SpringInitContainer
• ResourceLoader
• ApplicationEventPublisher
• BeanFactory
Q15. Which annotation can be used within Spring Security to apply method level security?
• @Secured
• @RequiresRole
• @RestrictedTo
• @SecurePath
Tags:
• linkedin skill quiz answers c
• linkedin skill quiz questions and answers
• linkedin quiz answers 2020
• linkedin skill quiz answers quizlet
• linkedin assessment answers 2021
• linkedin assessment answers github
• linkedin seo assessment answers
• linkedin skill quiz answers github
|
__label__pos
| 0.998385 |
tutorial6
tutorial6 - will best fit the data. (You are not required...
Info iconThis preview shows page 1. Sign up to view the full content.
View Full Document Right Arrow Icon
MATH1211/10-11(2)/Tu6/TNK THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS MATH1211 Multivariable Calculus 2010-11 2nd Semester: Tutorial 6 Date of tutorial classes: March 17–18. (The section/problem numbers in the following refer to those in the textbook.) 1. Suppose ( x 1 ,y 1 ) ,..., ( x n ,y n ) are some data you obtained, and you want to find an equation of the form y = ax 2 + bx + c (where a,b,c are some constants) to predict the values of y from known values of x . (a) Use the method of least squares to construct a function D ( a,b,c ) that gives the sum of the squares of the distances between the observed and predicted y -values of the data. (b) Find a system of equations that will give the values of a,b,c such that the corresponding curve
Background image of page 1
This is the end of the preview. Sign up to access the rest of the document.
Unformatted text preview: will best fit the data. (You are not required to solve for the values of a,b,c .) 2. ( § 5.1 no.8) Find the volume of the region bounded on top by the plane z = x + 3 y + 1, on the bottom by the xy-plane, and on the sides by the planes x = 0 , x = 3 , y = 1 , y = 2. 3. ( § 5.1 no.16) Suppose that f is a nonnegative-valued, continuous function defined on R = { ( x,y ) | a ≤ x ≤ b, c ≤ y ≤ d } . If f ( x,y ) ≤ M for some positive number M , explain why the volume V under the graph of f over R is at most M ( b-a )( d-c ). 4. ( § 5.2 no.12) Integrate the function f ( x,y ) = 3 xy over the region bounded by y = 32 x 3 and y = √ x . 1...
View Full Document
This note was uploaded on 05/04/2011 for the course MATH 1211 taught by Professor Wang during the Spring '11 term at HKU.
Ask a homework question - tutors are online
|
__label__pos
| 0.517298 |
Home Featured What’s in a name?
What’s in a name?
0
name-header
In his most recent piece with us, frequent WhichPLM contributor Dan Hudson, President of E-Spec, shares the issues he has found with mismatched fields between systems, and the necessity in having everyone in a business ‘on the same page’ when it comes to field names.
When Names Don’t Match Up
When we first started performing system integrations, one of the main issues we faced was matching up the fields between the various systems. The first system might not use the same name for the field as the next system or, worse yet, they might actually use the same name for a different field. This exercise often revealed even more complexity; the field in the first system might refer to a data set (multiple items) while the next system might actually refer to a single instance of the data set. For example a product system might use the name “style number” referring to a t-shirt. This style number’s related data would reflect the status of the development of this t-shirt; it’s color, fabric, fit and approval status. When you look at the ERP system, style number now refers to a particular version of this t-shirt; each color now has its own style number (and related data).
While all of these issues can be addressed, finding the issue is not always so easy. Many assumptions are made and use of the field names is contained in their respective “silos” so the subtle differences are not readily apparent.
Add a Prefix or Suffix?
One way the previous example has been addressed in the past is the use of prefix or suffix “codes”; a color code is added to the style number to distinguish between the entire data set and an individual member. Whilst this does address the issue, it doesn’t stop the ERP users from interchanging the terminology. What it does do is introduce “intelligent numbering” into the environment. Intelligent style numbers or part numbers will open up a decades old debate, which isn’t my intent here. For now, let’s say there can be an overuse of intelligence in numbering schemes. Having too many attributes included in the scheme makes the numbers very complex and not user friendly.
The more common problem is “running out of room” – the number of variations for an attribute eventually exceeds the character combinations that can be represented with the number of characters reserved for the attribute. A color code might start out as three numbers, but when the 1000th color is used typically a letter is now used for one of the values rather than adding a fourth number. This is due to the legacy system only allowing three digits for the code; the showstopper is when the legacy system doesn’t support alphanumeric values, just numbers. Reprogramming this system isn’t usually an option so more drastic measures are required; manually tracking the use of the same code for two items.
Defining your Taxonomy for a Master Schema
The implementation of a system like PLM, which is tearing down these silos, brings these types of issues to the forefront. The use of data analytics and “big data” highlight the confusion as “garbage in – garbage out”: the result if consistency is not achieved. The top down approach of Master Data Management (MDM) makes addressing these issues a science. The terms taxonomy and structured vocabularies are now routinely used. Taxonomy is the exercise of defining your classification system; what are the characteristics your company uses to describe its products and processes. These characteristics are also defined as hierarchical or as attributes; the hierarchical values “define” the product while the attributes “describe” it. Structured vocabularies define an agreed set of values to be used for your hierarchy and attributes; all departments agree to call everything by the same name. The structured part refers to the business rules applied: certain values are not allowed with other values; the values for “product type” are different based on the values for “gender” – no dresses are allowed as a product type if the gender equals male.
Adobe’s XMP Can Help
Many companies find the task of creating a taxonomy and structured vocabulary overwhelming, as there is no good or obvious place to start. Defining the taxonomy in one system does little to implement it in the next and to tackle all of your systems at once would bring the business to its knees.
Adobe’s XMP metadata standard is positioned as an ideal tool to start defining your taxonomy and structured vocabulary. The standard allows you to create a master list of field definitions that can then be deployed to each system in an orderly fashion. The use of custom XMP fields following the Adobe standard allows a business to capture all of its field and value definitions in a single schema. This “master” schema can then be used to create subsets for each department or system, as not all fields are relevant to all users. Deploying the subset schemas to a system that does not support XMP natively is eased by the similarity of XMP to XML; most of your current systems will support XML definitions.
Embedding XMP Bridges the Divide
XMP has a few features, which aid in this exercise. The displayed field name does not have to match the internal name in the XMP definition, so in one subset schema you can add a display name that is familiar to those users while maintaining the standard name internally in the XMP. In other subset schemas you can use the standard XMP name as the display name. This also aids in implementing XMP across multi-lingual systems.
The feature of embedding XMP metadata into files and images also aids in implementation across systems and departments. As these files are shared (between users or systems) the metadata travels with the file. This provides one method of integrating data as well as enforcing consistency. The trick to using XMP metadata to implement your enterprise taxonomy and structured vocabulary is to make the data collection as user friendly as possible and to collect the data at its origin; the first user who is aware of the value for the metadata field needs to be the one to enter it. By creating subset schemas with required fields you can ensure data is collected to drive your workflows and processes.
Now You Can Get On the Same Page
Getting everyone on the same page and calling things by the same name is essential to optimizing your systems’ integration and your business processes. Using Adobe’s XMP as a starting point allows incremental progress to be made while having a defined long-term outcome; have your internal XML resources investigate using XMP as a vehicle to improving your communications and workflows.
Lydia Mageean Lydia Mageean has been part of the WhichPLM team for over six years now. She has a creative and media background, and is responsible for maintaining and updating our website content, liaising with advertisers, working on special projects like the Annual Review, and more.Joining mid-2013 as our Online Editor, she has since become WhichPLM’s Editor. In addition to taking on writing and interviewing responsibilities, Lydia has also become the primary point of contact for news, events, features and other aspects of our ever-growing online content library and tools.
|
__label__pos
| 0.679737 |
1
I have created a model in ModelBuilder that uses the iterator tool to run through a polygon layer and clip a raster based on the polygon's unique name in the attribute table. The model then copies the rows of the clipped raster's attribute table and creates a dbf table saved with the polygon name.
I now need the dbf table data to be exported or copied and pasted into an excel workbook that contains formulas in a separate worksheet to analyze the data.
Is there any way to do this in ModelBuilder?
I have used the do.TransferSpreadsheet command in VBA to transfer data from MS Access to an Excel Template in past. The command works great for automated analysis of complex data and would like something similar for ArcGIS.
2
In python it's possible. I have done something similar.
This tutorial was very helpful for me: basic-excel-driving-with-python
With this, you can open an existing excel file and add/change values to whatever you want.
Once you have setup a python script that works for you, you can change it for reading parameters so you can add it as a script in ArcToolbox. Then you can call it in ModelBuilder.
Some code that can help you, basically it export to excel an attribute table:
import win32com.client as win32
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = excel.Workbooks.Add()
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = excel.Workbooks.Add()
#Fields List
desc=arcpy.Describe(LayerToExport)
Fields=[]
for Fi in desc.fields:
if Fi.type != "Blob" and Fi.type != "Geometry":
Fields.append(Fi.Name)
iFi = 1
for Fi in Fields:
excel.ActiveSheet.Cells(1,iFi).Value = Fi
iFi+=1
#Each row
rows = arcpy.SearchCursor(LayerToExport)
iRow=2
for row in rows:
iFi = 1
for Fi in Fields:
excel.ActiveSheet.Cells(iRow,iFi).Value = row.getValue(Fi)
iFi+=1
iRow+=1
# Clean up cursor and row objects
del row
del rows
wb.SaveAs(outFile)
wb.Close(False)
Note: As this use win32com, VBA function are supported. So it's possible you can use your "TransferSpreadsheet" function.
2
You do not mention the version of ArcGIS Desktop that you are using so I will assume that 10.2 or later is an option.
I recommend that you investigate adding the Table To Excel tool to your model/script.
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.968281 |
Skip to main content
Deep Packer Inspector API
Project description
#  packerinspector-api
[Deep Packer Inspector's](https://www.packerinspector.com/) API.
You can access the API reference at: [https://www.packerinspector.com/reference#dpi-api-v1](https://www.packerinspector.com/reference#dpi-api-v1)
## How to install
```
pip install packerinspector-api
```
## How to use
You are given an API key when you create an account at Deep Packer Inspector
(create an account [here](https://www.packerinspector.com/login)), copy your
API key from [here](https://www.packerinspector.com/settings).
```python
import packerinspector
dpi = packerinspector.PublicAPI('your API key')
# Public scan
response = dpi.scan_sample('path-to-sample.exe', private=False)
# Public scan with some extra dlls
response = dpi.scan_sample('path-to-sample.exe', private=False,
'extrastuff.dll', 'another.dll')
# Private scan
response = dpi.scan_sample('path-to-sample.exe', private=True)
# Force sample re-scan (aka private scan)
response = dpi.rescan_sample('path-to-sample.exe')
# Get analysis report
response = dpi.get_report('MzU2Ng.taDvVrLuqvOn1GRXgTRJiDGSfsE') # report id
# Get only the behavioural packer analysis info
response = dpi.get_report('MzU2Ng.taDvVrLuqvOn1GRXgTRJiDGSfsE',
get_static_pe_info=False,
get_vt_scans=False)
# Download unpacking graph (stores a png in the given folder)
error = dpi.get_unpacking_graph('MzU2Ng.taDvVrLuqvOn1GRXgTRJiDGSfsE',
'/path/to/graphs-folder/')
# Download memory dump (stores a tar.gz in the given folder)
error = dpi.get_memory_dump('report-id', '/path/to/memory-dumps-folder/')
```
### Unpacking graph example

### Report example
See [https://www.packerinspector.com/reference#get-report-response-example](https://www.packerinspector.com/reference#get-report-response-example) for a description of each field.
```json
{
"report-url": "https://www.packerinspector.com/report/2e965b6c2734dfef93c5b517f192607c97219c5334c76fa22b0971ffdfaafbd9/MzUzOQ.QwIOR1r3E1pMnRzZZhFKYO1PCVA",
"status": 200,
"description": "Report successfully retrieved.",
"dpicode": 1,
"id": "MzUzOQ.QwIOR1r3E1pMnRzZZhFKYO1PCVA",
"vt-scans": true,
"file-identification": true,
"static-pe-information": true,
"packer-analysis": true,
"report": {
"packer-analysis": {
"layers-and-regions": [
{
"lowest-address": 4198400,
"highest-address": 4198400,
"regions": 1,
"layer-num": 0,
"frames": 0,
"size": 34487
},
{
"lowest-address": 50514240,
"highest-address": 50514240,
"regions": 1,
"layer-num": 1,
"frames": 1,
"size": 281
},
{
"lowest-address": 1184486,
"highest-address": 1184486,
"regions": 1,
"layer-num": 2,
"frames": 1,
"size": 4579
},
{
"lowest-address": 64946176,
"highest-address": 64946176,
"regions": 1,
"layer-num": 3,
"frames": 1,
"size": 3776
}
],
"num-downward-trans": 17,
"remote-memory-writes": [
{
"source-address": "",
"dest-process": 0,
"source-process": 0,
"dest-address": 65142784,
"type": "Memory unmap|deallocate",
"size": 12288
},
{
"source-address": "",
"dest-process": 0,
"source-process": 0,
"dest-address": 65077248,
"type": "Memory unmap|deallocate",
"size": 12288
},
{
"source-address": "",
"dest-process": 0,
"source-process": 0,
"dest-address": 65077248,
"type": "Memory unmap|deallocate",
"size": 65536
}
],
"num-layers": 4,
"graph": "https://www.packerinspector.com/graph/2e965b6c2734dfef93c5b517f192607c97219c5334c76fa22b0971ffdfaafbd920170608135058423189",
"num-regions": 4,
"api-calls": {
"1": {
"0": {
"address-space": "50514240-50514521",
"total-api-calls": 0
},
"total-api-calls": 0
},
"0": {
"0": {
"ntdll.dll": [
"RtlImageNtHeader",
"ZwFsControlFile",
"ZwPulseEvent",
"RtlValidateUnicodeString",
"RtlImageDirectoryEntryToData",
"RtlNtStatusToDosError",
"KiFastSystemCallRet",
"bsearch",
"KiFastSystemCall",
"RtlAcquirePebLock",
"RtlInitializeCriticalSectionAndSpinCount",
"RtlInitString",
"ZwRequestWakeupLatency",
"RtlFindCharInUnicodeString",
"ZwQueryPerformanceCounter",
"RtlFreeHeap",
"ZwOpenThreadToken",
"RtlReleasePebLock",
"ZwContinue",
"ZwQueryVirtualMemory",
"strchr",
"RtlCreateHeap",
"ZwFlushBuffersFile",
"LdrLockLoaderLock",
"ZwAdjustPrivilegesToken",
"RtlSetLastWin32Error",
"RtlFindActivationContextSectionString",
"ZwDuplicateToken",
"RtlUnicodeToMultiByteN",
"RtlUnicodeStringToAnsiString",
"RtlUnlockHeap",
"RtlGetLastWin32Error",
"RtlFindClearBits",
"RtlLogStackBackTrace",
"RtlImpersonateSelf",
"RtlAllocateHeap",
"RtlHashUnicodeString",
"memmove",
"RtlEqualUnicodeString",
"RtlSetBits",
"LdrGetDllHandle",
"RtlEncodePointer",
"RtlNtStatusToDosErrorNoTeb",
"ZwOpenProcessToken",
"RtlFreeUnicodeString",
"RtlDecodePointer",
"RtlSizeHeap",
"RtlCompactHeap",
"RtlIsValidHandle",
"RtlFindClearBitsAndSet",
"ZwOpenProcess",
"RtlDosApplyFileIsolationRedirection_Ustr",
"RtlLeaveCriticalSection",
"LdrUnlockLoaderLock",
"RtlLockHeap",
"ZwClose",
"ZwSetInformationThread",
"LdrGetDllHandleEx",
"RtlInitUnicodeString",
"ZwQueryInformationProcess",
"RtlTryEnterCriticalSection",
"ZwAllocateVirtualMemory",
"ZwQuerySystemInformation",
"RtlEnterCriticalSection",
"LdrGetProcedureAddress",
"RtlGetNtGlobalFlags",
"ZwProtectVirtualMemory",
"ZwSetInformationProcess",
"RtlInitAnsiString"
],
"KERNEL32.DLL": [
"RequestWakeupLatency",
"QueryPerformanceCounter",
"GetEnvironmentStringsW",
"GetModuleFileNameW",
"PulseEvent",
"GlobalUnfix",
"GetProcessHandleCount",
"GetProcAddress",
"GetStartupInfoA",
"InterlockedIncrement",
"CloseHandle",
"InterlockedDecrement",
"GetCurrentThreadId",
"GetSystemTimeAsFileTime",
"LocalCompact",
"GetCPInfo",
"MultiByteToWideChar",
"FlushFileBuffers",
"GetCommandLineA",
"IsWow64Process",
"UnhandledExceptionFilter",
"VirtualQuery",
"SetUnhandledExceptionFilter",
"GlobalUnWire",
"OpenProcess",
"GetModuleFileNameA",
"TlsGetValue",
"LCMapStringW",
"TlsAlloc",
"IsValidCodePage",
"HeapCreate",
"SetHandleCount",
"GetModuleHandleW",
"InitializeCriticalSectionAndSpinCount",
"GetProcessHeap",
"GetStdHandle",
"FreeEnvironmentStringsW",
"GetACP",
"GetFileType",
"SetProcessPriorityBoost",
"GetTickCount",
"VirtualQueryEx",
"GetProcessTimes",
"WideCharToMultiByte",
"GetCurrentProcessId",
"GlobalUnlock",
"SetProcessWorkingSetSize",
"TlsSetValue",
"GetStringTypeW",
"GetVersion",
"PeekNamedPipe",
"VerifyConsoleIoHandle"
],
"address-space": "4198400-4232887",
"total-api-calls": 169960
},
"total-api-calls": 169960
},
"3": {
"0": {
"ntdll.dll": [
"ZwUnmapViewOfSection",
"ZwCreateSection",
"RtlLeaveCriticalSection",
"ZwClose",
"RtlImageDirectoryEntryToData",
"KiFastSystemCallRet",
"KiFastSystemCall",
"ZwFreeVirtualMemory",
"ZwMapViewOfSection",
"ZwAllocateVirtualMemory",
"ZwQuerySystemInformation",
"RtlEnterCriticalSection",
"LdrGetProcedureAddress",
"wcscpy",
"RtlInitString"
],
"KERNEL32.DLL": [
"Process32Next",
"lstrcpyW",
"GetCurrentProcessId",
"Process32First",
"CloseHandle",
"GetProcAddress",
"Process32FirstW",
"WideCharToMultiByte",
"CreateToolhelp32Snapshot",
"Process32NextW"
],
"address-space": "64946176-64949952",
"total-api-calls": 467
},
"total-api-calls": 467
},
"2": {
"0": {
"ntdll.dll": [
"RtlValidateUnicodeString",
"RtlImageNtHeader",
"RtlMultiByteToUnicodeN",
"RtlFreeHeap",
"RtlFindCharInUnicodeString",
"RtlInitUnicodeString",
"RtlTryEnterCriticalSection",
"LdrLoadDll",
"RtlLeaveCriticalSection",
"LdrUnlockLoaderLock",
"ZwSetInformationThread",
"RtlUpcaseUnicodeChar",
"RtlAnsiStringToUnicodeString",
"_stricmp",
"LdrFindResource_U",
"RtlAllocateHeap",
"wcsncmp",
"RtlFreeUnicodeString",
"RtlImageDirectoryEntryToData",
"RtlHashUnicodeString",
"LdrAlternateResourcesEnabled",
"LdrLoadAlternateResourceModule",
"RtlNtStatusToDosError",
"KiFastSystemCallRet",
"bsearch",
"KiFastSystemCall",
"LdrLockLoaderLock",
"memmove",
"RtlReleasePebLock",
"wcsrchr",
"RtlFindActivationContextSectionString",
"RtlAcquirePebLock",
"wcslen",
"wcschr",
"ZwAllocateVirtualMemory",
"RtlEnterCriticalSection",
"LdrAccessResource",
"RtlNtStatusToDosErrorNoTeb",
"RtlQueryEnvironmentVariable_U",
"LdrGetProcedureAddress",
"RtlGetNtGlobalFlags",
"RtlInitString",
"KiUserExceptionDispatcher",
"RtlDosApplyFileIsolationRedirection_Ustr",
"RtlInitAnsiString",
"RtlEqualUnicodeString"
],
"KERNEL32.DLL": [
"LoadLibraryExA",
"LocalAlloc",
"FindResourceA",
"SetHandleCount",
"GetModuleHandleA",
"SetThreadIdealProcessor",
"GetProcAddress",
"LoadLibraryA",
"VirtualAlloc",
"VirtualAllocEx",
"LoadLibraryExW",
"LoadResource",
"SizeofResource"
],
"address-space": "1184486-1189065",
"total-api-calls": 1343
},
"total-api-calls": 1343
}
},
"num-upward-trans": 20,
"complexity-type": 3,
"num-regions-special-apis": 2,
"loaded-modules": [
{
"pid": 1968,
"name": "dbghelp.dll",
"start-address": 1565196288,
"size": 659456
},
{
"pid": 1968,
"name": "comdlg32.dll",
"start-address": 1983250432,
"size": 303104
},
{
"pid": 1968,
"name": "msvcrt.dll",
"start-address": 2008940544,
"size": 360448
},
{
"pid": 1968,
"name": "version.dll",
"start-address": 2008875008,
"size": 32768
},
{
"pid": 1968,
"name": "gdi32.dll",
"start-address": 2012151808,
"size": 299008
},
{
"pid": 1968,
"name": "advapi32.dll",
"start-address": 2010775552,
"size": 704512
},
{
"pid": 1968,
"name": "kernel32.dll",
"start-address": 2088763392,
"size": 1060864
},
{
"pid": 1968,
"name": "shell32.dll",
"start-address": 2120876032,
"size": 8523776
},
{
"pid": 1968,
"name": "secur32.dll",
"start-address": 2013003776,
"size": 69632
},
{
"pid": 1968,
"name": "rpcrt4.dll",
"start-address": 2011496448,
"size": 598016
},
{
"pid": 1968,
"name": "45317968759d3e37282ceb75149f627d648534c5b4685f6da3966d8f6fca662",
"start-address": 4194304,
"size": 54423552
},
{
"pid": 1968,
"name": "ntdll.dll",
"start-address": 2089877504,
"size": 741376
},
{
"pid": 1968,
"name": "shlwapi.dll",
"start-address": 2012479488,
"size": 483328
},
{
"pid": 1968,
"name": "user32.dll",
"start-address": 2117664768,
"size": 593920
},
{
"pid": 1968,
"name": "comctl32.dll",
"start-address": 1489174528,
"size": 630784
}
],
"execution-time": 1804,
"granularity": "Not applicable",
"num-pro-ipc": 0,
"last-executed-region": {
"calls-api-getvers": false,
"calls-api-getcomm": false,
"num-api-fun-called": 25,
"writes-exe-region": false,
"process": 0,
"address": 64946176,
"num-diff-apis-called": 25,
"layer-num": 3,
"modified-by-extern-pro": false,
"memory-type": "",
"calls-api-getmodu": false,
"region-num": 0,
"size": 3776
},
"num-processes": 1,
"regions-pot-original": []
},
"file-identification": {
"size": 246272,
"sdhash": "omitted",
"first-seen": "Thu, 08 Jun 2017 13:50:58 GMT",
"auxiliary-files": [],
"mime-type": "application/x-dosexec",
"trid": [
{
"type": "(.DLL) Win32 Dynamic Link Library (generic)",
"percent": 14.2
},
{
"type": "(.EXE) Win32 Executable (generic)",
"percent": 9.7
},
{
"type": "(.EXE) Generic Win/DOS Executable",
"percent": 4.3
},
{
"type": "(.EXE) DOS Executable Generic",
"percent": 4.3
},
{
"type": "(.EXE) Win32 Executable MS Visual C++ (generic)",
"percent": 67.3
}
],
"sha256": "45317968759d3e37282ceb75149f627d648534c5b4685f6da3966d8f6fca662d",
"sha1": "ca963033b9a285b8cd0044df38146a932c838071",
"entropy": 5.41605,
"known-names": [
"45317968759d3e37282ceb75149f627d648534c5b4685f6da3966d8f6fca662d"
],
"imphash": "edbc0337cc897a187d263d79c09c15c7",
"file-type": "PE32 executable (GUI) Intel 80386, for MS Windows",
"packer-signatures": [],
"ssdeep": "3072:xkeyloECBch6ZCGBGSmHJ0y5lj6jdojK7+MGOXpXx8z3Lp7Yoq:xGlnCIwMpj6ijKfxx8z3F0V",
"md5": "47363b94cee907e2b8926c1be61150c7"
},
"vt-scans": [
{
"sha256": "45317968759d3e37282ceb75149f627d648534c5b4685f6da3966d8f6fca662d",
"scans": {
"date": "Wed, 24 May 2017 12:42:12 GMT",
"status": 3,
"description": "VT scan available.",
"results": [
{
"result": "W32.Ransomware_LTK.Trojan",
"antivirus": "Bkav",
"update": 20170524
},
{
"result": "Trojan.GenericKD.2080196",
"antivirus": "MicroWorld-eScan",
"update": 20170524
},
{
"result": "Trojan/W32.Agent.246272.IJ",
"antivirus": "nProtect",
"update": 20170524
},
{
"result": "Not detected",
"antivirus": "CMC",
"update": 20170523
},
{
"result": "Ransom.CryptoWall.WR5",
"antivirus": "CAT-QuickHeal",
"update": 20170524
},
{
"result": "Trojan.GenericKD.2080196",
"antivirus": "ALYac",
"update": 20170524
},
{
"result": "Trojan.Agent.0BGen",
"antivirus": "Malwarebytes",
"update": 20170524
},
{
"result": "Trojan.Win32.CryptoWall.gen",
"antivirus": "VIPRE",
"update": 20170524
},
{
"result": "Trojan/Injector.bstc",
"antivirus": "TheHacker",
"update": 20170522
},
{
"result": "Trojan.GenericKD.2080196",
"antivirus": "BitDefender",
"update": 20170524
},
{
"result": "Trojan ( 004b3f201 )",
"antivirus": "K7GW",
"update": 20170524
},
{
"result": "Trojan ( 004b3f201 )",
"antivirus": "K7AntiVirus",
"update": 20170524
},
{
"result": "W32/Backdoor2.HXGO",
"antivirus": "F-Prot",
"update": 20170524
},
{
"result": "Ransom.Cryptodefense",
"antivirus": "Symantec",
"update": 20170524
},
{
"result": "Win32/Filecoder.CryptoWall.D",
"antivirus": "ESET-NOD32",
"update": 20170524
},
{
"result": "TROJ_CRYPTWALL.F",
"antivirus": "TrendMicro-HouseCall",
"update": 20170524
},
{
"result": "Win32:Androp [Drp]",
"antivirus": "Avast",
"update": 20170524
},
{
"result": "Win.Malware.Vawtrak-860",
"antivirus": "ClamAV",
"update": 20170524
},
{
"result": "Trojan.Win32.Agent.ieva",
"antivirus": "Kaspersky",
"update": 20170524
},
{
"result": "Trojan.Win32.Panda.eahzta",
"antivirus": "NANO-Antivirus",
"update": 20170524
},
{
"result": "Trojan.Win32.Agent.246272.E[h]",
"antivirus": "ViRobot",
"update": 20170524
},
{
"result": "Troj.Ransom.W32.Cryptodef.cbs!c",
"antivirus": "AegisLab",
"update": 20170524
},
{
"result": "Trojan.GenericKD.2080196",
"antivirus": "Ad-Aware",
"update": 20170524
},
{
"result": "Troj/Vawtrak-AN",
"antivirus": "Sophos",
"update": 20170524
},
{
"result": "TrojWare.Win32.Ransom.Crowti.~RM",
"antivirus": "Comodo",
"update": 20170524
},
{
"result": "Trojan.GenericKD.2080196",
"antivirus": "F-Secure",
"update": 20170524
},
{
"result": "Trojan.PWS.Panda.7278",
"antivirus": "DrWeb",
"update": 20170524
},
{
"result": "Backdoor.Androm.Win32.14641",
"antivirus": "Zillya",
"update": 20170523
},
{
"result": "TROJ_CRYPTWALL.F",
"antivirus": "TrendMicro",
"update": 20170524
},
{
"result": "BehavesLike.Win32.PackedAP.dm",
"antivirus": "McAfee-GW-Edition",
"update": 20170523
},
{
"result": "Trojan.GenericKD.2080196 (B)",
"antivirus": "Emsisoft",
"update": 20170524
},
{
"result": "W32/Backdoor.CNGJ-2770",
"antivirus": "Cyren",
"update": 20170524
},
{
"result": "Backdoor/Androm.ebf",
"antivirus": "Jiangmin",
"update": 20170524
},
{
"result": "W32/Vawtrak.AN!tr",
"antivirus": "Fortinet",
"update": 20170524
},
{
"result": "Trojan[Backdoor]/Win32.Androm",
"antivirus": "Antiy-AVL",
"update": 20170524
},
{
"result": "Not detected",
"antivirus": "Kingsoft",
"update": 20170524
},
{
"result": "Trojan.Generic.D1FBDC4",
"antivirus": "Arcabit",
"update": 20170524
},
{
"result": "Trojan.Agent/Gen-Injector",
"antivirus": "SUPERAntiSpyware",
"update": 20170524
},
{
"result": "Ransom:Win32/Crowti.A",
"antivirus": "Microsoft",
"update": 20170524
},
{
"result": "Trojan/Win32.MDA.R131384",
"antivirus": "AhnLab-V3",
"update": 20170524
},
{
"result": "Ransom-CWall",
"antivirus": "McAfee",
"update": 20170524
},
{
"result": "Trojan.Win32.CryptoWall.gen",
"antivirus": "AVware",
"update": 20170524
},
{
"result": "SScope.Trojan.Agent.2315",
"antivirus": "VBA32",
"update": 20170524
},
{
"result": "Not detected",
"antivirus": "Zoner",
"update": 20170524
},
{
"result": "Win32.Trojan.Bp-generic.Wpav",
"antivirus": "Tencent",
"update": 20170524
},
{
"result": "Trojan-Ransom.CryptoWall3",
"antivirus": "Ikarus",
"update": 20170524
},
{
"result": "Win32.Trojan-Ransom.CryptoWall.C",
"antivirus": "GData",
"update": 20170524
},
{
"result": "Generic_r.EKI",
"antivirus": "AVG",
"update": 20170524
},
{
"result": "Trj/WLT.B",
"antivirus": "Panda",
"update": 20170523
},
{
"result": "HEUR/QVM10.1.Malware.Gen",
"antivirus": "Qihoo-360",
"update": 20170524
},
{
"result": "TR/Crypt.Xpack.134743",
"antivirus": "Avira",
"update": 20170524
},
{
"result": "Trojan.Generic (cloud:07G3VqhU2BR) ",
"antivirus": "Rising",
"update": 20170524
},
{
"result": "Trojan.Cryptodef!",
"antivirus": "Yandex",
"update": 20170518
},
{
"result": "worm.win32.dorkbot.i",
"antivirus": "Invincea",
"update": 20170519
},
{
"result": "malicious_confidence_100% (W)",
"antivirus": "CrowdStrike",
"update": 20170130
},
{
"result": "malicious (high confidence)",
"antivirus": "Endgame",
"update": 20170515
},
{
"result": "W32.Malware.gen",
"antivirus": "Webroot",
"update": 20170524
},
{
"result": "Trojan.Win32.Agent.ieva",
"antivirus": "ZoneAlarm",
"update": 20170524
},
{
"result": "generic.ml",
"antivirus": "Paloalto",
"update": 20170524
},
{
"result": "static engine - malicious",
"antivirus": "SentinelOne",
"update": 20170516
}
]
}
}
],
"static-pe-analysis": {
"exports": [],
"target-machine": "Intel 386 or later processors and compatible processors",
"overlay-size": 0,
"imports": {
"dbghelp.dll": [
"ImageNtHeader",
"ImageRvaToSection",
"ImageRvaToVa"
],
"comdlg32.dll": [
"GetSaveFileNameA",
"GetOpenFileNameA"
],
"KERNEL32.DLL": [
"IsValidCodePage",
"GetOEMCP",
"GetACP",
"GetCPInfo",
"GetSystemTimeAsFileTime",
"GetCurrentProcessId",
"GetTickCount",
"QueryPerformanceCounter",
"HeapFree",
"VirtualFree",
"HeapCreate",
"GetFileType",
"SetHandleCount",
"GetEnvironmentStringsW",
"WideCharToMultiByte",
"FreeEnvironmentStringsW",
"GetEnvironmentStrings",
"FreeEnvironmentStringsA",
"InitializeCriticalSectionAndSpinCount",
"LoadLibraryA",
"IsDebuggerPresent",
"SetUnhandledExceptionFilter",
"UnhandledExceptionFilter",
"GetCurrentProcess",
"TerminateProcess",
"EnterCriticalSection",
"HeapSize",
"LeaveCriticalSection",
"DeleteCriticalSection",
"GetLocaleInfoA",
"WriteFile",
"InterlockedDecrement",
"GetLastError",
"GetCurrentThreadId",
"SetLastError",
"InterlockedIncrement",
"TlsFree",
"TlsSetValue",
"TlsAlloc",
"TlsGetValue",
"GetStartupInfoA",
"ExitProcess",
"GetProcAddress",
"Sleep",
"GetModuleHandleW",
"GlobalCompact",
"SetProcessWorkingSetSize",
"EncodePointer",
"OpenProcess",
"GlobalUnWire",
"GetStdHandle",
"IsWow64Process",
"GetProcessHandleCount",
"GetProcessHeap",
"FlushFileBuffers",
"PulseEvent",
"GetVersion",
"RtlUnwind",
"HeapAlloc",
"VirtualAlloc",
"HeapReAlloc",
"GetStringTypeA",
"MultiByteToWideChar",
"GetStringTypeW",
"GetCommandLineA",
"GetProcessId",
"LockResource",
"GlobalDeleteAtom",
"LCMapStringA",
"LCMapStringW",
"GetModuleFileNameA",
"SetProcessPriorityBoost",
"GlobalUnfix",
"RequestWakeupLatency",
"IsProcessInJob",
"GetThreadTimes",
"GetProcessTimes",
"PeekNamedPipe"
],
"ADVAPI32.dll": [
"RegSetValueA",
"RegQueryValueExA",
"OpenProcessToken",
"LookupPrivilegeValueA",
"AdjustTokenPrivileges",
"RegOpenKeyExA",
"RegCloseKey",
"RegCreateKeyA",
"RegDeleteKeyA",
"GetUserNameA"
],
"USER32.DLL": [
"EnableMenuItem",
"GetDlgItem",
"SendDlgItemMessageA",
"AppendMenuA",
"GetWindowLongA",
"wvsprintfA",
"SetWindowPos",
"FindWindowA",
"RedrawWindow",
"GetWindowTextA",
"EnableWindow",
"GetSystemMetrics",
"IsWindow",
"CheckRadioButton",
"UnregisterClassA",
"SetCursor",
"GetSysColorBrush",
"DialogBoxParamA",
"DestroyAcceleratorTable",
"DispatchMessageA",
"TranslateMessage",
"LoadIconA",
"EmptyClipboard",
"SetClipboardData",
"SetFocus",
"CharUpperA",
"OpenClipboard",
"IsDialogMessageA",
"TranslateAcceleratorA",
"GetMessageA",
"LoadAcceleratorsA",
"RemoveMenu",
"InvalidateRect",
"ChildWindowFromPoint",
"PostMessageA",
"DestroyCursor",
"CreateDialogParamA",
"GetWindowRect",
"IsMenu",
"GetSubMenu",
"SetDlgItemInt",
"GetWindowPlacement",
"CharLowerBuffA",
"LoadCursorA",
"CheckMenuRadioItem",
"GetSysColor",
"KillTimer",
"DestroyIcon",
"DestroyWindow",
"PostQuitMessage",
"GetClientRect",
"MoveWindow",
"GetSystemMenu",
"SetTimer",
"SetWindowPlacement",
"InsertMenuItemA",
"GetMenu",
"CheckMenuItem",
"SetMenuItemInfoA",
"SetActiveWindow",
"DefDlgProcA",
"RegisterClassA",
"EndDialog",
"SetDlgItemTextA",
"EnumClipboardFormats",
"GetClipboardData",
"CloseClipboard",
"GetClassInfoA",
"CallWindowProcA",
"SetWindowLongA",
"IsDlgButtonChecked",
"SetWindowTextA",
"CheckDlgButton",
"GetActiveWindow",
"MessageBoxA",
"wsprintfA",
"GetDlgItemTextA",
"SendMessageA",
"GetCursorPos",
"TrackPopupMenu",
"ClientToScreen",
"DestroyMenu",
"CreatePopupMenu"
],
"COMCTL32.dll": [
"ImageList_Destroy",
"InitCommonControlsEx",
"ImageList_ReplaceIcon",
"ImageList_Remove",
"CreateToolbarEx",
"ImageList_SetBkColor",
"ImageList_Create"
]
},
"overlay-entropy": 0,
"resources": [
{
"count": 1,
"sha1": "57d1f324f19a5669e9d71527d1cd73b0ff7c349d",
"name": "RT_MESSAGETABLE",
"size": 91740,
"sha256": "ef97603fbb1ed118f972e91e194d6c34255c87c0fa23eb28089d6b58d870319d",
"ssdeep": "1536:+rCm5BGSt4HJ0yfGOlXzbGcw7R4jjK7+MGVUXpXJfT8zooLpE4YZ1lObN:cCGBGSmHJ0y5lj6jdojK7+MGOXpXx8z1",
"sdhash": "omitted",
"type": "ASCII text, with very long lines, with no line terminators",
"md5": "01351f623950a354353819e93c173cd8"
},
{
"count": 2,
"sha1": "4260284ce14278c397aaf6f389c1609b0ab0ce51",
"name": "RT_MANIFEST",
"size": 381,
"sha256": "4bb79dcea0a901f7d9eac5aa05728ae92acb42e0cb22e5dd14134f4421a3d8df",
"ssdeep": "6:TM3iSnjUglRu9TbX+A1WBRu9TNNSTfUTdNciW7N2x8RTdN9TIHG:TM3iSnRuV1aMN2U5Nci62xA5NEG",
"sdhash": "Not applicable",
"type": "XML 1.0 document text",
"md5": "1e4a89b11eae0fcf8bb5fdd5ec3b6f61"
}
],
"entry-point": "0x403487",
"sections": [
{
"sha1": "dad1bd7bddfe0bbf5e13eac1ed754ed0c784fda4",
"name": ".text\u0000\u0000\u0000",
"virtual-address": "0x1000",
"raw-size": "0x8800",
"raw-address": "0x86b7",
"sha256": "a32a62ccd0d08681c0c3018a330e9bf3135239afc707a20e6761e34973aaf3d0",
"flags": [
{
"name": "IMAGE_SCN_MEM_EXECUTE",
"value": 536870912
},
{
"name": "IMAGE_SCN_CNT_CODE",
"value": 32
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x86b7",
"entropy": 6.52148,
"ssdeep": "768:k1T+ZKX+VvDEzu+0CXIWBVip1IcaOK1uw7W9ekK+G5:UTCmzuw45LOf1uw7ueD+",
"sdhash": "omitted",
"type": "Code",
"md5": "c14b15c6f6e70cd124a1dcde16f070b3"
},
{
"sha1": "dad1bd7bddfe0bbf5e13eac1ed754ed0c784fda4",
"name": ".text\u0000\u0000\u0000",
"virtual-address": "0x1000",
"raw-size": "0x8800",
"raw-address": "0x86b7",
"sha256": "a32a62ccd0d08681c0c3018a330e9bf3135239afc707a20e6761e34973aaf3d0",
"flags": [
{
"name": "IMAGE_SCN_MEM_EXECUTE",
"value": 536870912
},
{
"name": "IMAGE_SCN_CNT_CODE",
"value": 32
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x86b7",
"entropy": 6.52148,
"ssdeep": "768:k1T+ZKX+VvDEzu+0CXIWBVip1IcaOK1uw7W9ekK+G5:UTCmzuw45LOf1uw7ueD+",
"sdhash": "omitted",
"type": "Code",
"md5": "c14b15c6f6e70cd124a1dcde16f070b3"
},
{
"sha1": "dad1bd7bddfe0bbf5e13eac1ed754ed0c784fda4",
"name": ".text\u0000\u0000\u0000",
"virtual-address": "0x1000",
"raw-size": "0x8800",
"raw-address": "0x86b7",
"sha256": "a32a62ccd0d08681c0c3018a330e9bf3135239afc707a20e6761e34973aaf3d0",
"flags": [
{
"name": "IMAGE_SCN_MEM_EXECUTE",
"value": 536870912
},
{
"name": "IMAGE_SCN_CNT_CODE",
"value": 32
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x86b7",
"entropy": 6.52148,
"ssdeep": "768:k1T+ZKX+VvDEzu+0CXIWBVip1IcaOK1uw7W9ekK+G5:UTCmzuw45LOf1uw7ueD+",
"sdhash": "omitted",
"type": "Code",
"md5": "c14b15c6f6e70cd124a1dcde16f070b3"
},
{
"sha1": "f031b0de605ed5cb9d615e79240fe33af12eeac8",
"name": ".rdata\u0000\u0000",
"virtual-address": "0xa000",
"raw-size": "0x2a00",
"raw-address": "0x2820",
"sha256": "36965f23b49ba777d7d0831f079e47087ad87ec2cf53ab952d8271e59287c43c",
"flags": [
{
"name": "IMAGE_SCN_CNT_INITIALIZED_DATA",
"value": 64
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x2820",
"entropy": 5.41741,
"ssdeep": "192:vhpls/KRn4nnnnnnnnnnLurh2AdTFJL/S+ZozitizDvZ1IHb7Dec8:5plGluFnJL/BZozitizDvZQPKc8",
"sdhash": "omitted",
"type": "Data",
"md5": "196eabd2bfebff72df631efba401fbdd"
},
{
"sha1": "f031b0de605ed5cb9d615e79240fe33af12eeac8",
"name": ".rdata\u0000\u0000",
"virtual-address": "0xa000",
"raw-size": "0x2a00",
"raw-address": "0x2820",
"sha256": "36965f23b49ba777d7d0831f079e47087ad87ec2cf53ab952d8271e59287c43c",
"flags": [
{
"name": "IMAGE_SCN_CNT_INITIALIZED_DATA",
"value": 64
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x2820",
"entropy": 5.41741,
"ssdeep": "192:vhpls/KRn4nnnnnnnnnnLurh2AdTFJL/S+ZozitizDvZ1IHb7Dec8:5plGluFnJL/BZozitizDvZQPKc8",
"sdhash": "omitted",
"type": "Data",
"md5": "196eabd2bfebff72df631efba401fbdd"
},
{
"sha1": "b48165649b37200709423573adfac5d9297ec1e0",
"name": ".data\u0000\u0000\u0000",
"virtual-address": "0xd000",
"raw-size": "0x1a200",
"raw-address": "0x33c2be0",
"sha256": "30c22d47b8294b12b0f15aeba97f129dd682de09faf32b32b9051456762e5aef",
"flags": [
{
"name": "IMAGE_SCN_CNT_INITIALIZED_DATA",
"value": 64
},
{
"name": "IMAGE_SCN_MEM_WRITE",
"value": 2147483647
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x33c2be0",
"entropy": 2.35016,
"ssdeep": "96:jgT/tQBwX2jVmW8rP37hO50ZU0GbgtIQYtqHKm+S8/ACEba7VKbWmkdb/jABgtN0:jstQB1VmWBqUBqIQDXy4CGa7YbqECE",
"sdhash": "omitted",
"type": "Data",
"md5": "dde216807b0f1105151c2caf33fee281"
},
{
"sha1": "b48165649b37200709423573adfac5d9297ec1e0",
"name": ".data\u0000\u0000\u0000",
"virtual-address": "0xd000",
"raw-size": "0x1a200",
"raw-address": "0x33c2be0",
"sha256": "30c22d47b8294b12b0f15aeba97f129dd682de09faf32b32b9051456762e5aef",
"flags": [
{
"name": "IMAGE_SCN_CNT_INITIALIZED_DATA",
"value": 64
},
{
"name": "IMAGE_SCN_MEM_WRITE",
"value": 2147483647
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x33c2be0",
"entropy": 2.35016,
"ssdeep": "96:jgT/tQBwX2jVmW8rP37hO50ZU0GbgtIQYtqHKm+S8/ACEba7VKbWmkdb/jABgtN0:jstQB1VmWBqUBqIQDXy4CGa7YbqECE",
"sdhash": "omitted",
"type": "Data",
"md5": "dde216807b0f1105151c2caf33fee281"
},
{
"sha1": "b48165649b37200709423573adfac5d9297ec1e0",
"name": ".data\u0000\u0000\u0000",
"virtual-address": "0xd000",
"raw-size": "0x1a200",
"raw-address": "0x33c2be0",
"sha256": "30c22d47b8294b12b0f15aeba97f129dd682de09faf32b32b9051456762e5aef",
"flags": [
{
"name": "IMAGE_SCN_CNT_INITIALIZED_DATA",
"value": 64
},
{
"name": "IMAGE_SCN_MEM_WRITE",
"value": 2147483647
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x33c2be0",
"entropy": 2.35016,
"ssdeep": "96:jgT/tQBwX2jVmW8rP37hO50ZU0GbgtIQYtqHKm+S8/ACEba7VKbWmkdb/jABgtN0:jstQB1VmWBqUBqIQDXy4CGa7YbqECE",
"sdhash": "omitted",
"type": "Data",
"md5": "dde216807b0f1105151c2caf33fee281"
},
{
"sha1": "b1be2680150b9ab2177ecc48db9dade0b4f752dc",
"name": ".rsrc\u0000\u0000\u0000",
"virtual-address": "0x33d0000",
"raw-size": "0x16a00",
"raw-address": "0x1687c",
"sha256": "04f9b14aaf26e35e0f32fca09bc63e7fbdd16d6bba24618625917a54fbe8a78c",
"flags": [
{
"name": "IMAGE_SCN_CNT_INITIALIZED_DATA",
"value": 64
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x1687c",
"entropy": 6.02005,
"ssdeep": "1536:FrCm5BGSt4HJ0yfGOlXzbGcw7R4jjK7+MGVUXpXJfT8zooLpE4YZ1lOb+:5CGBGSmHJ0y5lj6jdojK7+MGOXpXx8zm",
"sdhash": "omitted",
"type": "Data",
"md5": "be2219bffc936ebf7c285253194f3167"
},
{
"sha1": "b1be2680150b9ab2177ecc48db9dade0b4f752dc",
"name": ".rsrc\u0000\u0000\u0000",
"virtual-address": "0x33d0000",
"raw-size": "0x16a00",
"raw-address": "0x1687c",
"sha256": "04f9b14aaf26e35e0f32fca09bc63e7fbdd16d6bba24618625917a54fbe8a78c",
"flags": [
{
"name": "IMAGE_SCN_CNT_INITIALIZED_DATA",
"value": 64
},
{
"name": "IMAGE_SCN_MEM_READ",
"value": 1073741824
}
],
"virtual-size": "0x1687c",
"entropy": 6.02005,
"ssdeep": "1536:FrCm5BGSt4HJ0yfGOlXzbGcw7R4jjK7+MGVUXpXJfT8zooLpE4YZ1lOb+:5CGBGSmHJ0y5lj6jdojK7+MGOXpXx8zm",
"sdhash": "omitted",
"type": "Data",
"md5": "be2219bffc936ebf7c285253194f3167"
}
],
"compi-timestamp": "Tue, 13 Jan 2015 09:25:45 GMT"
}
}
}
```
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
packerinspector-api-1.0.0.tar.gz (41.0 kB view hashes)
Uploaded Source
Supported by
AWS AWS Cloud computing and Security Sponsor Datadog Datadog Monitoring Fastly Fastly CDN Google Google Download Analytics Microsoft Microsoft PSF Sponsor Pingdom Pingdom Monitoring Sentry Sentry Error logging StatusPage StatusPage Status page
|
__label__pos
| 0.894844 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
Here is the code
using(var context=new AventureWorksDataContext())
{
IEnumerable<Customer> _customerQuery = from c in context.Customers
where c.FirstName.StartsWith("A")
select c;
var watch = new Stopwatch();
watch.Start();
var result = Parallel.ForEach(_customerQuery, c=>Console.WriteLine(c.FirstName));
watch.Stop();
Debug.WriteLine(watch.ElapsedMilliseconds);
watch = new Stopwatch();
watch.Start();
foreach (var customer in _customerQuery)
{
Console.WriteLine(customer.FirstName);
}
watch.Stop();
Debug.WriteLine(watch.ElapsedMilliseconds);
}
}
The problem is, Parallel.ForEach takes about 400 MS vs a regular Foreach takes about 40MS. So What exactly am i doing wrong/ Why doesn't this work as I expect it.
share|improve this question
7
Basically because there's a setup cost involved, and you're not doing enough work inside the loop to justify the overhead. See e.g. this answer. (I expect this is a duplicate question.) – Rup May 17 '11 at 19:35
2
The Console.WriteLine() makes it totally irrelevant. – Henk Holterman May 17 '11 at 20:36
Try to remove the Console.WriteLine() and replace it with c.FirstName = c.FirstName.ToLowerInvariant(). You will not see a difference if your collection has around 5000 items; but if your collection has 6000, 7000,... 10.000 items, on a 4 cores processor, you will see a big difference (Parallel.Foreach will be faster) – Junior M May 14 '12 at 11:43
4 Answers 4
up vote 75 down vote accepted
Suppose you have a task to perform. Let's say you're a math teacher and you have twenty papers to grade. It takes you two minutes to grade a paper, so it's going to take you about forty minutes.
Now let's suppose that you decide to hire some assistants to help you grade papers. It takes you an hour to locate four assistants. You each take four papers and you are all done in eight minutes. You've traded 40 minutes of work for 68 total minutes of work including the extra hour to find the assistants, so this isn't a savings. The overhead of finding the assistants is larger than the cost of doing the work yourself.
Now suppose you have twenty thousand papers to grade, so it is going to take you about 40000 minutes. Now if you spend an hour finding assistants, that's a win. You each take 4000 papers and are done in a total of 8060 minutes instead of 40000 minutes, a savings of almost a factor of 5. The overhead of finding the assistants is basically irrelevant.
Parallelization is not free. The cost of splitting up work amongst different threads needs to be tiny compared to the amount of work done per thread.
share|improve this answer
3
Would you be able to point out some reading material that discusses how one would attempt to calculate the point at which it's feasible to take on the overhead to complete a task? – Ryan Aug 17 '12 at 12:51
The first thing you should realize is that not all parallelism is beneficial. There is an amount of overhead to parallelism, and this overhead may or may not be significant depending on the complexity what is being parallelized. Since the work in your parallel function is very small, the overhead of the management the parallelism has to do becomes significant, thus slowing down the overall work.
share|improve this answer
1
+1 - Just slightly faster than me! – Tejs May 17 '11 at 19:38
The additional overhead of creating all the threads for your enumerable VS just executing the numerable is more than likely the cause for the slowdown. Parallel.ForEach is not a blanket performance increasing move; it needs to be weighed whether or not the operation that is to be completed for each element is likely to block.
For example, if you were to make a web request or something instead of simply writing to the console, the parallel version might be faster. As it is, simply writing to the console is a very fast operation, so the overhead of creating the threads and starting them is going to be slower.
share|improve this answer
As previous writer has said there are some overhead associated with Parallel.ForEach, but that is not why you can't see your performance improvement. Console.WriteLine is a synchronous operation, so only one thread is working at a time. Try changing the body to something non-blocking and you will see the performance increase (as long as the amount of work in the body is big enough to outweight the overhead).
share|improve this answer
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.680139 |
life ideas
October 19, 2006
outlook bar javascript
Filed under: script — manoftoday @ 2:26 am
http://www.dynamicdrive.com/dynamicindex1/outbar2/index.htm
September 29, 2006
script to delete files
Filed under: script, Uncategorized — manoftoday @ 7:50 pm
bash-2.05b$ more /usr/lib/sa/sa2
find /var/log/sa \( -name ‘sar??’ -o -name ‘sa??’ \) -mtime +7 -exec rm -f {} \;
find files with name sa?? or sar?? in directory /var/log/sa with modified time older
than 7 days and then delete
September 12, 2006
bash script from hands-on experience
Filed under: script, Uncategorized — manoftoday @ 5:22 pm
• there are two formats in command substitution:
content="$(cat ./mylog)"
web_files=`ls ./public_html`
• Using Quotes to enclose your variables <—–important
X=""
if [ -n "$X" ]; then # -n tests to see if the argument is non empty
echo "the variable X is not the empty string"
fi
• Using Braces to Protect Your Variables
OK. Here’s a potential problem situation. Suppose you want to echo the value of the variable X, followed immediately by the letters “abc”. Question: how do you do this ? Let’s have a try :
#!/bin/bash
X=ABC
echo "$Xabc"
THis gives no output. What went wrong ? The answer is that the shell thought that we were asking for the variable Xabc, which is uninitialised. The way to deal with this is to put braces around X to seperate it from the other characters. The following gives the desired result:
#!/bin/bash
X=ABC
echo "${X}abc"
• test conditions
The test command needs to be in the form “operand1<space>operator<space>operand2” or operator<space>operand2
for string : -n, =, !=
for integer : lt, gt,eq,neq,ge,le
file: -f , -d
if [ ! -f “./myfile” ]; then
else
fi
• bash parses commands by space
bash gets unhappy if you leave a space on either side of the = sign. For example, the following gives an error message:
X = hello
the correct one should be X=”hello”
• send e-mail
approach 1:
EMAIL_BODY=/location/of/original #this file contains your body(template)
# these following sed commands make a new email to be sent with the user input substituted
# this first sed command creates a new email body that will be sent with the mail command
# since the -i option is not used a new email will be made, without changeing your original
sed "s/PassWord/$PASSWORD/" ${EMAIL_BODY} > ammended_email
# the -i option is used now because it will be changeing the new ammended email
sed -i "s/EmAil/$EMAIL/" ammended_email
sed -i "s/UsEr/$USER_NAME/" ammended_email
mail -s "subject" [email protected] < ammended_email
approach 2:
mail -s "subject" [email protected] <<EOF
hello,this is $USER
EOF
• read from file
1) a singleline process
#rename from *.txt -> *.text’
find /tmp -name '*.txt' |while read line; do newname=$(echo ${line}|sed 's/txt$/text/'); mv -v "${line}" "${newname}"; done
2) while loop
while read curline; do
echo $curline
done < "/tmp/yourfile"
#!/bin/sh
while read inputline
do
login="$(echo $inputline | cut -d: -f1)"
fulln="$(echo $inputline | cut -d: -f4)"
echo login = $login and fullname = $fulln
done < /etc/passwd
3) read the whole file
content = "$(cat /tmp/yourfile)"
• wait a job to be finished
>I'm looking for a way to allow a process to run for a maximum of x milliseconds. I've made a clumsy script
> using "sleep", but it has the disadvantage that each
> run takes exactly x milliseconds. I would prefer to
> allow the process to finish sooner.
>
> This was my idea:
>
> for file in *.bf
> do
> cat $file | bf > ${file%.bf} &
> sleep 0.5
> killall bf
> done
> Comments?
You probably want to be doing something with "$!", the PID of the last spawned
background job.
For example:
# Start some time consuming job in background...
sh -c "sleep 5" &
JOB=$!
# Start its nemesis
sh -c "sleep 1 ; kill $JOB" &
# Wait for the job to finish, but don't wait for it's killer
wait $JOB
# Carry on/loop to next file or whatever
....
If the first job finishes early the killer should later just emit a
harmless "no such process" error message (unless your sleep time is
so big your system manages to wrap-round the PID range of course :^).
• how to run in a spawned shell? <—–advanced
test1.sh
#!/bin/bash
export TMPFILE=”/tmp/test.$$.`whoami`”
export LOGFILE=”/tmp/mylog”
MYTRACE=”/tmp/mytrace”
source ~/test2.sh <<‘EOF’
echo “tmpfile is $TMPFILE” # the output has value
echo “log is $LOGFILE” # the out put has /tmp/mylog2
echo “trace is $MYTRACE” # the output is empty
test2.sh
#!/bin/bash
export LOGFILE=”/tmp/mylog2″
echo “tmpfile in bash2 is $TMPFILE” # the output has value
echo “log in bash2 is $LOGFILE” # the out put has /tmp/mylog2
echo “trace in bash2 is $MYTRACE” # the out put is empty
/usr/bin/bash –noprofile –norc
in summary,
1)in test1.sh, you need to use export so values can be reserved after EOF; you can’t use variable=value
2) the pid before you run test1.sh and the pid that you echo $$ after test1.sh is finished is the same.
• Debugging on part(s) of the script
set -x # activate debugging from here
...
code
...
set +x # stop debugging from here
• input arguments
test1.sh
declare flag=””
declare kflag=””
declare rflag=””
declare xflag=””
declare loop_numbers=””
while getopts “:krn:x” flag # -k -r -n 3 -x
do
case $flag in
k) kflag=1;;
r) rflag=1;;
n) loop_numbers=”${OPTARG}”;;
x) xflag=1;;
?) print-usage; exit 2;;
esac
done
enforce parameters
if [ -z $1 ]; then # no parameter
echo "Usage: $0 /path/to/httpd.conf"
exit 1
fi
• array manipulation
[bob in ~] ARRAY=(one two three)
[bob in ~] echo ${ARRAY[*]}
one two three
[bob in ~] echo ${ARRAY[@]}
one two three
[bob in ~] echo ${ARRAY[2]}
three
[bob in ~] ARRAY[3]=four
[bob in ~] echo ${ARRAY[*]}
one two three four
[bob in ~] unset ARRAY[1]
[bob in ~] echo ${ARRAY[*]}
one three four
[bob in ~] unset ARRAY
bob in ~] echo ${ARRAY[*]}
<--no output-->
• arithmetic caculation
#
# Count the number of possible testers.
# (Loop until we find an empty string.)
#
count=0
while [ "x${wholist[count]}" != "x" ]
do
count=$(( $count + 1 ))
done
• variables expansion
1)remove parts
[bob in ~]ARRAY=(one two one three one four)
[bob in ~] echo ${ARRAY[*]}
one two one three one four
[bob in ~] echo ${ARRAY[*]#one}
two three four
[bob in ~] echo ${ARRAY[*]} # ARRAY itself has no change
one two one three one four
[bob in ~] echo ${ARRAY[*]#t}
one wo one hree one four
[bob in ~] echo ${ARRAY[*]#t*} # one # for shortest match from beginning
one wo one hree one four
[bob in ~] echo ${ARRAY[*]##t*} # two ## for longest match from beginning including spaces
one one one four
similar ,%, and %% match from the end.
we normally use this for variables, such as :
SZINFO=`echo ${SZINFO##*(}`
SZINFO=`echo ${SZINFO%%)*}`
2) Replacing parts
This is done using the
${VAR/PATTERN/STRING}
or
${VAR//PATTERN/STRING}
syntax. The first form replaces only the first match, the second replaces all matches of PATTERN with STRING:
3) length
[bob in ~] echo $SHELL
/bin/bash
[bob in ~] echo ${#SHELL}
9
[bob in ~] ARRAY=(one two three)
[bob in ~] echo ${#ARRAY}
3
Create a free website or blog at WordPress.com.
|
__label__pos
| 0.555977 |
Matlab极坐标绘制散点图polarscatter
3.8
(5)
今天,来说下Matlab中用极坐标体系绘制散点图的函数polarscatter。该函数,可以在极坐标中绘制散点图,散点图在极坐标中体现为一个一个的扇形区域上的离散点,这与传统离散图中的矩形区域离散点有所不同。本文主要讲解极坐标离散图函数polarscatter的常见用法、语法说明、极坐标中创建散点图、使用已填充标记并设置标记大小、使用具有不同大小和颜色的标记、绘图之前从度转换为弧度、合并两个散点图以及创建散点图之后进行修改等方法。
Matlab极坐标绘制散点图polarscatter
下面,我们将开始详细的介绍polarscatter函数的语法介绍,实例引用,结果展示。由于极坐标散点图函数只有Matlab2020版本中可以,我这里安装的是Matlab2016版本,因此这里不再提供相关的帮助文档。此外,下文中出现的代码及运行结果均来源于官方文档教程。
常见用法
polarscatter(th,r)
polarscatter(th,r,sz)
polarscatter(th,r,sz,c)
polarscatter(___,mkr)
polarscatter(___,'filled')
polarscatter(___,Name,Value)
polarscatter(pax,___)
ps = polarscatter(___)
语法说明
polarscatter(th,r) 绘制 th 对 r 的图,并在每个数据点显示一个圆圈。th 和 r 必须是具有相同长度的向量。必须以弧度为单位指定 th。
polarscatter(th,r,sz) 设置标记大小,其中 sz 以平方磅为单位指定每个标记的面积。要以相同的大小绘制所有标记,请将 sz 指定为标量。要以不同的大小绘制标记,请将 sz 指定为长度与 th 相同的向量。
polarscatter(th,r,sz,c) 设置标记颜色,其中 c 是向量、三列矩阵、RGB 三元组或颜色名称,例如 ‘red’。
polarscatter(___,mkr) 设置标记符号。例如,’+’ 显示十字标记。在上述语法中的任何输入参数组合之后指定标记符号。
polarscatter(___,’filled’) 填充标记内部。
polarscatter(___,Name,Value) 使用一个或多个名称-值对组参数修改散点图的外观。例如,您可以指定 ‘FaceAlpha’ 和一个介于 0 和 1 之间的标量值,从而使用半透明标记。
polarscatter(pax,___) 将在 pax 指定的极坐标区(而不是当前坐标区)中绘制图形。
ps = polarscatter(___) 返回 Scatter 对象。在创建 Scatter 对象之后可使用 ps 修改其外观。
极坐标中创建散点图
在极坐标中创建一个散点图。
th = pi/4:pi/4:2*pi;
r = [19 6 12 18 16 11 15 15];
polarscatter(th,r)
Matlab极坐标绘制散点图polarscatter
使用已填充标记并设置标记大小
通过指定可选的输入参数 ‘filled’,创建一个使用已填充标记的散点图。将标记大小设置为 75 平方磅。
th = linspace(0,2*pi,20);
r = rand(1,20);
sz = 75;
polarscatter(th,r,sz,'filled')
Matlab极坐标绘制散点图polarscatter
使用具有不同大小和颜色的标记
使用具有不同大小和颜色的标记创建散点图。将可选的大小和颜色输入参数指定为向量。使用颜色向量中的唯一性值指定所需的不同颜色。这些值映射到颜色图中的不同颜色。
th = pi/4:pi/4:2*pi;
r = [19 6 12 18 16 11 15 15];
sz = 100*[6 15 20 3 15 3 6 40];
c = [1 2 2 2 1 1 2 1];
polarscatter(th,r,sz,c,'filled','MarkerFaceAlpha',.5)
Matlab极坐标绘制散点图polarscatter
绘图之前从度转换为弧度
创建角度值以度为单位的数据。由于 polarscatter 要求角度值以弧度为单位,因此要在绘制之前使用 deg2rad 将值转换为弧度。
th = linspace(0,360,50);
r = 0.005*th/10;
th_radians = deg2rad(th);
polarscatter(th_radians,r)
Matlab极坐标绘制散点图polarscatter
合并两个散点图
使用 hold 命令将位于相同极坐标区中的两个散点图合并起来。添加包含每个图的说明信息的图例。
th = pi/6:pi/6:2*pi;
r1 = rand(12,1);
polarscatter(th,r1,'filled')
hold on
r2 = rand(12,1);
polarscatter(th,r2,'filled')
hold off
legend('Series A','Series B')
Matlab极坐标绘制散点图polarscatter
创建散点图之后进行修改
创建一个散点图,并将散点图对象赋给变量 ps。
th = pi/6:pi/6:2*pi;
r = rand(12,1);
ps = polarscatter(th,r,'filled')
Matlab极坐标绘制散点图polarscatter
运行结果为:
ps =
Scatter with properties:
Marker: 'o'
MarkerEdgeColor: 'none'
MarkerFaceColor: 'flat'
SizeData: 36
LineWidth: 0.5000
ThetaData: [1x12 double]
RData: [1x12 double]
ZData: [1x0 double]
CData: [0 0.4470 0.7410]
Show all properties
在创建散点图对象之后可使用 ps 修改其属性。
ps.Marker = 'square';
ps.SizeData = 200;
ps.MarkerFaceColor = 'red';
ps.MarkerFaceAlpha = .5;
Matlab极坐标绘制散点图polarscatter
共计5人评分,平均3.8
到目前为止还没有投票~
很抱歉,这篇文章对您没有用!
让我们改善这篇文章!
告诉我们我们如何改善这篇文章?
本文来自转载,原文出处:MathWorks官网,由古哥整理发布
如若转载,请注明出处:https://iymark.com/program/matlab-polar-function-polarscatter.html
发表评论
登录后才能评论
本站APP
本站APP
分享本页
返回顶部
|
__label__pos
| 0.533262 |
github graphql-java-kickstart/graphql-spring-boot v5.3
5.3
latest releases: v15.1.0, v15.1.0-javax, v15.0.0...
5 years ago
Exception handler support
Support for Springs @ExceptionHandler annotation. By default graphql-java-servlet when an exception occurs while processing a GraphQL request the error returned to the caller is a GraphQLError with a simple message and InternalServerError type. All details regarding the exception that actually occurred are lost.
This release introduces the property graphql.servlet.exception-handlers-enabled, which is set to false by default. That ensures that the default behavior stays the same. By switching this property to true it will instead actually use the exception that was thrown to construct the GraphQLError response, e.g.:
{
"data": null,
"errors": [
{
"message": "User 'username' cannot be found at the Identity Provider",
"type": "AccessDeniedException",
"path": null,
"extensions": null
}
]
}
The message contains the message as represented by the exception and the type contains the simple name of the exception that was thrown.
In addition you can now add methods to your Spring beans annotated with Springs @ExceptionHandler. This way you can easily customize the errors you want to return depending on the exception that was thrown while processing a GraphQL request, e.g.:
@ExceptionHandler(Throwable.class)
GraphQLError handleException(Throwable e) {
return new ThrowableGraphQLError(e);
}
This example would actually result in the exact same response as given in the example response above, but it shows the idea behind it. You can return any type of custom GraphQLError for this method, and you can have any number of methods annotated like this. It will select the method targeting the most concrete exception that was thrown.
Don't miss a new graphql-spring-boot release
NewReleases is sending notifications on new releases.
|
__label__pos
| 0.914849 |
遗传算法解决旅行商问题(TSP)一:初始化和适应值
旅行商问题(Travelling salesman problem, TSP)是这样一个问题:给定一系列城市和每对城市之间的距离,求解访问每一座城市一次并回到起始城市的最短回路。
设有n个城市,城市i和城市j之间的距离是C_{ij} 。设
那么TSP问题使下面的目标最小:
设置参数
首先,设置一下参数:
CITYSIZE = 10; % 城市个数
POPSIZE = 50; % 种群个数
PC = 0.4; % 交叉概率
PM = 0.05; % 变异概率
MAXGEN = 150; % 迭代次数
LEAVING = 5; % 父代保留数量
生成距离矩阵
这里假设有10个城市,其坐标定义于pos变量,第一行是各个城市的x坐标,第二行是各个城市的y坐标,比如第一个城市的坐标为(1,1),第三个城市的坐标为(2,2)。之后计算处各个城市之间的距离。
pos = [1 2 2 3 1 4 5 5 6 4; 1 1 2 2 3 4 4 5 5 6]; % 城市坐标
D = distancematrix(pos); % 城市距离矩阵
function D = distancematrix(pos)
% 根据城市坐标位置生成各地距离矩阵
% pos input 城市坐标
% D output 距离矩阵
N = size(pos, 2);
D = zeros(N, N);
for i = 1:N
for j = i+1:N
dis = (pos(1,i) - pos(1,j)).^2 + (pos(2,i) - pos(2,j)).^2;
D(i,j) = dis^(0.5);
D(j,i) = D(i,j);
end
end
end
初始化
种群中每个个体,都表示着一个访问城市的路径,这意味着每个个体都要覆盖所有城市,但是只能经过一个城市一次。
function pop = initpop(popsize, chromlength)
% 生成初始种群
% popsize input 种群规模
% chromlength input 染色体长度
% pop output 种群
pop = zeros(popsize, chromlength);
for i = 1:popsize
pop(i,:) = randperm(chromlength);
end
end
计算适应度值
根据种群中每个个体中城市的顺序,可以求出这个个体所代表的距离,距离越大,适应度越小,因此用距离的倒数作为个体的适应度值。
function len = callength(D, pop)
% 计算种群中每个个体所对应的路线距离
% D input 距离矩阵
% pop input 种群
% len output 距离
n = size(pop,1);
len = zeros(n, 1);
for i = 1:n
for j = 1:(size(pop,2)-1)
len(i,1) = len(i,1) + D(pop(i, j), pop(i, j+1));
end
len(i,1) = len(i,1) + D(pop(i,1), pop(i,end));
end
end
function fitness = calfitness(objval)
% 求适应度值
% objval input 目标函数值
% fitness output 适应度值
fitness = 1 ./ objval;
end
“遗传算法解决旅行商问题(TSP)一:初始化和适应值”的一个回复
发表评论
电子邮件地址不会被公开。 必填项已用*标注
|
__label__pos
| 0.888639 |
Client Credentials REST Host 403 after 1 hour
I'm in the process of migrating our old forum threads/replies into a new community instance using the ClientCredentialsRestHost PostToDynamic method and find that after 1 hour, the API response is "The remote server returned an error: (403) Forbidden". It happily chugs along, creating away until an hour passes. The documentation seems to imply that the host will handle getting refresh tokens. Maybe i've missed something in a configuration or a setting that extends beyond the hour limit.
Can anyone help?
Thanks
|
__label__pos
| 0.993786 |
Resources Contact Us Home
Browse by: INVENTOR PATENT HOLDER PATENT NUMBER DATE
Data search user interface with ergonomic mechanism for user profile definition and manipulation
6484164 Data search user interface with ergonomic mechanism for user profile definition and manipulation
Patent Drawings:Drawing: 6484164-10 Drawing: 6484164-11 Drawing: 6484164-12 Drawing: 6484164-13 Drawing: 6484164-14 Drawing: 6484164-15 Drawing: 6484164-16 Drawing: 6484164-17 Drawing: 6484164-18 Drawing: 6484164-2
« 1 2 »
(17 images)
Inventor: Nikolovska, et al.
Date Issued: November 19, 2002
Application: 09/537,494
Filed: March 29, 2000
Inventors: Camplin; Alison F. (London, GB)
Martino; Jacquelyn A. (Cold Spring, NY)
Nikolovska; Lira (Eindhoven, NL)
Assignee: Koninklijke Philips Electronics N.V. (Eindhoven, NL)
Primary Examiner: Amsbury; Wayne
Assistant Examiner: Nguyen; Cam Linh
Attorney Or Agent: Thorne; Gregory L.
U.S. Class: 707/3
Field Of Search: 707/3; 345/327; 455/4.2; 348/34; 348/569; 725/61
International Class: G06F 17/30
U.S Patent Documents: 5544360; 5737734; 5754939; 5945988; 5946678; 5987446; 6005565; 6008802; 6018372; 6130726; 6133909; 6249773; 6317741; 6326962; 2001/0039659
Foreign Patent Documents: 0774866; 0774868; WO9406248; WO9748230; WO9821878
Other References:
Abstract: A user interface for querying and displaying records from a database employs a physical metaphor for the process of constructing queries and viewing results. User profiles are presented and manipulated to operate with queries in the same way as other criteria. For example, in one embodiment, the search criteria are shown as the beads on respective strings, the strings representing categories of criteria. One of the strings is a set of user profiles that can be added to a query in the same manner as the addition of criteria. Criteria are selected to form a query by moving corresponding beads to a query string. User preference profiles can be constructed in the same manner. Profiles are saved and represented as bead strings that can be used in further interactions in the same manner as criteria beads, Profiles can also be the result of automatic machine-analysis of user interaction.
Claim: We claim:
1. A method of accessing a database, comprising the steps of: generating a user profile containing data by which data in said database may be ranked as to suitability for a particularuser associated with said user profile; displaying search criteria and said at least one user profile as display elements in a single display of a user interface; accepting commands to select certain ones of said criteria for inclusion in a searchquery; accepting at least one further command to select said at least one user profile in said search query; said steps of accepting comprising the step of moving a corresponding display element to an area of said single display corresponding to asearch area; submitting said search query to a controller programmed to access said database responsively to said query.
2. A method as in claim 1, wherein said step of displaying includes displaying respective symbols corresponding to said criteria and displaying at least one respective symbol corresponding to said at least one user profile.
3. A method as in claim 1, wherein said step of generating includes deriving, from results of past user searches of records from said database, data permitting a prediction of preferences of said user for future selections of records from saiddatabase.
4. A method as in 1, wherein said step of generating includes deriving, from user selections from results of past searches of records from said database, data permitting a prediction of preferences of said user for future selections of recordsfrom said database.
5. A method as in claim 1, including saving said search query selectively in response to first save commands adding a saved search query resulting thereby to said criteria, whereby a saved search query can be selected in the same manner asindividual criteria.
6. A method of searching a database, comprising the steps of: generating user profiles, each containing data by which data in said database may be ranked as to suitability for a particular user associated with a respective one of said userprofiles; displaying search criteria and said user profiles as display elements in a single display of a user interface; accepting commands to select a portion of said search criteria for inclusion in a search query; accepting at least one furthercommand to select at least one of said user profiles in said search query; said steps of accepting comprising the step of moving a corresponding display element to an area of said single display corresponding to a search area; submitting said searchquery to a controller programmed to access said database responsively to said query.
7. A method as in claim 6, further comprising the step of: accepting at least one further command to accept at least one other of said user profiles to include in said search query, whereby two profiles are combined in a single search.
8. A method as in claim 6, further comprising the step of: said step of accepting commands including highlighting an icon associated with said search criteria; said step of accepting at least one further command including highlighting an iconassociated with a selected one of said user profiles.
9. A method as in claim 6, further comprising the step of: displaying said search criteria and said user profiles in a three-dimensional scene; said step of accepting commands including represent said display elements in a first location ofsaid three-dimensional scene and to indicate a selection of a respective one thereof, changing a location thereof from said first location to a second location in said three-dimensional scene.
10. A device for accessing a database, comprising: a data store, a user input device, and display; a controller connected to control said data store, said user input device, and said display; said data store containing user profile data bywhich data in said database may be ranked as to suitability for a particular user associated therewith; said controller being programmed to display search criteria and said user profile data as display elements in a single display image of a userinterface; said controller being programmed to accept commands moving a corresponding display element to a search area of said single display image to select certain ones of said criteria for inclusion in a search query; said controller beingprogrammed to accept at least one further command moving a corresponding display element to said search area of said single display image to select said at least one user profile in said search query; said controller being programmed to access saiddatabase responsively to said search query.
11. A device as in claim 10, wherein said controller is programmed to display said profile data and said search criteria as respective three-dimensional symbols selectable by said input device.
12. A device as in claim 11, wherein said database is an electronic program guide.
13. A device as in claim 10, wherein said controller is further programmed to update said user profile data responsively to past user searches of records from said database.
14. A device as in claim 10, wherein said controller is programmed to represent said search criteria and said user profile data in the form of elements in a first location of a three-dimensional scene and to indicate a selection of a respectiveone thereof by changing a location thereof from said first location to a second location in said three-dimensional scene.
15. A device as in claim 14, wherein said database is an electronic program guide.
16. A device as in claim 14, wherein said elements corresponding to said criteria are grouped into categories, each located in a respective portion of said first location and said user elements corresponding to said profiles are located inanother respective portion of said first location.
17. A device as in claim 16, wherein said database is an electronic program guide.
18. A method of accessing an electronic program guide, comprising the steps of: generating a user profile containing data by which data in said database may be ranked as to suitability for a particular user associated with said user profile; displaying search criteria and said at least one user profile as display elements in a single display image of a user interface; accepting commands to select certain ones of said criteria for inclusion in a search query; accepting at least one furthercommand to select said at least one user profile in said search query; saving said search query; said steps of accepting comprising the step of moving a corresponding display element to an area of said single display image corresponding to a searcharea; generating and saving another search query through said steps of generating, displaying, accepting commands to select, accepting at least one further command, and saving; selecting one of said search query and said another search query andsubmitting a selected one of said search query and said another search query to a controller programmed to access said database responsively to said query.
Description: BACKGROUND OF THE INVENTION
The present invention relates to search, retrieval, and organization of data from large data spaces such as the contents of CD ROMS, electronic program guides, the Internet, etc.
The vast amount of information available in CD-ROMS, the Internet, television programming guides, the proposed national information infrastructure, etc. spur the dream of easy access to many large information media sources. Such increased accessto information is likely to be useful, but the prospect of such large amounts of information presents new challenges for the design of user interfaces for information access. For example, Internet users often struggle to find information sources or giveup in the face of the difficulty of constructing search queries and visualizing the results of queries. Straight text lists such as provided by electronic program guides, Internet search engines, and text search tools such as Folio.RTM., are tedious towork with, often hard to work with, and, because of the rather monotonous look, rather tiring to look at for long periods of time.
There are two major components to searching databases: filtering so irrelevant information is excluded, and sorting the filtered results by some priority schema. For example, an Internet search engine such as Google.RTM. uses a text query tofilter and sort records in its database representing entry points in the World-Wide-Web. It uses certain implicit criteria such as an implied vote "cast" by pages that link to the candidates retrieved by the query (That is, pages that are linked to bymore other pages, have more "votes"). Google also analyzes the pages that cast the votes and gives greater weight to pages that receive more votes by other pages.
Tools such as Google and most other database retrieval tools accept search queries in the form of text with connectors and results are presented in the form of lists sorted by some specific lump criterion which might be an operator involvingmultiple criteria (such as sort by A, then by B, etc).
SUMMARY OF THE INVENTION
Briefly, a user interface for querying and displaying records from a database employs a physical metaphor for the process of constructing queries and viewing results. Queries are defined by selecting predefined criteria rather than entering themas search terms, the former being more compatible with lean-back applications such as searching of electronic program guides. According to the invention, user profiles are presented and manipulated to cooperate with queries in the same way as othercriteria. For example, in one embodiment, the search criteria are shown as the beads on respective strings, the strings representing categories of criteria. One of the strings is a set of user profiles that can be added to a query in the same manner asthe addition of criteria. Criteria are selected to form a query by moving corresponding beads to a query string. User preference profiles can be constructed in the same manner. Profiles are saved and represented as bead strings that can be used infurther interactions in the same manner as criteria beads. Profiles can also be the result of automatic machine-analysis of user interaction. Thus, the historical usage pattern of a user is used by a machine learning device to predict user preferences. Such "implicit" profiles can also be added to a query in the same manner as the more typical preference profiles in which users incorporate their explicit preferences in the form of rules into a user profile.
The UI design addresses various problems with user interaction with database search devices in the "lean-back" environment. (In the "lean back" situation the user is being entertained and relaxes as when the user watches television, and in the"lean-forward" situation the user is active and focused as when the user uses a desktop computer.) For example, the invention may be used to interact with electronic program guides (EPGs) used with broadcast television. In such an application, the UImay be displayed as a layer directly on top of the recorded or broadcast program or selectively on its own screen. The UI may be accessed using a simple handheld controller. In a preferred embodiment, the controller has vertical and horizontal scrollbuttons and only a few specialized buttons to access the various operating modes directly.
The UI generates three environments or worlds: a search world, a profiling world, and an overview world. Assuming an EPG environment, in the search world, the user enters, saves, and edits filtering and sorting criteria (time of day, day ofweek, genre, etc.). In the profiling world, the user generates and modifies explicit (and some types of implicit) user profiles. Explicit profiles are the set of likes and dislikes a user has entered to represent his preferences. Each can be selectedfrom lists of criteria such as genre (movies, game shows, educational, etc.), channel (ABC, MTV, CSPAN, etc.), actors (Jodie Foster, Tom Cruise, Ricardo Bernini, etc.), and so on. In the overview world, the user views and selects among the results ofthe search, which is a result of the sorting, filtering, and profiling information.
The invention may be used in connection with various different searching functions. For example, in a preferred embodiment designed around EPGs, there are three basic searching functions provided: (1) Filtering, (2) Filtering and/or sorting byexplicit profile, and (3) Sorting by implicit profile. These are defined as follows.
(1) Filtering--A set of criteria that defines the set of results to be displayed. These criteria choose exactly what records in the database will be chosen and which will be excluded from the overview world display. (2) Filtering and/or sortingby explicit profile--A user is permitted to specify likes or dislikes by making selections from various categories. For example, the user can indicate that dramas and action movies are favored and that certain actors are disfavored. These criteria arethen applied to sort the records returned by the filtering process. The degree of importance of the criteria may also be specified, although the complexity of adding this layer may make its addition to a system less worthwhile for the vast majority ofusers.
As an example of the second type of system, one EP application (EP 0854645A2) describes a system that enables a user to enter generic preferences such as a preferred program category, for example, sitcom, dramatic series, old movies, etc. Theapplication also describes preference templates in which preference profiles can be selected, for example, one for children aged 10-12, another for teenage girls, another for airplane hobbyists, etc. This method of inputting requires that a user have thecapacity to make generalizations about him/herself and that these be a true picture of his/her preferences. It can also be a difficult task for common people to answer questions about abstractions such as: "Do you like dramas or action movies?" and "Howimportant is the `drama` criteria to you?"
(3) Sorting by implicit profile--This is a profile that is generated passively by having the system "observe" user behavior. The user merely makes viewing (recording, downloading, or otherwise "using") choices in the normal fashion and thesystem gradually builds a personal preference database by extracting a model of the user's behavior from the choices. This process can be enhanced by permitting the user to rate material (for example on a scale of one to five stars). The system usesthis model to make predictions about what the user would prefer to watch in the future. The process of extracting predictions from a viewing history, or specification of degree of desirability, can follow simple algorithms, such as marking apparentfavorites after repeated requests for the same item. It can be a sophisticated machine-learning process such as a decision-tree technique with a large number of inputs (degrees of freedom). Such models, generally speaking, look for patterns in theuser's interaction behavior (i.e., interaction with the UI for making selections).
An example of this type of profile information is MbTV, a system that learns viewers' television watching preferences by monitoring their viewing patterns. MbTV operates transparently and builds a profile of a viewer's tastes. This profile isused to provide services, for example, recommending television programs the viewer might be interested in watching. MbTV learns about each of its viewer's tastes and uses what it learns to recommend upcoming programs. MbTV can help viewers scheduletheir television watching time by alerting them to desirable upcoming programs, and with the addition of a storage device, automatically record these programs when the viewer is absent.
MbTV has a Preference Determination Engine and a Storage Management Engine. These are used to facilitate time-shifted television. MbTV can automatically record, rather than simply suggest, desirable programming. MbTV's Storage ManagementEngine tries to insure that the storage device has the optimal contents. This process involves tracking which recorded programs have been viewed (completely or partially), and which are ignored. Viewers can "lock" recorded programs for future viewingin order to prevent deletion. The ways in which viewers handle program suggestions or recorded content provides additional feedback to MbTV's preference engine which uses this information to refine future decisions.
MbTV will reserve a portion of the recording space to represent each "constituent interest." These "interests" may translate into different family members or could represent different taste categories. Though MbTV does not require userintervention, it is customizable by those that want to fine-tune its capabilities. Viewers can influence the "storage budget" for different types of programs. For example, a viewer might indicate that, though the children watch the majority oftelevision in a household, no more than 25% of the recording space should be consumed by children's programs.
Note that search criteria, and implicit and explicit profiles, may produce reliability or ranking estimates for each proposed record in the searched database rather than just "yes" and "no" results for each candidate record in the database. Asearch query can be treated as providing criteria, each of which must be satisfied by the search results. In this case, if a query contains a specified channel and a specified time range, then only records satisfying both criteria will be returned. Thesame search query could be treated as expressing preferences in which case, records that do not satisfy both criteria could be returned, and, instead of filtering, the records are sorted according to how good a match they are to the criteria. So,records satisfying both criteria would be ranked highest, records satisfying only one criterion would be ranked second-highest, and records satisfying neither criterion would be ranked last. Intermediate ranking could be performed by the closeness ofthe record criterion to the query or profile criterion. For example, in the example above, if a record is closer to the specified time range, it would be ranked higher than a record that further in time from the specified time range.
In the case of implicit profiles, there may not be any criteria at all in the sense that one could show how high each genre, for example, is ranked. If, for example, a neural network-based predicting engine were used to sort the records of thedatabase, there is no clear way to expose the criteria weighting. that is used to make the decisions, at least for an easy-to-use system. However, some simpler machine learning techniques may also be used for producing and implementing implicitprofiles. For example, the criteria appearing in selected records (or records ranked highly as highly desirable) can be scored based on the frequency of criteria hits. For example, in an EPG, if all the programs that are selected for viewing aredaytime soaps, the soap genre and daytime time range would have a high frequency count and the science documentary genre would have zero hits. These could be exposed so that the viewer can see them. In the user interface embodiments described below, inwhich profiles are edited, the user may edit such an implicit profile because it is based, on specific weights applied to each criterion. A user can remove the criterion from the profile, change the weighting, etc. The latter is only an example of animplicit profiling mechanism that provides a clear way for the user to modify it. Other mechanisms may also provide such a scheme; for example the system need not be based only on frequency of hits of the user's selections.
Construction of the queries for filtering and preference application is preferably done with three dimensional visual graphics to facilitate the organization of information and to allow users to manipulate elements of a scene ("tokens") thatrepresent data records, search and sort criteria, etc. In a preferred UI, the tokens take the form of beads. Categories are represented as strings or loops of beads. When a preference filter is constructed, specific choices (beads) are taken from acategory string and added to a search string or bin. The beads, strings, and bins are represented as three-dimensional objects, which is more than just for appearances in that it serves as a cue for the additional meaning that the third dimensionprovides: generally an object's proximity to the user represents its relative ranking in the particular context.
Where the strings represent criteria, the ranking of criteria in each category may correspond to the frequency with which the criteria are used by the user in constructing queries. So, for example, if the user's searches always include thedaytime time range, the bead or beads corresponding to this time range would be ranked higher. Alternatively, the criteria may be ranked according to selected records, rather than by all the records (or at least the most highly ranked ones) returned bysearching.
One or more categories may actually be constructed of words, for example keywords, that appear in a large proportion of the chosen programs or a large proportion of the hits returned by the user's queries. This makes sense because requiring thekeyword category to contain every conceivable keyword would be awkward. Extracting the significant keywords from the descriptions of chosen records and/or from records returned by the queries based on frequency of occurrence or a variation thereof,makes the number of possible keywords easier to handle and easier to select. Preferably, the keyword list should be editable by the user in the same fashion as described in detail with respect to the editing of profiles elsewhere in the specification. To construct a keyword list based on frequency of use data, the system could start with no keywords at all. Then, each time the user enters a query, the returned results could be scanned for common terms. The titles, descriptions, or any other datacould be scanned and those terms that occur with some degree of frequency could be stored in a keyword list. The keywords in the list could each be ranked based on frequency or frequency weighted. by the context in which the keyword appeared. Forexample, a keyword in a title might receive a lower rank than a keyword in a description or a keyword that is a direct object or subject in a grammatical parsing of a sentence in a description might receive a higher ranking than indirect objects, etc.Instead of extracting keywords from the returned records of a search, the keywords could be extracted from only the records selected for use. For example, only programs that are chosen for viewing or recording are actually used to form the keyword listin the manner described. Alternatively both selections and returns of queries could be used, but the keywords in the selected records could be weighted more strongly than keywords in other returned records.
The overview world presents the results of filtering and sorting criteria in a visually clear and simple way. Preferably, a three-dimensional animation is shown with three-dimensional tokens representing each record. Again, the (apparent)closeness of the token to the user represents the prediction of how much the user, according to the selections that are active, would prefer the item identified by the record. That is, proximity, initially, represents goodness of fit. In one example ofthis, the bead strings, each bead representing a record, are shown axially aligned with the string with the best fits being arranged closest to the user and the others receding into the background according to their degree of fit. The user can advancein an axial direction to search through the results as if walking through a tunnel. A pointer can be moved among the beads to select them. This causes additional information about each to be exposed.
The implicit and explicit user profiles are invoked by adding them to the search queries (the bin or string) just as done with other choices. The effect of adding the profile is to have results sorted according to the preferences. Explicit userprofiles are generated in the same way.
The invention will be described in connection with certain preferred embodiments, with reference to the following illustrative figures so that it may be more fully understood. With reference to the figures, it is stressed that the particularsshown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description ofthe principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with thedrawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an illustration of a hardware system that may be used to implement an embodiment of the invention.
FIG. 2 is an illustration of a remote control that may be used with an electronic program guide embodiment of the invention.
FIG. 3 is a flowchart illustrating various processes encompassed by the inventive user-interface.
FIG. 4 is an illustration of a user interface for forming and editing a search query.
FIG. 5 is an illustration of a user interface for forming and editing a user profile.
FIG. 6 is an illustration of a user interface for forming and editing a search query displaying explicit and implicit profiles as search criteria.
FIG. 7 is an illustration of a user interface for forming and editing user profiles where likes and dislikes are accommodated.
FIG. 8 is an illustration of an alternate pictorial scheme applicable to the embodiments of FIGS. 4-7.
FIG. 9 is an illustration of another alternate pictorial scheme applicable to the embodiments of FIGS. 4-7.
FIG. 10 is an illustration of yet another alternate pictorial scheme applicable to the embodiments of FIGS. 4-7.
FIG. 11 is an illustration of yet another alternate pictorial scheme applicable to the embodiments of FIGS. 4-7.
FIG. 12 is an illustration of a user interface for viewing and selecting records returned from a search of a database consistent at least some of the foregoing embodiments.
FIG. 13 is an illustration of another user interface for viewing and selecting records returned from a search of a database consistent with at least some of the foregoing embodiments.
FIG. 14 is an illustration of yet another user interface for viewing and selecting records returned from a search of a database consistent with at least some of the foregoing embodiments.
FIG. 15 is an illustration of yet another user interface for viewing and selecting records returned from a search of a database consistent with at least some of the foregoing embodiments.
FIG. 16A illustrates the plane definitions that apply to the embodiment of FIG. 16B.
FIG. 16B is an illustration of another user interface for forming and editing search queries and user profiles in which text is used to represent objects in a 3-D scene employed by the user interface.
FIG. 17 illustrates a text-based search result viewing scene that also uses text as objects in a 3-D scene.
FIG. 18 is a flow-chart illustrating processes for keyword category generation and sorting.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring to FIG. 1, the invention relates to database search and retrieval and is particularly suited to lean back environments or applications where the availability of training is, or is desired to be, limited. For example, the invention maybe used in connection with search and visualization tasks in connection with electronic program guides (EPGs). In the context of televisions, EPG is applied loosely to various features that can be delivered using a database of program information. Theprogram information may include titles and various descriptive information such as a narrative summary, various keywords categorizing the content, etc. In an embodiment, a computer sends program information to a television 230. The computer 240 may beequipped to receive the video signal 270 and control the channel-changing function, and to allow a user to select channels through a tuner 245 linked to the computer 240 rather than through the television's tuner 230. The user can then select theprogram to be viewed by highlighting a desired selection from the displayed program schedule using the remote control 210 to control the computer. The computer 240 has a data link 260 through which it can receive updated program schedule data. Thiscould be a telephone line connectable to an Internet service provider or some other suitable data connection. The computer 240 has a mass storage device 235, for example a hard disk, to store program schedule information, program applications andupgrades, and other information. Information about the user's preferences and other data can be uploaded into the computer 240 via removable media such as a memory card or disk 220. A great many interesting features are enabled by appropriatelyprogramming the computer 240.
Note that many substitutions are possible in the above example hardware environment and all can be used in connection with the invention. The mass storage can be replaced by volatile memory or non-volatile memory. The data can be stored locallyor remotely. In fact, the entire computer 240 could be replaced with a server operating offsite through a link. Rather than using a remote control to send commands to the computer 240 through an infrared port 215, the controller could send commandsthrough a data channel 260 which could be separate from, or the same as, the physical channel carrying the video. The video 270 or other content can be carried by a cable, RF, or any other broadband physical channel or obtained from a mass storage orremovable storage medium. It could be carried by a switched physical channel such as a phone line or a virtually switched channel such as ATM or other network suitable for synchronous data communication. Content could be asynchronous and tolerant ofdropouts so that present-day IP networks could be used. Further, the content of the line through which programming content is received could be audio, chat conversation data, web sites, or any other kind of content for which a variety of selections arepossible. The program guide data can be received through channels other than the separate data link 260. For example, program guide information can be received through the same physical channel as the video or other content. It could even be providedthrough removable data storage media such as memory card or disk 220. The remote control 210 can be replaced by a keyboard, voice command interface, 3D-mouse, joystick, or any other suitable input device. Selections can be made by moving a highlightingindicator, identifying a selection symbolically (e.g., by a name or number), or making selections in batch form through a data transmission or via removable media. In the latter case, one or more selections may be stored in some form and transmitted tothe computer 240, bypassing the display 170 altogether. For example, batch data could come from a portable storage device (e.g. a personal digital assistant, memory card, or smart card). Such a device could have many preferences stored on it for use invarious environments so as to customize the computer equipment to be used.
Referring now to FIG. 2, a remote controller that may be used with a EPG embodiment of the invention has a simple set of keys including vertical and horizontal cursor keys 232 and 212, respectively. A select, "GO," button 214 is used to triggeractions depending on the context in which it is pressed. A search key 216 is used to invoke a search UI element, described below. A profile key is used to invoke a profile UI element described below. Start, save, reset, and delete keys 229, 222, 226,and 224 respectively, are used to control specific operations depending on context as described below.
Referring now to FIG. 3, a general overview of a user's interaction with the overall UI, which comprises search, profile, and overview worlds, may begin with the construction of an explicit profile in step S10. Referring now also to FIG. 4, forexample, using a search/profile tool 90, criteria are selected by selecting a token 105 (typ.) (here represented by a bead), for example, representing the genre "Movies" and moving them to an icon representing a selection bin 140. Before they areselected, each criterion is grouped using a bead string visual element, for example the Genre string 155, where each group of criteria resides on a particular string. When a criterion is selected, the corresponding token 105 is highlighted in some way,such as by bolding or color change. In addition, further informatin relating to the selected criterion token may be revealed. For example, the Movies bead 165 was selected. Selected tokens are shown in the foreground of the three-dimensional scenepermitting more information to be shown clearly on the screen. The Movies bead 165 in this example has been moved from the Genre string 155 to the selection bin 140. The place occupied by the Movies bead 165 is marked by a ghosted bead 115 after itstransfer to the selection bin 140. In the UI, it is envisioned that any of the beads may be selected and transferred to the selection bin 140.
The search/profile tool may be navigated as follows. When the user is in the search area, the user can see all the category labels 130 (typ.). The categories may be chosen using the cursor keys 212, 232. In the figure, the Genre string 155 mayhave been selected using the horizontal cursor keys 212, as indicated by suitable highlighting 150 or any other appropriate device such as changing a color of the selected string, bolding or highlighting the characters of the genre label 155, increasingthe character size, etc. When the desired string has been selected, the GO key may be pressed to permit selection of beads on the selected string.
Note that, alternatively, the beads of non-selected strings may be hidden and only a vestige displayed to indicate the presence of the category. Also, when the selected category reaches the far left or far right of the screen, the strings can berolled in the opposite direction to reveal more strings. Alternatively, the selected category may remain at the center of the screen and each time a horizontal scroll key 212 is pressed, the set of strings rolls in the opposite direction bringing a newstring into view.
To navigate a selected string, the user may simply use the vertical cursor keys 232. This may have the effect of moving the selected bead up and down or of rotating the entire string so the center one is always the selected one. In either case,the bead strings can be arbitrarily long and continued downward or upward cursor guided movement results in the feeding of the string in the appropriate direction to reveal more beads.
Note that in an embodiment, multiple strings may be open and the vertical and horizontal cursor keys 212 and 232 may be used to navigate among them without reselecting any strings. When a bead is selected, it can be moved to the search bin 140by pressing the GO button 214. For example, the Movies bead in FIG. 4 was selected and the GO button 214 was pressed causing it to be moved into the search/profile bin 140 as indicated by the dotted arrow 142. To remove a bead from the search/profilebin 140, the user performs some action to move the selector to the search/profile bin 140 and selects the bead to be removed. Then the GO button 214 is pressed which causes the selected bead to retreat to the string from which it came. A fast way toclear all beads from the search bin 140 is to use the reset button 226.
Note that the search/profile bin 140 is labeled "Search" in FIG. 4. This indicates that the mode the user is currently in. Also, the basic appearance and workings of the UI when in profile mode are the same as in the search mode. However, inprofile mode, the user is given the option of indicating whether a criterion is liked or disliked. Also, in search mode, a certain set of categories may be provided. One is searches that have been saved and another is profiles. These are explainedlater.
Referring now also to FIG. 5, a search string 157 may be provided as a category in the search mode UI or in a specialized screen. The advantage of the former is that it reminds the user of the availability of the saved searches. Saved searchescan be shown on a string adjacent the search/profile bin 140. Another special category that may be presented, and preferably is presented, in search mode is the profile category. This may be shown as a bead string also.
After a search is created, it may be executed using the start button 228 to see the results of the search, or it may be saved, as indicated at 140A, and given a name by pressing the save button 222. Naming the search can be performed using knownUI elements such as a cursor-key navigable onscreen keyboard such as provided with Tivo.RTM. personal digital video recorder devices. For example, the name "Pizza" could be given to identify a search that applies for Thursday night pizza parties.
A Previously saved search can be accessed or edited as follows. To access the string, the user can use the cursor keys 212, 214 or by pressing or holding down the search button 216 while in the search mode or any other means. This willhighlight the search string 157. Then the search beads 170 can be navigated as discussed above until the desired one is highlighted (or, equivalently, rolled to the foreground). When the desired search bead is highlighted, the beads making up thecriteria defining the selected search bead appear in the search/profile bin 140. To apply the criteria defined in the selected search bead, the user may immediately hit the start button 228 or the user can move to the search bin 140 and edit the searchcriteria by deleting them or adding new ones just as in the construction of a search. When the save button 222 is pressed in this context, however, the user is permitted to save it back to the original saved search bead or to a new one, allowing savedsearches to be used as templates for new searches.
Note that, a search bead can be added to the search bin 140 along with new criteria before invoking using the start button 228 in the construction of a new search just like any other criterion bead. This, in effect, makes the saved search atemplate or starting point for searches, so a particular user does not have to enter the same data each time she/he performs a search.
In the search mode, the user can also select beads from a profile string 156 to add to a search. Each bead of the profile string 156 contains a profile of a user. In an embodiment, the profile can be an implicit profile, an explicit profile, ora combination of these. The beads representing the profile may be added to a search to cause the results to be sorted by the preferences they embody. Referring now also to FIG. 6, implicit 158 and explicit 159 profiles can be displayed and accessedseparately. In this embodiment, the profile beads are used independently, but added to the search bin 140 just as other criteria beads.
Referring now to FIG. 7, to create or edit a profile, the profile button 218 may be pressed at any time to invoke the profile mode. This brings up the profile mode UI element. The profile mode UI works the same way the search mode UI works,except that the profile bin 140' is a partitioned container with a "like" partition 164, where beads for criteria that are favored are placed, and dislike partition 165, where beads for criteria that are disfavored are placed. The location of the beadsin the respective partition indicates the action created by the profile with respect to the beads. That is, a criterion, such as movies, in the dislike partition 165 will cause the profile to negatively weight negatively records matching the criterion. Similarly, a criterion in the like partition 164 will cause the profile to weight positively records matching the criterion. Note that the profile's name appears at 169 along with a label indicating the user is in profile mode. Note also that the beadscan be given a score through a dialog box or by pressing a specialized star key multiple times to give the item a rating. For example, five stars could indicate an item that is highly favored and one star, an item that is strongly disfavored. To viewthe rating, the beads can be tagged with star icons, their colors can be changed to indicate the rating, their position in the bin can indicate the degree of the favored or disfavored rating, or their size can be changed. Thus, the user viewing theprofile bin 140' would know at a glance the effect of each bead on the profile. The profile can be saved when the save button 222 is pressed. To select an existing profile for editing, the user has only to select the appropriate bead and press the GObutton 214. To permit the deletion of a profile, the profile bead may be selected and the delete button 224 pressed.
To filter current channels through a profile, the user, in the profile mode, may select the profile and press the start button 228. In this way, the profile mode also acts as an advisor and the profile mode may be called a profile/advisor mode. Note that the implicit and explicit profiles can be reset using the reset key 226. Implicit profiles may be editable or non-editable, depending on the system used to store information. If the machine learning device used stores criteria-basedinferences, then these could be edited exactly as discussed with respect to the explicit profiles. Alternatively, implicit profiles could be edited through the use of personality beads that weight different program according to a personality templaterepresented by the personality bead. For example, beads like "movie nut" to emphasize movies and movie-related material, "quiet-type" to de-emphasize action/thriller sorts of content, or "overworked" to emphasize intellectually undemanding material,could be provided to tilt the implicit profile one way or another. The same personality beads could be used in the search mode to make their actions effective only during a search, or incorporated in a saved search, or even incorporated in implicitprofiles.
Referring to FIG. 8, the search/profile mode can be implemented in a number of different ways in accord with the following ideas: 1. the use of three-dimensional pictures organizes the information in a way that reduces clutter and makes relevantinformation and controls handy (for example, much of the information that may be scrolled into view is shown partly hidden in the background, but it can be seen to suggest its existence and how to display it, for example beads on the string that arebehind the front column of beads); 2. the more relevant information, depending on context, is shown in the foreground (for example, the currently selected items are shown in the foreground); and 3. temporarily hidden information (but which isavailable) retreats into the background (for example, the way additional beads on the string can be hidden in the background). For example, the embodiment of FIG. 8 stems from the same design principles. In this embodiment, instead of the bead stringsscrolling left and right in a straight line (like a cylinder), they roll about a vertical axis like a carousel. This way, there is one string that always at the center and closest to the observer in the 3-space scene. Here, the keyword string isselected since it is the one that is closest in the scene to the camera (user) vantage. Also, the search bin 140 is replaced with a string 140C.
Note that to exploit three-dimensional scene as a vehicle for partitioning or organizing information, the dimensions should be rendered in such a way that they are independent. Distributing variation along axes independently usually makes thescene asymmetric. Symmetrical 3-D forfeits the independence of the variegation by constraining the changes in appearance along one axis to be the same as the changes in the appearance along another axis. Thus, symmetry is hostile to the use of thethree-dimensional scene as a device for organizing data visually. Of course, this does not mean symmetrical features always destroy the capacity for three-dimensional scenes to organize information effectively. For example, the bead tokens themselvesare symmetrical. Also, even though the successive series of bead strings look the same, an example of translational symmetry, each successive bead string represents a different category. So on some level, symmetry may exist to provide visual clarity,but on another level, there is variegation that provides differentiation along the (visually) symmetric dimension.
Referring now to FIG. 9, still using the carousel concept, the bead strings are more stylized in this example. Only a few beads are visible in the front of each string, but the dominant bead on each string is a great deal more pronounced. Againthe central string 180 is the selected one. Here the keyword string's selection is indicated by its size and bold lines. The search bin 140 is replaced by a string 140B. This scene geometry is preferred because it is uncluttered and would be easier tosee superimposed on a broadcast image. It is clear how this geometry could be applied to the other contexts discussed with respect to
Referring to FIG. 10, in still another example, the beads are replaced with boxes 410 sitting on shelves 420. The selected shelf 430 extends toward the user. The search bin 140 is replaced by a hole 460 into which selected boxes 330 areinserted. Here, the shelves rotate around an axis that is horizontal and in the plane of the page. Shelves and boxes further from the forward selected position (at 430) retreat into the background, for example, as shown at 320. A particular box on theselected shelf can be shown as selected by suitable highlighting, growing the box, bolding it, etc.
Referring to FIG. 11, in still another example, signposts are used to represent the set of available categories, profiles, etc. Each sign represents a category or the set of profiles. Most of the signs 480, 485, and 450 are tilted at an anglewith respect to the point of view, except for the selected one or ones 460, and 475. When a sign is selected, the selections available within the category are exposed as tags 470 and 472 on the left side of the sign. Those criteria or profiles that areselected to form part of a search (or criteria selected for a profile) are shown on the right side of the sign, for example as shown at 460 and 462. The name of the current search being constructed, if it is a search or the name of the profile if it isa profile under construction, appears at the bottom, for example, at 440. Thus, the array of selected criteria on the right of the signpost correspond to the contents of the search bin 140 in the bead embodiments discussed above. Navigation of the FIG.10 and 11 embodiments is analogous to navigation in the bead embodiments. Pressing the vertical cursor keys 232 causes the currently selected sign to swing into "open" position as is sign 490 in FIG. 11. Pressing the horizontal cursor keys 212 causesthe tags 460/470 to be highlighted as indicated by bolding, color change, size. change, etc. Tag 471 is shown as selected by a size and bolding change. Tags can be added and removed from the right side of the sign post by selecting them. Selecting atag toggles its position between sides of the signpost. Once criteria are saved as a search, they can be made available by selecting them from their own "search" sign (not shown). Any criteria not visible on the signpost can be brought into view byscrolling vertically. New signs will appear at the bottom and top, respectively. New tags will appear at the left and right extremes.
A keyword list that may be used in all of the above embodiments can be generated dynamically, rather than from a generic template. Typically, keywords are entered by the user. However, the keyword list may also be culled from common terms inselections made by the user or to reflect the user's category choices in building queries.
Referring to FIG. 12, once a search is invoked, the user sees the overview world. This view is invoked by pressing the start button 228 in search mode. Alternatively, an overview button may be provided on the remote control 210. The overviewmode shows a visual representation that indicates pictorially, the relevance of each returned record by some metaphor for hierarchy. Each record returned by the search is displayed as a hexagonal tile in FIG. 12, For example, as shown in FIG. 12, theapparent proximity of the results relative to the viewer corresponds to the goodness of the fit between the search criteria and the record. Also, the record 510 is shown with bold lines, large overall dimensions, and bold text compared to the record535. The more relevant results are located toward the center of the display as well. There is an element that indicates the criteria from which the current result display was generated at 530. The result tiles 510, 525, etc. can be navigated using thecursor keys 212, 232. Selecting a tile opens it up to reveal further information about the selected item. A tile representing a program "Here's Kitty" is shown selected at 510. Thus, additional information is shown for this selection.
Using the cursor keys, the user can navigate to the criteria element 510. In one embodiment consistent with FIG. 12, the vertical and horizontal cursor keys 212, 232, are used to move the cursor about the X-Y projection plane (the plane of thescreen, where the Z-axis is pictorial axis leading from foreground to background) so that any icon can be accessed using the two axes of movement. In an alternative embodiment, the cursor keys 212, 232 are used to move along the Z-axis so that thebackground tiles come closer to the user and more information becomes visible when they do. In this embodiment, Z-axis control can be toggled on and off or one set of cursor keys, say the vertical cursor keys 214, may be used to move forward andbackward along the Z-axis the other set, among the current foreground set of tiles. When going in the foreground-to-background direction, the current foreground set of tiles disappears as if it moved behind the viewer.
Selecting the criteria element, by pressing the GO button, 214 causes the display to change back to the search mode with the current search (the one indicated by the criteria element) loaded into the search bin 140 (or the corresponding elementfor the other embodiments). This permits the search to be edited easily.
Referring now to FIG. 13, the results are displayed in a fashion similar to that of FIG. 12, except that the third dimension displacement element is not applied. That is, the less relevant records are further from the center and less bold, butthey do not appear to recede into the background as in the FIG. 12 embodiment. Other features are essentially the same as that of the FIG. 12 embodiment.
Referring now to FIG. 14, the results of a search are organized around substantially concentric rings 605. Each record appears as a bead or token 610, 620, 630. The rings 605 are intended to give the appearance of a tunnel going back away fromthe viewer. The horizontal cursor keys 232 may be used to rotate the currently selected token (Token 605 is the selected token in FIG. 14). The vertical cursor keys 212 may be used to move along the Z-axis, that is, move through the tunnel bringing thebackground rings into the foreground. As the rings 605 move forward (the viewer advances along the Z-axis), the tokens 610, 620, 630, come closer to the viewer and get bigger. As they get bigger, more information may be revealed so that, for example,the title gives way to a summary, which gives way to a detailed description. Alternatively, other media types may be invoked, such as audio, video, screen caps (thumbnails), etc. These are applicable to all the embodiments described herein.
Here, as in the earlier embodiments, the selection element 554 provides a visual reminder of the selection criteria that produced the current result display and a mechanism for moving back to the relevant search mode to edit the criteria. Again,suitable navigation keys can be provided to allow for fast access to any of these features. Each ring may be associated with a match-quality level that may be shown on the screen as at 566.
Referring to FIG. 15, this embodiment of an overview world scene is similar to that of FIG. 14, except that the tokens are organized around a spiral 666 rather than rings. This arrangement is essentially one-dimensional so that only one set ofcursor keys needs to be used to navigate it. Navigation may or may not be attended by movement along the Z-axis, as preferred.
Referring to FIG. 16A, a purely text embodiment makes use of the three-space visualization to separate the different portions of the display. The diagram shows the definition of the three planes and axes. Referring now also to FIG. 16B, in theUI represents categories 703 distributed along the Y-axis and the category selections 701 broken out in the X-Y plane and distributed along X-axis. Time 702 is shown along the Z-axis. The user profile 706 is shown in the Y-Z plane. The search title705 and its elements 704 are shown in the X-Z plane. Selected items are shown in brackets. The role of the search bin 140 is played by the xz plane as shown at 704 and 705. Referring now also to FIG. 17, the results of searches may be represented astext icons in a three-dimensional landscape scene. The foreground title is the most relevant as indicated by the relevancy scale 814 in the Y-Z plane. The less relevant results 802, 803 appear in order of relevancy progressively along the Z-axis awayfrom the viewer. The brackets 817 around the most relevant record indicate that this record is currently selected. A selected record may reveal detailed information about the record, for example as shown at 804. The details may include a thumbnailpicture or video. The details may include a thumbnail picture or video (not shown). The revealing of further detail, the zoomed-in state, can be invoked by a separate operation so that selection does not necessarily cause the display of additionalinformation about the selected item. This applies to all embodiments. The cursor keys may be used to scroll back toward the less relevant records and to highlight each record in turn.
In each of the above embodiments, one or more of the categories may actually be constructed of words or other symbols, for example, the keyword category described above. Keywords could be every conceivable word in the dictionary, which wouldmake selection of keywords difficult without a keyboard (physical keyboard or on-screen equivalent). Keyboards are tedious and it is preferred if keywords can simply be selected from, for example, a category string as discussed above.
Such a keyword category may be constructed using data from various sources to cull from the vast number of alternatives, those words that would be useful in a keyword selection list. The words can be extracted from the descriptions of chosenrecords and/or from records returned by the queries based on frequency of occurrence or a variation thereof.
Referring to FIG. 18, a user accesses the records of the database directly or by searching. Directly accessing records of the database could correspond, for example, to the browsing and selection of a record by a user. Searching may beperformed as discussed above. The user does one or the other and the path is selected in step S150. If a search is performed (step S100) records may be a word list is constructed from the search results in step S115. Some or all words from the titles,descriptions, contents of the records etc. could be culled from the search results depending on the capacity of the system and the desires of the designer. Less relevant words, based on grammatical parsing, could be filtered out of the list. Forexample, the list could be formed from only direct objects and subjects from sentences in the description and title words. Once the list is formed, the most common words in the list may be identified (S120) and ranked (S125) based on frequency ofoccurrence and significance (e.g., title words are more significant the words from the description or the content of the record itself). Other criteria may be used for selecting and ranking the words added to the list, for example, the goodness of fitbetween the search criteria and the retrieved records. The above are mere suggestions. The criteria used would depend on the type of database accessed. For example, some records may contain many different specialized fields such as assignee, inventor,and filing date of a patent, that characterize the records that provide significance information explicitly. The common words that remain at the top of the list in terms of significance and frequency become part of the list along with their respectiveranking data and the process is repeated each time searches are made. Repeated searches may build the list, but the list will always remain sorted with the most important items at the top. Using the user interface designs described above, the mostimportant keywords will always appear on the screen and the least important ones will be available by scrolling, or rolling, the bead string (or other corresponding element). In this way the interface remains uncluttered while still providing access toa large inventory of keywords.
If the user chooses to simply select records without searching, the word list can be formed from multiple selections and common words culled from this list in a manner similar to that for searches. In step S110, one or more records are selectedby the user. Step S110 can be reached directly without searching or by going through the steps S100-S130 first and then through S150 again to arrive at S110 to choose one or more records from the search results. In step S135, the user adds words fromthe selected record or records to the word list. To identify frequency of hits data on descriptors, it desirable to have multiple records, so each selection is added to a single list and the frequency data derived from the combined list, which coversmultiple selection iterations. Alternatively, if a large number of records are selected at once, frequency data can be obtained from these selections. The addition of words to the list may involve the same filtering and sorting steps discussed abovewith respect to the words culled from the search results. In step S140, words with a low frequency of hits may be filtered out of the list. In step S145, all the terms are ranked according to the various criteria discussed above. Note that the wordlists derived from retrieved records from a search and those derived from selected records can be combined in a single list.
Preferably, the keyword list should be editable by the user in the same fashion as described in detail with respect to the editing of profiles elsewhere in the specification. To construct a keyword list based on frequency of use data, the systemcould start with no keywords at all. Then, each time the user enters a query, the returned results could be scanned for common terms. The titles, descriptions, or any other data could be scanned and those terms that occur with some degree of frequencycould be stored in a keyword list. The keywords in the list could each be ranked based on frequency or frequency weighted by the context in which the keyword appeared. For example, a keyword in a title might receive a lower rank than a keyword in adescription or a keyword that is a direct object or subject in a grammatical parsing of a sentence in a description might receive a higher ranking than indirect objects, etc. Instead of extracting keywords from the returned records of a search, thekeywords could be extracted from only the records selected for use. For example, only programs that are chosen for viewing or recording are actually used to form the. keyword list in the manner described. Alternatively both selections and returns ofqueries could be used, but the keywords in the selected records could be weighted more strongly than keywords in other returned records. This shorter list can then be ranked using the same or similar method as used to from would be awkward.
Where the strings represent criteria, the ranking of criteria in each category may correspond to the frequency with which the criteria are used by the user in constructing queries. So, for example, if the user's searches always include thedaytime time range, the bead or beads corresponding to this time range would be ranked higher. Alternatively, the criteria may be ranked according to selected records, rather than by all the records (or at least the most highly ranked ones) returned bysearching.
Note that many of the above techniques can be used with other types of user interfaces and are not limited to the designs described, which are preferred embodiments. So, for example, the keyword list could be used with a purely textual computerinterface.
In keeping with the design philosophy around which the user interface is developed, it is desired that only a small number of highly relevant criteria be visible on the screen at a given time. Across all categories, the frequency with which theuser selects a given criterion is preferably be used to rank the criteria in order of importance. Thus, although a television database describes content on more than 100 channels, if only 5 of those channels are routinely entered in search queries,those 5 channels should be, by default, the ones displayed in the most foreground or prominent position on the display. The other criteria are still accessible, but the interface does not innocently provide the user with equal access to all. That isone of the basic ideas that leads to simple interfaces.
Note that prioritization of the search criteria categories may also be made editable by the user. For example, if a channel has fallen temporarily into disfavor judging by frequency of use during the Olympics, the user may provided a mechanismto revive it. This may be any of various techniques, for example invoking a menu option to resort the list representing the ranking of selected category's elements, and does not need to be described in detail.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit oressential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and allchanges which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
* * * * *
Recently Added Patents
Semiconductor device and method for manufacturing same
Methods and apparatus for dynamic identification (ID) assignment in wireless networks
Fusing device to prevent overheating of a heating member and image forming apparatus having the same
Crystalline form of zofenopril calcium
Gemstone
Mobile application for calendar sharing and scheduling
System and method for determining payroll related insurance premiums
Randomly Featured Patents
Keyboard device for electronic keyboard musical instrument
Detection and use of acoustic signal quality indicators
Shower curtain lock
Compositions stabilized with 5-thio-substituted benzotriazole UV-absorbers
Pool cover
Parallel-to-serial converter
Process for preparing perfluorononanes
Sealing means for chemical reactor
Cell death regulators
Limited slip differential and manufacturing method for limited slip differential
|
__label__pos
| 0.668727 |
AnsweredAssumed Answered
Make a view only field not drop down?
Question asked by LeeMoreau on Mar 7, 2015
Latest reply on Mar 7, 2015 by philmodjunk
Title
Make a view only field not drop down?
Post
Hi all, I have a field that is a drop down, or pop-up menu as FileMaker seems to call it. In security, I have a users profile set to allow view access only as I don't want them to be able to change the values on these fields from what the system set, but I do want managers to be able to manually change them.
This works in that if they try to change it, it says they don't have permission, but is there any way to prevent them from being able to use the drop down at all? Basically like if I had field entry "browse mode" unchecked? I know there's workarounds like making it just a text field and then putting the drop down somewhere else for the people that need to change it but just thought I'd check.
Outcomes
|
__label__pos
| 0.534039 |
箱线图(boxplot)介绍
箱线图(Boxplot)也称箱须图(Box-whisker Plot),是利用数据中的五个统计量:最小值、第一四分位数、中位数、第三四分位数与最大值来描述数据的一种方法。它也可以粗略地看出数据是否具有有对称性,分布的离散程度等信息;特别适用于对几个样本的比较。
A boxplot is a way of summarizing a set of data measured on an interval scale. It is often used in exploratory data analysis. It is a type of graph which is used to show the shape of the distribution, its central value, and variability. The picture produced consists of the most extreme values in the data set (maximum and minimum values), the lower and upper quartiles, and the median.
可以通过箱线图的绘制过程来了解箱线图的意义:
1. 绘制数轴
2. 计算上四分位数(Q3),中位数,下四分位数(Q1)。
3. 计算上四分位数和下四分位数之间的差值(Q3-Q1),即四分位数差(IQR,interquartile range)。
4. 绘制箱线图的矩形,上限为上四分位数,下限为下四分位数。在矩形内部中位数的位置画一条横线(中位线)。
5. 在Q3+1.5IQR和Q1-1.5IQR处画两条与中位线一样的线段,这两条线段为异常值截断点,称为内限;在Q3+3IQR和Q1-3IQR处画两条线段,称为外限。处于内限以外位置的点所表示的数据都是异常值(outliers),其中在内限与外限之间的异常值为温和的异常值(mild outliers),在外限以外的为极端的异常值(extreme outliers)。(注意:统计软件绘制的箱线图一般都没有标出内限和外限。)
6. 在非异常值的数据中,最靠近上边缘和下边缘(即内限)的两个数值处,画横线,作为箱线图的触须。
7. 从矩形的两端向外各画一条线段直到不是异常值的最远点(即上一步的触须),表示该批数据正常值的分布区间。
8. 温和的异常值(即处于1.5倍-3倍四分位数差之间的异常值)用空心点表示;极端的异常值(即超出四分位数差3倍距离的异常值)用实心点(也可以用星号*)表示。
附上一张图以便利于理解:
箱线图(boxplot)介绍
上图中:最小值(min)=0.5;下四分位数(Q1)=7;中位数(Med)=8.5;上四分位数(Q3)=9;最大值(max)=10;平均值=8;四分位数差(interquartile range,四分位间距)=Q3 − Q1=2。
下面额外补充几个箱线图,并作简单图示:
图例1:
箱线图(boxplot)介绍
图例2:
箱线图(boxplot)介绍
图例3:
箱线图(boxplot)介绍
箱线图美中不足之处在于它不能提供关于数据分布偏态和尾重程度的精确度量;对于批量较大的数据集,箱线图反映的形状信息更加模糊;用中位数代表总体平均水平有一定的局限性等等。所以,应用箱线图最好结合其它描述统计工具如均值、标准差、偏度、分布函数等来描述数据集的分布形状
更多关于箱线图(boxplot)的介绍请阅读:
什么是箱线图 , 箱线图(wiki) , 箱形图(wikipedia) ,Box plot ,Box Plot ,Boxplot ,Box Plots
本文来自:http://yixf.name/2011/02/13/%E7%AE%B1%E7%BA%BF%E5%9B%BE/ (有部分删改)
• 本文由 整理发表
• 网站部分文章源自互联网,若未正确标注来源,请联系管理员更新。文章转载,请务必保留本文链接
您必须才能发表评论!
评论:2 其中:访客 0 博主 0 引用 2
|
__label__pos
| 0.686459 |
E
ChatBot cancel
smart_toy
Hello, I'm VAI. Are you new to Vease?
Example Image ChatBot cancel
Hello, I'm VAI. Are you new to Vease?
AI’s SaaS Revolution: Workforce Impact & Emerging Opportunities
Do More
Friends, let’s take some time to discuss the evolving landscape of work within the software-as-a-service (SaaS) sector. As an entrepreneur with a keen interest in the interplay between technology and society, I find it essential to delve into the way artificial intelligence (AI) is altering the workforce dynamics in the SaaS realm. In this piece, we’ll take a closer look at the effects of AI on jobs and employment, and what it entails for the future of work.
AI harbors the extraordinary capacity to transform our working methods. It can automate monotonous or routine tasks, enhance productivity and efficiency, and offer valuable insights and analytics. However, with the increasing prevalence of AI, concerns about job displacement and the workforce’s wellbeing are emerging.
A key apprehension regarding AI in the SaaS sector lies in its capacity to displace human labor. AI’s ability to automate numerous tasks also necessitates skilled personnel to create, maintain, and manage these systems. This implies that specific job categories might be susceptible to elimination or transformation due to AI. For instance, customer service positions could be supplanted by chatbots, while data entry or processing tasks might be automated.
Nonetheless, it’s vital to remember that AI, despite potentially eliminating certain jobs, can also generate new employment opportunities and industries. As AI becomes more advanced, the demand for skilled professionals capable of developing and overseeing AI-driven systems will surge. Furthermore, AI can give rise to novel industries and prospects, like data analytics or virtual assistant services.
What implications does this have for the future of work in SaaS? First and foremost, we must acknowledge the transformative power of AI on the workforce and proactively address these shifts. This involves investing in educational and training initiatives to equip workers with the necessary skills and expertise demanded by an AI-centric economy. Moreover, it requires reevaluating our work approach, such as examining the potential for flexible or remote work options.
To sum up, the future of work in the SaaS sector is being redefined by AI, and it’s our responsibility to manage these alterations conscientiously. By allocating resources to education and training programs and reassessing our work approach, we can maximize AI’s advantages without undermining the workforce’s needs and welfare.
Frequently Asked Questions:
Q: What function does AI serve in the SaaS sector?
A: AI holds a prominent role in the SaaS sector, as it can automate tasks, elevate productivity and efficiency, and deliver valuable insights and analytics. However, the growing presence of AI also raises concerns about job displacement and its impact on the workforce.
Q: Will AI eradicate jobs in the SaaS sector?
A: While AI has the potential to eliminate specific job categories in the SaaS sector, it can also create new employment opportunities and industries. As AI advances, there will be an increasing demand for skilled professionals who can develop and manage AI-powered systems.
Q: How can we brace ourselves for AI’s impact on the workforce in the SaaS sector?
A: We can prepare for AI’s influence on the workforce in the SaaS sector by investing in educational and training programs that ready workers for the skills and expertise demanded by an AI-focused economy. In addition, we can reassess our work approach, such as considering the potential for flexible or remote work arrangements.
Leave a Comment
Your email address will not be published. Required fields are marked *
|
__label__pos
| 0.977482 |
Главная
Математика 4 класс Г.В. Дорофеев, Т.Н. Миракова, Т.Б. Бука
Математика 4 класс Г.В. Дорофеев, Т.Н. Миракова, Т.Б. Бука
авторы: , , .
издательство: "Просвещение" 2015 год
Раздел:
Номер №8
Выполни вычисления.
(46 + 18) : 16 * 980 : 5;
72 * (45 : 9 * 620) + 58;
(24 * 312) : 6 * 8 : 20;
350 : 7 : 2 + (389) * 3.
Решение
(46 + 18) : 16 * 980 : 5 = 64 : 16 * 916 = 4 * 916 = 3616 = 20
72 * (45 : 9 * 620) + 58 = 72 * (5 * 620) + 58 = 72 * (3020) + 58 = 72 * 10 + 58 = 720 + 58 = 778
(24 * 312) : 6 * 8 : 20 = (7212) : 6 * 8 : 20 = 60 : 6 * 8 : 20 = 10 * 8 : 20 = 80 : 20 = 4
350 : 7 : 2 + (389) * 3 = 50 : 2 + 29 * 3 = 25 + 87 = 112
|
__label__pos
| 0.9993 |
Linear representations of algebras and groups, Lie theory, associative algebras, multilinear algebra.
learn more… | top users | synonyms (1)
2
votes
0answers
82 views
Is there any good survey on the hook length formula and related topics?
I am recently doing some research related to the hook length formula. The hook formula counts the number of Young tableaux of certain type. I find there are plenty of research already been done and ...
1
vote
1answer
173 views
Infinitely many real roots
Given a non-acyclic quiver without loops with Kac's root system associated. When do we know there are infinitely many real roots?
2
votes
1answer
83 views
$t$-analogue of the symmetric power of an additive character over $\Bbb{F}_q^*$
Let $G$ be a finite group and let $f: G \longrightarrow \Bbb{C}$ be any complex-valued function. For integers $k, n \geq 0$, an indeterminant $t$, and $x \in G$ let $f_k(x) := f \big( x^k \big)$ and ...
6
votes
0answers
157 views
On a permutation module for GL(n,q)
Let $G=GL(n,q)$ be the general linear group of degree $n$ over the $q$ element field. Let $X$ be the set of full rank $n\times r$ matrices where $1\leq r\leq n$. Then $G$ acts transitively on the ...
5
votes
1answer
103 views
Singular/Smooth locus of Schubert variety of the affine grassmannian
Let $G$ be a connected, simply connected, semisimple, complex linear algebraic group with maximal torus $T$ and affine Grassmannian $\mathcal Gr$. It is well known that $\mathcal Gr$ admits a Bruhat ...
3
votes
2answers
226 views
A problem with pointwise stabilizer subgroups of fixed-point subspaces II
Definitions: Let $W$ be a representation of a group $G$, $K$ a subgroup of $G$, and $X$ a subspace of $W$. Let the fixed-point subspace $W^{K}:=\{w \in W \ \vert \ kw=w \ , \forall k \in K \}$. Let ...
0
votes
1answer
86 views
A problem with pointwise stabilizer subgroups of fixed-point subspaces I
Definitions: Let $W$ be a representation of a group $G$, $K$ a subgroup of $G$, and $X$ a subspace of $W$. Let the fixed-point subspace $W^{K}:=\{w \in W \ \vert \ kw=w \ , \forall k \in K \}$. Let ...
1
vote
1answer
130 views
Beilinson-Bernstein localization: $\mathfrak{g}$ action on $G$-equivariant sheaf
I have a few elementary questions related to Beilinson-Bernstein localization. Let $G$ be a semisimple algebraic group over $\mathbb{C}$ with Lie algebra $\mathfrak{g}$. Consider the setup of ...
3
votes
0answers
41 views
Intertwining Operators Associated to Simple Reflections
Let $G$ be a quasi-split reductive group, over a local field, with a Borel subgroup $B=T\cdot N$ and the associated Weyl group $W$. Given a family of induced representations $\pi_s = Ind_B^G \chi\cdot ...
6
votes
2answers
192 views
Is an $\mathfrak{sl}_2$-triple determined up to Lie algebra automorphism by the adjoint representation?
Let $\mathfrak{g}$ be a finite-dimensional complex semisimple Lie algebra, and let $\phi_1:\mathfrak{sl}_2(\mathbb{C})\rightarrow\mathfrak{g}$ and ...
2
votes
0answers
83 views
References for the bicategory of ring-bimodule pairs
One of the standard examples of a bicategory is the bicategory of rings (with bimodules as 1-morphisms), which is sometimes denoted $\operatorname{Bim}$ and in other sources $\operatorname{Ring}$ (or ...
0
votes
0answers
61 views
Differences between primitive central idempotents and primitive orthogonal idempotents
If we have a complete set of primitive orthogonal idempotents of an algebra $A$, then we can obtain simple modules, indecomposable projective modules, indecomposable injective modules of $A$. If we ...
4
votes
1answer
231 views
Is there a nonabelian finite simple group with Grothendieck ring of multiplicity one?
Let $G$ be a finite group. It admits finitely many irreducible complex representations $H_1, \dots, H_r$ which generate, for $\oplus$ and $\otimes$, the Grothendieck ring $\mathcal{G}(G)$ of $G$ (also ...
4
votes
1answer
146 views
Represent matrix immanants using Schur functions
For each irreducible character $\chi^\lambda$ of the symmetric group $S_n$, the immanant of an $n\times n$ square matrix $A$ is defined as \begin{equation*} d_\lambda(A) := \sum_{\sigma \in S_n} ...
1
vote
0answers
47 views
$\Gamma$ cohomology of principal series
Let $G$ be a noncompact connected real semisimple Lie group with finited center. Let $\Gamma$ be a cocompact discrete subgroup of $G$, and let $P$ be a parabolique subgroup with Langlands ...
3
votes
1answer
92 views
Prescribed spherical representations, symplectic group $Sp(n)$
An irreducible representation $(\pi,V_\pi)$ of a compact group $G$ is called spherical with respect to the pair $(G,K)$, $K$ is closed subgroup of $G$, if $V_\pi$ has a non-zero vector invariant by ...
3
votes
3answers
164 views
how to find explicitly given component in a regular representation
Given a finite group $G$ and its irreducible representation $\pi$ I want to find explicit elements of the group algebra $\mathbb{C}[G]$ lying in components of the left regular representation ...
17
votes
1answer
240 views
On a drawing in Dixmier's Enveloping Algebras
This image comes from Dixmier's book, 'Enveloping Algebras' ('Algèbres enveloppantes'). Dixmier writes that The curves shown on p. XIV have their origin in the study of U(sl(3)). They are ...
2
votes
1answer
160 views
Matrix Elements of Real Representations
I asked this question over at Math.StackExchange and despite having had a bounty on it I did not receive an answer. Suppose that $G$ is a finite group and we have a unitary irreducible representation ...
13
votes
2answers
612 views
Are there any natural differential operators besides $d$?
Let $\lambda = (\lambda_1, \ldots, \lambda_r)$ and $\mu = (\mu_1, \ldots, \mu_r)$ be partitions such that $\mu_j = \lambda_j +1$ for one index $j$ and $\mu_i = \lambda_i$ for all other $i$. Then there ...
5
votes
0answers
245 views
Why Jacobson, but not the left (right) maximals individually?
I firstly asked the following question on MathStackExchange, but I did not receive any responses, but a short comment. So, I decided to post it here, hoping to receive answers from experts. ...
7
votes
1answer
302 views
On a theorem of Kazhdan
Let $G=GL_n(F)$, where $F$ is a p-adic local field, $U$ be the upper triangular maximal unipotent group, and $\theta$ a character of $U$. Then a Theorem of Kazhdan says that for any irreducible smooth ...
0
votes
0answers
42 views
Framed braids and local systems
Let me start by admitting that my question is going to be somewhat vague. But hopefully it is one of these vague questions that can be immediately answered by an expert in the appropriate area. ...
2
votes
0answers
58 views
What is the name for the subring of the Grothendieck ring of a bialgebra spanned by one-dimensional representations?
Let $B$ be a finite dimensional bialgebra over a field $\Bbbk$. Let $\mathcal G_0(B)$ be the ring whose underlying additive group is generated by isomorphism classes $[V]$ of finite dimensional ...
0
votes
0answers
85 views
What interesting things do automorphism groups of trees act on?
Let $T$ be a rooted tree. We can build a poset $P(T)$ whose elements are the vertices of $T$ and whose covering relations are the edges of $T$. Let $A$ be the automorphism group of $P(T)$. The group ...
3
votes
1answer
125 views
About the second largest adjacency eigenvalue of Abelian Cayley graphs
[Assume all groups are finite] One knows the general statement that the sum of the values of the character function on the generating set is an eigenvalue of a Cayley graph. But the above doesn't ...
5
votes
0answers
67 views
How can you see the minimal relations on a quiver from its bimodule resolution?
Suppose that you are given an algebra $KQ/I$, coming from a quiver Q, of finite global dimension. Suppose also that you know its minimal bimodule resolution over its enveloping algebra. Can you get a ...
2
votes
0answers
105 views
Decomposition of symmetric powers of reduced regular representation modulo $p$
Let $\bar{\rho}$ denote the reduced regular representation of $\mathbb{Z}/p$ over a field of characteristic $p$. The representation $\mathrm{Sym}^k \bar{\rho}$ decomposes (for each $k$) as a sum of ...
2
votes
1answer
134 views
Invariant subspaces of an $F_2$-representation of the affine linear group of dimension 1
Let $p$ be an odd prime (large if it matters) and let $G= Aff(\mathbb{F}_{p^2}) \cong \mathbb{F}_{p^2} \rtimes \mathbb{F}_{p^2}^*$ be the affine linear group acting on $\mathbb{F}_{p^2}$ by $x\mapsto ...
-1
votes
1answer
190 views
Suppose that $G$ is a subgroup of $GL_n(\mathbb C)$ with finite exponent. Then is $G$ a finite group? [closed]
As title. the exponent of $G$ is the least number $n$ (if exists) such that $g^n=e$ holds for all $g\in G$ or $+\infty$.
1
vote
0answers
50 views
Computing equivariant K-theory using the amalgamted product
If I have a Lie group (or a Kac-Moody group) $G$ such that it's the amalgated product of it's proper parabolic subgroups $P_J$, i.e. $G = \text{colim} P_J$, then could I use this to compute the ...
4
votes
1answer
119 views
Motivational ideas for the Gelfand-Graev character of a finite group of Lie type
I've been studying the Gelfand-Graev character's general construction for a finite group of Lie type. I wish to discuss its particularization in a seminar for the general linear group over a finite ...
1
vote
0answers
142 views
Is there a method to simultaneously block-diagonalize a set of group matrices?
Assume that you are explicitly given the representation matrices of a group. How does one go about finding that common basis which will find the irreducible components of all of them simultaneously? ...
2
votes
1answer
161 views
What is the cohomology of the tangent bundle of a flag variety?
Let $G$ be the general linear group $\operatorname{GL}(n,\mathbb{C})$ and $P$ a parabolic subgroup with Lie algebra $\mathfrak{p}$. Consider the vector bundles $$ \mathcal{P} = G\times_P ...
3
votes
0answers
109 views
Dimension of Birman-Murakami-Wenzl Algebra
I was reading the paper Braids, Link Polynomials and A New Algebra by J. S. Birman and H. Wenzl, and I was wondering is there a combinatorial way to compute the dimension of the algebras ...
0
votes
0answers
30 views
lower central series nilradicals parabolic subalgebras
How to compute the dimension of the ideals in the lower central series of a nilradical of a parabolic subalgebra? Let $\mathfrak g$ be simple complex Lie algebra, $\mathfrak p$ a parabolic subalgebra ...
7
votes
0answers
141 views
What's the analogue of a Young symmetrizer in the Brauer algebra?
According to Schur--Weyl duality, the centralizer of $\mathrm{GL}(V)$ acting diagonally on $V^{\otimes N}$ is the group algebra of the symmetric group $\mathbb S_N$. An equivalent formulation is the ...
1
vote
0answers
77 views
Irreducible decomposition of $\Lambda^i(\mathfrak{p})$
Let $G$ be a connected semisimple real Lie group with finite center with Lie algebra $\mathfrak{g}$. Take $K$ its maximal compact subgroup of $G$, and $\mathfrak{k}$ its Lie algebra. We denote by ...
2
votes
3answers
374 views
First Explicit Irreducible Representations
Although the classification of simple Lie Algebras and their representations is fully understood, I wonder whether there is some book with exhaustive tables describing explicit irreducible ...
11
votes
0answers
220 views
Most discriminants are almost squarefree
Write, for $f(x) = x^d + a_2 x^{d-2} + \cdots + a_d\in \mathbb{Z}[x]$, $H(f) := \max(|a_i|^{\frac{1}{i}})$. Does anyone know of a reference that would allow me to show that the proportion of $f$ with ...
-2
votes
0answers
21 views
Conceptual description of the isotypical component [migrated]
This is probably rather simple but I have not found it in the literature. Consider the category $C$ of representations of a finite group $G,$ over a field $k$ of characteristic not dividing the order ...
0
votes
0answers
87 views
Reference about a formula of coroot in an affine root system
Let $\delta$ be the null of an affine root system and let $\alpha + p\delta$ be a real affine root, $p$ is an integer. It is said that $$ (\alpha + p\delta)^{\vee} = \alpha^{\vee} + ...
1
vote
2answers
334 views
Semistability in GIT
If I understand correctly, in geometric invariant theory, polystable points can be defined as those which have a closed orbit. Is it true that semistable points can be characterized as those whose ...
0
votes
0answers
53 views
Matrices over a finite field with given Jordan normal form over the algebraic closure [migrated]
Can one describe the (conjugacy classes of) square matrices over a finite field such that over the algebraic closure of this finite field their Jordan normal form consists of one Jordan block? (Such ...
2
votes
1answer
62 views
Reference that contains examples of absolutely indecomposable representations of quivers over a finite field
Is there a reference that lists/discusses examples of absolutely indecomposable representations of quivers over a finite field (absolutely indecomposable = does not decompose into a direct sum over ...
3
votes
0answers
121 views
Deligne-Lusztig and Character sheaves
Consider: $G$ - a nice group ($GL_n$) over a finite field $F$. $X$ - the flag variety. Consider a nice $G$-equivariant $l$-adic sheaf $\mathcal{M}$ on $X \times X$, equipped with Weil structure. Fix ...
6
votes
0answers
111 views
vanishing of Lie algebra cohomology with coefficients in an infinite-dimensional module
Let $G$ be a real semisimple Lie group, $K$ its maximal compact subgroup, $\mathfrak g, \mathfrak k$ the corresponding Lie algebras. Let $V$ be a locally convex, Hausdorff vector space, which is a ...
4
votes
0answers
153 views
Infinite simple p-groups with only trivial irreps in characteristic p
Is there a prime $p$ and an infinite simple $p$-group $G$ such that for any field $K$ of characteristic $p$ the only irreducible $KG$-module, whether finite or infinite dimensional, is trivial (that ...
4
votes
1answer
269 views
Frequency of a representation of SO(3)
When generalizing the basic tenets of Fourier Theory to the symmetric group $S_n$, we can define a notion of the frequency of a basis function (i.e. an irreducible representation of $S_n$). In ...
4
votes
1answer
210 views
Decomposing $(\mathbb C^n)^{\otimes m}$ as a representation of $S_n\times S_m$
$V=\mathbb C^n$ is a $\mathbb CS_n$-module, where $S_n$ is the symmetric group of degree $n$, via the representation sending a permutation to the corresponding permutation matrix. The tensor power ...
|
__label__pos
| 0.853055 |
{numbering} is about numbering document elements. Use this tag in addition to other tags specifying what should be numbered (or have its number removed). For {page-numbering} or {line-numbering}, use the respective tag instead.
learn more… | top users | synonyms
1
vote
1answer
15 views
pagenote: suppressing note number in text
How does one suppress all marks in the text? Currently I have in the Preamble: \usepackage{pagenote} \makepagenote \renewcommand*{\notedivision}{\section*{\notesname\ to chapter~\thechapter}} ...
3
votes
0answers
129 views
Incorrect page number counter in beamer using sections
I am trying to put together a beamer document put together using multiple sections where frame numbers restart at each section and I can leave certain frames at the beginning of the document and at ...
3
votes
0answers
133 views
Section with answers created automatically
My question is quite similar to this one, but I think a slightly different approach is needed. I have a document with a lot of exercices and I'd like to add aswers and hints to them in a certain ...
2
votes
0answers
80 views
How can I write numbers on the fields?
How can I write numbers on the fields?
2
votes
0answers
43 views
Automatically return list of all uses of a command, without repeats e.g. \compound using chemstyle
I'd like to automatically generate a comma separated list of all occurrences in my .tex file of a particular command. Specifically, I want to ensure the correct numbering of chemical structures using ...
2
votes
0answers
58 views
Custom page number positioning in a rotated A4 environment
I have the following code: \documentclass[landscape,english]{article} \usepackage[T1]{fontenc} \usepackage[latin9]{inputenc} \usepackage{geometry}% http://ctan.org/pkg/showframe ...
2
votes
0answers
189 views
Per-page shared counter
I’m trying to number some objects on a per-page basis with a shared counter. The following example is close to minimal. \documentclass{article} \usepackage{zref-perpage} ...
1
vote
0answers
17 views
Option thref of package ntheorem causes equation numbering problem with amsmath's split environment
See the MWE first: \documentclass{article} \usepackage{amsmath} \usepackage[amsmath, thref]{ntheorem} \begin{document} \begin{equation} \begin{split} a & b\\ c & d \end{split} ...
1
vote
0answers
22 views
biblatex 3.0+ - bibliography with different sorting schemes but unique labels
With biblatex version 2.x or earlier, one can write something like \printbibliography [category = cited, sorting = none, title = {References}] \printbibliography [notcategory = cited, sorting = ...
1
vote
0answers
19 views
Numbering chemical intermediate compounds with chemstyle
I´m currently writing my bachelor thesis in chemistry and chose to number my compounds with chemstyle. And except of one little issue that works just fine: Is it possible to number intermediate ...
1
vote
0answers
51 views
How to change the font of the numbers in a tex
I've read the below answer : Font selection in XeTeX for specific characters but I need a solution that works with latexpdf instead of xetex.
1
vote
0answers
59 views
Problem with equation numbering, resets after ten
i have a problem with my thesis, equation numbering resets after number 10. For example, in the doc after equation 3.3.10 follows 3.3.1 and not 3.3.11. Here is my preamble ...
1
vote
0answers
60 views
Missing figure numbers
I have a separate file that can show figure number properly. However, I do not get a figure number in my another file with the same code for figure. In my main file, in lines 182, 197 and so on. My ...
1
vote
0answers
38 views
Appendix not numbered- minimal document
I'm trying to put appendices in my PhD thesis, and I'd expected them to be labelled A,B, etc. However, instead it's not producing a title for the appendix at all, and it's giving me sections to my ...
1
vote
0answers
56 views
Hide Click Number in Beamer with Mac
I want to hide the length of my beamer presentation. I have already hidden the navigation bar and created a new footline that does not show the frame number (code below). However, whenever I use a Mac ...
1
vote
0answers
101 views
Creating a hidden (unlisted) section and not breaking the current subsection
The Problem In my presentation, I want to incorporate a small crash-course inside a specific subsection. This crash-course should have no own section, but should be outside the current section and ...
1
vote
0answers
158 views
Caption numbering missing in beamer class when adding subfig package
I am having a problem with the beamer class and subfig package. Once I add the subfig package (for subfloats) the numbering of all my captions disappear. Please see the following minimial working ...
1
vote
0answers
134 views
autonum, amsmath and \qedhere
I have a weird behavior of the \qedhere command while using amsmath in combination with autonum. Consider the minimal (not working) example below: In unreferenced equation with \qedhere (Theorem 2) ...
0
votes
0answers
17 views
header in phd thesis is showing wrongly on the top of another chapter in alternate pages
I am writing phd thesis, I want an additional chapter before my first chapter which should be counted as zero. The following is the code where general introduction is my chapter 0 ...
0
votes
0answers
56 views
how to change BibTeX orders citations from alphabetically to numerically
How to change BibTeX orders citations in plain style from alphabetically to numerically. For example, if I have this reference @misc{4, author = "Nobody Jr", title = "My Article", year = ...
0
votes
0answers
50 views
Numbering of head title on titlepage?
I want to combine several independent papers in a pdf-format. However, due to time constraints, I am looking for a "quick and dirty" solution. Therefore, I manually constructed a table of contents and ...
0
votes
0answers
99 views
How to set section numbering in [svmult]?
In my book I have automatically-numbered chapters and sections but when I type \chapter{CHAP 1} \section{SEC 1} \subsection{SUBSEC 1} \subsection{SUBSEC 2} I recieve a chapter with number 1, ...
0
votes
0answers
49 views
Numbering subfigures
In case I have about 100 subfigures how should I name them to be arranged in a correct way in the TeX file on converting it into PDF file. I need to arrange them in a way that a -> z is followed by ...
0
votes
0answers
112 views
Glossaries: customize number lists (page numbers) attached to each acronym
I would like to exclude the Table of Contents and Glossary itself from the page numbers that appear attached to each acronym indicating the pages where the entries have been used. If this cannot be ...
0
votes
0answers
176 views
Numbering of equations in chemfig
This might be a stupid question, but how do I number my equations in chemfig? I've drawn this rather complicated equation in chemfig, but I have no idea how to number it. I was hoping to have it ...
0
votes
0answers
67 views
eastern arabic numbers in page numbers
I am making a book using arabtex, the numbers inside the document are written in eastern arabic format. However the page numbers are in arabic format. How can I make them turn to eastern arabic?
0
votes
0answers
50 views
Choosing what to \ref from a \newcommand
I have this definition in my class (thanks to this answer): \newcommand\clause{% \immediate\write\@auxout{\string\expandafter\gdef\noexpand\csname ...
0
votes
0answers
61 views
show ref in tex
For example I have some equation \begin{equation}\label{myequation} \end{equation} In .dvi file it would be numbered, for example, (1.10). If I would like that it would marked as [myequation], how ...
0
votes
0answers
330 views
Paragraph Numbering in LaTeX and Space Increase of Indentation
Is there any easy way to do the same thing that this code does: % arara: pdflatex \documentclass[a4paper,12pt]{report} \usepackage[inner=0.75in,outer=0.65in,top=0.75in,bottom=0.75in]{geometry} ...
|
__label__pos
| 0.760853 |
Locally testable code
From Wikipedia, the free encyclopedia
Jump to navigation Jump to search
A locally testable code is a type of error-correcting code for which it can be determined if a string is a word in that code by looking at a small (frequently constant) number of bits of the string. In some situations, it is useful to know if the data is corrupted without decoding all of it so that appropriate action can be taken in response. For example, in communication, if the receiver encounters a corrupted code, it can request the data be re-sent, which could increase the accuracy of said data. Similarly, in data storage, these codes can allow for damaged data to be recovered and rewritten properly.
In contrast, locally decodable codes use a small number of bits of the codeword to probabilistically recover the original information. The fraction of errors determines how likely it is that the decoder correctly recovers the original bit; however, not all locally decodable codes are locally testable.[1]
Clearly, any valid codeword should be accepted as a codeword, but strings that are not codewords could be only one bit off, which would require many (certainly more than a constant number) probes. To account for this, testing failure is only defined if the string is off by at least a set fraction of its bits. This implies words of the code must be longer than the input strings by adding some redundancy.
Definition[edit]
To measure the distance between two strings, the Hamming distance is used
The distance of a string from a code is computed by
Relative distances are computed as a fraction of the number of bits
A code is called -local -testable if there exists a Turing machine M given random access to an input that makes at most non-adaptive queries of and satisfies the following:[2]
• For any and , . In other words, M accepts given access to any codeword of C.
• For such that , . M must reject strings -far from C at least half the time.
Limits[edit]
It remains an open question whether there are any locally testable codes of linear size, but there are several constructions that are considered "nearly linear":[3]
1. Polynomial arbitrarily close to linear; for any , .
2. Functions of the form , where is a function tending toward 0. This makes n closer to linear as k increases. For example:
• for some
• for
These have both been achieved, even with constant query complexity and a binary alphabet, such as with for any . The next nearly linear goal is linear up to a polylogarithmic factor; . Nobody has yet to come up with a linearly testable code that satisfies this constraint.[3]
Connection with probabilistically checkable proofs[edit]
Locally testable codes have a lot in common with probabilistically checkable proofs (PCPs). This should be apparent from the similarities of their construction. In both, we are given random nonadaptive queries into a large string and if we want to accept, we must with probability 1, and if not, we must accept no more than half the time. The major difference is that PCPs are interested in accepting if there exists a so that . Locally testable codes, on the other hand, accept if it is part of the code. Many things can go wrong in assuming a PCP proof encodes a locally testable code. For example, the PCP definition says nothing about invalid proofs, only invalid inputs.
Despite this difference, locally testable codes and PCPs are similar enough that frequently to construct one, a prover will construct the other along the way.[4]
Examples[edit]
Hadamard code[edit]
One of the most famous error-correcting codes, the Hadamard code, is a locally testable code. A codeword x is encoded in the Hadamard code to be the linear function (mod 2). This requires listing out the result of this function for every possible y, which requires exponentially more bits than its input. To test if a string w is a codeword of the Hadamard code, all we have to do is test if the function it encodes is linear. This means simply checking if for x and y uniformly random vectors (where denotes bitwise XOR).
It is easy to see that for any valid encoding , this equation is true, as that is the definition of a linear function. Somewhat harder, however, is showing that a string that is -far from C will have an upper bound on its error in terms of . One bound is found by the direct approach of approximating the chances of exactly one of the three probes yielding an incorrect result. Let A, B, and C be the events of , , and being incorrect. Let E be the event of exactly one of these occurring. This comes out to
This works for , but shortly after, . With additional work, it can be shown that the error is bounded by
For any given , this only has a constant chance of false positives, so we can simply check a constant number of times to get the probability below 1/2.[3]
Long code[edit]
The Long code is another code with very large blowup which is close to locally testable. Given an input (note, this takes bits to represent), the function that returns the bit of the input, , is evaluated on all possible -bit inputs , and the codeword is the concatenation of these (of length ). The way to locally test this with some errors is to pick a uniformly random input and set , but with a small chance of flipping each bit, . Accept a function as a codeword if . If is a codeword, this will accept as long as was unchanged, which happens with probability . This violates the requirement that codewords are always accepted, but may be good enough for some needs.[5]
Other locally testable codes include Reed-Muller codes (see locally decodable codes for a decoding algorithm), Reed-Solomon codes, and the short code.
References[edit]
1. ^ Kaufman, Tali; Viderman, Michael. "Locally Testable vs. Locally Decodable Codes".
2. ^ Ben-Sasson, Eli; Sudan, Madhu. "Robust Locally Testable Codes and Products of Codes" (PDF).
3. ^ a b c Goldreich, Oded. "Short Locally Testable Codes and Proofs (Survey)". CiteSeerX 10.1.1.110.2530Freely accessible.
4. ^ Cheraghchi, Mahdi. "Locally Testable Codes".
5. ^ Kol, Gillat; Raz, Ran. "Bounds on Locally Testable Codes with Unique Tests" (PDF).
|
__label__pos
| 0.963049 |
image/DecoderFactory.h
author ffxbld <[email protected]>
Mon, 07 Mar 2016 09:21:14 -0500
changeset 287299 552ef9bae23386aa976f28e0d66ed2ae92bc2194
parent 265403 d177c3dbbd154651b738bef519629cc75377dbe4
child 299274 1bc7102718a99bc960054cbad9f178df932234c8
permissions -rw-r--r--
No bug - Tagging mozilla-central 68d3781deda0d4d58ec9877862830db89669b3a5 with FIREFOX_AURORA_47_BASE a=release DONTBUILD CLOSED TREE
/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*-
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
#ifndef mozilla_image_DecoderFactory_h
#define mozilla_image_DecoderFactory_h
#include "DecoderFlags.h"
#include "mozilla/Attributes.h"
#include "mozilla/Maybe.h"
#include "mozilla/gfx/2D.h"
#include "nsCOMPtr.h"
#include "SurfaceFlags.h"
class nsACString;
namespace mozilla {
namespace image {
class Decoder;
class RasterImage;
class SourceBuffer;
/**
* The type of decoder; this is usually determined from a MIME type using
* DecoderFactory::GetDecoderType().
*/
enum class DecoderType
{
PNG,
GIF,
JPEG,
BMP,
ICO,
ICON,
UNKNOWN
};
class DecoderFactory
{
public:
/// @return the type of decoder which is appropriate for @aMimeType.
static DecoderType GetDecoderType(const char* aMimeType);
/**
* Creates and initializes a decoder for non-animated images of type @aType.
* (If the image *is* animated, only the first frame will be decoded.) The
* decoder will send notifications to @aImage.
*
* @param aType Which type of decoder to create - JPEG, PNG, etc.
* @param aImage The image will own the decoder and which should receive
* notifications as decoding progresses.
* @param aSourceBuffer The SourceBuffer which the decoder will read its data
* from.
* @param aTargetSize If not Nothing(), the target size which the image should
* be scaled to during decoding. It's an error to specify
* a target size for a decoder type which doesn't support
* downscale-during-decode.
* @param aDecoderFlags Flags specifying the behavior of this decoder.
* @param aSurfaceFlags Flags specifying the type of output this decoder
* should produce.
* @param aSampleSize The sample size requested using #-moz-samplesize (or 0
* if none).
*/
static already_AddRefed<Decoder>
CreateDecoder(DecoderType aType,
RasterImage* aImage,
SourceBuffer* aSourceBuffer,
const Maybe<gfx::IntSize>& aTargetSize,
DecoderFlags aDecoderFlags,
SurfaceFlags aSurfaceFlags,
int aSampleSize);
/**
* Creates and initializes a decoder for animated images of type @aType.
* The decoder will send notifications to @aImage.
*
* @param aType Which type of decoder to create - JPEG, PNG, etc.
* @param aImage The image will own the decoder and which should receive
* notifications as decoding progresses.
* @param aSourceBuffer The SourceBuffer which the decoder will read its data
* from.
* @param aDecoderFlags Flags specifying the behavior of this decoder.
* @param aSurfaceFlags Flags specifying the type of output this decoder
* should produce.
*/
static already_AddRefed<Decoder>
CreateAnimationDecoder(DecoderType aType,
RasterImage* aImage,
SourceBuffer* aSourceBuffer,
DecoderFlags aDecoderFlags,
SurfaceFlags aSurfaceFlags);
/**
* Creates and initializes a metadata decoder of type @aType. This decoder
* will only decode the image's header, extracting metadata like the size of
* the image. No actual image data will be decoded and no surfaces will be
* allocated. The decoder will send notifications to @aImage.
*
* @param aType Which type of decoder to create - JPEG, PNG, etc.
* @param aImage The image will own the decoder and which should receive
* notifications as decoding progresses.
* @param aSourceBuffer The SourceBuffer which the decoder will read its data
* from.
* @param aSampleSize The sample size requested using #-moz-samplesize (or 0
* if none).
*/
static already_AddRefed<Decoder>
CreateMetadataDecoder(DecoderType aType,
RasterImage* aImage,
SourceBuffer* aSourceBuffer,
int aSampleSize);
/**
* Creates and initializes an anonymous decoder (one which isn't associated
* with an Image object). Only the first frame of the image will be decoded.
*
* @param aType Which type of decoder to create - JPEG, PNG, etc.
* @param aSourceBuffer The SourceBuffer which the decoder will read its data
* from.
* @param aSurfaceFlags Flags specifying the type of output this decoder
* should produce.
*/
static already_AddRefed<Decoder>
CreateAnonymousDecoder(DecoderType aType,
SourceBuffer* aSourceBuffer,
SurfaceFlags aSurfaceFlags);
/**
* Creates and initializes an anonymous metadata decoder (one which isn't
* associated with an Image object). This decoder will only decode the image's
* header, extracting metadata like the size of the image. No actual image
* data will be decoded and no surfaces will be allocated.
*
* @param aType Which type of decoder to create - JPEG, PNG, etc.
* @param aSourceBuffer The SourceBuffer which the decoder will read its data
* from.
*/
static already_AddRefed<Decoder>
CreateAnonymousMetadataDecoder(DecoderType aType,
SourceBuffer* aSourceBuffer);
private:
virtual ~DecoderFactory() = 0;
/**
* An internal method which allocates a new decoder of the requested @aType.
*/
static already_AddRefed<Decoder> GetDecoder(DecoderType aType,
RasterImage* aImage,
bool aIsRedecode);
};
} // namespace image
} // namespace mozilla
#endif // mozilla_image_DecoderFactory_h
|
__label__pos
| 0.703377 |
how to translate benchmark performance results to flops
Lammps results do not follow the standard way of reporting performance (in flops/sec).
Is there a way to translate the results for the Lennard-Jones benchmark for example, in flops/sec?
example existing output:
Performance: 17997.357 tau/day, 41.661 timesteps/s
99.4% CPU use with 8 MPI tasks x 8 OpenMP threads
Can you provide with more info on how to interpret these results and how to translate them to flops/sec?
Lammps results do not follow the standard way of reporting performance (in
flops/sec).
i strongly disagree, that FLOPS are a meaningful descriptor for the
performance of an MD code.
what matters is how quickly a defined task is done, which is what
LAMMPS reports. it would be easy to achieve a higher FLOPS rating
while at the same time have a worse actual performance. this is
particularly true for MD codes. example: when running highly threaded
and vectorized kernels, e.g. on GPUs or xeon phi accelerators, it is
more efficient to not take advantage of newton's third law and
effectively double the number of floating point operations per time
step (and thus artificially inflate the FLOP count) to reduce the
overhead of atomic operations or waiting on locks, where with serial
or minimally threaded execution, one one rather reduce the number of
operations for more efficient processing.
Is there a way to translate the results for the Lennard-Jones benchmark for
example, in flops/sec?
no. this is a non-trivial operation. the number of floating point
operations varies due to the variations of the number of neighbors.
you have a different number of floating point operations for pairs of
atoms that are within the cutoff and those outside the cutoff. on top
of that, you have floating point operations associated with other
operations, e.g. the neighbor list builds, that are difficult to
estimate or would incur unacceptable overhead if collected/computed.
example existing output:
Performance: 17997.357 tau/day, 41.661 timesteps/s
99.4% CPU use with 8 MPI tasks x 8 OpenMP threads
Can you provide with more info on how to interpret these results and how to
http://lammps.sandia.gov/doc/Section_start.html#lammps-screen-output
translate them to flops/sec?
as stated above, determining the number of FLOPS is difficult to do
unless one would accept a lot of unwanted overhead.
please also note, that FLOPS/s is redundant, as FLOPS is an
abbreviation for "floating point operations per second"; so it should
be either FLOPS or FLOP/s.
if you want to have a handle on the number of floating point (and
SSE/AVX) operations (and lots of other relevant performance metrics)
occurring during an MD run (or any executable for that matter), your
best bet are reading the performance counters embedded into your CPU.
for example using the "perf" tool
https://perf.wiki.kernel.org/index.php/Main_Page
axel.
I would go one step further and argue that FLOPS is only a sensible metric for hardware, not software. The most “performant” software in terms of FLOPS would run an infinite loop with some floating point operations inside and nothing else, but that would hardly be worth running.
The inner loop of the LJ potential does about 25 flops per
pairwise interaction. And with newton on, each IJ is
computed only once, not twice.
So with that and the number of neighbors per atom,
you can get a flop rate.
For any other pair style in LAMMPS, you would
have to hand-count the # of flops per pairwise
interaction.
Steve
|
__label__pos
| 0.71396 |
Bryan O'Sullivan avatar Bryan O'Sullivan committed bec0b08
Simplify and centralize buffer overflow handling.
Comments (0)
Files changed (4)
Data/Text/ICU/Char.hsc
import Data.Word (Word8)
import Foreign.C.String (CString, peekCStringLen, withCString)
import Foreign.C.Types (CInt)
-import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Ptr (Ptr)
import System.IO.Unsafe (unsafePerformIO)
charName' choice c = fillString $ u_charName (fromIntegral (ord c)) choice
fillString :: (CString -> Int32 -> Ptr UErrorCode -> IO Int32) -> String
-fillString act = unsafePerformIO $ loop 128
- where
- loop !n = do
- ret <- allocaBytes n $ \ptr -> do
- ret <- handleOverflowError $ act ptr (fromIntegral n)
- case ret of
- Left overflow -> return (Left overflow)
- Right r -> Right `fmap` peekCStringLen (ptr,fromIntegral r)
- either (loop . fromIntegral) return ret
+fillString act = unsafePerformIO $
+ handleOverflowError 128 act (curry peekCStringLen)
type UBlockCode = CInt
type UCharDirection = CInt
Data/Text/ICU/Error/Internal.hsc
) where
import Control.Exception (Exception, throwIO)
+import Data.Function (fix)
import Foreign.Ptr (Ptr)
import Foreign.Marshal.Alloc (alloca)
import Foreign.Marshal.Utils (with)
+import Foreign.Marshal.Array (allocaArray)
import Data.Int (Int32)
import Data.Typeable (Typeable)
import Foreign.C.String (CString, peekCString)
throwOnError =<< peek errPtr
return ret
-handleOverflowError :: (Ptr UErrorCode -> IO a) -> IO (Either a a)
-{-# INLINE handleOverflowError #-}
-handleOverflowError action =
- with 0 $ \uerrPtr -> do
- ret <- action uerrPtr
+-- | Deal with ICU functions that report a buffer overflow error if we
+-- give them an insufficiently large buffer. Our first call will
+-- report a buffer overflow, in which case we allocate a correctly
+-- sized buffer and try again.
+handleOverflowError :: (Storable a) =>
+ Int
+ -- ^ Initial guess at buffer size.
+ -> (Ptr a -> Int32 -> Ptr UErrorCode -> IO Int32)
+ -- ^ Function that retrieves data.
+ -> (Ptr a -> Int -> IO b)
+ -- ^ Function that fills destination buffer if no
+ -- overflow occurred.
+ -> IO b
+handleOverflowError guess fill retrieve =
+ alloca $ \uerrPtr -> flip fix guess $ \loop n ->
+ (either (loop . fromIntegral) return =<<) . allocaArray n $ \ptr -> do
+ poke uerrPtr 0
+ ret <- fill ptr (fromIntegral n) uerrPtr
err <- peek uerrPtr
- if err > 0
- then if err == #const U_BUFFER_OVERFLOW_ERROR
- then return (Left ret)
- else throwIO (ICUError err)
- else return (Right ret)
+ case undefined of
+ _| err == (#const U_BUFFER_OVERFLOW_ERROR)
+ -> return (Left ret)
+ | err > 0 -> throwIO (ICUError err)
+ | otherwise -> Right `fmap` retrieve ptr (fromIntegral ret)
handleParseError :: (ICUError -> Bool)
-> (Ptr UParseError -> Ptr UErrorCode -> IO a) -> IO a
Data/Text/ICU/Normalize.hsc
#include <unicode/uchar.h>
#include <unicode/unorm.h>
-import Control.Exception (throwIO)
-import Control.Monad (when)
import Data.Text (Text)
import Data.Text.Foreign (fromPtr, useAsPtr)
-import Data.Text.ICU.Error (u_BUFFER_OVERFLOW_ERROR)
-import Data.Text.ICU.Error.Internal (UErrorCode, isFailure, handleError, withError)
+import Data.Text.ICU.Error.Internal (UErrorCode, handleError, handleOverflowError)
import Data.Text.ICU.Internal (UBool, UChar, asBool, asOrdering)
import Data.Text.ICU.Normalize.Internal (UNormalizationCheckResult, toNCR)
import Data.Typeable (Typeable)
import Data.Int (Int32)
import Data.Word (Word32)
import Foreign.C.Types (CInt)
-import Foreign.Marshal.Array (allocaArray)
-import Foreign.Ptr (Ptr)
+import Foreign.Ptr (Ptr, castPtr)
import System.IO.Unsafe (unsafePerformIO)
import Prelude hiding (compare)
import Data.List (foldl')
normalize mode t = unsafePerformIO . useAsPtr t $ \sptr slen ->
let slen' = fromIntegral slen
mode' = toNM mode
- loop dlen =
- (either loop return =<<) .
- allocaArray dlen $ \dptr -> do
- (err, newLen) <- withError $
- unorm_normalize sptr slen' mode' 0 dptr (fromIntegral dlen)
- when (isFailure err && err /= u_BUFFER_OVERFLOW_ERROR) $
- throwIO err
- let newLen' = fromIntegral newLen
- if newLen' > dlen
- then return (Left newLen')
- else Right `fmap` fromPtr dptr (fromIntegral newLen')
- in loop (fromIntegral slen)
+ in handleOverflowError (fromIntegral slen)
+ (\dptr dlen -> unorm_normalize sptr slen' mode' 0 dptr (fromIntegral dlen))
+ (\dptr dlen -> fromPtr (castPtr dptr) (fromIntegral dlen))
-- | Perform an efficient check on a string, to quickly determine if
Data/Text/ICU/Text.hs
import Data.Word (Word32)
import Foreign.C.String (CString)
import Foreign.Marshal.Array (allocaArray)
-import Foreign.Ptr (Ptr)
+import Foreign.Ptr (Ptr, castPtr)
import System.IO.Unsafe (unsafePerformIO)
-- $case
caseMap :: CaseMapper -> LocaleName -> Text -> Text
caseMap mapFn loc s = unsafePerformIO .
withLocaleName loc $ \locale ->
- useAsPtr s $ \sptr slen -> do
- let go len = do
- ret <- allocaArray len $ \dptr -> do
- ret <- handleOverflowError $
- mapFn dptr (fromIntegral len) sptr
- (fromIntegral slen) locale
- case ret of
- Left overflow -> return (Left overflow)
- Right n -> Right `fmap` fromPtr dptr (fromIntegral n)
- either (go . fromIntegral) return ret
- go (fromIntegral slen)
+ useAsPtr s $ \sptr slen ->
+ handleOverflowError (fromIntegral slen)
+ (\dptr dlen -> mapFn dptr dlen sptr (fromIntegral slen) locale)
+ (\dptr dlen -> fromPtr (castPtr dptr) (fromIntegral dlen))
-- | Lowercase the characters in a string.
--
Tip: Filter by directory path e.g. /media app.js to search for public/media/app.js.
Tip: Use camelCasing e.g. ProjME to search for ProjectModifiedEvent.java.
Tip: Filter by extension type e.g. /repo .js to search for all .js files in the /repo directory.
Tip: Separate your search with spaces e.g. /ssh pom.xml to search for src/ssh/pom.xml.
Tip: Use ↑ and ↓ arrow keys to navigate and return to view the file.
Tip: You can also navigate files with Ctrl+j (next) and Ctrl+k (previous) and view the file with Ctrl+o.
Tip: You can also navigate files with Alt+j (next) and Alt+k (previous) and view the file with Alt+o.
|
__label__pos
| 0.913991 |
The function is continuous on [0, ∞]. Find the most suitable values of a and b.
Basic Concept:
A real function f is said to be continuous at x = c, where c is any point in the domain of f if :
where h is a very small ‘+ve’ no.
i.e. left hand limit as x c (LHL) = right hand limit as x c (RHL) = value of function at x = c.
This is very precise, using our fundamental idea of limit from class 11 we can summarise it as, A function is continuous at x = c if :
Here we have,
…………………..equation 1
The function is defined for [0,∞] and we need to find the value of a and b so that it is continuous everywhere in its domain (domain = set of numbers for which f is defined)
To find the value of constants always try to check continuity at the values of x for which f(x) is changing its expression.
As most of the time discontinuities are here only, if we make the function continuous here, it will automatically become continuous everywhere
From equation 1 ,it is clear that f(x) is changing its expression at x = 1
Given,
f(x) is continuous everywhere
[using basic ideas of limits and continuity]
[considering LHL as LHL will give expression inclusive of a ]
[using equation 1]
a = ± 1 …………… equation 2
Also from equation 1 ,it is clear that f(x) is also changing its expression at x = √2
Given,
f(x) is continuous everywhere
[using basic ideas of limits and continuity]
[considering LHL as LHL will give expression inclusive of a & b]
[using equation 1]
b2 – 2b = a ……………….Equation 3
From equation 2, a = –1
b2 – 2b = –1
b2 – 2b + 1 = 0
(b – 1)2 = 0
b = 1 when a = –1
Putting a = 1 in equation 3:
b2 – 2b = 1
b2 – 2b – 1 = 0
Thus,
For a = –1 ; b = 1
For a = 1 ; b = 1 ± √2
5
|
__label__pos
| 0.999955 |
Code Sample: Create a Persistent Memory-Aware Queue Using the Persistent Memory Development Kit (PMDK)
Автор Praveen K Kundurthy,
Опубликовано:03/12/2018 Последнее обновление:03/12/2018
Introduction
This article shows how to implement a persistent memory (PMEM)-aware queue using a linked list and the C++ bindings of the Persistent Memory Development Kit (PMDK) library libpmemobj.
A queue is a first in first out (FIFO) data structure that supports push and pop operations. In a push operation, a new element is added to the tail of the queue. In a pop operation, the element at the head of the queue gets removed. These operations require multiple separate stores. For example, a push operation requires two stores: a tail pointer, and the next pointer of the last element.
A PMEM-aware queue differs from a standard queue in that its data structures reside permanently in persistent memory. and a program or machine crash at a time when there is an incomplete queue entry could result in a memory leak or a corrupted data structureTo avoid this, queue operations must be made transactional. PMDK provides support for transactional and atomic operations specific to persistent memory.
We'll walk through a code sample that describes the core concepts and design considerations for creating a PMEM-aware queue using libpmemobj. You can build and run the code sample by following the instructions provided later in the article.
For background on persistent memory and the PMDK, read the article Introduction to Programming with Persistent Memory from Intel and watch the Persistent Memory Programming Video Series.
C++ Support in libpmemobj
The main features of the C++ bindings for libpmemobj include:
• Transactions
• Wrappers for basic types: automatically snapshots the data during a transaction
• Persistent pointers
Transactions
Transactions are at the core of libpmemobj operations. This is because, in terms of persistence, the current x86-64 CPUs guarantee atomicity only for 8-byte stores. Real-world apps may update in larger chunks. Take, for example, strings; it rarely makes sense to change only eight adjacent bytes from one consistent string state to another. To enable atomic updates to persistent memory in larger chunks, libpmemobj implements transactions.
Libpmemobj uses undo log-based transactions so that in the case of an interruption in the middle of a transaction, all of the changes made to the persistent state will be rolled back.
Transactions are done on a per thread basis, so the call returns the status of the last transaction performed by the calling thread. Transactions are power-safe but not thread-safe. For more information, see C++ bindings for libpmemobj (part 6) - transactions at pmem.io.
The p<> template
In a transaction, undo logs are used to snapshot user data. The PMDK C library requires a manual snapshot to be performed before modifying data in a transaction. The C++ bindings do all of the snapshotting automatically, which reduces the likelihood of programmer error. The pmem::obj::p template wrapper class is the basic building block for this mechanism, and is designed to work with basic types only. Its implementation is based on the operator=(). Each time the assignment operator is called, it means that the value wrapped by p will be changed and the library needs to snapshot the old value. Use of the p<> property for stack variables is discouraged because snapshotting is a computationally intensive operation.
Persistent pointers
Libraries in PMDK are built on the concept of memory mapped files. Since files can be mapped at different addresses of the process virtual address space, traditional pointers that store absolute addresses cannot be used. Instead, PMDK introduces a new pointer type that has two fields: an ID to the pool (used to access current pool virtual address from a translation table), and an offset from the beginning of the pool. Persistent pointers are a C++ wrapper around this basic C type. Its philosophy is similar to that of std::shared_ptr.
libpmemobj Core Concepts
Root object
Making any code PMEM-aware using libpmemobj always involves, as a first step, designing the types of data objects that will be persisted. The first type that needs to be defined is that of the root object. This object is mandatory and used to anchor all the other objects created in the persistent memory pool (think of a pool as a file inside a PMEM device).
Pool
A pool is a contiguous region of PMEM identified by a user-supplied identifier called layout. Multiple pools can be created with different layout strings.
Queue Implementation using C++ Bindings
The queue in this example is implemented as a singly linked list, with a head and tail that demonstrates how to use the C++ bindings of libpmemobj.
Design Decisions
Data structures
The first thing we need is a data structure that describes a node in the queue. Each entry has a value and a link to the next node. As per the figure below, both variables are persistent memory-aware.
Data structure map
Figure 1. Data structure describing the queue implementation.
Code walkthrough
Now, let's go a little deeper into the main function of the program. While running the code you need to provide three arguments. One is the absolute location of the pool file, while the second one is the actual queue operation that needs to be performed. The supported operations in the queue are push (insert element), pop (return and remove element), and show (return element).
if (argc < 3) {
std::cerr << "usage: " << argv[0]
<< " file-name [push [value]|pop|show]" << std::endl;
return 1;
}
In the snippet below, we check to see if the pool file exists. If it does, the pool is opened. If it doesn't exist, the pool is created. The layout string identifies the pool that we requested to open. Here we are opening the pool with layout name Queue as defined by the macro LAYOUT in the program.
const char *path = argv[1];
queue_op op = parse_queue_op(argv[2]);
pool<examples::pmem_queue> pop;
if (file_exists(path) != 0) {
pop = pool<examples::pmem_queue>::create(
path, LAYOUT, PMEMOBJ_MIN_POOL, CREATE_MODE_RW);
} else {
pop = pool<examples::pmem_queue>::open(path, LAYOUT);
}
pop is the pointer to the pool from where we can access a pointer to the root object, which is an instance of examples::pmem_queue, and the Create function creates a new pmemobj pool of type examples::pmem_queue. The root object is like the root of a file system, since it can be used to reach all of the other objects in the pool (as long as these objects are linked properly and no pointers are lost due to coding errors).
auto q = pop.get_root();
Once you get the pointer to the queue object, the program checks the second argument in order to identify what type of action the queue should perform; that is, push, pop, or show.
switch (op) {
case QUEUE_PUSH:
q->push(pop, atoll(argv[3]));
break;
case QUEUE_POP:
std::cout << q->pop(pop) << std::endl;
break;
case QUEUE_SHOW:
q->show();
break;
default:
throw std::invalid_argument("invalid queue operation");
}
Queue operations
Push
Let's look at how the push function is implemented to make it persistent programming-aware. As shown in the code below, the transactional code is implemented as a lambda function wrapped in a C++ closure (this makes it easy to read and follow the code). If a power failure happens the data structure does not get corrupted because all changes are rolled back. For more information how transactions are implemented in C++, read C++ bindings for libpmemobj (part 6) - transactions on pmem.io.
Allocation functions are transactional as well, and they use transaction logic to enable allocation/delete rollback of the persistent state; make_persistent() is the constructor, while delete_persistent() is the destructor.
Calling make_persistent() inside a transaction allocates an object and returns a persistent object pointer. As the allocation is now part of the transaction, if it aborts, the allocation is rolled back, reverting the memory allocation back to its original state.
After the allocation, the value of n is initialized to the new value in the queue, and the next pointer is set to null.
void push(pool_base &pop, uint64_t value) {
transaction::exec_tx(pop, [&] {
auto n = make_persistent<pmem_entry>();
n->value = value;
n->next = nullptr;
if (head == nullptr && tail == nullptr) {
head = tail = n;
} else {
tail->next = n;
tail = n;
}
});
}
Data structure map for push functionality
2. Data structure for push functionality.
Pop
Similar to push, the pop function is shown below. Here we need a temporary variable to store a pointer to the next pmem_entry in the queue. This is needed in order to set the head of the queue to the next pmem_entry after deleting the head using delete_persistent(). Since this is done using a transaction, it is persistent-aware.
uint64_t pop(pool_base &pop){
uint64_t ret = 0;
transaction::exec_tx(pop, [&] {
if (head == nullptr)
transaction::abort(EINVAL);
ret = head->value;
auto n = head->next;
delete_persistent<pmem_entry>(head);
head = n;
if (head == nullptr)
tail = nullptr;
});
return ret;
}
Data structure map for pop functionality.
Figure 3. Data structure for pop functionality.
Build Instructions
Instructions to run the code sample
Download the source code from the PMDK GitHub* repository:
1. Git clone https://github.com/pmem/pmdk.git
command window with GitHub command
Figure 4. Download source code from the GitHub* repository.
2. cd pmdk and run make on the command line as shown below. This builds the complete source code tree.
command window with code
Figure 5. Building the source code.
3. cd pmdk/src/examples/libpmemobj++/queue
4. View command line options for the queue program:
./queue
5. Push command:
./queue TESTFILE push 8
Command window with code
Figure 6. PUSH command using command line.
6. Pop command:
./queue TESTFILE pop
7. Show command:
./queue TESTFILE show
Command window with code
Figure 7. POP command using command line.
Summary
In this article, we showed a simple implementation of a PMEM-aware queue using the C++ bindings of the PMDK library libpmemobj. To learn more about persistent memory programming with PMDK, visit the Intel® Developer Zone (Intel® DZ) Persistent Memory Programming site. There you will find articles, videos, and links to other important resources for PMEM developers.
About the Author
Praveen Kundurthy is a Developer Evangelist with over 14 years of experience in application development, optimization and porting to Intel platforms. Over the past few years at Intel, he has worked on topics spanning Storage technologies, Gaming, Virtual reality and Android on Intel platforms.
Информация о продукте и производительности
1
Производительность зависит от вида использования, конфигурации и других факторов. Дополнительная информация — по ссылке: www.Intel.com/PerformanceIndex.
|
__label__pos
| 0.914277 |
Installing Ghost on Hostinger?
7 minutes read
Installing Ghost on Hostinger can be done by following the steps below:
1. Login to your Hostinger account and navigate to the control panel.
2. Locate the "Website" section and click on "Auto Installer."
3. In the search bar, type "Ghost" and click on the icon when it appears.
4. On the installation page, choose the domain you want to install Ghost on.
5. Provide the desired administrator username and password for your Ghost blog.
6. Click on the "Install" button to begin the installation process.
7. Wait for the installation to complete. This may take a few minutes.
8. After the installation is finished, you will see a success message with login details.
9. Access your Ghost blog by navigating to your domain name in a web browser.
10. Use the provided administrator username and password to log in to the Ghost admin panel and start customizing your blog.
Note that some hosting plans on Hostinger may have limitations or requirements, so it's advisable to check with their support team or documentation for specific instructions or any known issues related to Ghost installations.
Best Cloud Hosting Services in 2024
1
DigitalOcean
Rating is 5 out of 5
DigitalOcean
2
AWS
Rating is 4.9 out of 5
AWS
3
Vultr
Rating is 4.8 out of 5
Vultr
4
Cloudways
Rating is 4.7 out of 5
Cloudways
What is a static site generator and why is it used in Ghost?
A static site generator (SSG) is a software tool that generates a static HTML-based website from dynamic content sources, such as markdown files or content management systems (CMS). Instead of generating web pages dynamically on each request, SSGs pre-build the website, resulting in faster load times and improved security.
In the case of Ghost, a popular open-source blogging platform, it uses a static site generator to render and serve its blog content. Ghost leverages the SSG approach to deliver a lightweight and efficient blogging experience. When a user creates or edits a post in Ghost, the SSG automatically generates the static HTML files for each page. These files can then be directly served to visitors, removing the need for database queries and rendering logic on each request.
Using a static site generator simplifies the server requirements, reduces processing overhead, and allows for better scalability and caching capabilities. The resulting static site can be hosted anywhere, including inexpensive hosting options, Content Delivery Networks (CDNs), or even deployed as a static page on platforms like GitHub Pages or Netlify. Overall, it provides a faster, more secure, and more scalable solution for publishing and serving content.
What is Ghost and how does it work?
Ghost is an open-source publishing platform that enables users to create and manage websites and blogs. Initially released in 2013, Ghost focuses on simplicity, performance, and ease of use.
Ghost operates on a Node.js runtime environment and is built using various web technologies like JavaScript, HTML, and CSS. It follows a client-server architecture, where the client-side handles the user interface, and the server-side manages content and data storage.
When working with Ghost, users interact with the platform through a web-based admin interface. This interface allows for easy content creation, editing, and publishing. Users can write and format their content using a Markdown editor or use the visual editor for a more WYSIWYG (What You See Is What You Get) experience.
Ghost supports various themes and templates, enabling users to customize the appearance and layout of their websites or blogs. Users can find and install themes from the Ghost marketplace or create their own using the platform's theming system.
Ghost also provides a robust API (Application Programming Interface) that allows developers to create custom integrations, build plugins, and extend the functionality of their Ghost-powered sites.
Behind the scenes, Ghost manages the content in a structured manner using a Database Management System (DBMS). By default, Ghost uses SQLite as the DBMS, but it also supports other systems like MySQL and PostgreSQL. This storage system ensures efficient retrieval and management of content.
Ghost follows a modern architecture and leverages caching and server-side rendering techniques to optimize performance. It focuses on delivering fast and responsive websites or blogs, which is especially crucial in today's digital landscape.
Overall, Ghost aims to simplify the process of website or blog creation and management while offering high performance and flexibility to users.
How to create a new MySQL/MariaDB database for Ghost on Hostinger?
To create a new MySQL/MariaDB database for Ghost on Hostinger, follow these steps:
1. Log in to your Hostinger account and go to the Hosting section.
2. In the Hosting dashboard, find the Database section and click on "MySQL Databases."
3. On the MySQL Databases page, you will see a Create a New Database section. Enter the desired database name and click "Create Database." Note down the database name as you will need it later.
4. Next, create a new Database User. Scroll down to the "MySQL Users" section and enter a username and password. Click "Create User" to create the user. Take note of the username and password as well.
5. Associate the user with the database. Scroll further down to the "Add User to Database" section. Select the previously created user and the database from the dropdown menus. Click "Add" to associate the user with the database.
6. A new page will open with privileges settings. Keep the privileges as they are (all privileges selected) and click "Change" to apply the changes.
Congratulations! You have successfully created a new MySQL/MariaDB database for Ghost on Hostinger. You can now use the database name, username, and password to configure your Ghost installation.
What is the MySQL/MariaDB hostname for Ghost installation on Hostinger?
The MySQL/MariaDB hostname for a Ghost installation on Hostinger is usually "localhost".
Facebook Twitter LinkedIn Telegram Whatsapp Pocket
Related Posts:
Running Ghost on Bluehost Ghost is a popular open-source blogging platform that is known for its simplicity and focus on writing. Bluehost is a web hosting service that provides a user-friendly platform for individuals and businesses to host their websites. If...
To launch HumHub on Hostinger, follow these steps:Access your Hostinger account: Log in to your Hostinger account using your username and password. Create a new website: Once logged in, locate the "Websites" section and click on "Create New Website...
Installing Ghost on Cloudways is a straightforward process that can be completed in a few simple steps. Cloudways is a cloud hosting platform that offers easy deployment and management of various applications, including Ghost, a popular open-source content man...
|
__label__pos
| 0.843454 |
Search Images Maps Play YouTube News Gmail Drive More »
Sign in
Screen reader users: click this link for accessible mode. Accessible mode has the same essential features but works better with your reader.
Patents
1. Advanced Patent Search
Publication numberUS7340536 B2
Publication typeGrant
Application numberUS 10/054,422
Publication dateMar 4, 2008
Filing dateJan 22, 2002
Priority dateJun 27, 2001
Fee statusPaid
Also published asUS20030014548
Publication number054422, 10054422, US 7340536 B2, US 7340536B2, US-B2-7340536, US7340536 B2, US7340536B2
InventorsSimon Peter Valentine, Christopher Robert Linzell, Peter Wai Lam, Andrew Peter White
Original Assignee3Com Corporation
Export CitationBiBTeX, EndNote, RefMan
External Links: USPTO, USPTO Assignment, Espacenet
Method and apparatus for determining unmanaged network devices in the topology of a network
US 7340536 B2
Abstract
A network management apparatus and method for determining the topology of a network 1 is described. The present invention uses data relating to discovered devices on the network 1, typically network management address table data, to build a network tree. Due to the presence of unsupported or unmanaged connecting network devices, some branches of the resulting tree may not be resolved. In order to address this, for each unresolved branch of the network tree, the present invention attempts to determine the type of each of the discovered network devices on the branch, and if the type of every discovered network device on the branch is determined to be an endstation type, the present invention determines that an undiscovered connecting device is present on the branch.
Images(4)
Previous page
Next page
Claims(10)
1. A method for determining the topology of a network when a network tree, built from data relating to discovered devices of the network, includes one or more unresolved branches, the method comprising:
for each unresolved branch of the network tree, attempting to determine the type of each of the discovered network devices on the branch,
if the type of each discovered network device on the branch is determined to be an endstation type, inferring that an undiscovered connecting device is present on the branch;
if the type of at least one discovered network device on the branch is not an endstation type, leaving the topology of the branch unresolved; and
presenting the determined network topology as a network map, the map comprising icons representing network devices and lines representing network links, wherein the inferred connecting device is represented differently from a discovered connecting device.
2. The method as claimed in claim 1 wherein, if an undiscovered network device is inferred to be present on a branch, the method further comprises the step of:
resolving the topology of the branch by determining that the discovered network devices on the branch are connected to respective ports of the inferred connecting device.
3. The method as claimed in claim 1 wherein the received data comprises address table data for the ports of one or more managed connecting devices on the network, the address table data including the identity of each said port and the identity of other network devices which the port has learned.
4. The method as claimed in claim 3 further comprising the steps, in building the network tree, of selecting a discovered connecting device as a root node, and building a data representation of the tree from the root node, the data representation comprising at least one branch from a respective port of the root node, each branch comprising the identity of the port and the identity of at least one child node on the branch.
5. The method as claimed in claim 4 wherein, after building the network tree, the method comprises the step of:
determining whether the topology of one or more branches of the tree is unresolved.
6. The method as claimed in claim 5 wherein the step of determining whether the topology of one or more branches of the tree is unresolved comprises the steps of:
a) selecting a port of the root mode;
b) considering whether the branch from the selected port has more than one child node, and
c) if the branch from the port has more than one child node, determining that the branch is unresolved.
7. The method as claimed in claim 6 further comprising the step of repeating steps a), b) and c) for each port of each discovered connecting device.
8. The method as claimed in claim 1 wherein the network tree is built using the steps of:
receiving data relating to discovered devices on the network, and
using the received data to build a network tree.
9. A computer readable medium including a computer program for determining the topology of a network when a network tree, built from data relating to discovered devices of the network, includes one or more unresolved branches, the program comprising the steps of:
attempting to determine the type of each of the discovered network devices on an unresolved branch of the network tree;
inferring that an undiscovered connecting device is present on the unresolved branch if the type of each discovered network device on the branch is determined to be an endstation type;
if at least one discovered network device on the unresolved branch is determined not to be an endstation type, leaving the topology of the branch unresolved; and
presenting the determined network topology as a network map, the map comprising icons representing network devices and lines representing network links, wherein the inferred connecting device is represented differently from a discovered connecting device.
10. A network management apparatus for determining the topology of a network, the apparatus comprising;
a memory for receiving and storing data relating to discovered devices on the network;
a processor, coupled to the memory, the processor configured to build a network tree using the received data, and, for each unresolved branch of the network tree, to attempt to determine the type of each of the discovered network devices on the branch;
wherein, if the type of every discovered network device on an unresolved branch is determined to be an endstation type, the processor infers that an undiscovered connecting device is present on the branch, and if at least one discovered network device on the unresolved branch is determined not to be an endstation type, the processor does not infer the topology of the unresolved branch of the network; and
means for presenting a network map showing the determined topology of the network selected from the group consisting of a display and a printer.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to network management systems and more particularly to a network management apparatus and method capable of determining the topology of a network.
2. Description of the Related Art
The following description is concerned with a data communications network, and in particular a local area network (LAN), but it will be appreciated that the present invention has more widespread applicability to other managed communications systems and networks including wide area networks (WANs) and wireless communications networks.
Data communications networks typically comprise a plurality of network devices (computers, peripherals and other electronic devices) capable of communicating with each other by sending and receiving data packets in accordance with predefined network protocols. Each network device is connected by at least one port to the network media, which in the case of a LAN network may be coaxial cable, twisted pair cable or fibre optic cable. Each device on the network typically has hardware for media access control (MAC) with its own unique MAC address. Data packets are sent and received in accordance with the MAC protocol (e.g. CSMA/CD protocol as defined by the standard IEEE 802.2, commonly known as Ethernet). Data packets transmitted using the MAC protocol identify the source MAC address (i.e. the MAC address of the device sending the data packet) and the destination MAC address (i.e. the MAC address of the device for which the data packet is destined) in the header of the data packet.
A network is generally configured with core devices having a plurality of ports, which can be used to interconnect a plurality of media links on the network. Such devices include hubs, repeaters, routers and switches which forward data packets received at one port to one or more of its other ports, depending upon the type of device. For example, a switch forwards a data packet, received at one port, only to a port known to be connected to the destination device specified in the data packet. Such core devices can either be managed or unmanaged.
A managed device is capable of monitoring data packets passing through its ports. For example, a managed device can learn the physical or MAC addresses of the devices connected to its ports by monitoring the source address of data packets passing through the respective ports. Identified source addresses transmitted from a port of a managed network device, such as a router, hub, repeater or switch, are stored in a respective “address table” associated with the port, as described further below.
Managed devices additionally have the capability of communicating using a management protocol such as the Simple Network Management Protocol (SNMP), as described in more detail below. Whilst the following description is concerned with the SNMP management protocol, the skilled person will appreciate that the invention is not limited to use with SNMP, but can be applied to managed networks using other network management protocols.
SNMP defines agents, managers and MIBs (where MIB is Management Information Base), as well as various predefined messages and commands for communication of management data. An agent is present in each managed network device and stores management data and responds to requests from the manager. A manager is present within the network management station of a network and automatically interrogates the agents of managed devices on the network using various SNMP commands, to obtain information suitable for use by the network administrator, whose function is described below. A MIB is a managed “object” database which stores management data obtained by managed devices and is accessible to agents for network management applications.
It is becoming increasingly common for an individual, called the “network administrator”, to be responsible for network management, and his or her computer system or workstation is typically designated the network management station. The network management station incorporates the manager, as defined in the SNMP protocol, i.e. the necessary hardware, and software applications to retrieve data from MIBs by sending standard SNMP requests to the agents of managed devices on the network.
Network management software applications are known which attempt to determine the topology of a network, i.e. the devices on the network and how they are linked together. In order to determine the network topology, the application retrieves MIB data from the managed devices on the network, which can provide information about the devices connected to the managed devices, for instance the aforementioned “address tables”. MIB data retrieved from managed devices can also provide information about device type, device addresses and details about the links. Using such data, the application can usually determine the topology of the entire network.
An example of a known network management software application capable of determining network topology is the 3Com Network Supervisor available from 3Com Corporation of Santa Clara, Calif., USA.
However, these network management systems are rarely able to determine the complete topology of the network, due to the presence of unmanaged network devices, and in particular, unmanaged or unsupported core or connecting network devices such as hubs and switches. In such cases the network map cannot depict the core network device correctly with its multiple ports and connections to other network devices.
The present invention seeks to address this problem.
SUMMARY OF THE INVENTION
In accordance with a first aspect, the present invention provides a method for determining the topology of a network when a network tree, built from data relating to discovered devices on the network, contains one or more unresolved branches, the method comprising: for each unresolved branch of the network tree, attempting to determine the type of each of the discovered network devices on the branch, and if the type of every discovered network device on the branch is determined to be an endstation type, inferring that an undiscovered connecting device is present on the branch.
Accordingly, the present invention enables the topology of the network to be resolved when an undiscovered connecting device, such as a switch or hub, is used solely to connect endstations, such as PCs and printers, to the network.
In a preferred embodiment, the inferred connecting device is created and represented on a network map or other graphical representation of the network topology. The network administrator is therefore presented with a clearer indication of the topology of the network.
In accordance with a second aspect, the present invention provides a computer readable medium having a computer program for carrying out the method in accordance with the first aspect of the present invention.
In accordance with a third aspect, the present invention provides a network management apparatus for determining the topology of a network, the apparatus comprising: a memory for receiving and storing data relating to discovered devices on the network; a processor, coupled to the memory, the processor configured to build a network tree using the received data, and, for each unresolved branch of the network tree, to attempt to determine the type of each of the discovered network devices on the branch; wherein, if the type of every discovered network device on an unresolved branch is determined to be an endstation type, the processor infers that an undiscovered connecting device is present on the branch.
Further preferred and optional features of the present invention will be apparent from the following description and accompanying claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram of a typical network including a network management apparatus in accordance with an embodiment of the present invention;
FIG. 2 illustrates a display screen displaying a map of the network of FIG. 1 determined in accordance with a prior art technique;
FIG. 3 illustrates a display screen displaying a map of the network of FIG. 1 determined in accordance with another prior art technique;
FIG. 4 illustrates a display screen displaying a map of the network of FIG. 1 determined in accordance with a preferred embodiment of the present invention, and
FIG. 5 is a flow diagram illustrating the steps performed by a computer program in accordance with a preferred embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
FIG. 1 shows a typical network 1 incorporating a network management system for use in accordance with the present invention. The network 1 comprises managed switches 3 having identifiers A and B, unmanaged or unsupported switch 5 having identifier U, management station 7A, and endstations 7 having identifiers s, t, w, x, y and z, and media links 9, (only one of which is numerically referenced). The following description will refer to each network device with reference to its identifier which is typically its IP address, physical address or name.
Management station m is connected to port 1 of switch A; switch B is connected to port 2 of switch A; endstation w is connected to port 3 of switch A; and switch U is connected to port 4 of switch A. Endstation s is connected to port 2 of switch B and endstation t is connected to port 3 of switch B. Endstations x, y and z are connected to respective ports of switch U.
Network management station m incorporates the necessary hardware and software for network management. In particular, network management station m includes a processor, a memory and a disk drive as well as user interfaces such as a keyboard and mouse, and a visual display unit 11 (see FIG. 2). Network management application software in accordance with the present invention is loaded into the memory of management station m for processing data as described in detail below.
The network management station m is capable of communicating with the managed switches A and B by means of a network management protocol, in the present embodiment the SNMP protocol, in order to obtain network management data. In particular, the management station m includes the SNMP manager. Each managed device monitors data traffic passing through its ports and includes an SNMP agent which stores MIB data in memory on the device, as is well known in the art, and communicates such data to the SNMP manager in the network management station 7A, as described below.
The network management station m includes a network management software application which determines the topology of the network 1. The determination of the topology of the network is typically performed upon setting up/installing the network management application, and subsequently on command by the network administrator.
The topology of the network is typically determined by building a “network tree”. In particular a network device or “node” is selected as a “root node” (which is typically a managed switch or bridge) and the system uses MIB data retrieved from the managed network devices to determine the identity of all the “child nodes” of each of the ports of a root node.
The child nodes of a given port are the devices, the addresses of which the port has learnt by monitoring the source addresses of data packets passing through the port. In other words, the child nodes are network devices which are connected on a “branch” of the network connected to the relevant port, and which have sent data packets to network devices on other branches of the network through the root device.
The process is then repeated for each of the child nodes, to determine which nodes are children of each child node, their orientation with respect to each other and thus the structure of the branch. This process builds up the “network tree”. More details of the manner of determining network topology in this way can be found in RFC 2108 “Definitions of Managed Objects for IEEE 802.3 Repeater Devices using SMIv2”, which is incorporated herein by reference.
Consider that the presence of switch U is recognised by the network management application, i.e. it has been discovered, but is unable to provide to the network management application data containing the addresses learnt by its ports. There are several reasons why this situation might arise. For instance, switch U may be unsupported (it cannot provide data in a format that the application can understand) or the network management application may not be authorised to access its data. The network management application is thus provided with the address table data indicated in Table 1 and Table 2 below.
TABLE 1
Port of Switch A Child nodes
A1 m
A2 B, s, t
A3 w
A4 U, x, y, z
TABLE 2
Port of Switch B Child nodes
B1 A, U, m, s, t, w, x, y, z
B2 s
B3 t
In accordance with the prior art technique, the management application builds the network topology by selecting a root node and building a tree from the root node. Consider the case where switch A is the root node. Since switch B is the only other device with topology information (Table 2), and port 1 of switch B is facing the root device, switch A, it is possible to make endstations s and t child nodes of ports 2 and 3 of switch B respectively.
Thus, the topology tree is determined as shown in Table 3.
TABLE 3
Port of root node Child node(s) Child of child node
A1 m
A2 B
B2 s
B3 t
A3 w
A4 U, x, y, z
Since switch U does not provide topology information to the network management application, it is not possible to resolve the topology of the network for the branch connected to port 4 of switch A. Accordingly, in accordance with a prior art technique, the network topology may be presented as a network map having a cloud connected to port 4 of switch A and with the child nodes thereof connected to the cloud as shown in FIG. 2.
Consider now the situation in which switch U is an unmanaged device, and is not only unable to provide, to the network management application, data containing the addresses learnt by its ports, but also is not even discovered by the network management application. In this case, the topology determined will be as set out in Table 4 below. Table 4 is the same as Table 3 above except that switch U will not be included as a child of port 4 of switch A.
TABLE 4
Port of root node Child node(s) Child of child node
A1 m
A2 B
B2 s
B3 t
A3 w
A4 x, y, z
This is presented on the network map as shown in FIG. 3, in which a cloud is connected to port 4 of switch A, and devices x, y and z, but not U, are connected to the cloud.
As will be appreciated from the above, in the case of the switch U being either unsupported or unmanaged the topology presented in the network map to the network administrator does not represent the true topology of the network. The present invention seeks to determine when an unmanaged or unsupported core or connecting device is present and to represent this on the network map.
In accordance with the present invention, an unmanaged or unsupported core network device, which is not discovered by the network management application, is determined or inferred if a branch of the network includes a plurality of child nodes which are all known to be endstations (e.g. a UNIX workstation, PC, printer or other non-connecting network device). In order to determine whether the child nodes are endstations, the network management application must first have determined, as far as possible, the type of each of the discovered network devices connected on the unresolved branch.
Various methods may be used to determine that a device is an endstation. For example, UK Patent Application No 0009044.9 entitled “Discovering Non-Managed Devices in a Network such as a LAN using Telnet” describes a way for a network management application to determine the type of a network device it has discovered by emulating a Telnet client and reading the identification string in the Telnet Login banner provided by the network device. This method can be used to identify endstations such as UNIX workstations, printers and print servers.
The skilled person will appreciate that many different methods may be used to determine that discovered devices are endstations. For example, endstations running Windows can be detected using Windows API calls. Other device types such as file servers and IP printers may be determined using well-known protocols.
The network management application of the preferred embodiment of the present invention attempts to determine the type of each of the discovered devices, using appropriate combinations of the above described techniques, as part of the network discovery process, i.e. prior to determining the network topology. Once the discovery process has been completed, the network management application determines that there is an unmanaged or unsupported connecting device in an unresolved branch if all the child nodes in the branch are endstations. The network management application infers that a connecting device is present, to which all of the child nodes in the branch are connected.
In accordance with a preferred embodiment, the method of the present invention is implemented in the form of a software application which may be provided in the form of a computer program on a computer readable medium. Such computer readable medium may be a disk which can be loaded in the disk drive of network management station m or the computer system carrying a website of, for example the website of the supplier of network devices, which permits downloading of the program over the internet by a network management station. Thus the present invention may be embodied in the form of a carrier wave with the computer program carried thereon.
FIG. 5 illustrates the program steps performed by the computer program in accordance with the method of the preferred embodiment of the present invention. The program steps will be described initially, followed by specific examples of how the program resolves the topology of the network of FIG. 1 in the different situations described above.
The program starts once the discovery process has been completed as described above.
At step 10, the program builds a network tree, in accordance with the conventional method as described above.
At step 20, the program sets a Current Node variable “currNode” as the first connecting device in the tree. The first connecting device is the root node and is typically the managed/supported connecting device that is closest to the network management station.
At step 30, the program sets a Current Port variable “currPort” as the first port on the Current Node that has at least one child node.
At step 40, the program considers whether the Current Port has multiple children. If the Current Port does not have multiple children, the program continues with step 50 by connecting the Current Port to the child node.
The program then continues with step 110 by considering whether the Current Node has a further port that has at least one child node in the discovered topology. If step 110 determines that the Current Node does have a further port, the program sets the variable currPort to the number of next port present in the topology, in the present example A2, and returns to step 40. Alternatively, if step 110 determines that the Current Node does not have a further port, the program continues with step 120 by considering whether there are any further connecting devices in the determined topology. If step 120 determines that there are further connecting devices in the topology, the program sets the variable currNode to the next connecting device present in the topology, and returns to step 30.
Returning to step 40, if step 40 determines that the Current Port does have multiple children, the program continues with step 60. Step 60 considers whether all the children of the Current Port are known to be endstations. In particular, the program considers the type of each of the child nodes of the Current Port, as determined during the discovery process.
If all the child nodes are an endstation type, that is, step 60 determines that all the children of the Current Port are endstations, then the Current Port is assumed to be connected to an unmanaged connecting device which has not been discovered in the network topology, and the program continues by creating a new object, specifically an unmanaged device object, at step 70. The program then continues with step 90 by connecting the Current Port to the new object, and with step 100 by connecting the children of the Current Port to the new object.
If step 60 determines that not all the children of the Current Port are endstations, that is, at least one of the child nodes of the Current Port is a connecting device or unknown type, then, in the preferred embodiment, the topology of the branch of the network connected to the Current Port is not inferred, since one or more of the child nodes could be a discovered, but unsupported, connecting device. In accordance with the preferred embodiment, as depicted in FIG. 5, the program continues with step 80 by creating a new object, specifically a cloud object, and at step 90 connecting the Current Port to the new object, and continuing with step 100 by connecting the children of the Current Port to the cloud object and continues with step 110.
The program stops when step 120 determines that there are no further connecting devices to consider in the topology.
EXAMPLE 1
Applying the program of FIG. 5 to the network of FIG. 1, assume step 10 builds the network tree as shown in Table 4 above, that is, switch U is not discovered, for example because it is unmanaged, and thus not included in the network tree. In addition, assume that step 20 sets currNode to A, switch A being the closest managed device to the management station m.
Step 30 sets currPort to A1, this being the first port on the current node that has at least one child node. Step 40 determines that currPort A1 does not have multiple children, and the program continues with step 50 by connecting currPort A1 to the child node m.
The program then continues with step 110 by considering whether the Current Node has a further port that has at least one child node in the discovered topology. Step 110 determines that the currNode A does have a further port and sets the variable currPort to the number of next port present in the topology, in the present example currPort is set to A2.
The program returns to step 40, which determines that the currPort only has one child node, B, as shown in Table 4, and step 50 connects A2 to B. At step 110 the program moves on to node A3 which is similarly determined in step 40 to have only one child node w, which is connected to A3 in step 50.
When the program sets currPort to A4, step 40 now determines that A4 does have multiple children. These are x, y and z as shown in Table 4.
In this case, the program continues with step 60 by considering whether all the children of the Current Port are known to be endstations. In particular, step 60 determines the type of each of the child nodes x, y and z.
Since all the child nodes of A4 are determined to be of an endstation type during the discovery process as described above, step 60 determines that all the children of the currPort A4 are endstations. Accordingly, it is assumed that an unmanaged connecting device, that has not been detected in the discovery process, must be present. This, of course, is undiscovered Switch U shown in FIG. 1. Accordingly, step 70 creates an unmanaged device object representing U and then step 90 connects A4 to the new object. Step 100 then connects each of the child nodes of A4, which are endstations, to respective ports of the new object.
Since there are no further ports on currNode A, step 120 sets currNode to B, which is the next connecting device in the discovered topology. The program then proceeds in the same manner as for Switch A, and in particular, connects child node s to port B2 of Switch B and child node t to port B3 of Switch B. With no further connecting devices to consider, the program then ends.
Preferably, the network management station 7A displays, on its display screen, a network map to depict the thus determined topology as shown in FIG. 4. The inferred device U is represented in a similar manner to other connecting devices but with an additional symbol to indicated that its presence is inferred by the network management application. Thus, in FIG. 4, a cloud symbol is depicted, in a corner of the rectangular icon used to represent a connecting device. It will be appreciated that other manners of depiction are possible. For example, the icon may be simply labelled as inferred, or alternatively represented in dotted or dashed outline, or by a different colour to discovered connecting devices. In another embodiment, a unique icon may be used to represent an inferred connecting device.
EXAMPLE 2
Applying the program of FIG. 5 to the network of FIG. 1, assume step 10 builds the network tree as shown in Table 3 above, that is, switch U is discovered and included in the network tree, but because it is unsupported, the position of switch U in the topology cannot be resolved. Also assume that step 20 sets currNode to A, Switch A being the closest managed device to the management station m.
The program proceeds in the same manner as Example 1, until currPort A4. In this case, step 60 determines that not all the children of the Current Port are endstations, because discovered device U, the unsupported Switch shown in FIG. 1, is of unknown type. Thus, step 80 creates a new object, specifically a cloud object, step 90 connects A4 to the cloud object, and step 100 connects the child nodes U, x, y and z to the cloud object. The program then moves on to Switch B and proceeds as in Example 1.
Preferably, the network management station 7A displays, on its display screen, a network map to depict the thus determined topology as shown in FIG. 2.
As the skilled person will appreciate, various modifications and changes may be made to the described embodiments. It is intended to include all such variations, modifications and equivalents which fall within the spirit and scope of the present invention as defined in the accompanying claims.
Patent Citations
Cited PatentFiling datePublication dateApplicantTitle
US5276789 *May 14, 1990Jan 4, 1994Hewlett-Packard Co.Graphic display of network topology
US5297138 *Apr 30, 1991Mar 22, 1994Hewlett-Packard CompanyDetermining physical topology across repeaters and bridges in a computer network
US5319644 *Aug 21, 1992Jun 7, 1994Synoptics Communications, Inc.Method and apparatus for identifying port/station relationships in a network
US5706440 *Aug 23, 1995Jan 6, 1998International Business Machines CorporationMethod and system for determining hub topology of an ethernet LAN segment
US5708772 *Jun 7, 1995Jan 13, 1998Bay Networks, Inc.Network topology determination by dissecting unitary connections and detecting non-responsive nodes
US5727157 *Dec 19, 1996Mar 10, 1998Cabletron Systems, Inc.Apparatus and method for determining a computer network topology
US5964837 *Jun 28, 1995Oct 12, 1999International Business Machines CorporationComputer network management using dynamic switching between event-driven and polling type of monitoring from manager station
US6097727 *Apr 29, 1997Aug 1, 2000International Business Machines CorporationMethods, systems and computer program products for end-to-end route selection in compound wide/local area networks
US6108702 *Dec 2, 1998Aug 22, 2000Micromuse, Inc.Method and apparatus for determining accurate topology features of a network
US6205122 *Apr 2, 1999Mar 20, 2001Mercury Interactive CorporationAutomatic network topology analysis
US6289375 *Oct 30, 1998Sep 11, 2001International Business Machines CorporationMethod and apparatus for invoking network agent functions using a hash table
US6377987 *Apr 30, 1999Apr 23, 2002Cisco Technology, Inc.Mechanism for determining actual physical topology of network based on gathered configuration information representing true neighboring devices
US6405248 *Jul 6, 2000Jun 11, 2002Micromuse, Inc.Method and apparatus for determining accurate topology features of a network
US6411997 *Nov 15, 1996Jun 25, 2002Loran Network Systems LlcMethod of determining the topology of a network of objects
US6516345 *Dec 21, 2001Feb 4, 2003Cisco Technology, Inc.Approaches for determining actual physical topology of network based on gathered configuration information representing true neighboring devices
US6587440 *Jul 8, 1999Jul 1, 2003Loran Networks Management Ltd.Method for determining computer network topologies
US6697338 *Oct 28, 1999Feb 24, 2004Lucent Technologies Inc.Determination of physical topology of a communication network
US6826158 *Mar 1, 2001Nov 30, 2004Onfiber Communications, Inc.Broadband tree-configured ring for metropolitan area networks
US6845091 *Dec 1, 2000Jan 18, 2005Sri InternationalMobile ad hoc extensions for the internet
US6976087 *Nov 21, 2001Dec 13, 2005Redback Networks Inc.Service provisioning methods and apparatus
US6980233 *Jul 7, 1999Dec 27, 2005Canon Kabushiki KaishaImage pickup control apparatus, image pickup control method, image pickup control system, and storage medium
US20020124079 *Mar 2, 2001Sep 5, 2002Pulsipher Eric A.System for inference of presence of network infrastructure devices
US20050030955 *Dec 4, 2001Feb 10, 2005Liam GalinSystem for automatically identifying the physical location of network end devices
EP0849974A2Dec 18, 1997Jun 24, 1998Nokia Mobile Phones Ltd.Method for flexible use of a tree topology in a wireless ATM system
Non-Patent Citations
Reference
1 *Microsoft Computer Dictionary, 2002, Microsoft Press, Fifth Edition, p. 193.
Referenced by
Citing PatentFiling datePublication dateApplicantTitle
US8495202 *Jul 28, 2004Jul 23, 2013Brother Kogyo Kabushiki KaishaImaging device information management system
US8844041 *Feb 26, 2010Sep 23, 2014Symantec CorporationDetecting network devices and mapping topology using network introspection by collaborating endpoints
US20050111856 *Jul 28, 2004May 26, 2005Brother Kogyo Kabushiki KaishaImaging device information management system
Classifications
U.S. Classification709/252, 709/223
International ClassificationG06F15/16, G06F15/173, H04Q3/00
Cooperative ClassificationH04Q3/0083
European ClassificationH04Q3/00D4P
Legal Events
DateCodeEventDescription
Jan 22, 2002ASAssignment
Owner name: 3COM CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VALENTINE, SIMON PETER;LINZELL, CHRISTOPHER ROBERT;LAM, PETER WAI;AND OTHERS;REEL/FRAME:012524/0943;SIGNING DATES FROM 20011109 TO 20011215
Jul 6, 2010ASAssignment
Owner name: HEWLETT-PACKARD COMPANY, CALIFORNIA
Free format text: MERGER;ASSIGNOR:3COM CORPORATION;REEL/FRAME:024630/0820
Effective date: 20100428
Jul 15, 2010ASAssignment
Owner name: HEWLETT-PACKARD COMPANY, CALIFORNIA
Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE SEE ATTACHED;ASSIGNOR:3COM CORPORATION;REEL/FRAME:025039/0844
Effective date: 20100428
Sep 6, 2011FPAYFee payment
Year of fee payment: 4
Dec 6, 2011ASAssignment
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:027329/0044
Effective date: 20030131
May 1, 2012ASAssignment
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS
Free format text: CORRECTIVE ASSIGNMENT PREVIUOSLY RECORDED ON REEL 027329 FRAME 0001 AND 0044;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:028911/0846
Effective date: 20111010
Aug 27, 2015FPAYFee payment
Year of fee payment: 8
Nov 9, 2015ASAssignment
Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001
Effective date: 20151027
|
__label__pos
| 0.701117 |
Nike Yulistia Angreni Nike Yulistia Angreni - 7 months ago 21
PHP Question
Get Lower Number In each array value in PHP
i have array here
"data": [
{
"ohp_id": "40",
"parent_ohp_id": "",
"level": "1"
},
{
"ohp_id": "42",
"parent_ohp_id": "",
"level": "2"
},
{
"ohp_id": "45",
"parent_ohp_id": "",
"level": "5"
},
{
"ohp_id": "46",
"parent_ohp_id": "",
"level": "5"
},
{
"ohp_id": "47",
"parent_ohp_id": "",
"level": "5"
}
I need to compare each other array value and get lower number of
level
then get
ohp_id
of the lower level from each of them in PHP code. Here what i need to be:
"data": [
{
"ohp_id": "40",
"parent_ohp_id": "",
"level": "1"
},
{
"ohp_id": "42",
"parent_ohp_id": "40",
"level": "2"
},
{
"ohp_id": "45",
"parent_ohp_id": "42"
"level": "5"
},
{
"ohp_id": "46",
"parent_ohp_id": "42",
"level": "5"
},
{
"ohp_id": "47",
"parent_ohp_id": "42",
"level": "5"
}
I know it need looping, tried:
for ($i = 0; $i < count($arrPosition); $i++) {
$hasPosition->loadHas($orgId, $arrPosition[$i]);
if (!$hasPosition->id) {
$hasPosition->level=$arrLevel[$i];
$hasPosition->parent_ohp_id=<get ohp id from lower level>;
$hasPosition->ohp_id=$ohp_id;
$hasPosition->save();
} else {
if ($hasPosition->level!=$arrLevel[$i])
$hasPosition->level=$arrLevel[$i];
if ($hasPosition->seat!=$arrSeat[$i])
$hasPosition->seat=$arrSeat[$i];
$hasPosition->save(true);
}
}
But i don't know how to get
ohp_id
from lower level. Help me, thanks
Answer
Here's how you can get minimal object of one array:
$min = array_reduce($data, function($min, $ohp) {
return (!$min || $ohp['level'] < $min['level']) ? $ohp : $min;
});
|
__label__pos
| 0.967568 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
Possible Duplicate:
UIDocumentInteractionController no longer works in iOS6
I can't open Instagram from my app. Here's the code I'm using:
NSString *imageToUpload = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/mapapic-instagram.igo"];
NSData *imageData = UIImageJPEGRepresentation(image, 0.8);
[imageData writeToFile:imageToUpload atomically:YES];
_documentInteractionController
= [UIDocumentInteractionController interactionControllerWithURL:[NSURL fileURLWithPath:imageToUpload]];
_documentInteractionController.delegate = self;
_documentInteractionController.UTI = @"com.instagram.exclusivegram";
NSString *defaultInstagramText = @"Hello";
_documentInteractionController.annotation = @{ @"InstagramCaption" : defaultInstagramText };
BOOL success = [_documentInteractionController presentOpenInMenuFromRect:CGRectZero
inView:self.view.window
animated:YES];
The last call returns YES. Yet nothing happens, and Instagram is not opened. The UIDocumentInteractionControllerDelegate methods are not called. I have the Instagram app installed. What am I doing wrong?
share|improve this question
Found the solution! See stackoverflow.com/questions/12631466/…. – BlackRider Jan 8 '13 at 2:47
Post your find as an answer and then accept it – esqew Jan 8 '13 at 2:57
add comment
marked as duplicate by BlackRider, Chris Wagner, esqew, Janak Nirmal, Anoop Vaidya Jan 8 '13 at 6:45
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
1 Answer
Here's what I needed to change to make it work (changing self.view.window to self.view in the call below).
BOOL success = [_documentInteractionController presentOpenInMenuFromRect:CGRectZero
inView:self.view.window
animated:YES];
Apparently the behavior of UIDocumentInteractionController presentOpenInMenuFromRect:inView:animated was changed in iOS 6, which broke my code, which used to work before. I'll later test this change in iOS 5 to see if it still works there.
share|improve this answer
add comment
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.50065 |
BlogDevOps
Blackbox export of application metrics in Kubernetes using Grok Exporter and Prometheus
Intro
In order to gain a better understanding of how our applications behave in realtime and to be able to pinpoint and alert the possible problems that may occur, we must continually create telemetry. Telemetry is, in plain words, the process of automatic generation and the transmission of data to some system where it will be monitored and analyzed. One of the preferred monitoring tools is Prometheus.
In the context of application monitoring, we usually think about exporting certain application metrics from the application source code itself. This method is often referred to as white box monitoring. While this is usually the prefered method, sometimes, this approach is not possible (for example: you don’t have access to make changes in the source code, you are using a third party service which you depend upon, etc.)
In these situations, the only way to gain some knowledge about the behaviour of the application is by observing application logs and using them to construct metrics. This approach is usually referred to as black box monitoring. Fortunately, this is also possible using Prometheus with additional metric exporters. One of the most popular is Grok exporter.
These days, more and more applications are running as microservices in the Kubernetes ecosystem which is becoming a de-facto standard for the orchestration of containerized applications. In this article, I will focus on explaining how you can export metrics from the application logs using Grok exporter and Prometheus in Kubernetes. I will also explain the difference between exporting metrics from the running application in contrast to exporting metrics from the application running as a cron job (this especially becomes a bit more challenging to implement when working in the Kubernetes ecosystem).
This guide assumes you already have the Kubernetes cluster running. For experimentation purposes, you can deploy Kubernetes on your local machine using minikube.
Deploying Prometheus in Kubernetes
Before we go into the application part, we need to make sure that we have Prometheus running in Kubernetes. This can be done using the following Kubernetes resources:
• prometheus-configmap – contains a prometheus config file which defines one scrape config which points to the Grok exporter service running in Kubernetes (later, we will also deploy Grok exporter in Kubernetes)
• prometheus-deployment – this is a Kubernetes deployment resource which defined one Prometheus pod replica that will be deployed. In the production environment, you would define persistence for the Prometheus data, but that is beyond the scope of this article
• prometheus-service – this is a Kubernetes NodePort service resource which exposes the external Prometheus port on all Kubernetes cluster nodes through which external access to Prometheus dashboard will be available. In the production environment, you would put load balancer/ingress in front of the Prometheus application and enable SSL termination but that too is beyond the scope of this article
We will first create a namespace called and then apply the resources from above:
When we apply these resources, you should be able to access Prometheus dashboard through your browser:
Example application
Now that we have Prometheus up and running, next thing to do is to consider how we are going to run the application from which we will scrape metrics using Grok exporter and pull them from Prometheus.
For the purpose of making this article clearer and focused on the problem of blackbox monitoring, I’ve created a small “application” which just logs information which we will scrape to get metrics. The idea is to dockerize this application and run it in a Kubernetes environment.
Format of logged metrics looks like this:
Our assignment is to scrape values for each 4 metrics and push them to the Prometheus and then have the ability to show them as Prometheus GAUGE metric on the Prometheus. In contrast to COUNTER, which is a cumulative metric, GAUGE is a metric that represents a single numerical value that can arbitrarily go up and down. This way, we would be able to track how metric values are changing in the function of time. There are two possible scenarios in which this application is running: continually running application and application running as a cron job. This article will show both approaches in the context of exporting metrics from the logs.
Grok exporter
To be able to export metrics in the blackbox fashion, we are using the Grok Exporter tool which is a generic Prometheus exporter that extracts metrics from unstructured log data. Grok exporter uses Grok patterns for parsing log lines.
Before we dockerize and deploy the example application in Kubernetes, we must first apply the following Grok exporter Kubernetes resources:
• grok-exporter-configmap – contains a configuration file for the Grok exporter which contains the following key information:
• input – path to the log that will be scraped
• grok – configures location of Grok pattern definitions
• metrics – defines which metrics we want to scrape from the application log
• server – configures HTTP port
• grok-exporter-service – Kubernetes ClusterIP service that will be exposed internally in the Kubernetes cluster so that the Grok exporter is available from the Prometheus
We can apply these resources using following kubectl commands:
Scraping metrics from the continually running application
We will first look at how to scrape metrics from the running application. We need to do the following:
• run application which is writing data to the application log file
• run Grok exporter which takes data from the application log file, and based on the rules that we define, makes data available to Prometheus
• we already have Prometheus running and listening to the Grok exporter service for incoming metrics
The application can be found here. The application takes two arguments: number of times it will generate output with random values, output log where data will be stored. The Dockerfile for this application is defined here.
We will dockerize our application by simply doing this:
Now that we have the application docker image, it is time to use it in the Kubernetes deployment resource defined in the file: example-application-deployment
If we observe this file more carefully, we will see that pod contains two containers: example-application and grok-exporter. The usual scenario in Kubernetes, where one pod has multiple containers, is the case where one of them is a sidecontainer which needs to take output of the main running application container and do something with it (in our case, it needs to take log output and pass it to the Grok exporter application which will, based on the rules defined, scrape metrics for Prometheus)
To be able to do this, the important thing to know is that the containers running in the same pod are sharing the same volume which means that we can mount volume on each container and, thus, enable sharing of the files between them. For this purpose we are using a Volume called emptyDir which is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that Node. Containers in the Pod can all read and write the same files in the emptyDir volume. This basically means that while the example-application is logging information in the output log, that log is simultaneosly read by the grok-exporter.
Let’s apply this resource in Kubernetes:
If we open Prometheus dashboard , we can search 4 metrics that we created from application logs. You should be able to see them and show values in the graph. For example, for metric ACNTST_C, the graph can look like this:
ACNTST_C metric in Prometheus
Scraping metrics from the application running as cron job
Generally, when you want to run a cron job in Kubernetes, the obvious choice would be to use CronJob resource type. All we would need is to move our containers from Deployment resource to CronJob resource. However, there are a couple of problems with this approach:
• we cannot put the Grok exporter application container into the CronJob as well, since it needs to run constantly
• if we decide to run just the example application in CronJob and the grok-exporter in a separate Pod/Deployment, we encounter a situation where we have two pods which need to access the same Persistence volume which is discouraged practice in the Kubernetes world (collocation of pods, multiple access defined persistent volume types, etc.)
To avoid these problems, we will use the same approach as we did previously for the example-application, but with a significant change in the way we run the application in the container (we will run it inside the container as a cron job).
To do this, we need the following:
Create bash script called run_example_application which runs example_application.rb file
Modify Dockerfile so it looks like this:
As you can see, we are putting run_example_application script into the /example/cron directory which we will pass as an argument to the run-parts command which we put into the crontab. run-parts command will run every minute, every script found in the specified directory (in our case that is /example/cron). The important thing to note is that the name of the script must be without type (for example sh) and script must be executable.
Next thing to do is to build this Dockerfile:
Finally, we need to run the cron-example-application-deployment resource:
If we open Prometheus dashboard , we should be able to see our 4 metrics.
Conclusion
There are certain situations where you will not be able to modify code and export metrics like you want. In those cases, you will have to rely on the available application logs in order to construct/gather matrics. I hope this article showed you how it can be done using Grok exporter and Prometheus. Also, since more and more applications are designed as microservices, dockerized and run in the Kubernetes ecosystem, this article showed you how you can establish telemetry in those conditions. Finally, it presented an important difference between continuously and periodically running application in the context of exporting metrics in the Kubernetes ecosystem.
|
__label__pos
| 0.500586 |
Java Tutorial Java Advanced Java References
Java boolean Keyword
The Java boolean keyword is used to declare boolean datatype which can only take boolean values: True or False values.
Example:
In the example below, a boolean variable called MyBoolVal is declared to accept only boolean values.
public class MyClass {
public static void main(String[] args) {
boolean MyBoolVal = true;
System.out.println(MyBoolVal);
MyBoolVal = false;
System.out.println(MyBoolVal);
}
}
The output of the above code will be:
true
false
Boolean Expressions
A boolean expression in Java is an expression which returns boolean values: True or False values. In the example below, comparison operator is used in the boolean expression which returns true when left operand is greater than right operand else returns false.
public class MyClass {
public static void main(String[] args) {
int x = 10;
int y = 25;
System.out.println(x > y);
}
}
The output of the above code will be:
false
A logical operator can be used to combine two or more conditions to make complex boolean expression like && operator is used to combine conditions which returns true if all conditions are true else returns false. Please see the example below.
public class MyClass {
public static void main(String[] args) {
int x = 10;
System.out.println(x > 0 && x < 25);
}
}
The output of the above code will be:
true
❮ Java Keywords
5
|
__label__pos
| 0.999617 |
{unicode} is for questions about Unicode (an international standard for character encoding) and its implementations. For questions about input encodings in general, use {encodings}. For questions specifically about the `inputenc` package, use {inputenc}.
learn more… | top users | synonyms (1)
3
votes
3answers
570 views
mathaccent skewchar with XeTeX
How can I get the dots in the correct position: \font\test="XITS Math:script=math;mapping=italic" \skewchar\test=127 \XeTeXmathchardef\beta="0"1`β \def\ddot{\XeTeXmathaccent"7"1"0308} ...
4
votes
1answer
577 views
Converting LaTeX commands in BibTeX title field to UTF-8?
I've been Googling and downloading different software for the last two days and I'm not getting very far. I was referred to this site and it looks like a great resource. Hopefully I can get the ...
6
votes
2answers
800 views
Change XeTeX fonts automatically depending on Unicode blocks
I would like to move from p(La)TeX (which was a TeX system specifically tailored to Japanese in a pre-Unicode era) to Xe(La)TeX, and I'm having a problem. In p(La)TeX, Japanese fonts are only used ...
3
votes
0answers
461 views
Is there an Unicode code point for umlauts marked with a small e instead of a diaresis? [closed]
The original way to denote an umlaut in German was to write a small e on top of the umlauted letter, like in the second row of the image below. Is this form of the umlaut availabe via Unicode? In ...
4
votes
2answers
682 views
How to use Unicode characters with Sphinx rst documents and properly generate PDF files?
I discovered that if I use some Unicode characters inside .rst files, I will loose them when I convert the documentation to pdf using Sphinx. Example chars: "șț". When I run make latexpdf I get a ...
7
votes
1answer
264 views
Search and find an old style numeral in a pdf
I am compiling this tex file \documentclass{article} \usepackage{hfoldsty} \begin{document} 1 $1$ \end{document} with pdflatex and open the resulting pdf with evince. If I search the digit one (1), ...
22
votes
2answers
12k views
Which dot character to use in which context?
Wikipedia lists several dot characters in Unicode. These are the ones that are ambiguous to me: interpunct, middle dot (·) · · U+00B7 "midpoint (in typography)" ...
3
votes
1answer
206 views
Replacement for CJK's \Unicode macro
I need a replacement for the \Unicode macro from CJK. It takes two decimal numbers as arguments and inserts #1*256 + #2 into the token stream (or something like that; it's a way to write Unicode ...
10
votes
2answers
554 views
Replacing Unicode non-breakable spaces by normal spaces
I am using the Neo keyboard layout, which uses all kinds of modifier keys to input all kinds of characters (e.g. Greek letters and mathematical symbols). It also has shift+Mod3+space mapped to Unicode ...
5
votes
2answers
499 views
How do I use locale numbering for page numbers, footers etc.?
I am new to LaTeX and was wondering if it is possible to use another language for page numbers (and any other automatic numbering). I read here: http://www.personal.ceu.hu/tex/pagestyl.htm ...
1
vote
1answer
298 views
LaTeX and Glyph Scaling for Justified Text
I am very new to LaTeX - and I am not sure if Latex has the capability to do what I want, so I thought I would ask here. I am working with publishing books in Khmer (the language of Cambodia) in ...
2
votes
1answer
649 views
hyphenation and utf8
from Tex FAQ - hyphenation I see that trying to help Latex in correctly hyphenating words won't work when the utf8 encoding is used. Is there a work around?
4
votes
1answer
365 views
high and low CJK codepoints in a single XeLaTeX document
I often require a combination of CJK characters with both low and high codepoints in the same line of text. (Here "low" means "in the Basic Multilingual Plane" or BMP, with codepoint lower than hex ...
3
votes
1answer
655 views
UTF-8 (BMP character set) support in listings.
I'm submitting a paper to a journal using LaTeX, and I need to be able to write Unicode symbols into listings. So far, I've been able to get by with the moreverb package. It's listing environment is ...
11
votes
5answers
809 views
Editors supporting unicode
For some reason it seems that my favourite editor TeXnicCenter doesn't support unicode (correct me please if I'm wrong, which wouldn't be the first time). Can anyone recommend an editor which does, ...
1
vote
1answer
254 views
pdflatex.exe failed to compile an input file with BOM (Byte Order Mark) [duplicate]
Possible Duplicate: LaTeX baffled by BOM---Unicode's byte order mark. \documentclass[12pt,a4paper]{article} \usepackage{CJK} \usepackage{pinyin} \begin{document} ...
64
votes
4answers
17k views
utf8x vs. utf8 (inputenc)
I normally use \usepackage[utf8]{inputenc} for my latex document but on this site i saw a lot of code with \usepackage[utf8x]{inputenc}. What are the differences between the 2 options ? Is there one ...
0
votes
0answers
650 views
Display invisible characters in vim [closed]
I get the error message Unicode char \u8: not set up for use with LaTeX. Now I suspect that this is due to an invisible character. The command :set list doesn't show anything suspicious and ...
13
votes
4answers
628 views
Converting LaTeX into Unicode for email
My standard procedure for writing mathematical email has been to use pidgin-LaTeX for some time now, and many of those I communicate with do the same. However, someone I know has recently started a ...
14
votes
2answers
467 views
How does one publish/promote a new package?
I just took all the wonderful information supplied here, and wrote a small package that makes it possible to use Unicode characters for section, subsection, subsubsection, paragraph, subparagraph and ...
4
votes
2answers
324 views
Fix nested section numbers in RTL languages with polyglossia.
The third subsection in the second subsection of the first section should be numbered 1.2.3. This holds also for RTL languages, since numbers, including Dewey numbering is still LTR even in an RTL ...
14
votes
3answers
1k views
LaTeX baffled by BOM---Unicode's byte order mark.
Unicode applies the convention of using a byte order mark as signature at the beginning of a text stream, identifying the encoding used within it. The following three bytes at the beginning of a file: ...
3
votes
1answer
535 views
Invoking a macro with arguments in the body of \DeclareUnicodeCharacter.
I would like to define the Unicode section character, U+00A7 to be equivalent to the \section macro. What I have now is: \DeclareUnicodeCharacter{167}{\csname section\endcsname} but this does not ...
10
votes
2answers
1k views
Mapping from Unicode character to LaTeX-Symbol for BibTeX?
I'm writing a little BibTeX exporter for the publication database of my institute. We do have a lot of authors with all kind of weird characters in their names, which get the "WTF is ...
3
votes
0answers
191 views
Font Appears different [closed]
Hi I am using Sanskrit2003 font and it appears with additional circle in TeXWorks, just to verify I copied it to MS Word & open office and it appeared fine in word. I am attaching an image to ...
1
vote
1answer
147 views
How to Use Unicode Font with \polter command
\documentclass{article} \usepackage{holtpolt} \begin{document} $\polter{abc}{def}$ \end{document} Hi, I have a need to produce something thats shown in the above example however I want to use ...
8
votes
4answers
649 views
The § character
I need information about § character. I have seen it used for numbering (equivalent to nº) and as a separator. Which is its name or code? Where can it be used?
1
vote
1answer
947 views
Use unicode font for cyrillic in gnuplot
Why I need to do this? I'n writting my diploma in laTeX and almost finish them, but I want to use in graph labels and legend the same font that LateX uses. As I know, the default LaTeX font in CM, so ...
3
votes
2answers
774 views
How to input foreign unicode characters into XeLaTeX?
Friends, I am working on a XeLaTeX document. I am almost done with my document, but I need to input text from a foreign language. I am running Mac OS X. I go to Preferences > Language & Text and ...
7
votes
5answers
2k views
No simple UTF8 support in latex?
The following should look good in your browser, AND compile in Latex looking beautiful right? \documentclass{article} \usepackage[utf8]{inputenc} \begin{document} UTF-8 test: \begin{verbatim} logic: ...
11
votes
3answers
693 views
Hyperref: Scandinavian characters (æø) don't work in \url, hyperlink is wrong
Background A while ago it became possible to use the letters æ, ø and å in URLs, and some websites, like the encyclopaedia Store Norske Leksikon, has made use of this. Recently, a question was ...
3
votes
2answers
1k views
XeLaTeX, WinEdt 6.0 and UTF-8
I'm using WinEdt 6.0 and need to write German umlaute. Using XeLaTeX, i need to save the tex-file in utf-8 to have native support for this - does anyone know how to tweak WinEdt so that it saves in ...
4
votes
2answers
1k views
LaTeX: UTF8 and algorithm2e clash
I have this input file in utf8 encoding: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage{algorithm2e} \begin{document} \begin{procedure} foo \caption{ö()} \end{procedure} ...
10
votes
2answers
2k views
UTF8 not working in LuaTeX in TeXLive 2010
I've been trying to set a document using lualatex in my TeXLive 2010 installation. Unfortunately, the non-ASCII characters are left out from the output. In the following minimal document produces a ...
2
votes
1answer
680 views
Arrows with Text (Using Unicode Devanagari fonts)
\documentclass[fleqn,12pt,a4paper]{article} % normal \usepackage[utf8x]{inputenc} \usepackage{fontspec,xltxtra,xunicode} \usepackage{fontenc} \usepackage{fancyhdr} \pagestyle{fancy} \usepackage{color} ...
8
votes
3answers
690 views
Asana Math Oversized “Big operators”
I am having this problem when using Asana Math with XeLaTeX + unicode-math, if you see the pdf found in the download link AsanaProb.zip you can see that \bigcap produces much bigger character than it ...
5
votes
1answer
1k views
What is wrong with my cyrillic text?
This is the document: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage[T2A]{fontenc} \begin{document} This text is in Russian: проверка. \end{document} This is what TeXLive on Mac OS ...
27
votes
4answers
32k views
“inputenc Error: Unicode char \u8” error while trying to write a degree symbol (invisible character)
No matter how I try to do it I always get the following error : ! Package inputenc Error: Unicode char \u8: not set up for use with LaTeX. I have tried using $^{\circ}$, \deg, \textdegree, ...
4
votes
1answer
702 views
Math italics with unicode-math.
I'm using Tex Live 2010. Here's the example I want to discuss. \documentclass{article} \RequirePackage{amsmath} \RequirePackage{unicode-math} \setmainfont{Linux Libertine O} ...
5
votes
1answer
464 views
How do I convert a character to a numeric value?
Now that I've gotten Will Robertson's excellent unicode-math package working to change the colour and style of letters in my mathematics (trust me, there is a reason) thanks to his answer to my ...
10
votes
2answers
596 views
unicode \begin{verbatim} with cmtt?
A lot of leading-edge programming languages (like Coq and Agda) allow nearly unrestricted use of unicode symbols in program text, so you can have math characters like $\otimes$ (Unicode U+2A02) as ...
14
votes
7answers
7k views
utf8 or latin1 encoding - german
\usepackage[utf8]{inputenc} or \usepackage[latin1]{inputenc} I write in german if that matters. pro and contra?
7
votes
3answers
476 views
Glyph insertion
I use linux and have available to me the "compose" key available and thus can type characters like °, ß, ï, į, ḯ, etc. Is it inherently "bad" to use the compose key to insert the characters, should I ...
5
votes
3answers
2k views
LaTeX/XeTeX setup Tamil/Indic languages
I use TexMaker and LyX in Ubuntu. I'd like to typeset Tamil/Telugu/Hindi text, and so far I've been unsuccessful. Please suggest me a working TeX/LaTeX/variants setup for Indic languages, especially ...
80
votes
9answers
22k views
Entering Unicode characters in LaTeX
How do I enter Unicode characters in LaTeX? What packages do I need to install and what escape sequence do I type to specify Unicode characters in an ASCII source file?
1 2 3 4 5
|
__label__pos
| 0.917773 |
2
$\begingroup$
I am interested in testing for a given string (w) whether it is in the language (L) defined by a nondeterministic finite automaton (A). I have some blurry point in my mind.
What is the complexity of this test by using the NFA and backtracking? Assume my NFA has m states and string (w) has length n (|w| = n). For regular expressions, by using Thompson's algorithm, the resultant NFA has the same complexity (worst case) if I use backtracking?
What are the methods/algorithms other than backtracking and converting to DFA?
$\endgroup$
4
• 1
$\begingroup$ The most efficient way is described in the comments and hinty answer to this question. Your other questions can't really be answered because you're asking about the computational complexity of some variant of "Does this automaton accept this string?", which you haven't precisely defined. Is the automaton part of the input? Or is there a fixed automaton and you want to know if it accepts a string given as input to the problem? $\endgroup$ – David Richerby Dec 1 '14 at 8:18
• $\begingroup$ Thank you David. However for different methods there is a worst case scenario which specify a bounded upper limit for the given method. For example, backtracking has a complexity of $O(2^n)$ I think. This is not related to your automaton? $\endgroup$ – Deniz Dec 1 '14 at 8:32
• $\begingroup$ I'm not sure what you mean by "for different methods there is a worst case scenario which specify a bounded upper limit for the given method." $\endgroup$ – David Richerby Dec 1 '14 at 8:37
• $\begingroup$ I think I am talking about alternative NFA simulation algorithms like dynamic programming. May be I used some terms in the wrong place, not sure. Please refer to the comments link here for this answer. $\endgroup$ – Deniz Dec 1 '14 at 15:55
2
$\begingroup$
A naïve backtracking algorithm will still result in exponential behaviour ($O(2^{n})$ where $n$ is the length of the input string). This behaviour can occur even with very small automata (i.e. where the number of states $m$ is a fixed constant).
Consider for example the NFA $M$ below:
NFA which can has exponential worst case backtracking behaviour.
The alphabet is simply $\Sigma =\{0\}$, and the language is $L(M) = \{0^{k}\mid k \in \mathbb{N}\} = \Sigma^{\ast}$. If our algorithm chooses poorly for even the first step (for example if we use the obvious choice and go by state ID), then we're stuck between $q_{0}$ and $q_{1}$, where at each point in the input, we have the choice of moving to the opposite state, or staying where we are - i.e. a binary choice for each character, for the entire length of the string, giving $2^{n}$ steps before we manage to backtrack to $q_{2}$ and then move along the correct path to $q_{4}$.
So even with only 4 states and where every string is valid, we can still get terrible performance from a backtracking approach.
There is a much better approach however. Even without converting the NFA to a DFA we can decide acceptance in polynomial time. As we are dealing with a regular language, we don't need to remember the entire computation history up to this point (which is what backtracking does), we only need to know what state we are up to at the current point. In a DFA this is very easy, as we can only be in one state at any point, but an NFA (without knowing the final computation path) allows more possibilities. However we can simply keep track of these possibilities. The algorithm is as follows:
1. Initialise $S_{0}$ to $\{s_{0}\}$ where $s_{0}$ is the start state.
2. For each symbol of the input $c_{i}$ with $1 \leq i \leq n$, for each state $s \in S_{i-1}$, let $E(s,c_{i})$ be the set of states reachable from $s$ using $c_{i}$ and any number of $\varepsilon$-moves1. Set $C_{i} = C_{i} \cup E(s,c_{i})$.
3. If there is an accepting state in the set $C_{n}$, return $\mathrm{YES}$, otherwise return $\mathrm{NO}$.
So all the algorithm is really doing is keeping track of all the states we could be up to after having read $k$ symbols from the input.
Assuming that the NFA is fixed, this algorithm is $O(n)$-time in the length of the input string. If the NFA is part of the input, it's a little more complicated, but still polynomial time - $O(m^{2}n)$ where $m$ is the number of states (if I haven't missed an implicit loop in there).
Footnotes.
1. That is, we take the $\varepsilon$-closure of the state $s$, then from each of these states see if we can move using symbol $c_{i}$, then for those where we can, we take the $\varepsilon$-closure again, and this gives us $E(s,c_{i})$.
$\endgroup$
3
• $\begingroup$ Thank you Luke. However, I have questions in order to clarify your reply in my mind. You said complexity is $O(2^n)$, but is it the same for all cases? For example, add a new state $q_3$ and modify the automaton like the one here In that case, we have $q_3$ in addition to the $q_0$ and $q_1$ and as far as I understood we have $3^n$ possible path that we can try before backtracking to $q_4$. Therefore, the complexity goes to $O(m^n)$ where $m$ is the number of states (3 in that case). Am I right or missing something? $\endgroup$ – Deniz Dec 3 '14 at 9:42
• $\begingroup$ Your explanation is quite good and I am trying to learn more from your reply. Another question is for the method you mentioned to test the string in polynomial time $Q(m^2n)$. I think we can go to $m$ states from one state (for the current input) and for the next input we could have $m$ states for each state we find for the previous input. That's where $m^2$ comes from and we repeat this step for each of the input character so we have $n * m^2$ totally. Is it right? $\endgroup$ – Deniz Dec 3 '14 at 10:01
• $\begingroup$ @Deniz, you are correct in both cases. Different NFAs might give different complexity, and $O(m^{n})$ can be 'achieved' in the way you describe. Secondly, that is indeed where the $m^{2}\times n$ comes from. $\endgroup$ – Luke Mathieson Dec 4 '14 at 11:32
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.966373 |
Project
General
Profile
Actions
Feature #11158
closed
Introduce a Symbol.count API as a more efficient alternative to Symbol.all_symbols.size
Added by methodmissing (Lourens Naudé) over 7 years ago. Updated over 7 years ago.
Status:
Closed
Priority:
Normal
Target version:
-
[ruby-core:<unknown>]
Description
We're in the process of migrating a very large Rails codebase from a Ruby 2.1.6 runtime to Ruby 2.2.2 and as part of this migration process would like to keep track of Symbol counts and Symbol GC efficiency in our metrics system. Preferably still while on 2.1 (however this implies a backport to 2.1 as well), but would definitely be useful in 2.2 as well.
Currently the recommended and only reliable way to get to the Symbol counts is via Symbol.all_symbols.size, which:
• Allocates an Array
• rb_ary_push and walking the symbol table isn't exactly efficient
Here's some benchmarks:
./miniruby -Ilib -rbenchmark -e "p Benchmark.measure { 10_000.times{ Symbol.count } }"
#<Benchmark::Tms:0x007f8bc208bdd0 @label="", @real=0.0011274919961579144, @cstime=0.0, @cutime=0.0, @stime=0.0, @utime=0.01, @total=0.01>
./miniruby -Ilib -rbenchmark -e "p Benchmark.measure { 10_000.times{ Symbol.all_symbols.size } }"
#<Benchmark::Tms:0x007fa47205a550 @label="", @real=0.3135859479953069, @cstime=0.0, @cutime=0.0, @stime=0.03, @utime=0.29, @total=0.31999999999999995>
I implemented and attached a patch for a simple Symbol.count API that just returns a numeric version of the symbol table size, without having to do any iteration.
Please let me know if this is inline with an expected core API, anything I could clean up further and if there's any possibility of such a change also being backported to 2.1 as well? (happy to create a new patch for 2.1)
Files
symbol_count.patch (4.4 KB) symbol_count.patch Symbol.count patch file methodmissing (Lourens Naudé), 05/16/2015 04:12 AM
symbol_enumerator.patch (6.07 KB) symbol_enumerator.patch Symbol.each methodmissing (Lourens Naudé), 05/21/2015 02:14 AM
Related issues 1 (0 open1 closed)
Related to Ruby master - Feature #9963: Symbol.countFeedback06/19/2014Actions
Actions #1
Updated by nobu (Nobuyoshi Nakada) over 7 years ago
Lourens Naudé wrote:
Please let me know if this is inline with an expected core API, anything I could clean up further and if there's any possibility of such a change also being backported to 2.1 as well? (happy to create a new patch for 2.1)
New features are never backported to 2.2 or earlier.
Actions #2
Updated by methodmissing (Lourens Naudé) over 7 years ago
Makes sense, my bad, thanks for the consideration.
Actions #3
Updated by marcandre (Marc-Andre Lafortune) over 7 years ago
• Assignee set to matz (Yukihiro Matsumoto)
I'd recommend instead to introduce Symbol.each, which would accept a block and return an Enumerable when none is given.
Symbol.each.size would be then be an efficient (lazy) way of getting the number of symbols, and it would be a more versatile method in case someone wants to iterate on all Symbols for other purposes
Actions #4
Updated by methodmissing (Lourens Naudé) over 7 years ago
Sounds good, I'll take a stab tonight.
Actions #5
Updated by methodmissing (Lourens Naudé) over 7 years ago
Please find attached the changes as per Marc-Andre's suggestions. Exposes Symbol.each and extends with Enumerable
def test_each
x = Symbol.each.size
assert_kind_of(Fixnum, x)
assert_equal x, Symbol.all_symbols.size
assert_equal x, Symbol.count
assert_equal Symbol.to_a, Symbol.all_symbols
answer_to_life = :bacon_lettuce_tomato
assert_equal [:bacon_lettuce_tomato], Symbol.grep(/bacon_lettuce_tomato/)
end
Calling size on the enumerator is super efficient.
$ ./miniruby -Ilib -rbenchmark -e "p Benchmark.measure { 10_000.times{ Symbol.each.size } }"
#<Benchmark::Tms:0x007fea32039688 @label="", @real=0.005798012993182056, @cstime=0.0, @cutime=0.0, @stime=0.0, @utime=0.01, @total=0.01>
Symbol.count isn't though (not sure if it's possible to replace the definition with Symbol.each.size instead)
$ ./miniruby -Ilib -rbenchmark -e "p Benchmark.measure { 10_000.times{ Symbol.count } }"
#<Benchmark::Tms:0x007fa47907afb0 @label="", @real=0.36278180500085, @cstime=0.0, @cutime=0.0, @stime=0.0, @utime=0.36, @total=0.36>
Thoughts?
Actions #6
Updated by akr (Akira Tanaka) over 7 years ago
Updated by ko1 (Koichi Sasada) over 7 years ago
• Assignee changed from matz (Yukihiro Matsumoto) to ko1 (Koichi Sasada)
Updated by cesario (Franck Verrot) over 7 years ago
Lourens Naudé wrote:
Please find attached the changes as per Marc-Andre's suggestions. Exposes Symbol.each and extends with Enumerable
Hi Lourens,
I'm not sure to fully understand why we make Symbol extend Enumerable rather than returning a new enumerator object (probably also extending Enumerable) ? Isn't there way to much overhead to include Enumerable in Symbol?
Thoughts?
Nice work!
Updated by ko1 (Koichi Sasada) over 7 years ago
I don't against introduce Symbol.each for shortcut of Symbol.all_symbols.each.
However, For measurement purpose, we should introduce new measurement API into ObjectSpace because they have several types.
|immortal | mortal
--------+:-------:+:------:
static | (1) | (2)
dynamic | (3) | (4)
• Immortal symbols
• Static immortal symbols (1)
• Dynamic immortal symbols (3)
• Dynamic mortal symbols (4)
There are no (2) type symbols.
Current Symbol.all_symbols.size returns (1) + (3) + (4).
Maybe the number of (1) and (2) (or (1+2)) will be helpful for some kind of people who want to know details.
Updated by methodmissing (Lourens Naudé) over 7 years ago
Thanks for the feedback - I'll take a stab and circle back.
Updated by marcandre (Marc-Andre Lafortune) over 7 years ago
Franck Verrot wrote:
I'm not sure to fully understand why we make Symbol extend Enumerable rather than returning a new enumerator object
It's not "rather than". Symbol.each without a block will return an Enumerator, that we extend Enumerable or not.
Isn't there way to much overhead to include Enumerable in Symbol?
Not sure what you mean by overhead. There's no performance cost to it. It adds a bunch of methods to Symbol, and many won't be helpful (I doubt someone would use Symbol.map{...}, but I 'm not sure I see the downside.
Updated by cesario (Franck Verrot) over 7 years ago
Marc-Andre Lafortune wrote:
Franck Verrot wrote:
Isn't there way to much overhead to include Enumerable in Symbol?
Not sure what you mean by overhead. There's no performance cost to it. It adds a bunch of methods to Symbol, and many won't be helpful (I doubt someone would use Symbol.map{...}, but I 'm not sure I see the downside.
Sorry I haven't formulated this right :-) I was only wondering if including Enumerable in Symbol could lead some of us to rely on methods (like map as you said) that weren't really thought through at the time we introduced each. Maybe that doesn't make sense, so feel free to ignore this comment... still new to the Ruby VM internals and ways of designing its APIs :-)
Thanks!
Actions #13
Updated by ko1 (Koichi Sasada) over 7 years ago
• Status changed from Open to Closed
Applied in changeset r51654.
• ext/objspace/objspace.c: add a new method ObjectSpace.count_symbols.
[Feature #11158]
• symbol.c (rb_sym_immortal_count): added to count immortal symbols.
• symbol.h: ditto.
• test/objspace/test_objspace.rb: add a test for this method.
• NEWS: describe about this method.
Actions
Also available in: Atom PDF
Like0
Like0Like0Like0Like0Like0Like0Like0Like0Like0Like0Like0Like0Like0
|
__label__pos
| 0.811902 |
1
$\begingroup$
While reading the book What is Mathematics? by Courant and Robbins, I've found a statement that I don't know how to prove, although it seems that it shouldn't be really difficult. Literally, they write:
We have just seen that every quadratic residue $a$ of $p$ satisfies the congruence $a^{(p-1)/2} \equiv 1$ (mod $p$). Whithout serious difficulty it can be proved that for every non-residue $b$ we have the congruence $b^{(p-1)/2} \equiv -1$ (mod $p$).
Here, $p$ is a prime number and $a$ and $b$ are any integers not multiples of $p$. The case of quadratic residues is quite easy, you only have to apply Fermat's little theorem, but although they say that it can be proved whithout serious difficulty, I can't see how to do the case of non-residues. I've been looking on the Internet for some ideas, but my knolewdge of the theory of numbers is quite elementary (in fact, I only know some facts which are mentioned in the book: congruences, Fermat's little theorem, ...) and all the proofs I've found are out of my reach. Is there any elementary proof of the fact that $b^{(p-1)/2} \equiv -1$ (mod $p$)?
$\endgroup$
• 1
$\begingroup$ A polynomial of degree $d$, considered modulo a prime, can have no more than $d$ zeros. There are $(p-1)/2$ quadratic residues, which uses up all the room for zeros of $x^{(p-1)/2}-1$. $\endgroup$ – Gerry Myerson Nov 19 '14 at 12:04
• $\begingroup$ @Mike, done.${}$ $\endgroup$ – Gerry Myerson Nov 20 '14 at 1:29
2
$\begingroup$
Let $b$ be a quadratic non-residue modulo a prime $p$. Let $c=b^{(p-1)/2}$. By Fermat, $c^2\equiv1\pmod p$. It follows that either $c\equiv1\pmod p$ or else $c\equiv-1\pmod p$. Now, the congruence $x^{(p-1)/2}\equiv1\pmod p$ is of degree $(p-1)/2$ and has among its solutions all the quadratic residues, of which there are exactly $(p-1)/2$; therefore, its only solutions are the quadratic residues. So, we can't have $c\equiv1\pmod p$, and we must have $c\equiv-1\pmod p$.
We have used the fact that the integers modulo a prime form a field, and over a field the number of zeros of a polynomial can't exceed the degree of the polynomial.
$\endgroup$
-1
$\begingroup$
Note that $b^{p-1} \equiv 1 \pmod{p}$, assuming $p$ is a prime which does not divide $b$. Taking the square root of both sides, we get $b^{(p-1)/2} \equiv \pm 1 \pmod{p}$. Of course, it cannot be $1$, because it would then be a quadratic residue, so it has to be $-1$.
$\endgroup$
• $\begingroup$ Yes, that was my first try... Maybe I'm losing something, but I can't see why if $b^{(p-1)/2} \equiv 1$ (mod $p$) then $b$ is a quadratic residue. The converse is obviously true, for if $b \equiv x^2$ (mod $p$) for some $x$ then $b^{(p-1)/2} \equiv x^{p-1} \equiv 1$ (mod $p$) because of Fermat's little theorem. $\endgroup$ – Alex V. Nov 19 '14 at 11:20
• $\begingroup$ $b^{(p-1)/2}$ is nothing but $(b^{\frac{1}{2}})^{p-1}$, and it is congruent to $1$ modulo $p$. Thus $b^{\frac{1}{2}}$ is an integer, and $b$ is a square modulo $p$. $\endgroup$ – shardulc Nov 19 '14 at 11:26
• 2
$\begingroup$ I don't think that's a valid reasoning... You also have $$ 3^4 = 81 \equiv 1 \;(\text{mod }10) $$ And with your idea we could conclude that $3^{1/2}$ is an integer, because $(3^{1/2})^8 \equiv 1$ (mod $10$). $\endgroup$ – Alex V. Nov 19 '14 at 13:32
• $\begingroup$ Here, you go modulo 10; I can't seem to think of an example with primes. $\endgroup$ – shardulc Nov 19 '14 at 16:10
• 1
$\begingroup$ When you work modulo 7, $2^{1/2}=\pm3$. $\endgroup$ – Gerry Myerson Nov 19 '14 at 23:06
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.844442 |
neaumusic neaumusic - 1 year ago 73
CSS Question
CSS -- transparent "glass" modal, everything else darkened
THE ANSWER:
background-attachment
----- JSBin Example ----
The answer is to use
background-attachment
screenshot
ORIGINAL QUESTION
I'm working on a project where we want to display a modal that "sees through" to the backdrop, but everywhere outside of the modal panel is slightly masked.
I have successfully used
border: 10000px rgba(0,0,0,0.3)
with
border-radius: 10010px
but this is a hack, and I can't outline the modal with a
box-shadow
Is there any standard way for doing this? Bonus points if you can think of a way to apply a transparency filter gradient to an image.
Answer Source
screenshot
---- JSBin Example ---- GitHub Gist ----
The answer is to use background-attachment
background-attachment: fixed;
background-size: cover;
background-position: center center;
background-image: linear-gradient(rgba(0,0,0,0.4), rgba(0,0,0,0.4)), url(http://imgur.com/oVaQJ8F.png);
.modal-backdrop {
background: url(myurl.png) center center / cover no-repeat fixed
}
.modal-panel {
background: url(myurl.png) center center / cover no-repeat fixed
}
The best value would be fixed so that your backdrop and your modal can share the same viewport X and Y (0, 0) by default
You then scale with background-size percentages or cover
When using background: shorthand, make sure to use a / to separate background-position from background-scale amounts
I ran into a bug on an older device, so I used local then manually computed leftX and topY to line up backdrop and modal-panel background-position at (0, 0) on my viewport.
I then scaled both images with the same percentage, to cover the screen
I also used a gradient, credit -- How to darken a background Image
|
__label__pos
| 0.966995 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.