Spinlock
From Wikipedia, the free encyclopedia
In software engineering, a spinlock is a lock which causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking if the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released, although in some implementations they may be automatically released if the thread being waited on (that which holds the lock) blocks, or "goes to sleep".
Because they avoid overhead from operating system process rescheduling or context switching, spinlocks are efficient if threads are likely to be blocked for only short periods. For this reason, spinlocks are often used inside operating system kernels. However, spinlocks become wasteful if held for longer durations, as they may prevent other threads from running and require rescheduling. The longer a lock is held by a thread, the greater the risk is that the thread will be interrupted by the OS scheduler while holding the lock. If this happens, other threads will be left "spinning" (repeatedly trying to acquire the lock), while the thread holding the lock is not making progress towards releasing it. The result is an indefinite postponement until the thread holding the lock can finish and release it. This is especially true on a single-processor system, where each waiting thread of the same priority is likely to waste its quantum (allocated time where a thread can run) spinning until the thread that holds the lock is finally finished.
Implementing spin locks correctly is difficult because one must take into account the possibility of simultaneous access to the lock, which could cause race conditions. Generally, such implementation is possible only with special assembly language instructions, such as atomic test-and-set operations, and cannot be easily implemented in high-level programming languages or in languages not supporting truly atomic operations.[1] On architectures without such operations, or if high-level language implementation is required, a non-atomic locking algorithm may be used, e.g. Peterson's algorithm. But note that such an implementation may require more memory than a spinlock, be slower to allow progress after unlocking, and may not be implementable in a high-level language if out-of-order execution is allowed.
Example implementation
The following example uses x86 assembly language to implement a spinlock. It will work on any Intel 80386 compatible processor.
; Intel syntax
locked: ; The lock variable. 1 = locked, 0 = unlocked.
dd 0
spin_lock:
mov eax, 1 ; Set the EAX register to 1.
xchg eax, [locked] ; Atomically swap the EAX register with
; the lock variable.
; This will always store 1 to the lock, leaving
; the previous value in the EAX register.
test eax, eax ; Test EAX with itself. Among other things, this will
; set the processor's Zero Flag if EAX is 0.
; If EAX is 0, then the lock was unlocked and
; we just locked it.
; Otherwise, EAX is 1 and we didn't acquire the lock.
jnz spin_lock ; Jump back to the MOV instruction if the Zero Flag is
; not set; the lock was previously locked, and so
; we need to spin until it becomes unlocked.
ret ; The lock has been acquired, return to the calling
; function.
spin_unlock:
mov eax, 0 ; Set the EAX register to 0.
xchg eax, [locked] ; Atomically swap the EAX register with
; the lock variable.
ret ; The lock has been released.
Significant optimizations
The simple implementation above works on all CPUs using the x86 architecture. However, a number of performance optimizations are possible:
On later implementations of the x86 architecture, spin_unlock can safely use an unlocked MOV instead of the slower locked XCHG. This is due to subtle memory ordering rules which support this, even though MOV is not a full memory barrier. However, some processors (some Cyrix processors, some revisions of the Intel Pentium Pro (due to bugs), and earlier Pentium and i486 SMP systems) will do the wrong thing and data protected by the lock could be corrupted. On most non-x86 architectures, explicit memory barrier or atomic instructions (as in the example) must be used. On some systems, such as IA-64, there are special "unlock" instructions which provide the needed memory ordering.
To reduce inter-CPU bus traffic, code trying to acquire a lock should loop reading without trying to write anything until it reads a changed value. Because of MESI caching protocols, this causes the cache line for the lock to become "Shared"; then, remarkably, there is no bus traffic while a CPU waits for the lock. This optimization is effective on all CPU architectures that have a cache per CPU, because MESI is so widespread.
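Neither optimization is tied to assembly. As a rough C++11 sketch (not part of the original article), a test-and-test-and-set lock that spins on plain reads and unlocks with an ordinary release store might look like this:
#include <atomic>

class Spinlock {
    std::atomic<int> locked{0};              // 1 = locked, 0 = unlocked
public:
    void lock() {
        while (locked.exchange(1, std::memory_order_acquire) != 0) {
            // Spin on a read-only load so the cache line stays "Shared"
            // until another CPU releases the lock.
            while (locked.load(std::memory_order_relaxed) != 0) {
                // optionally insert an architecture-specific pause/yield hint here
            }
        }
    }
    void unlock() {
        // A release store is sufficient; no atomic exchange is needed.
        locked.store(0, std::memory_order_release);
    }
};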
Alternatives
The primary disadvantage of a spinlock is that, while waiting to acquire a lock, it wastes time that might be productively spent elsewhere. There are two ways to avoid this:
1. Do not acquire the lock. In many situations it is possible to design data structures that do not require locking, e.g. by using per-thread or per-CPU data and disabling interrupts.
2. Switch to a different thread while waiting. This typically involves attaching the current thread to a queue of threads waiting for the lock, followed by switching to another thread that is ready to do some useful work. This scheme also has the advantage that it guarantees that resource starvation does not occur as long as all threads eventually relinquish locks they acquire and scheduling decisions can be made about which thread should progress first. Spinlocks that never entail switching, usable by real-time operating systems, are sometimes called raw spinlocks.[2]
Most operating systems (including Solaris, Mac OS X and FreeBSD) use a hybrid approach called "adaptive mutex". The idea is to use a spinlock when trying to access a resource locked by a currently-running thread, but to sleep if the thread is not currently running. (The latter is always the case on single-processor systems.)[3]
References
1. ^ Silberschatz, Abraham; Galvin, Peter B. (1994). Operating System Concepts (4th ed.). Addison-Wesley. pp. 176–179. ISBN 0-201-59292-4.
2. ^ Jonathan Corbet (9 December 2009). "Spinlock naming resolved". LWN.net. Retrieved 14 May 2013.
3. ^ Silberschatz, Abraham; Galvin, Peter B. (1994). Operating System Concepts (4th ed.). Addison-Wesley. p. 198. ISBN 0-201-59292-4.
blob: fa8927b3c28f1ede7825c112b33f3e966aca27c4
type Arr = @stride(16) array<f32, 2>;
struct buf0 {
x_GLF_uniform_float_values : Arr;
};
type Arr_1 = @stride(16) array<i32, 4>;
struct buf1 {
x_GLF_uniform_int_values : Arr_1;
};
@group(0) @binding(0) var<uniform> x_6 : buf0;
@group(0) @binding(1) var<uniform> x_10 : buf1;
var<private> x_GLF_color : vec4<f32>;
fn main_1() {
var v0 : vec4<f32>;
var v1 : vec4<f32>;
var a : i32;
var c : i32;
let x_41 : f32 = x_6.x_GLF_uniform_float_values[1];
v0 = vec4<f32>(x_41, x_41, x_41, x_41);
let x_44 : f32 = x_6.x_GLF_uniform_float_values[0];
v1 = vec4<f32>(x_44, x_44, x_44, x_44);
let x_47 : i32 = x_10.x_GLF_uniform_int_values[1];
a = x_47;
loop {
let x_52 : i32 = a;
let x_54 : i32 = x_10.x_GLF_uniform_int_values[0];
if ((x_52 < x_54)) {
} else {
break;
}
let x_58 : i32 = x_10.x_GLF_uniform_int_values[3];
c = x_58;
loop {
let x_63 : i32 = c;
let x_65 : i32 = x_10.x_GLF_uniform_int_values[2];
if ((x_63 < x_65)) {
} else {
break;
}
let x_68 : i32 = c;
let x_69 : i32 = clamp(x_68, 0, 3);
let x_71 : f32 = x_6.x_GLF_uniform_float_values[1];
let x_73 : f32 = v0[x_69];
v0[x_69] = (x_73 - x_71);
let x_77 : i32 = x_10.x_GLF_uniform_int_values[1];
let x_79 : i32 = x_10.x_GLF_uniform_int_values[3];
if ((x_77 == x_79)) {
let x_83 : i32 = a;
let x_85 : f32 = x_6.x_GLF_uniform_float_values[1];
let x_87 : f32 = x_6.x_GLF_uniform_float_values[1];
let x_89 : f32 = x_6.x_GLF_uniform_float_values[1];
let x_91 : vec4<f32> = v0;
let x_93 : i32 = a;
v1[x_83] = smoothStep(vec4<f32>(x_85, x_87, x_89, 3.0), vec4<f32>(1.0, 1.0, 1.0, 1.0), x_91)[x_93];
}
continuing {
let x_96 : i32 = c;
c = (x_96 + 1);
}
}
continuing {
let x_98 : i32 = a;
a = (x_98 + 1);
}
}
let x_101 : f32 = v1.x;
let x_103 : f32 = x_6.x_GLF_uniform_float_values[0];
if ((x_101 == x_103)) {
let x_109 : i32 = x_10.x_GLF_uniform_int_values[1];
let x_112 : i32 = x_10.x_GLF_uniform_int_values[3];
let x_115 : i32 = x_10.x_GLF_uniform_int_values[3];
let x_118 : i32 = x_10.x_GLF_uniform_int_values[1];
x_GLF_color = vec4<f32>(f32(x_109), f32(x_112), f32(x_115), f32(x_118));
} else {
let x_122 : i32 = x_10.x_GLF_uniform_int_values[3];
let x_123 : f32 = f32(x_122);
x_GLF_color = vec4<f32>(x_123, x_123, x_123, x_123);
}
return;
}
struct main_out {
@location(0)
x_GLF_color_1 : vec4<f32>;
};
@stage(fragment)
fn main() -> main_out {
main_1();
return main_out(x_GLF_color);
}
Does Snap Score Update Instantly
Snapchat, the popular multimedia messaging app, has captivated users with its ephemeral nature and unique features since its inception. One of the intriguing aspects for users is the Snapchat Score, a numerical representation of their activity on the platform. However, the question often arises: does the Snapchat Score update instantly? To unravel the mystery behind this digital metric, we delve into the inner workings of Snapchat’s scoring system.
The Basics of Snapchat Score:
Before we delve into the update mechanism, let’s understand what Snapchat Score is. Your Snapchat Score is the sum of all the snaps you’ve sent and received, along with additional points for other activities on the app. These activities include posting stories, sending and receiving snaps, and using various features like filters and lenses. Essentially, it’s a numerical representation of your engagement with the platform.
Does Snapchat Score Update Instantly?
The answer to whether Snapchat Score updates instantly is not as straightforward as one might think. The platform uses a complex algorithm to calculate scores, and the process involves various factors. While some aspects of the Snapchat Score update in real-time, others may experience delays.
Snaps and Chats:
When you send or receive a snap or chat, your Snapchat Score is immediately updated. This real-time update is one of the instant aspects of the scoring system. Whether it’s a photo, video, or text-based communication, the moment it’s sent or received, your score is adjusted accordingly.
Stories:
Snapchat Stories are an integral part of the user experience. Each time you post a snap to your story, your Snapchat Score increases. The update is not instantaneous but occurs relatively quickly. The delay might be a few minutes, as Snapchat processes and adds the points to your score.
Other Factors:
Apart from direct interactions, Snapchat Score considers other factors like using filters, lenses, and special features. These activities also contribute to your score but may not update instantly. The delay in updating these specific actions might be due to the need for additional processing on the server side.
Score Refresh:
Snapchat may not display your updated score immediately after an activity. Users often report delays in seeing changes in their scores, leading to confusion. This delay is due to Snapchat’s periodic score refresh mechanism. The app doesn’t constantly update scores in real-time to reduce server load and optimize performance.
Factors Influencing Score Update Time:
Several factors can influence the time it takes for your Snapchat Score to update:
a. Server Load: The overall server load on Snapchat’s end can affect how quickly scores are updated. During peak usage times, delays may occur.
b. Internet Connection: Your internet connection speed can also impact the speed at which your Snapchat Score updates. A slow or unstable connection may result in delays.
c. App Version: Using an outdated version of the Snapchat app might contribute to delayed score updates. Ensuring you have the latest version can help optimize performance.
Conclusion:
While some aspects of your Snapchat Score update instantly, others may experience delays. The platform employs a sophisticated algorithm that considers various activities to calculate your score. The periodic refresh mechanism, server load, and other factors contribute to the time it takes for your score to update.
Understanding the dynamics behind Snapchat Score updates enhances the user experience. Users should be patient and aware that not all activities on the platform reflect in their scores immediately. As Snapchat continues to evolve, the intricacies of its scoring system may see further refinement, potentially influencing the speed of score updates.
Qurrat
How to derive value from another variable and use it to read master data?
Former Member
0 Kudos
Hi,
I have following case:
First I need to read the value from the Cost Center variable (ZIPCC) and then I need to use that value to determine the home currency of that cost center from the cost center master data table (field: OBJ_CURR).
I know the basic idea of how to do this, but I don't seem to get the syntax correct. Could someone point me in the right direction?
So the objects in play are:
ZIPCC = Cost center selection variable (mandatory, single value)
ZIPCUR = Cost Center Currency variable (customer exit, single value)
0COSTCENTER
0OBJ_CURR = Field in cost center master data
Help will be greatly appreciated!
-m
Accepted Solutions (1)
Former Member
0 Kudos
Hi,
Please try this and modify based on your requirement:
D1 Like /BIC/0OBJ_CURR.
WHEN 'ZIPCUR'.
if I_STEP = 2.
loop at i_t_var_range into lt where vnam = 'ZIPCC'.
Select single OBJ_CURR into D1 from /BI0/PCOSTCENTER where COSTCENTER = i_t_var_range-low.
clear l_s_range.
l_s_range-low = D1.
l_s_range-sign = 'I'.
l_s_range-opt = 'EQ'.
append l_s_range to e_t_range.
endloop.
endif.
Regards,
Kams
Former Member
0 Kudos
Hi,
Thank you very much for your example I can see that this logic works! However I'm experiencing trouble in selecting the 0OBJ_CURR
This kind of declaration isn't working: "D1 Like /BIC/0OBJ_CURR."
And the same goes for the Selection:
"Select single OBJ_CURR into D1 from /BI0/PCOSTCENTER where COSTCENTER = i_t_var_range-low."
This is where my syntax also went wrong.
0OBJ_CURR is a unit, is there a special way to declare it and use it in the code?
(and 0CURRENCY is a reference unit for 0OBJ_CURR).
Do you have adivce on this?
-miikka
Former Member
0 Kudos
Hi,
Check the /BI0/PCOSTCENTER table and see what the field name for OBJ_CURR is. You have to change it a little bit based on your table structure. I gave you the logic.
Change:
D1 Like /BIC/PCOSTCENTER-OBJ_CURR or
D1 Type /BI0/OIOBJ_CURR
For
"Select single OBJ_CURR into D1 from /BI0/PCOSTCENTER where COSTCENTER = i_t_var_range-low."
Check the table /BI0/PCOSTCENTER and see what is the field name for 0OBJ_CURR and replace it in syntax.
Regards,
Kams
Former Member
0 Kudos
ok,
The problem solved!
Thanks a lot for your tips Kams!
The problem was in the master data, there were some duplicate records and I happened to test with one of those duplicate rows...
Great work, points assigned.
-m
Edited by: Miikka Åkerman on Jun 8, 2009 6:49 PM
Answers (0)
Headers not pushing on a true statement
horan1616
03-17-2010, 05:05 PM
I am setting up a basic registration page. I have been out of coding for a little while now. I basically wanted to set up checks for valid data on the registration page, however it just submits the query without any of the checks taking effect. Any ideas?
<?php
require '/include/headerdb.php';
$username = $_POST['username'];
$email = $_POST['email'];
$fname = $_POST['fname'];
$lname = $_POST['lname'];
$vcode = rand(1000000000, 9999999999);
$password = $_POST['password'];
$vpassword = $_POST['vpassword'];
if ($password != $vpassword) {
header("Location: register.php?error=1");
}
if ($username == "") {
header("Location: register.php?error=4");
}
if ($email == "") {
header("Location: register.php?error=5");
}
if ($fname == "") {
header("Location: register.php?error=6");
}
if ($lname == "") {
header("Location: register.php?error=7");
}
if ($password == "") {
header("Location: register.php?error=8");
}
$verquery = mysql_query("SELECT * FROM users");
while ($verarray = mysql_fetch_array($verquery)) {
if ($username == $verarray['username']) {
header("Location: register.php?error=2");
}
if ($email == $verarray['email']) {
header("Location: register.php?error=3");
}
}
$newquery = mysql_query("INSERT INTO users (username, password, fname, lname, vcode, email)
VALUE ('$_POST[username]','$_POST[password]','$_POST[fname]','$_POST[lname]','$vcode','$email')");
if (!$newquery) {
header("Location: register.php?error=3");
} else {
header("Location: regcomplete.php");
}
?>
Fou-Lu
03-17-2010, 05:17 PM
You're not doing anything to stop it from processing. Any call to a header() only pushes the header onto the stack; it doesn't do anything to stop the page from processing. If you want it to halt processing at that point, call an exit or die.
The above can be simplified by only providing the error code associated with it to a tracked variable, and if the tracked variable is > 0, push a header for the location and call exit.
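A minimal sketch of that tracked-error-code approach (not part of the original thread, reusing the same register.php error codes) could look like this:
<?php
$error = 0;

if ($_POST['password'] !== $_POST['vpassword']) {
    $error = 1;
} elseif ($_POST['username'] === '') {
    $error = 4;
} elseif ($_POST['email'] === '') {
    $error = 5;
}

if ($error > 0) {
    header("Location: register.php?error=" . $error);
    exit; // stop here so the INSERT below never runs
}

// ...safe to run the INSERT query at this point...
?>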
horan1616
03-17-2010, 05:25 PM
THANK YOU!
Completely forgot about having to use die(header(***));
:thumbsup::thumbsup::thumbsup:
Fou-Lu
03-17-2010, 05:34 PM
Do not use die(header(...)). It won't actually trigger errors since it's not wrong, but it doesn't make any logical sense. Die should only have an argument if it's either a string or an integer indicating an error code. Header is always void. Instead, you should be running with a:
header("Location: ....");
exit(0); // 0 is the default, can be called as: exit; or exit(); as well.
Since your script terminates successfully (which coincidentally is the same result you'd get by wrapping a header with a die).
miaoYu (translation in progress)
How to implement a programming language 6 (Wrapping up)
Original article: lisperator.net
Wrapping up
In the interest of actually finishing this book howto, I moved forward and added a few (non-essential, but good-to-have) features and fixes to the λanguage without describing their implementation here. Download the full code below.
Download lambda.js
The program is runnable with NodeJS. It reads the program in λanguage from STDIN and compiles and executes it. It produces the compiled result at STDERR and the program output at STDOUT. Example usage:
cat primitives.lambda program.lambda | node lambda.js
The final touches are:
• Support for the negation operator (!). It's non-essential because it can be easily implemented as a function:
not = λ(x) if x then false else true;
not(1 < 0) # ⇒ true
However, having a dedicated AST node for it allows us to generate more efficient code (a function call implies the GUARD-s and whatnot).
• I added a js:raw keyword. Using it you can insert arbitrary JavaScript code in the output (only complete expressions, not statements). Example use case is to easily define primitives from the λanguage:
length = js:raw "function(k, thing){
k(thing.length);
}";
arrayRef = js:raw "function(k, array, index){
k(array[index]);
}";
arraySet = js:raw "function(k, array, index, newValue){
k(array[index] = newValue);
}";
array = js:raw "function(k){
k([].slice.call(arguments, 1));
}";
I configured my syntax highlighter to use a reddish color for this keyword, because it's dangerous. Maybe not quite, but you have to know the implications of using it: the code you pass to js:raw will make it untouched and unchecked into the output JS. For instance the optimizer won't be able to see that you're accessing the local variable below, and it'll drop x = 10:
dumb = λ() {
let (x = 10) js:raw "x + x";
};
dumb = function(β_K1) {
β_K1(x + x);
};
That's not a bug. It's supposed to happen.
• Generate better code for boolean expressions — make_js, as we wrote it, could easily produce output like (a < 0 ? true : false) !== false, but that can obviously be simplified as just a < 0. This probably doesn't add speed, but at least the output code looks less stupid.
• Fixed the semantics of && and || operators such that false is the only falsy value in our language.
• It generates var declarations for globals and evaluates the final code under "use strict" (there appears to be a 5-10% speed improvement in strict mode).
Are we even close to a real language?
Believe it or not, λanguage is pretty close to Scheme. I promised you we won't be implementing a Lisp, and I kept my word. But the bigger part of the job is done: we have decent CPS transformer and optimizer. If we wanted to implement Scheme, all that's left is writing a Scheme parser that produces a compatible AST, and a pre-compiler pass to macro-expand. And a bunch of primitives for the core library. That shouldn't be too much work.
TODO
If we'd like to continue working on λanguage, the following should be on the radar.
Variable names
The JavaScript generator leaves variable names as they are, but that's generally not a good idea (not to mention it's a plain bug, since we allow in identifier names characters that JavaScript does not). We at least should prefix globals and replace illegal characters. The prefix I'm thinking about is “λ_”.
Variable arguments lists
Any practical language will need something like JavaScript's arguments. We could easily add some syntax for it, for example:
foo = λ(first, second, rest...) {
## rest here is an array with arguments after second
};
but wait, what even _is_ an array in our λanguage? (that's next on the list).
It might seem tempting to think that we can already use the “arguments” name (since we keep the same variable names in JS), but that won't work properly: both to_cps and the optimizer will assume it's a global, possible mess resulting.
To implement the syntax above without sacrificing much code size, we could use the GUARD function. Example output:
foo = function CC(first, second) {
var rest = GUARD(arguments, CC, 2); // returns arguments starting at index 2
};
Related to this feature, we also need an equivalent to Function.prototype.apply.
Arrays and objects
These are easy to define as primitive functions, as we did with js:raw above. However, implementing them at the syntax level would allow generating more efficient code, as well as giving us a familiar syntax; a[0] = 1 is no doubt nicer than arraySet(a, 0, 1).
One thing I'd like to avoid is ambiguity. For instance, in JavaScript the curly brackets are used both for representing bodies of code, and literal objects. The rule is “when open curly bracket occurs in statement position, then it's a code block; when in expression position, it's an object literal”. But we don't have statements in λanguage (which is actually a feature), and curly brackets denote a sequence of expressions. I'd like to keep it that way, so I wouldn't use the { ... } notation for object literals.
The “dot” notation
Related to the previous one, we should support the dot notation for accessing object properties. That's too ubiquitous to be ignored.
I'd expect it to be somewhat challenging to support “methods” like JavaScript does (i.e. the this keyword). The reason for this is that all our functions are transformed to CPS (and all function calls will insert the continuation as first argument). If we were to support JS-like methods, how could we know that we're calling a function in CPS (i.e. written in λanguage) and not a straight function (i.e. from some JS library)? This requires some thought.
Syntax (separators)
The requirement to separate expressions with semicolons in "prog" nodes is a bit too tight. For example the following is syntactically invalid, according to the current parser rules:
if foo() {
bar()
} # ← error
print("DONE")
The problem is on the marked line. Even though it ends with a closing curly bracket, there should be a semicolon following it (because the if is really an expression, and it ends at that closing bracket). That's not quite intuitive when coming from JavaScript; it might seem preferable to relax the rules and make the semicolon optional after an expression that ends with a curly bracket. But it's tricky because the following is also a valid program:
a = {
foo();
bar();
fib
}
(6);
Result of which would be to call foo(), bar() and then put the result of fib(6) into the variable a. Silly syntax, but you know what, most infix languages suffer from such weirdness, for example the following is syntactically a valid JS program; you get no parse error if you try it, although there will be obviously a run-time error when you call foo():
function foo() {
a = {
foo: 1,
bar: 2,
baz: 3
}
(10);
}
Exceptions
We could provide an exception system on top of reset and shift operators, or other primitives.
Moving on…
Even without the features listed in TODO, our λanguage is pretty powerful and I'll conclude this document with some samples comparing how you'd implement trivial programs in NodeJS versus λanguage. Read on.
© Mihai Bazon 2012 - 2017
Proudly NOT powered by WordPress.
Humbly powered by Common Lisp.
I have the following model:
class Model_GrantMinimal extends Model_Table {
public $table='grant';
function init() {
parent::init();
$this->hasOne('User');
$this->getField('id')->hidden(true);
$this->getField('user_id')->hidden(true);
$this->addField('grant_number');
$this->addField('grant_name');
}
}
And inside the page I have the following code:
$grant=$this->add('Model_GrantMinimal');
$grant->load($id);
$user=$grant->ref('user_id');
$field = $grantForm->addField('Dropdown','Manager');
$field->setModel($user);
$field
->validateNotNull()
->add('Icon',null,'after_field')
->set('arrows-left3')
->addStyle('cursor','pointer')
->js('click',$grantForm->js()->reload())
;
And everything works almost perfectly. How do I make sure the Dropdown ($field in PHP) is linked to the overall form, i.e. when I change the value in the dropdown, that value is passed into $grantForm->onSubmit? And how do I ensure the defaultValue (pre-selected value) of the dropdown is the User that is set by user_id inside GrantMinimal?
I'm loving the framework so far. It's really impressive, and coming from the .NET framework, where MVVM and MVC are so common, especially with the latest WPF-related work, it has been a treat compared to the old way of writing HTML/PHP. It's just taking a while to fully understand what's what.
1 Answer
Figured it out after a couple of hours of debug tracing:
class Model_GrantMinimal extends Model_Table {
public $table='grant';
function init() {
parent::init();
$this->hasOne('User');
$this->getField('id')->hidden(true);
$this->getField('user_id')->hidden(true);
$this->addField('grant_number');
$this->addField('grant_name');
$this->hasOne('User');
$this->getField('user_id')->caption('Manager')->hidden(false);
}
}
As we target more of the experience on touch devices like the iPad, where there is no hover state, we need to focus on making sure interaction works as intended.
For example, a lot of the sites we deploy at Imulus show a drop down navigation menu when a user hovers over a top level navigation item. Very standard interaction. On the iPad this interaction is hit-or-miss as the hover state on an iPad is actually a light tap, the click state being a more decisive tap. Well, it turns out in some instances the iPad will never recognize the first tap as a hover and will always treat it as a click — forcing the user to go directly to that page instead of seeing the hover menu. After some testing, we’ve found the fix.
See below for the code, the basic gist is we need to force a display change on the drop-down element on the :hover state.
Code that doesn’t work:
ul#nav li ul.drop-down {
padding: 20px 10px;
width: auto;
position: absolute;
top: 100%;
left: -9999px;
}
ul#nav li:hover ul.drop-down {
left: 0;
}
Code that fixes the issue (difference in bold):
ul#nav li ul.drop-down {
display: none;
padding: 20px 10px;
width: auto;
position: absolute;
top: 100%;
left: -9999px;
}
ul#nav li:hover ul.drop-down {
left: 0;
display: block;
}
It’s true that this makes the previous replacement technique (left:-9999px) irrelevant, but it works.
SeqLock
A sequential lock (SeqLock) is a common technique used to protect data that is read frequently and updated rarely. Writers are required to take an exclusive lock to mutate the data. Reads are effectively lock-free and optimistic. The data protected by a SeqLock usually needs to be trivially copy constructible.
#include <atomic>
#include <cstdint>

// shared state
std::atomic<uint8_t> atom_{0}; // sequence counter: odd means a write is in progress, even otherwise
int data_;                     // the protected data

// writer
void write(int data) {
  while (true) {
    uint8_t cur = atom_.load(std::memory_order::relaxed);
    if (cur % 2 == 0) {
      // no other writer is trying to perform a write; try to take the spin lock
      bool locked = atom_.compare_exchange_weak(
          cur /* expected */,
          cur + 1 /* set it to an odd number */,
          std::memory_order::acquire /* mem order on success */,
          std::memory_order::relaxed /* mem order on failure */);
      if (locked) {
        data_ = data; // no data race due to mutual exclusion provided by the spin lock
        atom_.store(cur + 2, std::memory_order::release); // back to even: write finished
        return;
      }
    }
  }
}

// reader
int read() {
  int data;
  while (true) {
    uint8_t begin = atom_.load(std::memory_order::acquire);
    data = data_; // unprotected access; benign data race; make a copy
    std::atomic_thread_fence(std::memory_order::acquire); // keep the copy ordered before the re-check
    uint8_t end = atom_.load(std::memory_order::relaxed);
    if (begin == end && begin % 2 == 0) {
      // if the counter is even (no writer is writing) and didn't
      // change in the course of the read, it's safe to return data
      return data;
    }
  }
}
Notice that we used a single std::atomic to provide both a spin lock (mutual exclusion for writers) and a sequence lock (optimistic reads) for the readers.
There can be a data race when reading, i.e. the data = data_; copy, but as long as we don't return the data when a racing write is detected, the race is benign. Unless data is of a non-trivial type T that is not trivially copy constructible; then T's copy constructor might lead to a segfault. So adding static_assert(std::is_trivially_copy_constructible_v<T>, "data must be trivially copy constructible"); is usually a good idea.
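As a quick illustration of how write() and read() might be exercised (a minimal sketch, assuming the snippets above live at namespace scope as written), one writer thread can publish values while a reader thread repeatedly takes consistent snapshots:
#include <thread>
#include <cstdio>

int main() {
  std::thread writer([] {
    for (int i = 1; i <= 1000; ++i) {
      write(i);                  // publishes i under the sequence lock
    }
  });
  std::thread reader([] {
    for (int i = 0; i < 1000; ++i) {
      int snapshot = read();     // always a consistent value, never torn
      std::printf("%d\n", snapshot);
    }
  });
  writer.join();
  reader.join();
}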
Ensure managed disks use a specific set of disk encryption sets for customer-managed key encryption
Error: Managed disks do not use a specific set of disk encryption sets for customer-managed key encryption
Bridgecrew Policy ID: BC_AZR_GENERAL_29
Checkov Check ID: CKV_AZURE_93
Severity: LOW
Managed disks do not use a specific set of disk encryption sets for customer-managed key encryption
Description
Requiring a specific set of disk encryption sets to be used with managed disks gives you control over the keys used for encryption at rest. You are able to select the allowed encryption sets, and all others are rejected when attached to a disk.
Fix - Buildtime
Terraform
• Resource: azurerm_managed_disk
• Argument: disk_encryption_set_id
resource "azurerm_managed_disk" "source" {
name = "acctestmd1"
location = "West US 2"
resource_group_name = azurerm_resource_group.example.name
storage_account_type = "Standard_LRS"
create_option = "Empty"
disk_size_gb = "1"
+ disk_encryption_set_id = "koko"
tags = {
environment = "staging"
}
}
Learning Effective Objective-C - 3
This covers items 13 and 14 of Effective Objective-C.
2016-06-06
13. Use Method Swizzling
In Objective-C, a class's method list maps selector names to the corresponding method implementations; in other words, the method list records the relationship between method names and function pointers, which is what lets the dynamic message-dispatch system find the method that should be invoked. Such a function pointer is called an IMP, and it is generally defined as:
id (*IMP)(id,SEL,...)
So a method of an Objective-C class is actually implemented as a C function. The first parameter of this function is an id, normally self, the second is a SEL, and only then come the parameters declared by the method.
Before looking at how to implement method swizzling, we need to understand a few runtime functions:
BOOL class_addMethod(Class cls, SEL name, IMP imp, const char *types) : adds a method to a class. It can override a method inherited from the superclass, but it cannot replace a method that already exists on the class itself. The last parameter, types, is a runtime type-encoding string. The return value indicates whether the method was added successfully; if the class already has a method with the given name, the addition fails.
IMP method_setImplementation(Method method, IMP imp) : sets a method's implementation. method represents the method's declaration, while the IMP is the concrete function pointer, i.e. the actual implementation. The return value is an IMP representing the implementation that was in place before the call.
void method_exchangeImplementations(Method m1, Method m2) : exchanges the implementations of two methods. Typical method swizzling is built on this function, which effectively performs the following operations:
IMP imp1 = method_getImplementation(m1);
IMP imp2 = method_getImplementation(m2);
method_setImplementation(m1, imp2);
method_setImplementation(m2, imp1);
IMP class_replaceMethod(Class cls, SEL name, IMP imp, const char *types) : judging only by the name you might think this is very similar to method_exchangeImplementations, but the two do not perform the same operation. class_replaceMethod only replaces the implementation of a single method; it does not swap two implementations. If the specified selector already has an implementation, it behaves like method_setImplementation and replaces that selector's implementation with the given IMP; if the class has no implementation for the selector, it behaves like class_addMethod and adds a new method.
Once you understand these runtime operations on methods, the concrete implementation of method swizzling follows easily.
First, we need to add a method to the target class. There are two ways to do this. One is to add the method directly through a category, which suits the case where we already know which class and which method we want to swap. The other is to add the method dynamically with class_addMethod, which is mostly used when we are not sure in advance which classes we will swizzle; in that case we typically use imp_implementationWithBlock to convert a block into an IMP pointer, as follows:
SEL swizzledSelector = NSSelectorFromString(@"Beacon_tableView:didSelectRowAtIndexPath:");
Method originMethod = class_getInstanceMethod([delegate class], @selector(tableView:didSelectRowAtIndexPath:));
IMP swizzleIMP = imp_implementationWithBlock(^(id obj,UITableView *tableView,NSIndexPath *indexPath) {
NSLog(@"click");
[obj performSelector:swizzledSelector withObject:tableView withObject:indexPath];
});
BOOL didAddMethod = class_addMethod([delegate class], swizzledSelector, swizzleIMP, method_getTypeEncoding(originMethod));
if (didAddMethod) {
Method swizzleMethod = class_getInstanceMethod([delegate class], swizzledSelector);
method_exchangeImplementations(originMethod,swizzleMethod);
}
Once the method has been added, we can use method_exchangeImplementations to exchange the two implementations. We generally use method_exchangeImplementations to swap two implementations within the class, rather than class_replaceMethod to replace a single implementation outright, because we often still need the original behaviour, so the original implementation must be kept in the class.
One thing to note: in the code above, inside the replacement implementation we call
[obj performSelector:swizzledSelector withObject:tableView withObject:indexPath];
Here swizzledSelector is the SEL of the method we added, i.e. Beacon_tableView:didSelectRowAtIndexPath:. The point to remember is that after the swizzle, every call to Beacon_tableView:didSelectRowAtIndexPath: runs the implementation that tableView:didSelectRowAtIndexPath: had before the swap, while every call to tableView:didSelectRowAtIndexPath: actually runs the block we added.
The strength of method swizzling is that by replacing methods you can install a hook and perform extra work without the caller noticing. It is commonly used to collect user-behaviour data: as above, by adding a method to every table view delegate through method swizzling, we can intercept and record row-selection events as they happen.
Method swizzling is a dangerous yet useful tool. It is dangerous only if you do not understand it; once you understand how it is implemented and what to watch out for, it is simply a useful tool and no longer dangerous.
14. Understand the Meaning of "Class Objects"
To understand the nature of Objective-C objects: an object is essentially a pointer to a block of memory, usually typed as id, and id is defined like this:
struct objc_object {
Class isa;
};
typedef struct objc_object *id;
So id is a pointer to an objc_object struct, and that struct contains nothing but a Class isa member. This means that for an instance of any class, the first slot of its memory is an isa pointer pointing to the class object that implements it. The Class type itself is defined like this:
typedef struct objc_class *Class;
And objc_class is defined as follows:
struct objc_class {
    Class isa;                              // isa pointer, identifies the type of the class itself
    Class super_class;                      // the superclass
    const char *name;                       // class name
    long version;                           // version information, rarely important
    long info;
    long instance_size;                     // size of the class's instance variables
    struct objc_ivar_list *ivars;           // array of instance variables
    struct objc_method_list **methodLists;  // array of methods
    struct objc_cache *cache;               // method cache, used to speed up method dispatch
    struct objc_protocol_list *protocols;   // array of adopted protocols
}
From this definition we can see how Objective-C uses C to implement a class. The part that deserves the most attention is the isa pointer. Every id object has an isa pointer pointing to the object's class, and that class in turn has its own isa pointer, which points to another class called the metaclass. The metaclass holds the metadata belonging to the class object itself; in other words, class methods are defined in the metaclass. So a class really consists of two objc_class structures: one objc_class stores the instance methods and instance variables, and the other, called the metaclass, stores the class methods. Combined with the inheritance hierarchy, the structure looks like the figure below:
Classes are singletons: for any given class there is exactly one class object and one metaclass.
The NSObject protocol (note that the NSObject class is not the root class of every object in Objective-C) declares two methods related to class objects:
// Checks whether the object is an instance of the given class or of any of its subclasses
- (BOOL)isKindOfClass:(Class)aClass;
// Checks whether the object is an instance of exactly the given class
- (BOOL)isMemberOfClass:(Class)aClass;
To check whether two objects have the same class, you can also compare their class objects directly:
if([objectA class] == [objectB class])
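A small illustration of the difference (Animal and Dog are hypothetical classes invented purely for this example):
// Hypothetical classes, defined only for this example.
@interface Animal : NSObject
@end
@implementation Animal
@end

@interface Dog : Animal
@end
@implementation Dog
@end

Dog *dog = [Dog new];
BOOL isKind = [dog isKindOfClass:[Animal class]];     // YES: Dog is a subclass of Animal
BOOL isMember = [dog isMemberOfClass:[Animal class]]; // NO: dog's class is exactly Dog
BOOL sameClass = ([dog class] == [[Dog new] class]);  // YES: both isa pointers reference the single Dog class object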
Agent Based Semantic Web
January 21, 2009
— GEORGE RZEVSKI, PETR SKOBELEV
The Internet is a vast digital network of computers spreading around the globe. In 2008 the number of people connected to the Internet via servers, desktops and various mobile devices reached 1,463,632,361, which represents 21.9% of population of the world. One could argue that a quarter of all people who populate the Earth live in a global village – they can rapidly communicate with each other, exchange gossip, show photos, trade, provide services and ask each other for help.
The real breakthrough in establishing a useful global network came with the invention of the World Wide Web by Tim Berners-Lee in 1989. The Web is a network of documents in a standard format, stored on interconnected computers. In 2008 it was established that the indexable Web contains at least 63 billion web pages, and Google announced that their search engine had discovered one trillion unique URLs. The significance of the Web is that it enables documents to be linked directly irrespective of their location.
The third stage in moving towards a true global village is well under way. The idea is to build a network of content stored on the web – the Semantic Web – making it possible for machines to understand meaning of data and to satisfy requests from people and machines to use the web content.
Semantics is the study of meaning in communication. The word derives from Greek semantikos "significant", from semaino "to signify, to indicate" and from sema "sign, mark, token". In linguistics it is the study of interpretation of signs as used by agents or communities within particular circumstances and contexts. It has related meanings in several other fields.
Tim Berners-Lee originally expressed his vision of the semantic web as follows:
“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize” (Tim Berners-Lee, 1999).
Emergent Intelligence Technology Corporation has developed a version of intelligent agents mentioned by Tim Berners-Lee, agents capable of semantic analysis of web document content. The agent-based method for defining semantics enables computers to understand contents of documents written in a natural language such as English.
Possible applications of semantic analysis are numerous and include:
• Written communication between people and computers
• Written communication among computers
• Software translators
• Text referencing engines
• Semantic search engines
• Auto-abstracting engines
• Annotation and classification systems
• Semantic document-flow management systems
Despite a considerable research effort in areas such as computer linguistics, artificial intelligence and neural networks the problem of text understanding by computers has not been effectively solved. The reason may well be that the currently proposed solutions to this problem are strictly centralised, sequential and static. In contrast, the method described in this article is based on the concept of autonomous software agents dynamically co-operating, competing or arguing with each other and, through a process of pro-active negotiations, refining tentative semantic solutions until an agreed semantics of the text is established.
The main idea of the new approach is that a software agent is assigned to each word of the text under consideration. Agents have access to a comprehensive repository of knowledge about possible meanings of words in the text and engage in negotiation with each other until a consensus is reached on the meanings of each word and each sentence. In some cases the method may discover several contradictory meanings of a sentence. The conflict is then resolved by an agent-triggered consultation with the user and consequent updating of the repository of knowledge. To simplify the process of extracting meanings, the method performs an initial morphological and syntactic analysis of the text.
Definitions
Key concepts of the proposed method are as follows.
An Agent is a software object capable of contributing to the accomplishment of a task by
• Accessing domain knowledge
• Reasoning about it’s task
• Composing meaningful messages
• Sending them to other agents or humans
• Interpreting received messages
• Making decisions based on domain knowledge and collected information
• Acting upon decisions in a meaningful manner
A Multi-Agent System is a software system consisting of agents competing or co-operating with each other with a view to accomplishing system tasks. The main principle of achieving goals within such system is a negotiation among agents, aimed at finding a balance between many different interests of individual agents.
Ontology is a conceptual description of a domain of the Universe under consideration. Concepts are organised in terms of objects, processes, attributes and relations. Values defining instances of concepts are stored in associated databases. Concepts and values together form the domain knowledge.
A Syntactic Descriptor is a network of words linked by syntactic relations representing a grammatically correct sentence.
A Semantic Descriptor is a network of grammatically and semantically compatible words, which represents a computer readable interpretation of the meaning of a text. If semantic ontology describes all possible meanings of words in a domain, a semantic descriptor describes the meaning of a particular text.
Self-organisation is the capability of a system to autonomously, ie, without human intervention, modify existing and/or establish new relationships among its components with a view to increasing a given value or recovering from a disturbance, such as, an unexpected addition or subtraction of a component. In the context of text understanding any autonomous change of a link between two agents representing different meanings of words is considered as a step in the process of self-organisation.
Evolution is the capability of a system to autonomously modify its components and/or links in response, or in anticipation of changes in its environment. In the context of text understanding any autonomous update of Ontology based on the newly acquired information is considered as a step in the process of evolution.
The Agent-Based Method for Semantic Analysis
The method consists of the following four steps:
1. Morphological analysis
2. Syntactic analysis
3. Semantic analysis
4. Pragmatics
The text is divided into sentences. Sentences are fed into the meaning extraction process one by one.
Morphological Analysis
1. An agent is assigned to each word in the sentence
2. Word Agents access Ontology and acquire relevant knowledge on morphology
3. Word Agents execute morphological analysis of the sentence and establish characteristics of each word, such as gender, number, case, time, etc.
4. If morphological analysis results in polysemy, ie, a situation in which some words could play several roles in a sentence (a noun or adjective or verb), several agents are assigned to the same word each representing one of its possible roles
Syntactical Analysis
5. Word Agents access Ontology and acquire relevant knowledge on syntax
6. Word Agents execute syntactical analysis where they aim at identifying the syntactical structure of the sentence. For example, a Subject searches for a Predicate of the same gender and number, and a Predicate looks for a suitable Subject and Objects. Conflicts are resolved through a process of negotiation. A grammatically correct sentence is represented by means of a Syntactic Descriptor
7. If results of the syntactical analysis are ambiguous, ie, several variants of the syntactic structure of the sentence under consideration are feasible, each feasible variant is represented by a different Syntactic Descriptor
Semantic Analysis
8. Word Agents access Ontology and acquire relevant knowledge on semantics
9. Each grammatically correct version of the sentence under consideration is subjected to semantic analysis. This analysis is aimed at establishing the semantic compatibility of words in each grammatically correct sentence. Word Agents learn from Ontology possible meanings of words that they represent and by consulting each other attempt to eliminate inappropriate alternatives
10. Once agents agree on a grammatically and semantically correct sentence, they create a Semantic Descriptor of the sentence, which is a network of concepts and values contained in the sentence
11. If a solution that satisfies all agents cannot be found, agents compose a message to the user explaining the difficulties and suggesting how the issues could be resolved
12. Each new grammatically and semantically correct sentence generated by the steps 1 – 11 is checked for semantic compatibility with Semantic Descriptors of preceding sentences. In the process agents may decide to modify previously agreed semantic interpretations of words or sentences (self-organisation)
13. When all sentences are processed, the final Semantic Descriptor of the whole document is constructed thus providing a computer readable semantic interpretation of the text
Pragmatics
14. Word Agents access Ontology and acquire relevant knowledge on pragmatics, which is closely related to the application at hand
15. At this stage agents consider their application-oriented tasks and decide if they need to execute any additional processes. For example, if the application is a Person – Computer Dialog, agents may decide that they need to ask the user to supply some additional information; if the application is a Search Engine, agents will compare the Semantic Descriptor of the search request with Semantic Descriptors of available search results. If the application is a Classifier, agents will compare Semantic Descriptors of different documents and form groups of documents with semantic proximity.
Let us recapitulate main features of the proposed method.
• Decision making rules are specified in ontology, which incorporates general knowledge on text understanding, language-oriented rules and specific knowledge on the problem domain
• Every word in the text under consideration is given the opportunity to autonomously and pro-actively search for its own meaning using knowledge available in ontology
• Tentative decisions are reached through a process of consultation and negotiation among all Word Agents
• The final decision on the meaning of every word is reached through a consensus among all Word Agents
• Semantic Descriptors are produced for individual sentences and for the whole text
• The extraction of meanings follows an autonomous trial-and-error pattern (selforganisation)
• The process of meaning extraction can be regulated by modifying ontology
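To make the idea concrete, here is a deliberately simplified, purely illustrative Python sketch. It is not the authors' implementation; ToyOntology, meanings() and compatible() are invented stand-ins for the knowledge repository described above, and a real negotiation would also cover morphology, syntax and pragmatics:
class ToyOntology:
    def __init__(self, meanings, compatible_pairs):
        self._meanings = meanings             # word -> set of candidate concepts
        self._compat = set(compatible_pairs)  # pairs of concepts that may co-occur

    def meanings(self, word):
        return self._meanings.get(word, {word})

    def compatible(self, a, b):
        return (a, b) in self._compat or (b, a) in self._compat


class WordAgent:
    def __init__(self, word, ontology):
        self.word = word
        self.candidates = set(ontology.meanings(word))  # tentative meanings

    def negotiate(self, neighbours, ontology):
        """Drop candidate meanings that no neighbouring agent can agree with."""
        before = set(self.candidates)
        self.candidates = {
            m for m in self.candidates
            if all(any(ontology.compatible(m, n) for n in nb.candidates)
                   for nb in neighbours)
        }
        return self.candidates != before      # True if this agent changed its mind


def extract_semantics(words, ontology):
    agents = [WordAgent(w, ontology) for w in words]
    changed = True
    while changed:                            # negotiate until a consensus (fixed point) is reached
        changed = any([agent.negotiate([other for other in agents if other is not agent], ontology)
                       for agent in agents])
    return {agent.word: agent.candidates for agent in agents}   # a crude semantic descriptor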
An Example of Semantic Analysis
The proposed method has been applied to the problem of searching for relevant abstracts.
Fig. 1 shows a published abstract of a scientific paper, which needs to be converted into a computer readable format using the method described in this article.
Fig.1 A text of a selected article
The semantic descriptor of the title of the abstract is shown in Fig. 2a. Note that the sentence has been completely understood by the system: the relations between the gene, the locus and the gene properties have been determined, and their meanings are shown at the bottom of the screen (in biology a locus is by definition a specific site of a particular gene or chromosome; according to the domain ontology, "cloning cassette" is a synonym of the semantic concept "locus").
Fig. 2b shows how a tentative semantic descriptor of the whole text is modified during semantic analysis in a stepwise manner. Blue links indicate connections that were added to the semantic descriptor during the analysis of the last sentence of the text (the underlined sentence from Fig. 1). As a result of the analysis of the last sentence the system discovered some new concepts and new relations between the existing nodes of the descriptor, including a new relation «Have» between the gene and the locus; furthermore, the gene has obtained a new Insert relation, and the relation «Have» has been established between the locus and the new node, operon (in biology an operon is by definition a controllable unit of transcription consisting of a number of structural genes transcribed together; it contains at least two distinct regions: the operator and the promoter; therefore, according to the ontology and the text of the abstract, the semantic descriptor includes the concept "operon").
Fig.2a Semantic descriptors for the title
Fig.2b Semantic descriptors for the text
The final semantic descriptor of the whole abstract is shown in Fig.3
Fig.3 - Semantic descriptor of the abstract from Fig. 1
In addition to creating semantic descriptors for each abstract it is necessary to formulate a semantic descriptor of the enquiry. Fig.4 shows a semantic descriptor of a request to search for abstracts in which an organism is connected with a sequence through the relation Have.
Fig.4 A request to search for abstracts with a particular content
In Fig.5 the best matching abstract is marked in blue; yellow denotes all abstracts, which match the request. All the comparisons are made based on rules specified in ontology. A change in ontology may change the ranking of semantic descriptors.
Fig.5 Comparison of semantic descriptors of analysed abstracts
Acknowledgement
It is our pleasant duty to acknowledge the contributions to the development of this new method for semantic analysis by Dr Igor Minakov who has solved the problem of semantic matching of abstracts to queries described in this article.
Conclusion
Autonomous agents offer an effective way of allocating meanings to words primarily because the search algorithms are replaced by broadcasting of messages. Distributed decision making by agents assigned to words enables fast discovery of context within which text can be understood.
Cody
Solution 1995042
Submitted on 29 Oct 2019
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
Test Suite
Test Status Code Input and Output
1 Pass
filetext = fileread('vmultiply.m'); assert(isempty(strfind(filetext,'regexp'))) assert(isempty(strfind(filetext,'switch')))
2 Pass
v1 = [1 2]; v2 = [5 0 0]; v3_correct = [6 0 0 0]; assert(isequal(vmultiply(v1,v2),v3_correct)) assert(isequal(vmultiply(v2,v1),v3_correct))
v1 = 12 v2 = 500 v3 = 6 0 0 0 v1 = 500 v2 = 12 v3 = 6 0 0 0
3 Pass
v1 = [9 9]; v2 = [9 9]; v3_correct = [9 8 0 1]; assert(isequal(vmultiply(v1,v2),v3_correct)) assert(isequal(vmultiply(v2,v1),v3_correct))
v1 = 99 v2 = 99 v3 = 9 8 0 1 v1 = 99 v2 = 99 v3 = 9 8 0 1
4 Fail
v1 = [8 3 4 5 7 1 6 9 4 0 2 0 2 1 5 8 9 4]; v2 = [1 0 0 6 6 9 4 3 1 8 2 7 0 5 4 5 5]; v3_correct = [8 4 0 1 5 8 5 8 2 5 5 9 5 7 5 5 3 3 8 6 7 6 3 6 6 1 1 1 5 0 1 7 7 0]; assert(isequal(vmultiply(v1,v2),v3_correct)) assert(isequal(vmultiply(v2,v1),v3_correct))
v1 = 8.3457e+17 v2 = 1.0067e+16
Error using dec2base (line 22) First argument must be an array of integers, 0 <= D <= flintmax. Error in vmultiply (line 6) v3 = dec2base(v1*v2,10) - '0' Error in Test4 (line 4) assert(isequal(vmultiply(v1,v2),v3_correct))
PHP Documentation
NumberFormatter::setTextAttribute
numfmt_set_text_attribute
(No version information available, might be only in CVS)
numfmt_set_text_attribute — Set a text attribute
Description
Object oriented style
bool NumberFormatter::setTextAttribute ( integer $attr , string $value )
Procedural style
bool numfmt_set_text_attribute ( NumberFormatter $fmt , integer $attr , string $value )
Set a text attribute associated with the formatter. An example of a text attribute is the suffix for positive numbers. If the formatter does not understand the attribute, U_UNSUPPORTED_ERROR error is produced. Rule-based formatters only understand NumberFormatter::DEFAULT_RULESET and NumberFormatter::PUBLIC_RULESETS.
Parameters
fmt
NumberFormatter object.
attr
Attribute specifier - one of the text attribute constants.
value
Text for the attribute value.
Return Values
Returns TRUE on success or FALSE on failure.
Examples
Example #1 numfmt_set_text_attribute() example
<?php
$fmt = numfmt_create( 'de_DE', NumberFormatter::DECIMAL );
echo "Prefix: ".numfmt_get_text_attribute($fmt, NumberFormatter::NEGATIVE_PREFIX)."\n";
echo numfmt_format($fmt, -1234567.891234567890000)."\n";
numfmt_set_text_attribute($fmt, NumberFormatter::NEGATIVE_PREFIX, "MINUS");
echo "Prefix: ".numfmt_get_text_attribute($fmt, NumberFormatter::NEGATIVE_PREFIX)."\n";
echo numfmt_format($fmt, -1234567.891234567890000)."\n";
?>
Example #2 OO example
<?php
$fmt = new NumberFormatter( 'de_DE', NumberFormatter::DECIMAL );
echo "Prefix: ".$fmt->getTextAttribute(NumberFormatter::NEGATIVE_PREFIX)."\n";
echo $fmt->format(-1234567.891234567890000)."\n";
$fmt->setTextAttribute(NumberFormatter::NEGATIVE_PREFIX, "MINUS");
echo "Prefix: ".$fmt->getTextAttribute(NumberFormatter::NEGATIVE_PREFIX)."\n";
echo $fmt->format(-1234567.891234567890000)."\n";
?>
The above example will output:
Prefix: -
-1.234.567,891
Prefix: MINUS
MINUS1.234.567,891
How to Show Custom Snackbar in Flutter?
snackbar in flutter
A Snackbar is a small information bar that appears briefly at the bottom of the screen. It's used to display short messages that the user doesn't necessarily need to interact with, such as "Changes saved successfully." or "No internet connection." In Flutter, it's easy to show a Snackbar with the ScaffoldMessenger.of(context).showSnackBar() method.
Using Snackbar in Flutter is a quick and easy way to provide feedback to the user. It’s a great alternative to Toast in Flutter, which is a native Android widget because it allows the user to interact with it by dismissing it or performing an action.
Let’s have a sneak peek at Flutter toast vs snackbar.
Flutter Toast vs Snackbar
Toast and Snackbar are both used to display brief messages to the user, but there are some differences between them. Toast is a native Android widget that appears at the bottom of the screen and disappears after a certain period of time. It’s not interactive and the user cannot dismiss it.
On the other hand, Snackbar is a material design element that appears at the bottom of the screen and can be dismissed by the user by swiping it away or tapping on an action button. Snackbar also allows the user to undo an action by tapping on the “Undo” button.
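For example, a Snackbar with an "Undo" action can be built with the SnackBarAction widget; in this minimal sketch the onPressed callback is only a placeholder:
ScaffoldMessenger.of(context).showSnackBar(
  SnackBar(
    content: const Text('Item deleted'),
    action: SnackBarAction(
      label: 'Undo',
      onPressed: () {
        // Placeholder: restore the deleted item here.
      },
    ),
  ),
);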
How to Show Snackbar in Flutter
To show a Snackbar in Flutter, you call ScaffoldMessenger.of(context).showSnackBar(), which displays the Snackbar on the nearest Scaffold. The Scaffold is a widget that provides a framework for organizing the visual structure of your app. It's often used as the top-level container for a screen.
Here’s an example of how to show a Snackbar in Flutter:
ScaffoldMessenger.of(context).showSnackBar(
const SnackBar(
content: Text('This is Snackbar'),
),
);
In this example, the Snackbar will appear at the bottom of the screen and display the message "This is Snackbar".
You can see that here in the screenshot:
Output:
flutter snackbar
How to Show Flutter Snackbar on Top Position?
There is no property on the Snackbar for pinning it to the top of the screen, as there is with Toast. So if you want to show the snackbar at the top, you make it floating and give it a bottom margin computed from the screen height. You can see the code here and the result in the screenshot:
ScaffoldMessenger.of(context).showSnackBar(
SnackBar(
content: const Text('This is snackbar'),
behavior: SnackBarBehavior.floating,
margin: EdgeInsets.only(
bottom: MediaQuery.of(context).size.height - 100,
left: 10,
right: 10,
),
),
);
Output
flutter snackbar top position
Flutter Snackbar Without Context
There are times when we don't have a BuildContext available, depending on the business logic. In that case, we use a scaffoldMessengerKey to show snackbars without needing a context, with just one GlobalKey.
Here is the code.
class SnackBarService {
static final scaffoldKey = GlobalKey<ScaffoldMessengerState>();
static void showSnackBar({required String content}) {
scaffoldKey.currentState?.showSnackBar(SnackBar(content: Text(content)));
}
}
return MaterialApp(
scaffoldMessengerKey: SnackBarService.scaffoldKey, /// Assign Key Here
title: 'Snackbar and Toast',
theme: AppTheme.lightTheme,
home: const HomePage(),
);
////// Call => SnackBarService.showSnackBar(content: 'This is snackbar'); or any custom message you want to display.
Now, let’s move toward changing snackbar color.
How to Change Flutter Snackbar Color
You can customize the color of the Snackbar using the backgroundColor property.
Here's an example of how to show a Snackbar with a custom background color (teal, in this case):
ScaffoldMessenger.of(context).showSnackBar(
const SnackBar(
backgroundColor: Colors.teal,
content: Text('This is snackbar'),
),
);
Output:
flutter snackbar color
Flutter Snackbar Duration
If you're wondering how long the snackbar stays on screen, note that the default snackbar duration is 4 seconds. However, you can specify a different duration using the duration property. Here's an example of how to show a Snackbar that lasts for 8 seconds:
ScaffoldMessenger.of(context).showSnackBar(SnackBar(
  content: Text('This is a Snackbar'),
  duration: Duration(seconds: 8),
));
This displays the snackbar for 8 seconds; you can specify whatever number of seconds you need.
Flutter Floating Snackbar
You can create a floating Snackbar by setting the shape property to RoundedRectangleBorder and the elevation property to 8.0. Here’s an example of how to create a floating Snackbar:
ScaffoldMessenger.of(context).showSnackBar(
  SnackBar(
    content: Text('This is a snackbar'),
    shape: RoundedRectangleBorder(
      borderRadius: BorderRadius.circular(10.0),
    ),
    elevation: 8.0,
  ));
This will create a Snackbar with rounded corners and a shadow.
Conclusion
In this tutorial, we learned about Snackbar in Flutter and how to use it to display brief messages to the user. We also learned about the differences between Toast and Snackbar, and how to customize the duration, color, and shape of the Snackbar. We hope this tutorial helps you show snackbar and give you ways to customize it fully.
Feel free to share your thoughts or queries in the comments below. Or you can hire Flutter developers who would love to assist you to build highly intuitive apps for your business.
Replace a String Within a String, Recursively
I recently needed a substring replacement function for inserting code into a module, by reading the code from a file. Unfortunately, in my case, commas are interpreted as delimiters, and the insertion requires a lot of post formatting. So, I replaced all the commas in the original file with a question mark. That way, when the file is inserted into a module, the ReplaceString function checks each line, and the question mark is replaced with a comma, then inserted into the module. I initially considered using the fConvert function published in “Remove Unwanted Characters” [“101 Tech Tips for VB Developers,” Supplement to VBPJ, February 1999]. I compared the speed of the two functions, ReplaceString and fConvert, in a separate project, using the Windows API GetTickCount function. The recursive function is nearly four times faster than the For…Loop. In situations where a single character needs to be replaced with something different, it’s a good way to go:
Public Function ReplaceString(strT As String) As String
    Dim iposn As Integer
    Dim strF As String
    Dim strR As String
    ' Function replaces one character with another. Using
    ' recursion if the character is found to check if any
    ' more such characters need to be replaced within
    ' the string. strT is the string in which a character
    ' or string in which replacement will take place.
    ' strF is the string which is to be replaced.
    ' strR the new or replacing string.
    strF = "?"
    strR = ","
    iposn = InStr(1, strT, strF)
    If iposn > 0 Then
        Mid(strT, iposn, 1) = strR
        strT = ReplaceString(strT)
    End If
    ReplaceString = strT
End Function
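A quick usage sketch (the sample string is made up; note that because strT is passed ByRef, the VB default, the variable you pass in is modified as well):

Dim strLine As String
strLine = "FieldA? FieldB? FieldC"
' Each "?" is converted back to a comma by the recursive calls
Debug.Print ReplaceString(strLine)   ' Prints: FieldA, FieldB, FieldC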
Learning to *really* love references
by astaines (Curate)
on Jul 29, 2002 at 23:44 UTC
The following code used twice as much memory as necessary - why? And what would you (or did I) do about it...
process($data_ref, \@set_up, $template, *OUT);   # Process

sub process {
    my @data     = @{ shift() };
    my @set_up   = @{ shift() };
    my $template = shift;   # For unpack
    local *FH    = shift;
    foreach my $record (@data) {
        # --do things--
    }
}
The @data array was very big, and the subroutine above actually makes a copy of it. I didn't appreciate this when I first wrote it, but it was slow, and was consuming silly amounts of memory (even by Perl standards). Changing the relevant lines as follows makes it run faster, and it now uses less memory - about half as much.
my $data_ref = shift;   # Big array
foreach my $record (@{$data_ref}) {
    # -- do things more efficiently --
}
I am now a happy perl loving bunny, and I can read in my very big files ;-)
--
Anthony Staines
PS - What's happening here, for monks whose Perl skills are closer to my own, is that I passed the array as a reference ($data_ref) to the subroutine. There, partly from force of habit I was turning it back into an array. This, as was obvious with hindsight, doubled the memory requirements of the program. Now @data was big, 10 MB to 50MB or so, and the program was not working well. Accessing the array, through the reference only, fixed the problem - hence the @{$data_ref} bit.
Re: Learning to *really* love references
by vladb (Vicar) on Jul 30, 2002 at 02:26 UTC
Well, I guess I had that in me from years spent coding C/C++, but always pass references to data instead of replicating it. Undoubtedly this is a rather trivial skill to grasp, and anyone who has spent at least 6 months coding would, I hope, know it.
Perl is actually quite good when it comes to references. Certainly much easier to handle than C pointers for example. Take a look at this snippet:
my ($a, $b, $c) = qw/asdf foo bar/;
s/d/D/g for ($a, $b, $c);
print "$a;$b;$c\n";
Perl is fairly well optimized, and in this for loop no memory gets duplicated. Instead, the s/// operator works on the data stored in the original place (allocated for the a, b, c variables).
There are numerous other examples one could bring up. Not the least is the ease with which you can pass references around in Perl. In fact, I believe this is one of the first basic things that any novice Perl book would mention to its reader. For those who don't have a book, however, I suggest referring to this excellent 'article' on Perl references (written by fair brother dominus).
_____________________
# Under Construction
Re: Learning to *really* love references
by japhy (Canon) on Jul 30, 2002 at 03:49 UTC
Perl really needs better aliasing features. Currently, you can only create an alias with a package variable (using a glob) or in a for-loop. Other methods require you to use tie() or some other less elegant means.
_____________________________________________________
Jeff[japhy]Pinyan: Perl, regex, and perl hacker, who'd like a job (NYC-area)
s++=END;++y(;-P)}y js++=;shajsj<++y(p-q)}?print:??;
While AFAIK you can't alias a scalar directly, you can do aliasing with arrays and hashes using Array::RefElem, and even do aliasing with arrays without anything more than the following utility sub:
sub alias_array { \@_ }
But you are right, perl needs better ways to define and manipulate aliases.
Yves / DeMerphq
---
Writing a good benchmark isnt as easy as it might look.
Monks~
Well, it might not do much now, but Perl6 will support aliasing via the := operator...
That doesn't help you in the short run, but after reading the apocalypses (especially 4) I have been waiting with bated breath for Perl6.
Boots
---
Computer science is merely the post-Turing decline of formal systems theory.
--???
you can only create an alias with a package variable (using a glob) or in a for-loop.
Not quite true, Japhy. In a subroutine call, the elements of @_ are aliased to the actual arguments.
I find that that and the for-loop technique answer about 98% of my aliasing needs.
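A minimal illustration of that @_ aliasing (the variable names are my own):

sub increment { $_[0]++ }   # modifies the caller's variable through @_

my $count = 5;
increment($count);
print $count;               # prints 6 - no copy was made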
Re: Learning to *really* love references
by LanceDeeply (Chaplain) on Jul 30, 2002 at 15:17 UTC
My question is:
What's the commonly accepted syntax to do this? I've never used:
foreach my $record (@{$array_ref}) {
The notation I'm comfortable w/ is:
foreach my $record ( @$array_ref ) {
Is there an accepted standard?
astaines:
since you said you're still learning about references - you might want to try deref'ing them like so:
my $arrayElement = $$arrayRef[0]; my $hashValue = $$hashRef{someKey};
Also, you may want to checkout wantarray for returning array's or their references to the caller.
Hope this helps
my $arrayElement = $$arrayRef[0]; my $hashValue = $$hashRef{someKey};
I prefer:
my $array_element = $array_ref->[0]; my $hash_value = $hash_ref->{some_key};
Using arrow notation makes it easier for me to recognize that the thingys in question are references (I find that the extra leading sigil gets lost in the code soup to my eyes). It also makes doing nested dereferences easier:
my $foo = $AoH_ref->[0]->{some_key};
although you'd be better off doing:
my $hash_ref = $AoH_ref->[0]; my $foo = $hash_ref->{some_key};
in most cases.
--
The hell with paco, vote for Erudil!
:wq
Well...
Long time ago i used to write my code like this:
my $arrayElement = $$arrayRef[0]; my $hashValue = $$hashRef{someKey};
but when I tried to debug or simply read it a few months after coding, it took me a long time to make my brain think in the same weird way as when I wrote the code ;-)
Since then I use
my $arrayElement = ${$arrayRef}[0]; my $hashValue = ${$hashRef}{key};
or
my $arrayElement = $arrayRef->[0]; my $hashValue = $hashRef->{key};
and it depends of the mood ;-)
Anyway, IMHO the $$something notation is the worst to read or understand again after some time... If you code alone it's up to you which one to choose, but when I'm part of a team I just try to be polite, making my code more readable and easier for my co-workers to understand...
Greetz, Tom.
The notation is what I always use - I find it easier to stick to one form throughout. Don't get me wrong I appreciate Perl's flexibility, but consistency makes my life simpler. I make no claim for superiority here! However, I do agree that the $$ notation is hard to follow, and I never use it
--
Anthony Staines
PS -thanks for the tip about deref'ing - I'll try it out next time.
Until you want to do fancy things. What if you have an object that contains a hash or array ref? Ie...
$this = { _hashRef => {}, _arrayRef => [], };
Then you are stuck doing....
$x = ${$this->{'_hashRef'}}->{'blowMeDown'};
@x = reverse @{$this->{'_arrayRef'}};
And that's one to grow on.
Re: Learning to *really* love references
by ash (Monk) on Jul 31, 2002 at 14:42 UTC
I prefer:
For scalars:
$$scalar
For hashes:
$hashref->{$key}
For arrays:
$arrayref->[$index]
IMHO i.e
$hashref->{key}{next}{next}{$_}
is much better to read than:
${$hashref}{key}{next}{next}{$_}
-- Ash/asksh <[email protected]>
Re: Learning to *really* love references
by tadman (Prior) on Jul 31, 2002 at 15:15 UTC
That function isn't very graceful, especially with that repetitive shifting and data duplication. Here's how you could rewrite it:
process($data_ref, \@set_up, $template, $fd_out);

sub process {
    my ($data, $set_up, $template, $fd_out) = @_;
    foreach my $record (@$data) {
        # ...
    }
}
Maybe it's just me, but I find reading function arguments with shift is usually pointless since you can just declare the works in a single line. It's also nice to have the function argument declaration in a similar format to how you call it, so you can see if things match up.
As of Perl 5.6 or thereabouts, you can open a filehandle that is put into a scalar (glob reference) using the open function:
my $fd_out;
open($fd_out, $what_file);
while (<$fd_out>) {
    # ...
}
Or, of course, there is always IO::File:
my $fd_out = IO::File->new($what_file, 'r');
while (<$fd_out>) {
    # ...
}
These filehandles are a lot easier to pass back and forth than the regular globs. If you're even thinking about using them as function arguments, don't use globs.
As a note: ${$foo} and $$foo are equivalent. It's often simpler to just leave off the extra braces since they don't serve any practical purpose. There are occasions like ${$foo->{bar}} where they more appropriate.
Re: Learning to *really* love references
by TGI (Vicar) on Jul 31, 2002 at 20:59 UTC
You can also use sub prototypes to help make things pretty. They'll give you some argument checking capability, and will convert your arrays into refs for you. Here's a simple example:
my @foo = qw(123 456 789);
my @bar = qw(abc def ghi);

sub qux(\@\@*) {
    print join "\t", @_, "\n";
    my $foo = shift;
    my $bar = shift;
    my $FH  = shift;    # the * prototype passes the filehandle as the third argument
    print "foo: ", join "\t", @$foo, "\n";
    print $FH "bar: ", join "\t", @$bar, "\n";
}

qux(@foo, @bar, 'STDERR');
print "\n";
print \@foo . "\t" . \@bar;
Applying this to your code, we get something like (notice how clean the subroutine call is):
# the filehandle name must be quoted if you have strict subs enabled.
process(@data, @set_up, $template, 'OUT');   # Process

sub process (\@\@$*) {
    my $data_ref   = shift();
    my $set_up_ref = shift();
    my $template   = shift;   # For unpack
    my $FH         = shift;   # The * gets us a handle reference
    foreach my $record (@{$data_ref}) {
        # --do things--
    }
}
Check out perlsub's section on prototypes for more info.
Update: Of course prototypes are a bit controversial...
TGI says moo
Two-Factor Authentication (2FA)
Get Additional Protection for Your Data
University of Illinois uses the Duo 2FA service to help protect data with Two-Factor Authentication.
Are you enrolled with Duo?
Visit https://identity.uillinois.edu/ to find out and to enable your device (mobile phone or token) and set preferences.
Protect your information with 2FA. Here’s the Why, When, and How.
UIS is continuing its efforts to protect valuable assets and access by requiring Two-Factor Authentication (2FA) on more systems and services in March 2021.
Why 2FA?
It works.
2FA already helps protect University applications such as Banner, Direct Deposit and other System HR resources. Before implementing 2FA, university payroll was a large target for attackers attempting to steal employee paychecks. Since implementing this technology, attacks on payroll customers have effectively vanished.
The Illinois System experiences about 750 compromised accounts each month. Looking at other academic institutions who have implemented 2FA across their services, it has been proven that compromised accounts can drop to nearly zero.
A password is no longer enough.
Attacks on accounts are increasingly sophisticated. 2FA helps to determine that you are who you say you are and are not someone with a stolen password.
Who and What is Covered by 2FA?
Currently, you are required to use 2FA if you access any of the following applications:
Enterprise applications such as Banner, HRFE/Paris, HR Reporting Portal, and iBuy
Direct deposit
In March 2021, all Springfield campus faculty and staff will be required to use 2FA for services that are protected by Shibboleth and Office 365 (O365).
Shibboleth is used in front of applications such as Canvas, Box, LinkedIn Learning, Qualtrics, and all the apps running on apps.uis.edu (adviseu, attendance, time clock, course evaluations, parking permit, etc.)
O365 includes the Office online applications (Outlook online, Word online, SharePoint, OneNote, etc.) as well as the Office desktop apps such as Outlook, Teams, and more.
Note: Students are not required to use 2FA at this time unless they are enrolled in direct deposit.
How does 2FA work?
Duo Security is the campus provider of 2FA. Once you login with a NetID and password, Duo sends a request to confirm that you are who you say you are via mobile phone notification, phone call, or by another method such as a token. Clicking a button or entering a code informs Duo that you are a legitimate user of campus services. The process takes just a few clicks, taps, or keystrokes. Using the Duo phone app to verify is the fastest method. It works even without a wifi connection and in airplane mode.
What if I don’t want to use my phone for 2FA?
If an employee does not want to use a personal device, they may contact their manager about having their unit acquire a 2FA Token from the WebStore. You can learn more about tokens at https://answers.uillinois.edu/internal/page.php?id=72159
What if I don’t have Cellular or WiFi access?
The DUO mobile app, available for Apple and Android devices, works without any connectivity. You can replace your SIM card, change providers, turn on airplane mode, or travel internationally and the Duo App works. The common “Push” prompt won’t be available, but the App works by generating a short 6 digit code that you can type into the web application prompt.
What vendor can I use to purchase a token?
Only tokens purchased through the U of I Webstore are set up with the private identity and secret key specifically for the University’s 2FA service. The University has a tightly-controlled provisioning process with Yubikey in order to meet the University’s security needs. Only tokens purchased from the Webstore will work as your second factor.
What if I’m locked out?
The NetID Center allows you to set a recovery email address. It is recommended you set this to facilitate recovery. Temporary codes can be sent here in the event your phone is lost or you are otherwise unable to use your normal 2FA device. More information is available at, https://answers.uillinois.edu/internal/page.php?id=76500
Where can I find more information?
The 2FA Knowledge Base has many useful knowledge documents, troubleshooting tips, and frequently asked questions to assist both in signing up and understanding the 2FA service.
Welcome to the in5 Answer Exchange! This is a place to get answers about in5 (InDesign to HTML5) and make requests.
0 votes
Hi, how can I prevent users from saving the publication's images with "Save as", or from dragging an image to download it in the browser? Thank you.
in how_to by (670 points)
1 Answer
0 votes
Because the final product is HTML, the content is pretty open and can easily be downloaded (such as images). You can take steps to protect the content, such as creating an app with it, or putting it on a membership or password-protected site (though the content could still be downloaded).
by (9.6k points)
Entity Framework: Filling in the Gaps (Part 2) – 【可乐不加冰】
Data Loading
A lambda query like the following does not hit the database immediately. The database is only queried when the data is actually needed (for example, taking a row, reading a field, or aggregating); EF's own query methods all return the IQueryable interface.
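The original snippet appears to have been lost from this page, so here is a minimal sketch of the kind of lambda query the text refers to, reusing the TestDB context and Place entity from later in the post:

using (var context = new TestDB())
{
    // Nothing is sent to the database yet; this only builds an IQueryable<Place>
    var query = context.Place.Where(t => t.PlaceID == 9);

    // The query runs only when it is enumerated,
    // e.g. via ToList(), Count(), First() or a foreach loop.
    var places = query.ToList();
}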
Link: notes on the IEnumerable and IQueryable interfaces
Aggregate and terminal operations affect data loading: ToList(), Sum(), Count(), First() and the like make the query execute and load the data immediately.
The Load method on IQueryable
Normally we use ToList or First to pre-load data. In EF you can also use the Load() method to load data explicitly and cache the fetched data in the EF context for later use. It is very similar to ToList(), except that it does not build a list; it just caches the data in the EF context, so the overhead is lower.
using (var context = new TestDB())
{
context.Place.Where(t=>t.PlaceID==9).Load();
}
The method description in Visual Studio:
Lazy Loading
Using the earlier Place and People classes as an example.
The Place class is as follows:
public class Place
{
[Key]
public int PlaceID { get; set;}
public string Provice { get; set; }
public string City { get; set; }
// Navigation property
public virtual List<People> Population { get; set; }
}
The query below does not automatically fetch the data associated with the navigation property (Population):
using (var context = new TestDB())
{
var obj = context.Place.Where(t => t.PlaceID == 9).FirstOrDefault();
}
You can see that Population is null.
Only when the Population object is actually used does EF issue a query against the database.
Of course, the navigation property must be marked virtual for lazy loading to be enabled:
// Navigation property
public virtual Place Place { get; set; }
One thing to watch out for: under lazy loading it is easy to assume the navigation data has already been loaded and then iterate over the navigation property inside a loop, causing many round trips to the database.
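A rough sketch of the pattern to avoid (reusing the entities above):

using (var context = new TestDB())
{
    var places = context.Place.ToList();   // 1 query

    foreach (var place in places)
    {
        // With lazy loading, each access to place.Population here
        // fires an extra query: the classic N+1 problem.
        var population = place.Population.Count;
    }
}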
Eager Loading
Besides using aggregate functions (Sum and so on), as described earlier, to load data immediately, you can also use the Include method.
For the query above, to fetch the Place together with its associated Population data:
using (var context = new TestDB())
{
var obj = context.Place.Where(t => t.PlaceID == 9).Include(p=>p.Population).FirstOrDefault();
}
Transactions
In EF, SaveChanges() runs inside a transaction by default: all operations performed before SaveChanges() is called belong to the same transaction and the same database connection. If you use a single DbContext object, EF's default transaction handling is usually sufficient.
Beyond that, how do you use transactions in the following two situations?
1. Saving data in stages, calling SaveChanges() multiple times
2. Using multiple DbContext objects (avoid this if possible)
Case 1: explicit transactions
using (var context = new TestDB())
{
using (var tran=context.Database.BeginTransaction())
{
try
{
context.Place.Add(new Place { City = "beijing", PlaceID = 11 });
context.SaveChanges();
context.People.Add(new People { Name = "xiaoli" });
context.SaveChanges();
tran.Commit();
}
catch (Exception)
{
tran.Rollback();
}
}
}
Note that if Commit() is not called, the transaction is not committed by default, even when no exception occurs.
Case 2: TransactionScope distributed transactions
• Reference System.Transactions.dll
• Windows must have MSDTC enabled
• TransactionScope also works for the first case; here we only discuss transactions spanning multiple DbContexts
• Complete() must be called, otherwise the transaction is not committed
• Inside the transaction, an error triggers an automatic rollback
using (var tran = new TransactionScope())
{
try
{
using (var context = new TestDB())
{
context.Place.Add(new Place { City = "5555"});
context.SaveChanges();
}
using (var context2 = new TestDB2())
{
context2.Student.Add(new Student { Name="li"});
context2.SaveChanges();
}
throw new Exception();
tran.Complete();
}
catch (Exception)
{
}
}
The code above uses multiple DbContexts within the same transaction, which causes the database connection to be opened and closed multiple times.
Side note
If the multiple DbContexts connect to the same database, you can pass them an already-open database connection object and tell EF not to close the connection when the DbContext is disposed.
Modify the DbContext class by adding an overloaded constructor that takes two parameters:
• the database connection, a DbConnection
• contextOwnsConnection = false (do not close the connection when the DbContext object is disposed):
public class TestDB2 : DbContext
{
    public TestDB2() : base("name=Test")
    {
    }

    public TestDB2(DbConnection conn, bool contextOwnsConnection)
        : base(conn, contextOwnsConnection)
    {
    }

    public DbSet<Student> Student { get; set; }
}
The transaction code is as follows:
using (TransactionScope scope = new TransactionScope())
{
String connStr = ……;
using (var conn = new SqlConnection(connStr))
{
try
{
conn.Open();
using (var context1 = new MyDbContext(conn, contextOwnsConnection: false))
{
……
context1.SaveChanges();
}
using (var context2 = new MyDbContext(conn, contextOwnsConnection: false))
{
……
context2.SaveChanges();
}
scope.Complete();
}
catch (Exception e) { }
finally { conn.Close(); }
}
}
DbContext: one instance per thread
Link: DbContext instance creation issues
Concurrency
In real-world scenarios concurrency is very common: the same record being modified by two different users at the same time.
EF has two common ways of detecting concurrency conflicts.
Method 1: the ConcurrencyCheck attribute
You can mark one or more properties of an entity for concurrency detection by adding the ConcurrencyCheck attribute to them.
Here we mark the Name property of the Student entity:
public class Student
{
[Key]
public int ID { get; set; }
[ConcurrencyCheck]
public string Name { get; set; }
public int Age { get; set; }
}
Use two threads to update the Student object at the same time, simulating concurrent user operations:
static void Main(string[] args)
{
Task t1 = Task.Run(() => {
using (var context = new TestDB2())
{
var obj = context.Student.First();
obj.Name = "LiMing";
context.SaveChanges();
}
});
Task t2 = Task.Run(() => {
using (var context = new TestDB2())
{
var obj = context.Student.First();
obj.Age = 26;
context.SaveChanges();
}
});
Task.WaitAll(t1,t2);
}
Concurrency conflict error:
Looking at SQL Server Profiler, the name and value of the property marked with [ConcurrencyCheck] appear in the WHERE clause:
exec sp_executesql N'UPDATE [dbo].[Students]
SET [Age] = @0
WHERE (([ID] = @1) AND ([Name] = @2))
',N'@0 int,@1 int,@2 nvarchar(max) ',@0=26,@1=1,@2=N'WANG'
Clearly:
when t2 then modifies Age, the value of the concurrency-check property Name has already been changed, meaning another user has modified the same record, so a concurrency conflict is raised.
Setting a check property on every entity class individually is too much trouble; it is better to let the database generate and maintain a special column value. That leads to the second method.
Method 2: timestamp
Create a base class, Base, with a special property whose corresponding SQL Server column type is timestamp; the entity classes in your own project can all inherit from it.
public class Base
{
[Timestamp]
public byte[] RowVersion { get; set; }
}
Student inherits the Base class. Every time a Student row is updated, the database generates a new value for the RowVersion column, and this special column is used to detect concurrency conflicts; the entity class no longer needs to set or update that property itself.
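For example, the Student entity would then simply inherit the base class (a minimal sketch, consistent with the class shown earlier):

public class Student : Base
{
    [Key]
    public int ID { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}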
Handling concurrency
When concurrent updates collide, EF throws a DbUpdateConcurrencyException.
The two update threads are the same t1 and t2 as above.
Approach 1
Task t1 = Task.Run(() => {
using (var context = new TestDB())
{
try
{
var obj = context.Student.First();
obj.Name = "LiMing2";
context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
// Reload the data from the database and overwrite the object whose save failed
ex.Entries.Single().Reload();
context.SaveChanges();
}
}
});
In other words, when t1's update fails because of the conflict, the object is re-fetched from the database and overwrites the failed one; t1's original changes are discarded, while the concurrent operation of the other user, t2's update, is preserved.
Approach 2
Task t1 = Task.Run(() => {
using (var context = new TestDB())
{
try
{
var obj = context.Student.First();
obj.Name = "LiMing2";
context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
var entry = ex.Entries.Single();
entry.OriginalValues.SetValues(entry.GetDatabaseValues());
context.SaveChanges();
}
}
});
Here the values re-read from the database replace the original values of the object whose save failed, and the changes are submitted again. The database then no longer raises an exception because the original values held by the current update differ from the values in the database (the check property values now match), so t1's update is committed successfully and the other concurrent operation, t2's, is overwritten.
Coin Introduction
How to define Coin
In Rooch it is defined using the Coin struct, which is a generic structure: we can pass different CoinTypes to it to define different tokens, for example Coin<USDT>, Coin<USDC>, and so on. Coin can be regarded as a container.
Our main concern is CoinType, which can have different abilities, allowing different modules to have different control permissions.
• CoinType must have key ability. At this time, it is private and Coin can only be operated within the module where CoinType is defined.
• When the store ability is added to CoinType, it becomes public. At this time, in addition to the modules that define CoinType, coin functions can also be operated.
When defining Coin, we register a certain Coin through CoinInfo. CoinInfo is a globally stored object that contains 5 fields.
• coin_type: defines the type of Coin.
• name: defines the name of Coin.
• symbol: Defines the symbol of the token.
• decimals: defines the precision of the token. Because there are no decimal numbers in Move, we must use this precision to convert amounts so that users can see Coin balances in decimal form (see the worked example after this list).
• supply: Define the total amount of tokens.
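A quick worked illustration of the decimals field (an assumed example, not taken from the Rooch docs): amounts are stored on-chain as integers, and a UI divides by 10^decimals for display, i.e. display value = on-chain amount / 10^decimals. With decimals = 8, an on-chain amount of 123456789 is shown as 123456789 / 10^8 = 1.23456789.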
How to use coins
Use public type Coin
This example will demonstrate the use of having a public type Coin.
module coins::fixed_supply_coin {
use std::string;
use moveos_std::signer;
use moveos_std::object::{Self, Object};
use rooch_framework::coin;
use rooch_framework::coin_store::{Self, CoinStore};
use rooch_framework::account_coin_store;
const TOTAL_SUPPLY: u256 = 210_000_000_000u256;
const DECIMALS: u8 = 1u8;
// The `FSC` CoinType has `key` and `store` ability.
// So `FSC` coin is public.
struct FSC has key, store {}
// construct the `FSC` coin and make it a global object that stored in `Treasury`.
struct Treasury has key {
coin_store: Object<CoinStore<FSC>>
}
fun init() {
let coin_info_obj = coin::register_extend<FSC>(
string::utf8(b"Fixed Supply Coin"),
string::utf8(b"FSC"),
DECIMALS,
);
// Mint the total supply of coins, and store it to the treasury
let coin = coin::mint_extend<FSC>(&mut coin_info_obj, TOTAL_SUPPLY);
// Frozen the CoinInfo object, so that no more coins can be minted
object::to_frozen(coin_info_obj);
let coin_store_obj = coin_store::create_coin_store<FSC>();
coin_store::deposit(&mut coin_store_obj, coin);
let treasury_obj = object::new_named_object(Treasury { coin_store: coin_store_obj });
// Make the treasury object to shared, so anyone can get mutable Treasury object
object::to_shared(treasury_obj);
}
/// Provide a faucet to give out coins to users
/// In a real world scenario, the coins should be given out in the application business logic.
public entry fun faucet(account: &signer, treasury_obj: &mut Object<Treasury>) {
let account_addr = signer::address_of(account);
let treasury = object::borrow_mut(treasury_obj);
let coin = coin_store::withdraw(&mut treasury.coin_store, 10000);
account_coin_store::deposit(account_addr, coin);
}
}
Define the TOTAL_SUPPLY constant to set the total supply of FSC, and define DECIMALS to set the precision value.
The FSC structure defines the type of token. When defining the type, you do not need to pass any fields, you only need to control the capabilities it possesses.
The Treasury structure defines the treasury of the token, which contains the CoinStore object. This object contains information about the FSC token: token name, token balance, and whether it is frozen.
The init function defines some logic for issuing FSC tokens:
• First, create the meta-information of the FSC token through the register_extend function, including: token type, token name, token symbol, token precision, and total token supply (the default initialization value of this function is 0).
• Then, use the mint_extend function to mint the token. After the mint is completed, use the to_frozen function to freeze the coin_info_obj object, which means the minting rights are frozen and the tokens cannot be minted.
• The create_coin_store function creates a CoinStore object for the FSC token.
• The deposit function deposits the tokens just minted into the Balance of CoinStore.
• Finally, put the CoinStore object into the treasury and turn the treasury object into a shared object.
The faucet function contains the logic of distributing tokens in the treasury to accounts. This function is only used for development and testing.
Use private type Coin
This example introduces the usage of the private type Coin.
module coins::private_coin {
use std::string;
use moveos_std::signer;
use moveos_std::object::{Self, Object};
use rooch_framework::coin::{Self, Coin, CoinInfo};
use rooch_framework::coin_store::{Self, CoinStore};
use rooch_framework::account_coin_store;
const ErrorTransferAmountTooLarge: u64 = 1;
/// This Coin has no `store` ability,
/// so it can not be operate via `account_coin_store::transfer`, `account_coin_store::deposit` and `account_coin_store::withdraw`
struct PRC has key {}
struct Treasury has key {
coin_store: Object<CoinStore<PRC>>
}
fun init() {
let coin_info_obj = coin::register_extend<PRC>(
string::utf8(b"Private Coin"),
string::utf8(b"PRC"),
1,
);
object::transfer(coin_info_obj, @coins);
let coin_store = coin_store::create_coin_store_extend<PRC>();
let treasury_obj = object::new_named_object(Treasury { coin_store });
object::transfer_extend(treasury_obj, @coins);
}
/// Provide a faucet to give out coins to users
/// In a real world scenario, the coins should be given out in the application business logic.
public entry fun faucet(account: &signer) {
let account_addr = signer::address_of(account);
let coin_signer = signer::module_signer<Treasury>();
let coin_info_obj = object::borrow_mut_object<CoinInfo<PRC>>(&coin_signer, coin::coin_info_id<PRC>());
let coin = coin::mint_extend<PRC>(coin_info_obj, 10000);
account_coin_store::deposit_extend(account_addr, coin);
}
/// This function shows how to use `coin::transfer_extend` to define a custom transfer logic
/// This transfer function limits the amount of transfer to 10000, and take 1% of the amount as fee
public entry fun transfer(from: &signer, to_addr: address, amount: u256) {
assert!(amount <= 10000u256, ErrorTransferAmountTooLarge);
let from_addr = signer::address_of(from);
let fee_amount = amount / 100u256;
if (fee_amount > 0u256) {
let fee = account_coin_store::withdraw_extend<PRC>(from_addr, fee_amount);
deposit_to_treaury(fee);
};
account_coin_store::transfer_extend<PRC>(from_addr, to_addr, amount);
}
fun deposit_to_treaury(coin: Coin<PRC>) {
let treasury_object_id = object::named_object_id<Treasury>();
let treasury_obj = object::borrow_mut_object_extend<Treasury>(treasury_object_id);
coin_store::deposit_extend(&mut object::borrow_mut(treasury_obj).coin_store, coin);
}
}
Likewise, the type of token and its treasury are defined through two structures. PRC is a private type of token.
Use the private type CoinType to create a token, which allows developers to customize the logic of transfer, withdrawal, and deposit, such as how to customize the transfer fee as demonstrated in the example.
Private type tokens cannot use the general transfer, withdraw, and deposit functions provided in the account_coin_store module.
init defines the minting logic for creating PRC tokens.
• The same is done to register the token information CoinInfo and transfer its object to the address where the token contract is issued.
• After using create_coin_store_extend to create the CoinStore object, store it in the treasury. Because this is a private Coin, you need to use the extend function with private generics to ensure that it can only be created by the module of CoinType.
• Finally, the treasury object is transferred to the address where the token is issued.
Define the faucet function to mint tokens and transfer them to the account.
The transfer function shows how to use coin::transfer_extend to define custom transfer logic. This transfer function limits the transfer amount to 10000 and charges a gas fee of 1% of the amount.
In AI the object is not resizing properly
In Adobe Illustrator, when resizing multiple objects, the objects do not all resize equally. Below is a screenshot for a better understanding.
https://imgur.com/a/6nDVK6Z
Because your corner radius is staying the same.
If you use the scale tool does the same thing happen?
Also check your Illustrator Preferences (Cmd+K or Ctrl+K), or go to Illustrator in the menu and choose Preferences; on Windows go to File > Preferences
Check the General menu
And there are options for Scale Corners and Scale Strokes & Effects
Toggle these and try again.
Thank you! The Scale Corners option under Preferences > General does the trick.
0 votes
3 views
ago in Complex Numbers by (18.0k points)
Find the square root of the complex number: √(16 - 30i)
1 Answer
0 votes
ago by (17.2k points)
Let (a + ib)² = 16 − 30i.
Using (a + b)² = a² + b² + 2ab:
⇒ a² + (bi)² + 2abi = 16 − 30i
Since i² = −1,
⇒ a² − b² + 2abi = 16 − 30i
Separating the real and imaginary parts, we get
⇒ a² − b² = 16 ………… (eq. 1)
⇒ 2ab = −30 ………… (eq. 2)
⇒ a = −15/b
Substituting this value of a into eq. 1, we get
⇒ (−15/b)² − b² = 16
⇒ 225 − b⁴ = 16b²
⇒ b⁴ + 16b² − 225 = 0
Solving for b², we get
b² = −25 or b² = 9
Since b is a real number, b² = 9,
so b = 3 or b = −3.
Therefore a = −5 or a = 5.
Hence the square roots of the complex number are −5 + 3i and 5 − 3i.
PowerShell Script - Serial Port Reader
Initial release of my PowerShell script SerialPortReader.ps1 for reading data from a serial port and exporting the captured data to a file. This script is also available in my public gist.
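For example, a typical invocation might look like this (COM3, the baud rate and the log path are placeholders for your own setup; press q to stop listening):

# Listen on COM3 at 115200 baud, poll every 500 ms, and log to a custom file
.\SerialPortReader.ps1 -PortName COM3 -BaudRate 115200 -ReadInterval 500 -OutputFile ".\com3-capture.log"

The full script follows.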
<#
.SYNOPSIS
Listens to and read data from a serial port (e.g. COM port)
.DESCRIPTION
The purpose of this script is to keep listening and read data from a serial port.
All the data captured will be displayed and log into a file.
.EXAMPLE
./SerialPortReader.ps1
.EXAMPLE
./SerialPortReader.ps1 -PortName COM3 -BaudRate 9600
.EXAMPLE
./SerialPortReader.ps1 -PortName COM3 -BaudRate 9600 -Parity None -DataBits 8 -StopBits One -Handshake None -OutputFile ".\output1.log" -ReadInterval 1000
.LINK
http://heiswayi.github.io
.AUTHOR
Heiswayi Nrird, 2016
.LICENSE
MIT
#>
[CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact='Medium')]
[Alias()]
[OutputType([int])]
Param
(
[Parameter(Mandatory=$true, Position=0)]
[ValidateNotNullOrEmpty()]
[string]$PortName = "COM1",
[Parameter(Mandatory=$true, Position=1)]
[ValidateNotNullOrEmpty()]
[int]$BaudRate = 9600,
[Parameter(Mandatory=$false, Position=2)]
[ValidateNotNullOrEmpty()]
[System.IO.Ports.Parity]$Parity = [System.IO.Ports.Parity]"None",
[Parameter(Mandatory=$false, Position=3)]
[ValidateNotNullOrEmpty()]
[int]$DataBits = 8,
[Parameter(Mandatory=$false, Position=4)]
[ValidateNotNullOrEmpty()]
[System.IO.Ports.StopBits]$StopBits = [System.IO.Ports.StopBits]"One",
[Parameter(Mandatory=$false, Position=5)]
[ValidateNotNullOrEmpty()]
[System.IO.Ports.Handshake]$Handshake = [System.IO.Ports.Handshake]"None",
[Parameter(Mandatory=$false, Position=6)]
[ValidateNotNullOrEmpty()]
[string]$OutputFile = "notempty",
[Parameter(Mandatory=$false, Position=7)]
[ValidateNotNullOrEmpty()]
[int]$ReadInterval = 1000
)
<#
.SYNOPSIS
Listens to and read data from a serial port (e.g. COM port)
.DESCRIPTION
The purpose of this script is to keep listening and read data from a serial port.
All the data captured will be displayed and log into a file.
.EXAMPLE
SerialPortReader
.EXAMPLE
SerialPortReader -PortName COM3 -BaudRate 9600
.EXAMPLE
SerialPortReader -PortName COM3 -BaudRate 9600 -Parity None -DataBits 8 -StopBits One -Handshake None -OutputFile ".\output1.log" -ReadInterval 1000
.LINK
http://heiswayi.github.io
.AUTHOR
Heiswayi Nrird, 2016
.LICENSE
MIT
#>
function SerialPortReader
{
[CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact='Medium')]
[Alias()]
[OutputType([int])]
Param
(
[Parameter(Mandatory=$true, Position=0)]
[ValidateNotNullOrEmpty()]
[string]$PortName = "COM1",
[Parameter(Mandatory=$true, Position=1)]
[ValidateNotNullOrEmpty()]
[int]$BaudRate = 9600,
[Parameter(Mandatory=$false, Position=2)]
[ValidateNotNullOrEmpty()]
[System.IO.Ports.Parity]$Parity = [System.IO.Ports.Parity]"None",
[Parameter(Mandatory=$false, Position=3)]
[ValidateNotNullOrEmpty()]
[int]$DataBits = 8,
[Parameter(Mandatory=$false, Position=4)]
[ValidateNotNullOrEmpty()]
[System.IO.Ports.StopBits]$StopBits = [System.IO.Ports.StopBits]"One",
[Parameter(Mandatory=$false, Position=5)]
[ValidateNotNullOrEmpty()]
[System.IO.Ports.Handshake]$Handshake = [System.IO.Ports.Handshake]"None",
[Parameter(Mandatory=$false, Position=6)]
[ValidateNotNullOrEmpty()]
[string]$OutputFile = "notempty",
[Parameter(Mandatory=$false, Position=7)]
[ValidateNotNullOrEmpty()]
[int]$ReadInterval = 1000
)
$proceed = $false
Write-Output ("Checking PortName...")
foreach ($item in [System.IO.Ports.SerialPort]::GetPortNames())
{
if ($item -eq $PortName)
{
$proceed = $true
Write-Output ("--> PortName " + $PortName + " is available")
break
}
}
if ($proceed -eq $false)
{
Write-Warning ("--> PortName " + $PortName + " not found")
return
}
$filename = ""
try
{
$port = New-Object System.IO.Ports.SerialPort
$port.PortName = $PortName
$port.BaudRate = $BaudRate
$port.Parity = $Parity
$port.DataBits = $DataBits
$port.StopBits = $StopBits
$port.Handshake = $Handshake
$currentDate = Get-Date -Format yyyyMMdd
$fileExt = ".log"
if ($OutputFile -eq "notempty")
{
$filename = (Get-Item -Path ".\" -Verbose).FullName + "\SerialPortReader_" + $currentDate + $fileExt
}
else
{
$filename = $OutputFile
}
Write-Output ("Establishing connection to the port...")
Start-Sleep -Milliseconds 1000
$port.Open()
Write-Output $port
Write-Output ("--> Connection established.")
Write-Output ("")
}
catch [System.Exception]
{
Write-Error ("Failed to connect : " + $_)
$error[0] | Format-List -Force
if ($port -ne $null) { $port.Close() }
exit 1
}
Start-Sleep -Milliseconds 1000
do
{
$key = if ($host.UI.RawUI.KeyAvailable) { $host.UI.RawUI.ReadKey('NoEcho, IncludeKeyDown') }
if ($port.IsOpen)
{
Start-Sleep -Milliseconds $ReadInterval
#$data = $port.ReadLine()
$data = $port.ReadExisting()
<#
[byte[]]$readBuffer = New-Object byte[] ($port.ReadBufferSize + 1)
try
{
[int]$count = $port.Read($readBuffer, 0, $port.ReadBufferSize)
[string]$data = [System.Text.Encoding]::ASCII.GetString($readBuffer, 0, $count)
}
catch { }
#>
$length = $data.Length
# remove newline chars
$data = $data -replace [System.Environment]::NewLine,""
$getTimestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss.fff"
Write-Output ("[" + $getTimestamp + "] " + $data)
if ($length -gt 0) { ExportToFile -Filename $filename -Timestamp $getTimestamp -Data $data }
}
} until ($key.VirtualKeyCode -eq 81) # until 'q' is pressed
if ($port -ne $null) { $port.Close() }
}
<#
.SYNOPSIS
Listens to and read data from a serial port (e.g. COM port)
.DESCRIPTION
The purpose of this script is to keep listening and read data from a serial port.
All the data captured will be displayed and log into a file.
.EXAMPLE
ExportToFile -Filename <fullpath> -Timestamp <time string> -Data <data string>
.AUTHOR
Heiswayi Nrird, 2016
.LICENSE
MIT
#>
function ExportToFile
{
[CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact='Medium')]
[OutputType([string])]
Param
(
[Parameter(Mandatory=$true, Position=0)]
[string]$Filename,
[Parameter(Mandatory=$true, Position=1)]
[string]$Timestamp,
[Parameter(Mandatory=$true, Position=2)]
[string]$Data
)
try
{
"[" + $Timestamp + "] " + $Data | Out-File -FilePath $Filename -Encoding ascii -Append
}
catch [System.Exception]
{
Write-Warning ("Failed to export captured data : " + $_)
return
}
}
if (-not ($myinvocation.line.Contains("`$here\`$sut"))) {
SerialPortReader -PortName $PortName -BaudRate $BaudRate -Parity $Parity -DataBits $DataBits -StopBits $StopBits -Handshake $Handshake -OutputFile $OutputFile -ReadInterval $ReadInterval
}
WPF Custom Controls (1): Gauge Design [1]
0. A Few Words First
I've just taken on a new project, another piece of PC-side (host computer) software development. There are plenty of control libraries online, and gauge controls are easy to find and quite powerful, but personally I find those libraries bloated, so I decided to write a control library myself: partly to learn, partly for the project. Below is what I put together in one afternoon; first, the result:
Figure 1
The gauge is still fairly ugly at the moment; I will improve it step by step later, including all kinds of polish, and I'm confident I can get it done. This is also my first blog post; I will keep at it and do my best to explain every technical detail clearly. Source code: https://github.com/Endless-Coding/MyGauge/blob/master/CustomControl.zip
1. Overall Gauge Design
Roughly speaking, a gauge consists of four parts: the dial's outer ring, the scale ticks (small and large), the scale values, and the needle. Building it takes a little math, but it is all easy enough if you think it through. Designing the appearance uses the knowledge points listed below, plus some basic C# and WPF; if anything is unclear, Liu Tiemeng's book 《深入浅出WPF》 is a good reference.
Dial outer ring: Path drawing
Scale ticks: Line
Scale values: TextBlock control
Needle: Path drawing
2. Dial Outer Ring
In the initial design the outer ring consists of three segments: yellow, green and red. With WPF's powerful drawing support I added a gradual colour transition to polish it a little, as shown below. (The radius of this circle is 200 px.)
Figure 2
It is easy to see that the ring is made up of three arcs; if you look closely you can faintly see two thin white lines, which are the boundaries between the three arcs.
1. Drawing the yellow arc
The code is as follows:
1 <Path StrokeThickness="30" Width="420" Height="400" StrokeStartLineCap="Round">
2 <Path.Data>
3 <PathGeometry Figures="M 0,200 A 200,200 0 0 1 58.57864,58.57864"/>
4 </Path.Data>
5 <Path.Stroke>
6 <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
7 <LinearGradientBrush.GradientStops>
8 <GradientStop Offset="0" Color="Green"/>
9 <GradientStop Offset="1.0" Color="Yellow"/>
10 </LinearGradientBrush.GradientStops>
11 </LinearGradientBrush>
12 </Path.Stroke>
13 </Path>
The most important code is line 3. Its data points are explained below; if anything is unclear, note it down for now, it comes up again later and will gradually make sense.
M 0,200 A 200,200 0 0 1 58.57864,58.57864
M is the start marker of the Path
the arc's start point is (0,200)
A (arc) is the arc marker
(200,200) means x-axis radius: 200; y-axis radius: 200
arc rotation angle [0] (there is a start and an end point; personally I don't think this value matters much)
large-arc flag [0] (no, the arc is less than 180 degrees)
sweep-direction flag [1] (draw clockwise)
the end point, calculated with the formula below
The point (58.57864, 58.57864) is calculated as follows:
The yellow arc covers 1/4 of the semicircle, so its angle is 180 × 1/4 = 45 degrees; the yellow point's coordinates are computed as shown in the figure below.
Figure 3
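The original figure is not reproduced here; what it showed is essentially the standard circle parametrization (my reconstruction, taking the centre at (200, 200), radius 200, and measuring the angle from the left end of the semicircle):

$x = 200 - 200\cos 45^{\circ} \approx 58.57864, \qquad y = 200 - 200\sin 45^{\circ} \approx 58.57864$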
2. Drawing the green and red arcs
With the yellow arc as a basis, the green and red arcs follow exactly the same idea. Below is the code that draws all three arcs.
1 <Path Stroke="Yellow" StrokeThickness="30" Width="420" Height="400" StrokeStartLineCap="Round">
2 <Path.Data>
3 <PathGeometry Figures="M 0,200 A 200,200 0 0 1 58.57864,58.57864"/>
4 </Path.Data>
5 </Path>
6 <Path Stroke="Green" StrokeThickness="30" Width="420" Height="400">
7 <Path.Data>
8 <PathGeometry Figures="M 58.57864,58.57864 A 200,200 0 0 1 341.42136,58.57864" />
9 </Path.Data>
10 </Path>
11 <Path Stroke="Red" StrokeThickness="30" Width="420" Height="400" StrokeEndLineCap="Round">
12 <Path.Data>
13 <PathGeometry Figures="M 341.42136,58.57864 A 200,200 0 0 1 400,200" />
14 </Path.Data>
15 </Path>
The key parts of the code are the bolded numbers. To keep the code concise and the structure clear, the gradient-colour handling has been removed here; it will be added back later. The code above produces the figure below:
Figure 4
3. Drawing the Scale Ticks
The gauge's scale is made up of many short line segments, which could also be drawn in XAML. But that would take a lot of code and would mean typing in a great many coordinate values by hand, a clumsy approach that doesn't use C# to its full potential. Below I first write a single tick (a small one) in XAML to explain the principle, and then draw all the ticks from C# code-behind, which makes later maintenance and customization of the gauge much easier. The calculation of the two endpoints of the tick line at 20 degrees is shown in the figure below: the tick's start point lies on a circle centred at (200,200) with radius 180, and its end point lies on a circle centred at (200,200) with radius 170.
Figure 5
Based on the results calculated above, the XAML for the line and its effect are as follows:
<Line Stroke="Green" StrokeThickness="2" X1="30.85533" Y1="138.43637" X2="40.25225" Y2="141.85658"/>
Figure 6
1. Drawing the small ticks
With the basics above, all the small ticks are easy to draw. First, the C# code-behind:
1 public MainWindow()
2 {
3 InitializeComponent();
4 this.DrawScale();
5 }
6 /// <summary>
7 /// Draw the gauge's scale ticks
8 /// </summary>
9 private void DrawScale()
10 {
11 for (int i = 0; i <= 180; i += 5)
12 {
13 // Add a tick line
14 Line lineScale = new Line();
15
16 lineScale.Stroke = new SolidColorBrush(Color.FromRgb(0xFF, 0x00, 0));// use a red line
17 lineScale.StrokeThickness = 1;// line thickness of 1
18 // Start point of the tick line; note the angle is converted to radians
19 lineScale.X1 = 200 - 170 * Math.Cos(i * Math.PI / 180);
20 lineScale.Y1 = 200 - 170 * Math.Sin(i * Math.PI / 180);
21 // End point of the tick line; note the angle is converted to radians
22 lineScale.X2 = 200 - 180 * Math.Cos(i * Math.PI / 180);
23 lineScale.Y2 = 200 - 180 * Math.Sin(i * Math.PI / 180);
24 // Draw the line on the Canvas
25 this.gaugeCanvas.Children.Add(lineScale);
26 }
27 }
As before, the key lines are 19, 20, 22 and 23. 180 - 170 = 10, and this 10 is the length of a small tick. After drawing all the ticks, the result looks like this:
Figure 7
2. Drawing the large ticks
A large tick appears once every 5 small ticks and is 20 px long. With the small-tick drawing as a basis this is very easy to implement; only a small change to the DrawScale function is needed, as shown in the bold code below.
1 private void DrawScale()
2 {
3 for (int i = 0; i <= 180; i += 5)
4 {
5 // Add a tick line
6 Line lineScale = new Line();
7
8 if (i % 25 == 0)// five small ticks have been drawn, so add a large tick
9 {
10 lineScale.X1 = 200 - 160 * Math.Cos(i * Math.PI / 180);
11 lineScale.Y1 = 200 - 160 * Math.Sin(i * Math.PI / 180);
12 lineScale.Stroke = new SolidColorBrush(Color.FromRgb(0x00, 0xFF, 0));
13 lineScale.StrokeThickness = 3;
14 }
15 else
16 {
17 lineScale.X1 = 200 - 170 * Math.Cos(i * Math.PI / 180);
18 lineScale.Y1 = 200 - 170 * Math.Sin(i * Math.PI / 180);
19 lineScale.Stroke = new SolidColorBrush(Color.FromRgb(0xFF, 0x00, 0));
20 lineScale.StrokeThickness = 1;
21 }
22 // End point of the tick line; note the angle is converted to radians
23 lineScale.X2 = 200 - 180 * Math.Cos(i * Math.PI / 180);
24 lineScale.Y2 = 200 - 180 * Math.Sin(i * Math.PI / 180);
25 // Draw the line on the Canvas
26 this.gaugeCanvas.Children.Add(lineScale);
27 }
28 }
The final result is shown in the figure below; it is getting closer and closer.
Figure 8
4. Adding the Scale Values
The scale values are displayed with text blocks (the TextBlock control). You only need to control each text block's coordinates on the dial, which is easy to implement. One thing to note: since every control's coordinate origin is its top-left corner, the coordinates need a small correction once the angle passes 90 degrees. That is hard to explain in words alone but clear in the code. Once again the DrawScale() function is modified as follows:
1 private void DrawScale()
2 {
3 for (int i = 0; i <= 180; i += 5)
4 {
5 // Add a tick line
6 Line lineScale = new Line();
7
8 if (i % 25 == 0)
9 {
10 lineScale.X1 = 200 - 160 * Math.Cos(i * Math.PI / 180);
11 lineScale.Y1 = 200 - 160 * Math.Sin(i * Math.PI / 180);
12 lineScale.Stroke = new SolidColorBrush(Color.FromRgb(0x00, 0xFF, 0));
13 lineScale.StrokeThickness = 3;
14
15 // Add the scale value label
16 TextBlock txtScale = new TextBlock();
17 txtScale.Text = (i).ToString();
18 txtScale.FontSize = 10;
19 if (i <= 90)// apply a small correction to the coordinate
20 {
21 Canvas.SetLeft(txtScale, 200 - 155 * Math.Cos(i * Math.PI / 180));
22 }
23 else
24 {
25 Canvas.SetLeft(txtScale, 190 - 155 * Math.Cos(i * Math.PI / 180));
26 }
27 Canvas.SetTop(txtScale, 200 - 155 * Math.Sin(i * Math.PI / 180));
28 this.gaugeCanvas.Children.Add(txtScale);
29 }
30 else
31 {
32 lineScale.X1 = 200 - 170 * Math.Cos(i * Math.PI / 180);
33 lineScale.Y1 = 200 - 170 * Math.Sin(i * Math.PI / 180);
34 lineScale.Stroke = new SolidColorBrush(Color.FromRgb(0xFF, 0x00, 0));
35 lineScale.StrokeThickness = 1;
36 }
37
38 lineScale.X2 = 200 - 180 * Math.Cos(i * Math.PI / 180);
39 lineScale.Y2 = 200 - 180 * Math.Sin(i * Math.PI / 180);
40
41 this.gaugeCanvas.Children.Add(lineScale);
42 }
43 }
After adding the scale values, the result is shown below:
Figure 9
5. Drawing the Needle
There are many ways to make a gauge needle. Many people online suggest using an image, but an image becomes blurry once it is scaled, so I decided to build a simple needle myself, again with Path-based drawing. The XAML is below; the needle consists mainly of two straight lines and one arc, filled with orange.
1 <Path x:Name="indicatorPin" Fill="Orange">
2 <Path.Data>
3 <PathGeometry>
4 <PathGeometry.Figures>
5 <PathFigure StartPoint="200,195" IsClosed="True">
6 <PathFigure.Segments>
7 <LineSegment Point="20,200"/>
8 <LineSegment Point="200,205"/>
9 </PathFigure.Segments>
10 </PathFigure>
11 </PathGeometry.Figures>
12 </PathGeometry>
13 </Path.Data>
14 </Path>
Many gauges have a numeric readout in the middle. Here I imitate that with a text block; the XAML is below. Note that the text block's position has to be assigned appropriately.
<TextBlock x:Name="currentValueTxtBlock" FontSize="20" Canvas.Left="140" Canvas.Top="150"/>
The final appearance is shown in the figure below:
Figure 10
6. Making the Needle Rotate
The needle's rotation is, obviously, a rotation around the centre (200,200) through various angles, implemented with RotateTransform and DoubleAnimation. The animation duration is allocated according to the size of the angle change, at 8 milliseconds per degree. For now the target angle is generated locally (angelCurrent and angleNext are class-level fields holding the previous and next angle); here I put the rotation animation in the canvas_MouseDown event. The C# code is as follows:
1 private void Canvas_MouseDown(object sender, MouseButtonEventArgs e)
2 {
3 RotateTransform rt = new RotateTransform();
4 rt.CenterX = 200;
5 rt.CenterY = 200;
6
7 this.indicatorPin.RenderTransform = rt;
8
9 angelCurrent = angleNext;
10 Random random = new Random();
11 angleNext = random.Next(180);
12
13 double timeAnimation = Math.Abs(angelCurrent - angleNext) * 8;
14 DoubleAnimation da = new DoubleAnimation(angelCurrent, angleNext, new Duration(TimeSpan.FromMilliseconds(timeAnimation)));
15 da.AccelerationRatio = 1;
16 rt.BeginAnimation(RotateTransform.AngleProperty, da);
17 }
The final effect is shown below. Finally done; time to enjoy the result!!
Figure 11
Overall Takeaways
Although this gauge is still very simple, I thought it through and built it step by step, and later I plan to give it richer customization and more powerful features; I'm confident that will go smoothly. Writing a blog post for the first time is much harder than I imagined, and many things are difficult to express clearly. To present the result better I used Visio for the diagrams and the MathType formula editor, plus screen-recording software to capture the window, then Xunlei Kankan to grab GIF images, and finally processed the GIFs and published them to the blog. It really was not easy, but I'm sure it will get easier.
The next goal: polish this gauge and package it as a user control that the project can call.
The goal after that: build a chart control. Stay tuned!!
Editing is a core, fundamental Stack Exchange value; we allow editing by registered and unregistered users (if peer reviewed).
2
votes
0answers
37 views
Cancel an edit suggestion [duplicate]
It happened many times that while I was submitting an edit to a question, someone else edited the same question (with the same phylosophy, i.e. introduced code formatting, or correct a typo in the ...
6
votes
1answer
51 views
Clicking to load new edits doesn't load the new automated duplicate box
When someone has voted to close the question as a duplicate, then someone makes an edit to the question, the new automated duplicate box does not get loaded with the edit. If it was already there, it ...
9
votes
2answers
107 views
Is the fact that a question is older and downvoted a good reason to reject an edit?
I was cleaning up some questions for the Homework tag, and I was told that I shouldn't bother since it is an old question, is this true. It happened in 2 occasions, here and here by the same user. ...
7
votes
1answer
72 views
Could we add a tab for rejected edits?
I saw this post, but I was wondering if perhaps we could add a tab under activity for rejected edits. What I am suggesting if it is unclear is an additional tab that shows rejected edits. Like this: ...
1
vote
0answers
35 views
Can't suggest edit on a question
I've tried removing the XHTML tag and the _L signature in this post. After clicking Save Edits, all seems fine until I check my activity box under my profile and notice my suggested edit is ...
2
votes
1answer
33 views
Title length should be checked while asking the question also
I recently tried to a make a edit to a post, I made changes to the body of the post and kept the title as it is. I got the following message and then I had to edit the title to place the edit. So, ...
5
votes
3answers
102 views
Can we get a better conflict resolution to edits vs suggested edits
So reading this meta question, it sounds like a real edit will always take precedence over a suggested edit, when it comes to edit conflict resolution, even if the suggested edit is better. This also ...
6
votes
2answers
96 views
What to do about unanswered questions that have been edited to completely change the meaning?
The question I'm referring to can be found here: How to create a suitable for each loop and output? As you can see from the revision history, it used to be a completely different question about some ...
-1
votes
1answer
84 views
Retag question reputation required = 500? [duplicate]
While editing the questions I have been adding tags for quite some time now. As proof I have the Organizer badge which is awarded for first retag. Yet I am able to do so with a reputation under 500. ...
1
vote
1answer
116 views
Editing a question doesn't pop it high in the question list anymore?
I'm wondering if this a new change in SO. This question of mine was asked 7 hours ago. I edited it twice now. Both the times I couldn't see it popped up in the SO question list. Previously whenever I ...
0
votes
1answer
47 views
How and why an edit is counted?
To day I edited a post and fixed a spelling mistake , Edited question but this edit doen't show in my editing history, . Thus my questions are: Why it's not showing in my edit history? And/Or how ...
1
vote
1answer
36 views
Edit revisions - possibility to hide a revision for users with the Edit priviledge (2000+ rep)
I remember when there was an option here - a checkbox - saying "minor revision". Such revision would not be visible in revisions history (I think). Now, I don't particulary miss that feature, however ...
0
votes
0answers
33 views
proper way to edit a question that needs 1 character [duplicate]
Possible Duplicate: What about lowering the edit character limit for characters in code snippets? I often find that code samples within questions are missing a semicolon or a trailing ]. ...
2
votes
0answers
37 views
Edit own question doesn't pick up changes just done
While editing this answer DOS batch FOR loop with FIND.exe is stripping out blank lines? of mine, yesterday and today, think I had found a bug. After finishing doing a major overhall of the question, ...
12
votes
1answer
166 views
Why can't someone edit more than five of his/her own posts per day?
I was improving some of my old posts on Stack Overflow when I ended up encountering this: You have already edited 5 of your own post today; futher edits are not allowed until tomorrow I ...
10
votes
1answer
211 views
Does Stack Exchange have an official stance on users doing a large number of trivial edits to old questions?
It drives me mildly insane to see the "Active" page of a Stack Exchange site filled with old questions that have all had trivial edits from one user. Modifications I consider trivial are: a single ...
4
votes
1answer
86 views
When are syntax errors protected from edits? [duplicate]
Possible Duplicate: How far can I refactor someone else’s code? I stumbled across an interesting post, Are self-closing tags valid in HTML5?. There's an error in the initial question; ...
4
votes
1answer
79 views
AJAX reload after edit omits non-BMP unicode characters
When editing a post that contains a non-BMP unicode character (codepoint U+10000 or up), the AJAX-loaded post when you submit the edit does not include that character. I suspect this is a ...
-11
votes
3answers
125 views
Issues with Stack Overflow users editing my questions
I have a couple of questions regarding what guidelines the users are following when editing my questions (or answers). Examples: They are removing my signature, for example "Regards, Gustavo". Every ...
3
votes
2answers
107 views
Code in your answer was edited
I like the new extra-specific notifications, such as "Code in your answer to Iterating through a map randomly causes segfault was edited" which relates to revision 2 of this question. However, ...
9
votes
0answers
251 views
“Bare with me” isn't a thing, yet it's used all over
Somewhere in the 339 questions containing "bare with me" there may be one or two intentional uses of "bare" instead of "bear", but for the most part it's 339 post accidentally asking us all to get ...
8
votes
2answers
132 views
What should be done with a closed question that has been completely rewritten?
This question was originally a closed as "not constructive"- in its original form, it was asking which of three R machine learning packages was best. The author has since edited the question so that ...
3
votes
1answer
34 views
Editing tags when there's long tag names breaks the edit box
This can best be displayed with an example. When the question has a lot of long tags the edit tags button wraps over to a new line. Clicking on it most of the tags disappear to the left of the ...
3
votes
0answers
50 views
How is a more substantive edit less substantive?
Both Blazemonger and myself edited this answer at roughly the same time. My edit couldn't be saved as it was less substantive than Blazemonger's. I changed absolutely everything that Blazemonger did; ...
2
votes
0answers
37 views
Post edits bug: Not saving changes
I happened to me too many times. I edit a post, usually questions, I add the description of the changes, and then I press to Save to suggest the edit, but it acts the same as I pressed to cancel. ...
0
votes
2answers
59 views
Unable to edit my response to a locked question
The image is broken in my response to the "favorite programmer cartoon" question. I'd be happy to fix this, but I can't (not enough rep? not a moderator?). Here's the image in the Internet Archive: ...
-4
votes
1answer
85 views
Can questions be renamed? [duplicate]
Possible Duplicate: How does editing work? I couldn't find the answer to this anywhere. I always assumed users with extremely high reputation would be able to do this, but it doesn't appear ...
16
votes
3answers
180 views
Is suggesting an edit soon after initial posting a discouraged behavior?
I ask this because I've taken a liking to checking new questions for poor formatting, broken English, and questionable phrasing. My goal in this is to edit otherwise decent questions that get ...
7
votes
1answer
92 views
Do edits affect rel=nofollow addition?
I answered a question here: Parsing HTML to fix microtypography & glyph issues and remember remarking to my "somewhat-friend" that I was happy because the links didn't contain rel=nofollow. He's ...
5
votes
0answers
85 views
Should an exactly equal edit been dropped by Stack Overflow?
I just improved an answer which was one second later edited by another user with the same actions, so that the diff is completely empty. I think the later edit should been dropped automatically. Here ...
7
votes
0answers
44 views
Editing a post with a pending edit after already reviewing 20 edits
Note: Somewhat related to Unable to edit posts with pending edits. As a Suggested Edits queue reviewer, it's frustrating when reviewing a question that requires an edit, and going to edit only to ...
-2
votes
2answers
75 views
Make editing someone else's question or answer visually different from editing your own
Sometimes users with high reputation end up editing someone else's answer thinking that they are editing their own. This has happened to me several times - both when I edited someone's response and ...
4
votes
0answers
45 views
Question got edited to duplicate of another question, after answer accepted
Yesterday, I came across this question about HTML5 cache manifest with no answers. I read through the question, quickly checked against the corresponding Wikipedia article as a reference, and found ...
-5
votes
3answers
245 views
Is 2000 Rep really needed to change one letter?
I looked for this question on Meta Stack Overflow, and found the topic discussed but never really resolved. Just saw this typo, "...Has anyone every experienced this kind of debugging ...
10
votes
1answer
164 views
Edits that use grave accent tags just for highlighting
Moments ago, in question Overlaying a rectangle over map in iphone (specifically this revision), I found some strange formatting with grave accent(`) tags used not for the code but just for ...
7
votes
2answers
87 views
Is editing out tags in titles too minor when there is nothing else wrong with the post?
Yesterday, a 2k'er added tag to the title of this post. I, knowing that there should not be any tags in the title, put the tag at the end of the title to "work it in organically". Since I do not have ...
1
vote
2answers
81 views
Should upvote for a question balance the downvote if there are more down- than upvotes?
On Stack Overflow there was a very ambiguous and poorly written question, which got edited by me and a few others. Before we've edited the question, the user reputation was 1 (with -8 because of 4 ...
1
vote
1answer
31 views
Backspace key from edit screen in review exits the review queue
This is a pretty simple one. Pressing the backspace key (my preferred method of going back a page), while on the edit page of a review, leaves the review queue. The expected behavior is that I will ...
2
votes
0answers
25 views
Error when editing answer [duplicate]
Possible Duplicate: Why do I always get an error when trying to edit an answer? When editing any of my answers, I get... An error occurred saving the edit (click on this box to dismiss) ...
15
votes
1answer
106 views
Why do I always get an error when trying to edit an answer?
In past I didn't have this problem, but now if I try to edit an answer, I always get an error: After this I usually reload the page, if I see that the message has been actually edited (so I got ...
1
vote
1answer
85 views
Suggested edit accepted by 3 users then rolled back
I made a much necessary cleanup on a very popular CSS question (46k+ views). It was a big edit but nevertheless got approved by three users. Then a fourth user rolled back to the previous low-quality ...
-14
votes
5answers
495 views
Dog-piling of close votes is a real problem [duplicate]
Possible Duplicate: “not a real question” close trigger happy? Aggressive closing of questions suggestion Consider this question: http://stackoverflow.com/posts/14262913/revisions As ...
-1
votes
2answers
96 views
Stack Overflow allowing to edit comments from other users? [duplicate]
Possible Duplicate: How does editing work? Is this normal behavior in Stack Overflow that after just signing up one can edit not only his/her own comments, but also others' comments? If so, ...
1
vote
1answer
83 views
Comments looking odd to my answer
I am writing an answer and someone gives some comments for my answer. I faithfully change my answer based on the comments and my answer looks the best now. Yet the comment added before remains and ...
3
votes
1answer
116 views
“Don't just link to jsfiddle.net” when editing a post but not when asking the question?
I was trying to edit a post which had a jsfiddle link. The whole content was not just the link, but I believe there are other posts (like this, for example) that address this. Anyway, when I hit the ...
1
vote
2answers
76 views
What to do about wrongly edited code in answer?
I know that we shouldn't edit code in questions, for obvious reasons. But what about code in answers? Does the same rule apply? In this case, the code in the answer was changed by an edit to reflect ...
1
vote
1answer
114 views
How to handle someone trivially editing an answer that isn't an answer?
I frequent the review queues on SO and in the middle of flagging an answer as "not an answer", I noticed it had just been edited. This was odd since this answer was actually asking a question. The ...
3
votes
1answer
33 views
Repeated tag wiki edits only adding chat room links
Just been going through some edits, and there seems to be a lot from a single person where the only edit is just adding a chat room edit... This must be the 10th at least I've seen ...
9
votes
2answers
105 views
Why does an Edit action kick a post out of the Close Votes queue?
According to Emmett's answer, an Edit action on a post immediately dequeues it from the close votes queue. But why? That doesn't seem very productive. Just because someone edits a question doesn't ...
1
vote
2answers
55 views
Recent edit on a question
The question that was posted first had a recent change on that question based on the comments or from answers, for which I would have earlier answered. How does stackoverflow or stackexchange do ...
What is the Holochain? Holo Token Explained.
Holochain Blockchain platform and its acceptance in the crypto world.
Introduction
In barter systems some kind of human interaction can be observed, but take a moment to consider the current situation, in which services are primarily bought and sold. Human beings are inherently hardwired for such interactions with other people and for living together as a society, but when we replace a service with a piece of paper (currency), one person feels they have done something for the other, while the other has merely handed back a promise for the future. As a long-term consequence, human beings tend to suffer from loneliness as well as a lack of trust in each other. Nevertheless, currencies have transformed the way the financial economy works and have also added to the efficiency of cryptocurrencies. Let's get to know about the Holo coin and how Holochain works.
The Holochain Blockchain
The Holochain blockchain platform is one such recently launched cryptocurrency project, created to optimize the functioning of smart contracts as well as decentralised applications. The Holochain platform predominantly aims to shift the focus from a data-centric blockchain design towards an agent-centric one. It may be surprising to learn that the platform doesn't incorporate any kind of consensus mechanism, so there has to be some kind of alternative. In the case of Holochain, this is compensated for by side chains that can be developed on each new node of the network. This approach significantly reduces the load on the main chain and therefore provides, in principle, infinite scalability.
How does it actually work?
According to professional experts from various fields, the name of the Holochain initiative is pegged to the hologram, a technique used to simulate a 3D picture. Fittingly, the Holochain platform uses a holistic mechanism for its technical operations: each node maintains its own chain, and that chain interacts with the side chains through its unique cryptographic key.
Because the Holochain platform uses the concept of side chains, the speed of transaction confirmation is certainly not a bottleneck. The platform uses a hash-table data structure in which key information about all the side chains is stored. The Holochain network continuously grows in size, and so do the individual transactional capabilities.
Languages supported
The Holochain blockchain platform is expected to be deployed to social media platforms first. However, it can also be adopted by loyalty programs, supply-chain management, and peer-to-peer and collective-intelligence projects. The platform itself is developed in the Go programming language, although developers are free to use Lisp or JavaScript, and JavaScript, CSS and HTML can be used for front-end development.
Effective solutions
Because the Holochain platform doesn't incorporate any form of consensus mechanism, it is highly eco-friendly and occupies a negligible amount of bandwidth compared to major blockchains like Bitcoin and Ethereum. This makes Holo tokens attractive from an environmental standpoint, since no electricity is wasted as such. The team members of Holochain are among the most experienced in the field of programming: the co-founders themselves have 34 years of experience in programming, and the chief architect of the project has been a contract coder and has worked on artificial intelligence and online currency development since 1984.
Is it worth investing in?
Investment in the coin can be considered, but since it is not as popular as other coins on a relative scale, it might not be a great fit for every portfolio. During its ICO in 2018, it managed to raise almost $20 million in ETH. Staking funds is not free from risk in this volatile crypto domain, and expert consultancy is recommended prior to taking any such initiative. The token is available on Binance and can be stored in any ERC-20 compatible crypto wallet.
Parabola
From Wikipédia, the free encyclopedia
Jump to: navigation, search
Note: For other meanings, see Parábola (disambiguation).
A parabola
A parabola (from the Greek παραβολή) is a conic section generated by the intersection of a second-degree conical surface with a plane parallel to a generating line of the cone, the plane not containing that line. Equivalently, a parabola is the plane curve defined as the set of points that are equidistant from a given point (called the focus) and a given line (called the directrix)[1][2]. Practical applications are found in several areas of physics and engineering, such as the design of parabolic antennas, radars and automobile headlights.
Definitions and overview[edit]
A parabola with focus F(p, 0) and directrix x = -p.
Equations from analytic geometry[edit]
A parabola is the set of points in the plane that are equidistant from a given point F (the focus) and a given line r (the directrix) that does not contain F[3]. Thus, in Cartesian coordinates, a parabola with focus F(p, 0) and directrix r: x = -p has equation[4]
y^2 = 4px.
A parabola is said to be in standard position when its focus lies on the x-axis or on the y-axis and its directrix is, respectively, parallel to the y-axis or to the x-axis. The equation of a parabola in standard position is called a standard equation. Thus, besides the equation above, we have that:
x^2 = 4py
is also a standard equation. It characterizes a parabola with focus F(0, p) and directrix r: y = -p. Indeed, by definition, P(x, y) belongs to the parabola if and only if:
d(P, F) = d(P, r)
where d(\cdot, \cdot) denotes the Euclidean distance. Thus, for a parabola with focus F(p, 0) and directrix r: x = -p, we have:
\sqrt{(x-p)^2 + y^2} = \sqrt{(x+p)^2} \Leftrightarrow (x-p)^2 + y^2 = (x+p)^2
which is equivalent to the equation y^2 = 4px. The procedure is analogous for a parabola with focus F(0, p) and directrix r: y = -p, showing that, in this case, x^2 = 4py.
The axis of symmetry of a parabola is defined as the line that passes through its focus F and is perpendicular to its directrix r. The vertex of a parabola is defined as the intersection of the parabola with its axis of symmetry. Note that in the equations above |p| corresponds to the distance from the vertex to the focus, as well as to the directrix.
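For a concrete illustration (an example added here, not part of the original article): the parabola y^2 = 8x has 4p = 8, hence p = 2, so its focus is F(2, 0), its directrix is r: x = -2 and its vertex is the origin. The point P(2, 4) lies on the curve and indeed satisfies
d(P, F) = \sqrt{(2-2)^2 + 4^2} = 4 = |2 - (-2)| = d(P, r).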
A graph showing the reflective property, the directrix (in green), and the lines connecting the focus and the directrix to the parabola (in blue)
We observe that, by translation, we obtain the equation of a parabola with vertex V(h, k), focus F(h+p, k) and directrix r: x = h - p as:
(y - k)^2 = 4p(x-h).
Analogously, a parabola with vertex V(h, k), focus F(h, k+p) and directrix r: y = k - p is described by the equation:
(x - h)^2 = 4p(y - k).
In general, a parabola is a curve in the Cartesian plane defined by an irreducible equation with real coefficients of the form:
a x^2 + 2b xy + c y^2 + d x + e y + f = 0
with b^2 = ac and a + c \neq 0. The fact that the equation is irreducible means that it cannot be factored as a product of two linear factors.
Other geometric definitions[citation needed][edit]
A parabola as a conic section.
A parabola can also be characterized as a conic section with eccentricity equal to 1. As a consequence of this, all parabolas are similar.
A parabola can also be obtained as the limit of a sequence of ellipses in which one focus is kept fixed and the other is allowed to move ever farther away in one direction. In this sense, a parabola may be regarded as an ellipse that has one focus at infinity. The parabola is the inverse transform of a cardioid.
If a parabola is rotated about its axis in three dimensions, the resulting surface is known as a paraboloid of revolution.
Derivation of the equations[edit]
Example of a parabola with a vertical axis of symmetry.
In Cartesian coordinates[edit]
Vertical axis of symmetry[edit]
These derivations are based on a parabola with a vertical axis of symmetry, vertex V(h, k) and distance p between the vertex and the focus. By convention, p is positive if the vertex lies below the focus and negative otherwise.[2]
Since, by definition, a point P(x, y) on the parabola is as far from the focus F(h, k+p) as it is from the directrix r: y = k-p, we can write:
d(P, F) = d(P, r) \Rightarrow \sqrt{(x - h)^2 + (y - k - p)^2} = |y - k + p|
where d(\cdot, \cdot) denotes the Euclidean distance and |\cdot| denotes the absolute value function. Recalling that |x| = \sqrt{x^2} for any real x, we have:
(x - h)^2 + (y - k - p)^2 = (y - k + p)^2
(x - h)^2 = (y - k + p)^2 - (y - k - p)^2
(x - h)^2 = 4 p (y - k)
which is the standard equation sought.
This equation commonly appears rewritten as a quadratic trinomial:
y = ax^2 + bx + c
where:
a = \frac{1}{4p}; \ \ b = \frac{-h}{2p}; \ \ c = \frac{h^2}{4p} + k; h = \frac{-b}{2a}; \ \ k = -\frac{b^2 - 4ac}{4a}.
It is often useful to describe a parabola by parametric equations. Taking x = x(t), for example x(t) = 2pt + h, and substituting into the standard equation, we obtain y = y(t). This gives the following parametrization of such a parabola:
\left\{ \begin{array}{ll} x(t) = 2pt + h\\ y(t) = pt^2 + k \end{array}\right. , \quad t\in\mathbb{R}.
Note that the parametrization of x, i.e. x = x(t), is arbitrary; different choices lead to a different set of parametric equations.
Horizontal axis of symmetry[edit]
Example of a parabola with a horizontal axis of symmetry.
Analogously, a parabola with a horizontal axis of symmetry, vertex V(h, k) and distance p between the vertex and the focus has standard equation:
(y - k)^2 = 4p(x - h)
Note that this can be rewritten as the quadratic trinomial:
x = ay^2 + by + c
taking:
a = \frac{1}{4p}; \ \ b = \frac{-k}{2p}; \ \ c = \frac{k^2}{4p} + h; h = -\frac{b^2 - 4ac}{4a}; \ \ k = \frac{-b}{2a}.
Taking y = y(t), y(t) = 2pt + k, and substituting into the standard equation, we obtain the following parametric equations for such a parabola:
\left\{\begin{array}{ll}x(t) = pt^2 + h \\ y(t) = 2pt + k\end{array}\right.,\quad \forall t\in\mathbb{R}
In polar coordinates[edit]
Sketch of the parabola r(1 + \cos(\theta)) = 2a with a = 5/2.
In polar coordinates, a parabola with focus at the origin and directrix x = -2a is given by the equation[5]:
r = \frac{2a}{1 - \cos(\theta)}
Indeed, taking r = \sqrt{x^2 + y^2} and \cos(\theta) = \frac{x}{\sqrt{x^2 + y^2}} and substituting into the polar equation, we obtain:
y^2 = 4a(x + a)
which is the standard equation of the parabola with vertex V(-a, 0) and directrix x = -2a.
Form in Gaussian coordinates[edit]
The form in Gaussian coordinates is given by:[citation needed]
(\tan^2\phi,2\tan\phi)
and has normal (\cos\phi,\sin\phi).
Quadratic equation[edit]
In general, a parabola is described by a quadratic equation with real coefficients of the form:
ax^2 + 2bxy + cy^2 + dx + ey + f = 0
with b^2 = ac and a + c \neq 0. The presence of the cross term xy (i.e., b \neq 0) indicates that the parabola has an axis of symmetry that is oblique to the canonical x, y axes.
Sketch of a parabola in non-standard position. Here, \lambda_1 = 0, \lambda_2 = 10, the rotation is given by u = -\frac{\sqrt{10}}{10}x + \frac{3\sqrt{10}}{10}y and v = \frac{3\sqrt{10}}{10}x + \frac{\sqrt{10}}{10}y, and the translation is given by u' = u + 2 and v' = v - 1.
Such an equation can be written in the following matrix form[6]:
\mathbf{x}^T A \mathbf{x} + \mathbf{b}\mathbf{x} + f = 0
where \mathbf{x} = \left[\begin{array}{l}x\\y\end{array}\right] is the two-dimensional real vector of unknowns,
A = \left[\begin{array}{ll}a & b \\ b & c\end{array}\right]
is a real symmetric matrix with real eigenvalues \lambda_1 and \lambda_2, exactly one of which is zero,
\mathbf{b} = \left[d~~e\right]
is a two-dimensional real row vector, and f is a real scalar.
Rotation[edit]
A parabola whose axis of symmetry is parallel to neither the x-axis nor the y-axis can be described as a rotation of a parabola in standard position. Note that the matrix A is orthogonally diagonalizable[6], i.e.:
P^T A P = \left[\begin{array}{ll}\lambda_1 & 0 \\ 0 & \lambda_2 \end{array}\right]
where P = \left[\mathbf{v}_1 ~~ \mathbf{v}_2 \right] is the orthogonal matrix whose columns are eigenvectors \mathbf{v}_1, \mathbf{v}_2 associated with the eigenvalues \lambda_1 and \lambda_2, respectively.
Making the change of variables:
\mathbf{x} = P\mathbf{y},\quad\mathbf{y} = \left[\begin{array}{l}u\\v\end{array}\right],
we can write the equation of the parabola in the new variables u, v as:
\mathbf{y}^T \left(P^T A P\right)\mathbf{y} + \left(\mathbf{b}P\right)\mathbf{y} + f = 0
which represents a parabola whose axis of symmetry is parallel either to the u axis, given by the eigenvector \mathbf{v}_1, or to the v axis, given by the eigenvector \mathbf{v}_2.
Translation[edit]
A parabola with vertex V(h, k) can be seen as a translation of a parabola with vertex at the origin. That is, making the change of variables:
\mathbf{z} = \left[\begin{array}{l}u'\\ v'\end{array}\right] = \left[\begin{array}{l}u\\ v\end{array}\right] - P^T\left[\begin{array}{l}h\\ k\end{array}\right]
we obtain the standard equation of the parabola written in the variables u', v'.
Reflective property[edit]
Reflective property of a parabola.
For a parabolic surface built from reflective material, a beam of particles travelling parallel to the axis of symmetry is directed towards its focus.[7]
Indeed, consider, without loss of generality, the parabola x^2 = 4py, that is, y = \frac{x^2}{4p}, illustrated in the figure alongside. Here F(0, p) denotes its focus, A(0, 0) its vertex and E\left(x, \frac{x^2}{4p}\right) the point of incidence of a beam of particles parallel to the axis of symmetry of this parabola. The line parallel to the axis of symmetry that contains the trajectory of the wave intersects the x-axis at the point D(x, 0) and the directrix of the parabola at the point C(x, -p). We observe that the segment FC intersects the x-axis at the point B\left(\frac{x}{2}, 0\right), i.e. at the midpoint between the points A and D. For this reason, and because F and C are equidistant from the x-axis, we see that BEF and CEB are congruent triangles. We now note that the line through the points B and E has inclination \arctan\left(\frac{|DE|}{|BD|}\right) = \arctan\left(\frac{x}{2p}\right) and is therefore the tangent line to the parabola at the point E, since y' = \frac{x}{2p} at this point. Thus, if \alpha is the angle of incidence of the beam with the tangent line at the point E (equivalently, with an infinitesimal element of arc length of the parabola at the same point), the beam is reflected by the parabola at the same angle. By the congruence of the triangles BEF and CEB, we see that the reflected wave reaches the point F, i.e. the focus of the parabola.
References
1. Affonso Rocha Giongo (1974). Curso de Desenho Geométrico. Nobel. Chapter: Retificação da circunferência. 78 p.
2. a b "Sítio de internet do curso Cálculo e Geometria Analítica da UFRGS - Cônicas". Instituto de Matemática da UFRGS. Retrieved 24 October 2014.
3. "Parabola - from Wolfram MathWorld". Wolfram Research, Inc. Retrieved 24 October 2014.
4. Boulos, Paulo; Camargo, Ivan de (1987). Geometria Analítica. Um Tratamento Vetorial, 2nd ed. São Paulo: McGraw-Hill. p. 266. ISBN 0074500465.
5. Reginaldo J. Santos (2001). "Seções Cônicas" (PDF). Retrieved 25 October 2014.
6. a b Kolman, Bernard (2013). Álgebra Linear com Aplicações, 9th ed. LTC. ISBN 9788521622086.
7. Lima, Elon Lages (2006). A matemática do ensino médio - volume 1. SBM. ISBN 8585818107.
See also[edit]
External links[edit]
Commons
Wikimedia Commons has images and other media related to Parábola
3
I've tried everything I can think of to get this checkbox to work. This code is just supposed to trigger a function when a checkbox is checked. It currently does not trigger the alert, it only "visually" checks. Thank you for any input.
<apex:page >
<script type="text/javascript">
function checked() {
alert("This has been triggered");
}
</script>
<apex:form >
<apex:inputcheckbox onclick="checked()"/>
</apex:form>
</apex:page>
0
2 Answers
4
The problem is reported in the console:
Uncaught TypeError: checked is not a function
It seems to be a name conflict but I can't see where.
Changing the function name to e.g. isChecked works around the problem.
PS
The fix that relates to the root cause is this:
<apex:inputcheckbox onclick="window.checked()"/>
that references the global function checked that is otherwise shadowed (hidden) by the checked property of the input checkbox element.
7
• I should have included the error message. I didn't think renaming would work... still it's probably not a great pattern to follow. That's a quick and dirty workaround though.
– Adrian Larson
Jul 15, 2016 at 19:09
• @AdrianLarson I disagree that there is anything dirty or workaround about it and would argue its a great pattern to follow unless a more complicated solution pays back in some way.
– Keith C
Jul 15, 2016 at 19:49
• I would argue that unobtrusive listeners are a lot less vulnerable to API changes and naming conflicts. Leaking functions into the global state is not just a code smell, it can have harmful side-effects as demonstrated by this very question. What if a naming conflict also gets introduced that breaks isChecked?
– Adrian Larson
Jul 15, 2016 at 19:54
• I guess here it was not a Javascript conflict, but I still think that naming functions that do not need to be named and leaking them into the global state is an avoidable smell.
– Adrian Larson
Jul 15, 2016 at 19:59
• @AdrianLarson Yep thats an argument for the complexity. Caused me to Google a bit and find what is the difference between overengineering, underengineering and rightengineering?.
– Keith C
Jul 15, 2016 at 20:01
4
The first thing you should always do when you face a Javascript error is open up the console. Here's how to do it on:
If you had done so, you would have seen the following error when you click the checkbox:
Uncaught TypeError: checked is not a function
There are a few ways to deal with this. My preference is to use unobtrusive listeners. Try this vanilla Javascript:
<apex:page >
<script>
(function (D) {
"use strict";
D.addEventListener('DOMContentLoaded', function () {
var i, checkboxes = D.querySelectorAll('input.myClass');
for (i = 0; i < checkboxes.length; i++) {
checkboxes[i].addEventListener('click', function () {
alert('fire');
});
}
});
})(document);
</script>
<apex:form >
<apex:inputcheckbox styleClass="myClass" />
</apex:form>
</apex:page>
Or its jQuery equivalent:
<apex:page >
<script src="https://code.jquery.com/jquery-2.2.4.min.js"></script>
<script>
(function ($) {
"use strict";
$(function () {
$('input.myClass').click(function () {
alert('fire');
});
});
})(jQuery);
</script>
<apex:form >
<apex:inputcheckbox styleClass="myClass" />
</apex:form>
</apex:page>
1
• @AerinC A vital tool for any Javascript development!
– Adrian Larson
Jul 18, 2016 at 13:28
Viber VoIP Number Error: 3 Easy Ways to Fix it Today
by Vladimir Popescu
• You don’t need to install Viber on your smartphone before you can use Viber.
• Toggle on the Viber Local Number option in settings to use VoIP numbers.
• The "camera doesn't work" option is an alternative, easier way to access Viber on a PC.
Viber is popular because it is a free call and instant messaging app. Nonetheless, issues like Viber VoIP number errors plague the app, preventing users from getting through the Viber activation process.
Viber frowns on spamming, and as such, they have stopped VoIP numbers from being accepted for activations. This is a big issue, as some users rely on VoIP numbers.
Fortunately, there are ways to fix these issues and other problems you can encounter while using Viber.
How can I register my number on Viber?
1. Install Viber on your phone.
2. On the welcome screen, click continue.
3. On the registration page, input your phone number (without the first zero).
4. Then, you will receive a code through either text or automated calls.
5. After inputting the code, you’ll be directed to the profile interface and fill it up with your information.
After doing this, your number registration is complete, and you can freely access the app.
How can I activate Viber without a QR code?
1. Click camera doesn’t work on the QR code page, and you’ll be sent an activation link.
2. Copy the link, and send it to yourself.
3. Open the message with a phone using Viber.
4. Then, click the link to open it.
How can I fix Viber if the activation fails?
1. Update your Viber app
1. Go to Google Playstore and search for Viber.
2. Click on it and press the Update button.
2. Contact Viber Support
1. Go to the official Viber site, scroll down and click Contact Us.
2. Fill in the information and request access to an account.
It should be noted that this process is strictly at the mercy of Viber. Therefore, there are no guarantees that your request will be granted.
3. Configure Viber settings
1. Launch the Viber app and go to settings.
2. Locate the Viber Local Number option, then click on it.
3. Select your preferred country & area code, then input the VoIP number to start receiving calls.
Indeed, this will solve the Viber VoIP number error, and you will be able to register.
Can I use Viber without a SIM?
Yes, you can register on Viber without a SIM card: register Viber with a VoIP (Voice Over Internet Protocol) number instead. However, you will need to make some changes to your Viber app before it can work.
The above fixes will solve the Viber VoIP number error issues. In addition, you should check our page for solutions to some other Viber-related problems.
PostGIS 2.3.8dev-r@@SVN_REVISION@@
◆ lwpoly_perimeter()
double lwpoly_perimeter ( const LWPOLY poly)
Compute the sum of polygon rings length.
Could use a more numerically stable calculator...
Definition at line 517 of file lwpoly.c.
References LWDEBUGF, LWPOLY::nrings, ptarray_length(), and LWPOLY::rings.
Referenced by lwgeom_perimeter().
518 {
519 double result=0.0;
520 int i;
521
522 LWDEBUGF(2, "in lwgeom_polygon_perimeter (%d rings)", poly->nrings);
523
524 for (i=0; i<poly->nrings; i++)
525 result += ptarray_length(poly->rings[i]);
526
527 return result;
528 }
POINTARRAY ** rings
Definition: liblwgeom.h:456
int nrings
Definition: liblwgeom.h:454
double ptarray_length(const POINTARRAY *pts)
Find the 3d/2d length of the given POINTARRAY (depending on its dimensionality)
Definition: ptarray.c:1673
#define LWDEBUGF(level, msg,...)
Definition: lwgeom_log.h:88
DataTypes.net
>> JNB file
JNB file format description
Many people share .jnb files without attaching instructions on how to use them. Yet it isn't evident to everyone which program can be used to edit, convert or print a .jnb file. On this page, we try to provide assistance for handling .jnb files.
1 filename extension(s) found in our database.
.jnb - SigmaPlot Workbook
JNB file is a SigmaPlot Workbook. SigmaPlot is a proprietary software package for scientific graphing and data analysis.
.jnb file format description:
JNB format
Application:
SigmaPlot
Category:
Document files
Mime-type:
application/octet-stream
Magic:
- / -
Aliases:
-
SigmaPlot Workbook related links:
-
SigmaPlot Workbook related extensions:
.spw
SigmaPlot Worksheet
Naturally, other applications may also use the .jnb file extension. Even harmful programs can create .jnb files. Be especially cautious with .jnb files coming from an unknown source!
Can't open a .jnb file?
When you double-click a file to open it, Windows examines the filename extension. If Windows recognizes the filename extension, it opens the file in the program that is associated with that filename extension. When Windows does not recognize a filename extension, you receive the following message:
Windows can't open this file:
example.jnb
To open this file, Windows needs to know what program you want to use to open it. Windows can go online to look it up automatically, or you can manually select one from a list of programs that are installed on your computer.
To avoid this error, you need to set the file association correctly.
The .jnb file extension is often given incorrectly!
According to the searches on our site, these misspellings were the most common in the past year:
inb, njb, mnb, jng, jb, nb, knb, jbb
Is it possible that the filename extension is misspelled?
Similar file extensions in our database:
.inb
Delft3D Initial Bed Composition Data
.njb
Nikon Photo Index
.jb
JBit Program
.jng
Jpeg Network Graphics
.knb
Neuro-Expert Knowledge Base Data
.mnb
MuPAD Notebook Document
If you find the information on this page useful, please feel free to link to this page.
https://datatypes.net/open-jnb-files
If you have useful information about the .jnb file format, then write to us!
I'm using Modernizr to detect the features supported in the browser our users are running, so far so good. But I've come up against a theoretical problem when testing for base64 compatibility. The patch for this support is detailed here, and works- except for a weird case with IE8- it only allows base64 encoded images of up to 32KB.
I don't really want to embed a 32KB long base64 string inside my JS file, it'll add a crazy amount of bloat. So, could I create a 32KB- valid- image using JS? I'm thinking repeating some kind of pattern within a string until it reaches 32KB in length, that sort of thing. Or maybe taking an existing tiny string (like the one in the Modernizr patch) and adding junk data at the end that still results in a valid image.
I know next to nothing about base64 encoding, other than how to manipulate an existing image. Does anyone have any ideas?
1 Answer
up vote 1 down vote accepted
I think I have an answer. I tried all sorts of techniques (repeated text chunks in the PNG source that I could manually add, etc) until I found that adding line breaks appears to do the job:
var b64test = new Image();
b64test.onload = function() {
alert("yay!")
}
b64test.onerror = function() {
alert("boo")
}
/* A 1x1 GIF image */
var base64str = "R0lGODlhAQABAIAAAAAAAP///ywAAAAAAQABAAACAUwAOw=="
while (base64str.length < 33000) {
base64str = "\r\n" + base64str;
}
b64test.src= "data:image/gif;base64," + base64str;
Fails in IE8, works in IE9 and others. I'd love to hear any alternatives, though.
A way to cool dependency Hell?
How to break a deploy: Take one codebase. Sieve in a new class. Mix in the dry ingredients and a new runtime dependency. Place another dependency on a pre-warmed Hudson, bake for 10 minutes (on a medium heat) and then deploy. Oh dear. It didn’t deploy.
We’re a bit crap about managing the external dependencies of our code. I’m not talking about libraries but more basic dependencies that your application might have, like native code libraries, or commands. There’s two ways you can do this:
• You can make people responsible for the care and feeding of your testing and production environments. This is easy to implement, but stupid. I think it would only work in an environment with exceptional communication.
• Or you can insist that any application must declare what it depends on.
Keeping environments up to date keeps lots of people in a job. It’s a really dumb job. At my day job we’ve taken the latter route, using Puppet.
Puppet is a tool for systems administration. But you can use it even if you don’t know fsck from fmt. The way we’re using it is to be an executable specification of the dependencies that your application needs. For example: A test just failed on a new server – with this output:
Validation failed: Avatar /tmp/stream20091208-22414-y3anvf-0 is not recognized by the 'identify' command.
I realised that it needed the libmagic1 package and possibly libmagic-dev. I could have installed them then and there onto the machine. I'd have forgotten about it in the excitement. So I added them to a file on that project called dependencies.rb. This file is run by Puppet before we deploy. It gives our developers enough control over the target operating systems so that they can make small changes to the deployment environments. We've been running this via a Capistrano task on our project; we typically run it as the deployment user; that way we can easily make changes to crontabs. Puppet won't exit cleanly if it can't install all the dependencies, so it's a good way to test.
Here’s an abridged version of our dependencies file:
class dependencies {
include $operatingsystem
class ubuntu {
package {
'libcurl3': ensure => present;
'libcurl3-gnutls': ensure => present;
'libcurl4-openssl-dev': ensure => present;
'g++': ensure => present;
'build-essential': ensure => present;
'libmysqlclient15-dev': ensure => present;
'libxml2-dev': ensure => present;
'libxslt1-dev': ensure => present;
'libmagic1': ensure => present;
'libmagic-dev': ensure => present;
}
}
class gentoo {
cron {
'some cron job':
command => '/engineyard/bin/command blah blah',
user => 'admin',
hour => ['*/2'],
minute => ['12'],
ensure => 'present';
}
}
class darwin {
notice("Nothing to do in this OS.")
}
}
node default {
include dependencies
}
In this file we define a class (dependencies), which doesn’t do much but look for a corresponding inner class to match your operating system. Right now we have a very simple arrangement: The dependencies::gentoo class contains crontabs and the like for EngineYard. The dependencies::ubuntu class names all the native dependencies of our rubygems. We have an empty class for Darwin to stop the Mac OS X machines from complaining. That’s it. Here’s the Capistrano task:
desc "Run Puppet to make sure that dependencies are met"
task :dependencies, :roles => :app, :except => {:no_release => true} do
run "cd #{release_path} && rake dependencies"
end
Image courtesy of eflon
Can Any one provide assistance??
Author Message
Intern
Intern
avatar
Joined: 18 Jan 2012
Posts: 1
Kudos [?]: 5 [0], given: 0
Location: Pakistan
Concentration: Finance, Social Entrepreneurship
GMAT Date: 08-01-2014
GPA: 3.34
WE: Education (Education)
Can Any one provide assistance?? [#permalink]
Show Tags
New post 18 Jan 2012, 09:10
I have some problems, if anyone knows the solution please guide me, thanks.
1. A number being successively divided by 3, 5 and 8 leaves remainders 1, 4 and 7 respectively. Find the respective remainders if the order of the divisors is reversed.
2. When a certain number is multiplied by 13, the product consists entirely of fives. The smallest such number is:
a. 41625 b. 42135 c. 42515 d. 42735
(I do know the answer is d, but don't know how.)
3. The least number by which 72 must be multiplied in order to produce a multiple of 112 is:
a. 6 b. 12 c. 14 d. 18
4. A number when divided by 3 leaves a remainder of 1. When the quotient is divided by 2, it leaves a remainder of 1. What will be the remainder when the number is divided by 6?
Kudos [?]: 5 [0], given: 0
Expert Post
Magoosh GMAT Instructor
User avatar
G
Joined: 28 Dec 2011
Posts: 4423
Kudos [?]: 8438 [0], given: 102
Re: Can Any one provide assistance?? [#permalink]
Show Tags
New post 31 Jan 2012, 15:44
I'm happy to contribute here. :)
This post is a real doozie! Four tough questions.
First, I will say: Questions #1 & #2, I believe, are right out as far as the GMAT -- I think they are much harder than anything the GMAT will expect you to do.
Question #3 is a perfectly legitimate GMAT question, a bit on the harder side, but well within expectations.
Question #4 could be on the GMAT -- it's at the harder end of what might be possible.
====================================================================================
Question #1:
A number being successively divided by 3,5 and 8 leaves remainder 1,4 and 7 respectively. Find respective remainders if the orders of divisors is reversed?
Well, I'm going to go in reverse order. Let's say q is the final quotient. On the final division, we divided by 8 and got a remainder of 7, so the number we had before that was: 8q + 7
That, in turn, was the quotient of the previous division, where we divided by 5 with a remainder of 4. Before we divided, we must have had: 5(8q + 7) + 4.
That, in turn, was the quotient of the previous division, where we divided by 3 with a remainder of 1. Before we divided, we must have had: 3[5(8q + 7) + 4] + 1 = N, the original dividend.
I purposely left that unmultiplied out, so you could see each step's contribution. Now, multiplying out, we have N = 120q + 118.
Now, divide in the reverse order --- obviously, 3, 5, and 8 all go into 120q, so the real question is what happens when we divide the aggregate remainder term, viz, 118.
118/8 = 14, remainder = 6
14/5 = 2, remainder = 4
2/3 = 0, remainder = 2
I believe {6, 4, 2} is the remainder chain when you divide with the chain of divisors in the opposite order.
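As a purely optional sanity check — my own addition here, not something the GMAT would ever ask for — a few lines of Go reproduce both remainder chains (the divisor lists and the sample value q = 3 are arbitrary illustrative choices):
package main
import "fmt"
// successive divides n by each divisor in turn, recording the remainder
// at every step and carrying the integer quotient forward.
func successive(n int, divisors []int) []int {
	remainders := make([]int, 0, len(divisors))
	for _, d := range divisors {
		remainders = append(remainders, n%d)
		n /= d
	}
	return remainders
}
func main() {
	n := 120*3 + 118 // any number of the form 120q + 118; q = 3 here
	fmt.Println(successive(n, []int{3, 5, 8})) // [1 4 7]
	fmt.Println(successive(n, []int{8, 5, 3})) // [6 4 2]
}
Running it prints [1 4 7] for the original order of divisors and [6 4 2] for the reversed order, matching the result above.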
Question #2:
2. When a certain number is multiplied by 13,the product consists entirely of fives. The smallest such number is:
a. 41625
b. 42135
c. 42515
d. 42735
e. none of the above
Well, start with the units digit --- of course, that has to be 5, so get a five in the unit's digit of the product. 5 x 13 = 65
Well, the next step is to get a 5 in the tens digit of the product. We have a 6 there already from the previous multiplication, and we can only add, not subtract, so we'll need to add a 9 --- the way to get a 9 is to multiply 13 by a 3
35 x 13 = 455
Now, we have a 4 in the hundreds place, so if we can add a one in that place we'd have a five. The way to 1 in the hundred's digit is to multiply 13 by 700
735 x 13 = 9555
Now, we have a 9 in the thousands place, so we need to add 6 --- multiply by 2000
2735 x 13 = 35555
Now, we have a 3 in the ten thousands place, so we need to add 2 --- multiply by 40000
42735 x 13 = 555555
Mirabile dictu! As it happens, by windfall luck, we get not only a 5 in the ten thousands place, but also one in the hundred thousands place, making it a number with every digit equal to 5.
That's why D is the answer. Again, I can't imagine anyone in their right mind would expect you to slog through that without a calculator, and whatever else we say about the folks who write the GMAT, they are generally reasonable about this sort of thing. :)
Question #3:
3. The least number by which 72 must be multiplied in order to produce a multiple of 112, is:
a. 6
b. 12
c. 14
d. 18
e. none of the above
As I indicated above, this is bonafide, legitimate GMAT-type question. A questions like this do appear on the real GMAT.
The solution involves looking at the prime factorizations
72 = 8 * 9 = 2 * 2 * 2 * 3 * 3
112 = 2 * 56 = 2 * 8 * 7 = 2 * 2 * 2 * 2 * 7
So, the GCF of 72 and 112 is 2 * 2 * 2 = 8.
72 = 8 * 9
112 = 8 * 14
So, if we multiply 72 by 14, we will have a multiple of 112. In fact, in doing so, we will have created the LCM of 72 and 112. So the answer is C.
If those steps are unfamiliar, take a look at this blog article of mine: http://magoosh.com/gmat/2012/gmat-math-factors/
Question #4:
4. A number when divided by 3 leaves a remainder 1. When the quotient is divided by 2, it leaves a remainder1. What will be the remainder when the number is divided by 6
Again, I would say this is at the outer limit of what the GMAT might ask, assuming you were getting all the quant right and getting a steady diet of the hardest questions.
Same strategy as in Question #1 above -- work backwards.
Final quotient is q. To get that, we divided by 2 with a remainder of 1, so what we had before that division was 2q + 1.
That, in turn, was the quotient when we divided by 3 and get a remainder of 1, so what we had before that division was 3(2q + 1) + 1 = N
N = 3(2q + 1) + 1 = 6q + 4
Divide that by 6, and you get a remainder of 4.
====================================================================================================
Does all that make sense? If you have any questions, please do not hesitate to ask.
Mike :-)
_________________
Mike McGarry
Magoosh Test Prep
Kudos [?]: 8438 [0], given: 102
Example #1
func (this *Clusters) clusterSummary(zkcluster *zk.ZkCluster) (brokers, topics, partitions int, flat, cum int64) {
brokerInfos := zkcluster.Brokers()
brokers = len(brokerInfos)
kfk, err := sarama.NewClient(zkcluster.BrokerList(), saramaConfig())
if err != nil {
this.Ui.Error(err.Error())
return
}
defer kfk.Close()
topicInfos, _ := kfk.Topics()
topics = len(topicInfos)
for _, t := range topicInfos {
alivePartitions, _ := kfk.WritablePartitions(t)
partitions += len(alivePartitions)
for _, partitionID := range alivePartitions {
latestOffset, _ := kfk.GetOffset(t, partitionID, sarama.OffsetNewest)
oldestOffset, _ := kfk.GetOffset(t, partitionID, sarama.OffsetOldest)
flat += (latestOffset - oldestOffset)
cum += latestOffset
}
}
return
}
Example #2
func (this *Topics) addTopic(zkcluster *zk.ZkCluster, topic string, replicas,
partitions int) error {
this.Ui.Info(fmt.Sprintf("creating kafka topic: %s", topic))
ts := sla.DefaultSla()
ts.Partitions = partitions
ts.Replicas = replicas
lines, err := zkcluster.AddTopic(topic, ts)
if err != nil {
return err
}
for _, l := range lines {
this.Ui.Output(color.Yellow(l))
}
if this.ipInNumber {
this.Ui.Output(fmt.Sprintf("\tzookeeper.connect: %s", zkcluster.ZkConnectAddr()))
this.Ui.Output(fmt.Sprintf("\t broker.list: %s",
strings.Join(zkcluster.BrokerList(), ",")))
} else {
this.Ui.Output(fmt.Sprintf("\tzookeeper.connect: %s", zkcluster.NamedZkConnectAddr()))
this.Ui.Output(fmt.Sprintf("\t broker.list: %s",
strings.Join(zkcluster.NamedBrokerList(), ",")))
}
return nil
}
Example #3
func (this *Peek) consumeCluster(zkcluster *zk.ZkCluster, topicPattern string,
partitionId int, msgChan chan *sarama.ConsumerMessage) {
brokerList := zkcluster.BrokerList()
if len(brokerList) == 0 {
return
}
kfk, err := sarama.NewClient(brokerList, sarama.NewConfig())
if err != nil {
this.Ui.Output(err.Error())
return
}
//defer kfk.Close() // FIXME how to close it
topics, err := kfk.Topics()
if err != nil {
this.Ui.Output(err.Error())
return
}
for _, t := range topics {
if patternMatched(t, topicPattern) {
go this.simpleConsumeTopic(zkcluster, kfk, t, int32(partitionId), msgChan)
}
}
}
Example #4
func (this *Mirror) makePub(c2 *zk.ZkCluster) (sarama.AsyncProducer, error) {
cf := sarama.NewConfig()
cf.Metadata.RefreshFrequency = time.Minute * 10
cf.Metadata.Retry.Max = 3
cf.Metadata.Retry.Backoff = time.Second * 3
cf.ChannelBufferSize = 1000
cf.Producer.Return.Errors = true
cf.Producer.Flush.Messages = 2000 // 2000 message in batch
cf.Producer.Flush.Frequency = time.Second // flush interval
cf.Producer.Flush.MaxMessages = 0 // unlimited
cf.Producer.RequiredAcks = sarama.WaitForLocal
cf.Producer.Retry.Backoff = time.Second * 4
cf.Producer.Retry.Max = 3
cf.Net.DialTimeout = time.Second * 30
cf.Net.WriteTimeout = time.Second * 30
cf.Net.ReadTimeout = time.Second * 30
switch this.Compress {
case "gzip":
cf.Producer.Compression = sarama.CompressionGZIP
case "snappy":
cf.Producer.Compression = sarama.CompressionSnappy
}
return sarama.NewAsyncProducer(c2.BrokerList(), cf)
}
Example #5
func (this *Topics) clusterSummary(zkcluster *zk.ZkCluster) []topicSummary {
r := make([]topicSummary, 0, 10)
kfk, err := sarama.NewClient(zkcluster.BrokerList(), saramaConfig())
if err != nil {
this.Ui.Error(err.Error())
return nil
}
defer kfk.Close()
topicInfos, _ := kfk.Topics()
for _, t := range topicInfos {
flat := int64(0)
cum := int64(0)
alivePartitions, _ := kfk.WritablePartitions(t)
for _, partitionID := range alivePartitions {
latestOffset, _ := kfk.GetOffset(t, partitionID, sarama.OffsetNewest)
oldestOffset, _ := kfk.GetOffset(t, partitionID, sarama.OffsetOldest)
flat += (latestOffset - oldestOffset)
cum += latestOffset
}
r = append(r, topicSummary{zkcluster.ZkZone().Name(), zkcluster.Name(), t, len(alivePartitions), flat, cum})
}
return r
}
Example #6
func (this *Mirror) makePub(c2 *zk.ZkCluster) (sarama.AsyncProducer, error) {
// TODO setup batch size
cf := sarama.NewConfig()
switch this.compress {
case "gzip":
cf.Producer.Compression = sarama.CompressionGZIP
case "snappy":
cf.Producer.Compression = sarama.CompressionSnappy
}
return sarama.NewAsyncProducer(c2.BrokerList(), cf)
}
Example #7
func (this *TopBroker) clusterTopProducers(zkcluster *zk.ZkCluster) {
kfk, err := sarama.NewClient(zkcluster.BrokerList(), sarama.NewConfig())
if err != nil {
return
}
defer kfk.Close()
for {
hostOffsets := make(map[string]int64)
topics, err := kfk.Topics()
swallow(err)
<-this.signalsCh[zkcluster.Name()]
for _, topic := range topics {
if !patternMatched(topic, this.topic) {
continue
}
partions, err := kfk.WritablePartitions(topic)
swallow(err)
for _, partitionID := range partions {
leader, err := kfk.Leader(topic, partitionID)
swallow(err)
latestOffset, err := kfk.GetOffset(topic, partitionID,
sarama.OffsetNewest)
swallow(err)
host, _, err := net.SplitHostPort(leader.Addr())
swallow(err)
if this.shortIp {
host = shortIp(host)
}
if _, present := hostOffsets[host]; !present {
hostOffsets[host] = 0
}
hostOffsets[host] += latestOffset
}
}
this.hostOffsetCh <- hostOffsets
kfk.RefreshMetadata(topics...)
}
}
Example #8
func (this *UnderReplicated) displayUnderReplicatedPartitionsOfCluster(zkcluster *zk.ZkCluster) []string {
brokerList := zkcluster.BrokerList()
if len(brokerList) == 0 {
this.Ui.Warn(fmt.Sprintf("%s empty brokers", zkcluster.Name()))
return nil
}
kfk, err := sarama.NewClient(brokerList, saramaConfig())
if err != nil {
this.Ui.Error(fmt.Sprintf("%s %+v %s", zkcluster.Name(), brokerList, err.Error()))
return nil
}
defer kfk.Close()
topics, err := kfk.Topics()
swallow(err)
if len(topics) == 0 {
return nil
}
lines := make([]string, 0, 10)
for _, topic := range topics {
// get partitions and check if some dead
alivePartitions, err := kfk.WritablePartitions(topic)
if err != nil {
this.Ui.Error(fmt.Sprintf("%s topic[%s] cannot fetch writable partitions: %v", zkcluster.Name(), topic, err))
continue
}
partions, err := kfk.Partitions(topic)
if err != nil {
this.Ui.Error(fmt.Sprintf("%s topic[%s] cannot fetch partitions: %v", zkcluster.Name(), topic, err))
continue
}
if len(alivePartitions) != len(partions) {
this.Ui.Error(fmt.Sprintf("%s topic[%s] has %s partitions: %+v/%+v", zkcluster.Name(),
topic, color.Red("dead"), alivePartitions, partions))
}
for _, partitionID := range alivePartitions {
replicas, err := kfk.Replicas(topic, partitionID)
if err != nil {
this.Ui.Error(fmt.Sprintf("%s topic[%s] P:%d: %v", zkcluster.Name(), topic, partitionID, err))
continue
}
isr, isrMtime, partitionCtime := zkcluster.Isr(topic, partitionID)
underReplicated := false
if len(isr) != len(replicas) {
underReplicated = true
}
if underReplicated {
leader, err := kfk.Leader(topic, partitionID)
swallow(err)
latestOffset, err := kfk.GetOffset(topic, partitionID, sarama.OffsetNewest)
swallow(err)
oldestOffset, err := kfk.GetOffset(topic, partitionID, sarama.OffsetOldest)
swallow(err)
lines = append(lines, fmt.Sprintf("\t%s Partition:%d/%s Leader:%d Replicas:%+v Isr:%+v/%s Offset:%d-%d Num:%d",
topic, partitionID,
gofmt.PrettySince(partitionCtime),
leader.ID(), replicas, isr,
gofmt.PrettySince(isrMtime),
oldestOffset, latestOffset, latestOffset-oldestOffset))
}
}
}
return lines
}
Example #9
func (this *Topics) displayTopicsOfCluster(zkcluster *zk.ZkCluster) {
echoBuffer := func(lines []string) {
for _, l := range lines {
this.Ui.Output(l)
}
}
linesInTopicMode := make([]string, 0)
if this.verbose {
linesInTopicMode = this.echoOrBuffer(zkcluster.Name(), linesInTopicMode)
}
// get all alive brokers within this cluster
brokers := zkcluster.Brokers()
if len(brokers) == 0 {
linesInTopicMode = this.echoOrBuffer(fmt.Sprintf("%4s%s", " ",
color.Red("%s empty brokers", zkcluster.Name())), linesInTopicMode)
echoBuffer(linesInTopicMode)
return
}
if this.verbose {
sortedBrokerIds := make([]string, 0, len(brokers))
for brokerId, _ := range brokers {
sortedBrokerIds = append(sortedBrokerIds, brokerId)
}
sort.Strings(sortedBrokerIds)
for _, brokerId := range sortedBrokerIds {
if this.ipInNumber {
linesInTopicMode = this.echoOrBuffer(fmt.Sprintf("%4s%s %s", " ",
color.Green(brokerId), brokers[brokerId]), linesInTopicMode)
} else {
linesInTopicMode = this.echoOrBuffer(fmt.Sprintf("%4s%s %s", " ",
color.Green(brokerId), brokers[brokerId].NamedString()), linesInTopicMode)
}
}
}
kfk, err := sarama.NewClient(zkcluster.BrokerList(), saramaConfig())
if err != nil {
if this.verbose {
linesInTopicMode = this.echoOrBuffer(color.Yellow("%5s%+v %s", " ",
zkcluster.BrokerList(), err.Error()), linesInTopicMode)
}
return
}
defer kfk.Close()
topics, err := kfk.Topics()
swallow(err)
if len(topics) == 0 {
if this.topicPattern == "" && this.verbose {
linesInTopicMode = this.echoOrBuffer(fmt.Sprintf("%5s%s", " ",
color.Magenta("no topics")), linesInTopicMode)
echoBuffer(linesInTopicMode)
}
return
}
sortedTopics := make([]string, 0, len(topics))
for _, t := range topics {
sortedTopics = append(sortedTopics, t)
}
sort.Strings(sortedTopics)
topicsCtime := zkcluster.TopicsCtime()
hasTopicMatched := false
for _, topic := range sortedTopics {
if !patternMatched(topic, this.topicPattern) {
continue
}
if this.since > 0 && time.Since(topicsCtime[topic]) > this.since {
continue
}
this.topicN++
hasTopicMatched = true
if this.verbose {
linesInTopicMode = this.echoOrBuffer(strings.Repeat(" ", 4)+color.Cyan(topic), linesInTopicMode)
}
// get partitions and check if some dead
alivePartitions, err := kfk.WritablePartitions(topic)
swallow(err)
partions, err := kfk.Partitions(topic)
swallow(err)
if len(alivePartitions) != len(partions) {
linesInTopicMode = this.echoOrBuffer(fmt.Sprintf("%30s %s %s P: %s/%+v",
zkcluster.Name(), color.Cyan("%-50s", topic), color.Red("partial dead"), color.Green("%+v", alivePartitions), partions), linesInTopicMode)
}
replicas, err := kfk.Replicas(topic, partions[0])
if err != nil {
this.Ui.Error(fmt.Sprintf("%s/%d %v", topic, partions[0], err))
}
this.partitionN += len(partions)
if !this.verbose {
linesInTopicMode = this.echoOrBuffer(fmt.Sprintf("%30s %s %3dP %dR %s",
zkcluster.Name(),
color.Cyan("%-50s", topic),
len(partions), len(replicas),
gofmt.PrettySince(topicsCtime[topic])), linesInTopicMode)
continue
}
for _, partitionID := range alivePartitions {
leader, err := kfk.Leader(topic, partitionID)
swallow(err)
replicas, err := kfk.Replicas(topic, partitionID)
if err != nil {
this.Ui.Error(fmt.Sprintf("%s/%d %v", topic, partitionID, err))
}
isr, isrMtime, partitionCtime := zkcluster.Isr(topic, partitionID)
isrMtimeSince := gofmt.PrettySince(isrMtime)
if time.Since(isrMtime).Hours() < 24 {
// ever out of sync last 24h
isrMtimeSince = color.Magenta(isrMtimeSince)
}
underReplicated := false
if len(isr) != len(replicas) {
underReplicated = true
}
latestOffset, err := kfk.GetOffset(topic, partitionID,
sarama.OffsetNewest)
swallow(err)
oldestOffset, err := kfk.GetOffset(topic, partitionID,
sarama.OffsetOldest)
swallow(err)
if this.count > 0 && (latestOffset-oldestOffset) < this.count {
continue
}
this.totalMsgs += latestOffset - oldestOffset
this.totalOffsets += latestOffset
if !underReplicated {
linesInTopicMode = this.echoOrBuffer(fmt.Sprintf("%8d Leader:%s Replicas:%+v Isr:%+v Offset:%16s - %-16s Num:%-15s %s-%s",
partitionID,
color.Green("%d", leader.ID()), replicas, isr,
gofmt.Comma(oldestOffset), gofmt.Comma(latestOffset), gofmt.Comma(latestOffset-oldestOffset),
gofmt.PrettySince(partitionCtime), isrMtimeSince), linesInTopicMode)
} else {
// use red for alert
linesInTopicMode = this.echoOrBuffer(fmt.Sprintf("%8d Leader:%s Replicas:%+v Isr:%s Offset:%16s - %-16s Num:%-15s %s-%s",
partitionID,
color.Green("%d", leader.ID()), replicas, color.Red("%+v", isr),
gofmt.Comma(oldestOffset), gofmt.Comma(latestOffset), gofmt.Comma(latestOffset-oldestOffset),
gofmt.PrettySince(partitionCtime), isrMtimeSince), linesInTopicMode)
}
}
}
if this.topicPattern != "" {
if hasTopicMatched {
echoBuffer(linesInTopicMode)
}
} else {
echoBuffer(linesInTopicMode)
}
}
Example #10
func (this *Top) clusterTopProducers(zkcluster *zk.ZkCluster) {
cluster := zkcluster.Name()
brokerList := zkcluster.BrokerList()
if len(brokerList) == 0 {
return
}
kfk, err := sarama.NewClient(brokerList, sarama.NewConfig())
if err != nil {
return
}
defer kfk.Close()
for {
topics, err := kfk.Topics()
if err != nil || len(topics) == 0 {
return
}
for _, topic := range topics {
if !patternMatched(topic, this.topicPattern) {
continue
}
msgs := int64(0)
alivePartitions, err := kfk.WritablePartitions(topic)
swallow(err)
for _, partitionID := range alivePartitions {
latestOffset, err := kfk.GetOffset(topic, partitionID,
sarama.OffsetNewest)
if err != nil {
// this broker is down
continue
}
msgs += latestOffset
}
this.mu.Lock()
if _, present := this.brokers[cluster+":"+topic]; !present {
// calculate the broker leading partitions
leadingPartitions := make(map[string]int) // broker:lead partitions n
for _, pid := range alivePartitions {
leader, err := kfk.Leader(topic, pid)
swallow(err)
leadingPartitions[leader.Addr()]++
}
brokers := make([]string, 0)
for addr, n := range leadingPartitions {
brokers = append(brokers, fmt.Sprintf("%d@%s", n, addr))
}
this.brokers[cluster+":"+topic] = this.discardPortOfBrokerAddr(brokers)
}
this.counters[cluster+":"+topic] = float64(msgs)
this.partitions[cluster+":"+topic] = len(alivePartitions)
this.mu.Unlock()
}
time.Sleep(time.Second)
kfk.RefreshMetadata(topics...)
}
}
What is a URL? [Definition] Uniform Resource Locator
Get well-versed in what a URL is, how it functions, and its definition, structure, syntax, and working mechanism. This blog post covers all of these points.
Updated: 19 Dec, 22 by Susith Nonis 10 Min
The most popular method of internet surfing is looking through websites. However, even though it may appear straightforward to open a laptop, double-click the Chrome or Firefox browser, and start surfing, a lot happens behind the scenes that the user never sees.
The first step is the web browser, which acts as the process's initial gateway to the internet. A web browser is a straightforward piece of software that displays websites from the internet.
Consider a web browser to be similar to a computer's display. The items that the computer's operating system generates are shown on a screen. The web browser functions something like an internet screen.
An address known as a URL aids the web browser in finding a particular webpage, image, file, or other resources.
The rest of the URL in your browser displays the route to the particular file on that server once your browser has taken the address and converted the domain name to the server's IP address.
A domain name is the overall "address" for the entire website or server, whereas a URL points to a specific file or page.
A URL (Uniform Resource Locator) is a special identifier known as a web address. A URL is made up of numerous elements, just like a physical address, depending on the type of web page and the area of the website being viewed.
Theoretically, every legitimate URL leads to a different resource. These resources could be an image, a CSS file, an HTML page, etc. There are a few exceptions, the most frequent of which is a URL leading to a resource that has either relocated or vanished.
It is the owner of the web server's responsibility to properly maintain the resource that the URL represents as well as the URL itself because the Web server is in charge of both.
A URL is often found in the address bar or Omnibox at the top of the browser window. The URL is always accessible on laptops and desktop computers unless your browser is being shown in full-screen mode.
Most mobile and tablet browsers only display the domain when the address bar is visible, with the URL disappearing as you scroll down. Scroll up the page if the address bar isn't visible.
Tapping on the address bar displays the complete address if only the domain is displayed.
The most common URL types are absolute and relative. An absolute URL contains all the necessary information, from the protocol to the path to the resource and any arguments.
In contrast, a relative URL contains only the resource's path.
Here are several other uniform resource locators, listed according to their function:
• Canonical URLs: Website admins might use them when they have duplicate content. Designating one URL as canonical tells search engines which page to crawl and index.
• Callback URLs: When users finish a task on an external system, they refer to a home destination.
• Vanity URLs: They are simple-to-remember web addresses, also referred to as bespoke short URLs. A vanity URL typically serves as a redirect for a lengthier URL. A vanity URL can be made by website owners using a URL shortening service, such as Bitly, TinyURL, or Short.io.
URL consists of several components. So, let's delve more deeply into its structure.
The protocol or scheme
It is used to access an online resource. Protocols include mailto, http, https, file, and ftps.
One can access the resource through the domain name system (DNS) name.
Subdomain
Any phrases or words before the first dot in a URL are referred to as subdomains. The most used type is www which refers to the World Wide Web.
It signifies that a website is reachable over the internet and communicates using HTTP.
In addition, website owners are free to use any phrase as a subdomain as long as it leads to a particular directory from the main domain. The most well-liked choices include "news" and "blog."
Domain name
Users enter a domain name into their browser's address bar to access a website. It comprises a domain name and an extension, like google.com.
Each name is distinct and corresponds to a certain IP address. This specific IP address connects to the server hosting the website. In other words, it makes it easier for people to visit websites.
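As a small illustration of that name-to-address mapping, here is a sketch using Python's standard library (the domain is just an example):

import socket

# Resolve a domain name to the IP address of the server that hosts it.
print(socket.gethostbyname("example.com"))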
Domain extension
The part of a website name that comes after the dot is known as a top-level domain (TLD). The most common extension, .com, is used on 53% of all websites.
Path to the resource
The area to the right of the TLD is a route to the resource. It's frequently referred to as the website's folder structure. A web server can direct users to a specific location by using the route to the resource, which provides additional information.
The path may reference a page, post, or file, and a single URL can contain several path segments.
When it does, the forward-slash symbol (/) demarcates them. The more path segments there are in a URL, the more precise the location.
Parameters
A parameter is a query string or a variable in a URL. They are the part of a URL that comes after the question mark. Keys and values are separated in parameters by the equal sign (=). A URL can also contain several variables.
In that instance, each will be separated by the ampersand sign (&).
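To make these components concrete, here is an illustrative sketch using Python's standard library (the URL itself is made up for the example):

from urllib.parse import urlparse, parse_qs

url = "https://blog.example.com/hosting/what-is-a-url?lang=en&page=2#structure"
parts = urlparse(url)

print(parts.scheme)           # 'https' - the protocol or scheme
print(parts.netloc)           # 'blog.example.com' - subdomain + domain name + extension
print(parts.path)             # '/hosting/what-is-a-url' - path to the resource
print(parse_qs(parts.query))  # {'lang': ['en'], 'page': ['2']} - the parameters
print(parts.fragment)         # 'structure' - the anchor, covered further below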
Syntax describes a set of guidelines. It establishes which element and symbol are permitted in a URL in the case of URL syntax.
Furthermore, only numbers, letters, and ()!$-‘_*+ characters are permitted in uniform resource locators.
Site owners must encode other characters to use them. For instance, since spaces are prohibited in URLs, website owners frequently replace them with a plus sign or with hyphens.
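As an illustrative sketch of that conversion, again assuming Python's standard library:

from urllib.parse import quote, quote_plus, unquote

print(quote("my file.html"))       # 'my%20file.html' - the space is percent-encoded
print(quote_plus("my file.html"))  # 'my+file.html' - the space becomes a plus sign
print(unquote("my%20file.html"))   # 'my file.html' - decoding back to the original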
An anchor link, also known as a page jump or fragment identifier, can be found in URLs. The element is denoted by the pound sign (#), which serves as a bookmark for a particular site section.
An HTML file with a page jump causes a web browser to leap to the chosen area. A modern internet browser will play a video or audio file following the anchor's timestamp.
One of the simplest ways to open a URL is to type it into the address box if you know the full website URL. If not, here are a few different approaches you can use:
Clicking a hyperlink
Links to other HTML files on the internet might be text, icons, or images. Users can spot a hyperlink by moving their mouse cursor over the linked text or image.
Then, a URL link indicating the link's destination will appear at the window's bottom.
Scanning a QR code
QR stands for quick response code and is a digital device-readable black and white barcode you can use to open a URL. It keeps various data, such as account information, web links, and encryption specifications.
Copying and pasting
Copying and pasting a website address into the address bar would open it if it has no links or QR codes.
You can have your custom directories that link to your page on social networking websites such as Instagram and e-commerce sites like Etsy.
For instance, you can visit the Facebook page for Computer Hope at "facebook.com/computerhope". However, this URL only refers to a portion of your user profile and is not the whole address.
You must purchase a custom domain from a domain name registrar to create a unique URL, such as "computerhope.com". These businesses let you buy domain names linked to your website(s) or directed to any other website of your choice.
Typically, you have to renew your domain once a year. Prices for domains are determined by their marketability and prior usage. The cost is also impacted by domain suffixes like .com, .net, or .org.
Once acquired, domain names can be linked to other websites or moved between registrars while being under your ownership.
A general step-by-step tutorial on purchasing a domain is provided below:
• Verify the availability of the name. You can use a checker to search for this.
• Click Search after entering the desired name and extension. Following that, it will give you a list of available names.
• Continue to the checkout. You will select the registration period in this step. A registrar typically requires a minimum of one year. Despite this, some registrars provide up to ten years of registration.
• Finish the registration procedure. Your name, email address, address, and other contact details may be required on a setup form that the registrar will ask you to complete after the payment has been approved. Make sure you correctly enter all the information.
• Verify who owns the name. A few minutes after finishing the registration procedure, a verification link will appear on the email you used for registration. You can submit a request from the control panel if it doesn't.
Because the registration is ongoing, each owner must keep track of the dates on which their domains will expire.
A complete web address referring to a particular file on the internet is known as a uniform resource locator (URL). A URL, for instance, can direct visitors to a website, a web page, or perhaps an image.
You should know exactly what a URL is, including its domain name, path, and protocols. Keep your site's URLs succinct, to the point, and relevant to the subject of each page to get the most out of them.
When making adjustments, don't forget to redirect any outdated URLs, especially those that have already gathered backlinks and brought organic traffic to your website.
Users must register a domain with a reliable registrar to build and change a website's URL. Alternatively, you could choose a reputable host offering these registration services.
Susith Nonis
I'm fascinated by the IT world and how the 1's and 0's work. While I venture into the world of Technology, I try to share what I know in the simplest way with you. Not a fan of coffee, a travel addict, and a self-accredited 'master chef'.
1 Databases SQL TRIGGERS Elaini Simoni Angelotti.
Presentation on theme: "Databases SQL TRIGGERS Elaini Simoni Angelotti." - Presentation transcript:
1 1 Databases SQL TRIGGERS Elaini Simoni Angelotti
2 2 TRIGGERS A TRIGGER is a special kind of stored procedure (sp) that is executed automatically as a consequence of a modification (INSERT, UPDATE, DELETE) on the table on which the TRIGGER was configured. The automatic execution of a trigger is called firing the trigger. Triggers cannot be executed using EXEC.
3 3 A TRIGGER is always associated with a table, but the commands that make up the TRIGGER can access data from other tables. E.g.: given the tables Nota_Fiscal(Num_nota, valor_total), Produto(Cod_Prod, nome, preço, estoque), Nota_Prod(Num_nota, Cod_Prod, quantidade), a Trigger can be created for the INSERT operation on the Nota_Prod table. Whenever a new order item is inserted into the Nota_Prod table, a Trigger will be fired that updates the stock level of the product being sold.
4 4 TRIGGERs can be used to define the database's Business Rules –They represent real-world rules –E.g.: approving loans above a certain amount. TRIGGERS can also be used for cascading deletes and updates. If the command being executed violates the definition of a defined CONSTRAINT, the TRIGGER will not fire.
5 5 In SQLServer 2000/2005 there are a few types of TRIGGERS: –DELETE –UPDATE –INSERT, fired either AFTER or INSTEAD OF. AFTER: fired AFTER all the commands of a TRIGGER associated with a DELETE, UPDATE or INSERT have been executed. INSTEAD OF: fired BEFORE the commands are executed; constraints are processed before the trigger runs.
6 6 SQLServer 2000/2005 allows TRIGGERs to be specified on Views (INSTEAD OF). Most SQL commands can be used in the commands that define the TRIGGER, including IF..ELSE and WHILE structures. The following commands are not allowed: ALTER DATABASE, CREATE DATABASE, DROP DATABASE, LOAD DATABASE, LOAD LOG, RESTORE DATABASE, RESTORE LOG, RECONFIGURE
7 7 The commands that make up the TRIGGER have access to two special tables: –DELETED TABLE –INSERTED TABLE. These tables exist only in the server's memory and are not written to disk. The records in these tables are accessible only while the TRIGGER is executing. To reference these temporary tables inside the TRIGGER, the names –DELETED –INSERTED are used.
8 8 The DELETED table stores copies of the records affected by a DELETE or UPDATE command –It stores the records as they were before the change. The INSERTED table stores copies of the records affected by an INSERT or UPDATE command. –The records in the INSERTED table are copies of the new records of the table that fired the TRIGGER.
9 9 Syntax: CREATE TRIGGER trigger_name ON table_name or view_name [WITH ENCRYPTION] {FOR | AFTER | INSTEAD OF} {[DELETE] [,] [INSERT] [,] [UPDATE]} AS command 1 command 2 ... command n
10 10 Example: Create a TRIGGER that prevents new customers from being inserted into the CLIENTE table (LOCADORA database) when the UF (state) field equals AC or PA. This TRIGGER will be created for the INSERT action. CREATE TRIGGER TG_Permite_UF ON Cliente FOR INSERT AS IF EXISTS (SELECT * FROM INSERTED WHERE UF_CLI IN ('PA', 'AC')) BEGIN PRINT 'INSERÇÃO DE REGISTRO CANCELADA.' PRINT 'ESTADO (UF) PROIBIDO!!' ROLLBACK END ELSE PRINT 'PAÍS PERMITIDO!'
11 11 Create a TRIGGER that calculates and inserts the expected return date in the EMP_DEV table whenever a tape is rented: CREATE TRIGGER tg_CALCULA_DATA_DEV_PREV ON EMP_DEV FOR INSERT AS IF EXISTS (SELECT * FROM INSERTED) BEGIN UPDATE EMP_DEV SET DATA_DEV_PREV = DATEADD(DD,1,DATA_EMP) END
12 12 Create a TRIGGER that calculates and inserts the expected return date in the EMP_DEV table whenever a tape is rented. If the tape is a catalog title, it must be returned within two days. New releases can be rented for only 1 day. alter table Fita add Tipo_fita varchar (10) Constraint CKTipo_fita check (Tipo_fita in ('catálogo','Lançamento')) DROP TRIGGER tg_CALCULA_DATA_DEV_PREV
13 13 CREATE TRIGGER tg_CALCULA_DATA_DEV_PREV ON EMP_DEV FOR INSERT AS IF EXISTS (SELECT * FROM INSERTED INNER JOIN FITA ON FITA.COD_FITA = INSERTED.COD_FITA WHERE Tipo_fita = 'catálogo') BEGIN UPDATE EMP_DEV SET DATA_DEV_PREV = DATEADD(DD,2,DATA_EMP) WHERE emp_dev.cod_fita = (select inserted.cod_fita from inserted) END ELSE IF EXISTS (SELECT * FROM INSERTED INNER JOIN FITA ON FITA.COD_FITA = INSERTED.COD_FITA WHERE Tipo_fita = 'Lançamento') BEGIN UPDATE EMP_DEV SET DATA_DEV_PREV = DATEADD(DD,1,DATA_EMP) WHERE emp_dev.cod_fita = (select inserted.cod_fita from inserted) END
14 14 Create a TRIGGER that calculates the amount of a customer's fine whenever the customer returns a tape to the rental store. This means that every time the dev_efet field is filled in (UPDATE), the fine will be calculated. CREATE TRIGGER tg_CALCULA_MULTA ON EMP_DEV FOR UPDATE AS IF UPDATE (DATA_DEV_EFET) BEGIN UPDATE EMP_DEV SET multa = 1.5 * DATEDIFF(DD,DATA_DEV_PREV,DATA_DEV_EFET) WHERE DATEDIFF(DD,DATA_DEV_PREV,DATA_DEV_EFET) > 0 END IF UPDATE (DATA_DEV_EFET) BEGIN UPDATE EMP_DEV SET multa = 0 WHERE DATEDIFF(DD,DATA_DEV_PREV,DATA_DEV_EFET) <= 0 END
15 15 Suppose that, by order of management, changes and insertions on the Fornecedor table are not allowed. To enforce this rule, implement a trigger that fires in response to UPDATE and INSERT commands on the Fornecedor table. This trigger must issue a warning that changes and insertions have been suspended, and record in a table the name of the user who attempted the change and the name of the supplier that the user tried to change or insert. CREATE TABLE TENTOU_ALTERAR ( FORNECEDOR VARCHAR (50) NOT NULL, USUÁRIO CHAR (30) NOT NULL )
16 16 CREATE TRIGGER Tg_NÃO_ALTERAINSERE_FORNECEDOR ON FORNECEDOR FOR INSERT, UPDATE AS -- VARIAVEL QUE SERÁ UTILIZADA NA TRIGGER VARCHAR(50) -- VERIFICA SE FOI FEITA ALGUMA ALTERAÇÃO (INSERT OU UPDATE) IF EXISTS (SELECT * FROM DELETED) BEGIN = (SELECT NOME_FORN FROM DELETED) PRINT 'VC NÃO PODE ALTERAR O REGISTRO DE UM FORNECEDOR' ROLLBACK INSERT INTO TENTOU_ALTERAR VALUES CURRENT_USER) END IF EXISTS (SELECT * FROM INSERTED) BEGIN = (SELECT NOME_FORN FROM INSERTED) PRINT 'VC NÃO PODE INSERIR NOVOS FORNECEDORES' ROLLBACK INSERT INTO TENTOU_ALTERAR VALUES CURRENT_USER) END
17 17 Enabling and Disabling Triggers To temporarily disable a trigger: ALTER TABLE Nome_da_Tabela DISABLE TRIGGER Nome_da_Trigger To enable a trigger again: ALTER TABLE Nome_da_Tabela ENABLE TRIGGER Nome_da_Trigger
weixin_39670246
2020-12-07 12:09
permute for 3 dim tensor
Hi jaybdub,
If I have a tensor with x.shape = [1,1024,22] and I want to permute the dims with x = x.permute(2,0,1), it fails with an AssertionError. What should I do to get the same result in a way that the process can be transferred to TensorRT?
This question comes from the open-source project: NVIDIA-AI-IOT/torch2trt
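For reference, a possible pure-PyTorch rewrite (an illustrative sketch, not taken from the thread): permute(2,0,1) can be expressed as two pairwise transpose calls, and whether that converts cleanly depends on which converters your torch2trt build supports.

import torch

x = torch.randn(1, 1024, 22)

# Desired result: x.permute(2, 0, 1) -> shape [22, 1, 1024]
y_permute = x.permute(2, 0, 1)

# The same reordering expressed as two pairwise transposes.
y_transpose = x.transpose(1, 2).transpose(0, 1)

print(y_permute.shape)                      # torch.Size([22, 1, 1024])
print(torch.equal(y_permute, y_transpose))  # True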
I'm looking for a robust way to add vertices to a graph by modifying its AdjacencyMatrix.
Here's what I have so far:
addNode[matrix_,in_,out_]:=Module[{mod},
mod=ArrayPad[matrix,1];
mod[[1,2]]=1;
mod[[2,1]]=1;
mod[[Length@mod,-out]]=1;
mod[[-out,Length@mod]]=1;
mod
]
This will add a vertex connected to vertex 1 and another vertex connected to vertex out. Notice that I only add 1x1 0's to my matrix. This is because at the moment I only need to add 2 vertices to my graphs.
Examples:
I've tried implementing a similar function that accepts the in argument, but I couldn't get it to work properly. It seems as if I have to program special cases for it to work.
Is there a better way to do this?
Do you want an adjacency matrix as the output or a graph as the output? Or does it not matter? – R. M. Aug 8 '12 at 23:22
@RM I'd prefer an adjacency matrix, since I'll be able to call the function on itself if I need to add more vertices. But in the end, it's not crucial. – CHM Aug 8 '12 at 23:25
There are functions that allow you to add vertices and edges, namely VertexAdd and EdgeAdd. You can use these to conveniently add vertices and manage connections on the fly. Here's an example that accepts either an AdjacencyMatrix or a Graph object.
Clear@extendGraph
extendGraph[mat_?MatrixQ, vertices_, connect_] :=
AdjacencyGraph[mat, GraphLayout -> "SpringEmbedding",
VertexLabels -> "Name", ImagePadding -> 5] ~VertexAdd~ vertices ~EdgeAdd~ connect
extendGraph[graph_?GraphQ, vertices_, connect_] := graph ~VertexAdd~ vertices ~EdgeAdd~ connect
You can then extend this further to delete edges/vertices using the EdgeDelete and VertexDelete functions in a similar way.
Here are some example usages:
a = {{0, 1, 0, 1}, {1, 0, 1, 0}, {0, 1, 0, 1}, {1, 0, 1, 0}};
g1 = extendGraph[a, {5, 6}, {1 \[UndirectedEdge] 5, 4 \[UndirectedEdge] 6}]
g2 = extendGraph[g1, {7, 8}, {7 \[UndirectedEdge] 2, 8 \[UndirectedEdge] 4}]
Use AdjacencyMatrix on the above graphs to get the matrix (although, it is not necessary, since my definition allows you to use it again on the graph itself)
AdjacencyMatrix@g2 // Normal
(* {{0, 1, 0, 1, 1, 0, 0, 0},
{1, 0, 1, 0, 0, 0, 1, 0},
{0, 1, 0, 1, 0, 0, 0, 0},
{1, 0, 1, 0, 0, 1, 0, 1},
{1, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 1, 0, 0, 0, 0},
{0, 1, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 1, 0, 0, 0, 0}} *)
This works great, thanks! I didn't know about VertexAdd and EdgeAdd. – CHM Aug 8 '12 at 23:50
I answered a similar question before using an interactive solution in this example. To construct a basic function you could use EdgeList:
f[m_, v1_, v2_] := AdjacencyMatrix[Graph[EdgeList[AdjacencyGraph[m,
DirectedEdges -> False]]~Join~{v1 \[UndirectedEdge] v2}]]
Here is how it works. If you start from a matrix:
m = {{0, 1, 1, 0},
{1, 0, 1, 1},
{1, 1, 0, 1},
{0, 1, 1, 0}};
this is what you will get:
f[m, 4, 5] // Normal // MatrixForm
These are the graphs:
AdjacencyGraph /@ {m, %}
Now if you want to add a few vertexes at a time you could modify your function like:
g[m_, l_] := AdjacencyMatrix[Graph[EdgeList[AdjacencyGraph[m,
DirectedEdges -> False]]~Join~(UndirectedEdge @@@ l)]]
Don't forget that labels can be pretty arbitrary and new vertexes can be disconnected from the original graph - it will still work:
g[m, {{4, 5}, {5, "CAT"}, {"DOG", "BIRD"}}] // Normal // MatrixForm
AdjacencyGraph /@ {m, %}
.toggle() - Toggle display element in jQuery
How to toggle display an element in jQuery?
To toggle the display of an HTML element, the toggle() method can be used.
<input type="button" id="btnToggle" value="Toggle" />
<div id="divToggle" style="height: 50px; width: 50px; background: brown; left: 300px; position: absolute;">
</div>
<script>
$("#btnToggle").click(function () {
$("#divToggle").toggle("slow");
});
</script>
In the above code, clicking the "btnToggle" button toggles the display of the "divToggle" element (i.e. if the element is hidden it will be shown, and vice versa). A callback function name can also be specified as a second parameter, as we did for slideToggle(), to execute when the toggle animation is complete.
The .toggle() function is a combination of the .hide() and .show() functions.
MEAN Stack- Next Generation Development
Why become a MEAN stack developer? Why the MEAN stack? To answer this, we need to understand what the MEAN stack is. The MEAN stack, also called MEAN.JS, is a web development stack based entirely on JavaScript. The word MEAN stands for: MongoDB as the database, Express as the back-end framework....
Aurora Corporate documentation
Configuring OnlyOffice Docs with non-standard port
When enabling office document editor and installing OnlyOffice Docs, usually it's recommended to install it on a separate server, as it provides a complete webserver bundle which may conflict with an existing web server setup.
Still, it's possible to have OnlyOffice Docs installed on the same server as Aurora Corporate. It comes with Nginx webserver so, for example, if you already run Apache you can have both on the same system, as long as they don't use the same set of ports.
The following guidelines assume OnlyOffice Docs is installed on CentOS - we'll cover the case of Debian/Ubuntu, too.
NB: be sure to backup all the configuration files before and after making changes; there's a chance some of the configuration files may get overwritten during the upgrade.
1. By default, Nginx would use port 80 and won't start if it's already used by another webserver running there. You'll need a different port supplied in /etc/nginx/nginx.conf file:
server {
listen 808 default_server;
# listen [::]:80 default_server;
NB: This port is only specified as a default value and will not actually be used for accessing OnlyOffice Docs.
2. DocumentServer needs to have a dedicated port as well, specified in /etc/onlyoffice/documentserver/default.json file:
"services": {
"CoAuthoring": {
"server": {
"port": 8081,
3. Now we need to tell Nginx webserver which port is used by DocumentServer, /etc/onlyoffice/documentserver/nginx/includes/http-common.conf file:
upstream docservice {
server localhost:8081;
}
4. One last thing is to tell Nginx where web scripts of DocumentServer are found, /etc/onlyoffice/documentserver/nginx/ds.conf file:
server {
listen 0.0.0.0:8088;
listen [::]:8088 default_server;
server_tokens off;
NB: this is actually the port that will be used in URL for accessing OnlyOffice Docs.
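To quickly confirm that something is actually listening on the new port once the changes are applied, a simple check along these lines can be used (an illustrative sketch; the host and port are assumptions matching the examples above):

import socket

host, port = "127.0.0.1", 8088  # the port chosen in ds.conf above

try:
    with socket.create_connection((host, port), timeout=5):
        print(f"OnlyOffice Docs port {port} is reachable on {host}")
except OSError as exc:
    print(f"Cannot connect to {host}:{port}: {exc}")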
If you install OnlyOffice Docs on Debian/Ubuntu, the above approach mostly applies, with 2 exceptions:
1) Before installing OnlyOffice Docs, specify port from section 4 by entering the following command:
echo onlyoffice-documentserver onlyoffice/ds-port select 8088 | sudo debconf-set-selections
2) Port from section 1 is specified in /etc/nginx/sites-available/default configuration file.
program HelloWorld;

{$MODE Delphi}

// comment / uncomment to change mode of DoQuery, uses local var for Psqlite3_stmt if defined
{$DEFINE LVAR}

uses
  {$IFNDEF MSWINDOWS}cthreads,{$ENDIF}
  cmem, SysUtils, Classes, slsqlite;

Var
  sqlDB: TslSqliteDB = nil;
  sql_statement: Psqlite3_stmt = nil;
  TH1, TH2, TH3, TH4: TThread;
  //threadsafe: integer;

procedure CreateConnection();
begin
  // create db with pragma if wanted
  sqlDB := TslSqliteDB.Create('mydatabase.db', '');
  // some other dbs use: synchronous=FULL;synchronous=OFF;
  sqlDB.ExecSQL( 'DROP TABLE IF EXISTS USERS' );
  // insert some entries to our database
  sqlDB.ExecSQL( 'CREATE TABLE IF NOT EXISTS USERS (U_NAME VARCHAR(255) NOT NULL)' );
  sqlDB.ExecSQL( 'INSERT OR IGNORE INTO USERS (U_NAME) VALUES (''hello'');' );
  sqlDB.ExecSQL( 'INSERT OR IGNORE INTO USERS (U_NAME) VALUES (''world'');' );
  sql_statement := sqlDB.Open('SELECT * FROM USERS');
end;

procedure DoQuery();
{$IFDEF LVAR}
var
  s: Psqlite3_stmt;
{$ENDIF}
begin
  {$IFDEF LVAR}
  s := sqlDB.Open('SELECT * FROM USERS');
  {$ELSE}
  sqlDB.Open(sql_statement);
  {$ENDIF}
  {$IFDEF LVAR}
  while sqlDB.Step(s) do
  {$ELSE}
  while sqlDB.Step(sql_statement) do
  {$ENDIF}
  begin
    {$IFDEF LVAR}
    Writeln(IntTostr(TThread.CurrentThread.ThreadID) + ' with local variable - Username is: ' + sqlDB.column_text(s, 0));
    {$ELSE}
    Writeln(IntTostr(TThread.CurrentThread.ThreadID) + ' with global - Username is: ' + sqlDB.column_text(sql_statement, 0));
    {$ENDIF}
  end;
end;

begin
  writeln('start!');

  if not slsqlite_inited then
  begin
    writeln('SQLite3 not initialised. Error: ' + slsqlite_error);
    exit;
  end
  else
    writeln('Using SQLite3 version: ' + slSqliteVersion);

  // create sqlite3 connection and insert some examples
  createconnection();

  // not supported by this pascal lib
  //threadsafe := sqlite3_threadsafe();
  //if threadsafe > 0 then
  //  writeln('your sqlite3 does support multithreading with mode: ' + inttostr(threadsafe))
  //else
  //  writeln('sorry, no support for multithreading');

  try
    th1 := tthread.executeinthread(@doquery, nil);
    th2 := tthread.executeinthread(@doquery, nil);
    th3 := tthread.executeinthread(@doquery, nil);
    th4 := tthread.executeinthread(@doquery, nil);
    writeln('main thread done');
    th1.waitfor;
    th2.waitfor;
    th3.waitfor;
    th4.waitfor;
  except
    on E: Exception do
    begin
      writeln('Exception: ' + E.Message);
    end;
  end;

  writeln('end!');
end.
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Common values and helper functions for the ChaCha and XChaCha stream ciphers.
*
* XChaCha extends ChaCha's nonce to 192 bits, while provably retaining ChaCha's
* security. Here they share the same key size, tfm context, and setkey
* function; only their IV size and encrypt/decrypt function differ.
*
* The ChaCha paper specifies 20, 12, and 8-round variants. In general, it is
* recommended to use the 20-round variant ChaCha20. However, the other
* variants can be needed in some performance-sensitive scenarios. The generic
* ChaCha code currently allows only the 20 and 12-round variants.
*/
#ifndef _CRYPTO_CHACHA_H
#define _CRYPTO_CHACHA_H
#include <asm/unaligned.h>
#include <linux/types.h>
/* 32-bit stream position, then 96-bit nonce (RFC7539 convention) */
#define CHACHA_IV_SIZE 16
#define CHACHA_KEY_SIZE 32
#define CHACHA_BLOCK_SIZE 64
#define CHACHAPOLY_IV_SIZE 12
#ifdef CONFIG_X86_64
#define CHACHA_STATE_WORDS ((CHACHA_BLOCK_SIZE + 12) / sizeof(u32))
#else
#define CHACHA_STATE_WORDS (CHACHA_BLOCK_SIZE / sizeof(u32))
#endif
/* 192-bit nonce, then 64-bit stream position */
#define XCHACHA_IV_SIZE 32
void chacha_block_generic(u32 *state, u8 *stream, int nrounds);
static inline void chacha20_block(u32 *state, u8 *stream)
{
chacha_block_generic(state, stream, 20);
}
void hchacha_block_arch(const u32 *state, u32 *out, int nrounds);
void hchacha_block_generic(const u32 *state, u32 *out, int nrounds);
static inline void hchacha_block(const u32 *state, u32 *out, int nrounds)
{
if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA))
hchacha_block_arch(state, out, nrounds);
else
hchacha_block_generic(state, out, nrounds);
}
void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv);
static inline void chacha_init_generic(u32 *state, const u32 *key, const u8 *iv)
{
state[0] = 0x61707865; /* "expa" */
state[1] = 0x3320646e; /* "nd 3" */
state[2] = 0x79622d32; /* "2-by" */
state[3] = 0x6b206574; /* "te k" */
state[4] = key[0];
state[5] = key[1];
state[6] = key[2];
state[7] = key[3];
state[8] = key[4];
state[9] = key[5];
state[10] = key[6];
state[11] = key[7];
state[12] = get_unaligned_le32(iv + 0);
state[13] = get_unaligned_le32(iv + 4);
state[14] = get_unaligned_le32(iv + 8);
state[15] = get_unaligned_le32(iv + 12);
}
static inline void chacha_init(u32 *state, const u32 *key, const u8 *iv)
{
if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA))
chacha_init_arch(state, key, iv);
else
chacha_init_generic(state, key, iv);
}
void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
unsigned int bytes, int nrounds);
void chacha_crypt_generic(u32 *state, u8 *dst, const u8 *src,
unsigned int bytes, int nrounds);
static inline void chacha_crypt(u32 *state, u8 *dst, const u8 *src,
unsigned int bytes, int nrounds)
{
if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA))
chacha_crypt_arch(state, dst, src, bytes, nrounds);
else
chacha_crypt_generic(state, dst, src, bytes, nrounds);
}
static inline void chacha20_crypt(u32 *state, u8 *dst, const u8 *src,
unsigned int bytes)
{
chacha_crypt(state, dst, src, bytes, 20);
}
#endif /* _CRYPTO_CHACHA_H */
Publication number: US6366293 B1
Publication type: Grant
Application number: US 09/163,934
Publication date: 2 Apr 2002
Filing date: 29 Sep 1998
Priority date: 29 Sep 1998
Fee status: Paid
Publication number: 09163934, 163934, US 6366293 B1, US-B1-6366293, US 6366293 B1, US6366293B1
Inventors: Jeffrey L. Hamilton, Bret D. Schlussman
Original Assignee: Rockwell Software Inc.
Method and apparatus for manipulating and displaying graphical objects in a computer display device
US 6366293 B1
Abstract
A graphical object in an object-oriented environment is comprised of a plurality of child graphical objects. The parent graphical object and each of the child graphical objects have a property corresponding to the orientation of a representation of the respective object. A connection tree is formed from the parent graphical object which has the initial values of each property of the child graphical objects. During operation, the value of the property of the graphical object may be altered corresponding to a change in the position of the object's graphical representation. The altered value is broadcast through the connection tree to allow recalculation of each child object's property based upon its initial value so that the parent object and its child objects can be graphically displayed based on the changed position.
Images(13)
Previous page
Next page
Claims(6)
We claim:
1. A method of manipulating and displaying a graphical object on a computer display device of a computer system which includes the computer display device, a processor, and memory, the method comprising the steps of:
creating a graphical object in an object-oriented environment and storing the graphical object in the memory of the computer system, the graphical object comprising a plurality of child graphical objects, the graphical object and each of the child graphical objects having a property corresponding to the orientation of a representation of the respective graphical object, at least two of said graphical objects being operatively connected to one another through an anchor point with one of said graphical objects having an anchor property corresponding to rotation of the respective graphical object representation about the anchor point, the anchor property having locked and unlocked settings, the property of each of the respective graphical objects having an initial value;
scanning the graphical object by traversing through each of the child graphical objects to form a connection tree having the initial values of each property of the respective graphical objects;
altering the value of the property of the graphical object from the initial value corresponding to a change in the position of the representation of the graphical object; and
graphically displaying the representation of the graphical object on the display device by traversing through the connection tree to broadcast the altered value of the graphical object to each of the child graphical objects, recalculating the value of each property of the child graphical objects based on its initial value and the altered value of the graphical object, the value of the anchor property remaining unchanged when in the locked setting and altered when in the unlocked setting where a change of position occurs to at least one of the representations of the graphical objects which are operatively connected to one another through the anchor point, and displaying the representation of the graphical object including its child graphical objects on the display device with said recalculated values.
2. The method of claim 1, wherein creating a graphical object includes:
determining a relationship between at least two physical components based on a physical proximity factor; and
enabling a user to adjust the orientation of the representation of at least two child graphical objects relative to one another by operating a pointer device operatively connected to the computer system so that the orientation of the representations of the at least two graphical objects graphically represents the physical proximity factor of the two physical components.
3. The method of claim 2, wherein the step of scanning the graphical object includes the step of storing the connection tree in the memory of the computer system.
4. A computer system comprising:
a computer operating in an object-oriented environment;
a memory operatively coupled to the computer;
a graphical object stored within the memory of the computer, the graphical object comprising a plurality of child graphical objects, the graphical object and each of the child graphical objects having a property corresponding to the orientation of a representation of the respective graphical object, at least two of said graphical objects being operatively connected to one another through an anchor point with one of said graphical objects having an anchor property corresponding to rotation of the respective graphical object representation about the anchor point; the anchor property having locked and unlocked settings, the property of each of the respective graphical objects having an initial value;
a display screen, operatively coupled to the memory, for graphically displaying a representation of the graphical object including representations of each of its child graphical objects;
means for scanning the graphical object to form a connection tree having the initial values of each property of the child graphical objects;
means for altering the value of the property of the graphical object representing a change in the position of the graphical object; and
means for graphically displaying the representation of the graphical object on the display screen by broadcasting the altered value of the graphical object to each of the child graphical objects, recalculating the value of each property of the child graphical objects based on its initial value and the altered value of the graphical object, the value of the anchor property remaining unchanged when in the locked setting and altered when in the unlocked setting where a change of position occurs to at least one of the representations of the graphical objects which are operatively connected to one another through the anchor point, and displaying the representation of the graphical object including its child graphical objects on the display screen with said recalculated values.
5. The computer system of claim 4, further comprising a user interface, operatively coupled to the display screen, for dragging, in response to commands issued by a user, the graphically displayed representation of the graphical object, and for dropping, in response to commands issued by the user, the graphically displayed representation of the graphical object in a desired orientation during a non-runtime state.
6. The computer system of claim 5, further comprising means for storing the connection tree in the memory of the computer system.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to computer systems having a graphical user interface. More particularly, the invention pertains to a method and apparatus for manipulating and displaying graphical objects in an object oriented computing environment.
2. Description of the Prior Art
Graphical user interfaces employing an object-oriented programming paradigm are commonly used in application programs such as word processing and database programs, as well as other graphic desktop applications. A graphical user interface provides manipulable graphical objects such as icons, windows, and various controls that can be used to control underlying software or hardware represented by the graphical objects. Typically, a user interacts with the graphical user interface using a graphical pointer that is controlled by a pointing device, such as a mouse or trackball, to accomplish conventional drag and drop techniques and other graphical object manipulations.
The conventional object-oriented, graphical user interface provides an effective manner to monitor and control underlying components represented by the graphical objects. However, applications that display animation or graphical movement between connected components have required the assistance of computer programmers and specially designed custom software. Examples of such applications are computer simulation programs, mechanical emulation programs, and user display or control applications that graphically display moving components of an automated process. Accordingly, these programs are typically difficult and expensive to develop making them generally unavailable to many industries and possible applications.
As will be described in greater detail hereinafter, the method and apparatus of the present invention solves the aforementioned problems by employing an object oriented paradigm to represent connectable graphical objects and employs a number of novel features that render it highly advantageous over the prior art.
SUMMARY OF THE INVENTION
Accordingly, it is an object of this invention to provide a system and method for manipulating and displaying graphical objects.
Another object of this invention is to provide a method for displaying graphical objects that are operatively connected to one another so that their representations do not become distorted after repeated movement.
Still another object of this invention is to provide a method and apparatus which can be easily used by systems engineers or designers to provide applications having graphical objects virtually connected to one another to form new graphical objects that can be moved or manipulated in a display device without having to rely upon the assistance of computer programmers and specially designed custom software.
To achieve the foregoing and other objectives, and in accordance with the purposes of the present invention a method and apparatus of manipulating and displaying graphical objects on a computer display device of a computer system are provided. The computer system preferably includes the display device, a processor, and memory for storing created graphical objects. In one embodiment, the graphical object comprises a plurality of child graphical objects where both the parent and children graphical objects have a property corresponding to the orientation of a representation of the respective graphical object.
In accordance with an aspect of the invention, a connection tree will be formed by scanning the graphical object to traverse through each of the child graphical objects. The connection tree will contain the initial values of each property of the graphical objects. In use, the property values will become altered which correspond to a change in the position of the representation of the respective graphical object. The change of position is then broadcast down the connection tree and the property values of respective graphical objects are recalculated based on the initial and altered value.
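Purely as an informal illustration and not part of the patent disclosure, the broadcast-and-recalculate idea described above can be sketched roughly as follows; the names and structure are assumptions chosen for readability:

class GraphicalObject:
    """Minimal sketch of a parent graphical object with child objects and a rotation property."""

    def __init__(self, name, rotation=0.0, children=None):
        self.name = name
        self.initial_rotation = rotation  # value captured when the connection tree is built
        self.rotation = rotation          # current value
        self.children = children or []

    def build_connection_tree(self):
        # Record the initial property value for this object and every child.
        self.initial_rotation = self.rotation
        for child in self.children:
            child.build_connection_tree()

    def broadcast_rotation(self, delta):
        # Recalculate each property from its initial value plus the parent's change,
        # then propagate the change down the connection tree.
        self.rotation = (self.initial_rotation + delta) % 360
        for child in self.children:
            child.broadcast_rotation(delta)

arm = GraphicalObject("arm", rotation=90.0)
base = GraphicalObject("base", rotation=0.0, children=[arm])
base.build_connection_tree()
base.broadcast_rotation(45.0)       # the parent representation is rotated by 45 degrees
print(base.rotation, arm.rotation)  # 45.0 135.0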
Other objects, features and advantages of the invention will become more readily apparent upon reference to the following description when taken in conjunction with the accompanying drawings, which drawings illustrate several embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:
FIG. 1 depicts an exemplary computer system configured according to the teachings of the present invention;
FIG. 2 is a block diagram of one preferred container application according to the teachings of the present invention;
FIG. 3 is a block diagram of the container application depicting its hierarchical structure;
FIG. 4 depicts a grouping function for objects;
FIG. 5 depicts a preferred embodiment of a graphical object;
FIG. 6 is a flow diagram of graphical object processing;
FIG. 7 depicts persistence of the objects according to the teachings of the present invention;
FIG. 8 is a block diagram of one preferred embodiment of server communication;
FIG. 9 is an exemplary embodiment of a communication network connected with remote devices;
FIG. 10 is a flow diagram of one preferred embodiment of data transmission through the server; and
FIGS. 11-15 depict graphical interface screens of a preferred embodiment according to the teachings of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The present invention generally relates to joining and manipulating graphical objects, as later described, on a computer display screen. The invention may be run on a variety of computers or computing systems including personal computers, mainframe systems, and distributed computing environments.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be obvious, however, to one skilled in the art that the present invention may be practiced without many of these specific details. In other instances, well-known structures, circuits, and programming techniques have not been shown in detail in order not to unnecessarily obscure the present invention.
Referring to FIG. 1, a computer system 10 of an exemplary embodiment of the present invention includes a system unit 12. The system unit 12 includes a system bus 14 to which various components are connected and communicate through. Microprocessor 16 is connected to the system bus 14 along with random access memory (RAM) 18 and read only memory (ROM) 20. The Microprocessor can be any number of conventional processors including the Intel PENTIUM microprocessors, IBM POWERPC microprocessors, or others.
The RAM 18 is the main memory of the system 10 while the ROM 20 typically includes the BIOS and other basic operating instructions, data and objects used by the system 12 to perform its functions. The operating system 22 and application program 24, when loaded into the system 12, are typically retained within the RAM 18, though portions may be retained on the disk drive or storage medium 26, such as a hard disk drive or CD ROM. One skilled in the art would appreciate that the storage of application programs may extend over various mediums and that during run time it is not uncommon that an application program or portions thereof may be residing in several mediums at once or may even be distributed across a network in several different systems. The keyboard controller 28, pointer or mouse controller 30 and video controller 32 are connected to the system bus 14 and provide hardware control and interconnection for the keyboard 34, pointer or mouse 36, and graphic display 38 which are connected to respective controllers. The graphic display 38 has a display screen 39 for displaying representations of graphical objects thereon. For discussion purposes herein, it should be understood that reference to displaying a particular graphical object refers to displaying a graphic representation of the graphical object and discussion of same may be used interchangeably. I/O module or interface 40 is connected to the system bus 14 and enables communication over a network 42, as described later in more detail.
The pointer or mouse 36 can be any type of pointing device including, for example, a mouse, trackball, or touch-sensitive pad. The pointer or mouse 36 is well known in the art and is used by a user to generate control signals having two or more dimensions. The control signals are transmitted to the microprocessor 16. For example, movement of the mouse 36 across a surface will generate control signals in an x-axis and y-axis. The mouse 36 further includes one or more buttons or actuators that can be selectively actuated by the user to generate further control signals to the microprocessor 16. The use of the mouse or similar pointer device is described later with respect to the features of dragging and dropping of graphical objects displayed on the display 38. The implementation of dragging and dropping in a windows graphical environment by using the control signals generated by the mouse is well known in the art.
Object oriented programming paradigms and characteristics relating to encapsulation, inheritance, and polymorphism are well known in the computer programming arts. Accordingly, for brevity, various conventional techniques and terminology have been omitted. Further information and details about these subjects, as well as C++ and OLE, may be obtained from “Inside Visual C++” by David J. Kruglinski, 1996, Microsoft Press, and “Inside OLE 2” by Kraig Brockschmidt, 1994, Microsoft Press, both of which are hereby incorporated by reference.
Referring to FIG. 2, the application 24 in a preferred embodiment is a container application written in C++ capable of managing the graphical objects described below. The container application 24 includes a server application or data addin 44 that manages the data of the graphical objects as later described. Additionally, toolbar addins 46, ActiveX controls 48, and Microsoft Visual Basic Application (MS VBA) 50, all of which are known in the art, are preferably incorporated into the container application 24 to provide additional functionality, usable tools or ease of programmability. It is significant to note that various exemplary and preferred embodiments of the present invention will be described by the C++ programming language, however one skilled in the art would appreciate that other object models and techniques can be defined using other programming languages.
Graphical Objects
Referring to FIG. 3, the hierarchy structure of the container application 24 is represented. Graphical objects 52 are placed on a graphical page 54, where the page 54 is part of a document 56. In some cases the page is the document, or there is no distinction between a document and the pages in the document.
Referring to FIG. 4, page 54 of the container application comprises a data structure CDRAWOBJ. All primitives, such as text, rectangles, arcs, ellipses, polygons, and graphical objects are derived from CDRAWOBJ. For illustrative purposes, page 54 includes a rectangle 58, ellipse 60, and graphic 62. As a data structure, each are represented as CDRAWOBJ. When performing a group function 61 on any number of CDRAWOBJs a new CDRAWOBJ is thereby defined as a group having children defined by the selected group items. Window 63 illustrates CDRAWOBJs 58, 60 being grouped into CDRAWOBJ Group2, shown in window 65, having rectangle 58 and ellipse 60 as children. Further groups of still other groups can be grouped to create CDRAWOBJs that could include hundreds or any desired number of primitives. It should be understood that windows 63, 65 are for illustrative purposes to depict the grouping function and do not correspond to visual or graphical display windows.
Once grouped, a graphical object is formed. It should be noted that the graphical objects will also be referred to herein simply as objects. A graphical object is typically a collection of object primitives. Whatever happens to a graphical object or parent graphical object, as a group, occurs to all its children. In the preferred embodiment, each graphical object is defined as follows:
CDrawObj
CRectObj
CTextObj
CPolyObj
CProgrammable
CComplexObj
CActiveXObj
The CRectObj is an object defining a rectangle. The CTextObj is an object defining a text-based box or window and CPolyObj is an object defining a polygon type shape. The CRectObj, CTextObj and CPolyObj are configured similarly to those corresponding objects that one would find independently in Windows C++ programming or through the use of MS VBA.
CProgrammable is an object that includes anchor, spin, and rotation attributes to the CDrawObj and will be described later in more detail. CProgrammable includes CComplexObj, which is an array of CDrawObjs that provides the ability and complexity of allowing the CDrawObj to be comprised of other CDrawObjs or groups thereof. The functionality of CProgrammable with respect to anchor, spin or rotation is an important part of the present invention. In another embodiment, for example, the CDrawObj graphical object could comprise only one or more of these CProgrammable elements without the other text or geometric based objects.
Graphical Object Properties
Each graphical object 52 includes properties 64 and may also include methods 66, as represented in FIG. 5. The functionality of the Anchor, AnchorLock, AnchorLockAutoReturn, AnchorX, AnchorY, Rotation, RotationX and RotationY properties serve an important role in one preferred embodiment of the present invention where the application of same is used for mechanical emulation purposes later described. In alternative embodiments, one or more of these properties will be provided to serve the emulation or movement characteristics of a particular application. The functionality of these properties 64, along with an exemplary syntax are described below.
The Anchor property specifies the angle, in degrees, that an object can move relative to its anchor point. Its exemplary syntax is shown as:
Object.Anchor[=double]
where the Object is a required portion corresponding to the graphical object being evaluated and the double portion is an optional numeric value expressing degrees. Preferably, this value is in the range of 0 to 360 degrees.
The AnchorLock property specifies whether the anchored object maintains its original angle while rotating with its parent object. Its exemplary syntax is shown as:
Object.AnchorLock[=boolean]
where the Object is a required portion corresponding to the graphical object being evaluated and the boolean portion is an optional boolean expression that specifies whether to lock an anchor point. With the setting of the boolean to true, the anchored object's angle changes relative to its parent object. With the setting of the boolean to false, the anchored object maintains its original angle as it rotates with its parent object.
The AnchorLockAutoReturn property returns the value of the Anchor property to its original design value. Its exemplary syntax is shown as:
Object.AnchorLockAutoReturn[=boolean]
where the Object is a required portion corresponding to the graphical object being evaluated and the boolean portion is an optional boolean expression that specifies whether to reset the original value of the Anchor property. With the setting of the boolean to true, the anchor point position is reset to its original value. With the setting of the boolean to false, the anchor point position remains at its current value.
The AnchorX property specifies the horizontal distance between the center of the object and its anchor point. Its exemplary syntax is shown as:
Object.AnchorX[=double]
where the Object is a required portion corresponding to the graphical object being evaluated and the double portion is an optional numeric expression specifying distance. The AnchorX property is expressed in twips. If the AnchorX value is positive, the anchor point is set to the right of the object's center point. If the AnchorX value is negative, the anchor point is set to the left of the object's center point. If the object is spun or rotated, or its anchor is moved, the AnchorX value of the object changes accordingly.
The AnchorY property specifies the vertical distance between the center of the object and its anchor point. Its exemplary syntax is shown as:
Object.AnchorY[=double]
where the Object is a required portion corresponding to the graphical object being evaluated and the double portion is an optional numeric expression specifying distance. The AnchorY property is expressed in twips. If the AnchorY value is positive, the anchor point is set below the object's center point. If the AnchorY value is negative, the anchor point is set above the object's center point. If the object is spun or rotated, or its anchor is moved, the AnchorY value of the object changes accordingly.
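To make the sign conventions of AnchorX and AnchorY concrete, the following is a short illustrative C++ sketch, not taken from the patent, that computes the on-page anchor position from an object's center point and its AnchorX/AnchorY offsets (all values in twips, with the vertical axis increasing downward as is conventional for screen coordinates).
struct Point { double x; double y; };
// Positive AnchorX lies to the right of center; positive AnchorY lies below it.
Point AnchorPosition(const Point& center, double anchorX, double anchorY)
{
    return Point{ center.x + anchorX, center.y + anchorY };
}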
The Rotation property specifies the degree of rotation of an object around its center. Its exemplary syntax is shown as:
Object.Rotation[=double]
where the Object is a required portion corresponding to the graphical object being evaluated and the double portion is an optional numeric expression specifying degree of angle. In a preferred embodiment, the value of the Rotation property can only be set during runtime. The Rotation property is expressed in degrees and the value of this property changes as the object is moved left or right by the user or by code. Preferably, this value is in the range of 0 to 360 degrees.
The RotationX property specifies the horizontal distance between the center of the object and its rotation point. Its exemplary syntax is shown as:
Object.RotationX[=double]
where the Object is a required portion corresponding to the graphical object being evaluated and the double portion is an optional numeric expression specifying distance. The RotationX property is expressed in twips. If the RotationX value is positive, the rotation point is set to the right of the object's center point. If the RotationX value is negative, the rotation point is set to the left of the object's center point. If the object is spun or rotated, or its rotation point is moved, the RotationX value of the object changes accordingly.
The RotationY property specifies the vertical distance between the center of the object and its rotation point. Its exemplary syntax is shown as:
Object.RotationY[=double]
where the Object is a required portion corresponding to the graphical object being evaluated and the double portion is an optional numeric expression specifying distance. The RotationY property is expressed in twips. If the RotationY value is positive, the rotation point is set below the object's center point. If the RotationY value is negative, the rotation point is set above the object's center point. If the object is spun or rotated, or its rotation point is moved, the RotationY value of the object changes accordingly.
To add further functionality, a graphic object may also include any of the following additional properties:
BackColor() as Color: returns or sets the background color of a graphic.
ForeColor() as Color: returns or sets the foreground color of a graphic.
Height() as Double: returns or sets the dimensions of a graphic.
HsliderEnd() as Double: returns or sets the horizontal end position of a graphics slider of the type known in the art.
HsliderEndValue() as Double: returns or sets the maximum value of the graphics slider.
HsliderMouseEnabled() as Boolean: returns or sets a value indicating whether the graphics slider is enabled.
HsliderStart() as Double: returns or sets the horizontal start position of the graphics slider.
HsliderStartValue() as Double: returns or sets the minimum value of the graphics slider.
HsliderSteps() as Double: returns or sets the amount of change to the HsliderValue property setting when the user clicks the area between the scroll graphic and the HsliderEnd property.
HsliderValue() as Double: returns or sets the current position of the scroll bar; the return value is between the values of the HsliderStartValue and HsliderEndValue properties.
Left() as Double: returns or sets the distance between the internal left edge of a graphical object and the left edge of its container.
Spin() as Double: returns or sets the rotation angle from the center of the graphical object.
Top() as Double: returns or sets the distance between the internal top edge of a graphic and the top edge of its container.
Visible() as Boolean: returns or sets a value indicating whether a graphical object is visible or hidden.
VsliderEnd() as Double: returns or sets the vertical end position of the graphics slider.
VsliderEndValue() as Double: returns or sets the maximum vertical value of the graphics slider.
VsliderMouseEnabled() as Boolean: returns or sets a value indicating whether the graphics slider is enabled.
VsliderStart() as Double: returns or sets the vertical start position of the graphics slider.
VsliderStartValue() as Double: returns or sets the minimum vertical value of the graphics slider.
VsliderSteps() as Double: returns or sets the amount of change to the VsliderValue property setting when the user clicks the area between the scroll graphic and the VsliderEnd property.
VsliderValue() as Double: returns or sets the current position of the scroll bar; the return value is between the values of the VsliderStartValue and VsliderEndValue properties.
Width() as Double: returns or sets the dimensions of a graphical object.
Anchor Partners
A graphical object may be composed of several other objects anchored together. When one of the several other objects is rotated or moved, it is necessary to execute a process in which the other connected objects are informed of the move and are also moved in accordance with the manner of connection. In a preferred embodiment, this functionality is achieved by a process that scans all aspects of the connections of an object to form a collection. Whenever an action is performed on one part or child of a parent object, that same action is broadcast through the collection of objects that make up that one object. In the preferred embodiment illustrated in FIG. 6, a method referred to herein as AddAnchorPartners performs this function to quickly find all aspects of the relationships to construct the action collection. Using C++ pseudocode, the method definition of AddAnchorPartners is depicted below.
void CProgrammable::AddAnchorPartners(CDrawObjList& olSelection,
    BOOL bChildrenOnly, CDrawObjList* olNonRigidParents)
It should be understood that all of the C++ pseudocode examples herein could be written in other languages. Further, in the programming arts there are numerous ways that the functionality of object anchoring could be implemented, such as by using different data structures, alternative object linking procedures, lookup tables or linked lists.
The olNonRigidParents provides a list of items that are connected to an object where the AnchorLock property is turned off. The data structure olSelection provides the list of attached objects in the exemplary embodiment. Referring to FIG. 6, AddAnchorPartners is the method, starting at step 69, to create the olSelection list, which is depicted at step 70. This list is created by reviewing the anchoring relationship defined in each object in relationship to the object being acted on.
The list olSelection can be built as follows:
CDrawObjList olSelection;
olSelection.AddTail(this);
AddAnchorPartners(olSelection);
The AddTail function used above adds a new object into a collection at the end of the list. In the preferred embodiment described herein, rigid partners refers to two objects anchored together. The object that is anchored to the other has its AnchorLock flag set to True. If the AnchorLock property of an object is set to True, then it is considered rigid. If it is set to False, it is considered nonrigid. Movement of rigid and nonrigid objects with respect to the AnchorLock property occurs differently, as will be described later in further detail.
To find the objects anchored to a particular object at step 72, depicted below is C++ pseudocode illustrating an example thereof.
if( m_pAnchorTo && (!bChildrenOnly) )
{
    if( olNonRigidParents == NULL )
    {
        if( !olSelection.Find(m_pAnchorTo) )
        {
            // not in selection
            olSelection.AddTail(m_pAnchorTo);
            m_pAnchorTo->AddAnchorPartners(olSelection,
                bChildrenOnly, olNonRigidParents);
        }
    } else
    {
        // only add rigid object
        if( m_bAnchorLock )
        {
            if( !olSelection.Find(m_pAnchorTo) )
            {
                // not in selection yet
                olSelection.AddTail(m_pAnchorTo);
                m_pAnchorTo->AddAnchorPartners(olSelection,
                    bChildrenOnly, olNonRigidParents);
            }
        } else
        {
            // not rigid! just collect
            if( !olNonRigidParents->Find(m_pAnchorTo) )
            {
                olNonRigidParents->AddTail(m_pAnchorTo);
            }
        }
    }
}
To find the objects anchored from a particular object at step 74, depicted below is C++ pseudocode illustrating an example thereof.
// check for children
int cnt = m_arAnchorFrom.GetSize();
for( int i = 0; i < cnt; i++ )
{
    if( olNonRigidParents == NULL )
    {
        // traditional implementation, collect everything
        if( !olSelection.Find(m_arAnchorFrom[i]) )
        {
            // not in selection yet
            olSelection.AddTail(m_arAnchorFrom[i]);
            m_arAnchorFrom[i]->AddAnchorPartners(olSelection,
                bChildrenOnly, olNonRigidParents);
        }
    } else
    {
        // only add rigid children
        if( m_arAnchorFrom[i]->m_bAnchorLock )
        {
            if( !olSelection.Find(m_arAnchorFrom[i]) )
            {
                // not in selection yet
                olSelection.AddTail(m_arAnchorFrom[i]);
                m_arAnchorFrom[i]->AddAnchorPartners(olSelection,
                    bChildrenOnly, olNonRigidParents);
            }
        } else
        {
            // not rigid! just collect
            if( !olNonRigidParents->Find(m_arAnchorFrom[i]) )
            {
                olNonRigidParents->AddTail(m_arAnchorFrom[i]);
            }
        }
    }
}
In the user interface described later, when a user links two objects, the user will have selected a first object and a second object. After actuating the linking or anchoring process, the first object's m_pAnchorTo is set to the second object, and the first object is added to the second object's m_arAnchorFrom collection. It should be appreciated that any number of objects could be anchored to one particular item. Accordingly, it should be understood that in the foregoing pseudocode m_arAnchorFrom[i] refers to the i'th object in the collection.
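As a minimal sketch of that linking step, the following illustrative helper records both sides of the relationship. The members m_pAnchorTo and m_arAnchorFrom come from the pseudocode above; the AnchorTo() helper itself and the Add() call are assumptions introduced here for illustration, not the patent's actual code.
void CProgrammable::AnchorTo(CProgrammable* pParent)
{
    m_pAnchorTo = pParent;              // the anchored object records its parent
    pParent->m_arAnchorFrom.Add(this);  // the parent records the newly anchored child
}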
As previously discussed, once the connection tree is stored in a collection, the appropriate operations or movements on the sub-parts or objects of a complex object can be performed. Through the use of recursion, every atomic object in the tree can be reached. When moving an object, as described later in further detail, all the objects anchored to it are moved accordingly.
In one preferred embodiment, before performing operations on these objects, the initial design state of the objects is saved at step 76. When an object is rotated, its true shape or definition is lost over time by the action performed on it due to mathematical round-off. To speed up operations when a screen is repainting, the information that makes up a shape is transformed, but its initial design state is saved so that the object will not lose its true appearance as numerous operations are performed on it. This step is depicted below in C++ pseudocode.
POSITION pos = olSelection.GetHeadPosition();
while( pos )
{
    CProgrammable* pC = (CProgrammable*) olSelection.GetNext(pos);
    if( !pC->m_bWasDesignStateSaved )
    {
        pC->SaveDesignState();
        pC->m_bWasDesignStateSaved = TRUE;
    }
}
After the design state has been saved, various rotate, spin, and movement functions can be performed to the objects at step 78. The application of these motion functions by themselves for a single unconnected object is known in the art. However, any such movement of an object of the present invention will then require corresponding movement to anchored objects. This is achieved by utilizing the anchor partner functionality previously described so that the movement is broadcast to each anchored object for effectuating a movement that corresponds to the connection therebetween. For example, when an object spins, its position will also move. Accordingly, a calculation of how the object has moved is made and all objects attached to this object are offset by the calculated number of pixels.
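The following is a brief illustrative sketch, not taken from the patent, of how such a broadcast might look once the anchor partners have been collected: the calculated offset of the moved object is applied to every object in the selection. MoveBy() is a hypothetical per-object helper.
void BroadcastOffset(CDrawObjList& olSelection, int dx, int dy)
{
    POSITION pos = olSelection.GetHeadPosition();
    while( pos )
    {
        CProgrammable* pC = (CProgrammable*) olSelection.GetNext(pos);
        pC->MoveBy(dx, dy);   // offset each anchored partner by the same amount
    }
}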
In summary, one embodiment of the foregoing method of manipulating and displaying graphical objects on the computer display device of the system includes the step of first creating a graphical object in an object-oriented environment and storing the graphical object in the memory of the computer system. The graphical object may be comprised of a plurality of child graphical objects, where the graphical object and each of the child graphical objects has at least one property corresponding to the orientation of a representation of the respective graphical object, such as the Anchor property. Next, the graphical object is scanned by traversing through each of the child graphical objects to form a connection tree having the initial values of each property of the respective graphical objects. The connection tree is preferably stored in the memory of the system. In operation, a value of the property of the graphical object will become altered from the initial value, which corresponds to a change in the position of the representation of the graphical object. The representation of the graphical object will be graphically displayed on the display device by traversing through the connection tree to broadcast the altered value of the graphical object to each of the child graphical objects, recalculating the value of each property of the child graphical objects based on its initial value and the altered value of the graphical object, and displaying the representation of the graphical object including its child graphical objects on the display device with the recalculated values.
Persistence
During operation of the system as previously described, it will be appreciated that the various graphical objects created will change positions relative to one another. When ending a presently running program or application, the graphical objects and the values of their present properties can be retained in the conventional manner of saving the various information to disk. However, the data structures of the graphical objects include linked lists. The pointers of these linked lists existing at the time the application or program is running are not saved to disk. Without this information, objects later reloaded and reassembled would not have the proper orientation of their graphical representations.
In a preferred embodiment illustrated in FIG. 7, persistence 80 is accomplished by creating a Z-order property of a graphical object. Persistence refers to the permanence of an object, that is, the amount of time for which it is allocated space and remains accessible in the system's memory. When the object is reloaded, individual segments of the object are reassembled accordingly. At step 82, a Z-order array 81 is created and stored in memory 18. When the present state of the graphical objects is to be retained, each graphical object is traversed at step 84 and is provided at step 86 with an indexing number. At step 88, the numbers are saved in the Z-order array 81, which provides the numerical index based on the Z-order. At step 90, the Z-order array 81 is stored to disk drive 26 or other storage medium.
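The following is an illustrative MFC-style sketch of steps 82 through 90. The names SaveZOrder, arZOrder, and m_nZOrder are invented here for clarity and are not the patent's actual identifiers.
void SaveZOrder(CDrawObjList& olPage, CArray<int, int>& arZOrder)
{
    int index = 0;                                    // step 82: array created by the caller
    POSITION pos = olPage.GetHeadPosition();
    while( pos )                                      // step 84: traverse each graphical object
    {
        CProgrammable* pC = (CProgrammable*) olPage.GetNext(pos);
        pC->m_nZOrder = index;                        // step 86: assign an indexing number
        arZOrder.Add(index);                          // step 88: save the number in the Z-order array
        index++;
    }
    // step 90: arZOrder is then serialized to disk with the rest of the document data
}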
Server and Networking
The server or data addin 44 is in communication with the network 42 (FIG. 1) to read and write data across the network 42 to a particular node, which may be another device, processor, etc. Further, the server moves the data into the container application to update particular variables with new or changed values. The server 44 can be a separate application or interface coupled with the container application internally or remotely. Alternatively, the two applications could be integrated together. As later described, the server in one embodiment updates the values in the Anchor or Rotation properties to provide mechanical emulation.
In one embodiment shown in FIG. 8, the server communicates through Dynamic Data Exchange (DDE) and maintains the connection with the communication network. The server uses DDE request strings 92 to access information. The DDE request string, graphically depicted in FIG. 8, is formed of three elements including the application 94, topic 96, and item 98. The application 94 identifies the source of the request. The topic 96 identifies the device to be communicated with and the item 98 is the name of the particular address for which a value is to be returned. For example, a DDE request having an application name DDESERVER, a topic name PLC5, and an item C5:0.ACC, corresponds to returning the value stored in address C5:0.ACC of PLC5 100 to the server DDESERVER 44′.
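A minimal sketch of composing such a request follows. It assumes a plain three-part string of the form application|topic!item, a common DDE convention; the actual string format used by the server is not specified in the text, so the separators here are an assumption.
#include <string>
std::string MakeDdeRequest(const std::string& app,
                           const std::string& topic,
                           const std::string& item)
{
    // e.g. MakeDdeRequest("DDESERVER", "PLC5", "C5:0.ACC")
    return app + "|" + topic + "!" + item;
}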
It should be appreciated that the server could be configured in a number of manners. The server could include a separate processor operating remote from the container application with the information being transferred across the communication lines. Additionally, the server could be software based, such as RSLinx™ produced by Rockwell Software, Inc. of West Allis, Wis., installed within the same computing device that is operating the container application along with an associated I/O module or card of conventional type. As a further example, the server could be an OLE for Process Control (OPC™) server which provides a standardized mechanism to provide data from a data source, communicate the data to an application, and update the data therebetween.
An exemplary communication network of conventional design is implemented herein. The choice of network will typically be based on the type of system application. In FIG. 9, a ControlNet™ network, produced by the Allen-Bradley Company, LLC of Milwaukee, Wis., is illustrated. The ControlNet™ network meets the demands of real-time, high-throughput applications and is well-suited for industrial automation applications where the network will be linked to PLC processors, I/O, computers, and other devices or networks. For example, in this exemplary embodiment, the network 42, such as the ControlNet™ network, is connected with other computer systems 10 or programmable logic controllers 102. The network 42 may be directly connected with an automation system or device 104 or may be further connected to another network 106 or system. As shown in FIG. 9, the network 106 is a DeviceNet™ network, produced by the Allen-Bradley Company, LLC of Milwaukee, Wis. The network 106 is connected to network 42 through line 109 in the embodiment shown here. The network 106 is connected with various devices which may include a machine or robotic device 108, valve 110, motor 112, sensor 114, or other devices or components 116. It should be understood that the particular configuration will vary depending on the application.
In industrial automation or other time-sensitive applications, the representations or graphical images of the graphical objects are updated in substantially real-time to reflect changes in position attributes, which are represented as values of particular variables. Referring to FIG. 10, the server operates independently from the container application to maintain communication with the network and to update any changed values of the properties, as previously discussed. At step 120, the server operates with a conventional interface technique, such as one operating through a DDE, OPC™ or Component Object Model (COM) interface. At step 122, the interface monitors the condition of the variable to detect a change. If a change occurs, the server will receive the new value of the variable at step 124. If at step 126 the value is to be manipulated for any purpose, then such action occurs at step 128 by executing code residing in a corresponding graphical object in the form of an event that is triggered. For example, the value as received from the server may exist in raw data form that must be processed by the event. In other cases, the value may need to be properly scaled for use in the parameter range that has been previously set for the corresponding property. At step 130, the corresponding property is updated with the new or modified value. A recursive function is executed at step 132 to update the anchor partners, described later in more detail. The representation of the graphical object is updated on the display screen at step 134 to reflect the most recent change.
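Expressed as illustrative pseudocode, steps 124 through 134 might look as follows. The function and member names are assumptions introduced here for clarity and do not appear in the patent.
void OnValueChanged(CProgrammable* pObj, double newValue)      // step 124: new value received
{
    if( pObj->NeedsManipulation() )                            // step 126: manipulation required?
        newValue = pObj->FireValueEvent(newValue);             // step 128: event scales or converts the raw data
    pObj->UpdateLinkedProperty(newValue);                      // step 130: update the linked property
    pObj->UpdateAnchorPartners();                              // step 132: recursive anchor-partner update
    pObj->Redraw();                                            // step 134: refresh the display
}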
Referring now to FIG. 11, a graphical user interface 140 of a preferred embodiment is provided. The interface 140 is generated by the container application 24 on the display screen 39 previously described. The interface 140 incorporates the interface for the MS VBA 50 and accordingly has a similar look and feel. However, the interface 140 includes its own menus, toolbars, and other user interface components and functionality as will be described below.
The interface 140 illustrated in FIG. 11 includes a toolbar 142, a document or page window 144, a toolbox window 146, a properties window 148, a project explorer window 149, and library window 150. The toolbox window 146 displays icons 152 that represent types of objects that a user can place on the page window 144. These objects include ActiveX controls 48 (FIG. 2) and those of the RSTools program which was developed and is sold by Allen-Bradley Company, LLC, the assignee of the subject patent application.
The properties window 148 displays all the properties for a selected object. For example, the properties window 148 shown in FIG. 11 is displaying the properties 64 for Graphic1 154 which is a graphical object 52. The graphical representation of Graphic1 154 is displayed on the page window 144. In the present exemplary embodiment, Graphic1 154 was placed on the page window 144 by first opening a graphical object library 156 having a series of preformed graphical objects 158 shown within the library window 150. One of the preformed graphical objects 158, such as Base3, was selected by the user and dragged and dropped with the pointing device or mouse 36 (FIG. 1) on the page window 144 in a desired location or orientation indicated by arrow 159.
The project explorer window 149 displays a hierarchical list of the currently open projects and its contents. In the present example illustrated in FIG. 11, the project explorer window 149 displays that the project pertaining to Page1 shown on page window 144 includes Graphic1 154 thereon.
With respect to creating graphical objects 52, a user can create a graphical object by performing a group function 61 (FIG. 4), as previously discussed, on an existing shape, control, symbol or other graphic. Accordingly, a graphical object 52 could be an imported bitmap, symbol or even an existing control object. As soon as the grouping function is applied, it becomes a graphical object 52 and inherits the properties, methods and events of that object. Accordingly, to perform the grouping function of FIG. 4 using interface 140 of FIG. 11, the user would select the shapes to be grouped and then actuate the group function within the application menu. Some graphics, such as the preformed graphical objects 158 from library 156 previously discussed, automatically become a graphical object as soon as they are dropped onto the page window 144. Once grouped, the individual shapes of the graphic object become joined as a single object. Accordingly, dragging or otherwise moving the graphical object, as later described, will automatically move the individual shapes of the graphical object so that the graphical object retains its original form with respect to the relationships between the individual shapes.
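As an illustration of that grouping step, the following sketch, which is an assumption and not the patent's implementation, shows the selected shapes becoming children of a new graphical object so that subsequent moves act on the group as a whole. AddChild() is a hypothetical helper.
CProgrammable* GroupSelection(CDrawObjList& olSelected)
{
    CProgrammable* pGroup = new CProgrammable();        // the new graphical object
    POSITION pos = olSelected.GetHeadPosition();
    while( pos )
        pGroup->AddChild(olSelected.GetNext(pos));      // each selected shape becomes a child
    return pGroup;
}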
Referring now to FIG. 12, a second graphical object 52, Graphic2 160, has been added to page window 144. In this exemplary embodiment, Graphic2 160 was placed on the page window 144, similar to Graphic1 154, by accessing the library 156 and selecting one of said preformed graphical objects 158. In this case, object Lwrarm 162, which forms the basis of Graphic2 160, was selected by the user from the library 156 and dragged and dropped with the pointing device or mouse 36 (FIG. 1) on page window 144 in a desired location and orientation relative to the other graphical object, Graphic1 154.
In this particular example, Graphic2 160 is a lower arm of a robot device and in the physical sense would be mounted to pivot about the base, represented here as Graphic1 154. Accordingly, Graphic2 is positioned on Graphic1 to represent the known physical device. Next, the two graphical objects must be anchored together. Basically, anchoring allows the application to keep the two objects together so that one object can be moved about another.
In one preferred embodiment, anchoring involves selecting both objects in an appropriate order to designate which object is being anchored to the other. Next, the anchoring function is actuated, such as by clicking the anchor icon 164 on the toolbar 142, as shown in FIG. 12. Once clicked, from an externally displayed user interface standpoint, the objects have been anchored. Internally, the container application 24 will implement the previously discussed anchor partner functionality. Once anchored, an anchor point is created which designates the pivot point corresponding to how the objects will move relative to one another. Referring back to FIG. 12, the anchor point between anchored objects 154, 160 is illustrated graphically as anchor point 166. The anchor point 166 belongs to object 160, the object that is anchored to the other, so that the anchor point can be changed through the properties of that corresponding object, as discussed below.
In a preferred embodiment, the anchor point can be changed at design time or at runtime. At design time, the anchor point is changed by changing the value of the horizontal distance through the AnchorX property or the value of the vertical distance through the AnchorY property. The AnchorX and AnchorY properties were previously discussed and may be modified directly by the user during design time through access to the properties window 148 for each respective object. Alternatively, by clicking and dragging the anchor point 166 with the mouse 36, the AnchorX and AnchorY properties can be automatically changed. At runtime, the anchor point can be changed by setting the value of the AnchorX or AnchorY properties to a numbered value or to the value of another property.
Referring to FIG. 13, an exemplary embodiment of a robotic device 168 has been constructed by adding objects Graphic3 170 and Graphic4 172 in a manner similar to objects 154, 160 discussed above. In particular, Graphic3 170 has been anchored to Graphic2 160 and has an anchor point 174 corresponding thereto. Graphic4 172 has been anchored to Graphic3 170 and has an anchor point 176 corresponding thereto. It should be understood that the anchoring of objects for movement relative to one another could be applied in numerous applications including machines, such as the robotic device of this exemplary embodiment, automated processes or systems, or any other application where two or more objects are displayed such that one of the objects moves relative to another one of the objects.
Once the graphical objects have been anchored to one another, one can apply the necessary code or use the appropriate controls to move one of those objects at runtime. For example, a slider tool from the toolbox 146 represented by the slider icon 178 (FIG. 13) can be configured to control one of said objects. The slider tool per se could be one such as the RSSlider tool from the RSTools program previously mentioned. The slider tool comprises a slider object, for example RSSlider1, the graphical representation of which resembles a slider switch. By adding code to RSSlider1, the value of the Anchor property of one of the graphical objects could be tied to the value between StartValue and EndValue properties of the RSSlider1 object. Example code for a subroutine of RSSlider1 to achieve the foregoing is as follows:
Private Sub RSSlider1_Change(ByVal Value As Double)
    Graphic2.Anchor = Value
End Sub
Accordingly, during runtime, movement of the graphical slider switch by the user with the mouse 36 will change the value of the RSSlider1 object, which will correspondingly change the Anchor property of Graphic2. Since, as previously described, the Anchor property relates to the angle that an object can move relative to its anchor point, Graphic2 will pivot about its anchor point 166 on Graphic1 in accordance with the change of RSSlider1. Further, any other objects anchored to Graphic2 will move with Graphic2. However, whether these other anchored objects maintain their original angle while rotating with their parent object will depend on the setting of the AnchorLock property which was previously described. While the foregoing illustrates one way to move an object, it will be appreciated that using controls from RSTools or Visual Basic, for example, one can apply other mechanical emulation techniques to these objects. Further, the values of the properties of the graphical objects can be tied to other components or even physical objects associated with the graphical objects through the server 44 to provide real-time mechanical emulation, as previously discussed.
In the preferred embodiment described above, the rotation point of an object can also be changed at design time or at runtime. The rotation point of an object represents the location around which an object is rotated. At design time or runtime, the rotation point can be changed by changing the value of the horizontal distance by the RotationX property or the value of the vertical distance by the RotationY property. These properties were previously discussed and may be modified directly by the user during design time through access to the properties window 148 for each respective object. Alternatively, by clicking and dragging the rotation point with the mouse 36, the RotationX and RotationY properties can be automatically changed. In the exemplary embodiment of the robotic device shown in FIG. 13, the rotation point of Graphic3 is shown for illustrative purposes as point 180. However, it should be appreciated that in this embodiment, each object would have a rotation point that could be graphically represented. At runtime, the rotation point can be changed by setting the value of the RotationX or RotationY properties to a numbered value or to the value of another property. Further, the Rotation property of each object may be changed at design time or runtime as similarly described.
One can also apply the necessary code or server association to rotate an object at runtime, similar to the Anchor controls previously described. As previously described and illustrated in FIG. 8, the server 44 can be used to update the values of graphical object properties. Referring to FIG. 14, one preferred embodiment of utilizing the server 44 through the user interface 140 is disclosed. FIG. 14 further utilizes the exemplary embodiment of the robotic device 168; however, it should be understood that the underlying technique of linking server variables to particular graphical objects could be accomplished in a variety of forms and with any configuration of graphical objects.
Interface 140 includes a server window 180 linked to the server 44 (FIG. 8). Defined servers are listed in the server window 180 to facilitate the data linking process by the user. In the embodiment of FIG. 14, server window 180 includes an excel link 182 having a data address rlcl, which, for example, could relate to a particular memory address from a remote device such as a PLC. In the illustrated embodiment of interface 140, the data address rlcl can be linked to a graphical object by dragging and dropping the data address rlcl on the graphical object. In the present example, line 184 illustrates address rlcl being dragged and dropped with the mouse onto Graphic3. Once dropped, a select property window 186 prompts the user to select the property of the object to be linked. In the present example, the Anchor property has been selected for illustrative purposes. A property datalink window 188 designates that the Anchor property for Graphic3 is linked to rlcl in block 190. In this present example, Graphic3 is the upper arm of the robotic device 168. The angle of Graphic3 from Graphic2 is thereby determined by its Anchor property and is displayed in text block 192 where Label1 is associated with the excel link 182.
Referring to FIG. 15, a Microsoft Excel application 194, produced by the Microsoft Corporation of Redmond, Wash., has been executed. The excel link 182, shown as data link 196 in FIG. 8, is tied to this Excel application 194, where the Excel application 194 serves as the server 44, illustrated here as DDESERVER 44. Since the Excel application 194 uses DDE, it is being used in this exemplary embodiment to illustrate an application of the server 44. However, it should be understood, as previously described, that the present invention could utilize any one of a number of server protocols or applications including RSLinx™, OPC™ or other data transmission methods. Further, it should be understood that the application and use of a server as described herein, such as the DDESERVER 44, includes any necessary kernel process which manages the resources for the server and completes any interprocess communication that is necessary.
During execution of Page1 from the container application 24 (FIG. 3), a runtime window 198 is displayed showing the graphical objects, which were previously configured and anchored, and moving the graphical objects in accordance with updates of values of any properties of the objects. In the present exemplary embodiment, the Anchor property of Graphic3 originally had a value of 0, as shown in block 200 of FIG. 14. During execution, Label1 of the Excel application 194, shown in block 202 of FIG. 15, has been updated to a value of 30. Accordingly, through the DDE and data link process previously described, the Anchor property of Graphic3 has been updated to a value of 30, as also shown in the text block 192. Accordingly, Graphic3 has moved 30 degrees from its original position designated at position 204.
Since Graphic3 has pivoted from its anchor point 174 (FIG. 13) with Graphic2, Graphic1 and Graphic2 have remained in their original positions. However, where Graphic4 is anchored to Graphic3, Graphic4 has moved with Graphic3. Since the AnchorLock property of Graphic4 was set to true, Graphic4 has maintained its original angled relationship with Graphic3 through their connection at anchor point 176 (FIG. 13). Since at runtime, the various design time graphical representations of anchor points are not typically needed, anchor point 176 is represented in FIG. 15 at position 206 for reference purposes only. If on the other hand, the AnchorLock property of Graphic4 had been set to false, Graphic4 would have moved with Graphic3 since these objects are anchored to one another. However, Graphic4 would not have maintained the same angled relationship with Graphic3. Instead, Graphic4 would remain in a similar orientation represented generally at position 208.
Through the foregoing example, the mechanical emulation created by the graphical objects can be appreciated. Further, for example, the robotic device 168 could be constructed based on a physical robotic device where the server would then update the robotic device 168 in accordance with all movements of the various components of the physical robotic device where each of the various components are associated with particular graphical objects of the device 168 being represented. Alternatively, since the server can both read or write updates, the reader should appreciate that the graphical objects could equally be used to control an external device or process where the flow of data is simply reversed, as illustrated by the dual data flow representation of network 42 in FIGS. 1 and 8, to allow the server to write data updates across the network to a receiving location. In this case, manipulation of the graphical objects will cause the changed values of the properties to be sent from the server to control the linked components or devices. Additionally, in some applications, the container application 24 may be limited to only displaying the graphical objects in a runtime mode where such a display will serve to monitor or control the particular application that has been modeled.
Accordingly, it can be seen that the method for joining graphical objects displayed on the graphical window of the display screen of the present invention allows movement of one of the graphical objects to correspondingly move another one of the graphical objects joined or anchored therewith. In the preferred method or system, the computer system is operated in an object-oriented environment. First and second graphical objects are provided with a representation of them being dragged in the graphical window in response to position commands from a user interface coupled with the computer system to position the representations in a desired orientation relative to one another. The first and second graphical objects are operatively joined or anchored at an anchor point. Each graphical object has an anchor property corresponding to the graphical object's position relative to its anchor point.
The method or system of graphically monitoring an automated process having a plurality of different types of computer-monitored or physical components can be summarized in one preferred embodiment by the following. First and second graphical objects are provided and operatively connected to one another such that movement of a representation of one of the graphical objects on the display screen correspondingly affects the movement of the other representation. Each of the first and second graphical objects are associated or linked with one of the plurality of different types of computer-monitored components. Data is received from the automated process where the data represents position or state changes of the computer-monitored components.
In another embodiment, the graphical objects are selected from a library database. Further, the representations of the selected objects may have a graphical shape corresponding to physical attributes or a general appearance of the computer-monitored components. Where the system is to provide mechanical emulation for user display or control purposes, during design time the user will determine a relationship between the two computer-monitored components or sub-components thereof based on a physical proximity factor and a physical connection factor. The physical proximity factor corresponds to the distance or orientation between the components or the lapped relationship between them. The physical connection factor relates to the manner of mechanical connection, such as a pivot point. These relationships may be inherently known by the user, obtained from visual or measured inspection of the components by the user, or, among other ways, obtained from review of the component's specifications. Accordingly, once the first and second graphical objects have been created or retrieved, their representations may be graphically displayed on the display device with the physical proximity factor being represented by the orientation of the representations of the first and second graphical objects relative to one another. In some cases this may require positioning the graphical objects in lapped relationship. The physical connection factor is represented by positioning and implementing an anchor point through the anchoring process, as previously discussed, which serves to operatively connect the representations of the first and second graphical objects from that point, as well as provide a pivot point, if desired.
The physical components may correspond to actual physical components that the user may examine. Alternatively, the physical components may relate to known components having known relationships with one another, or may only exist in a virtual sense in a proposed design to be modeled or simulated with graphical objects of the present invention.
Referring back to the previous discussion where data is received from the automated process, predetermined properties of the first and second graphical objects are then updated with the data. These properties are predetermined in the preferred embodiment by nature of their association in the data linking process. The representations of the first and second graphical objects are displayed and then moved in response to updating the predetermined properties with the data as it is received.
With respect to the use of the term “automated process,” it should be noted that this term as used herein refers to those processes or systems that are automated with respect to the general meaning of the term, as well as those systems that are automated through the use of some network or computer sharing or distribution of information, but not to say that all human observation, effort, or decision making has been eliminated. For example, a factory automation process may include a conveyor system or production line having a series of workstations. A user display for monitoring the factory automation process may have been configured in accordance with the teachings of the present invention where components of the process are represented and linked with graphical objects. However, the fact that some workstations or aspects of the process are not completely automated in the system does not prevent the user display from representing some of the process or its state of present operation through mechanical emulation of the accessible components of the process.
Although the invention has been described by reference to some embodiments it is not intended that the novel device be limited thereby, but that modifications thereof are intended to be included as falling within the broad scope and spirit of the foregoing disclosure, the following claims and the appended drawings.
What Is PageRank?
By: Katana Graph
June 01, 2022
What Is PageRank?
The PageRank algorithm uses incoming links between a graph’s nodes to rank the nodes, giving nodes with more incoming links a higher rank. For example, if the nodes represent web pages, the algorithm estimates the importance of each relative to the others by evaluating the number of links each page gets from other websites. Links from more important nodes count more toward a node’s rank than links from less important nodes.
The algorithm is called PageRank for two reasons: first, because it was originally developed to rank the importance of web pages, and second, because one of its inventors was Google co-founder Larry Page.
PageRank computes the rank of nodes in a homogeneous graph (or a homogeneous projection of a heterogeneous graph) based on the value of incoming and outgoing links. It measures the importance of nodes by assigning each node a rank, determined by the weights of the edges connecting them, that represents the probability of arriving at a given node. The higher the rank, the more probable it is that the node will be visited.
Before going further, we should add some relevant definitions aimed at understanding homogeneous graphs:
• Bijection: a one-to-one correspondence between the elements of two sets such that each element of one set is paired with exactly one element of a second set, and each element of the second set is paired with exactly one element of the first set and there are no unpaired elements.
• Graph isomorphism: a one-to-one correspondence between two sets such that all binary relationships between elements of the sets are preserved. Since the set of natural numbers can be mapped onto the set of even natural numbers by multiplying each natural number by 2, the relationship between the sets is isomorphic. In graph theory, an isomorphism of two graphs is a bijection between the vertex sets of each graph.
• Induced subgraph: a graph formed from a subset of the vertices of another graph, together with all of the edges of the original graph that connect pairs of vertices in that subset.
• Graph automorphism: a form of symmetry in which a permutation of a graph is mapped onto itself while preserving all edge-vertex connectivity.
• Homogeneous graph: a graph is homogeneous if any isomorphism between finite induced subgraphs extends to an automorphism of the graph.
The direction of edges is extremely important, as each node's rank influences the rank of the nodes it points to both in terms of the pooling of rank among nodes (how many nodes point at a given node) and the sharing of rank from one node to another (how many nodes a given node points to). As one node’s importance grows, the nodes it references increase in importance.
The calculation is an iterative process, beginning with each node initially given equal rank before computation. In every iteration, each node divides its current rank equally across its outgoing links, and the new rank value for each node is the sum of the shares sent to it. The algorithm is stopped after a given number of iterations or if the value differences between iterations are less than a predefined value.
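To make the iteration concrete, here is a short illustrative C++ sketch of the simple scheme just described. It is not Katana Graph's implementation, and it omits the damping factor and dangling-node handling that production PageRank implementations normally add.
#include <algorithm>
#include <cstddef>
#include <vector>
// outLinks[u] lists the nodes that node u points to.
std::vector<double> SimplePageRank(const std::vector<std::vector<std::size_t>>& outLinks,
                                   int iterations)
{
    const std::size_t n = outLinks.size();
    std::vector<double> rank(n, 1.0 / n);   // every node starts with equal rank
    std::vector<double> next(n);
    for (int it = 0; it < iterations; ++it) {
        std::fill(next.begin(), next.end(), 0.0);
        for (std::size_t u = 0; u < n; ++u) {
            if (outLinks[u].empty()) continue;                   // dangling node: its rank is not redistributed here
            const double share = rank[u] / outLinks[u].size();   // split current rank across outgoing links
            for (std::size_t v : outLinks[u]) next[v] += share;  // each target sums the shares it receives
        }
        rank.swap(next);
    }
    return rank;
}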
PageRank is, of course, a key enabler of search engines, allowing them to efficiently determine which pages are most relevant when users search for information. While it can be used to determine the probability that a webpage will be visited, it is also used to find other objects of importance, such as an influential academic paper or patent.
PageRank is one of the few algorithms that can be run over any kind of graph data, so it is up to the user to ensure their graph contains one kind of node and one type of relationship to ensure realistic results.
The Katana Graph Intelligence Platform is used to solve problems. Speak to a Katana Graph Intelligence Platform expert to learn how we will help your organization.
Learn more about Katana Graph’s High-Performance Graph Analytics Library.
My MacBook, on startup, is showing the following:
[screenshot of the startup screen]
They are just a bunch of vertical black and white lines, and it gets stuck there.
What could be the problem?
It's definitely hardware. Either the display has gone bad or your GPU (logic board) has gone bad (the latter being the most likely case).
The first thing I would do is hook it up to an external monitor. If it's present there as well, then you know it's GPU/logic board related. If it goes away, then run Apple Hardware Test (AHT) to get some details. (Hold the D key while booting from a powered off state with the AC adapter connected)
Either way, you will need to take it in for service.
NAME
methods - Provide method syntax and sweep namespaces
SYNOPSIS
use methods;
# with signature
method foo($bar, %opts) {
$self->bar(reverse $bar) if $opts{rev};
}
# attributes
method foo : lvalue { $self->{foo} }
# change invocant name
method foo ($class: $bar) { $class->bar($bar) }
# "1;" no longer required here
With invoker support:
use methods-invoker;
method foo() {
$->bar(); # Write "$self->method" as "$->method"
}
DESCRIPTION
This module uses Method::Signatures::Simple to provide named and anonymous methods with parameters, except with a shorter module name.
It also imports namespace::sweep so the method helper function (as well as any imported helper functions) won't become methods in the importing module.
Finally, it also imports true so there's no need to put 1; at the end of the importing module anymore.
OPTIONS
If the first argument on the use line is -invoker, then it also imports invoker automatically so one can write $self->method as $->method.
Other arguments are passed verbatim into Method::Signatures::Simple's import function.
SEE ALSO
invoker, signatures
AUTHORS
唐鳳 <[email protected]>
CC0 1.0 Universal
To the extent possible under law, 唐鳳 has waived all copyright and related or neighboring rights to methods.
This work is published from Taiwan.
http://creativecommons.org/publicdomain/zero/1.0
I'm fairly new to VB.net and WPF (and programming in general), so please bear with me. I'm attempting to write a useful database app for my business as a learning project. I did a draft in WinForms, referring to the (very good) Murach book, but am now trying to re-write it with a WPF UI and cleaner code.
I think I understand the basics of WPF binding, but am tearing my hair out trying to find a simple way of binding to parent details - which I would have thought was a very basic scenario, but I can't find anything on it. For example, let's say I have
• a 'Customers' table with CustomerID, Name, various contact details and then foreign keys CityID and GroupID;
• a 'City' table with CityID, CityName and foreign key StateID;
• a 'State' table with StateID, StateName;
• a 'Groups' table with GroupID, GroupName etc.
...with a strongly typed dataset including relationships and a bunch of tableadapters.
All I want to do in this case is display the details for a specific Customer (when the user has entered their CustomerID). I can easily bind to the data in the 'Customers' table, but how do I use bindings to retrieve the CityName, StateName and GroupName from the related tables? A partial example below.
...
<Window.Resources>
<CollectionViewSource x:Key="CustomersViewSource"
Source="{Binding Path=Customers,Source={StaticResource CustomersDataSet}}" />
</Window.Resources>
<Grid DataContext="{Binding Source={StaticResource CustomersViewSource}}">
<TextBox Name="customerName" Text="{Binding Path=Name}" />
<TextBox Name="customerPhone" Text="{Binding Path=PhoneNumber}" />
</Grid>
<Grid>
<Label Name="lblGroup" Content="[needs to bind to GroupName]">
<Label Name="lblCity" Content="[needs to bind to CityName]">
</Grid>
I've read articles on setting up master/detail scenarios, which is straightforward, so I could bind customers into a datagrid for a selected city, for example, by using the foreign key as the binding path for the customer control. Here I'm trying to do it back the other way, though, looking up the parent details based on a selected customer. Can anyone nudge me in the right direction?
Failing this, I've only been able to do it through VB - and it's pretty nasty. Assuming that I've created a CustomerViewSource and looked up the relevant customer, and filled the various tables in my dataset, something like this:
Dim rowCustomer As System.Data.DataRowView = CustomerViewSource.View.CurrentItem
Dim drCustomer As System.Data.Datarow = rowCustomer.Row
Dim drGroup as drCustomer.GetParentRow("FK_Customers_Groups")
lblGroup.Content = drGroup.Item("GroupName")
I shouldn't have to do this so clumsily though... should I?
One more thing: at the moment I'm selecting a customer by filling the dataset using a parameterized query on the tableadapter, e.g.
SELECT CustomerID, CustomerName
FROM Customers
WHERE CustomerID = @CustomerID
And then just moving to the first record (as there will only be one). Is this a normal method of looking up a customer? I could also use this method by getting the foreign keys from my customer row and filling the different tables in the dataset all using parameterised queries, then just binding to each table separately - but again, seems very messy.
Any help with this problem, or even general advice, hugely appreciated. If I haven't been clear enough (or provided good examples), happy to explain further. Cheers.
First, these are actually child rows, not parent rows. It's a little counterintuitive, but the parent is the "one" side of the one-to-many relation, and the child is the "many," so Group, City, and State are all parents of Customer. Your code example shows this.
Next, in ADO.NET, you can accomplish everything you're trying to do here by creating a DataColumn in the Customer DataTable and setting its Expression. Create a column named StateName and set its expression to Parent(FK_Customers_States).StateName. Now you can bind to it the way you'd bind to any other column.
Brilliant, thank you Robert, I wasn't aware of Expressions at all and that is exactly what I needed to know here. Perhaps I should take a step back and make sure I have my head around ADO.NET more thoroughly before I dive ahead with the programming! And yes, I understand which is which with parent/child, probably just didn't say it right. Cheers. – Chad_Valiant Aug 2 '11 at 3:00
Install multiple python libs with pip
Published: 2018-07-10 09:42:56 +0000
Categories: Ansible,
Language
Ansible
Description
When installing multiple Python libraries with pip, you might not want each to be an individual named task within Ansible. This method allows you to have a single Ansible task that installs multiple libraries along with their dependencies (they'll still be installed sequentially).
Snippet
- name: Install Python Dependencies with Pip
pip: name={{item}} state=present
with_items:
- lib1
- lib2
- lib3
Usage Example
- name: Install Python Dependencies
pip: name={{item}} state=present
with_items:
- flask
- werkzeug
- bcrypt
- gnupg
- pyopenssl
tags: deps
Keywords
ansible, pip, multiple, with_items,
How to remove Ringtone cutter
What is Ringtone cutter?
Ringtone cutter is a questionable and, at the same time, stubborn browser extension that is not easy to get rid of. Although it's available for download from the official Chrome store, in most cases it's distributed via third-party freeware programs (the bundling method). Once installed, it modifies browser and system settings to reduce the risk of being deleted. Then Ringtone cutter injects its advertisements into the browser, so the user is forced to encounter numerous pop-ups while browsing the Web. Beware of clicking such ad links, as they might lead you to insecure websites spreading malware. This step-by-step guide describes how you can uninstall the browser hijacker and remove the Ringtone cutter extension from your browser.
How to remove Ringtone cutter?
Performing an antimalware scan with Norton Antivirus would automatically search out and delete all elements related to Ringtone cutter. It is not only the easiest way to eliminate Ringtone cutter, but also the safest and most assuring one.
Download Norton Antivirus
mac compatible
Combo Cleaner Antivirus is a well-established tool for Mac users that can clear your computer from malware like Ringtone cutter and all related files from your computer. Another important advantage of the program is an up-to-date database of computer threats which is perfect to protect your computer in case of a new malware attack.
Download Combo Cleaner
mac compatible
The full version of Combo Cleaner costs $39,95 (you get 6 months of subscription). By clicking the button, you agree to EULA and Privacy Policy. Downloading will start automatically.
Steps of Ringtone cutter manual removal
To make sure that the hijacker won't appear again, you need to delete Ringtone cutter completely. For this, you need to remove the application from the Control Panel and then check the drives for leftovers such as Ringtone cutter files and registry entries. We should warn you that performing some of the steps may require above-average skills, so if you don't feel experienced enough, you may resort to the automatic removal tool.
Uninstall Ringtone cutter from Control Panel
As stated before, the hijacker most likely appeared on your system bundled with other software. So, to get rid of Ringtone cutter, you need to recall what you have installed recently.
How to remove Ringtone cutter from Mac
1. Open a Finder window
2. Click Applications line on the sidebar
3. Select the application related to Ringtone cutter right-click it and choose Move to Trash
How to remove Ringtone cutter from Windows XP
1. Click the Start button and open Control Panel
2. Go to Add or Remove Programs
3. Find the application related to Ringtone cutter and click Uninstall
How to remove Ringtone cutter from Windows 7/Vista
1. Click the Start button and open Control Panel
2. Go to Uninstall Program
3. Find the application related to Ringtone cutter and click Uninstall
How to remove Ringtone cutter from Windows 8/8.1
1. Right-click the menu icon in left bottom corner
2. Choose Control Panel
3. Select the Uninstall Program line
4. Uninstall the application related to Ringtone cutter
How to remove Ringtone cutter from Windows 10
1. Press Win+X to open Windows Power menu
2. Click Control Panel
3. Choose Uninstall a Program
4. Select the application related to Ringtone cutter and remove it
Note: If you experience problems with removing Ringtone cutter from the Control Panel (there is no such title on the list, or you receive an error preventing you from deleting the application), see the article dedicated to this issue.
Read what to do if program won’t uninstall from Control Panel
Remove Ringtone cutter from browsers
How to unlock Windows Group Policies
Before you start removing Ringtone cutter from your browser, you should perform the following instructions in the Command Prompt.
This step is necessary to delete Windows Group Policies created by Ringtone cutter
1. Start Command Prompt as Administrator
2. To do this in Windows 10/8 or Windows 7 click Start and in the search box type cmd. Right-click on the found result and choose Run as Administrator.
3. While in command prompt type:
rd /S /Q "%WinDir%\System32\GroupPolicyUsers"
4. Press Enter button.
5. Then type:
rd /S /Q "%WinDir%\System32\GroupPolicy"
6. Press Enter button.
7. Finally, type:
gpupdate /force
8. Press Enter button.
Since some of the hijacker threats use a disguise of a browser add-on, you will need to check the list of extensions/add-ons in your browser.
How to remove Ringtone cutter from Safari
1. Start Safari
2. Click on Safari menu button, then go to the Extensions
3. Delete Ringtone cutter or other extensions that look suspicious and you don’t remember installing them
How to remove Ringtone cutter from Google Chrome
1. Start Google Chrome
2. Click on More tools, then go to the Extensions
3. Delete Ringtone cutter or other extensions that look suspicious and you don’t remember installing them
How to remove Ringtone cutter from Internet Explorer
1. Launch Internet Explorer
2. Click on the Tools/Gear icon, then select Manage Add-ons
3. Delete Ringtone cutter or other extensions that look suspicious and you don’t remember installing them
How to remove Ringtone cutter from Mozilla Firefox
1. Start Mozilla Firefox
2. Click on the right-upper corner button
3. Click Add-ons, then go to Extensions
4. Delete Ringtone cutter or other extensions that look suspicious and you don’t remember installing them
How to remove Ringtone cutter from Microsoft Edge
1. Start Microsoft Edge
2. Click the three-dot button in the upper right corner
3. Choose Extensions
4. Click the gear icon near Ringtone cutter or other extensions that look suspicious and you don’t remember installing them
5. Choose Remove
Reset your browsers
How to reset settings in Google Chrome
1. Click on the icon in the right-upper corner
2. Choose Settings
3. Click Advanced settings
4. Click the Reset button
5. In “reset” window click the Reset button
How to reset settings in Mozilla Firefox
1. Click the icon in the upper right corner
2. Choose Help
3. Select Troubleshooting Information
4. Click the Refresh Firefox… button
How to reset settings in Internet Explorer
1. Click on the Tools button
2. Go to Internet options
3. Go to the Advanced tab
4. Click Reset
How to reset settings in Microsoft Edge
1. Start Microsoft Edge
2. Click the three-dot button in the upper right corner
3. Choose Settings
4. Under the Clear browsing data category select Choose what to clear
5. Select everything and click Clear
If the above-mentioned methods didn’t help in eliminating the threat, then it’s better to rely on an automatic way of deleting Ringtone cutter.
Download Norton Antivirus
We also recommend downloading and using Norton Antivirus to scan the system after Ringtone cutter removal to make sure that it is completely gone. The antimalware application will detect any vicious components left among system files and registry entries that could restore Ringtone cutter.
Any reason there's a mandatory framerate? Why not allow variable framerate?
Discussion in 'MSI AfterBurner Application Development Forum' started by mzso, Mar 25, 2019.
1. mzso
mzso Active Member
Hi!
For example, the emulator runs at 50 fps for PAL games. (The actual framerate might be even less.)
So a fixed 60 fps capture limit doesn't do any good for me. In other cases games might run at even less fps due to limited GPU power, which might also vary during gameplay.
So why not give an option to record at a variable framerate?
2. Unwinder
Unwinder Moderator Staff Member
You're misunderstanding how videocapture works. There is no "mandatory" framerate and it is already variable.
3. mzso
mzso Active Member
I don't think so. Though, I must have mistook something because the files produced now are variable framerate.
(Frame rate mode : Variable
Original frame rate : 25.000 FPS)
Anyway, what does the "Framerate" setting in Afterburner do then? The tooltip says it "adjusts the framerate of the encoded video".
I can set it from 1 to 100 fps. There's no variable or off option. Does this only concern the internal encoders? (so those are not variable framerate?) Because then logically it should be dimmed when I select VFW.
4. Andy_K
Andy_K Master Guru
This setting is the maximum fps encoding rate.
If your GPU drops below the selected limit, the encoder will stretch the display time of frames to keep playback fluid. There won't necessarily be 60 frames every second in the encoded video; it can be fewer, with elongated display times. That's why it is variable.
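In other words, the setting only caps how often frames are grabbed, while each encoded frame keeps the timestamp at which it was captured. A rough, purely illustrative C++ sketch of that bookkeeping (not Afterburner's actual code; the capture times are made up):

#include <cstdio>
#include <vector>

int main()
{
    // The configured "framerate" acts only as a cap on how often frames are grabbed.
    const double maxFps = 60.0;
    const double minInterval = 1.0 / maxFps;

    // Uneven capture times (in seconds) coming from the game.
    const std::vector<double> captureTimes = { 0.000, 0.016, 0.020, 0.055, 0.130 };

    double lastEmitted = -1.0;
    for (double t : captureTimes)
    {
        if (lastEmitted >= 0.0 && t - lastEmitted < minInterval)
            continue;                                // arriving faster than the cap: skip this frame
        std::printf("frame pts = %.3f s\n", t);      // slower than the cap: the frame simply displays longer
        lastEmitted = t;
    }
}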
5. Unwinder
Unwinder Moderator Staff Member
And I don’t think that I have wish/time to debate, sorry. Each video, with fixed and variable framerate ALWAYS MUST have some target framerate specified in the header to let player to guess approximate timestamp calculation strategy. It absolutely doesn’t mean that actual frames are timestamped on exact fixed timings.
6. mzso
mzso Active Member
I see, thanks. Too bad it doesn't have an unambiguous name such as "capture fps limit", or "encoding fps limit"
Okay. Cool. I just wanted to know what that option does because it wasn't clear. The option name is ambiguous.
(Though I wonder why the original fps value for x264vfw encodes invariably becomes 25. I guess it might be the codec default.)
7. Andy_K
Andy_K Master Guru
Do you use option -zerolatency for recording?
x264 --fullhelp
Code:
...
- zerolatency:
--bframes 0 --force-cfr --no-mbtree
--sync-lookahead 0 --sliced-threads
--rc-lookahead 0
...
this forces cfr (constant frame rate)
8. mzso
mzso Active Member
Hmm... I missed that.
Though how is it that the resulting file has these properties:
Frame rate mode : Variable
Original frame rate : 25.000 FPS
So is it CFR despite this?
Perchance, could you tell me how I can disable this option specifically (overriding -zerolatency)?
Update:
Even more interesting, the encoding settings I see via MediaInfo don't show this:
Code:
Writing library : x264 core 152 r2851bm ba24899
Encoding settings : cabac=0 / ref=1 / deblock=0:0:0 / analyse=0:0 / me=dia / subme=0 / psy=0 / mixed_ref=0 / me_range=16 / chroma_me=1 / trellis=0 / 8x8dct=0 / cqm=0 / deadzone=21,11 / fast_pskip=0 / chroma_qp_offset=0 / threads=12 / lookahead_threads=12 / sliced_threads=1 / slices=12 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=0 / weightp=0 / keyint=250 / keyint_min=25 / scenecut=0 / intra_refresh=0 / rc=cqp / mbtree=0 / qp=0
And I get the same thing if I add all the options of --zerolatency manually beside --force-cfr.
Last edited: Mar 27, 2019
9. Andy_K
Andy_K Master Guru
So, contrary to your statement about getting a fixed 25 fps, this shows your video does not have CFR but VFR, with possibly varying display times for some frames. The target (output) frame rate is 25 fps; that does not mean the video file contains 25 frames every single second.
monoids-0.1.20: Monoids, specialized containers and a general map/reduce framework
Data.Ring.ModularArithmetic
Portability: non-portable (MPTCs, scoped types, empty decls, type operators)
Stability: experimental
Maintainer: Edward Kmett <[email protected]>
Description
Documentation
module Data.Ring
data Mod a s Source
Instances
Eq a => Eq (Mod a s)
(Modular s a, Integral a) => Num (Mod a s)
Show a => Show (Mod a s)
(Modular s a, Integral a) => Monoid (Mod a s)
(Modular s a, Integral a) => Multiplicative (Mod a s)
(Modular s a, Integral a) => LeftSemiNearRing (Mod a s)
(Modular s a, Integral a) => RightSemiNearRing (Mod a s)
(Modular s a, Integral a) => SemiRing (Mod a s)
(Modular s a, Integral a) => Group (Mod a s)
(Modular s a, Integral a) => Ring (Mod a s)
class Modular s a | s -> a whereSource
Methods
modulus :: s -> aSource
Instances
(ReflectedNum s, Num a) => Modular (ModulusNum s a) a
modulus :: Modular s a => s -> aSource
withIntegralModulus :: Integral a => a -> (forall s. Modular s a => w `Mod` s) -> wSource
Question:
In a programming class Professor Madge has a total of n students, and she wants to assign teams of m students to each of p computer projects. If each student must be assigned to the same number of projects,
(a) In how many projects will each individual student be involved?
(b) In how many projects will each pair of students be involved?
Step by Step Answer:
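The worked answer is not included on the source page; for the reader's convenience, a standard double-counting argument (assuming the balanced assignment the question presupposes) gives:

$$\text{(a)}\quad r = \frac{mp}{n}, \qquad \text{since the } p \text{ projects create } mp \text{ team slots, shared equally by the } n \text{ students;}$$

$$\text{(b)}\quad \lambda = \frac{p\binom{m}{2}}{\binom{n}{2}} = \frac{m(m-1)\,p}{n(n-1)}, \qquad \text{by the same count applied to the } \binom{n}{2} \text{ pairs of students.}$$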
FXAA, SMAA injectors???
qazwsxqaz
Share
Recommended Posts
So I have a GTX 1070 on a 64-bit computer, GTA 4 1.0.4.0, and injectFXAA v10.0 / injectSMAA v1.2 (of course for DX9), but neither works with my game for some reason. I also have "Simple ENB for natural and realistic lighting"; as far as I know it's not about that one, because even if I change the d3d9.dll to not run the ENB, the FXAA still doesn't work. I tried with the Nvidia control panel too, but no luck there either. Is there a commonly known thing I don't know about?
• 2 weeks later...
• 1 month later...
You could try Nvidia DSR in the Nvidia control panel. It will kill your performance with ENB, but you can try it.
• 1 month later...
On 3/26/2019 at 5:42 AM, DayL said:
You could try Nvidia DSR in Nvdia control panel but it will kill your performance with ENB but you can try it.
hmmm DSR is unavailable at nvidia control panels gta4 settings, will doing it globally work?
8 minutes ago, qazwsxqaz said:
hmmm DSR is unavailable at nvidia control panels gta4 settings, will doing it globally work?
yes
got it, thanks i'll inform you tomorrow about the results when i try, gotta go to sleep now.
qazwsxqaz
In 1.0.4.0 we don't have those and because of my enb i can't update it.
• 8 months later...
boomboom5950
On 5/3/2019 at 9:30 AM, qazwsxqaz said:
In 1.0.4.0 we don't have those and because of my enb i can't update it.
use reshade
I'm planning a short course on few topics and applications of nonlinear functional analysis, and I'd like a reference for a quick and possibly self-contained construction of a structure of a Banach differentiable manifold for the space of continuous mappings $C^0(K,M)$, where $K$ is a compact topological space (even metric if it helps) and $M$ is a (finite dimensional) differentiable manifold.
A construction of a differentiable structure of Banach manifold for this space can be found e.g. in Lang's book Fundamentals of Differential Geometry (1999). The main tools are the exponential map and tubular neighborhoods (having fixed a Riemannian structure on $M$). This is OK, but I believe there should be something even more basic.
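For readers who have not seen it, here is a brief sketch of the charts in Lang's approach (only a sketch, not the alternative construction asked for). Fix a Riemannian metric on $M$ with exponential map $\exp$; for $f \in C^0(K,M)$ take as model space the Banach space $\Gamma^0(f^*TM)$ of continuous sections of the pulled-back tangent bundle with the sup norm, and as chart

$$\Phi_f(\xi)(k) = \exp_{f(k)}\bigl(\xi(k)\bigr), \qquad \xi \in \Gamma^0(f^*TM),\ \ \|\xi\|_\infty < \varepsilon,$$

with smoothness of the transition maps coming from the usual $\Omega$-lemma-type argument, since they act pointwise by smooth maps of $M$.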
Does anybody have a reference for alternative constructions (not necessarily elementary) ?
Well, there is always my old "Foundations of Global Nonlinear Analysis", which is about just this.
Are there copies of this book available? Doing a little Googling around I wasn't able to find it. – Ryan Budney Sep 3 '10 at 1:19
I tend to focus more on the manifold of smooth maps, so this list should be understood as being more from that perspective than of continuous maps, but there's a fair amount of overlap in the two cases so I hope it's of some use.
1. Constructing Smooth Manifolds of Loop Spaces. Despite my disqualifier above, this does deal with continuous maps as well as smooth ones. However, it deals with $K = S^1$ only. On the other hand, the role of $K$ in the construction is "not a lot" so understanding the case $K = S^1$ gives a lot of insight about the general case.
2. Manifolds of differentiable mappings by Peter Michor. Actually, search MathSciNet for Michor with "manifold" in the title.
3. The Convenient Setting of Global Analysis contains a whole section on manifolds of mapping spaces.
4. nLab pages: Manifold structure of mapping spaces. Bit of a "work in progress" and concentrating on smooth maps; the bit that might interest you is the condition on the source: all that is needed is that it be sequentially compact.
5. The differential topology of loop spaces. An interpolation between (1) and (3) above. Written for people without the stamina to read (3) but who were interested in loop spaces. Again, concentrates on "smooth", but as (1) shows then that's not really important.
Thank you, I'll have a good reading! I think I once had a glance to 3. Isn't that an important feature of these works is the differential of $f:U\subset X\to Y$ as a map $Df:U\times X\to Y$ rather than a map $Df:U\to L(X,Y)$? (which in infinite dimension really makes a different notion of a continuously differentiable map)? – Pietro Majer Sep 2 '10 at 15:25
The great thing about the calculus of Kriegl, Michor, Frolicher (and others) is that continuity is secondary. So worrying about whether or not a map is continuous or what topology to put on L(X,Y) is no longer relevant. What matters is the effect on smooth curves, and in that realm then the exponential law holds and the derivative can be viewed in both ways. – Loop Space Sep 2 '10 at 18:58
Calculating a Subgroup Size from a Known Proportion
Expressing a proportion in different ways
The notions of proportions and percentages are fundamental both in everyday life and in professional life.
It is absolutely necessary first to master the concepts, then to know how to carry out the appropriate calculations: in particular, to apply or compute a proportion, express it in different forms (decimal, fraction, percentage), and compute proportions of proportions.
4. Calculating a subgroup size from a known proportion
We begin by precisely identifying the reference population $E$ and the subpopulation $A$. This time we assume that the proportion and one of the two sizes are known.
Property 3.
To compute the size of the subpopulation $A$ or the size of the population $E$, we write the formula for a proportion and then use cross-multiplication. This gives:
$\quad\color{red}{\boxed{\; p_A = \dfrac{n_A}{n_E}\; }}\; (1)\quad$ $\color{red}{\boxed{\; n_A =p_A\times n_E\; }}\; (2)\quad$ $\color{red}{\boxed{\; n_E = \dfrac{n_A}{p_A}\; }}\; (3)$
Solved Exercise 1.
The class 1èreS1 contains 35 students, 28% of whom are girls. Compute the number of girls in this class.
Solution.
The reference population $E$ is the set of students in class 1èreS1, with size $n_E = 35$.
The subpopulation $F$ of interest is the group of girls in class 1èreS1, with size $n_F = ?$ to be computed.
According to the statement, the proportion of girls in this class is $p_F = 28\% = \dfrac{28}{100} = 0.28$.
We therefore write: $$n_F = p_F \times n_E$$
Cross-multiplication gives: $$n_F = 0.28 \times 35 = 9.80$$
Since the number of girls is a whole number, we must round the result to the nearest unit, which gives $9.80 \simeq 10$.
Conclusion. The number of girls in class 1èreS1 is $10$.
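For completeness, the same computation can be scripted; a minimal C++ sketch using the values from the exercise above:

#include <cmath>
#include <cstdio>

int main()
{
    const double p_F = 0.28;       // proportion of girls
    const double n_E = 35.0;       // size of the reference population (the class)
    const double n_F = p_F * n_E;  // 9.80
    std::printf("n_F = %.2f, rounded to a whole number: %.0f\n", n_F, std::round(n_F));  // prints 10
}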
background_processing.cpp
Go to the documentation of this file.
1 /*********************************************************************
2 * Software License Agreement (BSD License)
3 *
4 * Copyright (c) 2012, Willow Garage, Inc.
5 * All rights reserved.
6 *
7 * Redistribution and use in source and binary forms, with or without
8 * modification, are permitted provided that the following conditions
9 * are met:
10 *
11 * * Redistributions of source code must retain the above copyright
12 * notice, this list of conditions and the following disclaimer.
13 * * Redistributions in binary form must reproduce the above
14 * copyright notice, this list of conditions and the following
15 * disclaimer in the documentation and/or other materials provided
16 * with the distribution.
17 * * Neither the name of Willow Garage nor the names of its
18 * contributors may be used to endorse or promote products derived
19 * from this software without specific prior written permission.
20 *
21 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
22 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
23 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
24 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
25 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
26 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
27 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
28 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
29 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
30 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
31 * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
32 * POSSIBILITY OF SUCH DAMAGE.
33 *********************************************************************/
34
35 /* Author: Ioan Sucan */
36
38 #include <ros/console.h>
39
40 namespace moveit
41 {
42 namespace tools
43 {
45 {
46 // spin a thread that will process user events
48 processing_ = false;
49 processing_thread_.reset(new boost::thread(boost::bind(&BackgroundProcessing::processingThread, this)));
50 }
51
53 {
54 run_processing_thread_ = false;
56 processing_thread_->join();
57 }
58
60 {
61 boost::unique_lock<boost::mutex> ulock(action_lock_);
62
64 {
65 while (actions_.empty() && run_processing_thread_)
67
68 while (!actions_.empty())
69 {
70 JobCallback fn = actions_.front();
71 std::string action_name = action_names_.front();
72 actions_.pop_front();
73 action_names_.pop_front();
74 processing_ = true;
75
76 // make sure we are unlocked while we process the event
77 action_lock_.unlock();
78 try
79 {
80 ROS_DEBUG_NAMED("background_processing", "Begin executing '%s'", action_name.c_str());
81 fn();
82 ROS_DEBUG_NAMED("background_processing", "Done executing '%s'", action_name.c_str());
83 }
84 catch (std::exception& ex)
85 {
86 ROS_ERROR_NAMED("background_processing", "Exception caught while processing action '%s': %s",
87 action_name.c_str(), ex.what());
88 }
89 processing_ = false;
91 queue_change_event_(COMPLETE, action_name);
92 action_lock_.lock();
93 }
94 }
95 }
96
97 void BackgroundProcessing::addJob(const boost::function<void()>& job, const std::string& name)
98 {
99 {
100 boost::mutex::scoped_lock _(action_lock_);
101 actions_.push_back(job);
102 action_names_.push_back(name);
104 }
106 queue_change_event_(ADD, name);
107 }
108
110 {
111 bool update = false;
112 std::deque<std::string> removed;
113 {
114 boost::mutex::scoped_lock _(action_lock_);
115 update = !actions_.empty();
116 actions_.clear();
117 action_names_.swap(removed);
118 }
119 if (update && queue_change_event_)
120 for (std::deque<std::string>::iterator it = removed.begin(); it != removed.end(); ++it)
122 }
123
125 {
126 boost::mutex::scoped_lock _(action_lock_);
127 return actions_.size() + (processing_ ? 1 : 0);
128 }
129
131 {
132 boost::mutex::scoped_lock _(action_lock_);
133 queue_change_event_ = event;
134 }
135
137 {
139 }
140
141 } // end of namespace tools
142 } // end of namespace moveit
void notify_all() BOOST_NOEXCEPT
void addJob(const JobCallback &job, const std::string &name)
Add a job to the queue of jobs to execute. A name is also specified for the job.
void wait(unique_lock< mutex > &m)
BackgroundProcessing()
Constructor. The background thread is activated automatically.
Called when a job is removed from the queue without execution.
Called when a job is completed (and removed from the queue)
void update(const std::string &key, const XmlRpc::XmlRpcValue &v)
~BackgroundProcessing()
Finishes currently executing job, clears the remaining queue.
#define ROS_DEBUG_NAMED(name,...)
std::unique_ptr< boost::thread > processing_thread_
std::size_t getJobCount() const
Get the size of the queue of jobs (includes currently processed job).
void clear()
Clear the queue of jobs.
boost::function< void()> JobCallback
The signature for job callbacks.
void clearJobUpdateEvent()
Clear the callback to be triggered when events in JobEvent take place.
boost::function< void(JobEvent, const std::string &)> JobUpdateCallback
The signature for callback triggered when job events take place: the event that took place and the na...
#define ROS_ERROR_NAMED(name,...)
Called when a job is added to the queue.
Main namespace for MoveIt!
boost::condition_variable new_action_condition_
void setJobUpdateEvent(const JobUpdateCallback &event)
Set the callback to be triggered when events in JobEvent take place.
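A minimal usage sketch based on the interface documented above (the include path and the nested JobEvent type name are assumed from this page; the example itself is not part of the MoveIt sources):

// Queue a job on the background thread and watch queue events.
#include <moveit/background_processing/background_processing.h>
#include <chrono>
#include <cstdio>
#include <string>
#include <thread>

int main()
{
    moveit::tools::BackgroundProcessing bp;  // the worker thread starts in the constructor

    // Report every queue change (job added, completed, or removed without running).
    bp.setJobUpdateEvent(
        [](moveit::tools::BackgroundProcessing::JobEvent /*event*/, const std::string& name) {
          std::printf("queue changed for job '%s'\n", name.c_str());
        });

    // Jobs are plain void() callables, executed one at a time on the background thread.
    bp.addJob([] { std::puts("heavy work runs here"); }, "example_job");

    while (bp.getJobCount() > 0)  // includes the job currently being processed
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
}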
How/Who create the GameObjects?
41 replies to this topic
#1 Icebone1000 Members
Posted 22 October 2012 - 01:52 PM
[source lang="cpp"]class Game{ list < GameObjects * > m_gos; void Update(float delta){ for(list::it goIt ..) (*goIt)->Update(delta); } ...};[/source]
Pseudo code..
I'd like to make the Game class solely responsible for the creation and deletion of the game objects.
I always do this with a template function:
[source lang="cpp"]template< class derivedGO>derivedGO* Create(){ derivedGO *p = new derivedGO(); m_gos.add(p); return p;}[/source]
And this is the only way to give the Game class the objects (so if they get created externally, they will not be part of the game).
The only (really annoying, IMO) problem is that derivedGO must provide a compatible constructor. This sucks, because I always have to create an Init(params) function that is called right after calling Create.
Is this poor design?
Edited by Icebone1000, 22 October 2012 - 01:54 PM.
#2 Servant of the Lord Members
Posted 22 October 2012 - 02:09 PM
I can't really comment on the design itself, but if you are using C++11, you can forward the arguments (without knowing how many arguments or what types) to the constructor like this:
template<typename derivedGO, typename Arg>
derivedGO* Create(Arg &&arg){
derivedGO *p = new derivedGO(std::forward<Arg>(arg));
m_gos.add(p);
return p;
}
('Game' sounds like it's a monolith class, though)
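If the compiler supports variadic templates, the same idea extends to any number of constructor arguments; a sketch along those lines (using the same hypothetical m_gos container as above, and <utility> for std::forward):

template< typename derivedGO, typename... Args >
derivedGO* Create(Args&&... args){
    // Perfect-forward every constructor argument to the derived type.
    derivedGO *p = new derivedGO(std::forward<Args>(args)...);
    m_gos.add(p);
    return p;
}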
It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames - [indie turn-based rpg set in a para-historical French colony] | Indie RPG development journal | [Fly with me on Twitter]
#3 L. Spiro Members
Posted 22 October 2012 - 06:09 PM
It is not particularly good design. You are trying to do too much for the users. There comes a point when you give users so little control that your engine/framework/whatever simply becomes useless.
Your game class is doing way too much. It is a mega-class.
Break it down into smaller classes each with a specific task.
For example, objects don’t exist inside the game, they exist inside scenes.
So for starters, get all the objects out of your Game class and put them into a Scene class.
On my first few engine attempts I also thought there should be at most only one scene active at a time.
This caused me to create hacky solutions when trying to render one set of objects with one camera and another set of objects with another camera.
The correct solution is not to limit how many scenes you can have.
So a scene can have any number of objects inside it. The Game class can have any number of scenes inside it.
Back to my first point, why do objects have to be created only through the Game (or Scene) class?
You probably think you are doing your users a favor and making their lives easier by doing some of the heavy lifting for them.
You’re not.
There is no logical reason why objects in the scene must strictly be created by the scene manager.
Of course, creating objects via the scene manager should be one option, but not the only one.
m_dmipInstance = m_sgpScene->CreateDrawableModelInstance( "Cayman.lsm" );
m_dmipInstance->Orientation().SetPos( CVector3( 0.0f, -m_dmipInstance->Aabb().m_vMin.y, 0.0f ) );
m_dmipInstance->SetCellShading( true );
CDrawableModelInstancePtr pdmipTemp = m_sgpScene->CreateDrawableModelInstance( "Ground.lsm" );
pdmipTemp->SetCastsShadow( false );
This is just one way to create models. You can also do this:
CDrawableModelInstancePtr dmipCustomLoad;
dmipCustomLoad.New();
dmipCustomLoad->LoadFile( "BMW X6" );
dmipCustomLoad->SetCookTorranceShading( true );
m_sgpScene->AddDrawableModelInstance( dmipCustomLoad );
dmipCustomLoad.New(); // Notice how this is a smart pointer.
// Custom setting of vertices, normals, etc. here.
m_sgpScene->AddDrawableModelInstance( dmipCustomLoad );
I am speaking from experience when I say that loading models strictly through the scene manager is a pain in the ass.
If you think you have to do it that way, you have a problem with your overall design/architecture that you need to fix.
If your goal is to make the engine easy to use, employ smart pointers and use them for the objects in your scene. And don’t be picky about how models end up in the scene.
L. Spiro
#4 uglybdavis Members
Posted 22 October 2012 - 07:59 PM
All pseudocode here, but you should reconsider your approach to the general design of the game engine. I apologize in advance; I started typing all this out and got bored by the end. If I skipped something and you would like some explanation, just ask. But it should all be self-explanatory.
You have a game, which contains a stack of scenes. Only the top most scene is updated, but any scene can be rendered. This makes it so if you have an in game menu, you can pause the game (by no longer updating the game scene) and render it as well as the menu on top. Makes it pretty easy. The scene has game objects, which are basically just glue objects containing components (Transforms, character controllers, ui's etc). Each component does something specific, with variables you can set at any time. You can retrieve game objects from the scene, and components from game objects. All objects are created by Game, as it is the factory. When game starts, it creates the main menu scene, and you just take it from there. This may seem a bit fuzzy or abstract, but the implementation is simple, consider the following:
Some class definitions
[source lang="cpp"]class Component() { enum Type { RENDERER, ENEMY, PLAYER, UI, ANIMATION, SCRIPT } Type m_eComponentType; virtual void Awake() = 0; // Called when component is created virtual void Destroy() = 0; // Called when component is destroyed virtual void Start() = 0; // Called when component is activated virtual void Stop() = 0; // Called when component is deactivated virtual void Update(float dt) = 0; // I'm not going to explain}class Renderer : public Component { int m_nTextureHandle; // All sorts of component specific members int m_nBufferId; // Look, we even have a buffer! void Awake(); // Overload void Destroy(); // Overload void Start(); // Overload void Stop(); // Overload void Update(float dt); // Overload void Render(); // Again, component specific!}class Enemy : public Component { // Two components, we're on fire! int m_nHealth; // Enemy specific junk AStar* m_pPathFinder; // Specific junk void Awake(); // Override void Destroy(); // Override void Start(); // Override void Stop(); // Override void Update(float dt); // Override}class Player : public Component { // See above void Awake(); void Destroy(); void Start(); void Stop(); void Update(float dt);}class UI : public Component { // See above void Awake(); void Destroy(); void Start(); void Stop(); void Update(float dt);}class GameObject() { // Now for the fun bit std::vector<Component*> m_vComponents; // A game object has components std::string m_strGameObjectName; // So we can retrieve game objects later bool m_bIsActive; // Does not always need to be active void GameObjectWasCreated(); // Created void GameObjectDidBecomeActive(); // Became active void GameObjectDidBecomeInactive(); // Became inactive void GameObjectWasDestroyed(); // Destroyed void AddComponent(Component*); void RemoveComponent(Component*); Component* GetComponent(Component::Type); void Update(float dt); void Render();}class Scene() { std::vector<GameObject*> m_vGameObjectList; // You can be more creative bool m_bRenderWhenNotActive; void SceneWasCreated(); void SceneDidBecomeActive(); void SceneDidBecomeInactive(); void SceneWasDestroyed(); void AddGameObject(GameObject*); void RemoveGameObject(GameObject*); GameObject* GetGameObject(std::string); void Update(float dt); void Render();}class Game { std::vector<Scene*> m_vSceneStack; bool Initialize(); // Initializes the game (Keyboard, mouse, managers, etc) void Shutdown(); // Destroys the game (blows away scene stack, game objects etc, shuts down managers) Scene* CreateNewScene(); // Creates a scene object GameObject* CreateGameObject(); // Creates a game object Component* CreateComponent(Component::Type); // Creates a component void DestroyScene(Scene*); // Delete the scene void DestroyGameObject(GameObject*); // Delete the game object void DestroyComponent(Component*); // Delete the component void Update(float dt); // Updates the top most object void Render();}[/source]
A little bit of implementation
[source lang="cpp"]void Renderer::Awake() { m_eComponentType = RENDERER; m_bIsActive = true; } // Do other stuff to set up your buffers and texturesvoid Renderer::Destroy() { } // Release your buffers and textures herevoid Renderer::Start() { } // Don't carevoid Renderer::Stop() { } // Don't carevoid Renderer::Update(float dt) { } // Don't carevoid Renderer::Render() { BindBuffer(m_nBufferId); BindTexture(m_nTextureId); RenderBatch(); }void Enemy::Awake() { m_eComponentType = ENEMY; m_bIsActive = true; } // Do other enemy stuff, like ai initialization and whatnotvoid Enemy::Destroy() { } // Shutdownvoid Enemy::Start() { if (m_pAStar == 0) m_pAStar = AStar::Factory::NewPathFinder(); } // I don't know, just arbitrary shitvoid Enemy::Stop() { if (m_pAStart != 0) AStar::Factory::Destroy(m_pAStar); m_pAStar = 0; } // Make sure you shut shit downvoid Enemy::Update(float dt) { m_pAStar->Find("Player"); AttackPlayerWhenPossible(); }// Be creative, make your player and ui components and junk!// For extra credit, make a script component, being able to script your engine will make life easy.class GameObject() { std::vector<Component*> m_vComponents; std::string m_strGameObjectName; bool m_bIsActive; void GameObject::GameObjectWasCreated() { m_bIsActive = true; } // Depending on your game, go crazyvoid GameObject::GameObjectDidBecomeActive() { // Let all the active components know, they are active again! for (int i = 0, size = m_vComponents.size(); i < size; ++i ) { if (m_vComponents[i].m_bIsActive) m_vComponents[i].Start(); }} void GameObject::GameObjectDidBecomeInactive() { // Active components, go away! for (int i = 0, size = m_vComponents.size(); i < size; ++i ) { if (m_vComponents[i].m_bIsActive) m_vComponents[i].Stop(); }} void GameObject::GameObjectWasDestroyed() { DestroyAllComponents(); } // No more game object, no more componentsvoid GameObject::AddComponent(Component* p) { // Make sure no other component of this type exists (be creative) m_vComponents.push_back(p); p->Awake(); p->Start();}void GameObject::RemoveComponent(Component* p) { // Make sure p is in component list // Get the index of the component p->Stop(); p->Destroy(); m_vComponents.erase(m_vComponents.begin() + componentIndex);}Component* GameObject::GetComponent(Component::Type t) { for (int i = 0; i < m_vComponents.size(); i < size; ++i) if (m_vComponents[i].m_eComponentType == t) // Remember, add component is only allowing one of each type return m_vComponents[i]; return 0;} void GameObject::Update(float dt) { // Propogate update to all components for (int i = 0; i < m_vComponents.size(); i < size; ++i) if (m_vComponents[i].m_bIsActive) m_vComponents[i].Update(dt);}void GameObject::Render() { // Only render render components for (int i = 0; i < m_vComponents.size(); i < size; ++i) if (m_vComponents[i].m_eComponentType == RENDERER && m_vComponents[i].m_bIsActive) ((RendererComponent*)m_vComponents[i])->Render();}void Scene::SceneWasCreated() { } // Do junk void Scene::SceneDidBecomeActive() { for (int i = 0, size = m_vGameObjectList.size(); i < size; ++i) if (m_vGameObjectList[i]->m_bIsActive) // Only active objects need to wake up m_vGameObjectList[i]->GameObjectDidBecomeActive();} void Scene::SceneDidBecomeInactive() { for (int i = 0, size = m_vGameObjectList.size(); i < size; ++i) if (m_vGameObjectList[i]->m_bIsActive) // Only active objects need to go to sleep m_vGameObjectList[i]->GameObjectDidBecomeActive();}void Scene::SceneWasDestroyed() { } // Do the opposite of create void Scene::AddGameObject(GameObject* p) { // 
Should not have two addresses of the same object in the list m_vGameObjectList.push_back(p); p->GameObjectWasCreated(); // Game object created in scene p->GameObjectDidBecomeActive(); // Game object is now active} void Scene::RemoveGameObject(GameObject* p) { // Make sure that p is in game object list p->GameObjectDidBecomeInactive(); // Game object no longer active p->GameObjectWasDestroyed(); // Game object is no longer in scene}GameObject* Scene::GetGameObject(std::string name) { // Retrieve first instance of game object (or null) based on name string} void Scene::Update(float dt) { for (int i = 0, size = m_vGameObjectList.size(); i < size; ++i) if (m_vGameObjectList[i]->m_bIsActive) // Only propogate to active game objects m_vGameObjectList[i]->GameObjectDidBecomeActive();} void Scene::Render() { for (int i = 0, size = m_vGameObjectList.size(); i < size; ++i) if (m_vGameObjectList[i]->m_bIsActive && m_vGameObjectList[i]->GetComponent(RENDERER) != 0) m_vGameObjectList[i]->Render(); // Don't need the above check, but figured i'd demonstrate}bool Game::Initialize() { Scene* mainMenu = CreateNewScene(); mainMenu->m_bRenderWhenNotActive = true; GameObject* mainMenuUIObject = CreateGameObject(); mainMenuUIObject->m_strGameObjectName = "MainMenuUI"; Component* mainMenuUIComponent = CreateComponent(UI); // Set all sorts of textures and junk for the main menu ui mainMenu->AddGameObject(mainMenuUIObject); mainMenuUIObject->AddComponent(mainMenuUIComponent); // You can make other scenes here, or inside of main menu, the point is // each new scene will be pushed on top of the scene stack. // How the stack may be used: // Push main menu, push audio options, set volume, pop audio options // Press play, pushes the game, now your playing // Exit to menu pops the game, now your back to the main menu}void Game::Shutdown() { // Tedious cleanup} Scene* Game::CreateNewScene() { return new Scene(); // You have a factory, be more creative} GameObject* Game::CreateGameObject() { return new GameObject(); // Same as above} Component* Game::CreateComponent(Component::Type t) { switch (t) { // Return appropriate component subclass }} void Game::DestroyScene(Scene* s) { delete s; // I'm getting bored of typing all this} void Game::DestroyGameObject(GameObject* p) { delete p;}void Game::DestroyComponent(Component* p) { delete p;} void Game::Update(float dt) { // Only update the top most scene m_vSceneStack[m_vSceneStack.size() - 1]->Update(dt);}void Game::Render() { // Render all renderable scenes // So, if you have the following stack // main menu > game > options // and game and options are both rendering, and options are opaque // It just looks like game is paused under options. for (int i = 0; i < m_vSceneStack.size() - 1; ++i) if (m_vSceneStack[i]->m_bRenderWhenNotActive) m_vSceneStack[i]->Render(); m_vSceneStack[m_vSceneStack.size() - 1]->Render();}[/source]
The limitations of this system are obvious, but you could easily make script components, and add a lot of power to your engine.
Hope this helps!
Edit, didn't see L.Spyro's response when i wrote this. Consider this the long version.
Edited by uglybdavis, 22 October 2012 - 08:01 PM.
#5 rdragon1 Members
Posted 22 October 2012 - 11:51 PM
For example, objects don’t exist inside the game, they exist inside scenes.
What about game objects that need to persist across scenes? Like puzzle logic state that spans levels? Or the main player character + all of his stats/score/whatever?
#6 L. Spiro Members
Posted 23 October 2012 - 04:42 AM
Data that needs to persist across scenes/states goes on the Game class. That is specifically its job if nothing else.
But that does not mean the 3D or 2D rendering data for your main character etc. That just means your current level, current HP, etc. The bare minimum that could be accessed by any part of the game at any time.
[EDIT]
Note that I failed to mention that all that data that belongs just to one game or another should be part of your “MyGame” class which inherits from “Game”.
Game itself is general across all games and should obviously not be the place for that kind of data.
[/EDIT]
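A minimal sketch of that split (class and member names are made up for illustration, not taken from any engine):

// Engine side: generic, knows nothing about any particular game.
class Game {
public:
    virtual ~Game() {}
    // Scene/state management lives here.
};

// Game side: the data that must survive scene changes lives on the derived class.
class MyGame : public Game {
public:
    int currentLevel = 1;
    int playerHp = 100;
    // Any scene can reach this through the game pointer it is given.
};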
L. Spiro
Edited by L. Spiro, 23 October 2012 - 01:53 PM.
#7 uart777 Members
Posted 23 October 2012 - 05:35 AM
To represent different scenes:
enum { SCENE_LOGO, SCENE_TITLE, SCENE_OPTIONS, SCENE_PLAY, SCENE_GAME_OVER };
int c_scene=SCENE_LOGO; // current scene
#8 nox_pp Members
Posted 23 October 2012 - 05:43 AM
What about game objects that need to persist across scenes? Like puzzle logic state that spans levels? Or the main player character + all of his stats/score/whatever?
Segregate your shared data, so that you can better identify the character of it--what it means and where it comes from. Then, you don't need to expose it globally in your Game class, you can just pass it between scenes that need it (as parameters.) Yes, it's more work, but it's also more explicit, and ultimately much safer.
Is this poor design?
Yes, for a variety of reasons. What Servant of the Lord said is true, and will work well if you want to carry on with your design, but the design is flawed. Think to yourself, will a virtual Update method always be enough to fully achieve the purpose of a particular GameObject? The answer is no. Which means that you will unavoidably have to pluck GameObjects out of wherever they polymorphically reside, and cast them to their proper types. This is folly. How can you be sure of the types, except by experimentation with dynamic_cast or divine knowledge? Don't store heterogeneous types homogeneously. You might think that they're homogeneous types, because they derive from the same base class, but they're nearly useless if they're never casted.
It can seem messier to deal only with concrete types in this instance, but it's really only being more honest about what is actually happening. Furthermore, there should be a strict separation between GAME code and ENGINE code. This looks like Game code to me, so it doesn't really matter if it's directly oriented toward your game, and concretely references your derived GameObjects. If you've got too many derived GameObjects for this to be feasible, that's merely an indicator that something else is wrong.
Between Scylla and Charybdis: First Look <-- The game I'm working on
Object-Oriented Programming Sucks <-- The kind of thing I say
#9 uart777 Members
Posted 23 October 2012 - 05:54 AM
nox_pp: I agree that OOP sucks :) C++ is for newbies who know nothing about real programming, but I must admit that uglybdavis presented some smart code. I admire that.
typedef struct {
void *p;
int x, y, w, h,
bpp, key, alpha;
} IMAGE;
int load_image(IMAGE *i, char *file);
void move_image(IMAGE *i, int x, int y);
void draw_image(IMAGE *i);
My ASM programming site: http://sungod777.zxq.net/
#10 L. Spiro Members
Posted 23 October 2012 - 01:47 PM
To represent different scenes:
enum { SCENE_LOGO, SCENE_TITLE, SCENE_OPTIONS, SCENE_PLAY, SCENE_GAME_OVER };
int c_scene=SCENE_LOGO; // current scene
And then what? A switch case to handle the logic between different scenes based on just that integer?
nox_pp: I agree that OOP sucks :) C++ is for newbies who know nothing about real programming, but I must admit that uglybdavis presented some smart code. I admire that.
typedef struct {
void *p;
int x, y, w, h,
bpp, key, alpha;
} IMAGE;
int load_image(IMAGE *i, char *file);
void move_image(IMAGE *i, int x, int y);
void draw_image(IMAGE *i);
I don’t get the point of this post.
No one asked for opinions on object-oriented programming nor do IMAGE structures (with C code trying to mimic objects with them) have anything to do with the topic.
If you are trying to imply that the original poster should abandon C++ and code in the style you proposed (because C++ is for newbies?), I would point out that your proposed style leaves much to be desired.
• const-correctness. Use “const char *”, not “char *”. Does draw_image() modify the image pointer? If so, why? If not, why is the pointer not const?
• Type-appropriateness. When will an image have a negative width or height? Don’t use signed types for things that are unsigned. You chose “int” out of laziness.
• Why do you have to move the image in a separate step from drawing it? This is superfluous. We aren’t working with lines where the previous position of the line might have a desirable side effect on the current draw operation.
• You basically just manually implemented objects in C. The class is “IMAGE” and it has 3 methods:
• bool load_image( const char * file )
• void move_image( int x, int y )
• void draw_image()
Why would you not just use C++ instead? What exactly is the benefit from doing it the long “non-newbie” way? It is more verbose, less obvious that load_image() was actually supposed to return bool, and you don’t get the extra benefits of virtual functions, templates, etc.
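For comparison, a rough sketch of the C++ equivalent being described here, with hypothetical names and trivial stub bodies just to show the const-correct, unsigned-typed interface:

#include <cstdint>
#include <iostream>
#include <string>

class Image {
public:
    // Returns false on failure; the path is const because loading
    // never needs to modify the file name.
    bool Load(const std::string& file) {
        // A real loader would read and validate pixel data here.
        return !file.empty();
    }

    // Drawing does not modify the image, so the method is const.
    // Coordinates and dimensions are unsigned: never negative.
    void Draw(std::uint32_t x, std::uint32_t y) const {
        std::cout << "draw " << m_width << "x" << m_height
                  << " image at (" << x << ", " << y << ")\n";
    }

private:
    void*         m_pixels = nullptr;
    std::uint32_t m_width = 0, m_height = 0;
    std::uint32_t m_bpp = 32, m_key = 0, m_alpha = 255;
};

int main() {
    Image img;
    if (img.Load("title.png"))
        img.Draw(16, 32);
    return 0;
}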
I just don’t get the point.
L. Spiro
#11 uglybdavis Members
Posted 23 October 2012 - 03:00 PM
What about game objects that need to persist across scenes? Like puzzle logic state that spans levels? Or the main player character + all of his stats/score/whatever?
Switching between scenes also presents a pretty good chance to save your data to disk, then re-read it when the next scene loads. Rather than using globals to track things, you could just treat data as data. This might not be a good fit for a lot of things, but it works out very well with iCloud.
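A minimal sketch of that approach, treating the cross-scene data as plain data that is written out on a scene change and read back when the next scene loads (the PlayerState fields and file name are invented for the example):

#include <fstream>
#include <string>

struct PlayerState {
    int level = 1;
    int score = 0;
};

// Called when leaving a scene.
bool SaveState(const PlayerState& s, const std::string& path) {
    std::ofstream out(path);
    if (!out) return false;
    out << s.level << ' ' << s.score << '\n';
    return static_cast<bool>(out);
}

// Called when the next scene loads.
bool LoadState(PlayerState& s, const std::string& path) {
    std::ifstream in(path);
    return static_cast<bool>(in >> s.level >> s.score);
}

int main() {
    PlayerState state;
    state.level = 3;
    state.score = 1250;
    SaveState(state, "save.txt");    // on scene exit
    PlayerState restored;
    LoadState(restored, "save.txt"); // on the next scene's load
    return 0;
}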
My ASM programming site: http://sungod777.zxq.net/
Assemblers are dumb and they suck. ASM is for newbies who know nothing about how to code. Real programmers use butterflies. They open their hands and let the butterfly's delicate wings flap once. The disturbance ripples outwards, changing the flow of the eddy currents in the upper atmosphere; these cause momentary pockets of high-pressure air to form. These pockets act as lenses that deflect incoming cosmic rays, focusing them to strike the drive platter and flip the desired bits. It's all explained right here: http://xkcd.com/378/
#12 uart777 Members
Posted 25 October 2012 - 01:26 AM
"I don’t get the point of this post" - I merely demonstrated that OOP is not neccessary.
"When will an image have a negative width or height?" - When it's invalid/inverted (-1/0xFFFFFFFF).
"Why do you have to move the image in a separate step from drawing it?" - Because it's faster to send less parameters, but you don't know anything about push-call sequences.
L. Spiro: Let me show how to draw/paint/airbrush/sculpt anything: http://www.facebook....7/photos_stream You think you're so right, but you don't even know what your code converts to. How can you expect anyone to use your library?
uglybdavis: Forgive me for complimenting you. "Assemblers are dumb and they suck. ASM is for newbies who know nothing about how to code" - This statement shows how little you know about programming. You defined a PIXEL wrong. Alpha should be in the leftmost byte (0xAABBCCDD). Stop changing byte orders. You disrespect ASM because you don't know anything about the processor's language.
"It's all explained right here" - Dummy Posted Image
Edited by uart777, 25 October 2012 - 02:07 AM.
#13 jbadams Senior Staff
Posted 25 October 2012 - 02:14 AM
uglybdavis: Forgive me for complimenting you [...] This statement shows how little you know about programming.
He was joking... very obviously joking -- he even linked the comic he was referencing. Did you really think someone was seriously recommending that you should use the disturbances caused by the flapping of a butterfly's wings to deflect cosmic rays at a drive platter rather than programming?
It is, however, a tongue-in-cheek observation of the fact that assembly simply isn't as important as you seem to think. Your constant recommendations to learn assembly (from your own mid-90s-style website no less) just aren't relevant in most cases. Learning assembly is valuable, and I'd even say it's something the majority of programmers should take the time to do eventually, but it just isn't useful to beginners and it isn't relevant to the original poster's question or any of the following discussion.
This statement shows how little you know about programming. You defined a PIXEL wrong. Alpha should be in leftmost byte (0xAABBCCDD).
Actually, THIS statement shows how little YOU know about programming. You've now asserted the alpha should always be the leftmost byte of a pixel -- and that to do otherwise is incorrect -- more than once, but as has already been pointed out to you, that's simply a convention and isn't significantly more or less popular than alternatives.
"I don’t get the point of this post" - I merely demonstrated that OOP is not neccessary.
Actually, you sort of showed -- in a rather poor way that was probably meaningless to the original poster given you neglected to say what you were doing rather than just dumping some code -- that OO can be implemented in C even though the language doesn't explicitly support the paradigm through language features. OOP is a set of guiding principles for designing code -- not just use of the "class" (or related) keyword(s) -- and like any other paradigm is useful in some situations when used correctly, but isn't universally applicable.
Sharing your opinion and contributing to the discussion are good. Acting like you're better than everyone else while spouting nonsense and spamming your (often irrelevant) website link are not. Drop the attitude...
L. Spiro: Let me show how to draw/paint/airbrush/sculpt anything
...and consider this an official moderator instruction to stop bringing up irrelevant stuff to show off every time you feel you've been challenged. This topic isn't even slightly about art or your -- or anyone else's -- artistic abilities, and I recall this isn't the first topic in which you've done so.
But before you accuse me of abusing moderator powers or anything of the sort, note that you're welcome to challenge anything I've said as long as you stay within the site's rules -- you're only being instructed not to post things that are completely off topic.
- Jason Astle-Adams
#14 L. Spiro Members
Posted 25 October 2012 - 02:16 AM
You think you're so right, but you don't even know what your code converts to.
Because it's faster to send less[sic] parameters, but you don't know anything about push-call sequences.
Actually I have written a C compiler, a disassembler, and an assembler. And a debugger and everything else listed here: http://memoryhacking.com/feature.php
L. Spiro: Let me show how to draw/paint/airbrush/sculpt anything: http://www.facebook....7/photos_stream
I don’t understand why you posted this completely unrelated bit, but, since we are sharing, I drew this when I was 12:
http://l-spiro.devia...gallery/4844241
uglybdavis: Forgive me for complimenting you. "Assemblers are dumb and they suck. ASM is for newbies who know nothing about how to code" - This statement shows how little you know about programming.
By the way, he didn’t attack you. It was a joke and a reference to xkcd.
It would generally be better if you didn’t try to make yourself sound amazing in a place such as this.
You will invariably find others who are much more knowledgeable and skilled than yourself. Of course, that is usually a good thing, just not when you are trying to pick fights.
L. Spiro
Edited by L. Spiro, 25 October 2012 - 08:24 AM.
#15 slayemin Members
Posted 25 October 2012 - 04:33 AM
Back on topic....
To OP: I follow a similar pattern. I consider myself somewhat of a novice at designing game architectures, so take what I say with a grain of salt:
I have a base class which every game object inherits from (already may be a bad idea). The class is abstract and pretty much worries about assigning an incrementing ID, possibly accepting a given name, maintaining a queue of messages, and enforcing the implementation of update and init functions for classes which inherit from it, and acting as a generic object which can be used by containers (pretty much polymorphism). That's it. Any inheriting classes will extend this base class.
Here is the C# code for my base class:
public abstract class GBO
{
UInt64 m_ID;
static UInt64 m_nextID = 0;
protected string m_name;
public Queue<GameMessage> Messages = new Queue<GameMessage>(10);
public GBO()
{
m_ID = m_nextID++;
}
public GBO(string Name)
{
m_ID = m_nextID++;
m_name = Name;
}
public UInt64 ID
{
get { return m_ID;}
}
public string Name
{
get { return m_name;}
set { m_name = value;}
}
public abstract void Update();
public abstract void Start(); //I use "Start" instead of "Init" for Unity3D
}
Looking at my base object and thinking about it, it does have some weaknesses:
If I decide to create a particle engine and each particle is a GBO class, do I really care about the name of a particle or any game messages it may have generated? Not really. I could slice those two variables out. The ID is mostly used as a key for dictionaries and hash tables, but would a particle ever be stored in a hash table or dictionary? Not really. So, if I slice that out too, then my base class would just have an abstract "Start()" and "Update()" method. Do I even need those? I already know that all of my game objects have to implement initialization and update functions, so enforcing it is a bit of a moot point and possibly restrictive since they don't have input parameters. I might as well have a completely blank base class to support the most flexibility... but why even have an empty class if it doesn't do anything? Do I even NEED a base class?
"Oh, what about using the base class as a generic container object? That way, you can have a single list of all your objects in the game and call their update() method!"
Well, every inheriting class would have a corresponding manager class. The manager class itself can have update called on it and we'll let the manager worry about updating its objects.
Instead of:
foreach(GBO obj in m_allObjects)
obj.Update();
we can do this:
MageMgr.Update();
MonsterMgr.Update();
PlayerMgr.Update();
BulletsMgr.Update();
Is this "better"? One immediate disadvantage is that we explicitly have to create and call the update functions for every list of items we have in the game. This adds extra programmer overhead. But, is there an advantage to asking the manager to update its contained items? I think so. The manager can worry about the game logic in regards to the object. So, for example, your mage manager would not only call update() on all of the mages in its list, it would also manage the list of mages by removing any mages which are dead (hitpoints >= 0) or perform any other trivial object specific management.
If you really like the super simple single update, you can let your manager classes derive from an abstract manager class which has an update function. Then, you'd have a list of managers, for which you update every frame:
foreach(GameObjMgr mgr in m_managers)
mgr.Update();
Then, you just have to worry about instantiating a manager class and inserting it into the manager list.
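Sketched in C++ rather than the C# above (all names hypothetical), the abstract-manager version might look something like this:

#include <memory>
#include <vector>

// Abstract base: each concrete manager updates the objects it owns.
class GameObjMgr {
public:
    virtual ~GameObjMgr() = default;
    virtual void Update() = 0;
};

class MageMgr : public GameObjMgr {
public:
    void Update() override { /* update mages, remove the dead ones */ }
};

class BulletMgr : public GameObjMgr {
public:
    void Update() override { /* advance bullets, cull off-screen ones */ }
};

int main() {
    std::vector<std::unique_ptr<GameObjMgr>> managers;
    managers.push_back(std::make_unique<MageMgr>());
    managers.push_back(std::make_unique<BulletMgr>());

    // The per-frame loop only knows about managers, not individual game objects.
    for (auto& mgr : managers) mgr->Update();
    return 0;
}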
Anyways, I really don't know much about "good" software engineering and game architecture. I may be oversimplifying the core of game development: Managing lists of "stuff" and applying rules to them.
#16 uart777 Members
Posted 25 October 2012 - 08:20 AM
spiro: Cool drawing.
jbadams: Sorry, it just seems that gamedev has changed so much since it was released. This defensive-ness is caused by the disrespect towards ASM programmers like Lamothe and myself. "Acting like you're better than everyone else" - Ultimately, no one is better than anyone else. We're all just little specks of Stardust. I apologize if this is your perception of me. "You've now asserted the alpha should always be the leftmost byte of a pixel" - Yes, that's how it was originally: AA.RR.GG.BB. Otherwise, it requires shift+and. Why change it? Why store things upside down and backwards? Why cause millions of headaches?
Back to the subject: OOP is not required. Never needed it.
Edited by uart777, 25 October 2012 - 08:40 AM.
#17 L. Spiro Members
Posted 25 October 2012 - 08:49 AM
spiro: Cool drawing.
Coming from an artist such as yourself, that is a compliment. Thank you.
Your Simba t-shirt is also top-notch quality.
jbadams: Sorry, it just seems that gamedev has changed so much since it was released. This defensive-ness is caused by the disrespect towards ASM programmers like Lamothe and myself. "Acting like you're better than everyone else" - Ultimately, no one is better than anyone else. We're all just little specks of Stardust. "You've now asserted the alpha should always be the leftmost byte of a pixel" - Yes, that's how it was originally: AA.RR.GG.BB. Otherwise, it requires shift+and. Why change it? Why store things upside down and backwards? Why cause millions of headaches?
How it was originally?
I feel that you are a prime example of what was mentioned here.
You learned early-on what compilers do internally and took it to heart.
You changed the way you coded based on what you learned from one compiler. You didn’t know that other compilers behave differently and may easily generate different code.
I can tell you for sure that there are rare cases in which my C compiler will generate horribly slow code for some switch cases.
How C++ code becomes machine-language is not strictly specified and you should understand that what my code “becomes” can vary depending on the compiler I use.
Don’t spend your time studying how some compiler created some code.
It would be better to spend your time reading the C/C++ specifications, and if you are so inclined make your own compiler. You will definitely learn a lot that way.
L. Spiro
#18 FLeBlanc Members
Posted 25 October 2012 - 09:51 AM
jbadams: Sorry, it just seems that gamedev has changed so much since it was released.
Ah, yes, I remember the old gamedev.net. You know, back in August of 2012 when you joined. Those were the days, eh?
This defensive-ness is caused by the disrespect towards ASM programmers like Lamothe and myself.
I've heard of Lamothe. Got a couple of his books. Much respect for that dude, he's been around awhile.
Never heard of you, though.
"You've now asserted the alpha should always be the leftmost byte of a pixel" - Yes, that's how it was originally: AA.RR.GG.BB. Otherwise, it requires shift+and. Why change it? Why store things upside down and backwards? Why cause millions of headaches?
How things were originally has exactly 0 bearing on how things are now. Pixel formats now have everything to do with hardware support. Modern hardware can handle RGBA data in so many different formats that you are basically free to pick your preferred method.
#19 uart777 Members
Posted 25 October 2012 - 09:58 AM
Spiro: My only intention is to defend ASM programmers like Andre Lamothe, Michael Abrash and Diane Gruber, the queen of graphics programming. She's hot, too :) Wouldn't you love to have a girl like her who does programming?
:) "Coming from an artist such as yourself, that is a compliment" - Thank you ;) Simba's my baby.
"I can tell you for sure that there are rare cases in which my C compiler will generate horribly slow code" - Creating a HL compiler is all about converting standard/infix expressions to RPN. You can perform optimizations in RPN format: Resolve constant subexpressions, replace mul/div with shifts by power of 2, reorder cumulative operations, double jmps/jxx, etc.
Jason: Sorry again for being way off subject, but the issue was brought up and I responded. Spiro, let's create another post about writing HL compilers :)
#20 FLeBlanc Members
Posted 25 October 2012 - 10:02 AM
Why would they need defending? Did anyone here attack them? Are you a troll account?
Chapter 21. PL/Perl - Perl Procedural Language
Table of Contents
21.1. PL/Perl Functions and Arguments
21.2. Data Values in PL/Perl
21.3. Database Access from PL/Perl
21.4. Trusted and Untrusted PL/Perl
21.5. Missing Features
PL/Perl is a loadable procedural language that enables you to write PostgreSQL functions in the Perl programming language.
To install PL/Perl in a particular database, use createlang plperl dbname.
Tip: If a language is installed into template1, all subsequently created databases will have the language installed automatically.
Note: Users of source packages must specially enable the build of PL/Perl during the installation process (refer to the installation instructions for more information). Users of binary packages might find PL/Perl in a separate subpackage.
21.1. PL/Perl Functions and Arguments
To create a function in the PL/Perl language, use the standard syntax:
CREATE FUNCTION funcname (argument-types) RETURNS return-type AS '
# PL/Perl function body
' LANGUAGE plperl;
The body of the function is ordinary Perl code.
Arguments and results are handled as in any other Perl subroutine: Arguments are passed in @_, and a result value is returned with return or as the last expression evaluated in the function. For example, a function returning the greater of two integer values could be defined as:
CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS '
if ($_[0] > $_[1]) { return $_[0]; }
return $_[1];
' LANGUAGE plperl;
If an SQL null value is passed to a function, the argument value will appear as "undefined" in Perl. The above function definition will not behave very nicely with null inputs (in fact, it will act as though they are zeroes). We could add STRICT to the function definition to make PostgreSQL do something more reasonable: if a null value is passed, the function will not be called at all, but will just return a null result automatically. Alternatively, we could check for undefined inputs in the function body. For example, suppose that we wanted perl_max with one null and one non-null argument to return the non-null argument, rather than a null value:
CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS '
my ($a,$b) = @_;
if (! defined $a) {
if (! defined $b) { return undef; }
return $b;
}
if (! defined $b) { return $a; }
if ($a > $b) { return $a; }
return $b;
' LANGUAGE plperl;
As shown above, to return an SQL null value from a PL/Perl function, return an undefined value. This can be done whether the function is strict or not.
Composite-type arguments are passed to the function as references to hashes. The keys of the hash are the attribute names of the composite type. Here is an example:
CREATE TABLE employee (
name text,
basesalary integer,
bonus integer
);
CREATE FUNCTION empcomp(employee) RETURNS integer AS '
my ($emp) = @_;
return $emp->{''basesalary''} + $emp->{''bonus''};
' LANGUAGE plperl;
SELECT name, empcomp(employee) FROM employee;
There is currently no support for returning a composite-type result value.
Tip: Because the function body is passed as an SQL string literal to CREATE FUNCTION, you have to escape single quotes and backslashes within your Perl source, typically by doubling them as shown in the above example. Another possible approach is to avoid writing single quotes by using Perl's extended quoting operators (q[], qq[], qw[]).
Q: How many yards into 1200 square feet?
Best Answer
If the question is, "How many square yards equal 1200 square feet?" then:
1 yard = 3 feet, so 1 square yard = 3 x 3 = 9 square feet.
1200 square feet = 1200 ÷ 9 = 133 1/3 square yards.
Related questions
How many square yards are in 1200 square feet?
133.33 square yards.
How many yards is 1200 square feet?
133.33 square yards.
How many square yards is 1200 feet?
133.33 square yards.
HOW MANY SQUARE YARDS IN A 10000 SQUARE FEET?
1,111.11 square yards in 10,000 square feet.
How many square yards of carpet for 1200 square feet?
133.33 yd2
How many square yards are in a rectangle 12.00 feet wide and 900.00 feet long?
The area would be 10,800 square feet. Dividing by 9 gives you 1,200 square yards.
How many feet is 1200 yards?
1200 yards * 3 feet/yard = 3600 feet
How many yards are in 3600 square feet?
400 square yards. Since 1 square yard = 9 square feet, 3600 square feet ÷ 9 = 400 square yards.
How many yards of sand do you need to cover 1200 sq ft?
It depends on the depth you need to cover the 1200 square feet to. There are 3 feet to 1 yard, so 1 square yard = 3 x 3 = 9 square feet, and 1200 square feet = 1200 ÷ 9 = 133.33 square yards. Multiply that area by the depth in yards to get the cubic yards of sand needed; for example, a 3-inch (1/12-yard) depth needs about 11.1 cubic yards.
How many yards are there in 1200 feet?
There are three feet in one yard. Therefore, 1200 feet is equal to 1200/3 = 400 yards.
How many sq yds are in 1200 sq ft?
One square yard = 3 ft x 3 ft = 9 square feet, so divide 1200 square feet by 9 to get 133.33 square yards.
How many yards is 1200 feet?
1200 ÷ 3 = 400 yards
How many yards of cement in 1200 square feet?
For a 4-inch depth you need at least 14.81 cubic yards.
How many square yards are there in 1200 square feet?
133.333333 source: http://www.onlineconversion.com/area.htm
If there are 1200 feet how many in a yard?
1200 feet = 400 yards.
How many yards of dirt for 1200 square feet 6 inches thick?
22.2222 cubic yards (the 2s keep on going).
How many miles in 400 yards?
1 mile = 1760 yards, so do the math... (hint: it's division). A mile is 5280 feet, 400 yards is 1200 feet, and 1200 feet is about 0.23 miles.
How many yards equals 1 200 feet?
3 feet = 1 yard so 1200 feet = 1200/3 = 400 yards. Simple!
How many square feet in 147 square yards?
147 square yards x 9 = 1,323 square feet.
How many square yards in 400 square feet?
There are 44.44... square yards in 400 square feet.
How many square yards in 250 square feet?
250 square feet = ~27.8 square yards.
6 yards equals how many square feet?
6 square yards is 54 square feet.
How many square feet is a 1200 square foot house?
Ummm, would you believe 1200 square feet?
96 square feet equal how many square yards?
96 square feet are 10.666667 square yards.
How many square yards are in 212 square feet?
212 square feet is 23.56 square yards.
Title:
Methods and systems for secured access to devices and systems
Kind Code:
A1
Abstract:
An access system in one embodiment first determines that someone has correct credentials by using a non-biometric authentication method such as typing in a password, presenting a Smart card containing a cryptographic secret, or having a valid digital signature. Once the credentials are authenticated, the user must take at least two biometric tests, which can be chosen randomly. In one approach, the biometric tests need only check a template generated from the user who desires access against the stored templates of the holder of the credentials authenticated by the non-biometric test. Access desirably will be allowed when both biometric tests are passed.
Inventors:
Venkatanna, Kumar Balepur (Bangalore, IN)
Moona, Rajat (Kanpur, IN)
Subrahmanya V, S. (Bangalore, IN)
Application Number:
11/406769
Publication Date:
10/18/2007
Filing Date:
04/18/2006
Primary Class:
Other Classes:
382/115, 713/186
International Classes:
G06K9/00
Related US Applications:
20090195394CONSUMER ABUSE DETECTION SYSTEM AND METHODAugust, 2009Johnson et al.
20070063865Wellbore telemetry system and methodMarch, 2007Madhavan et al.
20090009289Cattle identification system employing infra-red tag and associated tag readerJanuary, 2009Simon
20040036596Security system and methodsFebruary, 2004Heffner et al.
20030107468Entry security deviceJune, 2003Din
20090153360LANE KEEPING ASSIST SYSTEMJune, 2009Kim
20080024282Tsunami alarm systemJanuary, 2008Reiners et al.
20030128126Method and apparatus for error warning with multiple alarm levels and typesJuly, 2003Burbank et al.
20090295569Universal Personal Emergency Medical Information Retrieval SystemDecember, 2009Corwin et al.
20100052852Methods and devices for enrollment and verification of biometric information in identification documentsMarch, 2010Mohanty
20100090802SENSOR ARRANGEMENT USING RFID UNITSApril, 2010Nilsson et al.
Primary Examiner:
LEE, JOHN W
Attorney, Agent or Firm:
KLARQUIST SPARKMAN, LLP (121 SW SALMON STREET, SUITE 1600, PORTLAND, OR, 97204, US)
Claims:
We claim:
1. A method of allowing user access to a user having a role, the method comprising: combining individual non-biometric scores from plural non-biometric user tests taken by a user until a non-biometric confidence threshold is met or exceeded; combining individual biometric test scores of plural biometric user tests taken by the user until a biometric confidence threshold is met or exceeded; and allowing a level of access based in part upon the user's role in the event both the non-biometric and biometric confidence thresholds have been met or exceeded for the user.
2. A method according to claim 1, wherein the user's role is at least partially defined by the score on at least one non-biometric test or biometric test.
3. A method according to claim 1, wherein the user role is incrementally defined based at least in part by incremental individual non biometric scores and incremental individual biometric test scores.
4. The method of claim 1 comprising designating a non-biometric test limit corresponding to a maximum number of non-biometric tests allowed, designating a biometric test limit corresponding to a maximum number of biometric tests allowed, denying access if the non-biometric test limits are reached prior to meeting or exceeding the non-biometric confidence threshold, and denying access if the biometric test limit is reached prior to meeting or exceeding the biometric confidence threshold.
5. The method of claim 4 comprising the act of changing at least one of the non-biometric and biometric test limits.
6. The method of claim 1 wherein at least one of the biometric user tests has an associated biometric test confidence level, wherein the at least one of the biometric user tests has a biometric user test result, and wherein the individual biometric test confidence score is a combination of the biometric test confidence level and the biometric user test result.
7. The method of claim 6 wherein the at least one of the biometric user tests has a biometric test failure threshold, and wherein, if the biometric user test result is below the biometric test failure threshold, the user fails the user test.
8. The method of claim 7 wherein the biometric test failure threshold is modified by how many times the at least one of the biometric user tests has been taken by the user.
9. The method of claim 7 wherein at least some of the biometric user tests have an associated biometric test confidence level, and wherein the biometric test failure threshold is based in part on the biometric test confidence level of a preceding biometric user test.
10. The method of claim 9 comprising determining the biometric test failure threshold at least in part based upon the biometric test confidence level of at least one of the biometric user tests.
11. The method of claim 4 comprising the act of triggering an alarm in the event the non-biometric test limit is reached.
12. The method of claim 1 wherein the plural non-biometric user tests are chosen randomly from a group comprising of: inputting a password, inputting a Smart card containing a digital certificate or a predefined secret, possessing a contactless card containing a digital circuit, inputting a USB token device, inputting a bio-token device, using a soft-token device, and showing an ID to a guard.
13. The method of claim 1 wherein the plural biometric tests are chosen randomly from a group comprising of at least two of: a fingerprint recognition test of a specific finger or thumb, a fingerprint recognition test of a different specific finger or thumb, a handprint recognition test, a face recognition test, a handwriting recognition test, a voice recognition test, an iris recognition test, an eye blinking rate test, an eyeball squint extent test, a normalized body temperature test, a keystroke dynamic test, a vein recognition test, and a DNA recognition test.
14. The method of claim 1, wherein at least one of the non-biometric user tests comprises determining a user identity, and wherein at least one of the biometric user tests comprises comparing a test template generated by the user with a stored template associated with the user identity to determine the individual biometric test confidence score.
15. The method of claim 1, wherein the act of allowing a level of access comprises at least one of the acts of allowing movement in a formerly restricted location, allowing access to a formerly restricted computer file, allowing access to a formerly restricted computer function, allowing access to at least a portion of a formerly restricted device, or allowing access to at least a portion of a formerly restricted system.
16. A system to control user access to a device, comprising: a first non-biometric tester operationally able to analyze input data to determine if at least one non-biometric test has been successfully executed, preliminarily identifying the user based upon the successful executing of the at least one non-biometric test; a first unlocker which allows access to at least a first portion of the device when the at least one non-biometric test is successfully executed; at least one biometric tester operationally able to compare biometric input data with previously stored data associated with the biometric tests and associated with the user that has preliminarily been identified by the successful execution of said at least one non-biometric test to determine if at least two biometric tests have been successfully executed, each of said at least two tests having an associated biometric test confidence level; the biometric tester combining a user test score for a first of said at least two biometric tests with a biometric confidence level for the first of said two biometric tests to create a first individual confidence test score for the first of said at least two tests, the biometrics tester combining a user test score for a second of said at least two tests with a biometric confidence level for the second of said two biometric tests to create a second individual confidence test score for the second of said at least two tests, the biometric tester combining at least the first and second individual confidence test scores to create a combined biometric confidence score; wherein the biometric tester determines a biometric success in the event the combined biometric confidence score meets or exceeds a biometric confidence threshold; and a second unlocker operable to allow access to at least a second portion of the device in the event of a biometric success.
17. The system of claim 16 wherein the first non-biometric test comprises a Smart card detachable from the device; and wherein the first unlocker further comprises a remote proximity mechanism operationally able to detect the presence of the Smart card.
18. The system of claim 16 further comprising a biometric randomizer, operable to randomly choose at least one of the at least two biometric tests from a set of available biometric tests.
19. The system of claim 16 wherein failure of at least one test results in actions chosen from a group comprising at least one of: initiating a security system, locking the device for a pre-determined amount of time, and revoking access to a previously allowed portion of the device.
20. The system of claim 16 wherein the device is a vehicle.
21. The system of claim 20 wherein the first unlocker unlocks a door to the vehicle when the at least one non-biometric test is successfully executed, and wherein, if biometric success is not determined within a predetermined time period, the first unlocker locks the previously unlocked door.
22. The system of claim 21 wherein at least one of the at least two biometric tests are chosen randomly from a group comprising at least one of: a fingerprint recognition test, a handprint recognition test, a face recognition test, a handwriting recognition test, a voice recognition test, an IRIS recognition test, an eye blinking rate test, an eyeball squint extent test, a normalized body temperature test, a keystroke dynamic test, a vein recognition test, and a DNA recognition test.
23. The system of claim 16 wherein the biometric tester comprises at least one sensor to take a biometric reading, and a computer, the computer comprising: a data storage device to store biometric data representing at least one valid user profile associated with the at least two biometric tests, and a computer processor operationally able to calculate confidence test scores based at least in part on how closely the biometric reading associated with a test matches with the at least one valid user profile associated with the at least two biometric tests.
24. The system of claim 23 wherein the computer further comprises: instructions stored in memory, the instructions used by the computer processor to calculate confidence test scores; a network connection between the at least one sensor and the computer, wherein the at least one sensor communicates with the computer processor through the network connection; and a biometric test enroller, comprising: a biometric data capturer, a processor, which processes the captured biometric data into a form useable by the instructions, and a profile storer which stores the processed data as a valid user profile for an associated test and associated user on the data storage device wherein for at least one biometric test, biometric data is captured and processed into a valid user profile.
25. The system of claim 16 wherein the first non-biometric tester is operationally able to analyze input data to determine if a second non-biometric test has been successfully executed; and wherein once the first non-biometric test and the second non-biometric test are successfully executed, the first unlocker allows access to at least a first portion of the device.
26. A method of allowing user access comprising: in response to non-subjective user data: using at least one randomly-chosen non-subjective test producing either a pass or a fail to establish credentials of the user; in response to subjective user data: using at least two randomly-chosen subjective tests, each subjective test generating a subjective score, to establish identity of the user; determining whether a cumulative subjective score on the subjective tests has reached a confidence threshold in order to establish identity of the user; wherein at least one subjective test compares the subjective user data with previously-stored data associated with the user whose credentials have been established to determine the subjective score of the test; and wherein if the credentials of the user are established and the identity of the user is established then allowing user access.
27. The method of claim 26 wherein the established credentials of the user have associated credentialed biometric data and wherein the subjective test comprises comparing captured user biometric data to the associated credentialed biometric data to produce the subjective score.
28. The method of claim 26 wherein user access is allowed, and an audit trail is kept of user activity based on the established identity of the user.
Description:
BACKGROUND
Mankind has always been interested in security. The first known lock—estimated to be 2,800 years old—was discovered just outside the ruins of Khorsabad palace near Nineveh, in modern-day Iraq. Biometric testing seems to have been invented by the Chinese around 1400 A.D., when an explorer observed people inking children's feet and hands, and then stamping them on pieces of paper, creating an accurate identification system.
Since then, security systems have continued to improve. Today, not only doors are locked; information, too, is locked away, with the key commonly being a password. Passwords have their own difficulties. Passwords that are easy to remember are also easy to discover by trial-and-error methods. Safer passwords (because they are longer, contain numbers and special characters, etc.) are quite difficult to remember and so often are written down, leading to a different sort of security breach. Passwords can also be inadvertently disclosed—e.g., they can be viewed when they are being typed in.
Security cards, often combined with passwords (or PINS—short, numeric passwords) are also commonly used, and present similar security problems to passwords; that is, the cards can be stolen or lost; the passwords associated with them tend to either be easy to remember (and, therefore, easy to crack) or long and complicated, which leads them to being written down, often on the card itself.
As locks get more sophisticated, so do the lock breakers. One common method to gain entrance to password-protected data is “phishing”, where an untrustworthy person masquerades as a legitimate business. Commonly, such “phishers” send an official looking e-mail (or an instant message, or a letter) requesting password information. Sometimes, they present screens to the user representing a trusted entity, which legitimately needs the password.
To counter these problems, biometric methods—physiological and behavioral characteristics used to verify identity—are increasingly being used. For example, biometric fingerprint information (probably the best-known physiological biometric data) is gaining acceptance as a method of verifying identity. Fingerprint readers as small as a pack of cards have been developed, and the verification process (pressing one's finger against a platen) is seen as harmless. Iris pattern recognition (matching the unique pattern in the colored portion of the eye that surrounds the pupil) is also used, and systems exist that can perform face recognition, often emphasizing areas difficult to alter, such as the eye socket upper outline, the sides of the mouth, and the planes of the face around the cheekbones.
However, biometric data, unlike passwords and keys, can only authenticate someone up to a confidence level; that is, a biometric system will give a certain percentage of false matches, false negatives, and will fail to enroll a certain percentage of each test population. There are people who cannot enroll in certain biometric systems due to their biological “sample quality”. For example, some fingerprints are too smooth to give clear-enough samples to create clear-enough templates to effectively use.
Certain medications can also make biometric test results unreliable. For example, there are drugs, such as atropine, that dilate the eye, making iris identification impossible. Also, certain illnesses can lead to a user falsely being rejected by a biometric system. Having a head cold may change a user's voice sufficiently that he or she will be rejected by a voice-recognition system. Therefore, not all people can be enrolled or can use each biometric test.
SUMMARY
Additional features and advantages will become apparent from the following detailed description of illustrated embodiments, which proceeds with reference to accompanying drawings.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description section. This summary does not identify required or essential features of the claimed subject matter.
One embodiment comprises a method of allowing user access. A role is defined for a user, and then non-biometric confidence scores of randomly chosen successive non-biometric user tests are added until a non-biometric confidence threshold is reached or a non-biometric test limit is reached. Then, biometric confidence scores of successive randomly chosen biometric tests are added until a biometric confidence threshold is reached or a biometric test limit is reached. If the non-biometric confidence threshold is reached and the biometric confidence threshold is reached, then, in this embodiment, at least some access is allowed based in part upon the user role that was defined earlier.
Another embodiment comprises a first non-biometric test mechanism operationally able to analyze input data to determine if a first non-biometric test has been successfully executed; a first unlocker, which allows access to at least a first portion of the device or site when the non-biometric test mechanism is successfully executed; a biometric test mechanism operationally able to analyze input data to determine biometric success or biometric failure; and a second unlocker, which, in this embodiment, allows access to at least a second portion of the device or site when the first biometric test mechanism is successfully executed. Furthermore, the biometric test mechanism desirably comprises at least two biometric tests, each with a confidence level. For each biometric test, a user test score is combined with the confidence level to create a scaled test score. The scaled test score of each biometric test is summed, and only if the total sum of the scaled test scores reaches a biometric confidence threshold will the biometric test mechanism determine biometric success. Access is allowed when the user passes the non-biometric test and the biometric tests.
In an additional embodiment, in response to non-subjective user data, at least one randomly chosen non-subjective test is given which produces either a pass or a fail. This is used to establish credentials of the user. In response to subjective user data, at least two randomly chosen subjective tests are given to a user desiring access, each subjective test generating a subjective score. These subjective tests establish the identity of the user. The cumulative subjective score on the subjective tests must reach a confidence threshold to establish identity of the user. At least one subjective test compares the subjective user data with previously-stored data associated with the user whose credentials have been established to determine the subjective score of the test. If both the credentials of the user are established and the identity of the user is established then user access is allowed.
Additional features and advantages will become apparent from the following detailed description of illustrated embodiments, which proceeds with reference to accompanying drawings.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 is a flow diagram of an exemplary process flow in conjunction with which described embodiments can be implemented.
FIG. 2A is an operational flow diagram showing an exemplary embodiment of a method of using non-biometric testing in conjunction with which described embodiments can be implemented.
FIG. 2B is an operational flow diagram showing an exemplary embodiment of a method of using biometric testing in conjunction with which described embodiments can be implemented.
FIG. 2C is an operational flow diagram detailing an exemplary embodiment of a method of generating user test results in conjunction with which described embodiments can be implemented.
FIG. 3 is a functional block diagram illustrating an embodiment of an example system to allow access in conjunction with which described embodiments can be implemented.
FIG. 4 is a functional block diagram illustrating an embodiment of an example biometric test in conjunction with which described embodiments can be implemented.
FIG. 5 is an operational flow diagram illustrating an exemplary process for establishing user credentials in conjunction with which described embodiments can be implemented.
FIG. 6 is an operational flow diagram illustrating an exemplary process for establishing user identity in conjunction with which described embodiments can be implemented.
FIG. 7 is an exemplary operational flow diagram that extends the process for establishing user identity shown in FIG. 6, above.
FIG. 8 is a block diagram of an exemplary computing environment in conjunction with which described embodiments can be implemented.
DETAILED DESCRIPTION
The present application relates to technologies for allowing access using non-biometric and biometric tests. Described embodiments implement one or more of the described technologies.
Various alternatives to the implementations described herein are possible. For example, embodiments described with reference to flowchart diagrams can be altered, such as, for example, by changing the ordering of stages shown in the flowcharts, or by repeating or omitting certain stages. As another example, although some implementations are described with reference to specific non-biometric and biometric tests, forms of non-biometric and biometric tests also can be used.
I. Overview
FIG. 1 shows an overview of exemplary embodiments that can be used to accurately determine identity and allow access using a combination of non-biometric and biometric tests. At process block 105, a non-biometric test is chosen, and desirably is randomly chosen.
The non-biometric test can be any type of such test. In addition, more than one such non-biometric test can be performed at this stage. Exemplary forms of non-biometric tests are described below.
The most common form of non-biometric authentication is, most likely, a password-based test. In such a test, the user types or otherwise enters a password into an authentication device. Sometimes the user chooses such passwords initially, and sometimes they are assigned. Passwords suffer from many problems, as anyone who has ever used one knows. For example, someone might give his or her password to a trusted friend or colleague who then might misuse it. Passwords that consist of actual words can be discovered using a brute-force attack (that is, using a computer, a hacker can present every word in the dictionary to attempt to gain illegal entry into a password-protected system). If the password is random enough to prevent a brute-force attack, it is generally so difficult to remember that it must be written down somewhere, allowing those with bad intent to discover it and use it to break into the system.
More problematically, users often use the same password at many sites—once a given password is discovered for a user, it can often be used to break into other known user locations.
Furthermore, a password can be harvested between the time a user types or otherwise enters the password into a device and when the password reaches the server from an off-site location, or can be gathered by malicious hardware, software, firmware, or some combination of the three residing on what are believed to be safe terminals.
Short, numeric passwords are commonly known as Personalized Identification Numbers (PINs). PINs are commonly used in Automatic Transaction Machines (ATMs), mobile phones, internet-based access to personalized information, etc. In many systems there is a hierarchy of such PIN-based authentication (for example, maintenance PIN vs. user PIN) and based on one or more successes in the PIN authentications, restricted sets of one or more privileges are granted to the user. This mechanism is used primarily to establish a user's identity. However, such a mechanism lacks the ability to distinguish a user uniquely. A user may share his PIN with others, as an example, thereby giving them the same privileges.
Offering more protection than a password are cryptography-based tests. In such a test, the validity of a user (in reality the key being validated) is established using the fact that the two parties know a common secret, but never share the secret, so that the secret is not exposed when being transmitted over a network. The secret can be a cryptographic key, a mechanism to obtain such a key, or a mechanism, whereby the secret is created using public knowledge of the other partner (such as in a Public Key Infrastructure (PKI)).
One algorithm used to create such locking and unlocking mechanisms is the Diffie-Hellman key exchange, which allows users to establish a shared secret key over an unprotected communications channel without using a prior shared secret. Another algorithm, RSA, created by Rivest, Shamir and Adleman, exploits the difficulty of factoring the product of two extremely large primes to provide both encryption and decryption. Other methods of safely sharing secrets include the ElGamal cryptosystem invented by Taher Elgamal; the Digital Signature Algorithm (DSA), developed by the National Security Agency (NSA); and a whole family of algorithms based on elliptic curve cryptography such as Elliptic Curve Diffie-Hellman (ECDH), Elliptic Curve Menezes-Qu-Vanstone (ECMQV), and the Elliptic Curve Digital Signature Algorithm (ECDSA), to name a few.
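For illustration only, here is a toy numeric sketch of the Diffie-Hellman idea with deliberately tiny parameters; real deployments use far larger numbers, and nothing here reflects the patent's own implementation:

#include <cstdint>
#include <iostream>

// Modular exponentiation: (base^exp) mod m, using small 64-bit values.
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main() {
    const uint64_t p = 23, g = 5;   // public prime and generator (toy sizes)
    const uint64_t a = 6,  b = 15;  // private values chosen by each side

    uint64_t A = modpow(g, a, p);   // one side publishes A = 8
    uint64_t B = modpow(g, b, p);   // the other publishes B = 19

    // Each side combines the other's public value with its own secret
    // and arrives at the same shared key without ever transmitting it.
    uint64_t keyOne = modpow(B, a, p);   // 2
    uint64_t keyTwo = modpow(A, b, p);   // 2

    std::cout << "shared secret: " << keyOne << " == " << keyTwo << '\n';
    return 0;
}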
Such tests are the basis of Smart-card technology. A Smart card is a credit card-sized card with a secure microcontroller containing a secret key. Generally, to use a Smart card, the user presents the card to an access device and inputs a PIN. A random number is then passed to the Smart card by the access device. The random number is then algorithmically combined with the PIN to produce an answer. If the answer is correct, access is allowed. This mechanism is used primarily to establish a key holder's authority and to ensure the genuineness of the secret storage system.
Universal Serial Bus (USB) tokens can also be used in place of Smart cards. USB tokens are generally devices embedded with the same types of microcontrollers used by Smart cards. They are roughly equivalent in size to a key, can be placed on a keychain, and are plugged into the USB port of a device. These tokens quite often do not require a PIN to activate their secret, and so can be used with devices that do not have a user interface. However, like Smart cards, USB tokens can be misplaced, and those that do not require a PIN are open to exploitation by being deliberately stolen.
Digital signature tests authenticate identity by exploiting the fact that only a specific user is issued a specific secret. A trusted third party is used to verify that only a specific entity has been issued the secret in question. For further validation, the digital signature often is composed of two algorithms: one used for the signature, and one used in the verification process. Digital signatures generally contain embedded within them personal information, such as a name, a serial number, the digital signature public key, and a separate digital signature containing information about the certifying authority, which may be used to ensure the validity of the signature.
Other non-biometric tests are also available. For example, a user may possess a device that generates a brand-new password at each use; the user would then type that password into an authentication device. This thwarts certain sorts of password theft—the resulting password, if stolen, would be useless on the second try.
Non-biometric tests register either a pass or a fail. Either the password or secret is known or it is not. At decision block 110, the test is checked to see if it was passed or failed. If failed, at process block 112 the user is denied access. If passed, an additional one or more non-biometric test, for example, a second non-biometric test, is chosen (desirably randomly) at optional process block 115. If the non-biometric test is chosen randomly, then it is desirably randomly chosen from a limited number of tests. In one embodiment, the non-biometric test is randomly chosen from two possible non-biometric tests. The requirement for at least two non-biometric tests thwarts the interloper who has obtained one form of non-biometric identity information. At process block 120, the results of the second non-biometric test are checked. If the test was failed, at process block 122 the user is again denied access. There might be further ramifications, in that an alarm system might go off, doors might be locked, previously allowed areas might no longer allow access, and so on.
If the second or other subsequent test is passed, control passes to process block 125 where at least one biometric test is chosen, desirably randomly, and then given to a user. If the biometric test is chosen randomly, then it is desirably randomly chosen from a limited number of tests. In one embodiment, the biometric test is randomly chosen from two possible biometric tests. For example, the limited number of biometric tests might be ten fingerprint tests; one for each of the ten fingers. The left-hand ring finger might be chosen for the first biometric test, with the thumb of the right hand chosen for the second biometric test. A score is then generated 130. Since an identity, or a partial identity has already been established for the user-at-hand via the initial non-biometric test or tests, the biometric tests will, in a desirable embodiment, only check templates associated with that specific individual against the new sample. This greatly reduces verification time and leads to a lesser number of false positives and false negatives.
The biometric test can be any type of such test. In addition, more than one biometric test can be performed at this stage. Exemplary forms of biometric test are described below.
The number of biometric tests continues to expand. Currently, some of the biometric tests suitable for use with the embodiments comprise the following: fingerprint tests, voice recognition systems, handprint recognition tests, face recognition tests, handwriting recognition tests, voice recognition tests, iris recognition tests, retinal scan tests, eye blinking rate tests, eyeball squint extent tests, normalized body temperature tests, keystroke dynamic tests, vein recognition tests, and Deoxyribonucleic acid (DNA) recognition tests, to name a few.
The most widely recognized, and the oldest (in the West, at least), biometric test is the fingerprint test. Fingerprints are, by and large, unique. Furthermore, they stay constant with age and, when a finger is damaged, a recognizable fingerprint is typically restored when the finger is healed. Generally, in an enrollment process, a fingerprint is captured by pressing it up against a plate and having it scanned—a very non-invasive process. Image processing algorithms can be used to electronically eliminate (heal) the effect of temporary cuts or blemishes on the finger that might not always be present, and to clarify any smudged areas. The fingerprint is then stored as a template. When someone needs to be authenticated, the process is repeated: that is, another fingerprint image is obtained; the image processing algorithms again clarify the image, and then the processed image is compared to a database of existing fingerprint templates. If a close-enough match is found, then the degree of similarity between the authenticating fingerprint and the template can be assigned a value, such as a number, as a marker of the degree of confidence that the fingerprint is actually from the identified individual.
Facial recognition tests are also used to authenticate users. This method is even less invasive than fingerprint recognition, and can be performed without the test-taker's knowledge. It is based on the idea that certain parts of the face are not susceptible to change, such as the area around the upper portion of the eye, the area around the mouth, and the area around the cheekbones.
Voice recognition systems are also used to authenticate users using voice characteristics. The shape of an individual larynx leads to individual features, such as pitch and tone; when harmonics (determined largely by head shape) and cadence are added, a unique signature can be obtained. This is also seen as non-invasive, as people generally do not mind speaking to a device. Problems with speech, such as a head cold, laryngitis, or a bad sinus infection, can lead to failures, however.
Hand geometry tests are based on the fact that hand shape is more-or-less unique and remains constant for life. Typically, hand recognition tests require that a user line up his or her hand using guide-pegs—five, for finger placement, is common. A three-dimensional image, which includes hand shape, knuckle shape, and length and width of each of the fingers, is then captured. Although not particularly obtrusive, hand geometry tests currently have a high false-acceptance rate, which should be ameliorated by at least some of these embodiments, as only a specific hand template (for an individual determined by the non-biometric tests) will be considered for a match.
Iris recognition tests are based on the theory that the stromal pattern of an individual iris (the coloring) is unique. For example, identical twins have noticeably different stromal patterns, and each individual has different patterns in left and right eyes. A small camera takes a picture of the iris, which is then compared to a previously acquired template.
Retinal scan tests map the pattern of blood vessels on the back of the eye. They do this by sending a low-intensity beam of light through the pupil. This test is quite invasive, in that users must keep their eyes motionless and unblinking within a half inch of the device. Furthermore, other information, such as various health issues, can be gleaned. However, such tests provide about 400 points of reference and have a very low false negative rate.
Keystroke dynamics refer to a test that precisely analyzes the speed and rhythm that someone types. This is done by monitoring the keyboard input multiple times a second, to give an accurate accounting of “flight time”—how long a user spends reaching for a specific key, and “dwell time”—how long a user spends pressing a given key. This method requires no extra test hardware other than a keyboard.
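As a rough illustration of the dwell-time and flight-time idea, the two quantities could be computed from timestamped key events as sketched below in Python. The sketch is not part of the described embodiments; the event format and the sample timings are assumptions made purely for illustration.

# Hypothetical illustration of keystroke-dynamics features; the (key, press, release)
# event format and the sample timings are assumptions, not part of the embodiments.
def keystroke_features(events):
    # events: list of (key, press_time, release_time) tuples, ordered by press time.
    dwell = [(key, release - press) for key, press, release in events]          # time each key is held
    flight = []
    for (_, _, prev_release), (next_key, next_press, _) in zip(events, events[1:]):
        flight.append((next_key, next_press - prev_release))                    # gap between consecutive keys
    return dwell, flight

# Example: the word "cat" typed with slight pauses (times in seconds).
sample = [("c", 0.00, 0.09), ("a", 0.21, 0.29), ("t", 0.43, 0.52)]
dwell, flight = keystroke_features(sample)
print(dwell)   # per-key dwell times
print(flight)  # per-transition flight times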
Biometric tests, unlike non-biometric tests, do not give straight up or down verification responses. Rather, the best they can do is to give a confidence level—that is, within a certain degree of certainty, they can say that a person passed or failed a certain biometric test. The failures, both for false acceptances and for false rejections are called the false acceptance rate and the false rejection rate. Due to the vagaries of the human physiognomy, certain people have a more difficult time with certain tests than others do. Other problems may bedevil biometric tests. For example, someone with laryngitis is unlikely to pass a voice recognition system due to no fault of his or her own. Therefore, rather than unequivocally failing someone who fails a specific test (or at least fails to pass with a sufficiently high confidence score) an additional one or more biometric tests can be given. In accordance with FIG. 1, at block 135, at least one additional biometric test (e.g., a second such test) is desirably randomly chosen. In an exemplary embodiment, the biometric test is randomly chosen from a group containing or comprising a limited number of possible biometric tests. The previously-chosen biometric test, in some embodiments, is taken out of the group of the limited number of possible biometric tests such that the same test is not administered twice. At process block 140, a score is generated from the second biometric test. The scores from the biometric test at blocks 125 and 135 are combined and the combined scores are then evaluated to see if a satisfactory level of confirmation of identity is achieved by the biometric tests. The evaluation determines if the correlation between the biometric test scores and the individual sought to be identified is high enough. Desirably, the biometric test scores are arithmetically combined, such as added together, and compared to a threshold that can be predetermined or varied. For example, if at blocks 125 and 135, two biometric tests have been administered, the two scores are combined, and, at decision block 145, if the combined score is sufficient to pass, at process block 150, access is allowed. If they are not, processing continues at decision block 152, where it is checked if the user has made the maximum number of test attempts. If not, at process block 135, a subsequent test is allowed. Otherwise, if the maximum number of attempts has been reached, then at process block 155, access is denied.
This is just a brief overview of a single embodiment; other embodiments are discussed below.
II. Exemplary Method for Allowing Access Using Non-Biometric and Biometric Tests
Turning to FIG. 2A, an exemplary method for allowing user access is described. At optional process block 205, a user role is defined. The user role may be defined for each user in a system. Users can have roles based on a job description, or other factors, or combinations thereof. For example, one user may have the role of “system administrator.” Certain roles may require limited access even if all tests are passed. For example, in a car-access method, someone too young to drive a car may have the role “juvenile”. A person with such a role will be denied driving privileges. In at least some embodiments, the user role definition is a function of at least one test being passed.
In an alternate embodiment, the user role is partially defined by the first non-biometric test score, and is thereafter refined by each subsequent non-biometric and biometric test score. In yet another alternate embodiment, the user role is refined by some subset of the test scores.
At process block 207, at least one non-biometric test is chosen, desirably randomly. The initial choice of which non-biometric tests to use may be based on the level of security provided by the test, ease of use, the role definition, convenience, and so forth.
At decision block 209 the user takes the test or tests, generating a pass or a fail. At decision block 211 it is determined if the user failed the test. If so, then access is denied 213. In other embodiments, the user is allowed to fail a certain number of non-biometric tests before being denied access. If the user passed the test or tests, then at decision block 215 it is determined if a non-biometric confidence threshold score has been reached. This confidence threshold can be based on the number of tests that have been passed, the difficulty of passing the tests, and so on. The confidence threshold is desirably based on the fact that different non-biometric tests have variable security levels. Each test passed can generate a non-biometric confidence score. The results can be combined, such as summed, with the combined non-biometric confidence scores then being compared to a non-biometric confidence target, such as a threshold, to determine if the user has successfully passed the non-biometric tests. Since a password generally is less secure than a Smart card with a PIN, successfully entering a password in this example will generate a lower confidence score than will a more secure test, such as presenting a Smart card with a PIN.
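For illustration only, the summation of non-biometric confidence scores against a target can be sketched in a few lines of Python; the test names, per-test scores, and threshold below are invented for the example and are not taken from the disclosure.

# Invented per-test confidence values and threshold, for illustration only.
NON_BIOMETRIC_CONFIDENCE = {"password": 1.0, "pin": 1.5, "smart_card_with_pin": 3.0}
NON_BIOMETRIC_THRESHOLD = 3.5

def non_biometric_confidence_reached(tests_passed):
    # Each passed test contributes its confidence score; the sum is compared
    # to the confidence target, mirroring the combination step described above.
    combined = sum(NON_BIOMETRIC_CONFIDENCE[name] for name in tests_passed)
    return combined >= NON_BIOMETRIC_THRESHOLD

print(non_biometric_confidence_reached(["password"]))                         # False: 1.0 < 3.5
print(non_biometric_confidence_reached(["password", "smart_card_with_pin"]))  # True: 4.0 >= 3.5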
If the non-biometric confidence threshold has not been reached, then, returning to process block 207, another non-biometric test is chosen, and the process continues. Each time a test is successfully passed, a user role may be defined with greater precision. The role of the user at a certain stage might also affect the next test that is given. The fact that a test has been successfully passed may also at least partially define a role of a user. The number of non-biometric tests can be limited, such that failure to achieve the requested confidence after a number of tests results in exiting the process. If the confidence threshold has been reached, then limited access may be allowed at block 217. Examples of such limited access include unlocking the doors on a car, allowing access to a formerly-forbidden room, allowing access to a formerly-forbidden portion of a computer, and so on. Alternatively, there may be no access based on passing of one or more non-biometric tests, but instead, a user may now be qualified to take one or more biometric tests based on having passed the non-biometric tests.
Turning to FIG. 2B, which continues the example of FIG. 2A, at process block 220, at least one biometric test is chosen, desirably randomly. The initial decision of what biometric test or tests to use can be based on one or more factors, or combinations thereof, such as whether the users are aware of their participation, how cooperative the users are, whether or not a user is supervised when using the system, the user role that has been defined or partially defined by non-biometric tests that have been passed, the nature and number of non-biometric and/or biometric tests that have been previously passed, the levels of the tests that have been passed, how trained the user is on specific tests within the system, and whether there are environmental factors that affect performance, such as noise with a voice recognition system, or dirt or soot with a fingerprint recognition system.
One or more initial biometric tests are then taken, at process block 222. For example, a first biometric test is taken. The details of test-taking are explained more fully in connection with FIG. 2C. As illustrated in FIG. 2C, identity can be determined (block 250) after (or during) the process of taking a non-biometric test 209. This can be easily understood by examining an exemplary process of signing into a computer system. When a user signs in, the user signs in as someone: someone who has an associated password, say. This initial password provides a clue to the expected user's identity (a “who he/she is”). This “who he/she is” can then be used to simplify the biometric matching procedure (in an exemplary embodiment) by allowing a comparison of the biometrics of the user who signed in with biometrics (or a template thereof) previously acquired for the user associated with the password. In some embodiments, if the identity determined is on a “black list,” then the user is denied access and is not allowed to take the biometric tests even if the non-biometric tests were passed. The user identity may be only partially determined by the initial test passed, with the identity being incrementally refined as more tests are passed successfully.
The mechanics of test taking differ with each biometric test, but generally, a representation or sample (for example, an image, a voiceprint, or in the case of keystroke dynamics, a keyboard typing behavior sample) is captured from the user attempting to be authenticated. The representation is then processed into a template 252 using various algorithms that capture essential items from the data. From the non-biometric test steps, the identity of the user who is supposedly being checked is known. The template for this user can be retrieved (block 254) from storage (e.g., in a database). The retrieved template is then compared to the test template (block 256)—the test template being from the user hoping to gain access. Since the user to whom the template should belong is already known, multiple templates do not need to be sorted through, which assists in reducing the computational energy required, as well as reducing (in an exemplary embodiment) the number of false acceptances. The number of false acceptances is reduced because a single user was established using the non-biometric testing, and the template of the user who is presenting the credentials must match one or more specific biometric patterns of the expected individual. This differs from the normal biometric testing procedure where a pattern is matched against all existing templates that might be in a database of authorized users. Finally, the comparison generates an individual test confidence score 258.
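A minimal Python sketch of this single-user comparison follows. The template representation (a short list of numbers), the in-memory storage, and the similarity measure are assumptions chosen for illustration; the point is only that the lookup is keyed by the identity already established by the non-biometric tests rather than searched across every enrolled user.

# Minimal sketch; the template format, storage, and similarity measure are assumptions.
import math

STORED_TEMPLATES = {
    "user_42": [0.11, 0.54, 0.32, 0.78],   # enrolled template for the expected user
}

def similarity(a, b):
    # Map Euclidean distance into a (0, 1] score; 1.0 means identical templates.
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)

def biometric_test_score(identified_user, live_template):
    # Compare the live sample only against the template of the user already
    # identified by the non-biometric tests, not against the whole database.
    stored = STORED_TEMPLATES.get(identified_user)
    if stored is None:
        return 0.0                          # no template on file: cannot verify
    return similarity(stored, live_template)

print(biometric_test_score("user_42", [0.10, 0.55, 0.30, 0.80]))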
Returning to FIG. 2B, at optional process block 224 the individual test failure threshold is adjusted if the user (determined, for example, with the non-biometric tests) is a new user. All biometric tests depend on duplicating biometric data, such as, for example, face shape, hand size, fingerprint quality. Such duplication can be dependent on specific factors, such as how a user places his or her hand on a platen, how hard the pressure of a fingerprint is, and so on. Thus, users get better with time at duplicating the exact circumstance that leads to a standardized result. When a user is a beginner, that is, one who has rarely used a specific biometric device, such uniformity may be difficult to achieve. Therefore, such a user may be allowed more tries to achieve a satisfactory score, or might be allowed to pass with a lower score than otherwise acceptable; thus, the test failure threshold can be varied, such as lowered for new users and raised for repeat users. In an alternative embodiment, the scaled test score (created from the user test results and the confidence level of the specific test) is modified if the user is a beginner.
Once an individual biometric test score is generated, a user role can be more fully defined (not shown). At process block 226, test results are generated. This determines whether or not a user failed the test. If so, that is, if the test score was too low, the user can be allowed a number of chances to generate a passing score. When a test is failed, a variable that keeps track of the number of times that a user failed the test (the test failure variable) is incremented at block 228, and then at decision block 230, it is determined if the user has failed the test too many times, such as when the failure number has reached or exceeded a threshold. The allowed number of tests can be predetermined and can be varied, such as based upon factors such as the type of test, the usual reliability of the test, and other factors or combinations thereof. If the user has failed too many times, then the user is denied access 232. If not, control passes to process block 222 where the user gets to try again.
Tests vary in their accuracy; some are considered very reliable (mostly those that are more invasive) and some, especially the less invasive tests, are less reliable in that they produce many false positives or false negatives. This level of confidence in a given test is captured in the confidence level, a measure of how accurate a given biometric test is. The effectiveness of biometric tests is measured, generally, by a series of rates: for example, the false acceptance rate, the false rejection rate, and the failure to enroll rate. The false acceptance rate is the likelihood that someone who is not authorized is allowed access. This directly affects security, and so is often considered the most relevant measure. The false rejection rate is the likelihood that someone who is authorized is denied access. While not directly impacting security (the denial of access to an authorized user is not a security breach), it is quite annoying for those denied access. The failure to enroll rate gives the proportion of people who are unable to become enrolled initially on a system. The confidence level can be determined, for example, by using a combination of these test measures. Although other approaches are usable, in one example, the confidence level can be set at the point where the false rejection rate and the false acceptance rate are equal.
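One way such an operating point could be located from measured rate curves is sketched below in Python; the FAR and FRR values are invented sample data rather than measurements of any particular test.

# Invented FAR/FRR values sampled at increasing match-score thresholds,
# used only to show how the near-equal point could be located.
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
far        = [0.40, 0.25, 0.15, 0.08, 0.04, 0.02, 0.01]   # false acceptance rate
frr        = [0.01, 0.02, 0.04, 0.07, 0.12, 0.20, 0.35]   # false rejection rate

def equal_error_point(thresholds, far, frr):
    # Return (threshold, rate) where |FAR - FRR| is smallest.
    i = min(range(len(thresholds)), key=lambda k: abs(far[k] - frr[k]))
    return thresholds[i], (far[i] + frr[i]) / 2.0

print(equal_error_point(thresholds, far, frr))   # roughly (0.4, 0.075) for this sample data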
If the user did not fail the test, then the results of the individual test and a confidence level associated with the test can be combined to produce a scaled test score at block 234. The scaled test score can then be combined, such as added to any prior test scores to produce an aggregate biometric confidence score at block 236. In a specific example, the aggregate score can be the sum of the scaled test scores of all the biometric tests that have been previously passed by this user in this session.
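The scaling and aggregation steps can be pictured with the short Python sketch below. Multiplying a raw match result by the test's confidence level is only one plausible way of combining the two, and the numeric threshold is an assumption for the example.

# Sketch only; the multiplication used for combining and the threshold value are assumptions.
def scaled_test_score(raw_result, confidence_level):
    # raw_result: match score in [0, 1]; confidence_level: how trusted the test itself is.
    return raw_result * confidence_level

def aggregate_score(scaled_scores):
    # Sum of the scaled scores from the tests passed so far in this session.
    return sum(scaled_scores)

session_scores = [scaled_test_score(0.92, 0.8), scaled_test_score(0.85, 0.6)]
AGGREGATE_THRESHOLD = 1.2                    # assumed value
print(aggregate_score(session_scores) >= AGGREGATE_THRESHOLD)   # True: 0.736 + 0.51 = 1.246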
If the resulting aggregate biometric confidence score is high enough at block 238, then the user is allowed access at block 242. If not, then the number of biometric tests that have previously been taken can be considered at block 240. If too many have been taken, then the user is considered to have failed, and access is denied at block 244.
III. Exemplary System for Allowing Access Using Non-Biometric and Biometric Tests
Referring to FIG. 3, a block diagram of a system to allow access 300 shows an exemplary embodiment of the systems discussed herein.
The system to allow access 300 comprises or consists of a first non-biometric tester 305. In an exemplary embodiment, a user must pass at least one and desirably one non-biometric test associated with this test mechanism. The same non-biometric tests can be used as have been described previously: generally, a non-biometric test produces a pass or a fail; for example, the user either knows a password or a PIN, or does not. Similarly, either a Smart card or similar device has the appropriate secret or it does not. In some embodiments, at least one second non-biometric test mechanism 307 must also be correctly handled by a user prior to a first unlocker 310 being activated.
The first unlocker 310 in this embodiment allows limited access. Access can be allowed, for example, only to the next level of testing, or partial access can be allowed to a formerly forbidden zone: for example, a car door or room door can be unlocked, certain otherwise-restricted computer files may be viewed, and so on.
A biometric tester mechanism 315 is also included in this embodiment. This mechanism 315 desirably comprises at least one biometric test 320, and a confidence threshold 327, which can be a combined, e.g., cumulative, score on the taken biometric tests that a user must reach before he or she is allowed access. In an exemplary embodiment, the at least one non-biometric test preliminarily identifies a specific user. This user is expected to have at least one biometric template or profile in storage. When the biometric tester 315 attempts to authenticate the user, it checks only for previously-stored templates for that specific user. In an alternate embodiment, the biometric tester mechanism 315 comprises at least two biometric tests 320, 325. These two biometric tests can be, for example, fingerprint tests for different fingers on the hands.
An embodiment of an exemplary biometric test is shown more fully in FIG. 4. A sensor 405, and a computer 410 perform an exemplary biometric test 325. The sensor 405 is used to take the biometric sample and can comprise, for example, an optical sensor such as a camera, an ultrasonic sensor, a thermal sensor, or other biometric sample capturing mechanism. The computer 410 and the sensor 405 can be interconnected by a network connection 407, which, for example, can be wireless, an Ethernet connection, and so forth. The computer 410 itself can contain a data storage device 415 within which data associated with various issues, such as profiles 417 and 419 of the various users that can be authenticated by the system are stored. Furthermore, a series of instructions 425 can be contained within the data storage device 415 for use in matching data taken from the sensor with the stored profiles.
How close the match is between sample biometric data derived from the sensor 405 and at least one profile 417, 419 determines a test score that is used in conjunction with the confidence level to determine how well the person whose biometric data was sampled by the sensor 405 scored on the biometric test 325. The confidence level 430 is a measure of how accurate the given biometric test is and can, for example, be set at or based upon the equal error rate: the point where the false acceptance rate (FAR) and the false rejection rate (FRR) are equal. Each test can also have a failure level 435 associated with it. If the match score between the user data and the profile data falls below the failure level 435, then the user is considered to have failed the test, and can be denied access by the second unlocker 335 of FIG. 3. As the identity of the user was preferably established by the first non-biometric tester 305, the system preferably only checks previously-stored templates correlated with the user preliminarily identified by the non-biometric tester 305.
Returning to FIG. 3, a randomizer 330 is also desirably included. The randomizer 330 can be used to randomly determine which biometric test (and in some embodiments) which non-biometric test to present to a user. A second unlocker 335 is used to grant additional or full access when the confidence threshold 327 has been reached—that is, for example, when at least two biometric tests have been taken and their cumulative score is equal to or greater than the confidence threshold 327.
The profiles 417 and 419 (in FIG. 4) used by the biometric test 325 can be initially input into the system. This can be performed by a biometric test enroller 340. When an authorized user initially enrolls onto a given biometric test system, first a data sample can be captured with a biometric sensor or biometric data capturer 345. This can, for example, be done using the sensor 405 or an alternative sensor or device. The data can then be processed by a processor 350 to extract (or create) data points that will be used in the user profile (e.g., 417, 419). These data points, often called a template, are then desirably stored by the storer 355, for later use by the instructions 425, for instance, to compare a new test sample with the stored template for the test to determine if a sufficiently-close match has been made.
IV. Exemplary Method for Allowing Access Using Subjective and Non-Subjective Data
FIGS. 5, 6, and 7 are an operational flow diagram illustrating an exemplary process for allowing access. The process begins at step 505 where at least one and desirably one non-biometric test is chosen, desirably randomly. At process block 510 non-biometric data is gathered. Non-biometric data can be data that is not dependent upon human physiological characteristics to establish identity, and therefore can be deemed non-subjective. In contrast, biometric tests are typically dependent on human physiological characteristics and hence can be deemed subjective in nature. Furthermore, non-subjective non-biometric tests are generally binary, in that they are either passed or failed. At process block 515 the non-biometric data is used to determine whether the one or more tests were passed or failed. For example, was a PIN entered correctly? Or, did a Smart card contain the correct authorization, such as, but not limited to, one based on cryptography? If a test was failed, then, generally, the user is denied access at block 520. Alternatively, more than one chance to pass may be given.
In some embodiments, at least two non-subjective tests must be passed before being allowed to move onto the next level of authentication. In such a system, at decision block 525, it is determined if sufficient non-subjective tests have been passed. If not, then control returns to step 505 where more non-biometric test data is gathered. If sufficient tests have been passed, then credentials (at block 530) for this user are considered established.
As shown in FIG. 6, in this example, once the credentials are established, the user is then allowed to undertake a series of subjective tests, such as biometric tests, to establish identity. At process block 605 a biometric test is chosen, desirably randomly. Not all people have success in enrolling in every biometric test. If the quality of the data they generate in response to a biometric test makes it impossible to generate a useable matching template, then they cannot be authenticated by that test. The percentage of people unable to enroll on a given test is known as the failure to enroll rate (FEE). At decision block 610, it is checked if the user whose credentials were established earlier was able to successfully enroll in the chosen biometric test. If they were not able to, then control continues at process 605, and a different biometric test is chosen, for example randomly. If they were able to successfully enroll, then control continues at process block 615, where biometric information is gathered and processed, desirably, into a user authorization template.
As has been discussed, biometric information can be gathered in a number of ways. To give one example, some fingerprint systems require a user to press their fingerprint against a platen. The system then captures an image of the print and runs the image through a series of digital processing algorithms to remove any extraneous features such as scars, abrasions, and cuts. A “skeletal image” is then generated which encodes defining fingerprint features such as bifurcations, end points, and the placement of ridges, arches, loops, and whorls. All of this information can then be encoded, such as into an individualized representation: an authorization template. Certain other “anti-theft” techniques may be used. For example, the platen may record a temperature, ensuring that a valid live finger is presented; not a copy, not a detached finger. Other biometric mechanisms can also be used to enhance the reliability or to ensure that the actual human possessing the biometric quality being tested has given the sample.
At decision block 620, it is desirably determined if the current user is a beginner. If not, the process continues at decision block 630. If so, as seen at process block 625, a confidence value associated with this test, or with the biometric testing procedure as a whole can be lowered. The process continues at decision block 630.
Oftentimes, problems with tests arise. For example, a platen used for fingerprint tests may be smeared with grime, someone's hands may not be clean, and so on. Due to these factors, biometric tests are often given multiple times in the same round of testing before an adequate sample is obtained. However, there is often a limited amount of time, or a limited number of retests that are allowed before a user is considered to have failed the test. At decision block 630, it is determined if too much time has elapsed. If so, then, at process block 635, access is denied. The time limit can be a predetermined threshold time and can be varied. Otherwise, the process continues at process block 640.
At process 640, a subjective test score is generated. The user authorization template generated at process block 615 is compared to a stored template generated by the authorized user during an enrollment process. How close the two match is reflected in a confidence score. In an alternative embodiment, the user authorization template is compared to a database of many, or all authorized users. Many different techniques have been developed to partition template groups to avoid a system having to conduct an exhaustive search through a database. For example, fingerprints are often classified into certain class types. If such a hierarchical classification system were to be used, only those stored templates in the associated class type would be expected to be searched to find the closest match.
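A toy Python sketch of such class-partitioned matching appears below; the class names, templates, and scoring function are invented for illustration and only show that a single partition is searched.

# Hypothetical partitioning of stored templates by fingerprint class type.
TEMPLATES_BY_CLASS = {
    "whorl": {"alice": [0.2, 0.4, 0.9], "carol": [0.6, 0.5, 0.4]},
    "loop":  {"bob":   [0.7, 0.1, 0.2]},
}

def closest_match(sample, sample_class, score_fn):
    # Search only the partition matching the sample's class type.
    candidates = TEMPLATES_BY_CLASS.get(sample_class, {})
    if not candidates:
        return None, 0.0
    best_name, best_template = max(candidates.items(), key=lambda item: score_fn(sample, item[1]))
    return best_name, score_fn(sample, best_template)

score_fn = lambda a, b: 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)
print(closest_match([0.25, 0.45, 0.85], "whorl", score_fn))   # ('alice', ~0.95)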
At process block 645, all of the previous subjective scores and the current score are desirably combined, such as summed. Other methods of combination can also be used. For example, some test scores may be weighted more heavily than others. At decision block 650, it is determined if a sufficient number of subjective tests have been taken. The number of tests taken can be varied and can be made dependent upon a given implementation and the level of security desired. However, desirably at least two tests are taken. If insufficient tests have been taken, then control passes back to process block 605, where another subjective test is randomly or otherwise chosen. If enough tests have been taken, then control passes to decision block 705 (in FIG. 7). At decision block 705 it is determined if the combined score on the subjective tests has reached a desired confidence threshold, which can be previously determined. If not, that is, if the score is insufficiently high, control passes to process block 710 where access is denied. Otherwise, control passes to process block 715. In alternative embodiments, the user is given another chance or chances to take one or more other subjective tests. Only after too many (e.g., a maximum number of tests) subjective tests have been given and failed is access denied.
At optional process block 715, a level of access is chosen for the now-authorized user. At process block 720, access is allowed based on the access level associated with the authorized user. In some embodiments, an audit trail is kept of the authorized user's file accesses, movements, etc., as shown at process block 725.
V. Exemplary Embodiment Involving a Vehicle
An automobile is given as an example alternative embodiment. The automobile would be fitted with a host computer system (or an existing computer already on the vehicle can be used), which would store a part of the biometric and non-biometric information, while some other part can be stored with the user or users. For example, a user may have a Smart card, which has the user's fingerprint template stored on it. In such a case, the host computer system would compare the fingerprint of the user attempting to gain access with a template not stored on the host computer system, but rather stored on the Smart card itself.
In this embodiment, one or more non-biometric tests are done initially, followed by one or more and desirably at least two biometric tests. The non-biometric and the first level biometric tests are desirably done outside the automobile itself. After the non-biometric and first level biometric tests have been passed, in one embodiment, the automobile unlocks, allowing the user desiring access to enter the vehicle. Then, the second level of biometric tests are conducted inside the automobile.
In one embodiment, the user can be presented with a Smart card that stores a secret, which might be a password, a key, a digital signature, and so on, also known to the host computer on the automobile. Using a vicinity or proximity mechanism, as soon as the user approaches the automobile, the automobile doors are unlocked, giving the user (card carrier) access to the inside of the automobile. Storage space in the automobile (such as the trunk of the car and/or a glove compartment) may be subjected to an additional test, such as a PIN-based authentication. As soon as the user is out of range of the vicinity reader (typically, one to two meters), the automobile can be locked automatically. Another, unauthorized user who is carrying a similar Smart card cannot get access to the inside, as the specific secret used to unlock the car is not known to him. The secrets may be assigned by the user initially or may be issued to the user. The user identified by the non-biometric test, in an exemplary embodiment, is assigned a role. This role can be “primary driver”, which allows access to essentially all of the automobile features; “juvenile”, which allows access only to the interior of the vehicle; “trunk only”, which only allows access to the trunk; “time restricted”, which only allows access for a specific time period, such as between two dates or for a set time period after the initial access, say two hours, a week, etc.; or it can be some other role.
After initially receiving access to the inside of the automobile, a user in this example still needs to perform a biometric authentication to obtain the driving access to the automobile. There may be more than one such user who can be authenticated to drive.
In another automobile system with greater access control, one or more biometric tests may also be combined with one or more non-biometric tests to gain access to the inside. In such a case, for example, a fingerprint reader may be attached to the door handle so that a user may be automatically authenticated when he or she tries to open the door.
The entire system can be subjected to one or more additional levels of biometric tests, which can comprise one or more of the following:
• Retina scanning and generation of the unique number (this can be made mandatory)
• Eye blinking rate
• Eye ball squint extent test, and/or
• Normalized body temperature test
One or more additional biometric tests can be required to be performed once the inside access is permitted. The non-biometric or biometric test or tests given inside the vehicle can be more elaborate than the initial non-biometric or biometric test or tests. After a successful authentication, the user can be permitted to drive the car.
Failure at any level of the authentication can result in one or more of the following:
• No further access is allowed to vehicle controls (this can be mandatory)
• Initiating one or more security alarm features (e.g., audible and visual alarms, alerting a remote monitoring company), and/or
• Locking and disablement of the basic level functions allowed after the first level.
VI. Computing Environment
With reference to FIG. 8, an exemplary system for implementing at least portions of the disclosed technology includes a general purpose computing device in the form of a conventional computer 800, which can be a PC, or a larger system, including a processing unit 802, a system memory 804, and a system bus 806 that couples various system components, including the system memory 804, to the processing unit 802. The system bus 806 can be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory 804 desirably includes read only memory (ROM) 808 and random access memory (RAM) 810. A basic input/output system (BIOS) 812, containing the basic routines that help with the transfer of information between elements within the computer 800, is stored in ROM 808.
The computer 800 desirably further includes one or more of a hard disk drive 814 for reading from and writing to a hard disk (not shown), a magnetic disk drive 816 for reading from or writing to a removable magnetic disk 817, and an optical disk drive 818 for reading from or writing to a removable optical disk 819 (such as a CD-ROM or other optical media). The hard disk drive 814, magnetic disk drive 816, and optical disk drive 818 (if included) are connected to the system bus 806 by a hard disk drive interface 820, a magnetic disk drive interface 822, and an optical drive interface 824, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the computer 800. Other types of computer-readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, CDs, DVDs, RAMs, ROMs, and the like (none of which are shown), can also be used in the exemplary operating environment.
A number of program modules can be stored on the hard disk 814, magnetic disk 817, optical disk 819, ROM 808, or RAM 810, including an operating system 830, one or more application programs 832, other program modules 834, and program data 836. Program modules 834 to perform both non-biometric testing and biometric testing can be among those stored on the hard disk 814. There can also be modules 834 to initially enroll authorized users. A user can enter commands and information into the computer 800 through input devices, such as a keyboard 840 and pointing device 842 (such as a mouse). Other input devices (not shown) can include a digital camera, microphone, joystick, game pad, satellite dish, scanner, or the like (also not shown). These and other input devices are often connected to the processing unit 802 through a serial port interface 844 that is coupled to the system bus 806, but can be connected by other interfaces, such as a parallel port, game port, or universal serial bus (USB) (none of which are shown). A monitor 846 or other type of display device is also connected to the system bus 806 via an interface, such as a video adapter 848. Other peripheral output devices, such as speakers and printers (not shown), can be included.
The computer 800 can operate in a networked environment using logical connections to one or more remote computers 850. The remote computer 850 can be another computer, a server, a router, a network PC, or a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 800, although only a memory storage device 852 has been illustrated in FIG. 8. The logical connections depicted in FIG. 8 include a local area network (LAN) 854 and a wide area network (WAN) 856. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
When used in a LAN networking environment, the computer 800 is connected to the LAN 854 through a network interface 858. When used in a WAN networking environment, the computer 800 typically includes a modem 860 or other means for establishing communications over the WAN 856, such as the Internet. The modem 860, which can be internal or external, is connected to the system bus 806 via the serial port interface 844. In a networked environment, program modules depicted relative to the computer 800, or portions thereof, may be stored in the remote memory storage device 852. The network connections shown are exemplary, and other means of establishing a communications link between the computers can be used.
Having described and illustrated the principles of our technology with reference to the illustrated embodiments, it will be recognized that the illustrated embodiments can be modified in arrangement and detail without departing from such principles.
Elements of the illustrated embodiment shown in software may be implemented in hardware and vice-versa. Also, the technologies from any example can be combined with the technologies described in any one or more of the other examples. Also, the flow charts are all exemplary and various actions within them can occur in other orders or may be deleted altogether. For example, in FIG. 2B, the decision blocks 238 and 240 can be swapped, and blocks 228 and 230 are both optional.
In view of the many possible embodiments to which the principles of the technology may be applied, it should be recognized that the illustrated embodiments are examples and should not be taken as a limitation on the scope of the invention. For instance, various components of systems and tools described herein may be combined in function and use. We, therefore, claim as our invention all subject matter that comes within the scope and spirit of these claims.
Parsing CSV only returns the second line of the file
by saint_geser (Initiate)
on Sep 01, 2012 at 04:22 UTC (#991118=perlquestion)
saint_geser has asked for the wisdom of the Perl Monks concerning the following question:
Hi guys, I'm trying to create a script that parses the original CSV file, prints out information into a temporary file, and then replaces the original file with the temp one.
So original data look something like:
1,6064.86,85391.25,593.75,13.25
2,6072.17,85392.95,593.79,13.29
3,6078.94,85393.05,593.76,13.26
4,6085.51,85392.22,593.77,13.27
and so on. I need to insert two lines at the top, two lines at the bottom, switch columns 2 and 3 around, and replace column 5 with column 1. I'm using Text::CSV for parsing and also the Lava GUI package. So part of the code that does parsing looks like this:
my $csv = Text::CSV->new();
open(OLD, '+<', $file) or die Lava::Message("Can't open original file");
while (<OLD>)
{
    next if ($. == 1);
    if ($csv->parse($_) {
        my @columns = $csv->fields();
        # open new file for editing
        open (TEMP, '>', $temp) or die Lava::Message("Can't open temporary file");
        # extract pattern id from file name
        my $pattern = substr $file, -12, 8;
        # date formatting doesn't work properly so i'll get the date from user
        my $select_date = ' ';
        my $date_panel = new Lava::Panel;
        $date_panel->text(' ');
        $date_panel->text(" Script Version: $Version");
        $date_panel->text(' ');
        $date_panel->text(" SITE: YANDI");
        $date_panel->text(' ');
        $date_panel->item("Type the date in format DD-Mon-YY: ", \$select_date, 9);
        $date_panel->execute('Date Selection') or return 0;
        # now print first two lines into the TEMP file
        print TEMP "$pattern ,";
        print TEMP "$select_date";
        print TEMP ",,Dist=Metres\n";
        print TEMP "0,0.000,0.000,0.000,0.000,0.000,0.000\n";
        while( <OLD> )
        {
            print TEMP $_;
            last if $. % 1;
        }
        # print the parsed body of old file
        print TEMP "$columns[0], $columns[2], $columns[1], $columns[3], $columns[0]\n";
    }
}
# insert new lines at the end
print TEMP "\n0, 0.000, 0.000, 0.000,\n";
print TEMP "0, 0.000, 0.000, 0.000, END\n";
close TEMP;
close OLD;
copy $temp, $file;
# now we delete the temporary file
Lava::Show("Deleting temporary file");
unlink $temp or Lava::Message "Couldn't delete the temporary file!";
END;
So far everything works mostly fine and I get my columns in the correct order, but for some reason it only prints the 2nd line of rearranged data and then moves on to insert text at the bottom. Does anyone know where I'm doing something wrong?
Replies are listed 'Best First'.
Re: Parsing CSV only returns the second line of the file
by Athanasius (Canon) on Sep 01, 2012 at 07:24 UTC
Hello saint_geser, and welcome to the Monastery!
In addition to the modulus problem highlighted by Anonymous Monk, above (and ignoring the fact that the code snippet you gave doesn’t compile!), there is a problem with the following line:
open (TEMP, '>', $temp) or die Lava::Message("Can't open temporary file");
This occurs within the outer while loop, so on each iteration, whenever the if condition succeeds, the temp file is truncated (“clobbered,” erased) as it is re-opened for output. See open.
Move the open statement to before the loop, so it is executed only once.
Hope that helps,
Athanasius <°(((><contra mundum
Thanks guys for your replies! I'm still quite confused. This is my first perl script and I haven't done any programming since uni.
The reason it wouldn't compile (works fine for me) is because I have perl as a part of proprietary mining software which has their own extension modules installed, e.g. Lava. I don't get any errors or warnings when running it.
The part of code that does the parsing I got from somewhere on the internet and modified for my purpose and I'm not quite sure where the inner while cycle is from. I'm pretty sure it shouldn't be there.
So I've commented out the inner cycle and moved the part where I open TEMP file for editing and insert two lines up the top out of the outer cycle. Now this part of code looks like this:
while ( <OLD> )
{
    next if ($. == 1);
    if ($csv->parse($_)) {
        my @columns = $csv->fields();
        # print the parsed body of old file
        print TEMP "$columns[0], $columns[2], $columns[1], $columns[3], $columns[0]\n";
        #while( <OLD> )
        #{
        #print TEMP $_;
        #last if $. % 1;
        #}
    }
    else {
        my $err = $csv->error_input;
        Lava::Message("Failed to parse line: $err");
    }
}
The output that I get now has only the top 2 and bottom 2 lines:
W1582165 ,01-Sep-12,,Dist=Metres
0, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000
0, 0.000, 0.000, 0.000,
0, 0.000, 0.000, 0.000, END
I still can't figure out how to make parsing work properly. I'm actually very confused about this line:
next if ($. == 1);
It says next if line number equals to 2? In the example that I saw it parsed the whole file but here does it just parse the second line?
The reason it wouldn't compile (works fine for me) is because I have perl as a part of proprietary mining software which has their own extension modules installed, e.g. Lava. I don't get any errors or warnings when running it.
Then what you posted isn’t exactly what you’re running, since this line in the original post (fixed in the new post, I see):
if ($csv->parse($_)
is missing a closing parenthesis, and the final line:
END;
should be __END__.
It says next if line number equals to 2?
No, array elements start counting at zero, but line numbers begin at one. So that statement says: Skip the first line of data. (Presumably, the original code expected the first line to be a heading?)
You are making progress, but it’s difficult to say why your code is failing without a complete, self-contained script. Also, detailing the output you expect/desire would help the monks know what you are trying to achieve. Please see How do I post a question effectively?.
Update: Minor edit.
Athanasius <°(((><contra mundum
Re: Parsing CSV only returns the second line of the file
by philiprbrenan (Monk) on Sep 01, 2012 at 11:33 UTC
Please consider building and manipulating the entire data structure in an array step by step; it makes it easier because you can see what you are doing with pp().
use feature ":5.14"; use warnings FATAL => qw(all); use strict; use Data::Dump qw(dump pp); use Text::CSV_XS; # So original data look something like: my @d = split /\n/, <<'END'; 1,6064.86,85391.25,593.75,13.25 2,6072.17,85392.95,593.79,13.29 3,6078.94,85393.05,593.76,13.26 4,6085.51,85392.22,593.77,13.27 END # I need to insert two lines at the top, two lines at the # bottom, switch columns 2 and 3 around and replace column 5 with colu +mn # 1.I'm using Text::CSV for parsing and also Lava GUI package. So part + of # the code that does parsing looks like this:D my ($t, @t) = (Text::CSV_XS->new); $t->parse($_) && push(@t, [($t->fields())[4,2,1,3,0]]) or die "Could n +ot parse: $_" for @d; # Parse and rearrange unshift @t, ["Top line 1"], ["Top line 2"]; # Top lines push @t, ["Bot line 1"], ["Bot line 2"]; # Bottom lines $t->combine(@$_) && ($_ = $t->string) or die "Cannot combine: ".dump( +$_) for @t; # Combine pp(\@t); # Print result
Produces
[ "\"Top line 1\"", "\"Top line 2\"", "13.25,85391.25,6064.86,593.75,1", "13.29,85392.95,6072.17,593.79,2", "13.26,85393.05,6078.94,593.76,3", "13.27,85392.22,6085.51,593.77,4", "\"Bot line 1\"", "\"Bot line 2\"", ]
Re: Parsing CSV only returns the second line of the file
by Anonymous Monk on Sep 01, 2012 at 06:48 UTC
Anyone knows where I'm doing something wrong?
One loop inside another, without adequate understanding of files and modulus operator
$ perl -le " for(1..20){printf qq{%s %s\n}, $_, int($_ % 1 ); } " 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 9 0 10 0 11 0 12 0 13 0 14 0 15 0 16 0 17 0 18 0 19 0 20 0
Could you please elaborate on that. I don't know much about perl, you're right but it is really annoying to do all these files by hand.
Could you please elaborate on that.
Find places in your code, say, a loop inside a loop, where you use the modulus operator, say like "% 1"
Then compare that line to my code, esp the output
When do you think that inner loop will end?
Here is a better one, what do you think the inner while loop does, what is its purpose?
Here is another hint: how many times do you get this warning (after you add the code)?
if( warn "calling parse " and $csv->parse($_) ) {
Re: Parsing CSV only returns the second line of the file
by Tux (Abbot) on Sep 01, 2012 at 15:40 UTC
The outer loop should be written like this, which is more modern and safe (think embedded newlines).
my $csv_in  = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
my $csv_out = Text::CSV_XS->new ({ binary => 1, auto_diag => 1, eol => "\n" });

open my $fh_old, "+<", $file or die Lava::Message ("Can't open original file: $file: $!");
open my $fh_new, ">",  $temp or die Lava::Message ("Can't open tempfile file: $temp: $!");

$csv->getline ($fh_old); # skip first line. Header?. If empty line, 'scalar <$fh_old>;' is a good option

while (my $row = $csv->getline ($fh_old)) {
    :
    :
    $csv_out->print ($fh_new, [ map { $row->[$_] } 0, 2, 1, 3, 0 ]);
    }

$csv_out->print ($fh_new, [ 0, 0.000, 0.000, 0.000 ]);
$csv_out->print ($fh_new, [ 0, 0.000, 0.000, 0.000, "END" ]);

close $_ for $fh_old, $fh_new;
Enjoy, Have FUN! H.Merijn
Interrupt Driven Character Queue Transfer Mode
[FreeRTOS+IO Transfer Modes]
Data Direction
The interrupt driven character queue transfer mode can be used with both FreeRTOS_read() and FreeRTOS_write().
Description
ioconfigUSE_TX_CHAR_QUEUE and/or ioconfigUSE_RX_CHAR_QUEUE must be set to 1 in FreeRTOSIOConfig.h for the character queue transfer mode to be available for writes and reads respectively. It must also be explicitly enabled for the peripheral being used within the same configuration file.
When the character queue transfer mode is selected for writes, FreeRTOS_write() does not write directly to the peripheral. Instead the bytes are sent to a transmit queue. The peripheral's interrupt service routine removes the bytes from the queue and sends them to the peripheral.
When the character queue transfer mode is selected for reads, FreeRTOS_read() does not read bytes directly from the peripheral, but from a receive queue that is filled by the FreeRTOS+IO interrupt service routine as data is received.
The interrupt service routines, and the FreeRTOS queues, are implemented by the FreeRTOS+IO code, and do not need to be provided by the application writer.
Interrupt Driven Character Queue Transfer Mode
Advantages
• Simple usage model
• Automatically places the calling task into the Blocked state to wait for the read or write operation to complete - if it cannot complete immediately. This ensures the task calling FreeRTOS_read() or FreeRTOS_write() only uses CPU time when there is actually processing that can be performed.
• A read and/or write timeout can be set to ensure FreeRTOS_read() and FreeRTOS_write() calls do not block indefinitely.
• Bytes received by the peripheral are automatically buffered, and not lost, even if a FreeRTOS_read() operation is not in progress when the bytes are received.
• Calls to FreeRTOS_write() can occur at any time. There is no need to wait for a previous transmission to complete, or for the peripheral to be free.
Disadvantages
• The FreeRTOS+IO driver requires RAM for the queues. The queue length is configured by the third parameter of the FreeRTOS_ioctl() call used to select the transfer mode.
• Character queues are inefficient, so their use should be limited to applications that do not require large amounts of data to be read or written. For example, character queues provide a very convenient transfer mode for command line interfaces, where characters are only received as quickly as somebody can type.
• FreeRTOS queues have an in-built mutual exclusion mechanism, but only at the single character level. Therefore, it is guaranteed that the queue data structures will not become corrupt if two tasks attempting to perform a FreeRTOS_write() (or a FreeRTOS_read()) at the same time, but there is no guarantee that the data will not become interleaved if that happens. The application writer can guard against that eventuality using task priorities, or external mutual exclusion (using a mutex for example) if it is necessary.
The ioctlUSE_CHARACTER_QUEUE_TX and ioctlUSE_CHARACTER_QUEUE_RX request codes are used in calls to FreeRTOS_ioctl() to configure a peripheral to use interrupt driven character queue writes and reads respectively. Note these request codes will result in the peripheral's interrupt being enabled, and the peripheral's interrupt priority being set to the lowest possible. The ioctlSET_INTERRUPT_PRIORITY request code can be used to raise the peripheral's priority if necessary.
Example Usage
/* FreeRTOS+IO includes. */
#include "FreeRTOS_IO.h"
void vAFunction( void )
{
/* The Peripheral_Descriptor_t type is the FreeRTOS+IO equivalent of a descriptor. */
Peripheral_Descriptor_t xOpenedPort;
size_t xBytesTransferred;
const uint32_t ulMaxBlock100ms = ( 100UL / portTICK_PERIOD_MS );
/* Open the SPI port identified in the board support package as using the
path string "/SPI2/". The second parameter is not currently used and can
be set to anything, although, for future compatibility, it is recommended
that it is set to NULL. */
xOpenedPort = FreeRTOS_open( "/SPI2/", NULL );
if( xOpenedPort != NULL )
{
/***************** Configure the port *********************************/
/* xOpenedPort now contains a valid descriptor that can be used with
other FreeRTOS+IO API functions.
Peripherals default to using Polled mode for both reads and writes.
Change from the default to use interrupt driven character queues for both
reading and writing. The third FreeRTOS_ioctl() parameter sets the
queue length. In this example, the length is set to 20 in both cases.
A successful FreeRTOS_ioctl() call will return pdPASS, for simplicity,
this example does not show the return value being checked. */
FreeRTOS_ioctl( xOpenedPort, ioctlUSE_CHARACTER_QUEUE_RX, ( void * ) 20 );
FreeRTOS_ioctl( xOpenedPort, ioctlUSE_CHARACTER_QUEUE_TX, ( void * ) 20 );
/* By default, a peripheral configured to use an interrupt driven character
queue transfer will have an infinite block time. Lower the block time for
reading and writing to ensure FreeRTOS_read() and FreeRTOS_write() calls
will return, even in the presence of an error. In this example, both
the read and write block times are set to 100ms. Again, for simplicity,
this example does not show the return value being checked. */
FreeRTOS_ioctl( xOpenedPort, ioctlSET_RX_TIMEOUT, ( void * ) ulMaxBlock100ms );
FreeRTOS_ioctl( xOpenedPort, ioctlSET_TX_TIMEOUT, ( void * ) ulMaxBlock100ms );
/***************** Use the port ***************************************/
for( ;; )
{
/* Write 10 bytes from ucBuffer to the opened port. Note the
definition of ucBuffer is assumed to be outside of this function. */
xBytesTransferred = FreeRTOS_write( xOpenedPort, ucBuffer, 10 );
/* At this point, 10 bytes will have been written to the Tx queue,
but not necessarily written to the peripheral yet. Check all 10 bytes
were written to the queue - they should have been as the queue is
20 bytes long. */
configASSERT( xBytesTransferred == 10 );
/* Read 10 bytes from the same port into ucBuffer. Note, this will
not read the bytes from the peripheral directly, but from the Rx
queue that is populated by the FreeRTOS+IO peripheral interrupt service
routine. The calling task is held in the Blocked state to wait
for 10 bytes to become available if they are not available immediately,
but the task will not be held in the Blocked state for more than 100ms. */
xBytesTransferred = FreeRTOS_read( xOpenedPort, ucBuffer, 10 );
if( xBytesTransferred == 10 )
{
/* Ten bytes were read from the peripheral before the 100ms block
time expired. */
}
else
{
/* The block time must have expired before ten bytes could be
read from the peripheral. xBytesTransferred could be any value
from 0 to 9. */
}
}
}
else
{
/* The port was not opened successfully. */
}
}
Copyright (C) Amazon Web Services, Inc. or its affiliates. All rights reserved.
IN Clause
In a desktop database (.accdb), specifies the source for the tables in a query. The source can be another Access database, a dBASE database, or any database for which you have an Open Database Connectivity (ODBC) driver. This is an Access extension to standard SQL.
Syntax
IN <"source database name"> <[source connect string]>
Enter "source database name" and [source connect string]. (Be sure to include the quotation marks and the brackets.) If your database source is Access, enter only "source database name". Enter these parameters according to the type of database to which you are connecting, as shown in Table-1.
Table-1 IN Parameters for Various Database Types
Database Name   Source Database Name       Source Connect String
Access          "drive:\path\filename"     (none)
dBASE III       "drive:\path"              [dBASE III;]
dBASE IV        "drive:\path"              [dBASE IV;]
dBASE 5         "drive:\path"              [dBASE 5.0;]
ODBC            (none)                     [ODBC; DATABASE=defaultdatabase; UID=user; PWD=password; DSN=datasourcename]
Notes: The IN clause applies to all tables referenced in the FROM clause and any subqueries in your query. You can refer to only one external database within a query, but if the IN clause points to a database that contains more than one table, you can use any of those tables in your query. If you need to refer to more than one external file or database, attach those files as tables in Access and use the logical attached table names instead.
For ODBC, if you omit the DSN= or DATABASE= parameter, Access prompts you with a dialog box showing available data sources so that you can select the one you want. If you omit the UID= or PWD= parameter and the server requires a user ID and password, Access prompts you with a login dialog box for each table accessed.
For dBASE, you can provide an empty string ("") for the source database name and provide the path or file name using the DATABASE= parameter in the source connect string instead, as in
"[dBase IV; DATABASE=C:\MyDB\dbase.dbf]"
Example
In a desktop database (.accdb), to retrieve the Company Name field in the Northwind Traders sample database without having to attach the Customers table, enter the following:
SELECT Customers.CompanyName
FROM Customers
IN "C:\My Documents\Shortcut to NORTHWIND.ACCDB";
Connect to BadgerCTF (Windows, MacOS, Linux)
This tutorial explains how to connect to BadgerCTF system via SSH
Apply for a username and password
The BadgerCTF server uses the same centralized student account management server as Taz and Molly, so you can use your Taz/Molly account to log in to the BadgerCTF server.
If you already have a Taz/Molly account and know your username/password, then you're all set.
If you do not have a Taz or Molly account, or if you have one but forgot the password, please send me a separate email ([email protected]) with your Last Name, First Name and I'll send you your account info.
Connecting to the BadgerCTF from Microsoft Windows
There are multiple SSH clients available for Microsoft Windows, each will connect to the BadgerCTF using the same protocol. These instructions are based on MobaXterm because this application provides functionality you may find useful for other tasks; other application suggestions are below.
SSH Clients
Using MobaXterm to Connect to the BadgerCTF
Installing MobaXTerm
1. Go to the MobaXTerm download page. It is recommended to download the “Installer Edition”, not the “Portable Edition”.
2. Unzip the file.
3. For the “Installer Edition”, double click the “msi” file to start the installation wizard.
4. Navigate through the installation wizard until the program is installed.
After installing MobaXTerm, you can connect to the BadgerCTF by creating a “Session” or using the MobaXTerm Local Terminal. Both methods are described in the next two sections.
Connect by creating a “Session”
1. Launch MobaXterm
2. In the toolbar, click on the “Session” button:
3. Select “SSH” as the session type:
4. Specify “147.182.223.56” as the remote host and “1234” as the port
5. Your connection will be saved on the left sidebar, so the next time you can start your session by clicking the “147.182.223.56” link.
6. In the terminal window you will get a prompt to enter your BadgerCTF/Molly login username and then enter your password.
(Note that the characters in your password will not be displayed when you type them as a security precaution)
7. If the connection is successful, the BadgerCTF header output will be visible, like the one shown below:
When finished using the connection, type exit to close the session.
8. You can edit, delete, and move sessions by right clicking on them in the left MobaXterm sidebar.
Connect by MobaXTerm Local Terminal
1. Launch MobaXterm
2. Click on the “+” symbol to open a new MobaXTerm Local Terminal.
3. In the terminal type in the ssh command (replace YOUR_USERNAME with your BadgerCTF username)
ssh [email protected] -p 1234
4. Enter your BadgerCTF password when prompted for molly.cs.wcupa.edu.
Note that the characters in your password will not be displayed when you type them as a security precaution.
5. Enter your BadgerCTF password again when prompted for badgerctf.cs.wcupa.edu.
6. If the connection is successful, the BadgerCTF header output will be visible.
7. When finished using the connection, type exit to close the session.
Connecting to the BadgerCTF from MacOS
The Secure SHell (SSH) is built into the Apple MacOS operating system. To connect to the BadgerCTF, we will leverage this utility using the Terminal application.
Using Terminal to Connect to the BadgerCTF
1. Open the Terminal application, which is found in Applications >> Utilities >> Terminal. Alternatively, press <command> and <space> simultaneously to open Spotlight, search for Terminal and press <return>.
2. In the terminal, type ssh [email protected] -p 1234 (replace YOUR_USERNAME with your BadgerCTF username). Enter your password when requested (twice: once for the Molly login and once for the BadgerCTF login; it is the same password).
Note that the characters in your password will not be displayed when you type them as a security precaution.
3. You should now be connected to the BadgerCTF and see a window similar to this:
4. Type exit in the terminal when you are done working on the BadgerCTF.
Connecting to the BadgerCTF from Linux
The Secure SHell (SSH) is built into the Linux operating system. To connect to the BadgerCTF, we will leverage this utility using the Terminal application.
Connecting to the BadgerCTF
1. Open a Terminal. This can be done by opening the terminal application in Systems >> Accessories >> Terminal.
2. Use ssh to connect to the BadgerCTF with your login credentials, using a command similar to this example:
your_local_machine% ssh [email protected] -p 1234
3. Replace YOUR_USERNAME with your BadgerCTF username. Enter your password when requested (twice: once for the Molly login and once for the BadgerCTF login; it is the same password).
Note that the characters in your password will not be displayed when you type them as a security precaution.
4. When finished using the connection, type exit to close the session.
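Optionally, MacOS and Linux users who connect often can save the host, port, and username in ~/.ssh/config so the connection can be started with just "ssh badgerctf"; the alias name badgerctf below is arbitrary, and YOUR_USERNAME is your BadgerCTF username as above:
Host badgerctf
    HostName 147.182.223.56
    Port 1234
    User YOUR_USERNAME
After saving the file, "ssh badgerctf" is equivalent to the full ssh command shown above.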
JavaFX, how to create an RPG-like game
JavaFX 1.0 is out and there are tons of cool new features, especially for game development.
I’ll show in this tutorial how to create a very simple demo covering how to load images, handle sprites, collisions and keyboard events, which you can use to create a game with an old-school RPG look.
For the background scenario I’m using the house that I drew, which we’ll call house.png.
We load it as an Image and place it into an ImageView.
ImageView{
image: Image {url: "{__DIR__}house.png"}
}
For the character I’m using the last character I drew, the nerdy guy.
To make the animation easier, I split it into 12 pieces:
down0.png, down1.png and down2.png
left0.png, left1.png and left2.png
right0.png, right1.png and right2.png
up0.png, up1.png and up2.png
All images I’m using should be in the same directory of source code.
Let’s start loading the scenario and a single character sprite.
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.image.*;
Stage {
title: "RPG-like demo", width: 424, height: 412
visible: true
scene: Scene{
content: [
ImageView{
image: Image {url: "{__DIR__}house.png"} },
ImageView{
x: 320 y: 80
image: Image {url: "{__DIR__}down1.png"}
}
]
}
}
Saved as Game.fx, you can compile and run it from your terminal:
$ javafxc Game.fx
$ javafx Game
Hint: You can use the NetBeans 6.5 JavaFX plugin to make JavaFX development easier.
To put animation on the character we load all sprites into four lists. Each list for each direction.
// sprites
def up = for(i in [0..2]) { Image {url: "{__DIR__}up{i}.png" } }
def right = for(i in [0..2]) { Image {url: "{__DIR__}right{i}.png" } }
def down = for(i in [0..2]) { Image {url: "{__DIR__}down{i}.png" } }
def left = for(i in [0..2]) { Image {url: "{__DIR__}left{i}.png" } }
And create vars to store the character position and frame of animation.
var frame = 0;
var posx = 320;
var posy = 80;
Also store the house background.
// house background
def house = ImageView{ image: Image {url: "{__DIR__}house.png"} };
I create booleans to store the key states, and at each interval of time I check them and act accordingly. You can handle keyboard events with less code, but I like this way because it keeps the visuals and the game logic a little more separated.
// keyboard
var upkey = false;
var rightkey = false;
var downkey = false;
var leftkey = false;
// player
var player = ImageView{
x: bind posx y: bind posy
image: Image {url: "{__DIR__}down1.png"}
onKeyPressed: function(e:KeyEvent){
if (e.code == KeyCode.VK_DOWN) {
downkey = true;
} else if (e.code == KeyCode.VK_UP) {
upkey = true;
}else if (e.code == KeyCode.VK_LEFT) {
leftkey = true;
}else if (e.code == KeyCode.VK_RIGHT) {
rightkey = true;
}
} // onKeyPressed
onKeyReleased: function(e: KeyEvent){
if (e.code == KeyCode.VK_DOWN) {
downkey = false;
} else if (e.code == KeyCode.VK_UP) {
upkey = false;
}else if (e.code == KeyCode.VK_LEFT) {
leftkey = false;
}else if (e.code == KeyCode.VK_RIGHT) {
rightkey = false;
}
} // onKeyReleased
}
See a video of the game working so far:
Now we will add collisions. In a previous post I showed some math behind bounding box game collisions. The good news is that you no longer need to worry about that. There are a lot of API improvements in JavaFX 1.0 that do all the hard work for you, especially the new classes in the javafx.geometry package, Rectangle2D and Point2D.
We create rectangles that represent the obstacles in the house.
// collidable obstacles
def obstacles = [
Rectangle { x: 0 y: 0 width: 32 height: 382 stroke: Color.RED },
Rectangle { x: 0 y: 0 width: 414 height: 64 stroke: Color.RED },
Rectangle { x: 384 y: 0 width: 32 height: 382 stroke: Color.RED },
Rectangle { x: 0 y: 192 width: 128 height: 64 stroke: Color.RED },
Rectangle { x: 192 y: 192 width: 64 height: 64 stroke: Color.RED },
Rectangle { x: 224 y: 0 width: 32 height: 288 stroke: Color.RED },
Rectangle { x: 288 y: 128 width: 96 height: 64 stroke: Color.RED },
Rectangle { x: 0 y: 352 width: 128 height: 32 stroke: Color.RED },
Rectangle { x: 192 y: 352 width: 192 height: 32 stroke: Color.RED },
Rectangle { x: 224 y: 320 width: 32 height: 32 stroke: Color.RED },
Rectangle { x: 32 y: 64 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 64 y: 64 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 96 y: 64 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 128 y: 64 width: 64 height: 32 stroke: Color.YELLOW },
Rectangle { x: 192 y: 32 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 64 y: 128 width: 64 height: 32 stroke: Color.YELLOW },
Rectangle { x: 32 y: 250 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 64 y: 250 width: 64 height: 32 stroke: Color.YELLOW },
Rectangle { x: 200 y: 255 width: 20 height: 20 stroke: Color.YELLOW },
Rectangle { x: 200 y: 170 width: 20 height: 20 stroke: Color.YELLOW },
Rectangle { x: 257 y: 32 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 288 y: 32 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 320 y: 192 width: 64 height: 64 stroke: Color.YELLOW },
Rectangle { x: 352 y: 295 width: 32 height: 60 stroke: Color.YELLOW },
Rectangle { x: 32 y: 327 width: 64 height: 23 stroke: Color.YELLOW },
];
We just have to change a little bit the game logics in order to handle collisions.
We define a bounding box around the player: a rectangle starting at (4, 25) in the player's coordinate system, with width 19 and height 10. The idea is to project where the player will be in the next step, check that its bounding box doesn't collide with any obstacle, and only then commit the new position as the real game position.
// game logics
var gamelogics = Timeline {
repeatCount: Timeline.INDEFINITE
keyFrames: KeyFrame {
time : 1s/8
action: function() {
var nextposx = posx;
var nextposy = posy;
if(downkey) {
nextposy += 5;
player.image = down[++frame mod 3];
}
if(upkey) {
nextposy -= 5;
player.image = up[++frame mod 3];
}
if(rightkey) {
nextposx += 5;
player.image = right[++frame mod 3];
}
if(leftkey) {
nextposx -= 5;
player.image = left[++frame mod 3];
}
for(obst in obstacles) {
if(obst.boundsInLocal.intersects(nextposx + 4, nextposy + 25, 19, 10)) {
return;
}
}
posx = nextposx;
posy = nextposy;
}
}
}
This is enough to do the trick but I also added a way to smoothly show the obstacles when pressing the space key.
Here is the complete source code.
package Game;
import javafx.stage.Stage;
import javafx.scene.*;
import javafx.scene.image.*;
import javafx.scene.input.*;
import javafx.scene.paint.*;
import javafx.scene.shape.*;
import javafx.animation.*;
var frame = 0;
var posx = 320;
var posy = 80;
// sprites
def up = for(i in [0..2]) { Image {url: "{__DIR__}up{i}.png" } }
def right = for(i in [0..2]) { Image {url: "{__DIR__}right{i}.png" } }
def down = for(i in [0..2]) { Image {url: "{__DIR__}down{i}.png" } }
def left = for(i in [0..2]) { Image {url: "{__DIR__}left{i}.png" } }
// house background
def house = ImageView{ image: Image {url: "{__DIR__}house.png"} };
// keyboard
var upkey = false;
var rightkey = false;
var downkey = false;
var leftkey = false;
// player
var player = ImageView{
x: bind posx y: bind posy image: down[1]
onKeyPressed: function(e:KeyEvent){
if (e.code == KeyCode.VK_DOWN) {
downkey = true;
} else if (e.code == KeyCode.VK_UP) {
upkey = true;
}else if (e.code == KeyCode.VK_LEFT) {
leftkey = true;
}else if (e.code == KeyCode.VK_RIGHT) {
rightkey = true;
}
if(e.code == KeyCode.VK_SPACE){
if(fade==0.0){
fadein.playFromStart();
}
if(fade==1.0){
fadeout.playFromStart();
}
}
} // onKeyPressed
onKeyReleased: function(e: KeyEvent){
if (e.code == KeyCode.VK_DOWN) {
downkey = false;
} else if (e.code == KeyCode.VK_UP) {
upkey = false;
}else if (e.code == KeyCode.VK_LEFT) {
leftkey = false;
}else if (e.code == KeyCode.VK_RIGHT) {
rightkey = false;
}
} // onKeyReleased
}
// collidable obstacles
def obstacles = [
Rectangle { x: 0 y: 0 width: 32 height: 382 stroke: Color.RED },
Rectangle { x: 0 y: 0 width: 414 height: 64 stroke: Color.RED },
Rectangle { x: 384 y: 0 width: 32 height: 382 stroke: Color.RED },
Rectangle { x: 0 y: 192 width: 128 height: 64 stroke: Color.RED },
Rectangle { x: 192 y: 192 width: 64 height: 64 stroke: Color.RED },
Rectangle { x: 224 y: 0 width: 32 height: 288 stroke: Color.RED },
Rectangle { x: 288 y: 128 width: 96 height: 64 stroke: Color.RED },
Rectangle { x: 0 y: 352 width: 128 height: 32 stroke: Color.RED },
Rectangle { x: 192 y: 352 width: 192 height: 32 stroke: Color.RED },
Rectangle { x: 224 y: 320 width: 32 height: 32 stroke: Color.RED },
Rectangle { x: 32 y: 64 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 64 y: 64 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 96 y: 64 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 128 y: 64 width: 64 height: 32 stroke: Color.YELLOW },
Rectangle { x: 192 y: 32 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 64 y: 128 width: 64 height: 32 stroke: Color.YELLOW },
Rectangle { x: 32 y: 250 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 64 y: 250 width: 64 height: 32 stroke: Color.YELLOW },
Rectangle { x: 200 y: 255 width: 20 height: 20 stroke: Color.YELLOW },
Rectangle { x: 200 y: 170 width: 20 height: 20 stroke: Color.YELLOW },
Rectangle { x: 257 y: 32 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 288 y: 32 width: 32 height: 32 stroke: Color.YELLOW },
Rectangle { x: 320 y: 192 width: 64 height: 64 stroke: Color.YELLOW },
Rectangle { x: 352 y: 295 width: 32 height: 60 stroke: Color.YELLOW },
Rectangle { x: 32 y: 327 width: 64 height: 23 stroke: Color.YELLOW },
];
// game logics
var gamelogics = Timeline {
repeatCount: Timeline.INDEFINITE
keyFrames: KeyFrame {
time : 1s/8
action: function() {
var nextposx = posx;
var nextposy = posy;
if(downkey) {
nextposy += 5;
player.image = down[++frame mod 3];
}
if(upkey) {
nextposy -= 5;
player.image = up[++frame mod 3];
}
if(rightkey) {
nextposx += 5;
player.image = right[++frame mod 3];
}
if(leftkey) {
nextposx -= 5;
player.image = left[++frame mod 3];
}
for(obst in obstacles) {
if(obst.boundsInLocal.intersects(nextposx + 4, nextposy + 25, 19, 10)) {
return;
}
}
posx = nextposx;
posy = nextposy;
}
}
}
gamelogics.play();
// obstacles view
var fade = 0.0;
var obstacleslayer = Group {
opacity: bind fade
content: [
Rectangle { x:0 y:0 width:500 height: 500 fill: Color.BLACK },
obstacles,
Rectangle {
x: bind posx + 4 y: bind posy + 25 width: 19 height: 10
fill: Color.LIME
}
]
}
var fadein = Timeline {
keyFrames: [
at (0s) {fade => 0.0}
at (1s) {fade => 1.0}
]
}
var fadeout = Timeline {
keyFrames: [
at (0s) {fade => 1.0}
at (1s) {fade => 0.0}
]
}
// game stage
Stage {
title: "RPG-like demo", width: 424, height: 412
visible: true
scene: Scene{
fill: Color.BLACK
content: [house, player, obstacleslayer]
}
}
Play Through Java Web Start
or click here to play via applet, inside your browser.
update: The applet version and Java Web Start versions should be working now. The applet version on Linux seems to be having problems with the keyboard handling; use the Java Web Start version while I’m trying to fix it.
Downloads:
22 thoughts on “JavaFX, how to create an RPG-like game”
1. It’s great you resumed your blogging on more “real” JavaFX apps!
Keep them going…. All the demos so far are JUST GREAT examples of JavaFX power…!
It seems there are some issues with the applet (it can’t load, at least here, and stays in a “continuous applet loading” animation)… all the other JavaFX samples (from javafx.com) work just fine…
2. The applet version doesn’t work
java.io.FileNotFoundException: JNLP not available: Nerdy_browser.jnlp
at sun.plugin2.applet.JNLP2Manager.loadJarFiles(JNLP2Manager.java:387)
at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Plugin2Manager.java:1332)
at java.lang.Thread.run(Thread.java:619)
Exception: java.io.FileNotFoundException: JNLP not available: Nerdy_browse
3. I couldn't run it either via JWS or as an applet:
#### Unable to load resource: file:/home/silveira/Documentos/jfx/nerdy/dist/Nerdy_browser.jnlp
Please please fix it soon.
4. Man… this is just very cool! your blog rocks!!! I’m writing a thesis something about JavaFX games. This blog will help me a lot…
5. Silveira, you have no idea how I am feeling after seeing this RPG project. I don't have much knowledge in the area, but I'm working on a mobile game project, and if you post an example, not just me but many people would love you forever hahaha
please answer
c u
6. For anyone who tried to compile the code using JavaFX 1.2 and found that the sprite wouldn't move:
A Node cannot receive focus by default, and having a key handler is not enough for a Node to be focused. There are two exceptions: Control and SwingComponent are focusable by default. A new boolean variable, focusTraversable, was added; it specifies whether the Node should be part of a "focus traversal cycle".
[http://javafx.com/docs/articles/javafx1-2.jsp#keyboard]
In short: just add focusTraversable: true to the player var.
And for those who already know JavaFX 1.0 or 1.1, the article above is worth a look.
Happy studying.
7. I made a game with another background and the stormtrooper.. It was cool..
But I would really like if you made a part 2 with some new functions and maps :)
– CJ
8. I downloaded your source code for the RPG-like game, but unfortunately it doesn't work, although I have the latest versions of NetBeans and JavaFX (NetBeans 6.9.1 and JavaFX 1.3.1). Do I need to do anything special because of the newer versions of JavaFX and NetBeans?
Thanks
9. Hi, congratulations on your work. I think JavaFX was developed to make things easier; I know code is the base of all this, but you can save a lot of time using the JavaFX tools to draw and manage sprites, as you can in Macromedia Flash, for example. Why not make a tutorial using these tools and less code (I know it's impossible to do it without any code)? I'll say it again: your work is great and you are the expert, I'm a beginner, but I think that is the purpose of JavaFX. What do you think? Thanks for everything
10. One part of the code is missing: “focusTraversable:true”. With it you can move the Nerdy; without it you can’t.
You need to write it right here:
var player = ImageView{
focusTraversable:true
………and the rest of the code stays the same.
Greetings
11. Hello Mr. Silveira! ^^
Man, I want to do exactly the same thing you did, but in Java SE.
JavaFX really threw me off, this declarative language thing is very different.
Could you give me some tips on how to do something similar?
Thank you.
By the way, your post is really good! haha seriously ^^
Source to arch/i386/isa/isa.c
/*-
* Copyright (c) 1991 The Regents of the University of California.
* All rights reserved.
*
* This code is derived from software contributed to Berkeley by
* William Jolitz.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by the University of
* California, Berkeley and its contributors.
* 4. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)isa.c 7.2 (Berkeley) 5/13/91
*/
static char rcsid[] = "$Header: /usr/src/sys.386bsd/i386/isa/RCS/isa.c,v 1.2 92/01/21 14:34:23 william Exp Locker: root $";
/*
* code to manage AT bus
*/
#include "param.h"
#include "systm.h"
#include "conf.h"
#include "file.h"
#include "buf.h"
#include "uio.h"
#include "syslog.h"
#include "malloc.h"
#include "rlist.h"
#include "machine/segments.h"
#include "vm/vm.h"
#include "i386/isa/isa_device.h"
#include "i386/isa/isa.h"
#include "i386/isa/icu.h"
#include "i386/isa/ic/i8237.h"
#include "i386/isa/ic/i8042.h"
int config_isadev(struct isa_device *, u_short *);
#ifdef notyet
struct rlist *isa_iomem;
/*
* Configure all ISA devices
*/
isa_configure() {
struct isa_device *dvp;
struct isa_driver *dp;
splhigh();
INTREN(IRQ_SLAVE);
/*rlist_free(&isa_iomem, 0xa0000, 0xfffff);*/
for (dvp = isa_devtab_tty; dvp; dvp++)
(void) config_isadev(dvp, &ttymask);
for (dvp = isa_devtab_bio; dvp; dvp++)
(void) config_isadev(dvp, &biomask);
for (dvp = isa_devtab_net; dvp; dvp++)
(void) config_isadev(dvp, &netmask);
for (dvp = isa_devtab_null; dvp; dvp++)
(void) config_isadev(dvp, 0);
#include "sl.h"
#if NSL > 0
netmask |= ttymask;
ttymask |= netmask;
#endif
/* printf("biomask %x ttymask %x netmask %x\n", biomask, ttymask, netmask); */
splnone();
}
/*
* Configure an ISA device.
*/
config_isadev(isdp, mp)
struct isa_device *isdp;
u_short *mp;
{
struct isa_driver *dp;
static short drqseen, irqseen;
if (dp = isdp->id_driver) {
/* if a device with i/o memory, convert to virtual address */
if (isdp->id_maddr) {
extern unsigned int atdevbase;
isdp->id_maddr -= IOM_BEGIN;
isdp->id_maddr += atdevbase;
}
isdp->id_alive = (*dp->probe)(isdp);
if (isdp->id_alive) {
printf("%s%d at port 0x%x ", dp->name,
isdp->id_unit, isdp->id_iobase);
/* check for conflicts */
if (irqseen & isdp->id_irq) {
printf("INTERRUPT CONFLICT - irq%d\n",
ffs(isdp->id_irq) - 1);
return (0);
}
if (isdp->id_drq != -1
&& (drqseen & (1<<isdp->id_drq))) {
printf("DMA CONFLICT - drq%d\n", isdp->id_drq);
return (0);
}
/* NEED TO CHECK IOMEM CONFLICT HERE */
/* allocate and wire in device */
if(isdp->id_irq) {
int intrno;
intrno = ffs(isdp->id_irq)-1;
printf("irq %d ", intrno);
INTREN(isdp->id_irq);
if(mp)INTRMASK(*mp,isdp->id_irq);
setidt(NRSVIDT + intrno, isdp->id_intr,
SDT_SYS386IGT, SEL_KPL);
irqseen |= isdp->id_irq;
}
if (isdp->id_drq != -1) {
printf("drq %d ", isdp->id_drq);
drqseen |= 1 << isdp->id_drq;
}
(*dp->attach)(isdp);
printf("on isa\n");
}
return (1);
} else return(0);
}
#else
/*
* Configure all ISA devices
*/
isa_configure() {
struct isa_device *dvp;
struct isa_driver *dp;
splhigh();
INTREN(IRQ_SLAVE);
for (dvp = isa_devtab_tty; config_isadev(dvp,&ttymask); dvp++);
for (dvp = isa_devtab_bio; config_isadev(dvp,&biomask); dvp++);
for (dvp = isa_devtab_net; config_isadev(dvp,&netmask); dvp++);
for (dvp = isa_devtab_null; config_isadev(dvp,0); dvp++);
#include "sl.h"
#if NSL > 0
netmask |= ttymask;
ttymask |= netmask;
#endif
/* biomask |= ttymask ; can some tty devices use buffers? */
/* printf("biomask %x ttymask %x netmask %x\n", biomask, ttymask, netmask); */
splnone();
}
/*
* Configure an ISA device.
*/
config_isadev(isdp, mp)
struct isa_device *isdp;
u_short *mp;
{
struct isa_driver *dp;
if (dp = isdp->id_driver) {
if (isdp->id_maddr) {
extern u_int atdevbase;
isdp->id_maddr -= 0xa0000;
isdp->id_maddr += atdevbase;
}
isdp->id_alive = (*dp->probe)(isdp);
if (isdp->id_alive) {
printf("%s%d", dp->name, isdp->id_unit);
(*dp->attach)(isdp);
printf(" at 0x%x ", isdp->id_iobase);
if(isdp->id_irq) {
int intrno;
intrno = ffs(isdp->id_irq)-1;
printf("irq %d ", intrno);
INTREN(isdp->id_irq);
if(mp)INTRMASK(*mp,isdp->id_irq);
setidt(ICU_OFFSET+intrno, isdp->id_intr,
SDT_SYS386IGT, SEL_KPL);
}
if (isdp->id_drq != -1) printf("drq %d ", isdp->id_drq);
printf("on isa\n");
}
return (1);
} else return(0);
}
#endif
#define IDTVEC(name) __CONCAT(X,name)
/* default interrupt vector table entries */
extern IDTVEC(intr0), IDTVEC(intr1), IDTVEC(intr2), IDTVEC(intr3),
IDTVEC(intr4), IDTVEC(intr5), IDTVEC(intr6), IDTVEC(intr7),
IDTVEC(intr8), IDTVEC(intr9), IDTVEC(intr10), IDTVEC(intr11),
IDTVEC(intr12), IDTVEC(intr13), IDTVEC(intr14), IDTVEC(intr15);
static *defvec[16] = {
&IDTVEC(intr0), &IDTVEC(intr1), &IDTVEC(intr2), &IDTVEC(intr3),
&IDTVEC(intr4), &IDTVEC(intr5), &IDTVEC(intr6), &IDTVEC(intr7),
&IDTVEC(intr8), &IDTVEC(intr9), &IDTVEC(intr10), &IDTVEC(intr11),
&IDTVEC(intr12), &IDTVEC(intr13), &IDTVEC(intr14), &IDTVEC(intr15) };
/* out of range default interrupt vector gate entry */
extern IDTVEC(intrdefault);
/*
* Fill in default interrupt table (in case of spuruious interrupt
* during configuration of kernel, setup interrupt control unit
*/
isa_defaultirq() {
int i;
/* icu vectors */
for (i = NRSVIDT ; i < NRSVIDT+ICU_LEN ; i++)
setidt(i, defvec[i], SDT_SYS386IGT, SEL_KPL);
/* out of range vectors */
for (i = NRSVIDT; i < NIDT; i++)
setidt(i, &IDTVEC(intrdefault), SDT_SYS386IGT, SEL_KPL);
/* clear npx intr latch */
outb(0xf1,0);
/* initialize 8259's */
outb(IO_ICU1, 0x11); /* reset; program device, four bytes */
outb(IO_ICU1+1, NRSVIDT); /* starting at this vector index */
outb(IO_ICU1+1, 1<<2); /* slave on line 2 */
outb(IO_ICU1+1, 1); /* 8086 mode */
outb(IO_ICU1+1, 0xff); /* leave interrupts masked */
outb(IO_ICU1, 2); /* default to ISR on read */
outb(IO_ICU2, 0x11); /* reset; program device, four bytes */
outb(IO_ICU2+1, NRSVIDT+8); /* staring at this vector index */
outb(IO_ICU2+1,2); /* my slave id is 2 */
outb(IO_ICU2+1,1); /* 8086 mode */
outb(IO_ICU2+1, 0xff); /* leave interrupts masked */
outb(IO_ICU2, 2); /* default to ISR on read */
}
/* region of physical memory known to be contiguous */
vm_offset_t isaphysmem;
static caddr_t dma_bounce[8]; /* XXX */
static char bounced[8]; /* XXX */
#define MAXDMASZ 512 /* XXX */
/* high byte of address is stored in this port for i-th dma channel */
static short dmapageport[8] =
{ 0x87, 0x83, 0x81, 0x82, 0x8f, 0x8b, 0x89, 0x8a };
/*
* isa_dmacascade(): program 8237 DMA controller channel to accept
* external dma control by a board.
*/
void isa_dmacascade(unsigned chan)
{ int modeport;
if (chan > 7)
panic("isa_dmacascade: impossible request");
/* set dma channel mode, and set dma channel mode */
if ((chan & 4) == 0)
modeport = IO_DMA1 + 0xb;
else
modeport = IO_DMA2 + 0x16;
outb(modeport, DMA37MD_CASCADE | (chan & 3));
if ((chan & 4) == 0)
outb(modeport - 1, chan & 3);
else
outb(modeport - 2, chan & 3);
}
/*
* isa_dmastart(): program 8237 DMA controller channel, avoid page alignment
* problems by using a bounce buffer.
*/
void isa_dmastart(int flags, caddr_t addr, unsigned nbytes, unsigned chan)
{ vm_offset_t phys;
int modeport, waport, mskport;
caddr_t newaddr;
if (chan > 7 || nbytes > (1<<16))
panic("isa_dmastart: impossible request");
if (isa_dmarangecheck(addr, nbytes)) {
if (dma_bounce[chan] == 0)
dma_bounce[chan] =
/*(caddr_t)malloc(MAXDMASZ, M_TEMP, M_WAITOK);*/
(caddr_t) isaphysmem + NBPG*chan;
bounced[chan] = 1;
newaddr = dma_bounce[chan];
*(int *) newaddr = 0; /* XXX */
/* copy bounce buffer on write */
if (!(flags & B_READ))
bcopy(addr, newaddr, nbytes);
addr = newaddr;
}
/* translate to physical */
phys = pmap_extract(pmap_kernel(), (vm_offset_t)addr);
/* set dma channel mode, and reset address ff */
if ((chan & 4) == 0)
modeport = IO_DMA1 + 0xb;
else
modeport = IO_DMA2 + 0x16;
if (flags & B_READ)
outb(modeport, DMA37MD_SINGLE|DMA37MD_WRITE|(chan&3));
else
outb(modeport, DMA37MD_SINGLE|DMA37MD_READ|(chan&3));
if ((chan & 4) == 0)
outb(modeport + 1, 0);
else
outb(modeport + 2, 0);
/* send start address */
if ((chan & 4) == 0) {
waport = IO_DMA1 + (chan<<1);
outb(waport, phys);
outb(waport, phys>>8);
} else {
waport = IO_DMA2 + ((chan - 4)<<2);
outb(waport, phys>>1);
outb(waport, phys>>9);
}
outb(dmapageport[chan], phys>>16);
/* send count */
if ((chan & 4) == 0) {
outb(waport + 1, --nbytes);
outb(waport + 1, nbytes>>8);
} else {
nbytes <<= 1;
outb(waport + 2, --nbytes);
outb(waport + 2, nbytes>>8);
}
/* unmask channel */
if ((chan & 4) == 0)
mskport = IO_DMA1 + 0x0a;
else
mskport = IO_DMA2 + 0x14;
outb(mskport, chan & 3);
}
void isa_dmadone(int flags, caddr_t addr, int nbytes, int chan)
{
/* copy bounce buffer on read */
/*if ((flags & (B_PHYS|B_READ)) == (B_PHYS|B_READ))*/
if (bounced[chan]) {
bcopy(dma_bounce[chan], addr, nbytes);
bounced[chan] = 0;
}
}
/*
* Check for problems with the address range of a DMA transfer
* (non-contiguous physical pages, outside of bus address space).
* Return true if special handling needed.
*/
isa_dmarangecheck(caddr_t va, unsigned length) {
vm_offset_t phys, priorpage, endva;
endva = (vm_offset_t)round_page(va + length);
for (; va < (caddr_t) endva ; va += NBPG) {
phys = trunc_page(pmap_extract(pmap_kernel(), (vm_offset_t)va));
#define ISARAM_END RAM_END
if (phys == 0)
panic("isa_dmacheck: no physical page present");
if (phys > ISARAM_END)
return (1);
if (priorpage && priorpage + NBPG != phys)
return (1);
priorpage = phys;
}
return (0);
}
/* head of queue waiting for physmem to become available */
struct buf isa_physmemq;
/* blocked waiting for resource to become free for exclusive use */
static isaphysmemflag;
/* if waited for and call requested when free (B_CALL) */
static void (*isaphysmemunblock)(); /* needs to be a list */
/*
* Allocate contiguous physical memory for transfer, returning
* a *virtual* address to region. May block waiting for resource.
* (assumed to be called at splbio())
*/
caddr_t
isa_allocphysmem(caddr_t va, unsigned length, void (*func)()) {
isaphysmemunblock = func;
while (isaphysmemflag & B_BUSY) {
isaphysmemflag |= B_WANTED;
sleep(&isaphysmemflag, PRIBIO);
}
isaphysmemflag |= B_BUSY;
return((caddr_t)isaphysmem);
}
/*
* Free contiguous physical memory used for transfer.
* (assumed to be called at splbio())
*/
void
isa_freephysmem(caddr_t va, unsigned length) {
isaphysmemflag &= ~B_BUSY;
if (isaphysmemflag & B_WANTED) {
isaphysmemflag &= B_WANTED;
wakeup(&isaphysmemflag);
if (isaphysmemunblock)
(*isaphysmemunblock)();
}
}
/*
* Handle a NMI, possibly a machine check.
* return true to panic system, false to ignore.
*/
isa_nmi(cd) {
log(LOG_CRIT, "\nNMI port 61 %x, port 70 %x\n", inb(0x61), inb(0x70));
return(0);
}
/*
* Caught a stray interrupt, notify
*/
isa_strayintr(d) {
#ifdef notdef
/* DON'T BOTHER FOR NOW! */
/* for some reason, we get bursts of intr #7, even if not enabled! */
log(LOG_ERR,"ISA strayintr %x", d);
#endif
}
/*
* Wait "n" microseconds. Relies on timer 0 to have 1Mhz clock, regardless
* of processor board speed. Note: timer had better have been programmed
* before this is first used!
*/
DELAY(n) {
int tick = getit(0,0) & 1;
while (n--) {
/* wait approximately 1 micro second */
while (tick == getit(0,0) & 1) ;
tick = getit(0,0) & 1;
}
}
getit(unit, timer) {
int port = (unit ? IO_TIMER2 : IO_TIMER1) + timer, val;
val = inb(port);
val = (inb(port) << 8) + val;
return (val);
}
extern int hz;
static beeping;
static
sysbeepstop(f)
{
/* disable counter 2 */
outb(0x61, inb(0x61) & 0xFC);
if (f)
timeout(sysbeepstop, 0, f);
else
beeping = 0;
}
void sysbeep(int pitch, int period)
{
outb(0x61, inb(0x61) | 3); /* enable counter 2 */
outb(0x43, 0xb6); /* set command for counter 2, 2 byte write */
outb(0x42, pitch);
outb(0x42, (pitch>>8));
if (!beeping) {
beeping = period;
timeout(sysbeepstop, period/2, period);
}
}
/*
* Pass command to keyboard controller (8042)
*/
unsigned kbc_8042cmd(val) {
while (inb(KBSTATP)&KBS_IBF);
if (val) outb(KBCMDP, val);
while (inb(KBSTATP)&KBS_IBF);
return (inb(KBDATAP));
}
.NET Tutorials, Forums, Interview Questions And Answers
You are working with a DataSet and want to be able to display data, sorted different ways. How do you do so?
Posted By: Virendra Dugar | Posted Date: October 12, 2009 | Points: 10 | Category: ADO.Net
Select the correct answer.
1. Use the Sort method on the DataTable object.
2. Use the DataSet object's Sort method.
3. Use a DataView object for each sort.
4. Create a DataTable for each sort, using the DataTable object's Copy method, and then Sort the result
Correct Answer is :
3. Use a DataView object for each sort.
You can also find related interview questions below:
What is the difference between a DataReader and a DataSet?
A DataReader is a connected, read-only, forward-only record set.
A DataSet is a disconnected, in-memory data source that can store multiple tables, relations and constraints. (More...)
Which property on a Combo Box do you set with a column name, prior to setting the Data Source, to display data in the combo box?
ComboBox.DataValueField = "ColumnName"
When we use the DataBind method for the Combo box, we set the Display Member and Display Value properties to the column name. (More...)
How many ways can you display text using Silverlight?
Silverlight supports displaying static preformatted text composed of glyph elements, as well as dynamic text that uses TextBlock. With glyphs, one needs to position the characters individually, while TextBlock supports simple layout.
Explain the steps involved in populating a DataSet with data.
1. Open the connection.
2. Initialize the adapter, passing the SQL and the connection as parameters.
3. Initialize the DataSet.
4. Call the Fill method of the adapter, passing the DataSet as the parameter.
5. Close the connection.
Is it possible to have tables in the dataset that are not bound to any data source?
Yes, you can create a table object in code and add it to the dataset. (More...)
How to fill a DataSet with data?
To fill a DataSet with data we have to use the Fill() method of the DataAdapter object.
Fill() has several overloads, but the simple one is
[CODE]Fill(DataSet, DataTable)[/CODE]
The first parameter takes the DataSet to be filled, and the second parameter specifies the name of the DataTable in the DataSet which will contain the data.
What are the ways you can display all the tables in Sql Server?
Way 1:
SELECT Table_Name
FROM information_schema.Tables
Way 2:
SELECT Name
FROM SysObjects
WHERE Xtype = 'U' (More...)
In how many ways can data be exported from a SQL table to an Excel sheet?
Some of the options include:
1) Data Transformation Services (DTS)
2) SQL Server Integration Services (SSIS)
3) Bulk Copy (BCP)
4) OPENROWSET()
August 8, 2024
Unlocking the Best IPTV Plans: A Comprehensive Guide to Buying Smart
In the ever-evolving world of entertainment, the rise of Internet Protocol Television (IPTV) has transformed the way we consume media. IPTV allows viewers to stream live TV, on-demand shows, and movies directly over the internet, bypassing traditional cable services. For those looking to dive into this digital revolution, selecting the right IPTV plan is crucial. Here’s a detailed guide to help you buy the best IPTV plans and ensure a seamless and satisfying viewing experience.
Understanding IPTV
IPTV is a service that delivers television content through the internet instead of traditional terrestrial, satellite, or cable formats. This technology offers a range of advantages, including a broader selection of channels, on-demand content, and the flexibility to watch from various devices.
Factors to Consider When Buying IPTV Plans
1. Channel Selection
One of the primary reasons to buy the best IPTV plans is the diverse range of channels available. Look for providers that offer a comprehensive list of channels, including your favorite sports, news, and entertainment channels. Some services also provide international channels, catering to expatriates and multilingual households.
2. Content Quality
The quality of content is another crucial factor. Opt for IPTV services that offer high-definition (HD) or even 4K streaming. This ensures that you enjoy crisp and clear visuals, enhancing your overall viewing experience. Additionally, check if the service supports various devices, such as smart TVs, smartphones, and tablets.
3. Reliability and Uptime
When investing in IPTV, reliability is key. Choose a provider with a reputation for minimal downtime and strong customer support. A reliable service ensures that you can watch your favorite shows and channels without interruptions. Reading user reviews and ratings can provide insights into the provider’s reliability.
4. Cost and Subscription Options
The cost of IPTV plans can vary significantly. Compare different providers to find a plan that fits your budget while still offering the features you need. Many services offer tiered pricing based on the number of channels, streaming quality, and additional features. Be wary of extremely low-priced options, as they may compromise on quality or service.
5. Customer Support
Good customer support is essential for resolving issues quickly and efficiently. Look for providers that offer 24/7 support through multiple channels, such as phone, email, and live chat. A responsive customer support team can make a significant difference in your overall experience.
6. Free Trials and Money-Back Guarantees
Before committing to a long-term subscription, take advantage of free trials or money-back guarantees offered by IPTV providers. This allows you to test the service and ensure it meets your expectations without financial risk. Use this opportunity to evaluate the channel lineup, streaming quality, and user interface.
Conclusion
Buying the best IPTV plans involves careful consideration of several factors, including channel selection, content quality, reliability, cost, customer support, and trial options. By taking the time to research and compare different IPTV services, you can make an informed decision and enjoy a superior entertainment experience. As the IPTV market continues to grow, staying updated with the latest offerings and technological advancements will help you get the most out of your subscription.
A proof of operators in exponentials
1. Nov 28, 2012 #1
1. The problem statement, all variables and given/known data
Assume C=[A,B]≠0 and [C,A]=[C,B]=0
Show
[itex]e^A e^B = e^{A+B} e^{\frac{1}{2}[A,B]}[/itex]
2. Relevant equations
All are given above.
3. The attempt at a solution
I recently did a similar problem (show [itex]e^A B e^{-A} = B + [A,B] + \frac{1}{2}[A,[A,B]] + \dots[/itex]) by defining a function [itex]e^{xA} B e^{-xA}[/itex] and doing a Taylor expansion, so I thought this might be done similarly, but I have gotten nowhere with this approach. I would like to figure this out myself, so I am really looking for guidance/hints if anyone has any. It would be much appreciated.
3. Nov 28, 2012 #2
So I think I figured it out, but I would appreciate input on whether it is right or not.
Start with:
[itex]e^A e^B = e^x[/itex]
Then, looking for x:
[itex]x = \log(e^A e^B)[/itex]
which can be found using the Baker–Campbell–Hausdorff formula. The higher-order terms of the series all involve commutators with C = [A,B], which vanish since C commutes with A and B, so
[itex]x = A + B + \frac{1}{2}[A,B] = A + B + \frac{1}{2}C[/itex]
Thus,
[itex]e^A e^B = e^{A + B + \frac{1}{2}C}[/itex]
Since C and A, and C and B commute, C and A+B commute, so
[itex]e^{A + B + \frac{1}{2}C} = e^{A+B} e^{\frac{1}{2}C}[/itex]
thus,
[itex]e^A e^B = e^{A+B} e^{\frac{1}{2}[A,B]}[/itex]
4. Nov 29, 2012 #3
Fredrik
It looks correct (except for the fact that you forgot to type the C in a couple of places), but are you sure you're allowed to use the BCH formula? I would have guessed that the point of the exercise is to prove a special case of it. (I haven't thought about how to do that).
5. Nov 29, 2012 #4
Fredrik
I found a solution in a book. They don't use the BCH formula. Instead, their strategy is to prove that the maps
\begin{align}&t\mapsto e^{tA}e^{tB}e^{-\frac{t^2}{2}[A,B]}\\
&t\mapsto e^{t(A+B)}
\end{align} satisfy the same differential equation and the same initial condition.
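For reference, a sketch of that argument (writing C = [A,B], which commutes with A and B): let [itex]F(t) = e^{tA}e^{tB}e^{-\frac{t^2}{2}C}[/itex]. Using [itex]e^{tA}Be^{-tA} = B + tC[/itex] (the series from post #1 terminates because [C,A] = 0),
\begin{align}F'(t) &= AF(t) + e^{tA}Be^{tB}e^{-\frac{t^2}{2}C} - tC\,F(t)\\
&= (A + B + tC - tC)F(t) = (A+B)F(t),\end{align}
which is the same differential equation satisfied by [itex]G(t) = e^{t(A+B)}[/itex], with the same initial condition [itex]F(0) = G(0) = 1[/itex]. Hence [itex]F(t) = G(t)[/itex] for all t, and setting t = 1 gives [itex]e^{A}e^{B} = e^{A+B}e^{\frac{1}{2}[A,B]}[/itex].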
How to sample_encode input file's input frame rate?
Hi all,
I am looking into the sample_encode application of the Intel Media SDK. I have seen the -f option to specify the frame rate of the encoded stream. Is there any way to customize the sample_encode application so that the input frames are taken for processing at 30 fps only? I understand that the Intel Media SDK APIs work in an asynchronous manner, so on just skimming through the source code I found it difficult to find the code portion I need to change so that the input frame rate can be changed. I am using a YUY2 file of resolution 1280x720 which I pass as NV12.
Thanks,
Lullaby
Hi Lullaby,
To achieve rate control of the input I suggest you just add a timer to measure the amount of time passed between each encoded frame (incl. VPP in your case). Then, based on the measured time, you can insert an appropriate delay before submitting a new frame to VPP/Encode. (For the VPP+Encode case that means you measure and delay just before the RunFrameVPPAsync call.)
Regards,
Petter
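A rough sketch of this pacing idea, measuring the elapsed time per frame and sleeping off the remainder of the 1/30 s frame period before submitting the next one (illustrative Python pseudocode only; read_next_frame and submit_frame_to_vpp_encode are placeholder names, not Media SDK API):
import time

TARGET_FPS = 30
FRAME_PERIOD = 1.0 / TARGET_FPS  # roughly 33.3 ms per frame

def paced_encode_loop(read_next_frame, submit_frame_to_vpp_encode):
    # Submit one frame per FRAME_PERIOD by sleeping off any surplus time.
    next_deadline = time.monotonic()
    while True:
        frame = read_next_frame()          # e.g. one YUY2/NV12 frame from the input file
        if frame is None:
            break                          # end of input
        submit_frame_to_vpp_encode(frame)  # the point where sample_encode would call RunFrameVPPAsync
        next_deadline += FRAME_PERIOD
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)              # wait out the remainder of the frame period
        else:
            next_deadline = time.monotonic()  # running behind; do not accumulate debt
The same measure-and-delay logic can be placed in sample_encode's own loop just before the RunFrameVPPAsync call, as suggested above.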
Attribute multivalue selector
Ian Hickson
CSS 3 Module: W3C Selectors
Test #9 of 296: Attribute multivalue selector
ID: 7b
Date: 2001-01-01
Revision: 1.0
This line should have a green background.
p { background: lime; }
[title~="hello world"] { background: red; }
/* Section 6.3.1: Represents the att attribute whose value is a space-separated list of words, one of which is exactly "val". If this selector is used, the words in the value must not contain spaces (since they are separated by spaces). */
<p xmlns="http://www.w3.org/1999/xhtml" title="hello world">This line should have a green background.</p>
Publication number: US 4143356 A
Publication type: Grant
Application number: US 05/846,877
Publication date: 6 Mar 1979
Filing date: 31 Oct 1977
Priority date: 31 Oct 1977
Also published as: CA1115846A1, DE2847302A1, DE2847302C2
Inventors: Robert B. Nally
Original Assignee: NCR Canada Ltd - NCR Canada Ltee
Character recognition apparatus
US 4143356 A
Abstract
A magnetic character recognition system is disclosed for reading magnetized characters printed on a document such as a check, which system determines the magnitude and position of each of the peaks of an analog waveform representing each character read by a single gap magnetic read head. Means are provided for normalizing the peak amplitude by dividing the amplitude of each succeeding peak value by the amplitude of the first peak value read and generating a digital value representing the ratio of the succeeding peak values to the first value. Further means are provided for normalizing the peak position by determining the ratio of the position of each of the peaks within the waveform to the position of the first position peak of the waveform. The digital values of the normalized peak amplitude and position are then processed and compared with corresponding values of a plurality of reference characters to identify the character read. Threshold values are applied to insure a valid recognition operation.
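As a rough modern illustration of the processing described above (a Python sketch with assumed data structures and an arbitrary rejection threshold, not part of the patent text): each succeeding peak's amplitude and position are divided by those of the first peak, and the resulting features are compared with stored reference values by summed absolute difference, taking the minimum.
def normalize_peaks(peaks):
    # peaks: list of (amplitude, position) pairs measured from the read-head waveform.
    # Each succeeding peak is normalized by the first peak's amplitude and position.
    first_amp, first_pos = peaks[0]
    return [(amp / first_amp, pos / first_pos) for amp, pos in peaks[1:]]

def classify(peaks, reference_templates, reject_threshold=2.0):
    # reference_templates: dict mapping character -> list of (amplitude_ratio, position_ratio).
    # Returns the best-matching reference character, or None if no template is close enough.
    features = normalize_peaks(peaks)
    best_char, best_score = None, float("inf")
    for char, template in reference_templates.items():
        if len(template) != len(features):
            continue  # different number of peaks; cannot be this character
        score = sum(abs(fa - ta) + abs(fp - tp)
                    for (fa, fp), (ta, tp) in zip(features, template))
        if score < best_score:
            best_char, best_score = char, score
    return best_char if best_score <= reject_threshold else None
Rejecting a match when even the best score exceeds the threshold corresponds to the validity check mentioned at the end of the abstract.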
Claims(22)
That which is claimed is:
1. In a character reading system wherein an analog signal having a waveform unique to each character to be read is generated, said unique waveform including a sequence of varying peak amplitudes and times of occurrence of same corresponding to the shape of the character being read, means for evaluating said waveform to derive signals representing feature characteristics of the character recognized comprising:
a. means for detecting said analog peak amplitudes and the times of occurrence thereof;
b. first means responsive to said detected analog peak amplitudes to output a first feature signal representing the ratio of one of said analog peak amplitudes to a predetermined one of said analog peak amplitudes;
c. second means responsive to the detection of the times of occurrence of said analog peak amplitudes to output a second feature signal representing the ratio of the time of occurrence of one of said analog peak amplitudes to the time of occurrence of a predetermined one of said analog peak amplitudes;
d. means for storing first and second features representing corresponding feature characteristics of a plurality of unique reference characters;
e. and means for comparing said first and second feature signals with the reference signals of each unique reference character to produce an output indicating the unique reference character corresponding to the unique waveform.
2. The character reading system of claim 1 wherein said first output means includes an analog divider network for dividing each analog peak amplitude of said unique waveform by a predetermined one of said analog peak amplitudes to output a first feature signal representing the ratio of one of said peak amplitudes to the predetermined one of said peak amplitude.
3. The character reading system of claim 2 further including analog to digital conversion means connected to the output of said analog divider network to convert the first feature signal to a digital value.
4. The character reading system of claim 3 wherein said detecting means includes:
a. means for generating timing signals responsive to the start of the generation of the waveform;
b. first means receiving said timing signal to output a first digital value upon the occurrence of each peak amplitude representing the position of said peak amplitude;
c. second means connected to said first output means to output a second digital value representing the position of a predetermined one of said peak amplitudes;
d. and said second output means includes a digital divider network for dividing each of said first digital values by said second digital values to output a second feature signal representing the ratio of the position of each peak amplitude to the position of a predetermined one of said peak amplitudes.
5. The character reading system of claim 4 further including means for adding said first and second feature signals of each peak amplitude to output a third feature signal for use in analyzing said waveform.
6. The character reading system of claim 5 wherein said detecting means further includes;
a. digital counter means for receiving said timing signals to output a count of said timing signals;
b. said first output means includes a first storage means receiving the output of said counter means and responsive to the detection of each peak amplitude to store a count representing the time of occurrence of each of said peak amplitudes;
c. and said second output means includes a second storage means receiving the output of said counter means and responsive to the detection of said predetermined one of said peak amplitudes to store a count representing the position of said predetermined one of said peak amplitudes, said digital divider network receiving the output of said first and second storage means and responsive to the detection of each of said peak amplitudes to output said second feature signal of each peak amplitude.
7. The character reading system of claim 6 in which said detecting means further includes;
a. peak detector means outputting a signal upon detecting each peak amplitude of said waveform;
b. and a first peak sample and hold network connected to the output of said peak detector means to output a first control signal during the generation of the waveform upon the occurrence of the first peak amplitude, said digital divider network connected to the output of said sample and hold network to divide said first digital value by said second digital value upon receiving said first control signal.
8. The character reading system of claim 7 in which said detecting means further includes a peak sample and hold network connected to the output of said first peak sample and hold network and responsive to receiving said first control signal to hold the amplitude of the first peak in said waveform, said analog divider network connected to the output of said peak sample and hold network to divide the analog waveform by the amplitude of the first peak.
9. The character reading system of claim 7 in which said detecting means further includes peak detecting means for outputting a second control signal upon detecting each peak amplitude in said waveform, said analog to digital conversion means connected to the output of said peak detecting means and responsive to the generation of said second control signal to convert the output of said analog divider network to a digital value upon the occurrence of each peak amplitude.
10. A system for recognizing a unique analog waveform representing one of a plurality of unique reference characters comprising;
a. means for generating an analog waveform representing an unknown unique character, said waveform having a sequence of varying peak amplitudes and time of occurrence of same corresponding to a unique character;
b. first means for detecting each analog peak amplitude and time of occurrence of same in said analog waveform;
c. first means responsive to the detection of each peak amplitude for dividing each peak amplitude by a predetermined one of said peak amplitudes to output a first digital value representing a first feature characteristic of the unique character recognized;
d. second means responsive to the detection of the time of occurrence of each peak amplitude for dividing the time of occurrence of each peak amplitude by the time of occurrence of a predetermined one of said peak amplitudes to output a second digital value representing a second feature characteristic of the unique character recognized;
e. means for storing a plurality of reference digital values representing corresponding feature characteristics of a plurality of unique reference characters;
f. and means for comparing said first and second digital values with the reference values of each unique reference character to produce an output indicating the unique reference character corresponding to the analog waveform.
11. The character recognition system of claim 10 further including second means for detecting the start of the generation of said unique analog waveform and means responsive to the detection of the start of said analog waveform for generating timing signals.
12. The character recognition system of claim 11 in which said first detecting means includes;
a. peak detector means for outputting a first control signal upon detecting each peak amplitude in said analog waveform;
b. second means connected to said peak detector means for storing the peak amplitude of a predetermined one of said peak amplitudes;
c. and said first dividing means includes an analog divider network coupled to said second storing means and said generating means for dividing each peak amplitude by the amplitude of said predetermined peak amplitude to output a plurality of analog values each representing the ratio of one of the peak amplitudes to said predetermined peak amplitude.
13. The character recognition system of claim 12 wherein said second storage means includes a third storage means connected to said peak detector means to output a second control signal upon the occurrence of said predetermined peak amplitude.
14. The character recognition system of claim 13 wherein said third storage means comprises a first positive peak sample and hold network.
15. The character recognition system of claim 12 wherein said second storage means includes a peak sample and hold network.
16. The character recognition system of claim 12 in which said first dividing means further includes an analog to digital conversion network connected to the output of said analog divider network and said peak detector means and responsive to said first control signal to convert the analog output of said first dividing means to a first digital value.
17. The character recognition system of claim 12 in which said first detecting means further includes;
a. counter means receiving said timing signals and outputting a digital count of said timing signals;
b. fourth storage means connected to said counter means and said detector means for storing a first digital count representing the position of each peak amplitude in response to the output of said first control signal;
c. fifth storage means connected to said counter means and said third storage means and responsive to the outputting of said second control signal to store a second digital count representing the position of the predetermined peak amplitude;
d. and said second dividing means comprises a digital divider network coupled to said fourth and fifth storage means and responsive to said first control signal for dividing each of said first digital counts by said second digital counts to output said second digital values.
18. The character recognition system of claim 17 in which said comparing means includes;
a. means for subtracting said first and second digital values from the corresponding reference values of each reference character to output a third digital value;
b. means for adding the third digital values of each reference character to output a fourth digital value representing each reference character;
c. and means for evaluating each of said fourth digital values to output a signal indicating the reference character corresponding to the minimum fourth digital value.
19. A character reading system comprising:
a. means for detecting a plurality of analog peak amplitudes and time of occurrence thereof in an analog waveform representing an unknown unique character;
b. first means responsive to said detected analog peak amplitudes to output a first signal representing the ratio of one of said analog peak amplitudes to a predetermined one of said analog peak amplitudes;
c. second means responsive to the detection of the times of occurrence of said analog peak amplitudes to output a second signal representing the ratio of the time of occurrence of one said peak amplitude to the time of occurrence of a predetermined one of said analog peak amplitudes;
d. and means utilizing said first and second output signals to identify said unknown unique character.
20. A system for recognizing a unique multi-peak analog waveform representing one of a plurality of unique reference characters comprising:
a. means for detecting the amplitude and position of each peak in the analog waveform;
b. means for forming a plurality of first values representing the ratio of the amplitude of each peak in the waveform to the amplitude of a predetermined one of said peaks in the waveform;
c. means for forming a plurality of second values representing the ratio of the position of each peak in the waveform to the position of a predetermined one of said peaks in the waveform;
d. means for subtracting the first and second values of each peak from the reference values of a corresponding peak in a plurality of unique reference characters to output a third value;
e. and means for evaluating the set of third values to output a value representing the unique reference character corresponding to the analog waveform.
21. A system for recognizing a unique multi-peak analog waveform representing one of a plurality of unique reference characters comprising:
a. means for detecting the amplitude and position of each peak in the analog waveform;
b. means for forming a plurality of first values representing the ratio of the amplitude of each peak in the waveform to the amplitude of a predetermined one of said peaks in the waveform;
c. means for forming a plurality of second values representing the ratio of the position of each peak in the waveform to the position of a predetermined one of said peaks in the waveform;
d. means for storing a plurality of first and second values each representing a feature characteristic of a corresponding peak in a unique reference character;
e. means for subtracting the first and second values of the analog waveform from each of the corresponding first and second values of the unique reference character to output a plurality of third values for each unique reference character;
f. means for adding the third values of each unique reference character to output a fourth value;
g. and means for selecting the unique reference character represented by the minimum fourth value as the character corresponding to the analog waveform.
22. A system for recognizing a unique multi-peak analog waveform representing one of a plurality of unique reference characters comprising;
a. means for detecting the amplitude of each peak in the analog waveform;
b. means for detecting the position of each peak in the analog waveform;
c. means for dividing each peak amplitude by the amplitude of a predetermined one of said peaks in the analog waveform to output a first value;
d. means for dividing the position of each peak by the position of a predetermined one of said peaks in the analog waveform to output a second value;
e. means for storing a plurality of first and second values each comprising a feature characteristic of a corresponding peak in the waveform representing a unique reference character;
f. means for subtracting the first and second values of the analog waveform from the corresponding first and second values of each unique reference character to output third and fourth values for each unique reference character;
g. means for adding the third and fourth values of each unique reference character to output a fifth value;
h. and means for selecting the unique reference character represented by the minimum fifth value as the character corresponding to the analog waveform.
Description
BACKGROUND OF THE INVENTION
This invention relates to a character recognition system and, more particularly, to a system employing a single-gap magnetic read head for reading magnetized characters embodied in the form of E-13B character font.
In single-gap magnetic character reading systems, a single analog input waveform is obtained by passing the characters to be sensed, normally located on a document, beneath a magnetic read head at least as wide as the height of the characters and having a single flux gap. The signal generated by the read head is a derivative waveform representing the rate of change of magnetic flux traversing the head as the characters are scanned. Since the distribution of ink, and thus flux, associated with each different character is unique, the waveform derived for each different character uniquely identifies that character.
To simplify the timing of the waveform analysis process, the characters are provided with stylized geometric features which impart anticipatable timing characteristics to the derived waveforms. Thus, in accordance with this scheme, for reader identification, each character of the E-13B font is divided into a predetermined number of vertical segments. The characters are designed such that the distribution of ink undergoes significant change only at the boundaries between segments. Hence, peak fluctuations in the derived waveform caused by these variations in ink distribution can occur within predetermined time zones or windows during the character scan.
Prior character recognition systems have incorporated circuits for determining the amplitude of each of the peaks of the waveform which uniquely represents the character read. These peak amplitudes are normalized and then correlated with the known peak characteristics of each of the E-13B characters to identify the character read. An example of this type of recognition system may be found in U.S. Pat. No. 3,851,309. Critical to this type of recognition system is the method of timing the generation of the windows for sampling the waveform to detect the occurrence of each of the peaks in the waveform. In actual practice, it has been found that the characters imprinted on a document may be distorted such that portions of the symbol or character within the symbol outline are not covered with magnetic ink. Such a distortion may occur due to imperfections of the printing device employed to imprint a character on a document. Also, the pigment of the magnetic ink used by the printing device may not have been uniformly dispersed throughout the character outline. Such poorly defined or misprinted characters produce voltage waveforms that may resemble the waveform of a character other than the character that was intended to be printed, thereby causing a misread. Other errors may be introduced by variation of the speed of the document past the read head, thereby displacing the position of the peak from that found in the standard character. It has also been found that documents become splattered with ink particles during the printing process, which particles cause corresponding spurious signals in the reading head. All of these situations have caused mis-read operations in those recognition systems which are based solely on the correlation of peak amplitudes with a character reference standard.
It is therefore an object of this invention to provide an improved character recognition system which overcomes the above mentioned problems found in the prior art.
It is a further object of this invention to provide a character recognition system which functions independently of the speed of the characters past the read head.
It is a further object of this invention to utilize the maximum amount of information found in the waveform generated by the character that is read.
It is another object of this invention to provide a character recognition system which minimizes the effect of variations in ink intensity found in the characters that are to be read.
It is a still further object of this invention to provide a character recognition system which utilizes a plurality of parameters of the waveforms in determining the character read.
SUMMARY OF THE INVENTION
These and other objects are fulfilled by providing a character recognition system in which the amplitude and position of each peak in an analog waveform representing a character is detected. Each peak amplitude and position is then normalized by dividing each peak amplitude by the first peak amplitude read and each peak position by the first peak position. The digital values representing the ratio of each normalized maximum peak amplitude and its associated normalized peak position are compared with corresponding predetermined values for each of the reference characters, thereby generating a plurality of values representing the difference between each of the reference characters and the character being read. The two minimum values found in the correlation process are then selected and evaluated with a first threshold value to determine if a reference character can be selected given the data generated by the read head. If it is found that the difference between the two minimum values is sufficiently large, the reference character corresponding to the minimum value found in the correlation process is then selected as that character read by the read head. The minimum value of the selected reference character is then compared with a second threshold value to ensure that the magnitude of the generated data is sufficient to adequately recognize the character.
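By way of illustration only (it does not form part of the patent disclosure), the normalization step described above can be sketched in a few lines of PHP. The function and array names are assumptions made for this example; the two arrays are assumed to hold the detected peak amplitudes and positions in order of occurrence, starting with the first peak.
<?php
// Illustrative sketch only: normalize peak data by the first peak, as described above.
// $amplitudes and $positions are assumed to be parallel, zero-indexed arrays of the
// detected peak amplitudes and positions (times of occurrence).
function normalizePeaks(array $amplitudes, array $positions): array {
    $refAmp = $amplitudes[0]; // amplitude of the first peak
    $refPos = $positions[0];  // position of the first peak
    $normAmp = [];
    $normPos = [];
    foreach ($amplitudes as $i => $amp) {
        $normAmp[$i] = $amp / $refAmp;           // ratio to the first peak amplitude
        $normPos[$i] = $positions[$i] / $refPos; // ratio to the first peak position
    }
    return [$normAmp, $normPos];
}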
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and various other objects, advantages, and meritorious features of the present invention will be apparent from the following detailed description and appended claims when read in conjunction with the drawing, wherein like numerals identify corresponding elements.
FIGS. 1A and 1B taken together disclose a block diagram of the character recognition system in accordance with the present invention.
FIG. 2 is a logic diagram of the Character Start Module of FIG. 1.
FIG. 3 is a logic diagram of the Character Window Generator of FIG. 1.
FIG. 4 is a logic diagram of the First Positive Peak Sample and Hold Control Module of FIG. 1.
FIG. 5 is a diagram illustrating an ideal analog waveform of a character read by the read head together with the various timing pulses generated during the operation of the system.
FIG. 6 is a flowchart of the general overall operation of the recognition system in accordance with the present invention.
FIGS. 7A-7D inclusive are a flowchart of the operation of the recognition system in determining the character read by the read head utilizing the normalized data generated by the system shown in FIG. 1.
FIG. 8 is a table showing the data stored in the RAM storage area of the processor representing the normalized peak positions and amplitudes generated by the system shown in FIG. 1.
FIG. 9 is a table showing the data stored in the RAM storage area of the processor representing the window locations for the input normalized peaks based on their associated normalized peak positions.
FIG. 10 is a table showing the data stored in the RAM storage area of the processor representing the absolute difference between the input data and the reference data.
FIG. 11 is a table showing the data stored in the RAM storage area of the processor representing the numerical differences between the input data and the reference data.
FIG. 12 is a functional block diagram of the processing network shown in FIG. 1B.
FIG. 13 is a diagram illustrating the ideal (A) and actual (B) analog waveform of the character zero read by the read head showing a multiple peak condition located within a window position.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Referring now to FIGS. 1A and 1B, there is shown a block diagram of the character recognition system of the present invention. The system includes a single gap magnetic read head 20 which is positioned adjacent the path of movement of a document having characters printed thereon in magnetic ink. While the characters illustrated in the present embodiment are printed in the form of the E-13B character font which has been adopted by the American Bankers Association for use with banking checks in this country, it is obvious that the recognition system of the present invention can be used with any character font which results in a unique analog waveform when scanned by the read head 20. As the document moves past the read head 20 in the direction indicated by the arrow in FIG. 1A, the read head 20 will generate an analog electrical signal corresponding to the time derivative of the change in flux of the magnetized ink, which signal may be represented by the ideal analog waveform 24 in FIG. 5 or the actual analog waveform 25 in FIG. 13B. The amplified waveform is transmitted from the amplifier 22 through an analog filter 23 for filtering out noise to a plurality of modules which include a Negative Peak Detector Module 26 (FIG. 1A), a Positive Peak Detector Module 28, a Peak Sample and Hold Module 30, a Character Start Module 32 and an Analog Divider Module 34 which functions, with other elements disclosed in FIGS. 1A and 1B, to generate a plurality of digital values representing the ratios of the amplitude and position of each peak in the waveform 24 (FIG. 5) to the amplitude and position of the first positive peak of the waveform 24, respectively.
The Character Start Module 32 (FIG. 1A) is a network which detects the positive going edge of the character waveform after applying a threshold value to the waveform 24 (FIG. 5). An example of a circuit which may be utilized for the Character Start Module is shown in FIG. 2. Included in the network of the Start Module 32 is an analog comparator 36 whose positive input 38 is connected to the filter 23 for receiving the voltage level of the waveform and comparing this level with a threshold level developed from a pair of resistors R1, R2 connected in series, the comparator 36 outputting a digital pulse CS (FIG. 5) for use in initiating the processing of the character waveform 24.
The Positive Peak Detector Module 28 and the Negative Peak Detector Module 26 are of well-known construction and function to detect the time of occurrence of each of the peaks in the waveform 24 (FIG. 5). An example of a commercially available peak detector that may be used includes a Burr-Brown peak detector 4084/25. The Positive Peak Detector Module 28 will output a control pulse PPD (FIG. 5) upon detecting a positive peak in the waveform 24 while the Negative Peak Detector Module 26 will output a control pulse NPD (FIG. 5) upon detecting a negative peak in a manner well known in the art.
The Peak Sample and Hold Module 30 functions to hold the first peak analog value detected by the Positive Peak Detector Module 28, retaining this analog value for the time the waveform 24 (FIG. 5) is generated. An example of a commercially available peak sample and hold module is the Burr-Brown Sample and Hold Module SHC 85. The Analog Divider Module 34 is of well-known construction and divides one analog value by another analog value. An example of a commercially available divider module is the Burr-Brown analog divider 4291J.
The output pulse CS from the Character Start Module 32 is transmitted to a Character Window Generator 40 which opens a window (CW) shown in FIG. 5 extending over the length of time T the character waveform 24 is generated by the read head 20. As shown in FIG. 3, the Character Window Generator includes a commercially available 9602 monostable multivibrator 42 whose input signal CS received from the Character Start Module 32 is gated through an OR gate 44 to shift the multivibrator 42 so that the pulse CW (FIG. 5) will appear on the Q output 46 for a predetermined time.
Associated with the Positive Peak Detector Module 28 is a First Positive Peak Sample and Hold Control Module 48 which receives the first positive output pulse PPD (FIG. 5) from the Peak Detector Module 28 and produces a digital signal FPPSH which causes the Sample and Hold Module 30 to hold the first peak analog value of the waveform 24 (FIG. 5). The Module 30 functions to track the analog waveform 24 (FIG. 5) from the amplifier 22 until it receives the pulse FPPSH from the Module 48, whereupon the Module 30 will retain the voltage it had reached at that time. This voltage, which represents the amplitude of the first positive peak, is outputted to the Analog Divider Module 34 for use in normalizing each of the peak amplitudes in the waveform 24 (FIG. 5). As shown in FIG. 4, the First Positive Peak Sample and Hold Control Module 48 may comprise a commercially available 7474 flip flop 50 whose input receives the pulse PPD from the Positive Peak Detector Module 28 and which is reset by the pulse CW, generated by the Character Window Generator 40, when the pulse CW times out (FIG. 5) at the end of the generation of the waveform 24.
The output pulse PPD (FIG. 5) from the Positive Peak Detector Module 28 is gated through an OR gate 52 to a Peak Position latch 54 which is connected to a binary counter 56. As will be described more fully hereinafter, the binary counter 56 will count the number of clock pulses it receives from an AND gate 66 whose input is connected to a system clock and to the Character Window Generator 40. The Peak Position latch 54 is also operated by a pulse NPD (FIG. 5) generated by the Negative Peak Detector Module 26 which is transmitted through the OR gate 52 in the same manner as that of the output pulse PPD.
The Analog Divider Module 34 receives the first peak amplitude value FPP from the Peak Sample and Hold Module 30 and the analog waveform 24 (FIG. 5) from the amplifier 22. The Analog Divider Module 34 will divide the amplitude of the waveform 24 by the amplitude of the first positive peak, thereby producing a normalized value of the waveform 24. The output of the Analog Divider Module 34 is coupled to an Analog to Digital Converter Module 58 which, under the control of the Positive and Negative Peak Detector Modules 26, 28, converts the normalized analog value of the peak amplitude ratio developed in the Analog Divider Module 34 to a digital value which is then outputted over the bus 59 to the system processing network 60 (FIGS. 1B and 12) for use in recognizing the character represented by waveform 24 (FIG. 5) in a manner to be described more fully hereinafter.
The output of the Peak Position latch 54 is transmitted over bus 61 to a Digital Divider Module 62 (FIG. 1B) which is also connected to a First Positive Peak Position latch 64 which, upon receiving the signal FPPSH from the Sample and Hold Control Module 48, will hold the first positive peak position for the total time that the output pulse CW from the Character Window Generator 40 is high (FIG. 5). The Digital Divider Module 62 will divide the position of each peak in the waveform 24 by the position of the first peak, thereby normalizing each of the peak positions. The output of the Digital Divider Module 62 is transmitted over bus 57 to the system processing network 60 (FIGS. 1B and 12) for use by the character recognition system.
In operation, the Character Start Module 32 upon receiving the analog voltage output from the filter 23 will apply the threshold value to determine if the voltage is sufficiently large to be a character and not some other type of invalid character start voltage. Upon detecting the start of the waveform, the Module 32 will generate the pulse CS (FIGS. 1A and 5) that is transmitted to the Character Window Generator 40. The Generator 40 will output a corresponding pulse CW which, as shown in FIG. 5, goes high for a predetermined duration that is equal to the duration of the waveform 24. The pulse CW will enable an AND gate 66 to output a number of clock pulses 68 (FIG. 5) generated by a clock system 70 which may be of conventional construction and which is inputted to the AND gate 66. The pulse CW also enables, over conductor 72, the binary counter 56 which starts to count the clock pulses 68 (FIG. 1A) received from AND gate 66. The pulse CW also enables the First Positive Peak Sample and Hold Control Module 48 over conductor 74. Upon detecting the first positive peak in the waveform 24 (FIG. 5), the Positive Peak Detector Module 28 will output the control signal PPD to the First Positive Peak Sample and Hold Control Module 48 and the OR gate 52. Upon receiving the control signal PPD from the Module 28, the Control Module 48 will output a digital signal FPPSH which causes the Peak Sample and Hold Module 30 to retain an analog value representing the amplitude of the first peak detected in the waveform transmitted from the amplifier 22. The Peak Sample and Hold Module 30 will retain this analog value of the first peak for the time that the pulse CW is high (FIG. 5).
The control signal PPD generated by the Positive Peak Detector Module 28 is gated through the OR gate 52 to the Peak Position latch 54, thereby latching the output of the binary counter 56 at the time the peak was detected. Thus, the digital value in the Peak Position latch 54 is equal to the position of the first peak in the waveform 24 with respect to the start of the waveform. As shown in FIG. 5, this distance is indicated as t1 and the first peak is indicated as P1. As shown in FIGS. 1A and 1B, the signal FPPSH from the Control Module 48 is also outputted to the First Positive Peak Position latch 64 which will latch the digital value stored in the Peak Position latch 54 which, as described previously, represents at this time the position t1 (FIG. 5) of the first peak P1 in the waveform 24. Since the pulse FPPSH is high, the analog value of the first peak and the digital value of the position of the first peak will be outputted by the Hold Module 30 and the latch 64, respectively, during the time the waveform 24 is generated.
The analog waveform 24 (FIG. 5) is also transmitted from the filter 23 to the Analog Divider Module 34 which, upon receiving the analog value FPP representing the amplitude of the first peak from the Peak Sample and Hold Module 30, will divide the amplitude of the first positive peak by itself. The result of this division is converted to a digital value by the A/D Converter Module 58, enabled upon the generation of one of the pulses PPD, NPD which is gated through the OR gate 52. The digital value representing the amplitude of the first peak divided by the amplitude of the first peak is transmitted over bus 59 to the system processing networks 60 (FIGS. 1B and 12) for use in the character recognition system.
The outputting of one of the pulses PPD, NPD from the OR gate 52 to the Peak Position latch 54 and the A/D Converter Module 58 for generating a digital value representing the ratio of the peak amplitude to the first peak amplitude will also cause the Digital Divider Module 62 (FIG. 1B) to divide the position of each peak P1 -P8 (FIG. 5) in the waveform 24, which is stored in the Peak Position latch 54, by the position of the first positive peak which is stored in the First Positive Peak Position latch 64. The result of this division operation is transmitted, in the form of a digital value, to the processing networks 60 for use in the character recognition system. Upon completion of each division operation by the Digital Divider Module 62, a control signal DC (FIGS. 1B and 5) is transmitted over conductor 63 from the Digital Divider Module 62 to a control unit 87 (FIG. 12) of the processing network 60 for storing the result of the division in a RAM storage area 77 (FIG. 12) in a manner that will be described more fully hereinafter. In a similar manner, a control signal CC (FIGS. 1A and 1B) is generated by the A/D Converter Module 58 over conductor 65 to the processing networks 60 upon completion of converting the analog output of the Analog Divider Module 34 to a digital value for storage in the RAM storage area 76 (FIG. 12). This process is repeated for all the positive and negative peaks detected by the Detector Modules 28 and 26, with each of the normalized values of the peak being stored in the RAM storage areas 76 and 77 of the processing networks 60 for use in the character recognition system in a manner that will now be described.
Referring now to FIG. 6, there is shown a general flowchart of the operation of the recognition system in recognizing the character whose waveform 24 (FIG. 5) has been generated by the read head 20 (FIG. 1A). As disclosed above, a digital value representing the normalized peak amplitude of each of the peaks P1 -P8 (FIG. 5) in the ideal waveform 24 has been transmitted over the bus 59 from the A/D Converter Module 58 for storage in the RAM storage area 76 (FIG. 12). Additionally, a digital value representing the normalized peak position has been transmitted to the RAM storage area 77 from the Digital Divider module 62 (FIG. 1B). Utilizing these digital values, the recognition system will first determine (block 78) (FIG. 6) which windows of the waveform 24 the peaks generated by the read head 20 and represented by the normalized peak amplitude and position values should fit into. As is well known in the art, each E-13B character is designed to provide a predetermined number of peaks in the waveform where each peak occurs at a predetermined discrete time. As shown in FIG. 5, each peak P1 -P8 occurs at a corresponding time t1 -t8, wherein each of the peak occurrences falls within a corresponding window W1 -W8. As shown in FIG. 13(B), in actual practice, the read head 20, when reading the character zero 91 for example, will generate a waveform 25 in which the peaks are displaced when compared to the ideal waveform (FIG. 13A) due to variations in the speed of the document transport and also may include a number of peaks 93, 95 which fall within a single window. The present system will select the peak 95 having the maximum amplitude as the peak found in the window. Thus, block 78 provides that those peak values which are used in the character recognition system represent the required maximum peak values of the character being read.
After validating the values of the peak positions stored in the RAM storage area 77, the system determines the absolute numerical difference between the input digital values representing the peaks stored in the RAM storage areas 76 and 77 (FIG. 12) and the corresponding digital values for each of the 14 reference characters (block 80) stored in a ROM storage area 86. The system will then select the two minimum numerical values found from the 14 reference characters and determine the difference between them (block 82). This difference is then compared with a minimum value comprising a first threshold to ensure that a valid read operation can be successfully completed. If the two minimum values are so close that it is doubtful that a valid character can be recognized from the signal generated by the read head 20, a reject signal is produced indicating an invalid operation. If the difference between the two minimum values is greater than the threshold value, a character corresponding to that reference character which produced the minimum value is then selected as the character read by the read head 20 (block 84). The minimum value of the selected reference character is then compared with a second threshold value which provides the maximum limit of the minimum value that the system will utilize in selecting the reference character.
Referring now to FIGS. 7A-7D, there is shown a more detailed flowchart of the operation of the character recognition system in which the normalized values of the peak amplitude and position are received from the circuit disclosed in FIG. 1A and FIG. 1B, which values are then used in recognizing the character that is read by the read head 20. As shown in FIG. 12, included in the processing networks 60 are the RAM storage areas 76, 77 used for the storage of the input data and the temporary storage of values generated by the recognition system together with an Accumulator module 79 for adding binary numbers, a Subtractor module 81 for subtracting one binary number from another, a Minimum Value Logic network 83 for comparing two binary numbers and outputting the minimum binary number, a Comparator module 85 for outputting a signal indicating the coincidence or lack of coincidence between two binary numbers and a control unit 87 for generating a plurality of timing signals T1 -TN for operation of the modules shown in FIG. 12 in a manner that is well known in the art in response to receiving the control signal CW over conductor 72 from the Character Window Generator 40 (FIG. 1A), the signal CC over conductor 65 from the A/D Converter Module 58 and the signal DC over conductor 63 from the Digital Divider Module 62 (FIG. 1B). Also included is a ROM storage area 86 in which corresponding values of peak amplitude and position of each peak of the 14 reference characters are stored together with threshold values which are applied to the selected minimum value. Also included in the processing networks 60 are a number of storage registers which include an N register 88, an address register 89, an M register 90 and an R register 91. At the start of a character recognition operation, the system will clear all registers (block 92) (FIG. 7A) and will then check to see if the character window pulse CW (FIGS. 1A, 1B and 5) is high (block 94) indicating that the character waveform 24 (FIG. 5) has been sensed by the read head 20. If the pulse CW is low, the system will wait. When the pulse CW goes high, the system will check (block 96) to see if the analog value of the first peak amplitude has been converted to a digital value. As described earlier, this is accomplished by the generation of the pulse CC (FIGS. 1A and 5) which is transmitted to the control unit 87 (FIG. 12) from the A/D Converter Module 58 (FIG. 1A) signalling that a normalized peak amplitude value has been transmitted over the bus 59 to the RAM storage area 76. The system will then store (block 98) the first normalized peak amplitude value in RAM storage area 76 (FIG. 12) at address 0000 as shown in FIG. 8 which is a table representing a portion of the RAM storage area 76 showing the address and the contents stored at that address. The address of each of the locations of the values stored in the RAM storage area 76 is generated from the address register 89 (FIG. 12). After storing the normalized peak amplitude value in the RAM storage area 76 (FIG. 8), the address register 89 is incremented (block 100) and the operation of the Digital Divider module 62 is then checked (block 102) to determine if the digital value of the normalized peak position has been generated. If so generated, a pulse DC (FIGS. 1B and 5) will have been outputted to the control unit 87 over conductor 63 from the Divider module 62. The normalized value of the peak position received over bus 57 from the Digital Divider module 62 is then stored at the address 0001 (FIG.
8) in the RAM storage area 77. The address register 89 is then incremented (block 106) and the character window pulse CW is then checked to see if it is high (block 108). If the pulse CW is still high, the routine is repeated until all of the normalized values of the peak amplitude and position of each peak in the waveform 24 (FIG. 5) are stored in the RAM storage areas 76 and 77.
Having received the normalized values of the peak amplitude and position, the system may check to see if the values stored in the storage area 76, 77 represent the maximum peak amplitude in each window. The system will set (block 110) an N register 88 (FIG. 12) to one which register functions to generate a plurality of consecutive numbers N which are used by the system to address (block 112) the RAM storage area 77 containing N normalized peak positions (FIG. 8) to read the position of the peak corresponding to the peak number N. Utilizing each of the normalized peak positions stored in the RAM storage area 77 and knowing the location and width of each of the windows W1 -W8 (FIG. 5), stored in the ROM storage area 86 (FIG. 12), the system will locate in each of the eight possible windows, one of the peaks P1 -P8 (block 114) (FIG. 7B). The system stores in the RAM storage area 77 (FIG. 12) at each window address location, that peak number P1 -P8 found in such window. Referring to FIG. 9, there is shown a table representing that portion of the RAM storage area 77 in which the peaks of the waveform 24 are correlated with the windows that the system found them to be located. Upon inserting a peak number at a corresponding window address, a flag is set to one indicating that the window had already been determined to have a peak located therein. Thus after determining the location of the window in which a peak position is located, the system will check the flag to see if a previously selected peak position was found to fit into this window (block 116). If the flag is zero, the system will store (block 122) the peak number into the storage location (FIG. 9) in the RAM storage area 77 which represents the window number.
If, upon checking a window address location in the RAM storage area 77 (FIG. 9), the system finds that the flag is a one, indicating that a peak number has already been found in that window, it will look up the normalized peak amplitude value of the peak number stored (FIG. 8) at that window address (block 118) in the RAM storage area 76 and then compare the two peak amplitude values. It will then select (block 120) the maximum value of the peak amplitudes compared and store in the RAM storage area 76 (FIG. 9) the value of the maximum peak amplitude at that window address location. The system will then check to determine if all the peak positions have been processed (block 124). This is done by checking the output of the N register 88 (FIG. 12). If all the peak values have not been processed, the N register 88 is incremented (block 126) and the procedure is repeated until all the peak positions or values have been located within an appropriate window.
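As a rough illustration of the window-fitting step just described (again, not part of the patent itself), the selection of the maximum-amplitude peak within each window might be expressed in PHP as follows; the window boundaries, array layout and function name are assumptions made for the example.
<?php
// Illustrative sketch only: place each normalized peak position into one of the
// eight windows W1-W8 and keep only the largest-amplitude peak per window.
// $windows is an assumed zero-indexed array of [start, end] normalized-position ranges;
// $normPos and $normAmp are the normalized peak positions and amplitudes.
function assignPeaksToWindows(array $windows, array $normPos, array $normAmp): array {
    $best = array_fill(0, count($windows), null); // index of the chosen peak per window
    foreach ($normPos as $p => $pos) {
        foreach ($windows as $w => [$start, $end]) {
            if ($pos >= $start && $pos <= $end) {
                if ($best[$w] === null || $normAmp[$p] > $normAmp[$best[$w]]) {
                    $best[$w] = $p; // keep the peak with the maximum amplitude
                }
                break;
            }
        }
    }
    return $best;
}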
After determining that all the peak values now stored (FIG. 9) in the RAM storage areas 76 and 77 (FIG. 12) represent the maximum peak amplitudes found in the waveform 24 (FIG. 5), the system will now compare each of these peak values with the peak values in each of the 14 reference characters to determine the closest correlation between the character read and the reference characters. As shown in FIG. 12, there is included in the processing networks 60 an M register 90 which generates a plurality of addresses for use in storing the differences between the values of the amplitude and position of each peak found in each window with the value of the amplitude and position of each of the peaks of the 14 character references located in the corresponding window position. After setting the M register at address one (block 128) (FIG. 7B), the system will look up in the RAM storage area 76 the normalized peak amplitude value of the peak that was found in the window at address one (block 130), generate a binary number in the Subtractor module 81 (FIG. 12) representing the absolute difference between this amplitude value and the corresponding amplitude value of the peak of each of the 14 reference characters stored in the ROM storage area 86 which is found for this window number (block 132), and then store these 14 differences in another portion of the RAM storage area 76 (block 134). The table seen in FIG. 10 shows the arrangement of the storage of each of the 14 differences found in comparing the normalized peak amplitudes (NPA) and positions (NPP) with the 14 reference characters stored in the ROM storage area 86 relating to the corresponding window number. The system will then look up in that portion (FIG. 8) of the RAM storage area 77 the normalized peak position of the peak that is contained in the window at address one (block 136), generate again in the Subtractor module 81 the absolute difference between the normalized value of the peak position and the peak position in each of the 14 character references corresponding to the window at address one (block 138), and then store these 14 differences (block 140) in that portion (FIG. 10) of the RAM storage area 76 corresponding to the window at address one. The system will then check the M register 90 to determine if all the 8 windows have been processed (block 142). If not, the M address register 90 is incremented by one (block 144) and the procedure is repeated until all the differences between the peak values of each of the 14 reference characters and the waveform 24 (FIG. 5) have been determined for each window and stored in the RAM storage area 76 (FIG. 12).
After determining the absolute numerical differences between the normalized values of the 8 peak amplitudes and positions of the waveform 24 with those of the 14 reference characters (block 80) (FIG. 6), the system will now select the two minimum values from the 14 differences and generate a value representing the difference between the two (block 82) in order to determine if the data read by the read head 20 is sufficient to recognize a character from the 14 reference characters. As shown in FIG. 12, the processing networks 60 include an R register 91 which is used to generate a number of addresses corresponding to the reference characters that were compared to the character read by the read head 20. This register is set to one (block 146) (FIG. 7C). The output of the R register 91 is used to address the RAM storage area 76 (FIG. 12) to look up the values stored therein (FIG. 10) representing the difference between the normalized peak amplitude and position of the character read by the read head 20 with the values stored in the ROM storage area 86 of the peak amplitude and position of each of the reference characters corresponding to each of the windows. Using these values, the system will add in the Accumulator module 79 the normalized peak amplitude differences for each of the windows 1-8 for each reference character (block 148) (FIG. 7C), store this sum in the RAM storage area 76 (block 150), add the normalized peak position differences in each of the windows 1-8 for each reference character (block 152) in the accumulator 79, store this latter sum in the RAM storage area 76 (block 154) and then total in the Accumulator module 79 both the sums of the differences of the peak amplitude and positions for each of the windows and store this value in the RAM storage area 76 as shown in FIG. 11. The R register 91 (FIG. 12) is then checked (block 158) to determine if all 14 reference characters have been processed. If not, the R register 91 is incremented (block 159) and the procedure is then repeated until a value representing the total difference between the peak values of the character read by the read head 20 and each of the reference characters has been determined and stored in the RAM storage area 76 as shown in FIG. 11.
With the total difference between the peak values of the character that is read and each of the 14 reference characters now stored in the RAM storage area 76 (FIG. 11), the next step is to select from the differences associated with each of the 14 reference characters the two minimum values and then examine these values to see if they meet a pair of threshold requirements for recognition. These threshold requirements determine whether a recognition operation is feasible with respect to the data generated. If the difference between the two minimum values is below a predetermined number, it is assumed that the two reference characters represented by the minimum values are so close that a probability of error exists which would make the recognition of the character unacceptable under the circumstances. If the difference between the two reference characters is greater than this first threshold requirement, the reference character represented by the lowest of the two minimum differences is then selected as the character being read by the read head 20. This selected minimum value is then compared to a second threshold value. If the minimum value selected is greater than this second threshold value, it is doubtful that the data generated is of sufficient magnitude to provide a valid character recognition and therefore the recognition operation is terminated and a signal indicating the status of the operation is generated.
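The two-threshold decision described in the preceding paragraph can likewise be sketched illustratively in PHP; the array of per-character totals and the two threshold values are assumed inputs, and the function name does not appear in the patent.
<?php
// Illustrative sketch only: select the two smallest total differences and apply the
// two threshold tests described above. $totals is an assumed array of total
// difference values keyed by reference character.
function selectCharacter(array $totals, $separationThreshold, $maxThreshold) {
    asort($totals);                 // smallest difference first, keys preserved
    $keys = array_keys($totals);
    $first  = $keys[0];             // best candidate
    $second = $keys[1];             // runner-up
    if (($totals[$second] - $totals[$first]) < $separationThreshold) {
        return null;                // candidates too close together: reject
    }
    if ($totals[$first] > $maxThreshold) {
        return null;                // best match still too poor: reject
    }
    return $first;                  // the recognized reference character
}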
In carrying out this recognition procedure, the system will output to the Minimum Value Logic network 83 (FIG. 12) the values representing the total difference for each of the 14 reference characters stored in the RAM storage area 76. The Minimum Value Logic network 83 may be comprised of a number of 5-bit Comparators which compare two 5-bit words and output a signal indicating which word is less than the other. A commercial example of such a Comparator that may be used is Fairchild Semiconductor Comparator No. 9324. This Comparator, when used with the Fairchild Semiconductor 2-input Multiplexer 9322 in the circuit shown on Page 1-7 of the Fairchild Semiconductor TTL Application Handbook published August, 1973, is modified to produce the smallest of the words compared and can select the minimum value of any number of 4-bit words. Another circuit for selecting the two minimum values and applying threshold values to the selected minimum values that may be used is fully disclosed in the co-pending United States patent application of C. Shian entitled "Character Recognition Apparatus," NCR Docket No. 2395. The Minimum Value Logic network 83 will select the two minimum values stored in the table shown in FIG. 11 located in the RAM storage area 76 (block 160) (FIG. 7D) for transmission to the Subtractor module 81, then output from the Subtractor module 81 the absolute difference between the two selected minimum values (block 162), compare in the Comparator module 85 the value of this difference with a predetermined value stored in the ROM storage area 86 which represents a first threshold limit (block 164), and generate a reject signal (block 170) if this absolute difference is less than this threshold value, this signal being generated over line 174 (FIG. 1B) in the form of a pulse RC to a Character Recognition Indicator 176 which will display the reject signal. An example of a commercially available Comparator that may be used is the previously cited Fairchild Semiconductor Comparator No. 9324. If the difference generated between the two minimum values (block 162) is equal to or greater than the threshold value (block 164), the minimum value of the two reference characters selected is then compared in the Comparator 85 with a second threshold value (block 166) stored in the ROM storage area 86 (FIG. 12), which value represents the maximum limitation of the minimum value the system will tolerate before indicating a mis-read operation. If this minimum value of the reference character is greater than this second threshold value, again a reject signal is generated (block 170) to the Character Recognition Indicator 176 (FIG. 1B). If the minimum value is equal to or less than this second threshold value, the character that is recognized is that character which corresponds to this minimum value (block 168) and the recognized character is then displayed in the Character Recognition Indicator 176 (FIG. 1B) (block 172). The recognition process is now complete and the next waveform generated by the read head 20 scanning the document will then be transmitted to the system for the next character recognition operation.
While the preferred embodiment of the invention has been described in detail for recognizing characters in a standard E-13B character font, the character recognition system could be readily adapted by a person of ordinary skill in the art to recognize characters or symbols from any standard character font without departing from the spirit of the invention. Thus, while the present embodiment discloses the normalizing of each peak amplitude and position using the first peak amplitude and position, it is obvious that any other peak amplitude and position can be used. Furthermore, many changes in the details of the preferred embodiment may be made without departing from the spirit or scope of the invention as defined in the appended claims.
How to Create a User Role Management System With PHP MySQL
INTRODUCTION
THE PERMISSIONS HEADACHE
Welcome to a tutorial on how to create a PHP User Role Management System. So you have a project that needs to identify and restrict what each user is able to do? Creating a permissions structure is often a grueling task and a pain to integrate… But we shall walk through how to create a simple permissions structure in this guide, step-by-step. Read on to find out!
ⓘ I have included a zip file with all the source code at the start of this tutorial, so you don’t have to copy-paste everything… Or if you just want the code and skip the tutorial.
PREAMBLE
DOWNLOAD & NOTES
First, here is the download link to the source code as promised.
EXAMPLE CODE DOWNLOAD
Click here to download the source code, I have released it under the MIT license, so feel free to build on top of it or use it in your own project.
QUICK NOTES
• Download and unzip into your project folder.
• Create a database and import all the files in the sql folder.
• Change the database settings in 2a-core.php to your own.
• Follow along 2b-login.php and 2c-protection.php to see how it works.
If you spot a bug, please feel free to comment below. I try to answer questions too, but it is one person versus the entire world… If you need answers urgently, please check out my list of websites to get help with programming.
ASSUMPTIONS – AN EXISTING PROJECT
A permissions system will not make much sense as a “standalone”, so I shall assume here that most of you guys have an existing project, and looking for ways to build a permissions structure on top of it. We shall not go into how to create a user system – If you do not already have a user/login system, I will leave a link in the extras section below to my other guide.
Also, this guide will be in pure HTML, CSS, Javascript, and PHP – No third-party frameworks will be used, and that should make it much easier for everyone to integrate.
SECTION A
THE DATABASE
Let us now start with the foundations of the system, the database – Don’t worry if you have not created a users database yet, I shall provide a complete working example here.
TABLES OVERVIEW
There are 4 tables involved in this project:
• Permissions – To keep track of actions that require permission. For example, accessing the list of users, and creating new users.
• Roles – Names of the roles. For examples, administrators, editors, etc…
• Roles-Permissions – To specify which role has which permissions.
• Users – Your list of users and their roles.
THE PERMISSIONS TABLES
sql/1a-permissions.sql
-- Permissions: actions that require permission (e.g. access users, create new users)
CREATE TABLE `permissions` (
`perm_mod` varchar(5) NOT NULL,
`perm_id` int(11) NOT NULL,
`perm_desc` varchar(255) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `permissions`
ADD PRIMARY KEY (`perm_mod`,`perm_id`);
CREATE TABLE `roles` (
`role_id` int(11) NOT NULL,
`role_name` varchar(255) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `roles`
ADD PRIMARY KEY (`role_id`),
ADD UNIQUE KEY `role_name` (`role_name`);
ALTER TABLE `roles`
MODIFY `role_id` int(11) NOT NULL AUTO_INCREMENT;
CREATE TABLE `roles_permissions` (
`role_id` int(11) NOT NULL,
`perm_mod` varchar(5) NOT NULL,
`perm_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `roles_permissions`
ADD PRIMARY KEY (`role_id`,`perm_mod`,`perm_id`);
Permissions
Field | Description
perm_mod | The module, an abbreviation code up to 5 characters. For example “USR” for users, “INV” for inventory. Partial primary key.
perm_id | Permissions ID, just a running number. Partial primary key.
perm_desc | Permission description. For example, access inventory list, create a new user, etc…
Roles
Field | Description
role_id | Role ID, primary key and auto-increment.
role_name | Name of the role. For example, an administrator.
Role Permissions
Field | Description
role_id | Role ID, partial primary key.
perm_mod | Module code, partial primary key.
perm_id | Permission ID, partial primary key.
USERS TABLES
If you do not have a users table, here is a simple one that you can use.
sql/1b-users.sql
CREATE TABLE `users` (
`user_id` int(11) NOT NULL,
`user_email` varchar(255) NOT NULL,
`user_name` varchar(255) NOT NULL,
`user_password` varchar(255) NOT NULL,
`role_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `users`
ADD PRIMARY KEY (`user_id`),
ADD UNIQUE KEY `user_email` (`user_email`),
ADD KEY `user_name` (`user_name`);
ALTER TABLE `users`
MODIFY `user_id` int(11) NOT NULL AUTO_INCREMENT;
Field | Description
user_id | The user ID, auto-increment and primary key.
user_email | User’s email. Set to unique to prevent duplicate registrations.
user_name | The user’s name. Indexed for better search performance.
user_password | User password.
role_id | Role of the user.
P.S. If you already have an existing users table, just give each of your users a role_id.
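If you are retrofitting an existing table, a minimal sketch of that change could look like this – the default role of 1 is just an assumption, point it at whatever role makes sense for your project:
ALTER TABLE `users`
ADD `role_id` int(11) NOT NULL DEFAULT 1;
-- Optionally, point all existing users at a starting role.
UPDATE `users` SET `role_id` = 1;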
SAMPLE DATA
Finally, here is the dummy data that we will use as an example.
sql/1c-sample.sql
INSERT INTO `users` (`user_id`, `user_email`, `user_name`, `user_password`, `role_id`) VALUES
(1, '[email protected]', 'John Doe', '123456', 1),
(2, '[email protected]', 'Jane Doe', '123456', 2);
INSERT INTO `permissions` (`perm_mod`, `perm_id`, `perm_desc`) VALUES
('USR', 1, 'Access users'),
('USR', 2, 'Create new users'),
('USR', 3, 'Update users'),
('USR', 4, 'Delete users');
INSERT INTO `roles` (`role_id`, `role_name`) VALUES
(1, 'Administrator'),
(2, 'Power User');
INSERT INTO `roles_permissions` (`role_id`, `perm_mod`, `perm_id`) VALUES
(1, 'USR', 1),
(1, 'USR', 2),
(1, 'USR', 3),
(1, 'USR', 4),
(2, 'USR', 1);
SECTION B
HOW TO INTEGRATE
With the database foundations established, we shall now walk through how to put the permissions into the scripts.
STEP 1) THE CORE SCRIPT
2a-core.php
<?php
// (A) MUTE NOTICES
error_reporting(E_ALL & ~E_NOTICE);
// (B) DATABASE SETTINGS - CHANGE THESE TO YOUR OWN
define('DB_HOST', 'localhost');
define('DB_NAME', 'test');
define('DB_CHARSET', 'utf8');
define('DB_USER', 'root');
define('DB_PASSWORD', '');
// (C) CONNECT TO DATABASE
try {
$pdo = new PDO(
"mysql:host=" . DB_HOST . ";charset=" . DB_CHARSET . ";dbname=" . DB_NAME,
DB_USER, DB_PASSWORD, [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC, PDO::ATTR_EMULATE_PREPARES => false]
);
} catch (Exception $ex) {
print_r($ex);
die();
}
// (D) START SESSION
session_start();
Well, nothing much to this first script – You should already have these somewhere in your own project. Just make sure that you have a database connection and start the PHP session.
STEP 2) SAVE PERMISSIONS INTO PHP SESSION DURING LOGIN
2b-login.php
<?php
// (A) LET'S SAY THE LOGIN FORM POST TO THIS SCRIPT
$_POST = [
"email" => "[email protected]",
"password" => "123456"
];
// (B) WE FETCH THE USER FROM DATABASE & VERIFY THE PASSWORD
require "2a-core.php";
$stmt = $pdo->prepare("SELECT * FROM `users` LEFT JOIN `roles` USING (`role_id`) WHERE `user_email`=?");
$stmt->execute([$_POST['email']]);
$user = $stmt->fetchAll();
$pass = count($user)>0;
if ($pass) {
$pass = $user[0]['user_password'] == $_POST['password'];
}
// (C) IF VERIFIED - WE PUT THE USER & PERMISSIONS INTO THE SESSION
if ($pass) {
$_SESSION['user'] = $user[0];
$_SESSION['user']['permissions'] = [];
unset($_SESSION['user']['user_password']); // Security...
$stmt = $pdo->prepare("SELECT * FROM `roles_permissions` WHERE `role_id`=?");
$stmt->execute([$user[0]['role_id']]);
while ($row = $stmt->fetch(PDO::FETCH_NAMED)) {
if (!isset($_SESSION['user']['permissions'][$row['perm_mod']])) {
$_SESSION['user']['permissions'][$row['perm_mod']] = [];
}
$_SESSION['user']['permissions'][$row['perm_mod']][] = $row['perm_id'];
}
}
// (D) DONE!
echo $pass ? "OK" : "Invalid email/password" ;
echo "<br><br>SESSION DUMP<br>";
print_r($_SESSION);
This should be pretty straightforward – in your login process, simply do the usual email/password check. On top of that, fetch the user's permissions from the database and put them into the session.
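One caveat – to keep the demo short, the passwords here are stored and compared in plain text. In a real project you would more likely store a hash and verify it with PHP's built-in password functions. A rough sketch of how step (B) could look in that case (this is not part of the download, and it assumes user_password holds a hash created with password_hash()):
<?php
// (B-ALT) FETCH THE USER & VERIFY A HASHED PASSWORD
$stmt = $pdo->prepare("SELECT * FROM `users` LEFT JOIN `roles` USING (`role_id`) WHERE `user_email`=?");
$stmt->execute([$_POST['email']]);
$user = $stmt->fetchAll();
$pass = count($user)>0 && password_verify($_POST['password'], $user[0]['user_password']);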
STEP 3) PROTECT THE SCRIPTS
2c-protection.php
<?php
// (A) LET'S SAY THAT THIS SCRIPT IS USED TO UPDATE A USER
$_POST = [
"id" => 2,
"email" => "[email protected]",
"name" => "Joy Doe",
"password" => "123456",
"role" => 1
];
// (B) PERMISSIONS CHECK FUNCTION
// Keep this somewhere in your "core library".
function check ($module, $id) {
return in_array($id, $_SESSION['user']['permissions'][$module]);
}
// (C) WE WILL CHECK IF THE USER HAS PERMISSIONS TO DO SO FIRST
require "2a-core.php";
if (!check ("USR", 3)) {
die("NO PERMISSION TO ACCESS!");
}
// (D) PROCEED IF OK
try {
$stmt = $pdo->prepare("UPDATE `users` SET `user_email`=?, `user_name`=?, `user_password`=?, `role_id`=? WHERE `user_id`=?");
$stmt->execute([$_POST['email'], $_POST['name'], $_POST['password'], $_POST['role'], $_POST['id']]);
} catch (Exception $ex) {
print_r($ex);
die();
}
echo "UPDATE OK!";
Yep, it is as simple as that. In your own libraries/functions/scripts, just do a quick check on the session to see if the user has sufficient permissions… But even though this might be easy, it can be extremely tedious if you have hundreds of different functions. So sometimes, it is better to keep the permissions coarse-grained and not micro-manage every single function.
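On a side note, the check() function above assumes the module key always exists in the session. If a role has no permissions at all for a module, PHP will complain about the missing array key, so a slightly more defensive version could look like this – just a sketch, the download keeps the shorter one:
<?php
// (B-ALT) PERMISSIONS CHECK - FALSE IF THE MODULE IS NOT IN THE SESSION AT ALL
function check ($module, $id) {
return isset($_SESSION['user']['permissions'][$module])
&& in_array($id, $_SESSION['user']['permissions'][$module]);
}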
EXTRA
USEFUL BITS & LINKS
That’s it for all the code, and here are a few small extras that you may find to be useful.
SUMMARY
• Assign a module code to whatever you want to build next. For example, “INV” for inventory or “PDT” for products.
• List out all the functions that require permissions. For example, 1 for accessing the inventory list, 2 for adding new items, 3 for editing items, etc…
• Add the permissions to the database, and assign which user roles have the permissions – see the SQL sketch after this list for an example.
• Build your library, function, and/or scripts. But do a check with the $_SESSION['user']['permissions'] before actually processing it.
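Here is the SQL sketch mentioned above – a made-up “INV” inventory module with three permissions, all of them assigned to the Administrator role (role_id 1). Adjust the codes, descriptions, and role IDs to your own project:
INSERT INTO `permissions` (`perm_mod`, `perm_id`, `perm_desc`) VALUES
('INV', 1, 'Access inventory list'),
('INV', 2, 'Add new items'),
('INV', 3, 'Edit items');
INSERT INTO `roles_permissions` (`role_id`, `perm_mod`, `perm_id`) VALUES
(1, 'INV', 1),
(1, 'INV', 2),
(1, 'INV', 3);
After that, your inventory scripts simply call check("INV", 2) (or whichever permission applies) before doing the actual work.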
LINKS & REFERENCES
EXTRA
VIDEO TUTORIAL
For you guys who want more, here’s the video tutorial, and shameless self-promotion – Subscribe to the Code Boxx YouTube channel for more!
YOUTUBE TUTORIAL
CLOSING
WHAT’S NEXT?
Thank you for reading, and we have come to the end of this guide. I hope that it has helped you to better manage access permissions in your project. If you have anything to share with this guide, please feel free to comment below. Good luck, and happy coding!
|
__label__pos
| 0.960765 |
Lessons
How to use it
Every interactive whiteboard generally comes with software for creating presentations, made up of pages of images or slides, and multimedia lessons. All presentations and/or lessons created with the LIM share the same characteristic elements:
1. a blank stage on which to write with the pen, or onto which to drag images and other multimedia objects taken from a library;
2. a library of images, videos and animations that can be inserted into the stage;
3. a set of tools for writing, highlighting and drawing geometric shapes.
|
__label__pos
| 0.599701 |
SpaceRanger
OpenGL Basic Picking with Viewports
Problem: I can get my program to work (picking) with a single viewport (full screen or 1/2 width), but not with any viewport that changes the height of the viewport.

Goal: The goal of my program was to draw 4 rectangles in different viewports and then draw a grid in the one that was selected. For picking, the viewports are defined as follows (in the window area):

2 3
0 1

where viewport 0 is the lower left hand corner (glViewport(0,0,w/2,h/2)). The drawing all worked great and even moving the grid around worked via keyboard. When I added the picking I had problems. So I went back to just drawing and trying to select viewport 0. The selection of viewport 0 didn't work, but I got a hit when I clicked in viewport 2. So, I tried different viewports and as long as I didn't mess with the height, the selection worked properly. If I messed with the height, I had to click in the viewport above the desired viewport to get a hit.

Here is the drawFunction and some of the viewports I tried:

void drawFunction( GLenum nMode )
{
    GLsizei nWidth, nHeight;
    static GLfloat blueColor[] = { 0.0, 0.0, 1.0 };
    static GLfloat greenColor[] = { 0.0, 1.0, 0.0 };
    static GLfloat redColor[] = { 1.0, 0.0, 0.0 };

    nWidth = glutGet(GLUT_WINDOW_WIDTH);
    nHeight = glutGet(GLUT_WINDOW_HEIGHT);

    // ViewPort 0
    // Original Code (doesn't work). Have to click on upper left hand corner of wnd.
    glViewport( 0, 0, nWidth / 2.0f, nHeight / 2.0f );
    // Selection Works Properly
    //glViewport( 0, 0, nWidth , nHeight );
    // Selection Works Properly
    //glViewport( 0, 0, nWidth / 2.0, nHeight );
    // Have to click on the upper half of window.
    //glViewport( 0, 0, nWidth , nHeight / 2.0f );

    glColor3fv( blueColor );
    if ( nMode == GL_SELECT ) glPushName(0);
    glRectf( -2.0, -2.0, 2.0, 2.0 );
    if ( nMode == GL_SELECT ) glPopName();

    if ( grid_pos == 0 )
    {
        glColor3f( 1.0, 1.0, 1.0 );
        drawSquareGrid();
    }

    ... (commented out draws of other viewports)
}

My mouse input function is pretty standard:

GLvoid mouseInput( int nButton, int nState, int nX, int nY )
{
    fprintf( stdout, "mouseInput\n" );

#define BUFSIZE 9
    int pickWidth = 2;
    int pickHeight = 2;
    GLuint selectBuf[BUFSIZE];   // create the selection buffer
    GLint hits;
    GLint viewport[4];           // create a viewport

    // check for a left mouse click
    if (nButton == GLUT_LEFT_BUTTON && nState == GLUT_DOWN)
    {
        glGetIntegerv (GL_VIEWPORT, viewport);   // get the current viewport
        glSelectBuffer (BUFSIZE, selectBuf);     // set the select buffer
        (void) glRenderMode (GL_SELECT);         // put OpenGL in select mode
        glInitNames();                           // init the name stack
        glPushName(0);                           // push a fake id on the stack to prevent load error
        glPopName();                             // get the zero off the stack

        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        // setup a pick matrix
        gluPickMatrix ((GLdouble) nX, (GLdouble) (viewport[3] - nY),
                       (GLdouble) pickWidth, (GLdouble) pickHeight, viewport);
        SetPerspective();
        drawFunction (GL_SELECT);   // Draw to the pick matrix instead of our normal one
        glMatrixMode (GL_PROJECTION);
        glPopMatrix ();
        glFlush ();

        hits = glRenderMode (GL_RENDER);   // count the hits
        processHits (hits, selectBuf);     // check for object selection
        glutPostRedisplay();
    }
}

and processHits:

GLvoid processHits(GLint hits, GLuint buffer[])
{
    int y = 0;
    // unselect all
    if (hits > 0)   // Make sure there is at least one hit
    {
        y = buffer[3];
        grid_pos = y;
    }
}

and finally, SetPerspective, if it matters:

void SetPerspective(void)
{
    glOrtho( -2.0, 2.0, -2.0, 2.0, -1.0, 1.0 );
}
|
__label__pos
| 0.708767 |
Excel VBA Object Variables
Excel VBA Introduction Part 9 – Object Variables
DESCRIPTION
Object variables in VBA allow you to store references to objects in memory. They’re slightly more complex to use than basic data-type variables, but well worth the effort and this video explains why! You’ll learn how to declare object variables and how to set references to existing objects. The video also shows you how to return references to objects using the methods of other objects with examples including generating new workbooks and worksheets, as well as using the Find method to reference cells.
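As a quick taste of what the video covers, here is a minimal sketch of the idea – declaring object variables, assigning them with Set, and capturing the Range reference returned by Find. The workbook layout and search term are made up for this example:
Sub ObjectVariableDemo()
    ' Declare object variables of specific object types
    Dim wb As Workbook
    Dim ws As Worksheet
    Dim foundCell As Range

    ' Object references must be assigned with the Set keyword
    Set wb = Workbooks.Add          ' Workbooks.Add returns a reference to the new workbook
    Set ws = wb.Worksheets(1)
    ws.Range("A1:A3").Value = "Sample"

    ' Find returns a Range reference, or Nothing if there is no match
    Set foundCell = ws.Range("A1:A10").Find(What:="Sample")
    If Not foundCell Is Nothing Then
        MsgBox "Found a match at " & foundCell.Address
    End If
End Sub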
|
__label__pos
| 0.769755 |
Chaos Project
Title: [XP] Project CMS
Post by: lilbrudder917 on October 17, 2009, 01:07:53 pm
Project CMS
Authors: LilBrudder917
Version: 2.0
Type: Custom Menu System
Key Term: Custom Menu System
Introduction
This is my first script – a One-Man CMS with several windows that you can configure to be on or off. There are also several non-RTP scripts included, used specifically for the CMS.
Features
Screenshots
Can look something like this...: ShowHide
(http://img266.imageshack.us/img266/7989/type2o.png)
... to this.: ShowHide
(http://img12.imageshack.us/img12/7097/type1ifyes.png)
Demo
OUTDATED: Here (http://www.filefactory.com/file/a0h6649/n/Project_CMS_zip)
Script
Place above Main (and I'm guessing Custom Message Scripts).
#==============================================================================
# Project CMS by lilbrudder917
# Version 2.0
#------------------------------------------------------------------------------
# Overrides Scene_Menu and Window_MenuStatus. Rewrites the "Draw Level" parts of
# Window_Base to change settings.
#
# Version History:
# 2.0 : Now you can change the Window_Notes text with
# " $game_system.FirstLine = "_" "
# " $game_system.SecondLine = "_" "
# " $game_system.ThirdLine = "_" "
# " $game_system.FourthLine = "_" "
# " $game_system.FifthLine = "_" "
#==============================================================================
module ProCMS
#==============================================================================
# CONFIGURATION
#==============================================================================
ItemName = "Inventory" # Default is Item
SkillName = "Cast Spell" # Default is Skill
EquipName = "Wear" # Default is Equip
StatusName = "Status" # Default is Status
SaveName = "Save Data" # Default is Save
EndName = "Quit Game" # Default is End Game
CommandW = 150 # Width of the Command Window
Custom_Map = false # If true, you can have a picture as a background.
Map_Picture = "mappic" # If Custom_Map = true, this will be the background.
Facesets = true # If true and the face file is missing, the actor's
#sprite will be used instead
FaceIMG = "Face" # Filename for the face image.
FFILETYPE = ".png" # Face File Type
CornerLogo = true # In the bottom right corner, want a 148x61 icon?
LogoIcon = "logoicon" # Name of icon
LogoFType = ".png" # Icon File Type
MAXACTORS = 1 # 1 Seriously Recommended, otherwise you'll need to
# do a lot of huge editing.
#=============================================================================
# Scene_Menu Window Add-ons
#=============================================================================
MapBG = true # If true, the map will be your background
UseNotes = true # To use Window_Notes, have this true.
UsePTime = true # To use Window_PlayTime, have this true.
UseRTime = true # To use Window_RealTime, have this true.
RTHVar = 2 # Variable used for storing hours
RTMVar = 3 # Variable used for storing minutes
RTAPVar= 4 # Variable used for storing AM/PM
RTSVar = 5 # Variable used for storing seconds
TwelClock = true # 12-Hour Clock if true, false = 24-Hour Clock
UseVar = true # To use Window_Variable, have this true.
UseSteps = true # To use Window_Steps, have this true.
UseGold = true # To use Window_Gold, have this true.
UseLoca = true # To use Window_Location, have this true.
###############################################################################
# Coordinate Controls #
#-----------------------------------------------------------------------------#
# Unless you know what you are doing, I don't recommend touching these. #
# These are the display settings of Window_MenuStatus. #
###############################################################################
ShowName = true # Show Actor's Name?
NameX = 135 # Actor_Name X Position
NameY = 0 # Actor_Name Y Position
ShowClass = true # Show Actor's Class?
ClassX = 128 # Actor_Class X Position
ClassY = 30 # Actor_Class Y Position
ShowLevel = true # Show Actor's Level?
LevelX = 0 # Actor_Level X Position
LevelY = 139 # Actor_Level Y Position
LevelT = "Level" # Custom Title for Level?
ShowState = true # Show Actor_State?
StateX = 123 # Actor_State X Position
StateY = 50 # Actor_State Y Position
ShowHP = true # Show HP/MaxHP String?
UseBARS = nil # Coming Soon
HPX = 0 # Hitpoints String X Position
HPY = 100 # Hitpoints String Y Position
ShowSP = true # Show SP/MaxSP String?
SpecX = 0 # Specpoints String X Position
SpecY = 125 # Specpoints String Y Position
ShowEXP = true # Show Experience String?
ExperX = 0 # Experience String X Position
ExperY = 185 # Experience String Y Position
CommandX = 488 # Window_Command X Position
CommandY = 0 # Window_Command Y Position
WINNOTES_X= 2 # Window_Notes X Position
WINNOTES_Y= 242 # Window_Notes Y Position
RealTimeX = 488 # Window_RealTime X Position
RealTimeY = 320 # Window_RealTime Y Position
PlayTimeX = 488 # Window_PlayTime X Position
PlayTimeY = 224 # Window_PlayTime Y Position
VariableX = 243 # Window_Variable X Position
VariableY = 66 # Window_Variable Y Position
WStepX = 243 # Window_Steps X Position
WStepY = 163 # Window_Steps Y Position
WGoldX = 243 # Window_Gold X Position
WGoldY = 2 # Window_Gold Y Position
PLogoX = 490 # Game_Logo X Position
PLogoY = 418 # Game_Logo Y Position
WMStatusX = 2 # Window_MenuStatus X Position
WMStatusY = 2 # Window_MenuStatus Y Position
LOCATION_X= 2
LOCATION_Y= 430
VariTitle = "Bank" # Text in Window_Variable
VariShoNum= 1 # Variable used in Window_Variable
GoldName = "Gold" # Currency Name
StepName = "Steps" # Text in Window_Steps
PlTiName = "Play Time"# Text in Window_PlayTime
NoteVName = "Notes" # Text in Window_Notes
CurTiName = "Time" # Text in Window_RealTime
LocaName = "Location:"# Text in Window_Location
end
#-------------------------------------------------------------------------------
# End of Menu Configuration
#-------------------------------------------------------------------------------
class Game_System
attr_accessor :FirstLine
attr_accessor :SecondLine
attr_accessor :ThirdLine
attr_accessor :FourthLine
attr_accessor :FifthLine
alias windownotes_words initialize
def initialize
#-------------------------------------------------------------------------------
# Window_Notes Default Configuration
#-------------------------------------------------------------------------------
@FirstLine = "Welcome to the Menu! This Window is called"
@SecondLine = "the Notes Window! Text automatically aligns"
@ThirdLine = "to the right side of the Window, and the"
@FourthLine = "Window stores up to 5 lines per message!"
@FifthLine = " "
windownotes_words
end
end
#==============================================================================
# ** Window_MenuStatus
#------------------------------------------------------------------------------
# This window displays party member status on the menu screen.
#==============================================================================
class Window_MenuStatus < Window_Selectable
#--------------------------------------------------------------------------
# * Object Initialization
#--------------------------------------------------------------------------
def initialize
super(0, 0, 240, 240)
self.contents = Bitmap.new(width - 32, height - 32)
refresh
self.active = false
self.index = -1
end
#--------------------------------------------------------------------------
# * Refresh
#--------------------------------------------------------------------------
def refresh
self.contents.clear
@item_max = $game_party.actors.size
for i in 0...$game_party.actors.size
x = 0
y = 0
actor = $game_party.actors[i]
if ProCMS::Facesets == true #Facesets True?
if FileTest.exist?("Graphics/Pictures/" + ProCMS::FaceIMG + ProCMS::FFILETYPE) #File Exist
@face_file = ProCMS::FaceIMG + ProCMS::FFILETYPE
self.contents.blt(x, y, RPG::Cache.picture(@face_file), Rect.new(x, y, 112, 112))
else #File Not Exist
draw_actor_graphic(actor, 24, 56)
end #End File Check
else #Facesets Else
draw_actor_graphic(actor, 24, 56)
end #Faceset End
if ProCMS::ShowName == true
draw_actor_name(actor, ProCMS::NameX, ProCMS::NameY)
end
if ProCMS::ShowLevel == true
draw_actor_level(actor, 0, 0)
end
if ProCMS::ShowState == true
draw_actor_state(actor, ProCMS::StateX, ProCMS::StateY)
end
if ProCMS::ShowEXP == true
draw_actor_exp(actor, ProCMS::ExperX, ProCMS::ExperY)
end
if ProCMS::ShowClass == true
draw_actor_class(actor, ProCMS::ClassX, ProCMS::ClassY)
end
if ProCMS::ShowHP == true
draw_actor_hp(actor, ProCMS::HPX, ProCMS::HPY)
end
if ProCMS::ShowSP == true
draw_actor_sp(actor, ProCMS::SpecX, ProCMS::SpecY)
end
end
end
#--------------------------------------------------------------------------
# * Cursor Rectangle Update
#--------------------------------------------------------------------------
def update_cursor_rect
if @index < 0
self.cursor_rect.empty
else
self.cursor_rect.set(0, @index * 116, self.width - 32, 96)
end
end
end
#==============================================================================
# End Window_MenuStatus
#==============================================================================
# Disallowing a Custom Map and the Map for a background.
if ProCMS::MapBG && ProCMS::Custom_Map == true
print "Both MapBG and Custom_Map cannot be on at the same time! Turning off Custom_Map."
ProCMS::Custom_Map = false
end
class Game_Party
$actorsize = ProCMS::MAXACTORS
end
#==============================================================================
# Window Base Level Edit
#==============================================================================
class Window_Base
def draw_actor_level(actor, x, y)
self.contents.font.color = system_color
self.contents.draw_text(ProCMS::LevelX, 160, 32, 32, ProCMS::LevelT, 2)
self.contents.font.color = normal_color
self.contents.draw_text(ProCMS::LevelX + 32, 160, 24, 32, actor.level.to_s, 2)
end
end
#=============================================================================
# Game Party Limit
#=============================================================================
class Game_Party
def add_actor(actor_id)
# Get actor
actor = $game_actors[actor_id]
# If the party has less than 4 members and this actor is not in the party
if @actors.size < 1 and not @actors.include?(actor)
# Add actor
@actors.push(actor)
# Refresh player
$game_player.refresh
end
end
end
class PCMS_Steps < Window_Base
#--------------------------------------------------------------------------
# * Object Initialization
#--------------------------------------------------------------------------
def initialize
super(0, 0, 245, 78)
self.contents = Bitmap.new(width - 32, height - 32)
refresh
end
#--------------------------------------------------------------------------
# * Refresh
#--------------------------------------------------------------------------
def refresh
self.contents.clear
self.contents.font.color = system_color
self.contents.draw_text(0, -5, 120, 32, ProCMS::StepName)
self.contents.font.color = normal_color
self.contents.draw_text(4, 16, 200, 32, $game_party.steps.to_s, 2)
end
end
#==============================================================================
# ** Window_Gold
#------------------------------------------------------------------------------
# This window displays amount of gold.
#==============================================================================
class PCMS_Gold < Window_Base
#--------------------------------------------------------------------------
# * Object Initialization
#--------------------------------------------------------------------------
def initialize
super(0, 0, 245, 64)
self.contents = Bitmap.new(width - 32, height - 32)
refresh
end
#--------------------------------------------------------------------------
# * Refresh
#--------------------------------------------------------------------------
def refresh
self.contents.clear
cx = contents.text_size(ProCMS::GoldName).width
self.contents.font.color = normal_color
self.contents.draw_text(4, 0, 180-cx-2, 32, $game_party.gold.to_s, 2)
self.contents.font.color = system_color
self.contents.draw_text(184-cx, 0, cx, 32, ProCMS::GoldName, 2)
end
end
#==============================================================================
# ** Window_PlayTime
#------------------------------------------------------------------------------
# This window displays play time on the menu screen.
#==============================================================================
class PCMS_PlayTime < Window_Base
#--------------------------------------------------------------------------
# * Object Initialization
#--------------------------------------------------------------------------
def initialize
super(0, 0, 150, 96)
self.contents = Bitmap.new(width - 32, height - 32)
refresh
end
#--------------------------------------------------------------------------
# * Refresh
#--------------------------------------------------------------------------
def refresh
self.contents.clear
self.contents.font.color = system_color
self.contents.draw_text(4, 0, 100, 32, ProCMS::PlTiName)
@total_sec = Graphics.frame_count / Graphics.frame_rate
hour = @total_sec / 60 / 60
min = @total_sec / 60 % 60
sec = @total_sec % 60
text = sprintf("%02d:%02d:%02d", hour, min, sec)
self.contents.font.color = normal_color
self.contents.draw_text(4, 32, 100, 32, text, 2)
end
#--------------------------------------------------------------------------
# * Frame Update
#--------------------------------------------------------------------------
def update
super
if Graphics.frame_count / Graphics.frame_rate != @total_sec
refresh
end
end
end
#=============================================================================
# End Window_Steps
#==============================================================================
#==============================================================================
# ** Window_Notes by lilbrudder917
#------------------------------------------------------------------------------
# This window displays custom-made notes made for the menu screen, but
# can be called anywhere.
#==============================================================================
class Window_Notes < Window_Base
#--------------------------------------------------------------------------
# * Object Initialization
#--------------------------------------------------------------------------
def initialize
super(240, 20, 485, 190)
self.contents = Bitmap.new(width - 32, height - 32)
refresh
end
#--------------------------------------------------------------------------
# * Refresh
#--------------------------------------------------------------------------
def refresh
self.contents.clear
self.contents.font.color = system_color
self.contents.draw_text(4, 0, 120, 32, ProCMS::NoteVName)
self.contents.font.color = normal_color
self.contents.draw_text(4, 32, 350, 32, $game_system.FirstLine, 2)
self.contents.draw_text(4, 64, 350, 32, $game_system.SecondLine, 2)
self.contents.draw_text(4, 96, 350, 32, $game_system.ThirdLine, 2)
self.contents.draw_text(4, 128, 350, 32, $game_system.FourthLine, 2)
self.contents.draw_text(4, 160, 350, 32, $game_system.FifthLine, 2)
end
#--------------------------------------------------------------------------
# * Frame Update
#--------------------------------------------------------------------------
def update
super
refresh
end
end
#==============================================================================
# ** Window_Location by lilbrudder917
#------------------------------------------------------------------------------
# This window displays the map's name.
#==============================================================================
class Window_Location < Window_Base
#--------------------------------------------------------------------------
# * Object Initialization
#--------------------------------------------------------------------------
def initialize
super(0, 0, 485, 48)
self.contents = Bitmap.new(width - 32, height - 32)
refresh
end
#--------------------------------------------------------------------------
# * Refresh
#--------------------------------------------------------------------------
def refresh
self.contents.clear
self.contents.font.color = system_color
self.contents.draw_text(0, -10, 120, 32, ProCMS::LocaName)
self.contents.font.color = normal_color
cx = contents.text_size(ProCMS::LocaName).width
self.contents.draw_text(cx+30, -10, 120, 32, $game_map.name)
end
#--------------------------------------------------------------------------
# * Frame Update
#--------------------------------------------------------------------------
def update
super
refresh
end
end
#-------------------------------------------------------------------------
# * Define Map Title
#-------------------------------------------------------------------------
class Scene_Title
alias locationname main
def main
locationname
$map_infos = load_data('Data/MapInfos.rxdata')
$map_infos.keys.each {|key| $map_infos[key] = $map_infos[key].name}
end
end
class Game_Map
def name
return $map_infos[@map_id]
end
end
#==============================================================================
# ** Window_Variable by lilbrudder917
#------------------------------------------------------------------------------
# This window displays a variable made for the menu screen, but can be called
# anywhere.
#==============================================================================
class Window_Variable < Window_Base
#--------------------------------------------------------------------------
# * Object Initialization
#--------------------------------------------------------------------------
def initialize
super(0, 0, 245, 96)
self.contents = Bitmap.new(width - 32, height - 32)
refresh
end
#--------------------------------------------------------------------------
# * Refresh
#--------------------------------------------------------------------------
def refresh
self.contents.clear
self.contents.font.color = system_color
self.contents.draw_text(4, 0, 120, 32, ProCMS::VariTitle)
text = "#{$game_variables[ProCMS::VariShoNum]}"
self.contents.font.color = normal_color
self.contents.draw_text(4, 32, 120, 32, text, 2)
end
#--------------------------------------------------------------------------
# * Frame Update
#--------------------------------------------------------------------------
def update
super
refresh
end
end
#==============================================================================
# ** Window_RealTime by lilbrudder917
#------------------------------------------------------------------------------
# This window displays the time stored on your computer's internal clock,
# made for the menu screen, but can be called anywhere.
#==============================================================================
class Window_RealTime < Window_Base
#--------------------------------------------------------------------------
# * Object Initialization
#--------------------------------------------------------------------------
def initialize
@time_stamp = Time.new
if ProCMS::TwelClock == true
$game_variables[ProCMS::RTHVar] = @time_stamp.strftime("%I") # Hour, 12-Hour Format
$game_variables[ProCMS::RTMVar] = @time_stamp.strftime("%M") # Minutes
$game_variables[ProCMS::RTAPVar] = @time_stamp.strftime("%p")# AM/PM
$game_variables[ProCMS::RTSVar] = @time_stamp.strftime("%S") # Seconds
else
$game_variables[ProCMS::RTHVar] = @time_stamp.strftime("%H") # Hour, 24-Hour Format
$game_variables[ProCMS::RTMVar] = @time_stamp.strftime("%M") # Minutes
$game_variables[ProCMS::RTSVar] = @time_stamp.strftime("%S") # Seconds
end
super(0, 0, 150, 96)
self.contents = Bitmap.new(width - 32, height - 32)
refresh
end
#--------------------------------------------------------------------------
# * Refresh
#--------------------------------------------------------------------------
def refresh
self.contents.clear
self.contents.font.color = system_color
self.contents.draw_text(4, 0, 100, 32, ProCMS::CurTiName)
if ProCMS::TwelClock == true
text = "#{$game_variables[ProCMS::RTHVar]}: #{$game_variables[ProCMS::RTMVar]}: #{$game_variables[ProCMS::RTSVar]} #{$game_variables[ProCMS::RTAPVar]}"
else
text = "#{$game_variables[ProCMS::RTHVar]}: #{$game_variables[ProCMS::RTMVar]}: #{$game_variables[ProCMS::RTSVar]}"
end
self.contents.font.color = normal_color
self.contents.draw_text(4, 32, 100, 32, text, 2)
end
#--------------------------------------------------------------------------
# * Frame Update
#--------------------------------------------------------------------------
def update
super
refresh
end
end
class Scene_Menu
#--------------------------------------------------------------------------
# * Object Initialization
# menu_index : command cursor's initial position
#--------------------------------------------------------------------------
def initialize(menu_index = 0)
@menu_index = menu_index
end
#--------------------------------------------------------------------------
# * Main Processing
#--------------------------------------------------------------------------
def main
#--------------------------------------------------------------------------
# * Menu Background
#--------------------------------------------------------------------------
if ProCMS::MapBG == true
@map = Spriteset_Map.new
end
if ProCMS::Custom_Map == true
@sprite = Sprite.new
@sprite.bitmap = RPG::Cache.picture(ProCMS::Map_Picture)
end
# Make command window
s1 = ProCMS::ItemName
s2 = ProCMS::SkillName
s3 = ProCMS::EquipName
s4 = ProCMS::StatusName
s5 = ProCMS::SaveName
s6 = ProCMS::EndName
@command_window = Window_Command.new(ProCMS::CommandW, [s1, s2, s3, s4, s5, s6])
@command_window.x = ProCMS::CommandX
@command_window.y = ProCMS::CommandY
@command_window.index = 0
# If number of party members is 0
if $game_party.actors.size == 0
# Disable items, skills, equipment, and status
@command_window.disable_item(0)
@command_window.disable_item(1)
@command_window.disable_item(2)
@command_window.disable_item(3)
end
# If save is forbidden
if $game_system.save_disabled
# Disable save
@command_window.disable_item(4)
end
if ProCMS::UseNotes == true
@mnotes_window = Window_Notes.new
@mnotes_window.x = ProCMS::WINNOTES_X
@mnotes_window.y = ProCMS::WINNOTES_Y
end
if ProCMS::UseLoca == true
@location_window = Window_Location.new
@location_window.x = ProCMS::LOCATION_X
@location_window.y = ProCMS::LOCATION_Y
end
# Make play time window
if ProCMS::UsePTime == true
@playtime_window = PCMS_PlayTime.new
@playtime_window.x = ProCMS::PlayTimeX
@playtime_window.y = ProCMS::PlayTimeY
end
if ProCMS::UseRTime == true
@realtime_window = Window_RealTime.new
@realtime_window.x = ProCMS::RealTimeX
@realtime_window.y = ProCMS::RealTimeY
end
if ProCMS::UseVar == true
@vartime_window = Window_Variable.new
@vartime_window.x = ProCMS::VariableX
@vartime_window.y = ProCMS::VariableY
end
#Make steps window
if ProCMS::UseSteps ==true
@steps_window = PCMS_Steps.new
@steps_window.x = ProCMS::WStepX
@steps_window.y = ProCMS::WStepY
end
# Make gold window
if ProCMS::UseGold == true
@gold_window = PCMS_Gold.new
@gold_window.x = ProCMS::WGoldX
@gold_window.y = ProCMS::WGoldY
end
if ProCMS::CornerLogo == true
@image = Sprite.new
@image.bitmap = RPG::Cache.picture(ProCMS::LogoIcon)
@image.x = ProCMS::PLogoX
@image.y = ProCMS::PLogoY
end
# Make status window
@status_window = Window_MenuStatus.new
@status_window.x = ProCMS::WMStatusX
@status_window.y = ProCMS::WMStatusY
# Execute transition
Graphics.transition
# Main loop
loop do
# Update game screen
Graphics.update
# Update input information
Input.update
# Frame update
update
# Abort loop if screen is changed
if $scene != self
break
end
end
# Prepare for transition
Graphics.freeze
# Dispose of windows
@command_window.dispose
if ProCMS::MapBG == true
@map.dispose
end
if ProCMS::UseLoca == true
@location_window.dispose
end
if ProCMS::UseNotes == true
@mnotes_window.dispose
end
if ProCMS::UsePTime == true
@playtime_window.dispose
end
if ProCMS::UseRTime == true
@realtime_window.dispose
end
if ProCMS::UseVar == true
@vartime_window.dispose
end
if ProCMS::UseSteps ==true
@steps_window.dispose
end
if ProCMS::UseGold == true
@gold_window.dispose
end
if ProCMS::CornerLogo == true
@image.dispose
end
@status_window.dispose
end
#--------------------------------------------------------------------------
# * Frame Update
#--------------------------------------------------------------------------
def update
# Update windows
@command_window.update
if ProCMS::UseNotes == true
@mnotes_window.update
end
if ProCMS::UseLoca == true
@location_window.update
end
if ProCMS::UsePTime == true
@playtime_window.update
end
if ProCMS::UseRTime == true
@realtime_window.update
end
if ProCMS::UseVar == true
@vartime_window.update
end
if ProCMS::UseSteps ==true
@steps_window.update
end
if ProCMS::UseGold == true
@gold_window.update
end
if ProCMS::CornerLogo == true
@image.update
end
@status_window.update
# If command window is active: call update_command
if @command_window.active
update_command
return
end
# If status window is active: call update_status
if @status_window.active
update_status
return
end
end
#--------------------------------------------------------------------------
# * Frame Update (when command window is active)
#--------------------------------------------------------------------------
def update_command
# If B button was pressed
if Input.trigger?(Input::B)
# Play cancel SE
$game_system.se_play($data_system.cancel_se)
# Switch to map screen
$scene = Scene_Map.new
return
end
# If C button was pressed
if Input.trigger?(Input::C)
# If command other than save or end game, and party members = 0
if $game_party.actors.size == 0 and @command_window.index < 4
# Play buzzer SE
$game_system.se_play($data_system.buzzer_se)
return
end
# Branch by command window cursor position
case @command_window.index
when 0 # item
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Switch to item screen
$scene = Scene_Item.new
when 1 # skill
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Make status window active
@command_window.active = false
@status_window.active = true
@status_window.index = 0
when 2 # equipment
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Make status window active
@command_window.active = false
@status_window.active = true
@status_window.index = 0
when 3 # status
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Make status window active
@command_window.active = false
@status_window.active = true
@status_window.index = 0
when 4 # save
# If saving is forbidden
if $game_system.save_disabled
# Play buzzer SE
$game_system.se_play($data_system.buzzer_se)
return
end
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Switch to save screen
$scene = Scene_Save.new
when 5 # end game
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Switch to end game screen
$scene = Scene_End.new
end
return
end
end
#--------------------------------------------------------------------------
# * Frame Update (when status window is active)
#--------------------------------------------------------------------------
def update_status
# If B button was pressed
if Input.trigger?(Input::B)
# Play cancel SE
$game_system.se_play($data_system.cancel_se)
# Make command window active
@command_window.active = true
@status_window.active = false
@status_window.index = -1
return
end
# If C button was pressed
if Input.trigger?(Input::C)
# Branch by command window cursor position
case @command_window.index
when 1 # skill
# If this actor's action limit is 2 or more
if $game_party.actors[@status_window.index].restriction >= 2
# Play buzzer SE
$game_system.se_play($data_system.buzzer_se)
return
end
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Switch to skill screen
$scene = Scene_Skill.new(@status_window.index)
when 2 # equipment
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Switch to equipment screen
$scene = Scene_Equip.new(@status_window.index)
when 3 # status
# Play decision SE
$game_system.se_play($data_system.decision_se)
# Switch to status screen
$scene = Scene_Status.new(@status_window.index)
end
return
end
end
end
Instructions
It's basically plug, configure (if you want), and play.
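For example, once the script is in place, you can change the Notes window text from an event's Script command like this (the wording is just an example):
$game_system.FirstLine  = "Quest: Find the old key"
$game_system.SecondLine = "Reward: 500 Gold"
$game_system.ThirdLine  = ""
$game_system.FourthLine = ""
$game_system.FifthLine  = ""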
Compatibility
Incompatible with other CMSs.
Credits and Thanks
Author's Notes
All known bugs have been fixed. If you find any, please post them so I can (try to) fix them.
Title: Re: [XP] LilBrudder917's CMS
Post by: Calintz on October 17, 2009, 01:09:07 pm
This is very beautiful ...
A very well made script for one man games.
Title: Re: [XP] LilBrudder917's CMS
Post by: C.C. rOyAl on October 17, 2009, 01:44:47 pm
i love it!
*powers up!*
Title: Re: [XP] LilBrudder917's CMS
Post by: G_G on October 17, 2009, 02:09:27 pm
Very nice *lv's up*
Title: Re: [XP] LilBrudder917's CMS
Post by: lilbrudder917 on October 17, 2009, 02:15:05 pm
Thanks, everyone, but you do know if you close the menu and try to open it again, it'll crash, right? ... Or is it just mine?
Title: Re: [XP] LilBrudder917's CMS
Post by: Calintz on October 17, 2009, 02:15:57 pm
Then you might want to fix that ...
Title: Re: [XP] LilBrudder917's CMS
Post by: lilbrudder917 on October 17, 2009, 02:31:03 pm
QuoteIntroduction
This is my first script, it's a One-Man CMS with several windows you can configure to be on or off. There's some bugs with it, but I'm not good enough with scripts yet to know what's causing them, such as where you can only open the Menu once, and if you close it, and try to open Scene_Menu again, Game.exe will crash. (If anyone can help me with this, it'd be appreciated and I'll add you to the authors/credits &thanks list)
QuoteInstructions
It's basically plug, configure, and play, excluding the fact that you can only open it once.
QuoteCompatibility
Incompatible with other CMSs.
Also pretty much incompatible with itself. Dunno why.
QuoteAuthor's Notes
Please do not move this to the Script Database! It is incomplete and has a flaw that I am trying to fix.
I'm trying, it's just I have no idea what causes it.
EDIT: I still don't know what causes it, but it's not just when you go back to Scene_Menu, it's whenever you call a new Scene after Scene_Menu, but not when you call a Scene FROM Scene_Menu.
EDIT2: I found it... I don't know WHY exactly it was happening, but I found out WHAT was doing it... You cannot use the map as the background with this script anymore, otherwise the Game.exe will crash. Fixing first post now.
Title: Re: [XP] LilBrudder917's CMS
Post by: Blizzard on October 17, 2009, 04:43:47 pm
Database'd.
Title: Re: [XP] LilBrudder917's CMS
Post by: G_G on October 17, 2009, 06:44:07 pm
Quote from: lilbrudder917 on October 17, 2009, 02:31:03 pm
QuoteIntroduction
This is my first script, it's a One-Man CMS with several windows you can configure to be on or off. There's some bugs with it, but I'm not good enough with scripts yet to know what's causing them, such as where you can only open the Menu once, and if you close it, and try to open Scene_Menu again, Game.exe will crash. (If anyone can help me with this, it'd be appreciated and I'll add you to the authors/credits &thanks list)
QuoteInstructions
It's basically plug, configure, and play, excluding the fact that you can only open it once.
QuoteCompatibility
Incompatible with other CMSs.
Also pretty much incompatible with itself. Dunno why.
QuoteAuthor's Notes
Please do not move this to the Script Database! It is incomplete and has a flaw that I am trying to fix.
I'm trying, it's just I have no idea what causes it.
EDIT: I still don't know what causes it, but it's not just went you go back to Scene_Menu, it's whenever you call a new Scene after Scene_Menu, but not when you call a Scene FROM Scene_Menu.
EDIT2: I found it... I don't know WHY exactly it was happening, but I found out WHAT was doing it... You cannot use the map as the background with this script anymore, otherwise the Game.exe will crash. Fixing first post now.
You weren't disposing the map spriteset, I bet. I made a CMS that's customizable like yours, but not as advanced, and there's an option to use the map.
So I'm sure you weren't disposing it, is all.
Title: Re: [XP] LilBrudder917's CMS
Post by: lilbrudder917 on October 17, 2009, 06:47:55 pm
That's probably the case, I know I didn't dispose it, but it worked when I put it in originally, even without the disposal. It only crashed once it was finished.
I'll add it in again with the disposal and see if it works.
EDIT: Yeah, that was it. I'll add it with the next update.
Update: v2.0 is up.
|
__label__pos
| 0.569928 |
package com.aptima.netstorm.algorithms.aptima.bp.hive;
import java.util.TreeMap;
import com.aptima.netstorm.algorithms.aptima.bp.network.AttributedModelNode;
import com.aptima.netstorm.algorithms.aptima.bp.network.AttributedModelRelation;
public class MRBase {
protected static AttributedModelNode[] modelNodes;
protected static AttributedModelRelation[] modelRelations;
public static String DIR_IN = "IN";
public static String DIR_OUT = "OUT";
public static boolean constrainFlow = false;
// 1.0 = exact match, 0.0 = completely inexact match
protected static double sliderValue = 1.0;
protected static double mismatchThreshold = 0.0;
/**
* Default constructor; derives the mismatch threshold from the current slider value.
*/
public MRBase() {
calcSliderMismatch();
//System.out.println("Slider Value: " + sliderValue);
//System.out.println("Mismatch Threshold: " + mismatchThreshold);
}
// Convert the slider value (1.0 = exact match, 0.0 = completely inexact) into a mismatch threshold of -ln(sliderValue)
public static void calcSliderMismatch() {
if (sliderValue == 1.0) {
mismatchThreshold = 0.0f;
} else if (sliderValue == 0.0) {
mismatchThreshold = Float.MAX_VALUE;
} else {
mismatchThreshold = (float) (-1 * Math.log(sliderValue));
}
}
protected static String mismatchVectorToString(TreeMap<Integer, Float> idToMismatchMap) {
// build a string vector of model to data node mismatches
String mismatchVectorAsString = "";
for (Integer key : idToMismatchMap.keySet()) {
if (mismatchVectorAsString.length() > 0) {
mismatchVectorAsString = mismatchVectorAsString + ",";
}
mismatchVectorAsString = mismatchVectorAsString + key + "," + idToMismatchMap.get(key);
}
return mismatchVectorAsString;
}
protected static TreeMap<Integer, Float> stringToMismatchVector(String mismatchVectorAsString) {
// rebuild a mismatch vector of model to data node mismatches from the string representation
TreeMap<Integer, Float> idToMismatchMap = new TreeMap<Integer, Float>();
String[] split = mismatchVectorAsString.split(",");
// Entries alternate between model ID and mismatch value, so walk the array in pairs
for (int i = 0; i < split.length - 1; i += 2) {
idToMismatchMap.put(Integer.parseInt(split[i]), Float.parseFloat(split[i + 1]));
}
return idToMismatchMap;
}
}
|
__label__pos
| 0.99773 |
TOGAF ADM Overview
The TOGAF ADM is a foundational component of the TOGAF Standard, a time-tested and reliable framework that has been successfully employed by numerous enterprises over the years. Its primary objective is to guide the creation of an Enterprise Architecture that not only supports but also drives the realization of an organization’s vision and requirements. TOGAF 10, the latest version, provides comprehensive guidance on developing enterprise architecture that facilitates effective change.
What is the TOGAF ADM?
The TOGAF ADM represents a logical, systematic approach to acquiring knowledge necessary for enterprise architecture development. Each phase within the ADM outlines key activities and information inputs essential for knowledge creation in the pursuit of enterprise architecture. The beauty of the TOGAF ADM lies in its universality—it adapts to the development of enterprise architecture at various levels of detail, spanning from strategic planning and portfolio management to project implementation and solution delivery.
Crucially, the ADM is inherently incremental and iterative in nature, leveraging existing architecture and TOGAF Foundation Architecture elements. While it is often depicted graphically, it is not a rigid, linear sequence of steps. It’s essential to grasp that each time the enterprise architecture team undertakes an activity associated with an ADM phase, they are essentially executing that specific phase, adhering to mandatory inputs and producing required outputs. This principle applies consistently across all ADM phases.
Utilizing the TOGAF ADM as an information model empowers enterprise architects to maximize efficiency and productivity, ensuring a streamlined approach to architecture development.
TOGAF ADM Phases
The Architecture Development Method (ADM) comprises nine distinct phases, each dedicated to specific roles and comprehensively describing essential aspects. These phases encompass the identification and understanding of objectives, input requirements, necessary steps, and expected outputs. Adhering to these phases is crucial for unlocking the full potential of TOGAF ADM.
TOGAF ADM Phases Explained
In this section, we will provide in-depth explanations of the TOGAF ADM Phases, focusing on establishing and developing enterprise architecture.
Preliminary Phase – Framework and Principles
The Preliminary Phase serves as an introduction, designed to set the stage for the enterprise architecture team. It revolves around addressing key questions and issues that the team must grapple with, including target stakeholders, application scenarios, usage guidelines, and the rationale for adopting the model. By comprehending the Preliminary Phase, you gain the ability to:
• Define an organization or enterprise.
• Identify and comprehend principal elements and critical drivers within the enterprise.
• Describe the requirements for architecture work.
• Define the principles that impact architecture work.
• Identify the most suitable framework for the organization.
• Establish connections between different management frameworks.
• Evaluate the maturity of the enterprise architecture team.
TOGAF ADM Phases Explained
In this section, we will delve into the TOGAF ADM Phases dedicated to crafting a successful enterprise architecture.
Phase A: Defining Architecture Vision
Every architecture development journey commences with Phase A. Without the foundation laid in Phase A, an enterprise architect cannot confidently address the right challenges, constraints, and stakeholders. The phase begins with the concept of a Request for Architecture Work, encompassing crucial activities such as defining the architecture scope, identifying stakeholders and their concerns, and acknowledging constraints imposed by higher-level architectures. Ultimately, Phase A culminates in the creation of an architecture vision—a concise summary of one or more potential target architectures. If any of these candidate architectures demonstrate the potential for significant improvement with a reasonable amount of effort, the team is equipped to proceed. The final step involves securing approvals to proceed with the development of the enterprise architecture.
It is essential to recognize that the absence of a suitable architecture vision can lead to a valuable outcome—the decision to halt architecture development if stakeholders cannot envision sufficient value. In this case, the enterprise architecture team saves precious organizational resources from being misallocated.
Phase A represents the cornerstone of the TOGAF ADM, ensuring that architecture development remains aligned with the organization’s goals and requirements. It emphasizes the importance of rigorous planning and vision setting as the initial step in the journey.
Phase A – Start at the Beginning with an Architecture Vision
This section explores Phase A deliverables, the role of Architecture Governance, and offers guidance and techniques specific to Phase A.
Phase B: Developing the Business Architecture
Phase B centers on the development of the business architecture domain, focusing on the current state of the business structure and the intended goals of stakeholders. Key activities involve organizational design, enterprise processes, information flows, business capabilities, and strategic business planning. A thorough understanding of this phase is vital to crafting a robust business architecture.
Phase C: Developing Information Systems Architectures
Building upon the insights gained in Phase B and the enterprise’s enhanced understanding, Phase C takes the next critical step—developing Information Systems Architecture. This phase involves a detailed exploration of both the Data Architecture Domain and the Application Architecture Domain.
Phase D: Describing and Developing Technology Architecture
Phase D is dedicated to the identification, description, and development of Technology Architecture. It encompasses various tasks, such as identifying viewpoints, goals, reference models, and tools, creating Baseline Technology Architecture, defining Target Technology Architecture, performing gap analysis, outlining roadmap elements, addressing Architecture Landscape effects, engaging in stakeholder reviews, finalizing Technology Architecture, and generating the Architecture Definition Document.
Phase E: Identifying Opportunities and Solutions
Phase E revolves around identifying opportunities and seeking optimal solutions to address both current and previous challenges. This phase entails the identification of key phases, determination of change parameters, assessment of top-level projects, and more.
Phase F: Crafting a Migration Plan
A well-crafted migration plan is essential for ensuring a smooth transition. It encompasses all critical elements and outlines the mode of implementation according to priority. Activities within this phase include cost assessment, benefits analysis, dependency evaluation, and more.
Phase F – Craft the Implementation Plan
This section discusses Phase F Implementation Plan Deliverables, Implementation Plan techniques, and Phase F tools.
Phase G: Governance Implementation
The Implementation Governance phase focuses on executing the changes outlined in the Architecture Roadmap and Implementation Plan. It also incorporates valuable insights from other implementation projects. Effective architecture governance plays a pivotal role in managing projects successfully while mitigating risks.
Phase H: Establishing Architecture Change Management
Architecture Change Management is vital in ensuring that the anticipated benefits of the target enterprise architecture are realized. This phase ensures that the target architecture remains aligned with the organization’s evolving circumstances and ecosystem. It defines key attributes, the target state, and the management approach, working in conjunction with the Architecture Requirements Management process and maintaining continuity.
TOGAF Phase H – Applying Agile and Real-World Enterprise Agility
This section explores Phase H in depth, providing insights into its Agile application and offering a webinar resource for a real-world perspective on Enterprise Agility led by experienced enterprise architects.
Are you ready to take your enterprise architecture game to the next level? Visual Paradigm’s TOGAF ADM Guide-Through Process Tools are here to empower your organization’s architecture development like never before. Let’s dive into how Visual Paradigm can be your trusted companion throughout the TOGAF ADM phases:
Visual Paradigm’s TOGAF Guide-Through Process Tool
Unlock Your TOGAF ADM Journey with Visual Paradigm’s Guide-Through Process Tools!
TOGAF Guide-Through Process
🏆 Comprehensive Guidance: Visual Paradigm’s guide-through process tools align perfectly with the TOGAF ADM framework. Whether you’re in the Preliminary Phase or navigating the intricacies of Phase H, our tools provide comprehensive guidance at every step.
TOGAF ADM Software: Act and Generate ADM Deliverables
🧭 Seamless Navigation: No need to get lost in the complexity of enterprise architecture. Our user-friendly interface ensures you navigate through the TOGAF ADM phases effortlessly, allowing you to focus on the architecture itself rather than the process.
🛠️ Practical Implementation: Visual Paradigm’s tools don’t just stop at theory; they help you put your architectural plans into action. Craft robust Business Architectures, develop Information Systems Architectures, and describe Technology Architectures with practicality in mind.
🚀 Streamlined Transition: Transitioning from one phase to the next has never been smoother. Visual Paradigm aids you in identifying opportunities, crafting migration plans, implementing governance, and establishing change management, ensuring a seamless transition to the next stage of your architecture journey.
📈 Maximize Efficiency: Visual Paradigm empowers your enterprise architects to achieve maximum efficiency and productivity by serving as a comprehensive information model. Say goodbye to inefficiencies and welcome a streamlined approach to architecture development.
📢 Ready to embark on a successful TOGAF ADM journey? Don’t miss out on the opportunity to elevate your enterprise architecture game with Visual Paradigm’s Guide-Through Process Tools. Join the ranks of successful organizations that have harnessed the power of TOGAF ADM with Visual Paradigm today!
🌐 Visit Visual Paradigm to explore our TOGAF ADM Guide-Through Process Tools and start transforming your enterprise architecture with confidence! 🌐
Customising the Registered Notify template URL
Hi,
We have an email that goes out when a user is registered on our system, and it is customised using the Registered Notify template with a URL for them to initially reset their password. However, this user registration is moderated, so an internal admin approves each user, which automatically triggers the Registered Notify email.
The Registered Notify template uses
python:here.pwreset_constructURL(reset['randomstring'])+'?userid='
and unfortunately (or fortunately) we have two Plone URLs (themed version for customers and unthemed version for admins) using the same database.
Admins always tend to use the unthemed version, and when they approve users through the unthemed URL, the password reset URL points to the unthemed site. Is there a way to use the above construct and always keep the themed URL in the password reset link, even when users are approved through the unthemed URL?
Thanks
with a rewrite rule in your webserver config?
Thanks - it's an interesting workaround.
I would prefer not to expose the admin unthemed URL as the users will be sent with that URL in an email.
If there is no other alternative, then your suggestion is probably the safest thing to do.
What is vulnerability management?
Vulnerability management is the process of identifying, evaluating, prioritizing, and addressing security vulnerabilities in software, systems, and networks. The objective of vulnerability management is to minimize the risk of cyber attacks by discovering and addressing vulnerabilities before they can be exploited by attackers.
What does the vulnerability management process look like?
The vulnerability management process typically involves the following steps:
1. Asset inventory: Identify all hardware, software, and network assets in the organization.
2. Vulnerability scanning: Use automated tools to scan for known vulnerabilities in the identified assets.
3. Vulnerability assessment: Analyze the results of the vulnerability scans to determine the severity and potential impact of the identified vulnerabilities.
4. Prioritization: Prioritize the vulnerabilities based on their severity, potential impact, and the resources required to address them (see the short sketch below).
5. Remediation: Develop and implement a plan to address the identified vulnerabilities, which may include patching software, changing configurations, or implementing additional security controls.
6. Verification: Verify that the remediation efforts have effectively addressed the identified vulnerabilities.
7. Reporting: Document the results of the vulnerability management process and communicate them to relevant stakeholders.
By implementing a proactive vulnerability management program, organizations can reduce the risk of cyber attacks and protect their assets, data, and reputation.
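To make the prioritization step (step 4) concrete, here is a minimal, illustrative Python sketch. The finding records, placeholder CVE identifiers, and the criticality weighting are invented for illustration and are not tied to any particular scanner or scoring standard.

```python
# Hypothetical scan findings; in practice these would come from your scanner's export.
findings = [
    {"asset": "web-server", "cve": "CVE-XXXX-0001", "cvss": 9.8, "asset_criticality": 3},
    {"asset": "build-box",  "cve": "CVE-XXXX-0002", "cvss": 6.5, "asset_criticality": 1},
    {"asset": "database",   "cve": "CVE-XXXX-0003", "cvss": 7.4, "asset_criticality": 3},
]

def risk_score(finding):
    # Weight technical severity (CVSS) by how important the affected asset is.
    return finding["cvss"] * finding["asset_criticality"]

# Highest-risk items first: these get remediated (step 5) before the rest.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f['asset']:<12} {f['cve']}  risk={risk_score(f):.1f}")
```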
Help with 'Disemvowel Trolls' kata on Codewars?
This challenge is to remove every vowel in a string. Here is my code so far:
def disemvowel(string_):
    for letter in string_:
        if letter in 'aeiouAEIOU':
            return string_.replace(letter, '')
It fails every test because it only removes some of the letters that are vowels. For example,
‘Ths webste s for losers LOL!’
should equal
‘Ths wbst s fr lsrs LL!’
My code removes the I’s but keeps the O’s and E’s. I believe the problem is that my code checks only for the first instance of a vowel, removes that letter from the string, and calls it a day. How do I get my code to keep checking every letter even after it finds the first vowel?
First clue, return in a loop.
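Following that hint, one possible corrected version (not the only way to solve the kata) builds up the result instead of returning from inside the loop:

```python
def disemvowel(string_):
    result = ""
    for letter in string_:
        if letter not in "aeiouAEIOU":
            result += letter
    return result

print(disemvowel("Ths webste s for losers LOL!"))  # -> Ths wbst s fr lsrs LL!
```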
openstack team mailing list archive
openstack nova with thousands of hosts
Hi everyone,
I want to understand the different possibilities for deploying OpenStack Nova at a site with thousands of hosts. After reading the documentation and following the list for some time, I’m still confused about how to use the concepts “region”, “availability zone”, “cell”, and “host aggregate” to get a good logical division.
It was discussed on the list that a “cell” (1 RabbitMQ, 1 DB, …) supports between 500 and 1000 hosts, so I understand that a big site needs several “cells”. Cells can be nested in a tree fashion where each “child” can have several “parents”. Can I say that an availability zone is one of these trees of cells?
Because a big site has heterogeneous hardware, I can use the “host aggregate” concept to logically group similar resources. Host aggregates live inside cells.
Are these assumptions correct?
And how would one deploy OpenStack Nova across different sites (or regions…)? I saw some discussions for Swift but none for Nova.
thanks,
Belmiro Moreira
What happens when defer is called twice and the variable whose method is deferred has been reassigned in between?
For example:
rows := Query(`SELECT FROM whatever`)
defer rows.Close()
for rows.Next() {
    // do something
}

rows = Query(`SELECT FROM another`)
defer rows.Close()
for rows.Next() {
    // do something else
}
Which rows value gets closed when the deferred rows.Close() calls run?
It depends on the method receiver and on the type of the variable.
Short answer: if you're using the database/sql package, your deferred Rows.Close() calls will properly close both of your Rows instances, because Rows.Close() has a pointer receiver and because DB.Query() returns a pointer (and so rows is a pointer). See the reasoning and explanation below.
To avoid confusion, I recommend using different variables and it will be clear what you want and what will be closed:
rows := Query(`SELECT FROM whatever`)
defer rows.Close()
// ...
rows2 := Query(`SELECT FROM whatever`)
defer rows2.Close()
I'd like to point out an important fact that comes from the deferred function and its parameters being evaluated immedately which is stated in the Effective Go blog post and in the Language Spec: Deferred statements too:
Each time a "defer" statement executes, the function value and parameters to the call are evaluated as usual and saved anew but the actual function is not invoked. Instead, deferred functions are invoked immediately before the surrounding function returns, in the reverse order they were deferred.
If the variable is not a pointer: you will observe different results when deferring a method call, depending on whether the method has a pointer receiver.
If the variable is a pointer, you will always see the "desired" result.
See this example:
type X struct {
    S string
}

func (x X) Close() {
    fmt.Println("Value-Closing", x.S)
}

func (x *X) CloseP() {
    fmt.Println("Pointer-Closing", x.S)
}

func main() {
    x := X{"Value-X First"}
    defer x.Close()
    x = X{"Value-X Second"}
    defer x.Close()

    x2 := X{"Value-X2 First"}
    defer x2.CloseP()
    x2 = X{"Value-X2 Second"}
    defer x2.CloseP()

    xp := &X{"Pointer-X First"}
    defer xp.Close()
    xp = &X{"Pointer-X Second"}
    defer xp.Close()

    xp2 := &X{"Pointer-X2 First"}
    defer xp2.CloseP()
    xp2 = &X{"Pointer-X2 Second"}
    defer xp2.CloseP()
}
Output:
Pointer-Closing Pointer-X2 Second
Pointer-Closing Pointer-X2 First
Value-Closing Pointer-X Second
Value-Closing Pointer-X First
Pointer-Closing Value-X2 Second
Pointer-Closing Value-X2 Second
Value-Closing Value-X Second
Value-Closing Value-X First
Try it on the Go Playground.
Using a pointer variable the result is always good (as expected).
Using a non-pointer variable with a pointer receiver, we see the same (latest) value printed twice, but with a value receiver it prints 2 different results.
Explanation for non-pointer variable:
As stated, the deferred function's arguments, including the receiver, are evaluated when the defer statement executes. In the case of a pointer receiver, that will be the address of the local variable. So when you assign a new value to the variable and call another defer, the pointer receiver will again be the same address of the local variable (just the pointed-to value is different). So later, when the deferred calls are executed, both use the same address, and the pointed-to value is the same for both: the one assigned last.
In the case of a value receiver, the receiver is a copy made when the defer executes, so if you assign a new value to the variable and call another defer, another copy is made which is different from the previous one.
• I missed that part of the spec. Nice. +1 – VonC Mar 6 '15 at 7:38
• How does defer copy a pointer? Does it just make a new pointer with the same value as xp and xp2, regardless of Close() or CloseP()? – Hunger Nov 5 '19 at 3:37
• @Hunger Yes, a pointer is just a memory address. When you copy a pointer like p2 := p, you just copy the memory address, which points to the same object. To copy the pointed object, you would have to dereference the pointer, e.g. obj2 := *p. – icza Nov 5 '19 at 7:34
Effective Go mentions:
The arguments to the deferred function (which include the receiver if the function is a method) are evaluated when the defer executes, not when the call executes.
Besides avoiding worries about variables changing values as the function executes, this means that a single deferred call site can defer multiple function executions
In your case, the defer would reference the second rows instance.
The two deferred functions are executed in LIFO order (as mentioned also in "Defer, Panic, and Recover").
As icza mentions in his answer and in the comments:
The 2 deferred Close() methods will refer to the 2 distinct Rows values and both will be properly closed because rows is a pointer, not a value type.
• Note that in this case the 2 deferred Close() methods will refer to the 2 distinct Rows values and both will be properly closed because rows is a pointer, not a value type. Please see my edited answer which also includes the case when both the variable is a pointer and the method has a pointer receiver. – icza Mar 6 '15 at 10:02
• @icza don't you worry: I see all your answers. And your edits ;) I have edited the answer accordingly. – VonC Mar 6 '15 at 10:05
• @icza thanks for your answers by the way. I am learning a ton :) – VonC Mar 6 '15 at 10:29
Ah I see, rows always refers to the last one: http://play.golang.org/p/_xzxHnbFSz
package main

import "fmt"

type X struct {
    A string
}

func (x *X) Close() {
    fmt.Println(x.A)
}

func main() {
    rows := X{`1`}
    defer rows.Close()
    rows = X{`2`}
    defer rows.Close()
}
Output:
2
2
So maybe the best way to preserve the object is to pass it to a function: http://play.golang.org/p/TIMCliUn60
package main

import "fmt"

type X struct {
    A string
}

func (x *X) Close() {
    fmt.Println(x.A)
}

func main() {
    rows := X{`1`}
    defer func(r X) { r.Close() }(rows)
    rows = X{`2`}
    defer func(r X) { r.Close() }(rows)
}
Output:
2
1
• The best way would be to use different variables: rows := X{"1"}; rows2 := X{"2"} - much simpler and cleaner. Intent is also clearer that you want to close rows and rows2. – icza Mar 6 '15 at 8:26
Most of the time, you should be able to just add a block. That way you don't have to think of a new variable name, and you don't have to worry about any of the items not being closed:
rows := Query(`SELECT FROM whatever`)
defer rows.Close()
for rows.Next() {
    // do something
}

{
    rows := Query(`SELECT FROM another`)
    defer rows.Close()
    for rows.Next() {
        // do something else
    }
}
https://golang.org/ref/spec#Blocks
Python Script 11: Drawing Flag of United States of America using Python Turtle
This is the 11th script in the Python Scripts series, and the second one in which we create something using turtle.
The code is self-explanatory, as I have added plenty of comments.
Before proceeding further, it is important to understand the coordinates and quadrants. Refer the diagram below.
[Diagram: turtle coordinate system showing the four quadrants]
We have taken the flag's dimension ratio into consideration while writing the program. Still, if there are any inaccuracies, please consider this tutorial for educational purposes only.
Code:
#
# Python script to create USA flag using turtle.
# Author - https://www.pythoncircle.com
# Original Author - https://www.codesters.com/preview/6fe8df0ccc45b1460ab24e5553497c5684987fb5/
import turtle
import time
# create a screen
screen = turtle.getscreen()
# set background color of screen
screen.bgcolor("white")
# set title of screen
screen.title("USA Flag - https://www.pythoncircle.com")
# "Yesterday is history, tomorrow is a mystery,
# but today is a gift. That is why it is called the present.”
# — Oogway to Po, under the peach tree, Kung Fu Panda Movie
oogway = turtle.Turtle()
# set the cursor/turtle speed. Higher value, faster is the turtle
oogway.speed(100)
oogway.penup()
# decide the shape of cursor/turtle
oogway.shape("turtle")
# flag height to width ratio is 1:1.9
flag_height = 250
flag_width = 475
# starting points
# start from the first quadrant, half of flag width and half of flag height
start_x = -237
start_y = 125
# For red and white stripes (total 13 stripes in flag), each stripe height will be flag_height/13 = 19.2 approx
stripe_height = flag_height/13
stripe_width = flag_width
# length of one arm of star
star_size = 10
def draw_fill_rectangle(x, y, height, width, color):
oogway.goto(x,y)
oogway.pendown()
oogway.color(color)
oogway.begin_fill()
oogway.forward(width)
oogway.right(90)
oogway.forward(height)
oogway.right(90)
oogway.forward(width)
oogway.right(90)
oogway.forward(height)
oogway.right(90)
oogway.end_fill()
oogway.penup()
def draw_star(x,y,color,length) :
oogway.goto(x,y)
oogway.setheading(0)
oogway.pendown()
oogway.begin_fill()
oogway.color(color)
for turn in range(0,5) :
oogway.forward(length)
oogway.right(144)
oogway.forward(length)
oogway.right(144)
oogway.end_fill()
oogway.penup()
# this function is used to create 13 red and white stripes of flag
def draw_stripes():
x = start_x
y = start_y
# we need to draw total 13 stripes, 7 red and 6 white
# so we first create, 6 red and 6 white stripes alternatively
for stripe in range(0,6):
for color in ["red", "white"]:
draw_fill_rectangle(x, y, stripe_height, stripe_width, color)
# decrease value of y by stripe_height
y = y - stripe_height
# create last red stripe
draw_fill_rectangle(x, y, stripe_height, stripe_width, 'red')
y = y - stripe_height
# this function create navy color square
# height = 7/13 of flag_height
# width = 0.76 * flag_height
# check references section for these values
def draw_square():
square_height = (7/13) * flag_height
square_width = (0.76) * flag_height
draw_fill_rectangle(start_x, start_y, square_height, square_width, 'navy')
def draw_six_stars_rows():
gap_between_stars = 30
gap_between_lines = stripe_height + 6
y = 112
# create 5 rows of stars
for row in range(0,5) :
x = -222
# create 6 stars in each row
for star in range (0,6) :
draw_star(x, y, 'white', star_size)
x = x + gap_between_stars
y = y - gap_between_lines
def draw_five_stars_rows():
gap_between_stars = 30
gap_between_lines = stripe_height + 6
y = 100
# create 4 rows of stars
for row in range(0,4) :
x = -206
# create 5 stars in each row
for star in range (0,5) :
draw_star(x, y, 'white', star_size)
x = x + gap_between_stars
y = y - gap_between_lines
# start after 5 seconds.
time.sleep(5)
# draw 13 stripes
draw_stripes()
# draw squares to hold stars
draw_square()
# draw 30 stars, 6 * 5
draw_six_stars_rows()
# draw 20 stars, 5 * 4. total 50 stars representing 50 states of USA
draw_five_stars_rows()
# hide the cursor/turtle
oogway.hideturtle()
# keep holding the screen until closed manually
screen.mainloop()
Copy the above code into a file named usa_flag.py and run it using the command python3 usa_flag.py.
If you get the error below while running the script, you need to install python-tk.
raise ImportError, str(msg) + ', please install the python-tk package'
ImportError: No module named _tkinter, please install the python-tk package
Command to install python-tk in Linux
sudo apt-get install python-tk
Code is available on github.
2 thoughts on 'Python Script 11: Drawing Flag Of United States Of America Using Python Turtle'
Kyle :
If I wanted to create an outline around the flag. How would I do that? I'm new to the turtle part of python so I'm having trouble.
Admin :
Hi Kyle, you can create a rectangle of dimensions equal to the flag: goto 0,0 then draw a line, turn, draw line, turn...
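For readers who want to try this, here is a minimal sketch of the idea, reusing the oogway turtle and the start_x, start_y, flag_width and flag_height values defined in the script above (it assumes you call it before screen.mainloop()); the pen colour is just an illustrative choice:

```python
def draw_outline():
    # trace an unfilled rectangle around the flag
    oogway.goto(start_x, start_y)
    oogway.setheading(0)
    oogway.color("black")
    oogway.pendown()
    for _ in range(2):
        oogway.forward(flag_width)
        oogway.right(90)
        oogway.forward(flag_height)
        oogway.right(90)
    oogway.penup()

draw_outline()
```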
Nada :
hi kyle can you help me drawing the flag of tunisia
Khalil :
i have the code of tunisia flag if u want
Admin :
@Khalil can you please write a guest post on this blog, on how to create flag of tunisia
To understand the concept about SGID
1. Create a group called online
2. Create a folder called /training
3. Create 3 users: sys1, sys2, sys3
4. Add these 3 users to the group online
5. Change the group owner of the folder /training to online
6. Create files & folders under the directory /training
7. Check the group owner of every folder & file
8. Each one is owned by the respective user's primary group
9. Try to delete a file or folder, irrespective of its group owner
10. Now set the SGID bit (see the short Python illustration after this list)
• # chmod 2775 /training
• Check the SGID setting on the folder
11. Now create the files & folders again
12. Try to add one more user to the group
13. Test the group ownership of the new files and folders
14. Now just realize what SGID does!
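The exercise uses the chmod command line, but the same SGID bit can be set and checked from Python as well. This is only an illustrative sketch: it assumes the /training directory from the exercise already exists and that you have permission to change its mode.

```python
import os
import stat

path = "/training"  # the directory created in step 2

# Equivalent of "chmod 2775 /training": rwxrwxr-x plus the setgid bit,
# so files created inside inherit the directory's group (online).
os.chmod(path, stat.S_ISGID | 0o775)

mode = os.stat(path).st_mode
print("setgid bit set?", bool(mode & stat.S_ISGID))
print("mode:", oct(stat.S_IMODE(mode)))  # expected: 0o2775
```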
LookIP.net
IP address lookup and information tool
10.20.10.23
Below you can find all lookup results for private IP address 10.20.10.23. If you are trying to find the login page for your router, modem or wireless access point, you can reach it over http or https. The most commonly used default username and password is 'admin' or 'setup', and in the case of a TP-Link, Netgear or D-Link wireless router you can also find the default settings on the back of the device. If this doesn't work, you can reset the router by pressing and holding its reset button for approximately 10 seconds. This restores the factory settings and enables you to log in with the details specified on the sticker.
IP address 10.20.10.23 is registered by the Internet Assigned Numbers Authority (IANA) as a part of private network 10.20.10.0/24. IP addresses in the private space are not assigned to any specific organization, including your ISP (Internet Service Provider), and anybody may use these IP addresses without the consent of a regional Internet registry as described in RFC 1918, unlike public IP addresses.
However, IP packets addressed from a private range cannot be sent through the public Internet, so if such a private network needs to connect to the Internet, it has to do so through a network address translator (NAT) gateway or a proxy server.
An example of a NAT gateway would be a wired or wireless router you receive from a broadband provider. The fixed IP address of such a device in network range 10.20.10.0/24 would generally be 10.20.10.1 or 10.20.10.254 depending on the provider. A gateway webinterface should be available through the HTTP (Hypertext Transfer Protocol) and/or HTTPS (Hypertext Transfer Protocol Secure) protocols. To try this you should enter 'http://ip address' or 'https://ip address' in the address bar of your favorite web browser like Google Chrome or Mozilla Firefox and log in with the username and password provided by your provider.
You can use these private network IP addresses in your local network and assign them to devices such as a personal computer, laptop, tablet and/or smartphone. It is also possible to configure a range within a DHCP (Dynamic Host Configuration Protocol) server to do the IP assignment automatically.
Technical details
• IP address: 10.20.10.23
• Address type: Private
• Protocol version: IPv4
• Network class: Class A
• Conversions: 169085463 (decimal), a140a17 (hex / base 16)
• Reverse DNS: 23.10.20.10.in-addr.arpa
• CIDR block: 10.20.10.0/24
• Network range: 10.20.10.0 - 10.20.10.255
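The figures in this table can be reproduced with Python's standard ipaddress module; a small sketch:

```python
import ipaddress

ip = ipaddress.ip_address("10.20.10.23")

print(int(ip))               # 169085463 (decimal)
print(format(int(ip), "x"))  # a140a17 (hex / base 16)
print(ip.reverse_pointer)    # 23.10.20.10.in-addr.arpa
print(ip.is_private)         # True
print(ip in ipaddress.ip_network("10.20.10.0/24"))  # True
```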
Location details
We can't pinpoint the location of this IP address because it is a private IP address which isn't accessible from the internet.
Websites hosted on this IP address
• There are no websites found hosted on this private IP address
Name
sorted()
Examples
numbers = [3.4, 3.6, 2, 0, 7.1]
sorted_numbers = sorted(numbers)
print(sorted_numbers) # Prints [0, 2, 3.4, 3.6, 7.1]
# original list left unchanged
print(numbers) # Prints [3.4, 3.6, 2, 0, 7.1]
animals = ["deer", "elephant", "bear", "aardvark", "cat"]
sorted_animals = sorted(animals)
print(sorted_animals) # Prints ['aardvark', 'bear', 'cat', 'deer', 'elephant']
# reverse=True reverses the order of the sort
rev_animals = sorted(animals, reverse=True)
print(rev_animals) # Prints ['elephant', 'deer', 'cat', 'bear', 'aardvark']
# sorted() lets you sort any iterable, not just lists!
word = "parabolas"
sorted_word = sorted(word)
print(sorted_word) # Prints ['a', 'a', 'a', 'b', 'l', 'o', 'p', 'r', 's']
# sorted() is, by default, case-sensitive: uppercase letters sort before lowercase
items = ["Abacus", "abacus", "Zwieback", "zwieback"]
sorted_items = sorted(items)
print(sorted_items) # Prints ['Abacus', 'Zwieback', 'abacus', 'zwieback']
# Pass your own function as the key argument to sorted() to apply a
# transformation to items before sorting
def case_insensitive(item):
    return item.lower()
sorted_items = sorted(items, key=case_insensitive)
print(sorted_items) # Prints ['Abacus', 'abacus', 'Zwieback', 'zwieback']
# sort list of strings by their length, using the built-in function len()
# as the key parameter
items = ["buffalo", "charcoal", "desk", "egg", "flask"]
sorted_items = sorted(items, key=len)
print(sorted_items) # Prints ['egg', 'desk', 'flask', 'buffalo', 'charcoal']
# You can use the "key" parameter and the "reverse" parameter in the same
# call to sort!
sorted_items = sorted(items, key=len, reverse=True)
print(sorted_items) # Prints ['charcoal', 'buffalo', 'flask', 'desk', 'egg']
Description Returns a sorted copy of the given list (or other iterable). Like the sort() method of the list object, it can take an optional key parameter to specify a function that should be evaluated for each item in the list before sorting that item. The optional parameter reverse, if set to True, causes sorted() to perform its sorting operation in reverse order.
The sorted() function differs from a list object's sort() method in two important ways. First, it returns a copy of the sorted list, leaving the original list intact (instead of sorting the list in-place). Second, sorted() works with any iterable (e.g., strings, tuples, dictionaries), not just lists.
For more information and examples, consult the Sorting Mini-HOWTO on the Python Wiki.
Syntax
sorted(iterable)
sorted(iterable, reverse=True)
sorted(iterable, key=fn)
Parameters
iterable: the list (or other iterable) to sort
reverse (bool): if True, sort in descending order
key (function): applied to each item before comparing
Related .reverse()
I'm working on breakout and trying to set my initial x and y velocities. My goal is to keep the speed constant while changing the direction. To do this, I'm trying to set the y velocity equal to the square root of ((constant)^2 - (x velocity)^2). I tried using this formula:
// initial velocities
double velocityx = drand48() + 3;
double velocityy = sqrt(pow(2, 2) - pow(velocityx, 2));
It compiles, but I get a "segmentation fault (core dumped)" error upon running. I was thinking it may have to do with squaring a double?
Any ideas? Thank you!
• Not clear about what you're asking. Are you setting your initial velocity of the ball when starting a second or later ball, (or the first one), or are you asking how to maintain velocity while changing direction when it hits an object? – Cliff B Mar 28 '15 at 23:00
The statement double velocityy = sqrt(pow(2, 2) - pow(velocityx, 2)); is the reason for the fault. Since velocityx will always be 3 or more, pow(velocityx, 2) will always be larger than pow(2,2), which is 4. The importance of this is that sqrt() is always operating on a negative number. The man page says that sqrt() of a negative number returns NaN, or "not a number". Since your code isn't posted, I'm guessing that whatever tries to use velocityy next is throwing the error.
You could try sqrt(fabs(....)) to get the value, and write additional code to determine the sign to be applied, or just hard code it if the direction- the sign of the variable - is always the same. fabs is the absolute value function for floats.
However, I'm confused about the overall logic. It looks like you're basing your calculation on x squared + y squared = z squared. In this, z squared is always going to be the largest of the three (z squared = 4, so z =2). Yet, your velocityx is always greater than 3. Is there something that needs to be altered?
• Thank you, Cliff! Yes I didn't factor in that 'z' needs to be greater than 'velocityx', that explains it. – Annie Fogel Mar 29 '15 at 22:40
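For readers running into the same issue, one common way to keep the speed constant without ever taking the square root of a negative number is to pick a launch angle and derive both components from it. The following is an illustrative Python sketch of the idea (not the original C code); the speed constant and the angle range are arbitrary choices:

```python
import math
import random

SPEED = 4.0  # the constant total speed, i.e. the "z" in x^2 + y^2 = z^2

# Pick a launch angle; because |cos| <= 1, the x component can never
# exceed SPEED, so the radicand below can never go negative.
angle = math.radians(random.uniform(30, 60))
velocity_x = SPEED * math.cos(angle)
velocity_y = math.sqrt(SPEED**2 - velocity_x**2)  # same as SPEED * sin(angle)

print(velocity_x, velocity_y, math.hypot(velocity_x, velocity_y))  # last value ~= SPEED
```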
73 gal (US) to m3 converter. How many cubic meters are in 73 gallons?
73 gal (US) to m³
The question “What is 73 gal (US) to m³?” is the same as “How many cubic meters are in 73 gal (US)?”, “Convert 73 gallons to cubic meters”, “What is 73 gallons in cubic meters?” or “73 gallons to m³”. Read on to find a free gal (US) to cubic meter converter and learn how to convert 73 gal (US) to m³.
Answer: There are 0.2763350614 m³ in 73 gal (US).
Alternatively, you can say “73 gal (US) equals to 0.2763350614 m³” or “73 gal (US) = 0.2763350614 m³” or “73 gallons is 0.2763350614 cubic meters”.
0.28 m³ is a rounded number for your convenience. The more precise value is 0.2763350614.
Gallons to cubic meter conversion formula
A cubic meter is equal to 264.1720523581 gallons. A gallon equals 0.0037854118 cubic meters.
To convert 73 gallons to cubic meters you can use one of the formulas:
Formula 1
Multiply 73 gal (US) by 0.0037854118.
73 * 0.0037854118 = 0.2763350614 m³
Formula 2
Divide 73 gal (US) by 264.1720523581.
73 / 264.1720523581 = 0.2763350614 m³
Hint: no need to use a formula. Use our free gal (US) to m³ converter.
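If you prefer to do the conversion in code rather than with a converter, here is a small Python sketch using the same constants as above:

```python
GALLONS_PER_CUBIC_METER = 264.1720523581
CUBIC_METERS_PER_GALLON = 0.0037854118

def gallons_to_cubic_meters(gallons):
    # Formula 1: multiply by cubic meters per gallon
    return gallons * CUBIC_METERS_PER_GALLON

print(round(gallons_to_cubic_meters(73), 10))   # ~0.2763350614
print(round(73 / GALLONS_PER_CUBIC_METER, 10))  # Formula 2 gives the same result
```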
73 gal (US) in m³
73 gal (US) in m³. Convert 73 gallons to cubic meters
How many m³ in 73 gal (US)?
Alternative spelling of 73 gal (US) to m³
Many of our visitors spell gallons and cubic meters differently. Below we provide all possible spelling options.
• Spelling options with “gallons”: 73 gallons to m³, 73 gallon to m³, 73 gallons to cubic meters, 73 gallon to cubic meters, 73 gallons in m³, 73 gallon in m³, 73 gallons in cubic meters, 73 gallon in cubic meters.
• Spelling options with “gal (US)”: 73 gal (US) to m³, 73 gal (US) to cubic meter, 73 gal (US) to cubic meters, 73 gal (US) in m³, 73 gal (US) in cubic meter, 73 gal (US) in cubic meters.
• Spelling options with “in”: 73 gal (US) in m³, 73 gal (US) in cubic meter, 73 gal (US) in cubic meters, 73 gal (US) in m³, 73 gal (US) in cubic meter, 73 gal (US) in cubic meters, 73 gallons in m³, 73 gallon in m³, 73 gallons in cubic meters, 73 gallon in cubic meters.
FAQ on 73 gal (US) to m³ conversion
How many cubic meters are in 73 gallons?
There are 0.2763350614 cubic meters in 73 gallons.
73 gal (US) to m³?
73 gal (US) is equal to 0.2763350614 m³. There are 0.2763350614 cubic meters in 73 gallons.
What is 73 gal (US) to m³?
73 gal (US) is 0.2763350614 m³. You can use the rounded number 0.28 for convenience. In this case you can say that 73 gal (US) is 0.28 m³.
How to convert 73 gal (US) to m³?
Use our free gallon to cubic meters converter or multiply 73 gal (US) by 0.0037854118.
73 * 0.0037854118 = 0.2763350614 m³
Answer: 0.2763350614 m³
What is a formula to convert 73 gal (US) to m³?
The formula to convert gal (US) to m³ is the following:
Multiply 73 gal (US) by 0.0037854118.
73 * 0.0037854118 = 0.2763350614 m³
Answer: 0.2763350614 m³
Particles in Flash
This tutorial will teach you the basics of fast particles in Flash. Learn how to create a cool particle effect using actionscript in Flash.
Tutorial Details
• Program: Flash CS5
• Difficulty: Advanced
• Estimated Completion Time: 30 Minutes
The Basics
Your particle system basically consists of two parts: the particle object and the renderer object. The idea is that you have a number of abstract particles that get fed into the renderer, which renders them in different ways. Abstract means that the particle is very generic and does not extend anything; you cannot add the object to a display list, you do not have preset hitTest functions, etc.
The renderer should use an iterative method to go through the particles and render them one by one, or apply some other effect to them.
As I have mentioned, there are many different particle effects that you can apply, but for the simplicity of this first tutorial, we will have the renderer draw out a single white pixel.
The Particle Class
We will need a particle class to store all the information regarding the position, velocity, acceleration, as well as damping of the particle. If you studied harmonic motion in physics, you should know what damping means. If you have not, damping simply means applying a loss of force to the object over time.
Your particle should have the following properties:
• position
• velocity
• acceleration
• damping
With all that cleared up, let's write the particle class.
Start up Flash and create a new ActionScript file:
package
{
public class Particle
{
public function Particle()
{
}
}
}
To begin with, fill in the properties of the particle, as mentioned above:
package
{
public class Particle
{
public var x:Number;
public var y:Number;
public var velX:Number;
public var velY:Number;
public var accX:Number;
public var accY:Number;
public var dampX:Number;
public var dampY:Number;
public function Particle()
{
}
}
}
The default value for the data type Number in Flash is NaN, so you need to initialize the values in the constructor:
package
{
public class Particle
{
public var x:Number;
public var y:Number;
public var velX:Number;
public var velY:Number;
public var accX:Number;
public var accY:Number;
public var dampX:Number;
public var dampY:Number;
public function Particle()
{
x = y = velX = velY = accX = accY = 0;
dampX = dampY = 1;
}
}
}
All these numbers look confusing, but it's all physics. I'll clear them up after I go through the renderer; with a renderer, you will be able to see how the particle physics is applied graphically.
The Renderer
To recap, the renderer stores all the particles you want to render and draws them out with an iterative method. Start a new class called Renderer:
package
{
class Renderer
{
public function Renderer()
{
}
}
}
The renderer should have some sort of collection data structure to hold the particles. For this example I will use a Vector (you can use Arrays or linked lists as well, but remember that an Array is slower than a Vector, which is in turn slower than a linked list).
package
{
public class Renderer
{
private var particles:Vector.<Particle>;
public function Renderer()
{
particles = new Vector.<Particle>();
}
}
}
Next, we need a method to add particles to the renderer:
public function addParicle(p:Particle):void
{
    particles.push(p);
}
The Renderer obviously needs a render function, so let's go ahead and add that. We want it to be able to render onto any bitmap we choose, so add the following function:
public function render(target:BitmapData):void
{
var i:int = particles.length;
while(i--)
{
target.setPixel(particles[i].x,particles[i].y,0xffffff);
}
}
Now we can actually draw something onto the screen. Add this code to your first frame:
import flash.display.BitmapData;
import flash.display.Bitmap;
var data:BitmapData = new BitmapData(800,600,false,0x000000);
var bitmap:Bitmap = new Bitmap(data);
bitmap.scaleX = bitmap.scaleY = 4;
addChild(bitmap);
This code creates a black bitmap and adds it to the display list.
Next, create some particles and use the Renderer to draw them to the bitmap:
var render:Renderer = new Renderer();
var i:int = 100;
while(i--)
{
var p:Particle = new Particle();
p.x = Math.random() * 800;
p.y = Math.random() * 600;
render.addParicle(p);
}
render.render(data);
If you have done everything right, you should see a black background and many white dots.
I have the stage size set to 800 x 600, but that's entirely up to you.
Particle Physics
So we have rendering set up; now it's time to move on to some particle physics. For a start, let's do a gravity simulation.
It's actually very easy: every frame, we apply a vertical acceleration to the particle's velocity, and then apply the resulting velocity to the particle's position. To do this efficiently, go back to the Particle class and add an update function.
public function update():void
{
velX += accX;
velY += accY;
velX *= dampX;
velY *= dampY;
x+=velX;
y+=velY;
}
You will also need to modify your Renderer a little bit; remember, we are rendering every frame now. What we need to do is:
1. Clear the screen
2. Call the particles' update function
3. Draw the particles
public function render(target:BitmapData):void
{
var i:int = particles.length;
target.fillRect(target.rect,0x000000);
while(i--)
{
particles[i].update();
target.setPixel(particles[i].x,particles[i].y,0xffffff);
}
}
In your first frame, add an EventListener and give the particles some random velocities. Your first frame should now look like this:
import flash.display.BitmapData;
import flash.display.Bitmap;
import flash.events.Event;
var data:BitmapData = new BitmapData(200,150,false,0x000000);
var bitmap:Bitmap = new Bitmap(data);
bitmap.scaleX = bitmap.scaleY = 4;
addChild(bitmap);
var render:Renderer = new Renderer();
var i:int = 10;
while(i--)
{
var p:Particle = new Particle();
p.x = Math.random() * 200;
p.y = Math.random() * 150;
p.velX = Math.random()*10-5
p.velY = Math.random()*10-5;
render.addParicle(p);
}
addEventListener(Event.ENTER_FRAME, update);
function update(e:Event):void
{
render.render(data);
}
Right now, the particles are moving in uniform motion. If you want to give them some gravity, change the "p.velY = Math.random() * 10 - 5" line to "p.accY = 1", as simple as that:
while(i--)
{
var p:Particle = new Particle();
p.x = Math.random() * 200;
p.y = Math.random() * 150;
p.velX = Math.random()*10-5
p.accY = 1;
render.addParicle(p);
}
In the example, I've added a loop, which makes a pretty neat effect, almost like snow.
You can find the source here
/*
* Copyright 2013 Google Inc.
*
* Use of this source code is governed by a BSD-style license that can be
* found in the LICENSE file.
*/
#include "SkDocument.h"
#include "SkStream.h"
SkDocument::SkDocument(SkWStream* stream, void (*doneProc)(SkWStream*, bool)) {
fStream = stream; // we do not own this object.
fDoneProc = doneProc;
fState = kBetweenPages_State;
}
SkDocument::~SkDocument() {
this->close();
}
SkCanvas* SkDocument::beginPage(SkScalar width, SkScalar height,
const SkRect* content) {
if (width <= 0 || height <= 0) {
return nullptr;
}
SkRect outer = SkRect::MakeWH(width, height);
SkRect inner;
if (content) {
inner = *content;
if (!inner.intersect(outer)) {
return nullptr;
}
} else {
inner = outer;
}
for (;;) {
switch (fState) {
case kBetweenPages_State:
fState = kInPage_State;
return this->onBeginPage(width, height, inner);
case kInPage_State:
this->endPage();
break;
case kClosed_State:
return nullptr;
}
}
SkDEBUGFAIL("never get here");
return nullptr;
}
void SkDocument::endPage() {
if (kInPage_State == fState) {
fState = kBetweenPages_State;
this->onEndPage();
}
}
void SkDocument::close() {
for (;;) {
switch (fState) {
case kBetweenPages_State: {
fState = kClosed_State;
this->onClose(fStream);
if (fDoneProc) {
fDoneProc(fStream, false);
}
// we don't own the stream, but we mark it nullptr since we can
// no longer write to it.
fStream = nullptr;
return;
}
case kInPage_State:
this->endPage();
break;
case kClosed_State:
return;
}
}
}
void SkDocument::abort() {
this->onAbort();
fState = kClosed_State;
if (fDoneProc) {
fDoneProc(fStream, true);
}
// we don't own the stream, but we mark it nullptr since we can
// no longer write to it.
fStream = nullptr;
}
What is Null Hypothesis? What is its Importance in Research?
Scientists begin their research with a hypothesis that a relationship of some kind exists between variables. The null hypothesis is the opposite, stating that no such relationship exists. The null hypothesis may seem unexciting, but it is a very important aspect of research. In this article, we discuss what the null hypothesis is, how to make use of it, and why you should use it to improve your statistical analyses.
What is the Null Hypothesis?
The null hypothesis can be tested using statistical analysis and is often written as H0 (read as “H-naught”). Once you determine how likely the sample relationship would be if the H0 were true, you can run your analysis. Researchers use a significance test to determine the likelihood that the results supporting the H0 are not due to chance.
The null hypothesis is not the same as an alternative hypothesis. An alternative hypothesis states, that there is a relationship between two variables, while H0 posits the opposite. Let us consider the following example.
A researcher wants to discover the relationship between exercise frequency and appetite. She asks:
Q: Does increased exercise frequency lead to increased appetite?
Alternative hypothesis: Increased exercise frequency leads to increased appetite.
H0 assumes that there is no relationship between the two variables: Increased exercise frequency does not lead to increased appetite.
Let us look at another example of how to state the null hypothesis:
Q: Does insufficient sleep lead to an increased risk of heart attack among men over age 50?
H0: The amount of sleep men over age 50 get does not increase their risk of heart attack.
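The article mentions above that researchers use a significance test to decide whether results are due to chance. As a purely illustrative sketch of what that looks like in code, the following Python snippet simulates appetite data for two exercise groups drawn from the same distribution (a world where H0 is true) and runs an independent-samples t-test; it assumes NumPy and SciPy are installed, and the numbers are simulated, not real study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated daily calorie intake for two groups drawn from the SAME
# distribution, i.e. a scenario in which the null hypothesis is true.
low_exercise = rng.normal(loc=2300, scale=80, size=30)
high_exercise = rng.normal(loc=2300, scale=80, size=30)

t_stat, p_value = stats.ttest_ind(low_exercise, high_exercise)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject H0 - appetite appears to differ")
else:
    print(f"p = {p_value:.3f}: fail to reject H0 - no evidence of a difference")
```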
Why is Null Hypothesis Important?
Many scientists often neglect null hypothesis in their testing. As shown in the above examples, H0 is often assumed to be the opposite of the hypothesis being tested. However, it is good practice to include H0 and ensure it is carefully worded. To understand why, let us return to our previous example. In this case,
Alternative hypothesis: Getting too little sleep leads to an increased risk of heart attack among men over age 50.
H0: The amount of sleep men over age 50 get has no effect on their risk of heart attack.
Note that this H0 is different than the one in our first example. What if we were to conduct this experiment and find that neither H0 nor the alternative hypothesis was supported? The experiment would be considered invalid. Take our original H0 in this case, “the amount of sleep men over age 50 get, does not increase their risk of heart attack”. If this H0 is found to be untrue, and so is the alternative, we can still consider a third hypothesis. Perhaps getting insufficient sleep actually decreases the risk of a heart attack among men over age 50. Because we have tested H0, we have more information that we would not have if we had neglected it.
Do I Really Need to Test It?
The biggest problem with the null hypothesis is that many scientists see accepting it as a failure of the experiment. They consider that they have not proven anything of value. However, as we have learned from the replication crisis, negative results are just as important as positive ones. While they may seem less appealing to publishers, they can tell the scientific community important information about correlations that do or do not exist. In this way, they can drive science forward and prevent the wastage of resources.
Do you test for the null hypothesis? Why or why not? Let us know your thoughts in the comments below.
Python 2 is no longer supported by the community. We recommend that you migrate Python 2 apps to Python 3.
Migrating to the Python 3 runtime
Migrating to the Python 3 runtime allows you to use up-to-date language features and build apps that are more portable, with idiomatic code. The Python 3 runtime uses the latest version of the open source Python interpreter provided by the Python Software Foundation. Apps built in the Python 3 runtime can use Python's rich ecosystem of packages and frameworks in your app, including those that use C code, by declaring dependencies in a requirements.txt file.
Overview of the runtime migration process
We recommend the following incremental approach to the runtime migration, in which you maintain a functioning and testable application throughout the process:
1. Upgrade your app to be compatible with both Python 2 and Python 3.
Several solutions are available to help with this upgrade. For example, use Six, Python-Future, or Python-Modernize.
For more information about this step of the runtime migration process, see Porting Python 2 Code to Python 3 on the Python Software Foundation documentation site.
2. Pick one of these implementation strategies for any App Engine bundled service your app uses:
1. Migrate the App Engine bundled services in your Python 2 app to unbundled Google Cloud services, third-party services, or other recommended replacements.
2. Continue using App Engine bundled services in your Python 3 apps. This approach gives you flexibility to move to unbundled services later in the migration cycle.
Make sure to test your app after migrating each service.
3. Prepare App Engine configuration files for the Python 3 runtime. Several important changes affect the configuration settings in app.yaml, including but not limited to:
• Apps are now assumed to be threadsafe. If your application isn't threadsafe, you should set max_concurrent_requests in your app.yaml to 1. This setting may result in more instances being created than you need for a threadsafe app, and lead to unnecessary cost.
• The app.yaml file no longer routes requests to your scripts. Instead, you are required to use a web framework with in-app routing, and update or remove all script handlers in app.yaml. For an example of how to do this with the Flask framework, see the App Engine migration guide code sample in GitHub.
To learn more about changing this and other configuration files, see the Configuration files section.
4. Test and deploy your upgraded app in a Python 3 environment.
After all of your tests pass, deploy the upgraded app to App Engine but prevent traffic from automatically routing to the new version. Use traffic splitting to slowly migrate traffic from your app in the Python 2 runtime to the app in the Python 3 runtime. If you run into problems, you can route all traffic to a stable version until the problem is fixed.
For examples of how to convert your Python 2 apps to Python 3, you can refer to these additional resources.
Key differences between the Python 2 and Python 3 runtimes
Most of the changes you need to make during the runtime migration come from the following differences between the Python 2 and Python 3 runtimes:
Compatibility issues between Python 2 and Python 3
When Python 3 was first released in 2008, several backward incompatible changes were introduced to the language. Some of these changes require only minor updates to your code, such as changing the print statement to a print() function. Other changes may require significant updates to your code, such as updating the way that you handle binary data, text, and strings.
Many popular open source libraries, including the Python standard libraries, also changed when they moved from Python 2 to Python 3.
App Engine bundled services in the Python 3 runtime
To reduce migration effort and complexity, App Engine standard environment allows you to access many of App Engine bundled services and APIs in the Python 3 runtime, such as Memcache. Your Python 3 app can call the bundled services APIs through language idiomatic libraries, and access the same functionality as on the Python 2 runtime.
You also have the option to use Google Cloud products that offer similar functionality as the App Engine bundled services. We recommend that you consider migrating to the unbundled Google Cloud products, as doing so lets you take advantage of ongoing improvements and new features.
For the bundled services that are not available as separate products in Google Cloud, such as image processing, search, and messaging, you can use our suggested third-party providers or other workarounds.
Configuration files
Before you can run your app in the Python 3 runtime of the App Engine standard environment, you may need to change some of the configuration files that App Engine uses:
Web framework required to route requests for dynamic content
In the Python 2 runtime, you can create URL handlers in the app.yaml file to specify which app to run when a specific URL or URL pattern is requested.
In the Python 3 runtime, your app needs to use a web framework such as Flask or Django to route requests for dynamic content instead of using URL handlers in app.yaml. For static content, you can continue to create URL handlers in your app's app.yaml file.
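The official App Engine migration samples on GitHub are the authoritative reference; as a rough illustration only, a minimal Flask app with in-app routing might look like the sketch below (the route names and handlers are invented for the example). The accompanying app.yaml then typically only needs a runtime declaration (for example, runtime: python39) plus any static-file handlers, rather than script handlers.

```python
# main.py - minimal in-app routing with Flask (illustrative sketch)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the Python 3 runtime!"

@app.route("/users/<int:user_id>")
def show_user(user_id):
    return f"User {user_id}"

if __name__ == "__main__":
    # Used only when running locally; App Engine serves the app with a
    # production WSGI server in deployment.
    app.run(host="127.0.0.1", port=8080, debug=True)
```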
Testing
We recommend that you use a testing approach that is idiomatic to Python rather than being dependent on dev_appserver. For example, you might use venv to create an isolated local Python 3 environment. Any standard Python testing framework can be used to write your unit, integration, and system tests. You might also consider setting up development versions of your services or use the local emulators that are available for many Google Cloud products.
Optionally, you can use the preview version of dev_appserver which supports Python 3. To learn more about this testing feature, see Using the Local Development Server.
Deploying
Deployments via appcfg.py is not not supported for Python 3. Instead, use the gcloud command line tool to deploy your app.
Logging
Logging in the Python 3 runtime follows the logging standard in Cloud Logging. In the Python 3 runtime, app logs are no longer bundled with the request logs but are separated in different records. To learn more about reading and writing logs in the Python 3 runtime, see the logging guide.
Additional migration resources
For additional information on how to migrate your App Engine apps to standalone Cloud services or the Python 3 runtime, you can refer to these App Engine resources:
; RUN: llc -mtriple=powerpc64-bgq-linux -mcpu=a2 < %s | FileCheck %s
target datalayout = "E-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-f128:128:128-v128:128:128-n32:64"
target triple = "powerpc64-bgq-linux"
%struct.BG_CoordinateMapping_t = type { [4 x i8] }
; Function Attrs: alwaysinline inlinehint nounwind
define zeroext i32 @Kernel_RanksToCoords(i64 %mapsize, %struct.BG_CoordinateMapping_t* %map, i64* %numentries) #0 {
entry:
%mapsize.addr = alloca i64, align 8
%map.addr = alloca %struct.BG_CoordinateMapping_t*, align 8
%numentries.addr = alloca i64*, align 8
%r0 = alloca i64, align 8
%r3 = alloca i64, align 8
%r4 = alloca i64, align 8
%r5 = alloca i64, align 8
%tmp = alloca i64, align 8
store i64 %mapsize, i64* %mapsize.addr, align 8
store %struct.BG_CoordinateMapping_t* %map, %struct.BG_CoordinateMapping_t** %map.addr, align 8
store i64* %numentries, i64** %numentries.addr, align 8
store i64 1055, i64* %r0, align 8
%0 = load i64, i64* %mapsize.addr, align 8
store i64 %0, i64* %r3, align 8
%1 = load %struct.BG_CoordinateMapping_t*, %struct.BG_CoordinateMapping_t** %map.addr, align 8
%2 = ptrtoint %struct.BG_CoordinateMapping_t* %1 to i64
store i64 %2, i64* %r4, align 8
%3 = load i64*, i64** %numentries.addr, align 8
%4 = ptrtoint i64* %3 to i64
store i64 %4, i64* %r5, align 8
%5 = load i64, i64* %r0, align 8
%6 = load i64, i64* %r3, align 8
%7 = load i64, i64* %r4, align 8
%8 = load i64, i64* %r5, align 8
%9 = call { i64, i64, i64, i64 } asm sideeffect "sc", "={r0},={r3},={r4},={r5},{r0},{r3},{r4},{r5},~{r6},~{r7},~{r8},~{r9},~{r10},~{r11},~{r12},~{cr0},~{memory}"(i64 %5, i64 %6, i64 %7, i64 %8) #1, !srcloc !0
; CHECK-LABEL: @Kernel_RanksToCoords
; These need to be 64-bit loads, not 32-bit loads (not lwz).
; CHECK-NOT: lwz
; CHECK: #APP
; CHECK: sc
; CHECK: #NO_APP
; CHECK: blr
%asmresult = extractvalue { i64, i64, i64, i64 } %9, 0
%asmresult1 = extractvalue { i64, i64, i64, i64 } %9, 1
%asmresult2 = extractvalue { i64, i64, i64, i64 } %9, 2
%asmresult3 = extractvalue { i64, i64, i64, i64 } %9, 3
store i64 %asmresult, i64* %r0, align 8
store i64 %asmresult1, i64* %r3, align 8
store i64 %asmresult2, i64* %r4, align 8
store i64 %asmresult3, i64* %r5, align 8
%10 = load i64, i64* %r3, align 8
store i64 %10, i64* %tmp
%11 = load i64, i64* %tmp
%conv = trunc i64 %11 to i32
ret i32 %conv
}
declare void @mtrace()
define signext i32 @main(i32 signext %argc, i8** %argv) {
entry:
%argc.addr = alloca i32, align 4
store i32 %argc, i32* %argc.addr, align 4
%0 = call { i64, i64 } asm sideeffect "sc", "={r0},={r3},{r0},~{r4},~{r5},~{r6},~{r7},~{r8},~{r9},~{r10},~{r11},~{r12},~{cr0},~{memory}"(i64 1076)
%asmresult1.i = extractvalue { i64, i64 } %0, 1
%conv.i = trunc i64 %asmresult1.i to i32
%cmp = icmp eq i32 %conv.i, 0
br i1 %cmp, label %if.then, label %if.end
; CHECK-LABEL: @main
; CHECK-DAG: mr [[REG:[0-9]+]], 3
; CHECK-DAG: li 0, 1076
; CHECK: stw [[REG]],
; CHECK: #APP
; CHECK: sc
; CHECK: #NO_APP
; CHECK: cmpwi {{[0-9]+}}, [[REG]], 1
; CHECK: blr
if.then: ; preds = %entry
call void @mtrace()
%.pre = load i32, i32* %argc.addr, align 4
br label %if.end
if.end: ; preds = %if.then, %entry
%1 = phi i32 [ %.pre, %if.then ], [ %argc, %entry ]
%cmp1 = icmp slt i32 %1, 2
br i1 %cmp1, label %usage, label %if.end40
usage:
ret i32 8
if.end40:
ret i32 0
}
attributes #0 = { alwaysinline inlinehint nounwind }
attributes #1 = { nounwind }
!0 = !{i32 -2146895770}
pymrio.calc_L
pymrio.calc_L(A)
Calculate the Leontief L from A
L = inverse matrix of (I - A)
Where I is an identity matrix of same shape as A
Comes from:
x = Ax + y => (I-A)x = y
Where:
A: coefficient input-output table
x: output vector
y: final demand vector
Hence, L allows one to derive the required output vector x for a given demand y.
Parameters: A (pandas.DataFrame or numpy.array) – symmetric input-output table (coefficients)
Returns: Leontief input-output table L. The type is determined by the type of A; if A is a DataFrame, L has the same index/columns as A.
Return type: pandas.DataFrame or numpy.array
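As a quick numeric sketch of what this computes (using a toy 2-sector coefficient matrix invented for the example), the array case is just the matrix inverse of (I - A), and multiplying it by a final-demand vector gives the required gross output:

```python
import numpy as np

# Toy 2-sector coefficient matrix (illustrative values only)
A = np.array([[0.1, 0.2],
              [0.3, 0.1]])

I = np.eye(A.shape[0])
L = np.linalg.inv(I - A)      # what calc_L(A) returns for an array input

y = np.array([100.0, 50.0])   # final demand
x = L @ y                     # required output: x = L y

print(L)
print(x)
```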
The database attribute defines the properties of the table
Content
1. See an attribute as a feature
2. What is an attribute?
3. Attributes describe entities
4. Is the attribute a field?
5. Attribute definition
See an attribute as a feature
A database is more powerful than a spreadsheet largely because of its search capability. Relational databases can relate records in different tables and perform complex calculations on large amounts of related data. The information is organized in such a way that it is easy to manage, query and update.
What is an attribute?
The database is made up of tables. Each table has columns and rows. Each row (called a tuple) is a set of data applied to a single element. Each column (attribute) contains a description of the characteristics of the rows. A database attribute is the name of a column and the contents of the fields below it in a table in the database.
If you sell products and enter them into a table with columns for ProductName, Price, and ProductID, each of those headings is an attribute. In each field below these headings, enter product names, prices, and product IDs, respectively. Each of the field items is also an attribute.
An attribute is a single piece of data in the tuple to which it belongs. Each tuple is a set of data applied to a single element.
This makes sense when you think about it, since the non-technical definition of an attribute is that it defines a characteristic or quality of something.
Attributes describe entities
Think of a database developed by a company. Most likely it will contain tables – also referred to as entities by database designers – for customers, employees, and products. The Products table defines the characteristics of each product. They can contain product ID, product name, supplier ID (used as a foreign key), quantity, and price. Each of these attributes is an attribute of a table (or entity) called Products.
Consider this excerpt from the oft-cited Northwind sample database (the Products table):
The column names are product attributes. The entries in the column fields are also product attributes.
Is the attribute a field?
Sometimes the terms field and attribute are used interchangeably, and for most purposes they are the same. However, a field describes a particular cell of a table, found in a particular row, while an attribute describes a characteristic of an entity in a design sense.
In the table above, the ProductName entry in the second row, Chang, is a field. If you are talking about products in general, ProductName is the product column; that is an attribute.
Attribute definition
Attributes are defined in terms of their domain. The domain defines the allowed values that an attribute can contain. This includes the data type, length, values, and other details.
For example, the domain for an attribute ProductID can indicate a numeric data type. The attribute can be further defined to require a specific length, or to indicate whether a null or unknown value is allowed.
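As a small illustration of attributes and their domains, here is a hedged sketch using Python's built-in sqlite3 module; the column names follow the article's Products example, and the sample row values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Each column definition is an attribute; its type and constraints are its domain.
conn.execute("""
    CREATE TABLE Products (
        ProductID   INTEGER PRIMARY KEY,
        ProductName TEXT    NOT NULL,
        SupplierID  INTEGER,
        Quantity    INTEGER CHECK (Quantity >= 0),
        Price       REAL    NOT NULL
    )
""")

conn.execute(
    "INSERT INTO Products (ProductName, SupplierID, Quantity, Price) VALUES (?, ?, ?, ?)",
    ("Chang", 1, 17, 19.0),
)

# Each field in this row is an attribute value of the Chang tuple.
print(conn.execute("SELECT ProductName, Price FROM Products").fetchall())
```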